Saturday, June 20, 2009

Do public prediction markets really fail?

A few days ago I came across this article explaining why public prediction markets fail. The article gives an example where three different PMs failed to pick a winner in American Idol (Betfair) or Britain's Got Talent (Hubdub and Intrade). While it was definitely an interesting read, I feel the author didn't take a few things into account.
First, the success of PMs depends on the assumption that participants play risk-neutral strategies (see, for example, Manski's 2005 paper) -- when people play with fake money, this may well not hold: they tend to risk more than usual because they have nothing to lose.
Second, PMs are said to be more accurate than other ways of aggregating opinions, such as polls or expert opinions. It would have been nice if the author had compared the markets' predictions with predictions made by opinion polls or experts and shown how they differ.
Then there's the issue with the data from Hubdub. As noted in the article, the Hubdub market on Britain's Got Talent failed to predict the true outcome, giving Susan Boyle a 78% chance of winning. What was not taken into account is that there were in fact several markets on Hubdub concerned with Britain's Got Talent. The one used in the article can be found here. However, here's another market that accurately predicted that "Susan Boyle OR Diversity" would win (Diversity won). Note the following: the market that got Susan Boyle wrong had $25k of activity, whereas the one that got it right had twice as much. So, if you were to make a decision based on PM predictions, you would probably go with the more active market, and you would have been right. I don't know whether Intrade or Betfair had multiple markets for the same event.
Anyway, I agree with the bottom line: the accuracy of PMs depends on how much information their participants have, and, ultimately, they will fail some of the time. However, we should be careful about making blanket statements like "public prediction markets fail", especially since there are so many examples where they don't (that might be a topic for another post). And even when they do fail, it's important to understand why.

Monday, June 8, 2009

Wasting just a little bit more time: a tale of dzen and reddit

Are you tired of switching from the console to the browser, and then pressing F5 to see what's new on reddit? So am I. There's got to be a better way of seeing new reddit posts without leaving the comfort of the command line. And there is. If you run Linux with dzen, here's a possible solution. Simply use this script. It checks reddit and outputs the first ten titles in a format dzen can consume. It should be clear from the source how to make the script output more than ten titles, or how to monitor something other than the main reddit page (e.g., the programming, pics, or funny subreddits). Now all you need to do is run the script every n seconds (depending on how often you expect new articles to arrive); you can use something like this. The last thing left to do is add a line to your .xsession that calls your checker script; e.g., I have the following line:

~/.dzen/ | dzen2 -y 800 -ta c -l 10 -bg '#2c2c32' -fg 'grey70' -p -e 'entertitle=uncollapse;leavetitle=collapse' &
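In case the linked script is unavailable, here is a minimal Python sketch of what such a checker could look like. The feed URL, function names, and output layout are my assumptions, not the original script; it simply fetches reddit's JSON listing and prints one title line for the bar followed by ten slave lines (matching dzen2's -l 10):

```python
#!/usr/bin/env python3
"""Reddit-to-dzen checker: a minimal sketch, not the original script."""
import json
import urllib.request

FEED_URL = "https://www.reddit.com/.json"  # e.g. ".../r/programming/.json" for a subreddit
LIMIT = 10  # matches dzen2 -l 10 (the number of collapsible slave lines)

def fetch(url=FEED_URL):
    """Download reddit's JSON listing of the front page."""
    req = urllib.request.Request(url, headers={"User-Agent": "dzen-checker"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def format_titles(feed, limit=LIMIT):
    """Turn the JSON listing into dzen lines: one title line, then slave lines."""
    posts = feed["data"]["children"][:limit]
    lines = ["reddit: %d newest posts" % len(posts)]  # title line shown in the bar
    lines += [p["data"]["title"] for p in posts]      # slave lines, shown on hover
    return lines
```

You would then print "\n".join(format_titles(fetch())) from the periodic checker and pipe its output into dzen2, as in the .xsession line above.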

And here are the results (the reddit bar sits quietly at the bottom of your screen until you move your mouse over it, at which point you get the situation shown in the picture on the right):

Of course, once you see what's new, you still have to switch to your browser if you want to actually follow the links, but this should at least save you the trouble of switching only to find out that there's nothing new. There are many things left to be done here; this is just a little sample of how to waste a few more minutes of your life.