Game theory and Schelling points: miscellaneous comments, examples

Scott Alexander (who writes over at Slate Star Codex) has a really cool series of intro-level posts on game theory as elucidated in Dixit and Nalebuff’s The Art of Strategy. I’ve said before that I like Scott for his breezy conversational style and almost disarmingly simple way of explaining complex ideas. Today I won’t talk about (read: copy-paste) his posts though, merely some remarks by commenters on his posts.

Here kilobug corrects the notion that rational actors pursuing incentives are purely self-interested:

That’s not really the case. Game theory usually considers that everyone is a utility maximizer, but nothing says that the utility function has to be selfish. A utility function can factor in the well-being and happiness of others.

You can apply game theory in cases like a parent–child relationship, in which the parent and the child disagree but the parent is still motivated by the interest of the child. Even in more classical cases, nothing forces the utility function to be selfish or to ignore the other’s well-being. Game theory only applies when the agents have different goals, but that difference can just be “I value my own well-being twice as much as the well-being of the other”, which is not “purely self-interested”.

It makes me “wail and gnash” because it’s a very frequent cliché that rationalists and utility maximizers are necessarily selfish and don’t care about others, and it’s a cliché we should fight.

I didn’t know that before.

Kaj Sotala, on how (quoting Scott’s conclusion to the first post) “simple sequential games can often be explored by reasoning backwards over decision trees representing the choices of the players involved”:

This seems to have obvious evpsych implications regarding emotions such as love and friendship – if you love somebody enough that you can’t take serious actions against them, even if it would otherwise be rational (for a purely selfish agent), then it’s also more profitable for your partner to keep interacting with you. Of course, handicapping yourself is only a good idea if the other person isn’t out to ruthlessly exploit you anyway, so love often demands a lot of reciprocity – unless, perhaps (getting into strong armchair evpsych territory here), the status difference is so large that you have potentially more to gain than lose anyway.
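Scott’s conclusion about “reasoning backwards over decision trees” is easy to make concrete. Here’s a minimal sketch of backward induction in Python over a toy two-player tree; the players, actions, and payoffs are invented purely for illustration, not taken from Dixit and Nalebuff.

```python
# A minimal sketch of backward induction over a toy sequential game.
# The players, actions, and payoffs are invented for illustration.

# A node is either a leaf, i.e. a (payoff_player_0, payoff_player_1) tuple,
# or an internal node: (player_to_move, {action_name: child_node, ...}).
game = (
    0, {                          # player 0 moves first
        "aggressive": (1, {       # then player 1 responds
            "retaliate": (0, 0),
            "concede":   (3, 1),
        }),
        "cooperative": (1, {
            "retaliate": (1, 2),
            "concede":   (2, 2),
        }),
    },
)

def solve(node):
    """Reason backwards from the leaves: return (payoffs, play path)."""
    if isinstance(node[1], dict):       # internal node: someone still has to move
        player, actions = node
        results = {a: solve(child) for a, child in actions.items()}
        # The mover picks the action whose continuation pays *them* the most.
        best = max(results, key=lambda a: results[a][0][player])
        payoffs, path = results[best]
        return payoffs, [(player, best)] + path
    return node, []                     # leaf: the payoffs are just the tuple

payoffs, path = solve(game)
print(payoffs)  # (3, 1) for this made-up tree
print(path)     # [(0, 'aggressive'), (1, 'concede')]
```

At each internal node the player to move picks whichever action leads to the continuation that is best for them, which is exactly the backwards reasoning Scott describes.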

This real-life application of mixed Nash equilibria (the example uses biased coin flips) to, of all things, penalty kicks in soccer blew my mind – selection effects, probably?

A right-footed kicker has a better chance of success if he kicks to the right, but a smart goalie can predict that and will defend to the right; a player expecting this can accept a less spectacular kick to the left if he thinks the left will be undefended, but a very smart goalie can predict this too, and so on. Economist Ignacio Palacios-Huerta laboriously analyzed the success rates of various kickers and goalies on the field, and found that they actually pursued a mixed strategy generally within 2% of the game theoretic ideal, proving that people are pretty good at doing these kinds of calculations unconsciously.
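The arithmetic behind that result is simple enough to sketch. In a 2×2 zero-sum game each side mixes so as to make the other side indifferent between its two options; the scoring probabilities below are invented for illustration and are not Palacios-Huerta’s estimates.

```python
# The mixed-strategy arithmetic behind the penalty-kick story. The scoring
# probabilities are invented for illustration; they are not Palacios-Huerta's
# actual estimates.

# score_prob[(kick_side, dive_side)] = probability the kick goes in
score_prob = {
    ("L", "L"): 0.58, ("L", "R"): 0.95,
    ("R", "L"): 0.93, ("R", "R"): 0.70,   # "R" is this kicker's stronger side
}

def equilibrium(p):
    """Mixed Nash equilibrium of the 2x2 zero-sum kicker-vs-goalie game."""
    LL, LR = p[("L", "L")], p[("L", "R")]
    RL, RR = p[("R", "L")], p[("R", "R")]
    # The kicker aims left with the probability that leaves the goalie
    # indifferent between diving left and diving right...
    kick_left = (RR - RL) / ((LL - LR) + (RR - RL))
    # ...and the goalie dives left with the probability that leaves the kicker
    # indifferent between the two sides.
    dive_left = (RR - LR) / ((LL - RL) + (RR - LR))
    # Expected scoring probability at equilibrium (same whichever side is kicked).
    value = dive_left * LL + (1 - dive_left) * LR
    return kick_left, dive_left, value

kick_left, dive_left, value = equilibrium(score_prob)
print(f"kicker aims left {kick_left:.0%} of the time")     # ~38%
print(f"goalie dives left {dive_left:.0%} of the time")    # ~42%
print(f"scoring probability at equilibrium: {value:.0%}")  # ~80%
```

With these made-up numbers the kicker should aim at his weaker left side only about 38% of the time, precisely because the goalie is mixing too; Palacios-Huerta’s finding is that professionals land close to these kinds of proportions in practice.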

Schelling points, illustrated via a game show:

Art of Strategy describes a game show in which two strangers were separately taken to random places in New York and promised a prize if they could successfully meet up; they had no communication with one another and no clues about how such a meeting was to take place. Here there are a nearly infinite number of possible choices: they could both meet at the corner of First Street and First Avenue at 1 PM, they could both meet at First Street and Second Avenue at 1:05 PM, etc. Since neither party would regret their actions (if I went to First and First at 1 and found you there, I would be thrilled) these are all Nash equilibria.

Despite this mind-boggling array of possibilities, in fact all six episodes of this particular game ended with the two contestants meeting successfully after only a few days. The most popular meeting site was the Empire State Building at noon.

How did they do it? The world-famous Empire State Building is what game theorists call focal: it stands out as a natural and obvious target for coordination. Likewise noon, classically considered the very middle of the day, is a focal point in time. These focal points, also called Schelling points after theorist Thomas Schelling who discovered them, provide an obvious target for coordination attempts.

What makes a Schelling point? The most important factor is that it be special. The Empire State Building, depending on when the show took place, may have been the tallest building in New York; noon is the only time that fits the criteria of “exactly in the middle of the day”, except maybe midnight when people would be expected to be too sleepy to meet up properly.
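The claim that every matched meeting choice is a Nash equilibrium can be verified mechanically in a toy version of the game. The landmark list below is made up, and the payoff is simply 1 for meeting and 0 for missing each other.

```python
# Verifying the quoted claim: in a pure coordination game, every matched choice
# is a Nash equilibrium. The landmark list is made up for illustration.

landmarks = ["Empire State Building", "Times Square", "Grand Central"]

def payoff(a, b):
    # Both players win only if they pick the same meeting spot.
    return (1, 1) if a == b else (0, 0)

def is_nash(a, b):
    # Nash equilibrium: neither player gains by unilaterally switching landmarks.
    player1_stays = all(payoff(alt, b)[0] <= payoff(a, b)[0] for alt in landmarks)
    player2_stays = all(payoff(a, alt)[1] <= payoff(a, b)[1] for alt in landmarks)
    return player1_stays and player2_stays

equilibria = [(a, b) for a in landmarks for b in landmarks if is_nash(a, b)]
print(equilibria)  # exactly the matched pairs; the game itself can't pick one,
                   # which is why a focal ("Schelling") point is needed
```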

What makes something “special” is of course dependent on the observer, as David Friedman writes:

Two people are separately confronted with the list of numbers [2, 5, 9, 25, 69, 73, 82, 96, 100, 126, 150] and offered a reward if they independently choose the same number. If the two are mathematicians, it is likely that they will both choose 2—the only even prime. Non-mathematicians are likely to choose 100—a number which seems, to the mathematicians, no more unique than the other two exact squares. Illiterates might agree on 69, because of its peculiar symmetry—as would, for a different reason, those whose interest in numbers is more prurient than mathematical.
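Friedman’s point can be restated as: which element of the list looks focal depends on which property you happen to test for. A tiny, purely illustrative check of the two properties he mentions:

```python
# Friedman's point in miniature: which number looks "special" depends on which
# property you happen to test for. These are just the properties he mentions.

numbers = [2, 5, 9, 25, 69, 73, 82, 96, 100, 126, 150]

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

even_primes     = [n for n in numbers if is_prime(n) and n % 2 == 0]
perfect_squares = [n for n in numbers if int(n ** 0.5) ** 2 == n]

print(even_primes)      # [2]: unique, hence focal for the mathematicians
print(perfect_squares)  # [9, 25, 100]: to them 100 is no more special than 9 or 25
```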

The cool thing about Schelling points is that they “explain almost everything”:

…stock markets, national borders, marriages, private property, religions, fashion, political parties, peace treaties, social networks, software platforms and languages all involve or are based upon Schelling points. In fact, whenever something has “symbolic value” a Schelling point is likely to be involved in some way.

Here are a few examples involving Schelling points:

A democracy provides a Schelling point, … an option which might or might not be the best, but which is not too bad and which everyone agrees on in order to stop fighting. … In the six hundred fifty years between the Norman Conquest and the neutering of the English monarchy, Wikipedia lists about twenty revolts and civil wars. … In the three hundred years since the neutering of the English monarchy and the switch to a more Parliamentary system, there have been exactly zero. … Democracy doesn’t always perform optimally, but it always performs fairly, … and that is enough to prevent people from starting civil wars.

Academia is different. Its state resembles that of pre-democratic governments, when anyone could choose a side, claim it was legitimate, and then get into endless protracted fights with the partisans of other sides. If you believe ObamaCare will destroy the economy, you will have no trouble finding a prestigious academic who agrees with you. Then all you need to do is accuse the other academics of bias, or cherry-picking, or using the wrong statistical test, or any of the other ways to discredit scientists you don’t like. …

A democratic vote among the scientific establishment is insufficient to settle these topics. The most important problem is that it gives massive power to the people who determine who gets to be part of “the scientific establishment”. … So not having any Schelling point – being hopelessly confused about the legitimacy of academic ideas – sucks. But a straight democratic vote of academics would also suck and be potentially unfair.

Prediction markets avoid these problems. There is no question of who the experts are: anyone can invest in a prediction market. There’s no question of special interests taking it over; this just distributes free money to more honest investors. Not only do they escape real bias, but more importantly they escape perceived bias. It is breathtakingly beautiful how impossible it is to rail that a prediction market is the tool of the liberal media or whatever. …

Just as democracy made it harder to fight over leadership, prediction markets make it harder to fight over beliefs. We can still fight over values, of course – if you hate teenagers having sex, and I don’t care about it, we can debate that all day long. But if we want to know whether a certain law will raise the pregnancy rate, there will be only one correct answer, and it will only be a mouse-click away.

I think this would have more positive effects than anyone anticipates. If people took it seriously, not only would the gun control debate be over in an hour, but it would end on the objectively right side, whichever side that was. If single-payer would be better than Obamacare, we could implement single-payer and anyone who tried to make up horror stories about how it would destroy health care would be laughed out of the room. And once these issues have gone away, maybe we can reach the point where half the country stops hating the other half because of disagreements which are largely over factual issues.

At this point I will admit to not knowing enough about prediction markets and the issues they can address to have any sort of reasonably informed opinion on the matter, so in the spirit of not agreeing with someone just because I like the way he writes, I implore the reader to actually find out more about prediction markets before coming to any kind of conclusion. Wikipedia on prediction markets:

Prediction markets are speculative markets created for the purpose of making predictions. The current market prices can then be interpreted as predictions of the probability of the event or the expected value of the parameter. For example, a prediction market security might reward a dollar if a particular candidate is elected, such that an individual who thinks the candidate has a 70% chance of being elected should be willing to pay up to 70 cents for such a security.

People who buy low and sell high are rewarded for improving the market prediction, while those who buy high and sell low are punished for degrading the market prediction. Evidence so far suggests that prediction markets are at least as accurate as other institutions predicting the same events with a similar pool of participants.
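The pricing logic in that passage is just expected value. Here’s a small, purely illustrative sketch (the prices, the belief, and the candidate example are all invented) of why a trader’s willingness to buy or sell such a contract pushes the price toward their probability estimate:

```python
# The pricing logic from the passage above, as plain expected value. The market
# prices and the belief are invented for illustration.

def expected_profit(price, probability, side="buy"):
    """Expected profit per contract that pays $1 if the event occurs."""
    if side == "buy":      # pay `price` now, receive $1 if the event happens
        return probability - price
    else:                  # sell: collect `price` now, pay $1 if it happens
        return price - probability

my_belief = 0.70   # I think the candidate has a 70% chance of being elected

print(f"{expected_profit(0.62, my_belief, 'buy'):+.2f}")   # +0.08: underpriced, buy
print(f"{expected_profit(0.80, my_belief, 'buy'):+.2f}")   # -0.10: overpriced for me
print(f"{expected_profit(0.80, my_belief, 'sell'):+.2f}")  # +0.10: better to sell
```

Trading away mispricings like this is exactly what “buying low and selling high” rewards, and it is what drags the market price toward the traders’ aggregate probability estimate.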

My absurdity heuristic says that Scott’s last paragraph, on the possible implications of taking prediction markets seriously for e.g. gun control and healthcare reform, sounds a bit too good to be true, but then my absurdity heuristic isn’t even calibrated in this case. Any ideas?
