In this post he gives a nice example of how rational "futarchies" (and global rationality in general) can be far more appealing than modern-day democracies and other current modes of governance, at least from a utilitarian standpoint (e.g., thousands of people not needlessly dying). I’ll quote at length from that post below, but first: futa-what?
There are many good introductions to economist Robin Hanson’s idea of futarchy, a form of government based on prediction markets in which we would (in his words) “vote on values, but bet on beliefs”. (As an aside, I find his short bio one of the more interesting ones I’ve seen, probably because it reflects many interests I share, which he has pursued more vigorously and for longer.) From his own introductory article:
“In “futarchy,” we would vote on values, but bet on beliefs. Elected representatives would formally define and manage an after-the-fact measurement of national welfare, while market speculators would say which policies they expect to raise national welfare.”
Why is this better than democracy? You really should read the article for a fuller treatment (or his paper, if you prefer something “more detailed and academic”), but anyway here’s why, according to Hanson:
“Democracy seems better than autocracy (i.e., kings and dictators), but it still has problems. There are today vast differences in wealth among nations, and we can not attribute most of these differences to either natural resources or human abilities. Instead, much of the difference seems to be that the poor nations (many of which are democracies) are those that more often adopted dumb policies, policies which hurt most everyone in the nation. And even rich nations frequently adopt such policies.
These policies are not just dumb in retrospect; typically there were people who understood a lot about such policies and who had good reasons to disapprove of them beforehand. It seems hard to imagine such policies being adopted nearly as often if everyone knew what such “experts” knew about their consequences. Thus familiar forms of government seem to frequently fail by ignoring the advice of relevant experts (i.e., people who know relevant things).
Would some other form of government more consistently listen to relevant experts? Even if we could identify the current experts, we could not just put them in charge (sound familiar?). They might then do what is good for them rather than what is good for the rest of us, and soon after they came to power they would no longer be the relevant experts. Similar problems result from giving them an official advisory role.
“Futarchy” is an as yet untried form of government intended to address such problems. In futarchy, democracy would continue to say what we want, but betting markets would now say how to get it. That is, elected representatives would formally define and manage an after-the-fact measurement of national welfare, while market speculators would say which policies they expect to raise national welfare. The basic rule of government would be:
When a betting market clearly estimates that a proposed policy would increase expected national welfare, that proposal becomes law.
Futarchy is intended to be ideologically neutral; it could result in anything from an extreme socialism to an extreme minarchy, depending on what voters say they want, and on what speculators think would get it for them.”
Futarchy, in short, “seems promising given the following assumptions”:
- It’s not that hard to tell rich happy nations from poor miserable ones.
- Democracies fail largely by not aggregating available information.
- Betting (prediction) markets are our best known institution for aggregating available information.
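How does a betting market actually aggregate information? Hanson’s own market-maker design, the logarithmic market scoring rule (LMSR), is the standard mechanism for prediction markets like these. Here is a minimal Python sketch of the idea for a single binary question (the class and method names are mine, and `b` is just an illustrative liquidity setting):

```python
import math

class LMSRMarket:
    """Minimal logarithmic market scoring rule (LMSR) market maker for a
    binary question, e.g. "will policy X raise national welfare?".
    b is the liquidity parameter; larger b means prices move more slowly."""

    def __init__(self, b=100.0):
        self.b = b
        self.q_yes = 0.0  # net YES shares sold so far
        self.q_no = 0.0   # net NO shares sold so far

    def cost(self, q_yes, q_no):
        # LMSR cost function: C(q) = b * ln(exp(q_yes/b) + exp(q_no/b))
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self):
        # Instantaneous YES price, i.e. the market's implied probability
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy_yes(self, shares):
        # A trader pays the difference in the cost function; returns the amount paid
        before = self.cost(self.q_yes, self.q_no)
        self.q_yes += shares
        return self.cost(self.q_yes, self.q_no) - before

m = LMSRMarket(b=100.0)
print(round(m.price_yes(), 2))  # 0.5 -- no information yet
m.buy_yes(80)
print(round(m.price_yes(), 2))  # 0.69 -- price rises as traders bet on YES
```

Traders who believe the true probability is higher than the current price profit in expectation by buying, and in doing so they push the price toward their estimate; that price movement is the information aggregation.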
Back to Scott, who introduces the idea by intriguingly comparing prediction markets to an oracle who could give useful and accurate answers to questions of great consequence:
“What if anyone could ask an oracle the outcome of any war, or planned war, and expect a useful response?
When the oracle predicts the aggressor loses, it might prevent wars from breaking out. If an oracle told the US that the Vietnam War would cost 50,000 lives and a few hundred billion dollars, and the communists would conquer Vietnam anyway, the US probably would have said no thank you.
What about when the aggressor wins? For example, the Mexican-American War, where the United States won the entire Southwest at a cost of “only” ten thousand American casualties and $100 million (with an additional 20,000 Mexican deaths and $50 million in costs to Mexico)?
If both Mexico and America had access to an oracle who could promise them that the war would end with Mexico ceding the Southwest to the US, could Mexico just agree to cede the Southwest to the US at the beginning, and save both sides tens of thousands of deaths and tens of millions of dollars?
Not really. One factor that prevents wars is countries being unwilling to pay the cost even of wars they know they’ll win. If there were a tradition of countries settling wars by appeal to oracle, “invasions” would become much easier. America might just ask “Hey, oracle, what would happen if we invaded Canada and tried to capture Toronto?” The oracle might answer “Well, after 20,000 deaths on both sides and hundreds of millions of dollars wasted, you would eventually capture Toronto.” Then the Americans could tell Canada, “You heard the oracle! Give us Toronto!” – which would be free and easy – when maybe they would never be able to muster the political and economic will to actually launch the invasion.
So it would be in Canada’s best interests not to agree to settle wars by oracular prediction. For the same reasons, most other countries would also refuse such a system.
But I can’t help fretting over how this is really dumb. We have an oracle, we know exactly what the results of the Mexican-American War are going to be, and we can’t use that information to prevent tens of thousands of people from being killed in order to make the result happen? Surely somebody can do better than that.”
(This is also why I get irked at touchy-feely arguments against a utilitarian form of ethics, which basically boil down to saying that “honor” is worth thousands of lives permanently lost; there are of course serious arguments against utilitarianism, but these aren’t among them. And never mind that some of the same people will also insist that “you can’t put a price on life”: yes, we can.) Scott continues:
“What if the United States made Mexico the following deal: suppose a soldier’s life is valued at $10,000 (in 1850 dollars, I guess, not that it matters much when we’re pricing the priceless). So in total, we’re going to lose 10,000 soldiers + $100 million = $200 million to this war. You’re going to lose 20,000 soldiers + $50 million = $250 million to this war.
So tell you what. We’ll dig a giant hole and put $150 million into it. You give us the Southwest. This way, we’re both better off. You’re $250 million ahead of where you would have been otherwise. And we’re $50 million ahead of where we would have been otherwise. And because we have to put $150 million in a hole for you to agree to this, we’re losing 75% of what we would have lost in a real war, and it’s not like we’re just suggesting this on a whim without really having the will to fight.
Mexico says “Okay, but instead of putting the $150 million in a hole, donate it to our favorite charity.”
“Done,” says America, and they shake on it.
As long as that 25% savings in resources isn’t going to make America go blood-crazy, seems like it should work and lead in short order to a world without war.
Unfortunately, oracles continue to be disappointingly cryptic and/or nonexistent. So who cares?
We do have the ordinary ability to make predictions. Can’t Mexico just predict “They’re much bigger than we are, probably we’ll lose, let’s just do what they want?” Historically, no. America offered to buy the Southwest from Mexico for $25 million (I think there are apartments in San Francisco that cost more than that now!) and despite obvious sabre-rattling Mexico refused. Wikipedia explains that “Mexican public opinion and all political factions agreed that selling the territories to the United States would tarnish the national honor.” So I guess we’re not really doing rational calculation here. But surely somewhere in the brains of these people worrying about the national honor, there must have been some neuron representing their probability estimate for Mexico winning, and maybe a couple of dendrites representing how many casualties they expected?
I don’t know. Could be that wars only take place when the leaders of America think America will win and the leaders of Mexico think Mexico will win. But it could also be that jingoism and bravado bias their estimate.
Maybe if there’d been an oracle, and they could have known for sure, they’d have thought “Oh, I guess our nation isn’t as brave and ever-victorious as we thought. Sure, let’s negotiate, take the $25 million, buy an apartment in SF, we can visit on weekends.”
But again, oracles continue to be disappointingly cryptic and/or nonexistent. So what about prediction markets?
Prediction markets are not always accurate, but they should be more accurate than any other method of arriving at predictions, and – when certain conditions are met – very difficult to bias.
Two countries with shared access to a good prediction market should be able to act a lot like two countries with shared access to an oracle. The prediction market might not quite match the oracle in infallibility, but it should not be systematically or detectably wrong. That should mean that no country should be able to correctly say “I think we can outpredict this thing, so we can justifiably believe starting a war might be in our best interest even when the market says it isn’t.” You might luck out, but for each time you luck out there should be more times when you lose big by contradicting the market.
So maybe a war between two rational futarchies would look more like that handshake between the Mexicans and Americans than like anything with guns and bombs.”
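Scott’s bargain arithmetic above is easy to check. A minimal sketch using his stated figures ($10,000 per soldier’s life, in 1850 dollars):

```python
# Scott's Mexican-American War bargain, with his stated figures.
LIFE_VALUE = 10_000  # dollars per soldier's life (1850 dollars)

def war_cost(deaths, monetary_cost):
    """Total cost of fighting = lives lost (priced) + direct spending."""
    return deaths * LIFE_VALUE + monetary_cost

us_cost = war_cost(10_000, 100_000_000)     # $200M for the US
mexico_cost = war_cost(20_000, 50_000_000)  # $250M for Mexico

# The deal: the US destroys (or donates) $150M instead of fighting,
# and Mexico cedes the Southwest, which it would have lost anyway.
side_payment = 150_000_000
us_savings = us_cost - side_payment  # US ends up $50M ahead of fighting
mexico_savings = mexico_cost         # Mexico avoids its whole war cost

print(us_cost, mexico_cost)        # 200000000 250000000
print(us_savings, mexico_savings)  # 50000000 250000000
print(side_payment / us_cost)      # 0.75 -- the US still "loses" 75% of its war cost
```

Both sides come out ahead of fighting, which is the whole point of the handshake.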
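The claim that you shouldn’t expect to profit by contradicting the market can be illustrated with a small simulation: if the market’s price equals the event’s true probability, betting against it wins sometimes but averages out to zero (and turns negative once fees are added). A sketch under that calibration assumption (the function name and numbers are mine):

```python
import random

def avg_profit_contradicting(true_prob, n_bets=200_000, seed=42):
    """Repeatedly buy $1-payout NO contracts against a market whose price
    equals the event's true probability. If the market is calibrated,
    average profit converges to zero: no systematic edge from contradicting it."""
    rng = random.Random(seed)
    price_no = 1.0 - true_prob  # cost of a NO contract paying $1 if the event fails
    total = 0.0
    for _ in range(n_bets):
        occurred = rng.random() < true_prob
        payout = 0.0 if occurred else 1.0
        total += payout - price_no
    return total / n_bets

# Market says 70% the invasion succeeds; bet against it 200,000 times.
print(abs(avg_profit_contradicting(0.7)) < 0.01)  # True: no exploitable edge
```

You might “luck out” on any single bet, but over many bets the wins and losses cancel, which is what Scott means by losing big for each time you luck out.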
This isn’t so much starry-eyed idealism as it is simple non-stupidity. This is also what I mean when I say the world needs more rationality: not emotionless, calculating Spocks who aren’t actually rational (game-theoretic analyses that assume everyone else is a perfect reasoner aren’t much better), but rather what you might call more sensible decision-making, systematized, and based on obvious-to-a-child assumptions like “it’s better to feel emotionally hurt (i.e. ‘tarnished honor’, or status loss) than to get outright killed”.
Also note that I’m not explicitly advocating futarchy. I’m just saying that if the democracy we have now doesn’t prevent nations from going to war and getting thousands of people killed needlessly, then it’s not optimized for the pursuit of happiness. We should therefore stop pretending that democracy (at least in its current form) is “the best form of government that can possibly exist”, or that anyone who says otherwise “is an anarchist” or some other nonsense; futarchy might be one of many, many ways to improve on it.
The rest of his post is interesting only if you’re familiar with the literature on superintelligence, existential risk, and what people like to call “the Singularity”, with all its confused misconceptions. Here’s an exhaustive (352-page) introductory treatment of superintelligence and its relevance to our future affairs by Nick Bostrom, one of the leaders in the field; here’s a quick overview in the form of a response to the 2009 EDGE question “What will change everything?”; here’s a quick overview of the three different schools of academic thought on the Singularity (as well as a fourth “school” that pretty much sums up what everyone else who’s vaguely heard of the word thinks); and here’s an early paper by Bostrom on existential risks (potential human extinction scenarios) and why we should care.