One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay I offer a detailed case that one would be wrong: in particular, I argue that computational complexity theory—which studies the resources (e.g. time, space, randomness) needed to solve computational problems—leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume’s problem of induction, Goodman’s grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest….
—Scott Aaronson, in the abstract to “Why Philosophers Should Care About Computational Complexity”
I initially wanted to write about Scott’s paper (the abstract of which appears in part in the quote above), because it appeared so tremendously exciting, but decided I’d give it a miss for now because not being able to copy-paste with reckless abandon is sort of disincentivizing. Instead I’m going to write about my disappointment with the way philosophy is done today, how we might do better, and then go on a complete tangent and talk about Scott’s skeptical review of Max Tegmark’s big idea that “all reality is math” (not just isomorphic to/can be modeled by math, but is in fact math). I agree it should be two different posts, but IS-OUGHT SO HA!
I’m all for (potential) solutions to Big Philosophical Problems coming from “completely out of left field” (to bastardize Fields Medalist Martin Hairer’s description of his “Tolkienesque” publication). AI researcher and philosopher Aaron Sloman lucidly expounds upon this attitude with respect to his main field of expertise (AI) in his book “The Computer Revolution in Philosophy”, available entirely online:
Very many of the problems and concepts discussed by philosophers over the centuries have been concerned with processes, whereas philosophers, like everybody else, have been crippled in their thinking about processes by too limited a collection of concepts and formalisms. Here are some age-old philosophical problems explicitly or implicitly concerned with processes. How can sensory experience provide a rational basis for beliefs about physical objects? How can concepts be acquired through experience, and what other methods of concept formation are there? Are there rational procedures for generating theories or hypotheses? What is the relation between mind and body? How can non-empirical knowledge, such as logical or mathematical knowledge, be acquired? How can the utterance of a sentence relate to the world in such a way as to say something true or false? How can a one-dimensional string of words be understood as describing a three-dimensional or multi-dimensional portion of the world? What forms of rational inference are there? How can motives generate decisions, intentions and actions? How do non-verbal representations work? Are there rational procedures for resolving social conflicts?
There are many more problems in all branches of philosophy concerned with processes, such as perceiving, inferring, remembering, recognising, understanding, learning, proving, explaining, communicating, referring, describing, interpreting, imagining, creating, deliberating, choosing, acting, testing, verifying, and so on. Philosophers, like most scientists, have an inadequate set of tools for theorising about such matters, being restricted to something like common sense plus the concepts of logic and physics. A few have clutched at more recent technical developments, such as concepts from control theory (e.g. feedback) and the mathematical theory of games (e.g. payoff matrix), but these are hopelessly deficient for the tasks of philosophy, just as they are for the task of psychology.
The new discipline of artificial intelligence explores ways of enabling computers to do things which previously could be done only by people and the higher mammals (like seeing things, solving problems, making and testing plans, forming hypotheses, proving theorems, and understanding English). It is rapidly extending our ability to think about processes of the kinds which are of interest to philosophy. So it is important for philosophers to investigate whether these new ideas can be used to clarify and perhaps helpfully reformulate old philosophical problems, re-evaluate old philosophical theories, and, above all, to construct important new answers to old questions. As in any healthy discipline, this is bound to generate a host of new problems, and maybe some of them can be solved too.
I am prepared to go so far as to say that within a few years, if there remain any philosophers who are not familiar with some of the main developments in artificial intelligence, it will be fair to accuse them of professional incompetence, and that to teach courses in philosophy of mind, epistemology, aesthetics, philosophy of science, philosophy of language, ethics, metaphysics, and other main areas of philosophy, without discussing the relevant aspects of artificial intelligence will be as irresponsible as giving a degree course in physics which includes no quantum theory. Later in this book I shall elucidate some of the connections.
Philosophy can make progress, despite appearances. Perhaps in future the major advances will be made by people who do not call themselves philosophers.
The rest of the chapter, indeed the whole book, is worth reading both for its wealth of insights and for what it represents (to me): a tragedy. Because this book was published all the way back in 1978.
That’s over a third of a century.
Why oh why haven’t theoretical advances in this field, indeed in other (seemingly “unrelated”) fields, permeated the discursive sphere of the Big Problems in philosophy as much as they should – why did beginner-me have to go to the fringes of the discursive sphere to find these ideas? Why are beginning students like me, introduced to these Big Problems, still given the impression that we are “nowhere near close to making progress” in answering them? Do we even want to see these problems resolved, or do we just want to revel in the “sense of mystery” that accompanies our asking these questions?
In that sense it’s a bit tragic. Lots of beginning students in philosophy spend long years retracing the missteps of past thinkers on Big Problems through the annals of history, as if it’s the most effective way to prepare them to face these Bigs themselves. (If you want to focus on them, call it “history of philosophy” or something, and don’t pedagogically focus on them to the exclusion of modern developments in philosophical thought. Simply put, teach philosophy something like how science is taught.)
I’m not saying it’s useless. I’m saying it’s not optimized.
Luke Muehlhauser writes:
Large swaths of philosophy (e.g. continental and postmodern philosophy) often don’t even try to be clear, rigorous, or scientifically respectable. This is philosophy of the “Uncle Joe’s musings on the meaning of life” sort, except that it’s dressed up in big words and long footnotes. You will occasionally stumble upon an argument, but it falls prey to magical categories and language confusions and non-natural hypotheses. You may also stumble upon science or math, but they are used to ‘prove’ things irrelevant to the actual scientific data or the equations used.
Analytic philosophy is clearer, more rigorous, and better with math and science, but only does a slightly better job of avoiding magical categories, language confusions, and non-natural hypotheses. Moreover, its central tool is intuition, and this displays a near-total ignorance of how brains work. As Michael Vassar observes, philosophers are “spectacularly bad” at understanding that their intuitions are generated by cognitive algorithms.
What about Quinean naturalists? Many of them at least understand the basics: that things are made of atoms, that many questions don’t need to be answered but instead dissolved, that the brain is not an a priori truth factory, that intuitions come from cognitive algorithms, that humans are loaded with bias, that language is full of tricks, and that justification rests in the lens that can see its flaws. Some of them are even Bayesians.
Like I said, a few naturalistic philosophers are doing some useful work. But the signal-to-noise ratio is much lower even in naturalistic philosophy than it is in, say, behavioral economics or cognitive neuroscience or artificial intelligence or statistics. Why? Here are some hypotheses, based on my thousands of hours in the literature:
- Many philosophers have been infected (often by later Wittgenstein) with the idea that philosophy is supposed to be useless. If it’s useful, then it’s science or math or something else, but not philosophy. Michael Bishop says a common complaint from his colleagues about his 2004 book is that it is too useful.
- Most philosophers don’t understand the basics, so naturalists spend much of their time coming up with new ways to argue that people are made of atoms and intuitions don’t trump science. They fight beside the poor atheistic philosophers who keep coming up with new ways to argue that the universe was not created by someone’s invisible magical friend.
- Philosophy has grown into an abnormally backward-looking discipline. Scientists like to put their work in the context of what old dead guys said, too, but philosophers have a real fetish for it. Even naturalists spend a fair amount of time re-interpreting Hume and Dewey yet again.
- Because they were trained in traditional philosophical ideas, arguments, and frames of mind, naturalists will anchor and adjust from traditional philosophy when they make progress, rather than scrapping the whole mess and starting from scratch with a correct understanding of language, physics, and cognitive science. Sometimes, philosophical work is useful to build from: Judea Pearl’s triumphant work on causality built on earlier counterfactual accounts of causality from philosophy. Other times, it’s best to ignore the past confusions. Eliezer made most of his philosophical progress on his own, in order to solve problems in AI, and only later looked around in philosophy to see which standard position his own theory was most similar to.
- Many naturalists aren’t trained in cognitive science or AI. Cognitive science is essential because the tool we use to philosophize is the brain, and if you don’t know how your tool works then you’ll use it poorly. AI is useful because it keeps you honest: you can’t write confused concepts or non-natural hypotheses in a programming language.
- Mainstream philosophy publishing favors the established positions and arguments. You’re more likely to get published if you can write about how intuitions are useless in solving Gettier problems (which is a confused set of non-problems anyway) than if you write about how to make a superintelligent machine preserve its utility function across millions of self-modifications.
- Even much of the useful work naturalistic philosophers do is not at the cutting-edge. Chalmers’ update for I.J. Good’s ‘intelligence explosion’ argument is the best one-stop summary available, but it doesn’t get as far as the Hanson-Yudkowsky AI-Foom debate in 2008 did. Talbot (2009) and Bishop & Trout (2004) provide handy summaries of much of the heuristics and biases literature, just like Eliezer has so usefully done on Less Wrong, but of course this isn’t cutting edge. You could always just read it in the primary literature by Kahneman and Tversky and others.
Of course, there is mainstream philosophy that is both good and cutting-edge: the work of Nick Bostrom and Daniel Dennett stands out. And of course there is a role for those who keep arguing for atheism and reductionism and so on. I was a fundamentalist Christian until I read some contemporary atheistic philosophy, so that kind of work definitely does some good.
But if you’re looking to solve cutting-edge problems, mainstream philosophy is one of the last places you should look. Try to find the answer in the cognitive science or AI literature first, or try to solve the problem by applying rationalist thinking: like this.
(That last link by the way references Scott Alexander’s “Dissolving questions about disease” post, which is great fun.)
This is of course not to say that mainstream philosophy is completely useless – precisely the opposite. Luke himself summarizes “some useful contributions of mainstream philosophy” in a comment on a Less Wrong post, which I’m reproducing in full below since it contains quite a number of nonobvious gems:
Here are some useful contributions of mainstream philosophy:
- Quine’s naturalized epistemology. Epistemology is a branch of cognitive science: that’s where recursive justification hits bottom, in the lens that sees its flaws.
- Tarski on language and truth. One of Tarski’s papers on truth was recently ranked as the 4th most important philosophy paper of the 20th century by a survey of philosophers. Philosophers have developed Tarski’s account much further since then, of course.
- Chalmers’ formalization of Good’s intelligence explosion argument. Good’s 1965 paper was important, but it presented no systematic argument; only hand-waving. Chalmers breaks down Good’s argument into parts and examines the plausibility of each part in turn, considers the plausibility of various defeaters and possible paths, and makes a more organized and compelling case for Good’s intelligence explosion than anybody at SIAI has.
- Dennett on belief in belief. Used regularly on Less Wrong.
- Bratman on intention. Bratman’s 1987 book on intention has been a major inspiration to AI researchers working on belief-desire-intention models of intelligent behavior. See, for example, pages 60-61 and 1041 of AIMA (3rd ed.).
- Functionalism and multiple realizability. The philosophy of mind most natural to AI was introduced and developed by Putnam and Lewis in the 1960s, and more recently by Dennett.
- Explaining the cognitive processes that generate our intuitions. Both Shafir (1998) and Talbot (2009) summarize and discuss as much as cognitive scientists know about the cognitive mechanisms that produce our intuitions, and use that data to explore which few intuitions might be trusted and which ones cannot – a conclusion that of course dissolves many philosophical problems generated from conflicts between intuitions. (This is the post I’m drafting, BTW.) Talbot describes the project of his philosophy dissertation for USC this way: “…where psychological research indicates that certain intuitions are likely to be inaccurate, or that whole categories of intuitions are not good evidence, this will overall benefit philosophy. This has the potential to resolve some problems due to conflicting intuitions, since some of the conflicting intuitions may be shown to be unreliable and not to be taken seriously; it also has the potential to free some domains of philosophy from the burden of having to conform to our intuitions, a burden that has been too heavy to bear in many cases…” Sound familiar?
- Pearl on causality. You acknowledge the breakthrough. While you’re right that this is mostly a case of an AI researcher coming in from the outside to solve philosophical problems, Pearl did indeed make use of the existing research in mainstream philosophy (and AI, and statistics) in his book on causality.
- Drescher’s Good and Real. You’ve praised this book as well, which is the result of Drescher’s studies under Dan Dennett at Tufts. And the final chapter is a formal defense of something like Kant’s categorical imperative.
- Dennett’s “intentional stance.” A useful concept in many contexts, for example here.
- Bostrom on anthropic reasoning. And global catastrophic risks. And Pascal’s mugging. And the doomsday argument. And the simulation argument.
- Ord on risks with low probabilities and high stakes. Here.
- Deontic logic. The logic of actions that are permissible, forbidden, obligatory, etc. Not your approach to FAI, but will be useful in constraining the behavior of partially autonomous machines prior to superintelligence, for example in the world’s first battlefield robots.
- Reflective equilibrium. Reflective equilibrium is used in CEV. It was first articulated by Goodman (1965), then by Rawls (1971), and in more detail by Daniels (1996). See also the more computational discussion in Thagard (1988), ch. 7.
- Experimental philosophy on the biases that infect our moral judgments. Experimental philosophers are now doing Kahneman & Tversky -ish work specific to biases that infect our moral judgments. Knobe, Nichols, Haidt, etc. See an overview in Experiments in Ethics.
- Greene’s work on moral judgment. Joshua Greene is a philosopher and neuroscientist at Harvard whose work using brain scanners and trolley problems (since 2001) is quite literally decoding the algorithms we use to arrive at moral judgments, and helping to dissolve the debate between deontologists and utilitarians (in his view, in favor of utilitarianism).
- Dennett’s Freedom Evolves. The entire book is devoted to explaining the evolutionary processes that produced the cognitive algorithms that produce the experience of free will and the actual kind of free will we do have.
- Quinean naturalists showing intuitionist philosophers that they are full of shit. See for example, Schwitzgebel and Cushman demonstrating experimentally that moral philosophers have no special expertise in avoiding known biases. This is the kind of thing that brings people around to accepting those very basic starting points of Quinean naturalism as a first step toward doing useful work in philosophy.
- Bishop & Trout on ameliorative psychology. Much of Less Wrong’s writing is about how to use our awareness of cognitive biases to make better decisions and have a higher proportion of beliefs that are true. That is the exact subject of Bishop & Trout (2004), which they call “ameliorative psychology.” The book reads like a long sequence of Less Wrong posts, and was the main source of my post on statistical prediction rules, which many people found valuable. And it came about two years before the first Eliezer post on Overcoming Bias. If you think that isn’t useful stuff coming from mainstream philosophy, then you’re saying a huge chunk of Less Wrong isn’t useful.
- Talbot on intuitionism about consciousness. Talbot (here) argues that intuitionist arguments about consciousness are illegitimate because of the cognitive process that produces them: “Recently, a number of philosophers have turned to folk intuitions about mental states for data about whether or not humans have qualia or phenomenal consciousness. [But] this is inappropriate. Folk judgments studied by these researchers are mostly likely generated by a certain cognitive system – System One – that will ignore qualia when making these judgments, even if qualia exist.”
- “The mechanism behind Gettier intuitions.” This upcoming project of the Boulder philosophy department aims to unravel a central (misguided) topic of 20th century epistemology by examining the cognitive mechanisms that produce the debate. Dissolution to algorithm yet again. They have other similar projects ongoing, too.
- Computational meta-ethics. I don’t know if Lokhorst’s paper in particular is useful to you, but I suspect that kind of thing will be, and Lokhorst’s paper is only the beginning. Lokhorst is trying to implement a meta-ethical system computationally, and then actually testing what the results are.
Of course that’s far from all there is, but it’s a start.
Coming back to the original topic, Luke gives a large number of suggestions on how to do philosophy better, which I found pretty good:
So let me tell you what I think cutting-edge philosophy should be:
- Write short articles. One or two major ideas or arguments per article, maximum. Try to keep each article under 20 pages. It’s hard to follow a hundred-page argument.
- Open each article by explaining the context and goals of the article (even if you cover mostly the same ground in the opening of 5 other articles). What topic are you discussing? Which problem do you want to solve? What have other people said about the problem? What will you accomplish in the paper? Introduce key terms, cite standard sources and positions on the problem you’ll be discussing, even if you disagree with them.
- If possible, use the standard terms in the field. If the standard terms are flawed, explain why they are flawed and then introduce your new terms in that context so everybody knows what you’re talking about. This requires that you research your topic so you know what the standard terms and positions are. If you’re talking about a problem in cognitive science, you’ll need to read cognitive science literature. If you’re talking about a problem in social science, you’ll need to read social science literature. If you’re talking about a problem in epistemology or morality, you’ll need to read philosophy.
- Write as clearly and simply as possible. Organize the paper with lots of headings and subheadings. Put in lots of ‘hand-holding’ sentences to help your reader along: explain the point of the previous section, then explain why the next section is necessary, etc. Patiently guide your reader through every step of the argument, especially if it is long and complicated.
- Always cite the relevant literature. If you can’t find much work relevant to your topic, you almost certainly haven’t looked hard enough. Citing the relevant literature not only lends weight to your argument, but also enables the reader to track down and examine the ideas or claims you are discussing. Being lazy with your citations is a sure way to frustrate precisely those readers who care enough to read your paper closely.
- Think like a cognitive scientist and AI programmer. Watch out for biases. Avoid magical categories and language confusions and non-natural hypotheses. Look at your intuitions from the outside, as cognitive algorithms. Update your beliefs in response to evidence.
- Use your rationality training, but avoid language that is unique to Less Wrong. Nearly all these terms and ideas have standard names outside of Less Wrong (though in many cases Less Wrong already uses the standard language).
- Don’t dwell too long on what old dead guys said, nor on semantic debates. Dissolve semantic problems and move on.
- Conclude with a summary of your paper, and suggest directions for future research.
- Ask fellow rationalists to read drafts of your article, then re-write. Then rewrite again, adding more citations and hand-holding sentences.
- Format the article attractively. A well-chosen font makes for an easier read. Then publish (in a journal or elsewhere).
Note that this is not just my vision of how to get published in journals. It’s my vision of how to do philosophy.
Meeting journal standards is not the most important reason to follow the suggestions above.
- Write short articles because they’re easier to follow.
- Open with the context and goals of your article because that makes it easier to understand, and lets people decide right away whether your article fits their interests.
- Use standard terms so that people already familiar with the topic aren’t annoyed at having to learn a whole new vocabulary just to read your paper.
- Cite the relevant positions and arguments so that people have a sense of the context of what you’re doing, and can look up what other people have said on the topic.
- Write clearly and simply and with much organization so that your paper is not wearying to read.
- Write lots of hand-holding sentences because we always communicate less effectively than we thought we did.
- Cite the relevant literature as much as possible to assist your most careful readers in getting the information they want to know.
- Use your rationality training to remain sharp at all times. And so on.
That is what cutting-edge philosophy could look like, I think.
The rest of the series of posts is interesting reading. Now for the tangent.
Scott Aaronson, an associate professor of EE and CS at MIT, is one of the more interesting figures in theoretical computer science (specifically quantum computing). His blog is an absolute delight to read, and I’ve written about him before in a long, rambling post here. He’s my go-to guy when it comes to examining whether the hype on some “radical new development” in quantum computing or P vs. NP is warranted (see e.g. “Eight signs a claimed P vs. NP proof is wrong” with respect to Vinay Deolalikar’s claimed proof, and here for a discussion of Google’s D-Wave “quantum computer” acquisition – also here for “likely the most thorough and precise study that has been done on the performance of the D-Wave machine”, which found no quantum speedup, and here for “how to measure quantum speedup” when evaluating speedup claims and “avoiding pitfalls that might mask or fake such a speedup” – okay, I digressed a bit there).
Here Scott Aaronson reviews Max Tegmark’s ever-interesting Mathematical Universe Hypothesis, or MUH (see Max’s own description here) in a fascinating post that caught my eye right around here (see text in bold):
In my view, the MUH gestures toward two points that are both correct and important—neither of them new, but both well worth repeating in a pop-science book.
The first is that the laws of physics aren’t “suggestions,” which the particles can obey when they feel like it but ignore when Uri Geller picks up a spoon. In that respect, they’re completely unlike human laws, and the fact that we use the same word for both is unfortunate.
Nor are the laws merely observed correlations, as in “scientists find link between yogurt and weight loss.” The links of fundamental physics are ironclad: the world “obeys” them in much the same sense that a computer obeys its code, or the positive integers obey the rules of arithmetic. Of course we don’t yet know the complete program describing the state evolution of the universe, but everything learned since Galileo leads one to expect that such a program exists. (According to quantum mechanics, the program describing our observed reality is a probabilistic one, but for me, that fact by itself does nothing to change its lawlike character. After all, if you know the initial state, Hamiltonian, and measurement basis, then quantum mechanics gives you a perfect algorithm to calculate the probabilities.)
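The “perfect algorithm” Scott mentions can be made concrete in a toy single-qubit sketch (the Hamiltonian and numbers here are my own illustrative choices, not anything from his post): given an initial state, a Hamiltonian, and a measurement basis, unitary evolution plus the Born rule mechanically outputs the outcome probabilities.

```python
# Toy instance of "QM as a probabilistic program": initial state in,
# outcome probabilities out. We pick H = theta * (Pauli-X), for which
# the time-evolution unitary has the closed form
#   U = exp(-iH) = cos(theta) I - i sin(theta) X   (hbar = 1, t = 1).
import math

def evolve_and_measure(theta):
    c, s = math.cos(theta), math.sin(theta)
    # U = cos(theta) I - i sin(theta) X, written out as a 2x2 matrix
    U = [[c, -1j * s],
         [-1j * s, c]]
    psi0 = [1, 0]  # initial state |0>
    # Apply U to the state vector
    psi = [U[0][0] * psi0[0] + U[0][1] * psi0[1],
           U[1][0] * psi0[0] + U[1][1] * psi0[1]]
    # Born rule in the computational (measurement) basis
    return [abs(a) ** 2 for a in psi]

probs = evolve_and_measure(math.pi / 4)
assert abs(sum(probs) - 1) < 1e-12  # probabilities always sum to 1
```

The point the sketch illustrates: the probabilities themselves are computed deterministically; only the individual measurement outcomes are random, which is why the move to a probabilistic program doesn’t by itself threaten the lawlike character Scott describes.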
The first part was nothing new. People sometimes mix up their referents whenever they use the same label (word) for them, especially if one referent is familiar and the other(s) forbiddingly technical – which is one good reason to be precise. Put more verbosely:
There is a trivial mapping between a graph and its complement. A fully connected graph, with an edge between every two vertices, conveys the same amount of information as a graph with no edges at all. The important graphs are the ones where some things are not connected to some other things.
When the unenlightened ones try to be profound, they draw endless verbal comparisons between this topic, and that topic, which is like this, which is like that; until their graph is fully connected and also totally useless. The remedy is specific knowledge and in-depth study. When you understand things in detail, you can see how they are not alike, and start enthusiastically subtracting edges off your graph.
Likewise, the important categories are the ones that do not contain everything in the universe. Good hypotheses can only explain some possible outcomes, and not others.
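The graph/complement point in the quoted passage can be sketched in a few lines (my own illustration, not from the original): complementation is a bijection on simple graphs, so a graph and its complement carry exactly the same information, and the fully connected graph maps to the empty one.

```python
# Complementation pairs each simple graph with exactly one other graph,
# so no information is lost either way; the complete graph's partner is
# the empty graph, which is why "everything connects to everything"
# says nothing.
from itertools import combinations

def complement(n, edges):
    """Complement of a simple graph on vertices 0..n-1."""
    all_edges = {frozenset(e) for e in combinations(range(n), 2)}
    return all_edges - {frozenset(e) for e in edges}

n = 4
g = {frozenset({0, 1}), frozenset({2, 3})}
# Complementing twice recovers the original graph (a bijection):
assert complement(n, complement(n, g)) == g
# The complete graph's complement is the empty graph:
complete = {frozenset(e) for e in combinations(range(n), 2)}
assert complement(n, complete) == set()
```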
But the second part of Scott’s remark honestly ran completely askew of my intuitions regarding scientific laws. Sure, there’s an underlying territory, and there are our maps describing the territory in various degrees of detail (with the associated trade-off between practical utility and precision, etc.), but how do we know (or how can we be, say, 99+% Bayes-confident) that our map perfectly describes the territory? Is this some computational-complexity insight into the nature of induction I’m missing somehow, or just mountains of empirical evidence via real-world applications of QM in various industries, or…?
Maybe I’m just making a fuss over a distinction that doesn’t really make a difference. Thinking of physical laws as mathematically codified expressions of previously observed very, very good correlations between physical phenomena seems perfectly fine: it leaves the laws open to falsification, and it reminds us not to just dismiss “bad” experimental results (read: those not in agreement with the predictive model) as wholly due to sloppy experimental implementation/design (no doubt that is the explanation in the majority of cases), because we’re carrying out the experiment to test the model’s predictive power in the first place (so beware circularity!). Saying the world “obeys physical laws” in the axiomatic manner the positive integers “obey the rules of arithmetic” sure sounds stronger than saying “previously observed correlations between physical phenomena match up perfectly well with our model’s predictions”, but functionally speaking they’re equivalent (up to the present), so I’ll shut up right here and keep my “correlations mindset” to myself.
Anyway, here are other interesting tidbits from his review of Max’s MUH:
The second true and important nugget in the MUH is that the laws are “mathematical.” By itself, I’d say that’s a vacuous statement, since anything that can be described at all can be described mathematically. (As a degenerate case, a “mathematical description of reality” could simply be a gargantuan string of bits, listing everything that will ever happen at every point in spacetime.)
The nontrivial part is that, at least if we ignore boundary conditions and the details of our local environment (which maybe we shouldn’t!), the laws of nature are expressible as simple, elegant math—and moreover, the same structures (complex numbers, group representations, Riemannian manifolds…) that mathematicians find important for internal reasons, again and again turn out to play a crucial role in physics. It didn’t have to be that way, but it is.
See e.g. Eugene Wigner’s now-classic article on this, related quotations by e.g. Kant, Einstein, Russell, Norvig, Gelfand (because I like name-dropping heh), Richard Hamming’s assertion that “evolution has primed humans to think mathematically” (really? Sorry bad absurdity heuristic), and finally our very own Max Tegmark’s response to Wigner that “of course math describes reality unreasonably well: math is reality!”.
This is where we return to Scott, who argues that this contention is vacuous.
Putting the two points together, it seems fair to say that the physical world is “isomorphic to” a mathematical structure—and moreover, a structure whose time evolution obeys simple, elegant laws. All of this I find unobjectionable: if you believe it, it doesn’t make you a Tegmarkian; it makes you ready for freshman science class.
But Tegmark goes further.
He doesn’t say that the universe is “isomorphic” to a mathematical structure; he says that it is that structure, that its physical and mathematical existence are the same thing. Furthermore, he says that every mathematical structure “exists” in the same sense that “ours” does; we simply find ourselves in one of the structures capable of intelligent life (which shouldn’t surprise us). Thus, for Tegmark, the answer to Stephen Hawking’s famous question—“What is it that breathes fire into the equations and gives them a universe to describe?”—is that every consistent set of equations has fire breathed into it. Or rather, every mathematical structure of at most countable cardinality whose relations are definable by some computer program. (Tegmark allows that structures that aren’t computably definable, like the set of real numbers, might not have fire breathed into them.)
Scott briefly runs through Max’s multiverse hierarchy, going all the way from Level I (the regions of spacetime beyond our observable horizon) to Level IV (“the ensemble of all computable mathematical structures, constituting the totality of existence”), which Max puts on the same ontological footing as the set of things we can observe: to (for instance) deny the physical existence of a computable function is, for Max, to deny our own physical existence. It’s Platonism taken to the limit, what most people would consider a reductio ad absurdum of the MUH, but Max bites the bullet on that one, so there.
Bad absurdity heuristic notwithstanding, it does force you to confront this: What counts as a proper explanation, a proper belief?
Eliezer Yudkowsky has written a ton of enlightening stuff on how to see through “explanations/answers/beliefs/statements that don’t really say/answer/mean anything”; check out his 29-post series for instance. From Eliezer, Scott gets the notion of a proper belief as one that “pays rent” (in anticipated experiences). Scott writes:
Why should you believe in any of these multiverses? Or better: what does it buy you to believe in them?
If you believe physical existence to be the same thing as mathematical existence, what puzzles does that help to explain? What novel predictions does it make?
When most scientists say they want “predictions,” they have in mind something meatier than “predict the universe will continue to be describable by mathematics.” (How would we know if we found something that wasn’t mathematically describable? Could we even describe such a thing with English words, in order to write papers about it?)
What’s worse is that Tegmark’s rules appear to let him have it both ways.
To whatever extent the laws of physics turn out to be “as simple and elegant as anyone could hope for,” Tegmark can say: “you see? that’s evidence for the mathematical character of our universe, and hence for the MUH!” But to whatever extent the laws turn out not to be so elegant, to be weird or arbitrary, he can say: “see? that’s evidence that our laws were selected more-or-less randomly among all possible laws compatible with the existence of intelligent life—just as the MUH predicted!”
Note the remark on Max “having it both ways”. Proper explanations constrain anticipated experience: if a theory is equally good at explaining why something happens and why it doesn’t, then it explains nothing, and it licenses no predictions about the future. You might as well have monkeys throwing darts at a wall. To have it both ways is to have it no way at all. Eliezer:
The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict—or better yet, prohibit. Do you believe that phlogiston is the cause of fire? Then what do you expect to see happen, because of that? Do you believe that Wulky Wilkinsen is a post-utopian? Then what do you expect to see because of that? No, not “colonial alienation”; what experience will happen to you? Do you believe that if a tree falls in the forest, and no one hears it, it still makes a sound? Then what experience must therefore befall you?
It is even better to ask: what experience must not happen to you? Do you believe that elan vital explains the mysterious aliveness of living beings? Then what does this belief not allow to happen—what would definitely falsify this belief? A null answer means that your belief does not constrain experience; it permits anything to happen to you. It floats.
When you argue a seemingly factual question, always keep in mind which difference of anticipation you are arguing about. If you can’t find the difference of anticipation, you’re probably arguing about labels in your belief network—or even worse, floating beliefs, barnacles on your network.
Above all, don’t ask what to believe—ask what to anticipate.
I like that last line. It should be a slogan or something.
Coming back to Scott’s review of Tegmark’s MUH:
Still, maybe the MUH could be sharpened to the point where it did make definite predictions? As Tegmark acknowledges, the central difficulty with doing so is that no one has any idea what measure to use over the space of mathematical objects (or even computably-describable objects). This becomes clear if we ask a simple question like: what fraction of the mathematical multiverse consists of worlds that contain nothing but a single three-dimensional cube?
We could try to answer such a question using the universal prior: that is, we could make a list of all self-delimiting computer programs, then count the total weight of programs that generate a single cube and then halt, where each n-bit program gets assigned 1/2^n weight. Sure, the resulting fraction would be uncomputable, but at least we’d have defined it. Except wait … which programming language should we use? (The constant factors could actually matter here!) Worse yet, what exactly counts as a “cube”? Does it have to have faces, or are vertices and edges enough? How should we interpret the string of 1’s and 0’s output by the program, in order to know whether it describes a cube or not? (Also, how do we decide whether two programs describe the “same” cube? And if they do, does that mean they’re describing the same universe, or two different universes that happen to be identical?)
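(A toy aside of my own, not anything from Scott’s post: the 1/2^n weighting he mentions is easy to sketch. Each n-bit “program” in a prefix-free set gets weight 2^-n, and Kraft’s inequality guarantees the total weight of any prefix-free set is at most 1, which is what lets these weights act as a semimeasure. The “programs” below are just bit strings, standing in for real self-delimiting programs on a universal machine.)

```python
def weight(program: str) -> float:
    """Each n-bit program gets weight 2^-n, as in the universal prior."""
    return 2.0 ** -len(program)

# A prefix-free set: no program is a prefix of another, so the weights
# can't double-count overlapping codes.
programs = ["0", "10", "110", "111"]

total = sum(weight(p) for p in programs)
# Kraft's inequality bounds the total weight of any prefix-free set by 1;
# this particular set is a complete prefix code, so the weights sum to exactly 1.
print(total)  # 1.0
```

The uncomputability Scott mentions enters only when you try to decide which programs “generate a cube and halt” — the weighting itself is elementary.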
These problems are simply more-dramatic versions of the “standard” measure problem in inflationary cosmology, which asks how to make statistical predictions in a multiverse where everything that can happen will happen, and will happen an infinite number of times. The measure problem is sometimes discussed as if it were a technical issue: something to acknowledge but then set to the side, in the hope that someone will eventually come along with some clever counting rule that solves it. To my mind, however, the problem goes deeper: it’s a sign that, although we might have started out in physics, we’ve now stumbled into metaphysics.
Some cosmologists would strongly protest that view. Most of them would agree with me that Tegmark’s Level IV multiverse is metaphysics, but they’d insist that the Level I, Level II, and perhaps Level III multiverses were perfectly within the scope of scientific inquiry: they either exist or don’t exist, and the fact that we get confused about the measure problem is our issue, not nature’s.
My response can be summed up in a question: why not ride this slippery slope all the way to the bottom? Thinkers like Nick Bostrom and Robin Hanson have pointed out that, in the far future, we might expect that computer-simulated worlds (as in The Matrix) will vastly outnumber the “real” world. So then, why shouldn’t we predict that we’re much more likely to live in a computer simulation than we are in one of the “original” worlds doing the simulating? And as a logical next step, why shouldn’t we do physics by trying to calculate a probability measure over different kinds of simulated worlds: for example, those run by benevolent simulators versus evil ones? (For our world, my own money’s on “evil.”)
But why stop there? As Tegmark points out, what does it matter if a computer simulation is actually run or not? Indeed, why shouldn’t you say something like the following: assuming that π is a normal number, your entire life history must be encoded infinitely many times in π’s decimal expansion. Therefore, you’re infinitely more likely to be one of your infinitely many doppelgängers “living in the digits of π” than you are to be the “real” you, of whom there’s only one! (Of course, you might also be living in the digits of e or √2, possibilities that also merit reflection.)
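(The “living in the digits of π” line is at least easy to play with. Here’s a small sketch of mine — the choice of Gibbons’ unbounded spigot algorithm is my own, nothing Scott specifies — that streams π’s digits and hunts for a given string in them. If π really is normal, any finite string, your life history included, eventually turns up.)

```python
from itertools import islice

def pi_digits():
    """Gibbons' unbounded spigot algorithm: yields the decimal digits of pi
    one at a time, starting with the leading 3, using exact integer arithmetic."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = (10 * q, 10 * (r - n * t),
                       (10 * (3 * q + r)) // t - 10 * n)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

digits = ''.join(str(d) for d in islice(pi_digits(), 1000))
print(digits[:16])                # 3141592653589793
print("999999" in digits)         # True: the "Feynman point", six 9s in a row,
                                  # already shows up within the first 1000 digits
```

Finding your doppelgänger, of course, would take rather more than 1000 digits.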
At this point, of course, you’re all the way at the bottom of the slope, in Mathematical Universe Land, where Tegmark is eagerly waiting for you. But you still have no idea how to calculate a measure over mathematical objects: for example, how to say whether you’re more likely to be living in the first 10^10^120 digits of π, or the first 10^10^120 digits of e. And as a consequence, you still don’t know how to use the MUH to constrain your expectations for what you’re going to see next.
Now, notice that these different ways down the slippery slope all have a common structure:
- We borrow an idea from science that’s real and important and profound: for example, the possible infinite size and duration of our universe, or inflationary cosmology, or the linearity of quantum mechanics, or the likelihood of π being a normal number, or the possibility of computer-simulated universes.
- We then run with that idea until we smack right into a measure problem, and lose the ability to make useful predictions.
Scott then writes on the idea of a scientific theory having to be “impressive” to be an achievement, which I found interesting:
What is it, in general, that makes a scientific theory impressive? I’d say that the answer is simple: connecting elegant math to actual facts of experience.
When Einstein said, the perihelion of Mercury precesses at 43 seconds of arc per century because gravity is the curvature of spacetime—that was impressive.
When Dirac said, you should see a positron because this equation in quantum field theory is a quadratic with both positive and negative solutions (and then the positron was found)—that was impressive.
When Darwin said, there must be equal numbers of males and females in all these different animal species because any other ratio would fail to be an equilibrium—that was impressive.
When people say that multiverse theorizing “isn’t science,” I think what they mean is that it’s failed, so far, to be impressive science in the above sense. It hasn’t yet produced any satisfying clicks of understanding, much less dramatically-confirmed predictions.
Yes, Steven Weinberg kind-of, sort-of used “multiverse” reasoning to predict—correctly—that the cosmological constant should be nonzero. But as far as I can tell, he could just as well have dispensed with the “multiverse” part, and said: “I see no physical reason why the cosmological constant should be zero, rather than having some small nonzero value still consistent with the formation of stars and galaxies.”
He uses “impressive” in the “whoa such explanatory power much predictive accuracy!” sense of the word, as the examples make clear. (Digression: “clicks” reminded me of David Foster Wallace’s “Puig clicks like a fucking Geiger counter” line – see here).
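(Incidentally, the Darwin example is the one you can check at home. A toy version of the equal-investment argument — my simplification, not Darwin’s own reasoning: if a fraction p of the population is male, a son’s expected contribution to your grandchild count scales like 1/p and a daughter’s like 1/(1−p), so producing sons with probability q has payoff proportional to q/p + (1−q)/(1−p). Only p = 1/2 makes every strategy equally good.)

```python
def payoff(q: float, p: float) -> float:
    """Expected grandchildren (up to a constant) for a parent producing sons
    with probability q, in a population whose male fraction is p."""
    return q / p + (1 - q) / (1 - p)

# When males are rare, producing sons pays; when common, daughters pay:
print(payoff(1.0, 0.4) > payoff(0.0, 0.4))   # True
print(payoff(1.0, 0.6) < payoff(0.0, 0.6))   # True
# Only at p = 0.5 is every q equally good -- the equilibrium Darwin pointed to:
print(payoff(0.0, 0.5) == payoff(0.5, 0.5) == payoff(1.0, 0.5))  # True
```

Any deviation from 1:1 rewards parents who produce the rarer sex, which is exactly why no other ratio is an equilibrium.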
I don’t know how to end this long-winded post, so I’ll quote one of my favorite writers:
“And then, as she stared into space trying to address the universal conundrums (peculiar to her Writer Self and her infinitely vast, infinitely lonely Writer Universe) of ‘What Would Happen Next?’ or ‘Whether You Could Fill Up The In Betweens With More Pleonastic Shit, Florid Details and Imagery Than Necessary’…. the writer was attacked by A Severe Case Of Writer’s Block, Most Deadly, and struck by the sudden realization of this newfound freedom, heaved a deep sigh of relief and died.”