“You ever hear of the Chinese Room?” I asked.
She shook her head. “Only vaguely. Really old, right?”
“Hundred years at least. It’s a fallacy really, it’s an argument that supposedly puts the lie to Turing tests. You stick some guy in a closed room. Sheets with strange squiggles come in through a slot in the wall. He’s got access to this huge database of squiggles just like it, and a bunch of rules to tell him how to put those squiggles together.”
“Grammar,” Chelsea said. “Syntax.”
I nodded. “The point is, though, he doesn’t have any idea what the squiggles are, or what information they might contain. He only knows that when he encounters squiggle delta, say, he’s supposed to extract the fifth and sixth squiggles from file theta and put them together with another squiggle from gamma. So he builds this response string, puts it on the sheet, slides it back out the slot and takes a nap until the next iteration. Repeat until the remains of the horse are well and thoroughly beaten.”
“So he’s carrying on a conversation,” Chelsea said. “In Chinese, I assume, or they would have called it the Spanish Inquisition.”
“Exactly. Point being you can use basic pattern-matching algorithms to participate in a conversation without having any idea what you’re saying. Depending on how good your rules are, you can pass a Turing test. You can be a wit and raconteur in a language you don’t even speak.”
“But—the argument’s not really a fallacy then, is it? It’s spot-on: you really don’t understand [Chinese].”
“The system understands. The whole Room, with all its parts. The guy who does the scribbling is just one component. You wouldn’t expect a single neuron in your head to understand English, would you?”
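The rule-following described in the dialogue can be sketched as a toy lookup procedure. Everything below (the symbol names, the "files," the rules) is invented purely for illustration; the point is only that the operator produces replies without any model of meaning:

```python
# A toy "Chinese Room" operator: match the incoming squiggle against a
# rule book, pull squiggles from the indicated file, assemble a reply.
# The operator never knows what any symbol means.

RULE_BOOK = {
    # on squiggle "delta": take the 5th and 6th squiggles from file "theta"
    "delta": ("theta", [4, 5]),
    "alpha": ("gamma", [0, 2]),
}

FILES = {
    "theta": ["a", "b", "c", "d", "e", "f"],
    "gamma": ["x", "y", "z"],
}

def operator(squiggle):
    """Blindly follow the rule book; zero understanding involved."""
    file_name, indices = RULE_BOOK[squiggle]
    return "".join(FILES[file_name][i] for i in indices)

print(operator("delta"))  # → "ef"
```

The "understanding," if any, lives in whoever wrote `RULE_BOOK`, not in `operator` — which is exactly the systems reply above.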
The third thing that annoys me about the Chinese Room argument is the way it gets so much mileage from a possibly misleading choice of imagery, or, one might say, by trying to sidestep the entire issue of computational complexity purely through clever framing. We’re invited to imagine someone pushing around slips of paper with zero understanding or insight, much like the doofus freshmen who write (a + b)^2 = a^2 + b^2 on their math tests. But how many slips of paper are we talking about? How big would the rule book have to be, and how quickly would you have to consult it, to carry out an intelligent Chinese conversation in anything resembling real time? If each page of the rule book corresponded to one neuron of a native speaker’s brain, then probably we’d be talking about a “rule book” at least the size of the Earth, its pages searchable by a swarm of robots traveling at close to the speed of light. When you put it that way, maybe it’s not so hard to imagine that this enormous Chinese-speaking entity we’ve brought into being might have something we’d be prepared to call understanding or insight.
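The scale claim is easy to sanity-check with back-of-envelope arithmetic. Assuming (loosely) one page per synapse rather than per neuron, a commonly quoted synapse count of roughly 10^14, and ordinary 0.1 mm paper:

```python
# Rough scale of the "rule book" under loose, illustrative assumptions:
# ~1e14 synapses in a human brain (estimates range 1e14-1e15),
# one page per synapse, each sheet ~0.1 mm thick.
SYNAPSES = 1e14
PAGE_THICKNESS_M = 1e-4   # 0.1 mm per sheet
EARTH_DIAMETER_M = 1.2742e7

stack_height_m = SYNAPSES * PAGE_THICKNESS_M
print(f"stack height: {stack_height_m:.2e} m")  # stack height: 1.00e+10 m
print(f"vs Earth diameter: {stack_height_m / EARTH_DIAMETER_M:.0f}x")  # 785x
```

A single stack of those pages would be hundreds of Earth-diameters tall, so "a rule book at least the size of the Earth" is, if anything, conservative under these assumptions.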
(Daniel Dennett makes the same point in one of his many books; I think it was Intuition Pumps and Other Tools for Thinking, but I’m not sure.) As Scott Alexander remarks in his review of Democritus: “This is a really clever counterargument to the Chinese Room I’d never heard before. Philosophers are so good at pure qualitative distinctions that it’s easy to slip the difference between ‘guy in a room’ and ‘planet being processed by lightspeed robots’ under the rug.”
Unfortunately, this is a distinction John Searle (who originated the Chinese Room argument in the first place, interestingly, as a sort of “argument by contradiction” that in my opinion failed) doesn’t seem to be able to accept, at least publicly. Why not just accept that understanding is an emergent property? After all, look at Conway’s Game of Life and how you can build Turing machines in it, goddamnit (there’s a reason Wolfram spent ten years on his A New Kind of Science book).
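The Game of Life makes the emergence point vividly: the whole system is a handful of local rules, yet structures arise that do things no single cell "knows" how to do. A minimal sketch, with the classic glider as the emergent structure:

```python
from collections import Counter

# One Game of Life generation. `live` is a set of (x, y) cells.
# Rules: a dead cell with exactly 3 live neighbours is born;
# a live cell with 2 or 3 live neighbours survives; all else dies.
def step(live):
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The glider: nothing in the rules mentions motion, yet this pattern
# reassembles itself one cell down and one cell right every 4 generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after4 = glider
for _ in range(4):
    after4 = step(after4)

print(after4 == {(x + 1, y + 1) for (x, y) in glider})  # → True
```

"The glider moves" is a perfectly good description at one level, and a reductionist story in terms of cell updates exists at the level below — which is exactly the standard I'd want "emergent" explanations held to.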
Tangential: “emergent phenomenon” is bandied about too much in science nowadays as an “explanation” of things we don’t really understand yet but want to pretend we do; unless you can (in principle, at least) provide a reductionist explanation of the same thing without using the word “emergent,” you might as well say “magic.”