Peter Watts on Reddit

I like Peter Watts. Heck, I post a longform quote from his novel Blindsight every Monday. Here are a few quotes from the time he decided to open the floor to questions from Redditors.


(On the tenuousness of the notion of the “individual”)

How do the scramblers relate to Rorschach? Is the ‘ship’ the alien organism, and are the scramblers a sort of hyper-intelligent white blood cell? Or are the scramblers the aliens and Rorschach their space ship? Or are they all one big greasy ecosystem extending their slimy tendrils across the stars?

It’s a meaningless question.

Where do you draw the line between organism and environment? The thousands of mitochondria paying rent in the least of your cells are arguably organisms in their own right; they just need the intracellular environment to survive. Your mammalian body is the same; it’s “self-contained”, has its own replicative machinery (although not nearly so self-sufficient as the replicative machinery of your mitochondria), but requires an external biosphere to survive. How do we justify calling ourselves “individuals”, while reducing our mitochondria to “organelles”?

It’s nesting Russian dolls all the way out. Any internally-consistent definition of “individual” recedes toward the horizon until you basically have to call the whole biosphere a single entity (not that I buy into that Gaia Earth-mother bullshit, mind you).

Scramblers are every bit as much “individuals” as you are. As mitochondria are.


(On attitudes towards the scientific view of the world)

Much of your work tends to be rather cynical in a bleak, existential way. After your time spent working in biology and then researching other disciplines for your books and/or self-interest do you really look at things that way or do you just get off on instilling that kind of existential dread in people?

I really look at things “that way”– but look, “that way” is not nearly as nihilistic as you all seem to think it is. I was trained as a biologist. Humans are vertebrates, humans are mammals, and when you take a clade-wide perspective you can’t not notice that we’re all connected by far more than that which separates us. People are so used to exalting themselves as the pinnacle of goddamn creation that they assume that anyone who regards us as just another mammal must be a cynic, must be doing it for shock value or trendy points. But I remember whole buildings where everyone had that perspective, and it wasn’t considered grim or nihilistic. It was cool; we were discovering patterns, we were seeing commonalities in the precopulatory bribes of mating spiders and the spousal assaults of three-spined sticklebacks and the male harems of phalaropes and all those human behaviors that everyone thought were so unique. We were connecting the dots in a global puzzle. It wasn’t depressing. It was exciting.

Sometimes I miss those days.


(On solving real-world problems game-theoretically, in response to a personal encounter with police)

In light of your unfortunate encounter with US authorities, do you have any advice for those of us in Ferguson?

I don’t know if I’d call this “advice” so much as a thought experiment– but it occurs to me that cops (from whatever jurisdiction) can afford to be pretty blase about killing us civilians because they are so rarely held to any serious account. Put simply, they don’t pay much of a price for murder.

Suppose they did?

It’s well-known among the game-theory crowd that the most effective long-term strategy is simple tit-for-tat: give the other player the benefit of the doubt in transactions until the other player screws you, then screw them back. It’s the Old Testament edict of an eye for an eye, but with SCIENCE!

Apply that to Ferguson. Hell, apply it to Toronto or Vancouver. Suppose that every time a cop killed one of us, one of us killed a cop. Not the cop, not the shooter, but some other cop, at random. Suddenly, all their shouting about “due process” (which never seems to apply when they’re gunning kids down in the street, but which always seems to get raised at strident decibelage when the next-of-kin have the temerity to get outraged at said shooting) means nothing. Suddenly, every time one of yours kills one of ours, you could be next.

The ol’ Blue Wall of Silence might crumble pretty fast when every time your partner kills someone, you might have to pay with your life. In a world of optimal tit-for-tat, unthinking loyalty to the badge isn’t the thing that keeps you unaccountable: it’s the thing that could get you killed. Why, the police might even start policing themselves faced with such a prospect.

Of course it’s not fair. You’re denying due process. You’re killing an innocent human being who, in all likelihood, had nothing to do with the murder you’re reacting to. (You have to: the actual shooter will be too well-protected.) It’s not justice– but then, it’s not meant to be. And it’s not like we have any kind of justice now.

Sure, it’s a revenge fantasy. But it’s more than that. It’s operant conditioning.

And if we scrupulously abided by the algorithm, I’d say the odds are good it would save lives in the long run.
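(For anyone who hasn’t met tit-for-tat before, here is a minimal sketch of the strategy in the iterated prisoner’s dilemma, the setting where Axelrod made it famous. The payoff numbers are the standard textbook values, not anything from Watts’s post, and the strategy names are my own labels.)

```python
# Tit-for-tat in the iterated prisoner's dilemma: a minimal sketch.
# Payoff values are the standard textbook matrix, not from Watts's post.

PAYOFFS = {  # (my_move, their_move) -> my score
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperated, they defected (sucker's payoff)
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate on the first move; thereafter copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A pure defector, for contrast."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated match; return each player's total score."""
    hist_a, hist_b = [], []  # each list records the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# Tit-for-tat gets burned exactly once by a pure defector, then
# answers defection with defection from round two onward.
print(play(tit_for_tat, always_defect))  # (9, 14)
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
```

The point Watts is leaning on is visible in the numbers: tit-for-tat pays only a one-round cost to establish that defection will be answered in kind, and two tit-for-tat players settle into stable cooperation.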


(On free will)

Do you believe humans have free will?

Not in the classic sense, no. Our behavior may be unpredictably complex (or not), but that’s not the same thing. The physical universe seems to be a mix of determinism and randomness. A lot of free-will advocates like to point to quantum uncertainty to defend their position, but as far as I can tell having your behavior determined by a dice roll doesn’t give you any more “freedom” than having it determined by a flowchart. The fact that something is unpredictable does not make it autonomous. Given what we know about the physical makeup of the universe, the whole idea of “free will” as commonly understood is fundamentally supernatural and logically incoherent.

I could always be wrong, of course. But I’d be in a lot of extremely smart company.

How would you define it?

Classic free will? The idea that our minds can take actions that aren’t in response to any previous event; that there is such a thing as a “causeless effect”.

What do you think is the most compelling argument FOR free will?

“Compelling” in the sense that it convinces everyone that it exists? Just the gut-level intuition, the feeling that we are in control. But that’s not the same as logically compelling.

What evidence would you need to see to convince you that free will DOES exist?

A neuron firing in the absence of any stimulus.


(Killer whales. Also, the last sentence is a nice example of why proclaiming things (literally) “are profound” is just the mind projection fallacy.)

What is your favourite marine mammal and what things do you think humans could learn from these animals?

Killer whale, I’d have to say. Those guys are not only smart– they utilize coordinated diving to generate directed waves that tip seals off of ice floes– but they’re culturally diverse, to boot.

I don’t know what we could learn from them, beyond the awesome insights you can glean from studying any species. Our morphologies are so different that any insights considered profound or essential to one species would probably be utterly irrelevant to the other.


(On ideas)

How do you discover the ideas that inspire your books primarily? Obviously it is going to be a mix of sources, but is it more like random wikipedia binges, personal recommendations from friends, strangers on twitter, following active scientific journals, or something else?

All of that. None of that. Sometimes, just looking into a tidal pool can give you ideas. (Back when I was 11 years old, I looked into a tidal pool and thought Whoa, what if all that plankton was, like, connected somehow like neurons, and what if that meant the whole ocean was, like, a single thinking being? I even wrote a few pages, but they never went anywhere.)

(Just as well, too. A couple of years after that I discovered “Solaris”.)


(On unconscious people doing…stuff)

Do you think there could be people that seem normal, yet are functioning without consciousness?

Depending on how you define “consciousness”, I know there are. One of them, who lives right here in Toronto, drove across town and killed someone while he was asleep. Someone else fucked a series of complete strangers while in the same state.

It’s pretty rare, though, and I don’t know if there are any humans who are functional p-zombies 24-7. Consciousness seems to come standard on our species.

Which is not, of course, to say it has to come standard on someone else’s…


(On how “hard SF” is just a function of the reader)

Would you consider writing some non-hard SciFi novels, and if so, why, & which bits of physics would be most interesting to you to break?

Don’t know if I buy the whole “hard-sf” thing in principle, actually. It seems at least as much a function of the reader’s background as of the rigor of the author’s science. Case in point, Larry Niven: held up by no less than Arthur C. Clarke as a worthy inheritor of the hard-sf mantle, but his stories deal with psionics and FTL and– most egregiously– genes that somehow code for luck, which is pure fantasy. I loved Niven’s stuff back in high school (and I still really admire him for his aliens), but the more I learned about actual science, the less “hard” his SF became.

You may think my own SF is pretty hard (a recent grumpy review of Echopraxia over at The Register listed it as “diamond cutter” on the hardness scale), but I can guarantee you that there are people in labs and universities all over the world who’d look at the hand-waving in my books and see SF that was about as hard as hobbits.

I guess what I’m saying is, I already write non-hard SF. It just depends on who’s reading it.


(On consciousness, again)

Let me ask you about this quote from Blindsight:

Your circuitry hums with strategies for survival and persistence, flexible, intelligent, even technological—but no other circuitry monitors it. You can think of anything, yet are conscious of nothing.

As far as I can tell, monitoring would include any form of error-correction that is not triggered by feedback from the environment. So a response from outside the system saying “no, you fucked that up” prompting changes in strategy would not count as “monitoring”, but anything based on modeling the process itself, independent of outside cues, would be “monitoring”. How had you envisioned the word?

Also, how plausible do you think it is really that a species could evolve to the level of Scramblers without monitoring/consciousness?

Okay, what you’re doing here is forcing me to confront an issue I kept as far away from as possible in Blindsight, and I am not happy about it. You’ve started talking not about what consciousness is good for, but about what it actually is. How computation running through meat in just the right way can wake up the way it does.

And I don’t have a clue. I don’t think anyone does. You could study every ion hopping across every synaptic junction in every brain from here until heat death, and there’d be nothing in any of that data to lead you to expect the emergence of coherent self-awareness. It’s ions bumping around in fluid. It’s computation. How can it, how can we, possibly be self-aware? It’s enough to make you find God.

Self-modelling does seem to have something to do with the conscious state, so I threw that in– but I don’t know how it works. You’re quite right, internal monitoring should count. And if that’s all it took, then a thermostat would be conscious. We don’t know definitively that thermostats aren’t conscious, of course, but I remain skeptical.

There’s gotta be something else, but I don’t know what it is, so there’s a big honking conceptual hole in passages like the one you just quoted. I can try to paper them over, but I cannot fill them. If I could, I wouldn’t be wasting my time languishing in the mid-list ghetto– I’d be packing for Stockholm to accept my Nobel.

Either that, or figuring out how to use my insights into the Nature of Consciousness to rob banks.

(Quick one: the reference to thermostats and self-awareness reminded me tangentially of a paper published in the physics arXiv on whether your iPhone has free will. Here’s the paper: it argues that thermostats don’t have free will, but iPhones do. Oversimplified caricature of course – go read the paper!

Might as well throw in Conway’s free will theorem while we’re at it.)

