Yesterday I quoted Watts on his spectacularly counterintuitive (and comprehensively publication-backed) argument in his novel Blindsight (available online) that self-awareness/sentience, rather than being the “next step in the progression of increased intelligence”, actually hinders it. It’s so awesome you should just read the book and decide for yourself.
Robert A. Freitas Jr. is one of the early pioneers of nanotechnology and among the most forward-thinking visionaries alive. He goes in the other direction with regard to the sentience/intelligence relationship, which is typical of most people’s positions but bears outlining in full (because baring assumptions taken for granted before the scalpel of analysis tends to be illuminating etc). In Xenopsychology, also one of the most fascinating articles I’ve ever read and the potential subject of a future post itself, he writes about this in the concluding section, along the way devising a sliding scale for comparing different sentient intelligences by what amounts to computational capacity, called the Sentience Quotient or SQ (I love telling people about this idea):
“Perhaps the most interesting aspect of intelligence from the human point of view is that we, possibly alone among all creatures on this planet, have an awareness of self. Consciousness may be an emergent property of intelligence, a fortuitous feature of a terrestrial animal brain architecture originally designed for other jobs. Is it possible that there could exist yet higher-order emergents beyond consciousness?
It is possible to devise a sliding scale of cosmic sentience universally applicable to any intelligent entity in the cosmos, based on a “figure of merit” which I call the Sentience Quotient. The essential characteristic of all intelligent systems is that they process information using a processor or “brain” made of matter-energy. Generally the more information a brain can process in a shorter length of time, the more intelligent it can be. (Information rate is measured in bits/second, where one bit is the amount of information needed to choose correctly between two equally likely answers to a simple yes/no question.) Also, the lower the brain’s mass the less it will be influenced by fundamental limits such as speed of light restrictions on internal propagation, heat dissipation, and the Square-Cube Law….
The lower end of our cosmic scale is easy to pin down. The very dumbest brain we can imagine would have one neuron with the mass of the universe (10^52 kg) and require a time equal to the age of the universe (10^18 seconds) to process just one bit, giving a minimum SQ of -70.
What is the smartest possible brain? Dr. H. Bremermann at the University of California at Berkeley claims there is a fundamental limit to intelligence imposed by the laws of quantum mechanics. The argument is simple but subtle. All information, to be acted upon, must be represented physically and be carried by matter-energy “markers.” According to Heisenberg’s Uncertainty Principle in quantum mechanics, the lower limit for the accuracy with which energy can be measured–the minimum measurable energy level for a marker carrying one bit–is given by Planck’s constant divided by the duration of the measurement. If one energy level is used to represent one bit, then the maximum bit rate of a brain is equal to the total energy available for representing information, divided by the minimum measurable energy per bit, divided by the minimum time required for readout…. Hence the smartest possible brain has an SQ of +50.
Where do people fit in? A human neuron has an average mass of about 10^-10 kg and one neuron can process 1000-3000 bits/sec, earning us an SQ rating of +13. What is most interesting here is not the obvious fact that there’s a great deal of room for improvement (there is!), but rather that all “neuronal sentience” SQs, from insects to mammals, cluster within several points of the human value. From the cosmic point of view, rotifers, honeybees, and humans all have brainpower with roughly equivalent efficiencies. Note that we are still way ahead of the computers, with an Apple II at SQ +5 and even the mighty Cray I only about +9.
Another kind of sentience, which we may call “hormonal sentience,” is exhibited by plants. Time-lapse photography shows the vicious struggles among vines in the tropical rain forests, and vegetative phototaxis (turning toward light) is a well-known phenomenon. All these behaviors are mediated, it is believed, by biochemical plant hormones transmitted through the vascular system. As in the animal kingdom, most of the geniuses are hunters–the carnivorous plants. The Venus flytrap, during a 1- to 20-second sensitivity interval, counts two stimuli before snapping shut on its insect prey, a processing peak of 1 bit/sec. Mass is 10-100 grams, so flytrap SQ is about +1. Plants generally take hours to respond to stimuli, though, so vegetative SQs tend to cluster around -2.
How about intelligences greater than human? Astronomer Robert Jastrow and others have speculated that silicon-based computer brains may represent the next and ultimate stage in our evolution. This is valid, but only in a very limited sense. Superconducting Josephson junction electronic gates weigh 10^-12 kg and can process 10^11 bits/sec, so “electronic sentiences” made of these components could have an SQ of +23 – ten orders beyond man. But even such fantastically advanced systems fall short of the maximum of +50. Somewhere in the universe may lurk beings almost incomprehensible to us, who think by manipulating atomic energy levels and are mentally as far beyond our best future computers as those computers will surpass the Venus flytrap.
Just as consciousness is an emergent of neuronal sentience, perhaps some broader mode of thinking–call it communalness–is an emergent of electronic sentience. If this is true, it might help to explain why (noncommunal) human beings have such great difficulty comprehending the intricate workings of the societies, governments, and economies they create, and require the continual and increasing assistance of computers to juggle the thousands of variables needed for successful management and planning. Perhaps future computers with communalness may develop the same intimate awareness of complex organizations as people have consciousness of their own bodies. And how many additional levels of emergent higher awareness might a creature with SQ +50 display?”
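The whole figure of merit compresses to a one-liner: SQ = log10(information rate in bits/sec ÷ brain mass in kg), with the +50 ceiling coming from Bremermann's quantum limit of roughly 10^50 bits/sec per kilogram. A minimal Python sketch (my own restatement; the rates and masses are just the order-of-magnitude figures from the quoted passage) reproduces every SQ value above:

```python
import math

def sq(bits_per_sec: float, mass_kg: float) -> float:
    """Freitas's Sentience Quotient: log10 of processing rate per unit brain mass."""
    return math.log10(bits_per_sec / mass_kg)

# Order-of-magnitude figures from the quoted passage:
systems = {
    # dumbest brain: 1 bit per age of universe (10^18 s), mass of universe (10^52 kg)
    "universe-brain": (1e-18, 1e52),   # SQ -70
    # Venus flytrap: ~1 bit/sec at ~100 g
    "venus flytrap":  (1e0,   1e-1),   # SQ about +1
    # human: one neuron of ~10^-10 kg processing ~10^3 bits/sec
    "human neuron":   (1e3,   1e-10),  # SQ +13
    # Josephson junction gate: 10^11 bits/sec at 10^-12 kg
    "josephson gate": (1e11,  1e-12),  # SQ +23
    # Bremermann's quantum limit: ~10^50 bits/sec per kg
    "quantum limit":  (1e50,  1e0),    # SQ +50
}

for name, (rate, mass) in systems.items():
    print(f"{name:15s} SQ = {sq(rate, mass):+.0f}")
```

Note the scale is logarithmic, so the ten-point gap between us and the Josephson-junction brain is a factor of ten billion in processing rate per kilogram.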
You might object here that the very notion of “intelligence” itself is extremely hairy and subject to much interpretation blah blah. That’s absolutely true: in fact, here’s a fascinating paper by AI researchers Marcus Hutter and Shane Legg that catalogues over seventy different informal definitions of intelligence in academic use (forget lay interpretations etc). You might also find Hutter’s research on rational decision-making agents in uncertain worlds (given that “the true environmental prior probability distribution is known” – yes, it’s Bayesian) particularly fascinating, given that he claims his formal model for a rational intelligent agent, unfortunately not physically realizable (because formal, therefore worse than spherical cow), is the most intelligent unbiased agent that can possibly exist. You’ll see Solomonoff induction (think Occam’s razor, but formalized aw yeah) etc etc thrown about.
This is why, to escape anthropomorphic bias among other pitfalls, it might be more productive to treat “intelligence” as an optimizer, capable of “steering the future” toward preferred states, and to compare optimizers by their capacity to steer futures; this at least can be formalized.
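One concrete formalization along these lines (Yudkowsky's "optimization power" measure, not something from the Freitas quote) scores an optimizer by how improbable its achieved outcome is: take minus the log2 of the fraction of outcomes, drawn at random, that are at least as preferred as the one the optimizer actually hit. Here is a sketch under that assumption, with a made-up one-dimensional objective purely for illustration:

```python
import math
import random

def optimization_power(achieved: float, outcome_samples: list) -> float:
    """Bits of optimization: -log2 of the fraction of random outcomes that
    score at least as well as the achieved one (more bits = better steering)."""
    at_least_as_good = sum(1 for x in outcome_samples if x >= achieved)
    # Cap at one sample to avoid log(0) when the optimizer beats every sample.
    frac = max(at_least_as_good, 1) / len(outcome_samples)
    return -math.log2(frac)

random.seed(0)
# Hypothetical preference: maximize f(x) = -(x - 3)^2 over x in [-10, 10].
f = lambda x: -(x - 3.0) ** 2
baseline = [f(random.uniform(-10, 10)) for _ in range(100_000)]

# A single blind guess steers the future hardly at all...
blind = optimization_power(f(random.uniform(-10, 10)), baseline)
# ...while even crude random search over 1000 tries lands near the
# optimum, an outcome few random draws match, hence many more bits.
searched = optimization_power(max(f(random.uniform(-10, 10)) for _ in range(1000)), baseline)
print(blind, searched)
```

The appeal is that nothing here mentions neurons, consciousness, or substrate at all: anything that reliably lands in a tiny high-preference region of outcome space counts as a strong optimizer.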
Damn I wish I understood AI and information theory better.