Thursday, June 17, 2010

Blindsight by Peter Watts

Blindsight is a 2006 science fiction novel. It was mentioned in an Overcoming Bias blog post, and so I read it. The book is available in electronic form for free on the web; I read it for free from my local public library.

The quality of the ideas about consciousness and mind is outstanding. 5 stars, 2 thumbs up. You don't really get to the main punch line until page 325, but what he has to say about consciousness and awareness and brains before then is merely excellent.

A quote: "... People aren't rational. You aren't rational. We're not thinking machines, we're --- we're feeling machines that happen to think." This is where I have been going in my own thinking about brains and consciousness. My model is Lucy, my Golden Doodle dog. In my opinion, you can't read about brains, biology, and evolution, especially evolutionary psychology, and not see it all laid out before you when you have your own dog. A highly pleasant way to reach your own conclusions about evolutionary psychology: get a dog. So in particular, and only sorta, you've got a lizard brain wrapped in a mammalian brain wrapped in a neocortex. Of course all mammals have a neocortex, but a dog's is a lot smaller than a human's. Whether I've got the anatomy right or not, most of what I think of as feeling is pretty similar between dogs and people.

And then you wrap it in a neocortex. With a sorta mini-dog neocortex you get a little bit of help figuring out what to be mad at, what to be hungry for, what to be horny for, what to be scared of. But mostly, if you are a dog, you are happy, sad, mad, glad, scared, excited, etc., and these things dictate your actions. They also dictate your interactions. You can be a social animal without a lot of rationality. Just be loyal to what you love and angry at what seems to be frightening you, and you have a pretty functional system.

Wrap it in a big old human-sized neocortex, and throw in nasty monkey emotions (have you seen chimpanzees interact?), and you get a deep need for psychiatrists, psychologists, and many other paid professionals, not to mention a significant and growing pharmacopoeia. Take the straightforward emotional reactions to things and graft on a GIGANTIC rational model of the world, including all the people around you, courtesy of your friendly local neocortex, and you have the basis of some great tragedies and comedies.

That which does not kill us, makes us stranger. -Trevor Goodchild
This is a quotation leading off a section of the book. I googled Trevor Goodchild; he is a character in Aeon Flux, a science fiction TV show that used to be on MTV. But I like the quote; it reminds me of Nietzsche.

I will talk more about consciousness after the "read more" link. SPOILERS about the book will be there. If you are thinking of reading the book, I recommend reading it before reading the rest of this post.

WARNING: SPOILERS!
The punchline is that the alien intelligence is not conscious, or perhaps you could say not conscious of itself. Further, characters in the book hypothesize that consciousness is an inefficiency, a defect of intelligence: that it is or was a kludge, an inefficient way to get some things done, and that it should have, would have, will eventually atrophy and wither away as evolution continues.

Now part of what is great about this is the intriguing idea that intelligence can exist without consciousness. If you study Artificial Intelligence, you will be accustomed to trying to figure out how to get consciousness to show up in an intelligent machine. Not easy, with Searle's Chinese Room really driving home to me that a simulation of consciousness is not consciousness. If I program a computer to answer "yes" when you ask it "are you conscious," I have not created consciousness, any more than programming it to report "you have killed 1 million people with your nuclear weapons" during a video game makes you a mass-murderer. A simulation of a nuclear explosion is NOT a nuclear explosion. Searle's point, or claim, is that a simulation of consciousness is NOT consciousness.
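To make that concrete, here is a toy sketch in Python (my own example, not from the book or from Searle) of a program that "claims" consciousness. Answering "yes" is a string lookup, not an experience:

    # A toy "Chinese Room": pattern-matched replies with zero understanding.
    CANNED_REPLIES = {
        "are you conscious": "yes",
        "did we win": "you have killed 1 million people with your nuclear weapons",
    }

    def respond(question: str) -> str:
        # Normalize and look up; no model of self, no awareness, just matching.
        key = question.strip().lower().rstrip("?")
        return CANNED_REPLIES.get(key, "i do not understand")

    print(respond("Are you conscious?"))  # -> "yes", and yet nobody is home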

Blindsight turns this on its head. Stop thinking about how to make a thinking machine conscious. Rather, consider what you lose by skipping consciousness entirely, if anything. In Blindsight's universe, you lose nothing. Rather, you gain stuff. You gain efficiency and speed of execution.

Peter Watts has the right question: what do you gain by having consciousness? Does he have the right answer in our universe? I suspect not. But I don't have well-developed ideas of what consciousness does buy us.

One thought is that consciousness is needed to have the absolutely, overwhelmingly intense social integration that we humans operate with. Just as an approximation of this social integration, how many bits do we exchange with other humans on a daily basis? Just my written communication, plus the trivially semantic part of my verbal communication, must correspond to a few hundred book pages a day. Add a few hours a day of stories on TV and hours a day of low- to mid-level interaction with family, co-workers, and friends, and you have a lot of interaction. How would consciousness help this? In Blindsight, consciousness is described as what happens when the brain goes from modeling the world to modeling the modeling of the world, including modeling the modelers and other levels of recursion. Perhaps in order to link with other humans in a meaningful way we need to have a highly complex model of the human, and that model is enabled by consciousness. I don't know; I am just making this up as I go along.
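To gesture at what "modeling the modelers" might mean, here is a toy sketch (my own invention in Python; the class and names are made up for illustration) of an agent whose world model contains models of other agents, who model back in turn:

    # Toy recursion: each agent's model can contain models of other modelers.
    class Agent:
        def __init__(self, name, depth):
            self.name = name
            # Each level of nesting is another level of "modeling the modelers."
            self.model_of_others = {} if depth == 0 else {
                other: Agent(other, depth - 1)
                for other in ("me", "you") if other != name
            }

        def describe(self, indent=0):
            print(" " * indent + f"{self.name} models: {list(self.model_of_others)}")
            for sub in self.model_of_others.values():
                sub.describe(indent + 2)

    Agent("me", depth=3).describe()
    # me models: ['you']
    #   you models: ['me']
    #     me models: ['you']
    #       you models: []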

Another thought about consciousness comes from one of Penrose's objections to strong AI. From Wikipedia: "Penrose presents the argument that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine-type of digital computer." Among other things, if I recall correctly, Penrose talks about proving theorems about the limits of Turing machines. I think he claims that you could not write a Turing machine to prove those theorems about Turing machines!
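For flavor, the best-known theorem of this type is the halting problem. Here is the classic diagonalization argument sketched in Python (halts() is the hypothetical, impossible function; this is the standard textbook argument, not anything specific to Penrose's version):

    # Assume a perfect halts() oracle exists, then build a program that
    # contradicts it. Therefore no such oracle can exist.
    def halts(program, argument) -> bool:
        # Hypothetical: returns True iff program(argument) eventually halts.
        # No Turing machine (or conventional computer) can implement this.
        raise NotImplementedError

    def contrary(program):
        # Do the opposite of whatever halts() predicts about program(program).
        if halts(program, program):
            while True:       # predicted to halt, so loop forever
                pass
        return "halted"       # predicted to loop, so halt immediately

    # halts(contrary, contrary) is contradictory whichever way it answers.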


So if we can prove these theorems, and we can write a simulation of human consciousness, that simulation will NOT be able to prove these theorems. Now I think a lot of engineers and physicists working on simulations will think "yeah, I could see that," whereas people who don't work on simulations may think "if it is a good simulation of consciousness, then it will do the things consciousness does." It is important to understand that ALL simulations are by definition imperfect. The only completely correct simulation of a nuclear explosion is a nuclear explosion. Computational simulations consist of listing a set of equations and rules that you believe adequately describe the physics of that which you are simulating. Notice the words "believe" and "adequately." An "adequate" result of a consciousness simulation might be that the simulation can compose email that fools your boss into thinking the email is from you. Writing that code makes no guarantee about what the simulation will do when you give it Penrose's descriptions of these Turing machine proofs to study.


Another interesting defect, or feature, of a simulation is how we deal with randomness or noise. Consider the firing of a neuron. The exact instant it fires may depend on thermal motions of ions. If I were writing a simulation, I would model this thermal motion using a random number generator. Further, there may be quantum choices being made in the brain, collapses of the wavefunction as Blindsight describes them. In one version of the theory of physics we say the way the wavefunction collapses is random. But all that means is we don't KNOW what dictates how it collapses; we just know the statistics of those choices in some limited and controlled situations.
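A minimal sketch of what I mean (my own toy model in Python, not real neuroscience): the spike time depends on noise, and the simulation stands in for that noise with a pseudo-random number generator:

    import random

    # Toy integrate-and-fire-ish neuron: thermal noise is *simulated* with a
    # pseudo-random number generator. If the real noise hides unknown physics,
    # this stand-in is exactly where the simulation could go wrong.
    def time_of_first_spike(threshold=1.0, drive=0.05, noise_scale=0.02, dt=1.0):
        potential, t = 0.0, 0.0
        while potential < threshold:
            # Deterministic input plus "thermal" jitter from the RNG.
            potential += drive + random.gauss(0.0, noise_scale)
            t += dt
        return t

    random.seed(42)  # the simulation's "randomness" is not even truly random
    print(time_of_first_spike())  # spike time varies with the RNG, not physics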

So if I were writing simulations of the brain, of consciousness, I would use random number generators to "simulate" how these choices get made, how these wavefunctions collapse. But what if, what if there is some physics in there that I, that WE, just don't know? What if something we have not taken into account yet determines how it collapses? In that case, my simulation will not behave, or may not behave, like the real system. Where my mind (if I pushed it hard enough) might comprehend Penrose's theorems about Turing machines, my simulation will just respond to them randomly, because some part of my understanding of those theorems involved the non-random collapse of some wavefunctions, and I simulated them as random.



Maybe my simulation of consciousness will be a "zombie."  Lights on, motor's running, even the cruise control and autopilot are set, but nobody's driving, nobody's home.  


If you think about it, it's probably a lot easier to build a dead brain than a live brain.


I think I have wandered far afield in this post.  Oh well.
