Friday, September 23, 2005

(book) On Intelligence

I just finished Jeff Hawkins' book, On Intelligence.

I first became interested in computers in the late 1970s because I thought it might be a way to make me smarter. At the time this led naturally to an interest in Artificial Intelligence (which was much more fashionable then than now) and I hung out with friends who were interested in neurology. For college I applied to Stanford and MIT, the two Centers of the Universe for that sort of thing, and I soaked it all up. After graduation I considered AI grad schools and I moved to Japan because I thought it was the place where a lot of the new innovation would happen. Eventually, partly under the influence of people like Terry Winograd and others, I gave up on AI and moved into the mainstream of PC software.

Jeff Hawkins is just like me! Well, almost. He’s a bit older, so he had more exposure to the Real World than I did, and he turned his natural entrepreneurial talents toward start-ups, eventually founding Palm and then Handspring, maker of the Treo. His business success enabled him to devote all his time to his real passions, including an in-depth look at how to make computers intelligent. That’s what I admire most about him: in contrast to other successful people, he is devoting his money and time to an intellectual passion rather than, say, collecting big boats or houses. In short, he’s doing what I would be doing if more good fortune had come my way.

I mention all of this to say that I am naturally sympathetic to the cause, and I think I understand Hawkins’ Mission. I wish more people with the means to explore these topics would do what he is doing.

He is heavily influenced by the observations of Vernon Mountcastle, who notes that the uniformity of the neocortex is best explained by a uniformity in the way all our senses operate. Sight, sound, touch—by the time they hit your neocortex, they might as well be the same thing.

By the way, you have more than the five senses you were taught about in grade school: Vision is really the separate senses of motion, color, and luminance; Touch is pressure, temperature, pain, and vibration; you have a sense of balance; and there is an entire sense called the proprioceptive system that tells you about joint angles and body position. But all of these senses enter and are processed by the brain in the same way.

Hawkins then argues that sensory inputs are tied to another rich channel of predictive outputs, in what he calls the memory-prediction framework of intelligence. In short, your brain is constantly outputting predictions for how the world works, which it then checks against your sensory inputs: if the two match, the inputs are ignored. If they don’t match, then more high-level processing is applied until you reach a state where your senses match your expectations.
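That loop can be sketched in a few lines of code. This is purely my own illustration of the idea, not anything from the book: each layer remembers what it last saw in a given context and predicts it will recur, and a surprise escalates up the hierarchy while each layer updates its memory.

```python
class Layer:
    """A toy cortical layer: remembers the last input seen in each
    context and predicts that it will recur."""

    def __init__(self):
        self.memory = {}  # context -> predicted next input

    def predict(self, context):
        return self.memory.get(context)

    def learn(self, context, actual):
        self.memory[context] = actual


def perceive(layers, context, actual):
    """Pass an input up the hierarchy until some layer's prediction
    matches it. Returns the index of the layer that handled the input
    (0 = lowest). A predicted input is absorbed low down and effectively
    ignored; a surprise climbs the hierarchy, teaching each layer along
    the way, until it reaches the top and demands full attention."""
    for i, layer in enumerate(layers):
        if layer.predict(context) == actual:
            return i                      # matched expectation: stop here
        layer.learn(context, actual)      # mismatch: update memory, escalate
    return len(layers) - 1                # reached the top of the hierarchy


layers = [Layer(), Layer(), Layer()]
print(perceive(layers, "door", "creak"))  # novel input: escalates to layer 2
print(perceive(layers, "door", "creak"))  # now predicted at layer 0
```

Real cortex is of course nothing this simple, but the shape of the loop — predict, compare, escalate only on mismatch — is the point.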

He suggests that if you built a similar system in silicon, you’d have a brain—intelligence. It would be infinitely scalable and could be applied to super-human problems like weather forecasting, which depend on massive pattern recognition of the type that humans can do instantly.

I’m not up to date on modern neuroscience or studies of intelligence, so I don’t know how Hawkins’ ideas have been received by “mainstream” academics, but I have no quibbles with his basic idea, which seems plausible to me. The one big hole I see is that he doesn’t allow for all the hard-wired aspects of intelligence. To be “intelligent” is not just to be able to do massive pattern recognition or the other cool things that Hawkins’ machine would do. Human intelligence is much more, and it’s far more hard-wired than I think Hawkins’ framework allows.

Why do all cultures have a taboo against incest? Why do all humans know what it means to sing and dance? Why do most girl babies throw balls underhand while boys throw overhand? Many (and I argue, most) activities that we often think of as arbitrary consequences of our intelligence are really hard-wired, built into our brains as much as the more abstract aspects of intelligence that Hawkins’ framework explains.

I bet someday, when all of this is eventually explained, we’ll realize that you can’t make something recognizably intelligent unless you make it human. To be intelligent is to have human experiences: unless that computer has had the experience of getting dumped by its girlfriend the night before its algebra test, nobody will recognize it as intelligent. Very clever, yes; able to solve extremely complicated general-purpose problems like Chess or Go, sure. But you still won’t think of it as intelligent, and humans will still be able to outsmart it in areas that we think of as important.
