On Valentine's Day 2011, and for two nights following, we were all invited to witness two old gladiators clash. One gladiator: human intelligence. The other, machine intelligence. The coliseum was the TV quiz show called Jeopardy. For human intelligence, we had the two best human players ever of this game. For machine intelligence we had a computer program called Watson, a program that answers questions, Jeopardy-style. Picture two guys at podiums, and between them a big blue (of course) box, Watson's avatar, with a face on it that might remind you of a Keith Haring "Radiant Child."
Watson was designed by a team at IBM, and named for the company's founding father, Thomas J. Watson, Sr. If you know anything about the history of IBM and its long-standing . . . allergy . . . to the field of machine intelligence, you can easily imagine old T. J. Watson rising from his grave, fist clenched, at the very idea.
High drama. Playing Jeopardy, as you probably know, not only demands fast access to a lot of trivial knowledge, but it also demands that the contestants figure out riddles, puns, jokes; interpret ambiguous statements. Who would triumph, man or machine? The first evening, Watson pulled ahead, but just barely. The humans could've come back the second evening. But no, the second evening, Watson-the-machine pulverized its human competition. The third evening was mop-up—Watson ended with three points for every one point of its nearest human competitor.
To me, Watson's great glory is its grasp of natural language, English in this case. When I first started writing about machine intelligence in the 1970s, linguists would say: "The pen is in the box. The box is in the pen. No computer will ever be able to understand the difference between those two statements." Wrong: computers have been able to tell the difference for quite a while. Watson goes far beyond pen-in-the-box understanding. Watson doesn't wait to be force-fed information; instead, Watson learns from examples. Moreover, Watson knows what it doesn't know. In humans, we consider all these good signs of intelligence.
The Jeopardy match followed the great chess duel of 1997, when another IBM computer program, called Deep Blue, took on world chess champion Garry Kasparov, and slaughtered him in the last, decisive game.
Here's a rough definition of machine intelligence: computers doing something that, if humans did it, we'd say, ah, that's intelligent behavior. It all sounds very contemporary. But the idea of intelligence outside the human cranium goes far back in our history, to the early Greek, Egyptian, and Asian civilizations. For example, robots appear explicitly in Homer's Iliad. These classical robots are designed by the god Hephaestus, or, as you might know him, Vulcan:
These are golden, and in appearance like living young women.
There is intelligence in their hearts, and there is speech in them and strength,
and from the immortal gods, they have learned how to do things.
Such beings, such creations, pervade human history, and they're wonderful acts of imagination. All the medieval wise men are said to have had them—Albertus Magnus, Pope Sylvester II, Ramon Llull, Judah ben Loew, with his Golem. By the 19th century, thinking machines were everywhere, though they were literary: Olympia, in The Tales of Hoffmann, Goethe's Faust, and of course the monster created by Dr. Frankenstein. What all this means, I think, is that the idea of intelligence outside the human cranium is deeply fascinating to us.
But, side by side with this fascination is a deep revulsion against such human overreaching. The whole project skirts taboo, sin, hubris. Maybe this derives from the Second Commandment: "Thou shalt not make unto thee any graven image or any likeness of any thing that is in heaven above or that is in the earth beneath, or that is in the water under the earth…" This is to trespass on the privilege of the divine. This sense of sin, of hubris, exists to this day, at least in Western culture. It hardly exists at all in Asian cultures, but that's the subject of another talk.
Real, not imaginary, machine intelligence begins in the middle of the 20th century with the computer. At first machine intelligence is very cerebral, abstract: computers prove theorems in logic, they play amateurish checkers and chess. This is because we then thought intelligence was all about the power of reasoning. We thought these tasks, especially the game of chess, were the peak of human intellectual achievement.
In 1979, I published a history of all this, called Machines Who Think. As a consequence, I was often invited to college campuses to talk. Now please recall that in those days there was no Internet, no World Wide Web, not even personal computers. The computer in general was pretty mysterious. How was I going to convince college students about the great possibilities of machine intelligence?—and at that point, they were mainly possibilities.
So . . . I invented something I called "the geriatric robot," at that time, entirely a figment of my imagination. The geriatric robot would be a caretaker and a companion to the elderly. The geriatric robot feeds you . . . cleans you . . . wheels you out in the sun . . . but more important, it listens to you. Tell me again, it says, about that stunning coup you pulled off in '84. Tell me again how brilliant—or dreadful—your children and grandchildren are. It never gets tired of hearing your stories, just as you never get tired of telling them. It knows your favorites, and those are its favorites too. The geriatric robot doesn't hang around hoping to inherit your money. It won't slip you a little something to speed the inevitable. It isn't there because it can't find work elsewhere. It's there because it's yours. It was made for you. Now, shouldn't human caretakers do this? Humans grow bored, make mistakes, get greedy, yearn for variety. That's human intelligence, and it's part of our charm. And realistically, most of us can't afford to pay human caretakers what they deserve.
At that time, I was also writing a new book with my friend Ed Feigenbaum, one of the great experts in machine intelligence. I was afraid the book was getting too techie and boring, so for fun, I included an account of the geriatric robot. Well! Despite very broad rhetorical signals of just kidding here, some people chose to take it seriously. In the opening paragraph of a five-page review in the New York Review of Books, the reviewer compared me to Stalin, Hitler, and Pinochet—all because of the geriatric robot! Even more amazing to me, the Japanese, with a rapidly aging population, also decided to take the geriatric robot seriously. They invited me to scholarly meetings on the topic.
As the global population grows older, versions of the geriatric robot are in fact being developed in many places around the world, including here in the United States. One robot, called Nursebot Pearl, has been under development for nearly ten years, and is being field-tested in nursing homes and hospitals in Pittsburgh and elsewhere.
The book that introduced the geriatric robot to the world was published at a time when the field of machine intelligence had come to a great scientific fork. I said that in the beginning, the central property of intelligence seemed to be reasoning. If you could reason, then here alone was the great kernel of intelligence. But this was wrong, and led to all sorts of dead ends. So at this fork, one group of scientists went off along a road that pretty much ignored reasoning, and instead, relied on sensors and fast reaction to guide its robots. Their slogan was "fast, cheap, and out of control." This brought us Mars robots, and the Roomba that vacuums your living room—and just a few weeks ago, robots that roam the radioactive debris of the Fukushima nuclear power plant in Japan.
But other scientists took a different approach. Yes, for intelligence, you do need reasoning powers, but what gives reasoning its authority is knowledge. Knowledge is power. The scientists leading this research were Ed Feigenbaum, whom I've mentioned, and the late president of Rockefeller University, Nobel laureate Joshua Lederberg. What these two believed, and eventually showed, is that to be as good as a human expert, the machine must have a great deal of specialized knowledge. These programs, called expert systems, mimicked, and often outperformed, human experts at narrow, but difficult, tasks.
The first such expert system built by my two friends was intended to interpret the data from mass spectrometry. Never mind what that is, but it's a very difficult, very important problem that requires expert knowledge to solve (but you can see why I invented the geriatric robot to amuse my college student audiences). From this pioneering, visionary work has finally come Watson, playing—and winning decisively at—Jeopardy. Of course Watson is to these early systems as we are to the Neanderthals.
Naturally, we ask ourselves larger questions about Watson, and even Nursebot Pearl. Is this really thinking? Is it really intelligence? If you insist that the human way is the only way to process symbols, then no. These machines do not process symbols the way we do, using electro-chemical means. But to me, this is like denying that airplanes fly. Flying, you say, is what birds do. Since airplanes don't flap their wings, they must be doing something else up there in the sky, but let's not call it flying. Okay. But, if instead you measure what these machines do by end results, the answer can only be yes, it's thinking, it's intelligence, in ever wider and more important ways.
After Watson's triumph, the usual suspects were out, saying how it isn't really intelligence we see here. Not like our intelligence, human intelligence, the real McCoy. Do you hear a certain nervousness here? Mirror, mirror, on the wall, who's the smartest of them all?
Myself, I long ago stopped worrying about whether it's really intelligence we're seeing in these machines, or only very clever and very useful programming. To me, it simply doesn't matter. Christopher Columbus went to his grave believing he'd reached the Indies. He was wrong. Instead, he'd reached a new world that was going to drastically change the old world he came from. That seems to me the best way to regard machine intelligence: a new world.
Machine intelligence has been part of our everyday lives for a long time. For more than twenty-five years, credit card companies have used machine intelligence to decide if it's really you putting those charges on your credit card. When the GPS in your car tells you to turn left at the next corner, that's machine intelligence (very early stuff; we've come a long way, baby). When you use Google to find something on the World Wide Web, that engine is using machine intelligence in its search. Your smartphone, video games, war games, soldier training, homeland security, nailing bin Laden. You couldn't do molecular biology without machine intelligence, nor pharmaceutical research, nor many other things.
At Columbia, a scientist is working with IBM to develop a Watson-like physician's helper, informally known as doc-in-a-box. This is so the human physician can practice medicine at quote, the highest, evidence-based levels . . . something that has been impossible for any human for twenty or thirty years, unquote. We hope that doc-in-a-box can help rescue us from the very high costs of medical care, at the same time it improves quality of care. Some of you recently read about machine intelligence that dramatically cuts the cost and time of discovery in a legal proceeding. All these are the early shoots of what will be an enormous crop of machine intelligence in our future.
We are, all of us, overwhelmed by information, drowning in it, suffocated by it, desperate for help in making good decisions. People trying to run an organization are up against a tidal wave of information, events, rules of thumb, regulations, deadlines, that they simply must pay attention to. A Watson-like helper on their desk—designed not to play Jeopardy but to analyze huge amounts of knowledge specific to that organization's needs and goals—will be a tremendous help.
Where will it all lead? Some people imagine what they call "the singularity," a moment when machines become more intelligent than humans. True, the singularity is more the darling of science fiction writers than of those who work in the field. This doesn't mean it can't, or won't, happen. Will the singularity be harmful to humans or beneficial to us? All we can say is that if the singularity occurs, it will be the most disruptive of technologies, a whole new world.
Meanwhile, the machines are getting smarter, and I believe they're making us smarter as together we co-evolve. As a species, we have collective problems that we simply don't know how to begin to solve. Some of those problems, like global climate change, or nuclear proliferation, are almost beyond our ability to solve. Other problems that we could, in principle, solve, like hunger, demand such effort and expense that, for all practical purposes, they go unsolved. We humans need all the help we can get. If part of that help comes in the form of intelligent machines that we ourselves have created and designed, I say wonderful.
So think about this, all you flesh-and-blood machines out there who think. Everyone in this room is at this moment living through one of the most profound transformations—yes, revolutions, in that abused word—in human history. For the first time ever, we are face-to-face with deep, effective intelligence outside the human cranium. And it resides in artifacts that we have created ourselves.
So hooray for Watson, and hooray for the real-life geriatric robot, Nursebot Pearl. On the subject of Nursebot Pearl, I can only quote the great Betty Comden: Just in time/You found me just in time/Before you came my time/Was running low…