
Machines Who Think: 25th anniversary edition

Pamela McCorduck

Natick, MA: A K Peters, Ltd., 2004


FAQ answered by Pamela McCorduck

View the publisher's site, AK Peters

Order from Amazon.com

Recent commentary about the 25th anniversary edition of Machines Who Think

"Over the course of the last half-century, a number of books have sought to explain AI to a larger audience and many more devoted to writing the formal history of AI. It is a tribute to her powers of observation and her conversational style that none has really proven more successful than Pamela McCorduck's Machines Who Think, now approaching the quarter century mark. Currently, it is the first source cited on the AI Topics web site on the history of AI. Based on extensive interviews with many of the early key players, it managed to forge the template for most subsequent histories, in the sense of providing them both the time line and the larger frame tale."

        –AI Magazine, Winter 2004

"In summary, if you are interested in the story of how the pioneers of AI approached the problem of getting a machine to think like a human, a story told with verve, wit, intelligence and perception, there is no better place to go than this book."

        –Nature, February 19, 2004

"The enormous, if stealthy, influence of AI bears out many of the wonders foretold 25 years ago in Machines Who Think, Pamela McCorduck's groundbreaking survey of the history and prospects of the field…. [T]aken together, the original and the afterword form a rich and fascinating history."

        –Scientific American, May 2004

A 25-year-old book about science has some explaining to do. Machines Who Think was conceived as a history of artificial intelligence, beginning with the first dreams of the classical Greek poets (and the nightmares of the Hebrew prophets), up through its realization as twentieth-century science.

The interviews with AI's pioneer scientists took place when the field was young and generally unknown. They were nearly all in robust middle age, with a few decades of fertile research behind them, and luckily, more to come. Thus their explanations of what they thought they were doing were spontaneous, provisional, and often full of glorious fun. Tapes and transcriptions of these interviews, along with supporting material and working drafts of the manuscript, can be found in the Pamela McCorduck Collection at the Carnegie Mellon University Library Archives.
If you believe (and I do) that artificial intelligence is not only one of the most audacious of scientific undertakings, but also one of the most important, these interviews are a significant record of the original visionaries, whose intellectual verve set the tone for the field for years to come. That verve–that arrogance, some people thought–also set teeth on edge, as I've pointed out.

Practicing scientists, more interested in what will happen than what once did, are apt to forget their field's history. The new is infinitely seductive. But an interesting characteristic of AI research is how often good ideas were proposed, tried, then dropped, as the technology of the moment failed to allow a good idea to flourish. Then technology would catch up, and a whole new set of possibilities would emerge; another generation would rediscover a good idea, and the dance would begin once more. Meanwhile, new ideas have come up alongside the old: far from "the failure" its critics love to claim, the field thrives and already permeates everyday life.

But above all, the history of AI is a splendid tale in its own right–the search for intelligence outside the human cranium. It entailed defining just what "intelligence" might be (disputed territory even yet) and which Other, among several candidates, might exhibit it. The field called up serious ethical and moral questions, and still does. It all happens to be one of the best tales of our times.

From the new foreword:

"Machines Who Think has its own modest history that may be worth telling. In the early summer of 1974, John McCarthy made an emergency landing in his small plane in Alaska, at a place called (roughly translated) the Pass of Much Caribou Dung, so remote a spot he could not radio for help."

From the 30,000-word afterword, which summarizes the field since the original was published:

"In the late 1970s and early 1980s, artificial intelligence moved from the fringes to become a celebrity science. Seen in the downtown clubs, boldface in the gossip columns, stalked by paparazzi, it was swept up in a notorious publicity and commercial frenzy."

The new edition also has two separate time-lines. One traces the evolution of AI in its narrowest sense; the second takes a much broader view of intellectual history, placing AI in the context of all human information gathering, organizing, propagation, and discovery–a central place for AI that has only become apparent with the development of the second-generation World Wide Web, which will depend deeply on AI techniques for finding, shaping, and inventing knowledge.

Herb Simon himself urged me to re-publish. "Pamela," he wrote in an email a few months before he died, "do consider what might be done about bringing Machines Who Think back into print. More machines are thinking every day, and I would expect that every one of them would want to buy a copy. Soccer robots alone should account for a first printing."

 



FAQ answered by Pamela McCorduck:

Q: How long has the human race dreamed about thinking machines? 

A: Since at least the time of classical Greece, when Homer's Iliad tells us about robots that are made by the Greek god Hephaestos, also known, in Roman mythology, as Vulcan. Some of these robots are human-like, and some of them are just machines–for example, golden tripods that serve food and wine at banquets. At about the same time, the Chinese were also telling tales of human-like machines that could think. It's also important to remember that this is the time in human history when the Second Commandment was codified, prohibiting the making of graven images, which in reality forbids humans to take on the creative privileges of divinities. In my book, I describe each attitude: I call one the Hellenic point of view, meaning out of Greece, and generally welcoming the idea of thinking machines. The other I call the Hebraic, which finds the whole idea of thinking machines wicked, even blasphemous. These two attitudes are very much alive today. The history of thinking machines is extremely rich: every century has its version. The 19th century was particularly fertile: Frankenstein and the Tales of E. T. A. Hoffmann were published, and the fake chess machine called "The Turk" was on exhibit.

Q: What's the difference between all those tall tales and what you're writing about?

A: They were exactly that–tall tales. However, by the middle of the 20th century, a small group of farsighted scientists understood that the computer would allow them to actually realize this longstanding dream of a thinking machine.
 
Q: What does it mean that a machine beat Garry Kasparov, the world's chess champion?

A: It's a tremendous achievement for human scientists to design a machine smart enough to beat not only the reigning chess champion, but also a man said to be the best chess champion ever. Kasparov, for his part, claims that these programs are making him a better chess player.

Q: And what about the recent wins by IBM's program Watson at the guessing game, Jeopardy?

A: This was spectacular. Watson had to understand natural language—in this case, English—to the point where it (yes, everyone on the Jeopardy program was referring to it as "he," but I'll continue to say "it") could outguess two of the best human players ever. To play Jeopardy, you must be able to crack riddles, puns, and puzzles, and interpret ambiguous statements. Watson is a tremendous achievement.

Q: Does this mean that machines are smarter than we are?

A: Machines have been "smarter" than us in many ways for a while. Chess and Jeopardy are the best-known achievements, but many artificially intelligent programs have been at work for more than two decades in finance, in many sciences, such as molecular biology and high-energy physics, and in manufacturing and business processes all over the world. We've lately seen a program that has mastered the discovery process in a large, complex legal case, using a small fraction of the time, an even smaller fraction of costs—and it's more accurate. So if you include arithmetic, machines have been "smarter" than us for more than a century. People no longer feel threatened by machines that can add, subtract, and remember faster and better than we can, but machines that can manipulate and even interpret symbols better than we can give us pause.

Q: Those are very narrow domains. Do general-purpose intelligent machines as smart as humans exist?

A: Not yet. But scientists are trying to figure out how to design a machine that exhibits general intelligence, even if that means sacrificing a bit of specialized intelligence.

Q: If the human chess champion has finally been defeated, and the best human Jeopardy players went down, what's the next big goal?

A: It took fifty years from the time scientists first proposed the goal of a machine that could be the world's chess champion until that goal was reached. It took another fourteen years for Watson to emerge as the Jeopardy champion. In the late 1990s, a major new goal was set: within fifty years, AI should field a robot team of soccer players to compete with and defeat the human champion team at the World Cup. In the interim, more modestly accomplished soccer robots are teaching scientists a great deal about physical coordination in the real world, pattern recognition, teamwork, and real-time tactics and strategy under stress. Scientists from all over the world are fielding teams right now–one of the most obvious signs of how international artificial intelligence research has become.

Q: Artificial intelligence–is it real?

A: It's real. For more than two decades, your credit card company has employed various kinds of artificial intelligence programs to tell whether or not the transaction coming in from your card is typical for you, or whether it's outside your usual pattern. Outside the pattern, a warning flag goes up. The transaction might even be rejected. This isn't usually an easy, automatic judgment–many factors are weighed as the program is deciding. In fact, finance might be one of the biggest present-day users of AI. Utility companies employ AI programs to figure out whether small problems have the potential to be big ones, and if so, how to fix the small problem. Many medical devices now employ AI to diagnose and manage the course of therapy. Construction companies use AI to figure out schedules and manage risks. The U.S. armed forces use all sorts of AI programs–to manage battles, to distinguish real threats from noise, and so on. Though these programs are usually smarter than humans could be, they aren't perfect. Sometimes, like humans, they fail.

Q: What so-called smart computers do–is that really thinking?

A: No, if you insist that thinking can only take place inside the human cranium. But yes, if you believe that making difficult judgments, the kind usually left to experts, choosing among plausible alternatives, and acting on those choices, is thinking. That's what artificial intelligences do right now. Along with most people in AI, I consider what artificial intelligences do to be a form of thinking, though I agree that these programs don't think just like human beings do, for the most part. I'm not sure that's even desirable. Why would we want AIs if all we want is human-level intelligence? There are plenty of humans on the planet. The field's big project is to make intelligences that exceed our own. As these programs come into our lives in more ways, we'll need programs that can explain their reasoning to us before we accept their decisions.

Q: But doesn't that mean our own machines will replace us?

A: This continues to be debated both inside and outside the field. Some people fear this–that smart machines will eventually get smart enough to come in and occupy our ecological niche, and that will be that. So long, human race. Some people think that the likeliest scenario is that smart machines will help humans become smarter, the way Garry Kasparov feels that smart chess-playing machines have made him a better player. Some people think that smart machines won't have any desire to occupy our particular niche: instead, being smarter than we are, they'll lift the burden of managing the planet off our shoulders, and leave us to do the things we do best–a rather pleasant prospect. But a few years ago, Bill Joy, an eminent computer scientist who helped found Sun Microsystems, was worried enough to write an article that calls for a halt in AI and some other kinds of research. He's far from the first, by the way. Most of the arguments against halting suggest that the benefits will outweigh the dangers. But nobody believes that there's no chance of danger.

I should add that forbidding AI research is pretty hopeless. Research isn't being done on some mountaintop in secret. It's being done all over the planet. Suppose a group of nations (or firms, or universities) decided to stop doing AI research. Would that stop other researchers elsewhere? No, the perceived advantage of continuing this research would make at least a small group continue. The abstainers would be forced to continue themselves for their own protection.

Q: Aren't you yourself worried?

A: I agree that the dangerous scenarios are entirely plausible. I explore that further in my book. But I also believe that the chance is worth taking. The benefits could be tremendous. Let's take some examples. Scientists are at work right now on robots that will help the elderly stay independently in their own homes longer than otherwise. I think that's terrific. At the 2003 Super Bowl (and presumably at the 2004 Super Bowl too) a kind of artificial intelligence called "smart dust"–smart sensors a millimeter by a millimeter–was deployed to sense and report on unusual activity, looking for terrorists. Scientists are also at work on a machine that can detect the difference between a natural disease outbreak and a bio-terror attack. Unfortunately, these are issues we must address for the foreseeable future. We've recently had a lot of bad news about cheating going on in the financial sector. At least one part of that sector, the National Association of Securities Dealers, uses AI to monitor the activities of its traders, looking not only at the trading patterns of individual traders, but at articles in newspapers and other possible influences.

Q: Whoa! Isn't that a big invasion of privacy? In fact, didn't we hear that AI was going to be used for the government's Total Information Awareness project? That makes me very uncomfortable.

A: Americans cherish their privacy, and so they should. American ideas about privacy have evolved legally and socially over a long period. Moreover, Americans aren't the only ones with such concerns–the European Union is even stricter about the use of personal information than the U.S. But the European Union also understands that the best defense against terrorism is to be able to detect patterns of behavior that might alert law enforcement officers to potential terrorism before it happens. Like the privacy you give up for the convenience of using a credit card, it's a trade-off. I think that trade-off should be publicly debated, with all the gravity it deserves.

Q: Shouldn't we just say no to intelligent machines? Aren't the risks too scary?

A: The risks are scary; the risks are real; but I don't think we should say no. In my book, I go further. I don't think we can say no. Here's what I mean: one of the best things humans have ever done for themselves was to collect, organize, and distribute information in the form of libraries and encyclopedias. We have always honored that effort, because we understand that no human can carry everything worth knowing inside a single head. The World Wide Web is this generation's new giant encyclopedia, and the Semantic Web, which is the next-generation Web, will have intelligence built in. It will be as if everybody with access to a computer had the world's smartest reference librarian at their fingertips, ready to help them find exactly what they need, no matter how ill-formed the question is. And it will be able to offer some assurance that the information you are getting is reliable–the present World Wide Web cannot do that. In other words, intelligent machines seem to be part of a long human impulse to educate ourselves better and better, to make life better for each of us.

Q: What's ahead as AI succeeds even more?

A: Many of us already deal with limited AI in our daily lives–credit cards, search engines like Google, automated voice instructions from our GPS devices to help us drive to our destinations; we order prescriptions over the phone from semi-intelligent voice machines. But visionary projects are underway to make–hey, read my book!

Q: Would you consider yourself an AI optimist?

A: On the whole, yes, though I'm not nearly as certain that AI will succeed on the time scale that some observers, such as Ray Kurzweil, believe. However, I'm re-thinking my skepticism as programs like Watson exceed my expectations. I've always thought that significant AI would come to us, but not in a rush. Now I'm not so sure—it might be sooner than I expected. Maybe much sooner. My book talks about my experience with intelligent robots at a meeting in the summer of 2003. Some people have said I was too critical, too negative about that. But in March 2004, DARPA staged a race for autonomous vehicles over a 30-mile desert course. The best vehicle (Carnegie Mellon's entry) did just over 7 miles before it quit. Some vehicles didn't even get started. The following year, however, the winning intelligent, autonomous car did its entire course without mishap, and a few others behind it did just fine too. These DARPA competitions have continued with ever more difficult problems posed and solved. But we still have a way to go, and no wonder. In just a few decades, we're trying to mimic and even surpass millions of years of natural evolution.

Q: How do you feel about what's called "the singularity"?

A: Oh, boy. I've long felt that "the singularity"—the moment when machine intelligence exceeds human intelligence—was so far off (if it happened at all) that anything I could say about it could only be hot air. With AI climbing the learning curve as fast as it has been lately, I've needed to revisit that stance. For now, I still maintain that if and when it happens, humans then and there will have the best opinions on how to confront this unprecedented event. We also need to question whether this singularity will arrive, as most proponents seem to think, in the form of a homogeneous membrane, spreading all over the planet all at once. It makes more sense to me that it will arrive in fits and starts, and will sometimes be self-contradictory—which would raise other kinds of problems for us, and for it. I wouldn't be astonished if, at that point, we turn to our own smart machines for advice on what the best next move is for the human race.



Copyright © 2005 - 2012 by Pamela McCorduck. All rights reserved.
Modified: July 03, 2012