A response to "The Singularity," by David Chalmers.  

Pamela McCorduck
 


This essay by David Chalmers is refreshing in its lucidity and moderation—both unusual in discussions of the singularity. It lays out the arguments far more clearly, and hence more persuasively, than the usual rhetoric, whether pro or contra.

As a consequence, I have reconsidered my own position on the singularity. I long believed that if such an event were possible, it would be many years, perhaps centuries, before it occurred, and anything we might have to say about it now would be inane: a great premise for science fiction, but not so great for examining functional or ethical stands to be taken now or in the immediate future. "Sufficient unto the day is the evil thereof," says the old Biblical warning, an admonition that begins, "Take therefore no thought for the morrow, for the morrow shall take thought for the things of itself."

Lately, however, artificial intelligence has improved some of its performances dramatically, and David Chalmers' essay offers a plausible argument that the singularity might indeed happen, and within decades, not centuries. As he argues, the singularity is therefore worth thinking about.

However, I do question the way the singularity is generally imagined to arrive. Will it really be a homogeneous, single-minded (so to speak) membrane that spreads everywhere over the planet? Or will it continue to be heterogeneous, perhaps self-conflicting, a collection of odds and ends, just as it is beginning to be now? For there are certainly many particular machines that are already smarter than we are. A heterogeneous singularity would raise many interesting problems for humans, and for itself.

In any case, Chalmers lays out certain kinds of defeaters that might derail the singularity's arrival: disasters, disinclination, and active prevention. Of these, disasters seem to me the only likely defeater. Disinclination and active prevention, on the contrary, are very unlikely. If artificial intelligence were being pursued in a secret location on a remote mountaintop, someone, or a committee of someones, might be able to beg the researchers to stop, or forbid them to continue, as might possibly have happened with the Manhattan Project, at least for a while.

But artificial intelligence is being pursued internationally, in many different, quite diverse locations, such as research institutes, universities, and firms. (I can even imagine lone wolf researchers, though given the resources needed, this seems unlikely.) Thus, even if a bloc of nations adopted a policy of no further research in artificial intelligence, they could not achieve universal prohibition; there would always be nations or groups who see local advantage in the research and persist in continuing it. Therefore the preventer bloc would be forced, in self-defense, to continue research too.

Since research that will lead to AI+ and AI++ is inevitable (and, let me say, to my mind a generally good thing), we arrive at Chalmers' suggestion that we solve the potential problems ahead of time by shaping these technologies in ways that are agreeable, or at least not harmful, to human beings, conforming "broadly to human values."

Who will decide what is agreeable, what is not harmful, and what conforms, even broadly, to human values? Values around the world are diverse. I wouldn't expect Chinese AI+ to be identical to Indian AI+, or to Brazilian AI+, never mind the AI+ of the Euro-American democracies, which diverge among themselves as well. Differences about what "conforms broadly to human values" might even split right down the gender divide. The seductive but elusive "we" in this essay is, in fact, a congeries of opinions, interests, and values. I doubt even the responders to this essay will agree altogether on values; thus a worldwide consensus is a chimera. Slow coding, analogous to slow cooking, might work, but human beings have a long history of leaping before they look, and this case would probably be no different. The international experience with responses to global climate change, for example, can give us no optimism about the human ability to shape AI+ cooperatively on any international scale.

If, then, the singularity is possible, maybe probable, maybe even inevitable, where does that leave us?

Chalmers offers us some possibilities. We could try everything out in a leak-proof virtual world first. People with experience in model-building will understand the enormous difficulties of constructing such a virtual world: even if it's possible to capture the major properties of our world in such a model, not each and every property can be captured, and who really knows where mischief lies? Still, I'd think that this kind of slow-cooking approach, however provisional, is worth pursuing to save us from fast and stupid unintended consequences. Slow unintended consequences would at least take longer to emerge.

We could bid goodbye to our mortal remains and upload our brains, or ourselves (whatever we think that means), into advanced hardware. Though I'm sentimental about my younger corporeal self, I wouldn't miss my older self's aches and pains in the least, and would consider it a fair tradeoff. Of course, as Chalmers points out, this is not yet a proven technology, and slips of any magnitude could happen. Some people also argue that human proprioception is deeply significant to human intelligence, but if this is true, it too could probably be simulated.

Moreover, what happens when platforms change? There's something undignified, not to say humiliating, about living, moving, and having our being as no more than a piece of legacy software that has to be tediously accommodated, like ancient FORTRAN routines. If this digital version of the self is somehow preserved, I think most of us would not want it frozen in perpetuity. We might opt for slow cognitive enhancement, as Chalmers suggests. Or we could do it the old-fashioned way, via evolution. But either method would tend to change the nature of this self, eventually profoundly. In any case, Chalmers presents a suitably modest uncertainty about just how well this uploading would work.

Others say the lesson from human history is that as the machines grow smarter, our own intelligence will improve apace. On this view, while we'll outsource tedious computations to the machines, we will have the luck to grow ever smarter as a result, and so our worries about being bested by the machines are unfounded.

Can it be possible that the best advice for human action, post-singularity, will come from the machines themselves?

Wait. I seem to have come back to where I started: "Take therefore no thought for the morrow, for the morrow shall take thought for the things of itself." But I'm grateful to David Chalmers and his provocative essay for encouraging me to think about the issues once again.


