Superintelligence: Paths, Dangers, Strategies. By Nick Bostrom. Oxford University Press, 352 pages, $29.95.
Those keen to preserve the idea that humans are special often point to intelligence. Crows may dabble with simple tools and elephants may be able to cope with rudimentary arithmetic. But humans are the only animals with the braininess necessary to build airplanes, write poetry or contemplate the Goldbach conjecture.
Humans may not hold that distinction for long, as researchers strive to create intelligence in the lab. That is the goal of research into artificial intelligence (AI) — and the possible consequences are the subject of a new book by Nick Bostrom, a philosopher at the University of Oxford.
Taking the possibility of AI as given, Bostrom spends most of his book on the implications of building it. He worries about a fundamental problem. Once intelligence is sufficiently well understood to build a clever machine, that machine may prove able to design better versions of itself. That could lead to an “intelligence explosion,” in which a machine arrives at a state as far beyond humans as humans are beyond ants.
For some, that is an attractive prospect, as such godlike machines would be better able than humans to run human affairs. But Bostrom is not among them. It is far from obvious that such a machine would have humanity’s best interests at heart — or, indeed, that it would care about humans at all.
Because nobody knows how such an AI might be built, Bostrom is forced to construct much of the book from speculation layered on plausible conjecture. But the book is valuable nonetheless. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote.