Pandora's Box
I became fascinated with artificial intelligence (A.I.) in graduate school. In fact, my dissertation made use of a primitive form of A.I. to help select manufacturing variables for targeted metal cutting operations. At the time, A.I. was a fringe topic, something that was continually derided as a pipe dream. Today, it's anything but.
In the early 21st century, Ray Kurzweil wrote a book, The Singularity Is Near, in which he predicted that full-blown artificial general intelligence (AGI)—that is, a computer-based intelligence that is virtually indistinguishable from human intelligence—would appear by 2040. He suggested, correctly I think, that technology improves on an exponential curve and that advances in tech, and particularly in A.I., will occur more and more rapidly. We're already seeing that. 2040 may be a conservative estimate.
Kurzweil is a utopian, believing that the benefits of A.I. will far outweigh its obvious dangers. Many others, myself included, are more guarded, believing that A.I. could lead to a dystopian world in which, much like the Terminator scenario, machines rule. Sound crazy? Read Nick Bostrom's treatise, Superintelligence. Bostrom is often characterized as a dystopian, believing that AGI will quickly evolve into a superintelligence that cannot be controlled and could potentially be malevolent. He's not alone. The likes of Elon Musk, Bill Gates, and Stephen Hawking, among many other serious technologists, are also concerned.
Today there is a race to develop powerful A.I., with China, the United States, and other competent players pushing to gain an advantage. The question is, what happens if and when AGI appears? Once developed, it could, like a human, improve itself (but at the speed of light). It might have an IQ of 90 on day one, 200 on day n, then 1000 on day ... We cannot even conceive of an A.I. that has access to all the world's data, has an intelligence that is 1, 2, or 3 (!!) orders of magnitude greater than ours, and is always getting more intelligent. Would it solve all the world's problems? Would it decide that humans are superfluous to its self-defined goals? Would it lead to a utopia or a dystopia? No. One. Knows.
The big question is this: Are we willing to take the very real risk to find out?
Of course, most of this tech is simply beyond the grasp of politicians who believe that the greatest threat to the human race is climate change. Unlike that scientifically questionable long-term threat, which won't become acute (if it ever does) for almost 100 years, A.I. could become a very real issue in 20 or 30 years. A.I. represents a unique dilemma. Can it be controlled? Can we develop "ethical" standards for it? Should we ban its development?
The answers to those questions aren't easy or obvious. We should not become 21st-century Luddites, but we do have to recognize and respond to the risks. Is government regulation the answer? I'm not sure, but if past experience is any guide, probably not. Are industry standards viable? Not likely.
In our risk-averse, politically correct society, all of those avenues will be tried, but there are others elsewhere who will disregard any controls and plow forward.
Why? Because they believe (incorrectly, I think) that a superintelligence can be controlled and weaponized against their economic or military adversaries. Pandora's box awaits.