Artificial Intelligence is a Real Threat. Just Ask Siri.

March 19, 2015

If you tell Siri, your iPhone’s intelligent digital assistant, not to wake you up in the morning, she replies, “I’ve set an alarm for 7 AM.” If you complain to her, saying, “I’m sleepy,” she replies, “Listen to me. Put down this iPhone right now and take a nap. I’ll wait here.” Siri is our very own mass-produced and mass-consumed source of artificial intelligence, programmed to deliver everything from the locations of nearby Mexican restaurants to snappy rejoinders to our questions. If you imagine artificial intelligence as the killer robots you saw in a science fiction movie once, or as a mythical mechanical superintelligence hell-bent on exterminating all inefficient life forms, you need only look at your phone and ask a question to hear the voice of machine intelligence. Pandora knows what songs you like based on your playlists; Facebook shows you advertisements for college life, winter wear, and rock music events in Brooklyn, all things you happen to be interested in. These are the subtle heralds of a new age in the design of intelligence. But what if these machine intelligences, these pre-programmed, amoral data processors, were to outwit their human creators?

Taking an unexpected stance in the discussion of future artificial intelligence are computer scientist Stuart J. Russell, physicist Frank Wilczek, cosmologist Max Tegmark, and theoretical physicist Stephen Hawking. In a piece published on May 1, 2014, the four scientists warned us of the dangers inherent in creating advanced artificial minds. In perhaps their most jarring claim, they write, “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of A.I. depends on who controls it, the long-term impact depends on whether it can be controlled at all.” And to what were these brilliant minds responding? A movie called Transcendence, starring Johnny Depp, had just hit theaters; a film that critics derided as “rhythmless, shapeless and… cheesy looking,” in the words of The New Yorker’s David Denby, had succeeded in attracting the attention of four of the world’s most brilliant scientists. They warned of superintelligent beings with no moral compass, of the threat of military misuse (“who controls it”), and, above all, of the general ‘outsmarting’ of the human race. Is this regressive paranoia or a timely warning?

When our man-made creations, whose beginnings reside in Siri or even in the computer program that keeps beating you at chess, escape our grip, even our wild human imagination may not be able to predict the future that follows. Will it be one in which sentient computers serve and live beside humans, held in check by cautious creators, or one of existential battles for survival, a struggle for evolutionary dominance? Either way, such ponderings about a future with artificial intelligence carry the danger of science-fictionalizing the question.

But what of the present advantages of A.I.? Complex robots are already conducting surgeries on terminally ill patients, carrying out procedures (guided by a human doctor, of course) too delicate and nuanced for even the human hand. The New York Times columnist David Brooks suggests further benefits to the continued presence of artificial intelligence in human lives: they will be “more modest machines that will drive your car, translate foreign languages, organize your photos, recommend entertainment options and maybe diagnose your illnesses.” We are already seeing the beginnings of such progress, still leashed by human programmers. The familiar cry that accompanies these advantages, however, is more than a little disturbing. What if the burden (or the privilege) of decision-making, of choice, is handed to an artificial intelligence? It could choose to kill cancer cells inside humans, or to kill the humans with a greater propensity for developing cancer. Nowhere within its millions of programmed lines will there be a command to hesitate, to consider a morally correct approach, to take a subjectively informed route.

There are some hackneyed arguments against A.I. research itself. Bonnie Docherty, Harvard Law lecturer and Human Rights Watch researcher, warns, “If this type of technology is not stopped now, it will lead to an arms race. If one state develops it, then another state will develop it. And machines that lack morality and mortality should not be given power to kill.” Ms. Docherty has written several reports on the dangers of killer robots, and the mere fact that she has taken the time to do so speaks to the gravity of the situation. Was I alone in imagining mad geniuses with a fleet of Iron Man suits in their arsenal? Still, the sensible course is to limit the hand of the military before limiting scientific progress itself. The fear underlying such opponents’ views is this: master the machine, but don’t let the machine master us.

In weighing the incalculable benefits against the increasingly alarming risks, we can only caution and constrain those who set out to create. Ethics boards created to determine the use and impact of artificial intelligence seem a step in the right direction, but will they stem the rising tide of human curiosity about artificial intelligence? Our moral selves, informed in equal measure by fear, self-preservation, and ethical consideration, may yet overcome our urge to progress and invent. Stephen Hawking has made his stance clear: “Success in creating A.I. would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”