Georgia Tech Alumni Magazine, Vol. 91, No. 3 2015


The Ethics of Artificial Intelligence, or

Why We Don’t Have to Worry Yet About Bowing Down Before Our Robot Overlords

BY ELLIS BOOKER

Once the stuff of science fiction, autonomous, “thinking” robots are increasingly ubiquitous. They explore the surfaces of Mars and comets, wheel medications up and down hospital corridors, and, more recently, even drive themselves around our freeways.

So how did robots become so capable so quickly in the 21st century? Experts say a confluence of core technological advances (in processors, sensors and materials, as well as control algorithms and machine learning) is making robotic systems and other forms of artificial intelligence (AI) both more reliable and better able to navigate the world on their own.

With such advances, however, comes an almost ageless concern: Are we on the brink of creating an artificial intelligence that will pose a threat to humanity?

The faculty and researchers at Georgia Tech, one of the top centers of research on human/robot interaction, take this concern seriously. But they stress that the emergence of “strong” AI, in which machine intelligence becomes functionally equal or superior to our own, is unlikely in the foreseeable future, no matter what you see in movies or read in books. (See “Strong vs. Weak AI,” page 52.)

“People are worried about super-intelligences, and their profound potential impact on the human race,” says Ronald Arkin, Regents Professor and director of the Mobile Robotics Laboratory in Tech’s College of Computing. One of the nation’s most respected roboticists and roboethicists, Arkin is personally more worried about the “questions that are confronting us in the here and now” than about those that might affect us somewhere far down the road. He presented his views this summer in Washington, D.C., at an Information Technology and Innovation Foundation panel titled “Are Super Intelligent Computers Really a Threat to Humanity?”

As Arkin sees it, human-robot interactions are already surfacing ethical quandaries. Examples include lethal autonomous systems on the battlefield and machines designed to mimic human qualities and elicit emotional reactions from us. The ethical questions prompted by such systems are worthy of immediate attention, “perhaps more than the potential extinction of the human race,” he says.

There are more practical questions, too, that will soon be relevant. Take a self-driving car skidding on an icy street. Will the AI crash the vehicle into a crowded school bus, hit a couple of adults on the street, or drive itself into a wall, potentially killing its owner? “Someone will have to design what the system chooses to do, under those different types of circumstances, if it is indeed perceptually able to recognize those situations,” says Arkin, noting that this dilemma is a version of the classic Trolley Problem, in which we’re given the option of redirecting a runaway trolley to kill one person and thereby save five others on the tracks.

Yes, the autonomous car may have to be programmed with strategies for a no-win crash. “Just don’t expect universal agreement,” Arkin cautions, citing the lack of consensus on many life-and-death questions, including smoking in public, capital punishment and abortion. “Part of the problem with ethics is, quite often, there are no universally agreed upon answers.”
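To make Arkin’s design burden concrete, here is a minimal, purely illustrative sketch of how a crash-mitigation policy might rank unavoidable outcomes. Every class, name, weight and option below is a hypothetical assumption invented for illustration; nothing here comes from the article or from any real autonomous-vehicle system.

# Hypothetical sketch of a harm-ranking crash policy.
# All names, numbers and the tie-breaking rule are illustrative
# assumptions, not a real autonomous-vehicle API.
from dataclasses import dataclass

@dataclass
class CrashOption:
    description: str
    expected_casualties: float  # estimated people seriously harmed
    harms_occupant: bool        # whether the vehicle's own occupant is at risk

def choose_least_harmful(options: list[CrashOption]) -> CrashOption:
    # One possible (and contestable) policy: minimize expected casualties,
    # and on ties prefer harming the occupant over third parties.
    return min(
        options,
        key=lambda o: (o.expected_casualties, not o.harms_occupant),
    )

if __name__ == "__main__":
    icy_skid = [
        CrashOption("hit crowded school bus", 10.0, harms_occupant=False),
        CrashOption("hit two pedestrians", 2.0, harms_occupant=False),
        CrashOption("drive into wall", 1.0, harms_occupant=True),
    ]
    print(choose_least_harmful(icy_skid).description)  # -> drive into wall

The point of the sketch is not that this policy is right; it is that someone must choose the ranking and the tie-breaker, and, as Arkin notes, reasonable people will not agree on either.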


