The Political Voice Spring 2013


other drivers and pedestrians? Should a plane land itself in an emergency if it risks harming citizens below? These and other questions have led to the rise of the field of machine ethics. Machine ethics, also known as machine morality, is the arena of research concerned with designing Artificial Moral Agents (AMAs): robots or artificially intelligent computers that behave morally. Relatedly, the term “roboethics” was coined by roboticist Gianmarco Veruggio in 2002, referring to the ethics of how humans design, build, use and behave toward robots and other artificially intelligent beings. Roboethics and machine ethics are quickly becoming central to new industrial endeavors, bringing a combination of legal, ethical and technological facets to the issue.

Anyone who has read Isaac Asimov’s “I, Robot” stories (or, more likely, seen the popular Will Smith film adaptation) is familiar with the “Three Laws of Robotics” and how they attempt to govern artificial intelligence, and, more importantly, how they fail to do so. If we program robots to protect humans, how can that goal be achieved while avoiding the tyrannical results that occur in science fiction? Oftentimes, robots built for one function, once given discretion and intelligence, end up performing another. In a 2009 experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale in Switzerland, robots programmed to cooperate with one another while searching for a beneficial resource and avoiding a lethal one learned to lie to each other and conceal the truth in hopes of hoarding the beneficial resource.

Vernor Vinge predicts the possibility of “the Singularity,” a scenario popularized in the Terminator movies: a period when machines become smarter than the humans who invented them, possibly becoming dangerous to human beings. Academic and technical experts agree that a computer becoming autonomous and able to make its own choices is transitioning from hypothetical to plausible, potentially posing threats to humankind. At a 2009 conference, scientists acknowledged that machines can already achieve degrees of partial autonomy, including the capacity to autonomously select and fire upon military targets, to find their own power sources, and to develop evasive maneuvers. But how much can speculation in science fiction films actually predict about the volatility of future machinery? Although many suggest that one way to deal with the problem of teaching machines right from wrong is to eliminate such machines altogether, for instance by banning autonomous bots, others point to the enormous benefits such machines can bring to humankind. Android soldiers would not rape, destroy a village in anger or become irrational amid the strain of battle; likewise, driverless cars are likely to be far safer than ones susceptible to human error.

With such complex issues arising in the fields of machine morality and roboethics, the question arises of how to teach robots to become AMAs while simultaneously avoiding the pitfalls and hazards echoed in science fiction. When ethical systems are programmed into machines, they must parallel the morals of the majority of society. With this comes the question of which algorithms to implement: decision trees or neural networks. Chris Santos-Lang argues that neural networks and genetic algorithms, because they can evolve their decision-making, are necessary to accommodate changing societal norms; others counter that decision trees, tools that model options and their possible consequences, will reliably follow collective standards of accountability and predictability, eliminating the margin in which machines could make their own, perhaps “incorrect,” decisions. Technology has propelled societal progress, but society has also impelled scientific advancement. There may not yet be a precise formula for a perfected “morality core,” but the sooner the precarious quandaries of AMAs are answered, the sooner humanity can reap the benefits advanced machinery will certainly bring to society.
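The contrast between the two approaches can be made concrete with a toy sketch. The rules below are entirely hypothetical and illustrative, not drawn from any real system; the point is that a hand-written decision tree, unlike a trained neural network, can be audited path by path in advance, which is exactly the accountability argument made by proponents of tree-like models.

```python
# Hypothetical illustration of a decision tree for a driverless car.
# Every rule here is invented for illustration, not a real ethics engine.

def braking_decision(obstacle_is_human: bool,
                     can_stop_in_time: bool,
                     swerve_lane_clear: bool) -> str:
    """Traverse a fixed tree of options and possible consequences."""
    if not obstacle_is_human:
        return "brake"            # only property at risk: brake normally
    if can_stop_in_time:
        return "brake"            # stopping avoids all harm
    if swerve_lane_clear:
        return "swerve"           # avoid the person; adjacent lane is empty
    return "emergency_brake"      # no safe escape: minimize impact speed

# Because the tree is explicit, every possible path can be enumerated
# and reviewed before deployment, unlike a learned model's opaque
# decision boundary.
print(braking_decision(True, False, True))
```

Each branch encodes a collective standard decided in advance, so the machine has no margin to improvise its own, perhaps “incorrect,” choice; the trade-off is that the tree cannot adapt when societal norms change.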
