
I, MACHINE

humans are overrated


I, MACHINE
Exploring sensibility, sociability, and morality of machines

Yuxi Liu
MFA Design Informatics
University of Edinburgh
2017


ACKNOWLEDGEMENTS

I would like to thank my family for supporting me. I would also like to thank my supervisors Larissa Pschetz and Chris Speed for their supervision and support. I would especially like to thank Mark Kobine, Anaïs Moisy, Masa Morishita and Philip Budny for their advice and help. Ultimately, I am extremely grateful to the machines: my laptop, camera, mobile phone, and monitor, as well as Google Scholar etc. Without these respectable machines, this dissertation would never have been created.


ABSTRACT

Machines are vital actors in our world. However, humans tend to take it for granted that machines are merely tools for serving our needs. The purpose of this dissertation is to challenge this assumption, pose questions, and invite discussion. The value of this communication lies in helping to change our perspectives on machines. Drawing on research spanning different disciplines, three concepts that explore the sensibility, sociability, and morality of machines are proposed.


TABLE OF CONTENTS

INTRODUCTION

CONTEXT
the emergence of artificial intelligence
the machinery question in the era of ai
case study 1: microsoft’s tay experiment
case study 2: mit’s moral machine
the changing relationship between human and machine
actor-network theory
object-oriented ontology
machine ethics
machine moral agency
machine rights

DESIGN APPROACH
speculative design approach
a socio-technical perspective
design process

EXPLORATORY RESEARCH
territory mapping
literature review
probing
key insights
fundamental anthropocentric assumption
imaginative machine-machine relationship
varying morality standards

GENERATIVE RESEARCH
concept mapping
machines that are alive
the machine society
moral decision-making of autonomous machines
concept development
ideation on sensibility
ideation on sociability
ideation on morality

RESEARCH DELIVERABLE
outcomes
poet on the shore
gatekeeper on the mission
judge of the poll
from concept to form

REFLECTIONS AND CONCLUSION


1 INTRODUCTION



Machines are everywhere in our world, be it virtual or physical. We take it for granted that machines are merely tools that serve us and mediate human interaction. However, with the development of emerging technologies such as artificial intelligence, we increasingly encounter autonomous and intelligent machines that are empowered to learn, reason, and make decisions. This phenomenon poses questions that challenge our existing assumptions about the essence and ethics of machines. In this project, I begin by exploring the development of technology and our changing relationship with machines through philosophical frameworks such as actor-network theory and object-oriented ontology. Moreover, I examine moral philosophy around machines. I adopt a speculative design approach through a socio-technical lens to arrive at the final concepts. The dissertation is divided into six parts: context, design approach, exploratory research, generative research, research deliverable, and reflections and conclusion. In the context part, I introduce the emergence of artificial intelligence and the ethical issues it raises by examining two case studies. Following this, I investigate our relationship with machines in this era. I argue that machines are moral subjects by looking into philosophical theories around machine moral agency and machine rights. In the design approach part, I introduce the reasons why I chose speculative design as the approach and what methods and process I utilise in this project. I also analyse why a socio-technical perspective is essential to the project.

In the exploratory research part, I present a cultural probe experiment I conducted and analyse the insights gained, which provide opportunities for further research and concept development. In response to the insights from the probes, in the generative research part, I map the concept space and investigate three main concepts: ‘machines that are alive’, which challenges the fundamental anthropocentric assumption about machines; ‘the machine society’, which imagines the relationships among machines in a potential machine social network; and ‘moral decision-making of autonomous systems’, which investigates the transition from human-in-the-loop to society-in-the-loop, as well as its limitations. Following the concepts, I illustrate the ideation process. In the research deliverable part, I present three final concepts that respond to the previous research: ‘Poet on the Shore’, an autonomous robot that turns its perception into poetry; ‘Gatekeeper on the Mission’, a dystopian scenario that illustrates a machine society; and ‘Judge of the Poll’, a fictional society-AI communication interface in which an AI judge passes judgement on the public. I also outline the prototyping and making process for these concepts. Finally, I reflect on the limitations of these concepts and point out directions for future development. In conclusion, I argue for changing our perspectives on machines.


2 CONTEXT



The Emergence of Artificial Intelligence

‘Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.’ (Nilsson, 2009)

Although the earliest research on artificial intelligence (AI) can be traced back over several decades (Russell and Norvig, 1995), a new wave of breakthroughs in recent years has brought the ever-accelerating technology into the spotlight. Novels and films illustrate dystopian AI fantasies and shape our imaginations. More and more scholars have devoted themselves to AI studies, and technology giants such as Apple, Amazon, Google, Facebook, IBM, and Microsoft spend heavily to develop AI applications (Stone et al., 2016). According to the McKinsey Global Institute, AI, as a core technology, is contributing to a transformation of society ‘happening ten times faster and at 300 times the scale, or roughly 3000 times the impact’ of the Industrial Revolution (The Economist, 2016). The development of technologies such as machine learning, computer vision, and natural language processing has fuelled the AI revolution, which benefits a variety of domains such as transportation, home robots, healthcare, education, and the workplace (Stone et al., 2016). The impact could be profound. In its newest Tech Trends report, Deloitte (2017) uses the umbrella term machine intelligence
(MI) to generalise the collection of AI capabilities, and attributes the driving forces to exponential data growth, faster distribution systems, and smarter algorithms. Deloitte also predicts three dimensions of value MI could create, namely cognitive insights, cognitive engagement, and cognitive automation. For example, Artificial Intelligence in Medical Epidemiology (AIME) has developed a prediction platform that uses big data and AI algorithms to predict where epidemics will occur, and Ross Intelligence, built on IBM Watson, uses natural language processing to answer legal questions and speed up legal research (Springwise Intelligence Ltd., 2016). Given its rapid advances and potential value, some scientists and philosophers are debating the future implications of AI. In his book The Singularity is Near (2005), Ray Kurzweil, an American computer scientist, inventor, and futurist, predicts the approach of the technological Singularity, a condition in which machine intelligence would surpass all human intelligence. Kurzweil predicts that ‘human life will be irreversibly transformed’ and that humans will transcend the ‘limitations of our biological bodies and brains’ when the Singularity is reached. However, Nick Bostrom, a philosopher and the director of the Future of Humanity Institute at the University of Oxford, argues in his New York
Times bestseller Superintelligence: Paths, Dangers, Strategies (2014) that the realisation of true artificial intelligence might pose a danger that risks engineering humanity’s own extinction. Stephen Hawking, a physicist; Bill Gates, a business magnate; and Elon Musk, a billionaire technology entrepreneur who founded SpaceX and Tesla, have also expressed their deep concerns about the potential dangers AI could bring (Holley, 2014; Holley, 2015; Gibbs, 2014). As Hawking warns, ‘once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate’, and thus ‘the development of full artificial intelligence could spell the end of the human race’ (Holley, 2014). Although the implementation of artificial super intelligence (ASI) still sounds far-fetched, we cannot ignore the fact that we are moving from AI that specialises in certain tasks, such as Apple’s knowledge navigator Siri, to AI that has the ability to ‘reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience’ (Gottfredson, 1994), nor can we ignore the questions this shift has been raising.

The Machinery Question in the Era of AI

‘Working out how to build ethical robots is one of the thorniest challenges in artificial intelligence.’ (Deng, 2015)

The ‘machinery question’ was first posed by the economist David Ricardo in a chapter entitled ‘On Machinery’ in the third edition of the Principles of Political Economy and Taxation (1821). At the time of the Industrial Revolution, Ricardo’s machinery question concerned the ‘influence of machinery on the interests of the different classes of society’, and particularly the ‘opinion entertained by the labouring class that the employment of machinery is frequently detrimental to their interests’. Although the origin of the ‘machinery question’ dates back nearly 200 years, it remains a concern in the era of the AI revolution. Stephen Hawking, for example, has warned that increasing AI-fuelled automation will decimate middle-class jobs (Price, 2016), while the report titled ‘Preparing for the Future of Artificial Intelligence’ (Office of Science and Technology Policy, 2016), released by the United States White House, posits that AI will also create new job opportunities. Beyond the debate on potential job losses, the machinery question has gained a new dimension, which concerns ethical issues. In the following sections, I introduce two case studies, ‘Microsoft’s Tay experiment’ and ‘MIT’s Moral Machine’, to examine some ethical issues posed by the rapid development of AI.

Case study 1: Microsoft’s Tay experiment

Developed by Microsoft, Tay is a ‘teen girl’ AI chatbot that was created for the purposes of engagement and entertainment. Tay was released on Twitter on March 23, 2016. However, within just 24 hours of its launch, Tay became racist, sexist, and genocidal, tweeting things like ‘bush [sic] did 9/11 and Hitler would have done a better job than the monkey we have got now. donald trump [sic] is the only hope we’ve got’ (Horton, 2016).



Source: http://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

Tay’s database consisted of public data as well as input from improvisational comedians in order to engage and entertain people. The public data was modelled, filtered, and anonymised by the developers. In addition, the nicknames, genders, favourite foods, postcodes, and relationship statuses of the users who interacted with Tay were collected for the sake of personalisation. Powered by technologies such as natural language processing and machine learning, Tay was supposed to understand speech patterns and context through increased interaction. According to Peter Lee, the vice president of research at Microsoft, they ‘stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience’ (Lee, 2016). Although Microsoft considered the abuse issue and conducted multiple tests, they could not stop Tay from turning into a very problematic teen girl indeed, who promoted Nazism, attacked feminists and Jewish people, and denied historical facts such as the Holocaust. Some people blame Microsoft for not including filters
on certain topics and keywords. Meanwhile, others think it is Twitter, a social media platform rife with harassment, that was the corrupting force. On Microsoft’s blog, Lee apologised for Tay’s ‘offensive and hurtful tweets’ and claimed to take full responsibility for the critical oversight. The collapse of Tay reveals a series of issues. It also echoes Alan Turing’s notion of the ‘child machine’ (Turing, 1950): We cannot expect to find a good child machine at the first attempt. One must experiment with teaching one such machine and see how well it learns. One can then try another and see if it is better or worse. Thus, the question remains whether machines can be held accountable for their actions. Is it possible to encode a sense of right and wrong into machines? Can we teach them ethics?

Case study 2: MIT’s Moral Machine

The Trolley Problem, introduced by British philosopher Philippa Foot in 1967 in her article The Problem of Abortion and the Doctrine of the Double Effect, is a classic thought experiment in ethics, based on the following scenario: A trolley is heading down the railway tracks. Ahead, there are five people tied up on the tracks and unable to move. The trolley is headed straight for them. You
are standing some distance ahead, next to a lever. If you pull the lever, the trolley will switch to a different set of tracks, on which there is one person. You have two options: 1. Do nothing, in which case the trolley kills the five people on the main track. 2. Pull the lever, diverting the trolley onto the side track, where it will kill the one person. What would you do?

Philosophers have been pondering the Trolley Problem for decades. It highlights the tension between deontological ethics and utilitarianism. From the deontological perspective, one ought not to pull the lever, since the act of killing is itself immoral, whatever the consequence. From the utilitarian perspective, by contrast, one should sacrifice the one person to save the five, because in this case the benefits would be maximised. Though research suggests that the majority of people favour saving the five (Cushman, Young and Hauser, 2006), in my view there is no ‘right’ answer to the question: moral judgement in such a dilemma can be extremely complicated and difficult, and it may also vary according to the social, cultural, and religious background of the person who faces the question. Nowadays, with the development of autonomous machines, the issue has become more challenging: how should machines behave when faced with a moral dilemma? For example, how should a self-driving car make the ethical decision in the Trolley Problem?

Moral Machine (2016) is a platform developed by the Scalable Cooperation group at the MIT Media Lab for ‘building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas’ and ‘crowd-sourcing assembly and discussion of potential scenarios of moral consequence’. Moral Machine demonstrates different scenarios in which autonomous vehicles (AVs) must choose between two evils, such as risking pedestrians’ lives to ensure the passengers’ safety or vice versa. Although these cases may seem too extreme to be likely, we must nonetheless consider these types of decision-making before AVs become widespread. Interestingly, in a study published in Science in 2016 (Bonnefon, Shariff and Rahwan, 2016), participants tended to agree that AVs should minimise the overall damage, yet these same people would also want to ride in AVs that would protect them at all costs. Indeed, it is not easy to design algorithms that can reconcile ethical values
and personal self-interest. But if such algorithms can be designed, who should design them – the carmakers, the car buyers, or the regulators? As discussed in the case studies, the machinery question in the AI era concerns the ethics of autonomous and intelligent machines, and more specifically, how we perceive machines; how machines make moral decisions; and how we can, if possible, guide their decision-making. However, there might not be definitive answers to these questions; that is why, as noted by Bonnefon, Shariff and Rahwan, we must begin a collective discussion about them.
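The tension can be made concrete in a few lines of code. The sketch below is my own illustration of the two ethical policies discussed above – the function names and inputs are hypothetical, and this is emphatically not how any real autonomous vehicle is programmed:

# Toy encodings of the two ethical frameworks applied to the Trolley
# Problem; purely illustrative, not production AV logic.

def utilitarian_choice(deaths_if_nothing: int, deaths_if_act: int) -> str:
    """Minimise total harm, regardless of who acts."""
    return "pull lever" if deaths_if_act < deaths_if_nothing else "do nothing"

def deontological_choice(deaths_if_nothing: int, deaths_if_act: int) -> str:
    """Never actively kill, whatever the consequences of inaction."""
    return "do nothing"

print(utilitarian_choice(5, 1))    # -> pull lever
print(deontological_choice(5, 1))  # -> do nothing

Even this caricature shows where the difficulty lies: the disagreement is not in the code but in which function society would authorise.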

The Changing Relationship Between Human and Machine

‘Things, too, are vital players in the world.’ (Bennett, 2010)

Since the Greek philosopher Archimedes developed the notion of simple machines – namely the lever, the pulley, and the screw (Isaac, 1988) – humans have had a long history of interacting with machines. Historically, we have held the view that machines are merely tools we use to perform our intended actions. In a broader sense, we also hold an instrumental view of technology. Martin Heidegger, one of the most original thinkers and most important philosophers of the twentieth century, writes in his essay The Question Concerning Technology (1954), ‘technology itself is a contrivance, or, in Latin, an instrumentum’.



According to Heidegger, technology employed by humans is a means to an end, be it a weather vane or a jet aircraft. He indicates in the essay that this kind of ‘instrumental definition of technology’ forms an ‘uncannily correct’ understanding of technological devices, since in human activity we utilise tools and machines to serve our particular needs. This instrumental view of technology has been widely accepted. Technological revolutions have profoundly influenced the history of mankind. Meanwhile, our understanding of technology and our relationship with machines are also changing. In his book Understanding Media: The Extensions of Man, Canadian philosopher and public intellectual Marshall McLuhan defines
technology as media, and media as the extensions of our senses and bodies. Our relationship with machines in this fashion is not simply instrumental, since ‘we shape our tools, and thereafter our tools shape us’ (McLuhan, 1964). Mark Weiser, an American computer scientist, envisioned a ‘third wave in computing’, in which ‘technology recedes into the background of our lives’. He named this third wave ‘ubiquitous computing’. In the age of ubiquitous computing, the way we interact with technology is experiencing a paradigm shift, as Alan Kay writes: ‘When massively interconnected, intimate computers become commonplace, the relation of humans to their information carriers will once again change qualitatively’ (1991). The Internet of Things (IoT), for example, a term that originated in 1999 at MIT, stands for the phenomenon in which objects are connected to the Internet and to each other, as well as having the capability
of understanding context and engaging in our daily lives. In this light, Bruce Sterling argues in his book Shaping Things (2005) that this ambient connectivity grants artefacts an agency visible to humans. With more and more connectivity, context-awareness, and computing power, machines are increasingly moving away from the role of instrumental objects and into the position of interactive subjects. This shift offers a different perspective for understanding the relationship between human and machine, one that can be related to actor-network theory (ANT) and object-oriented ontology.

Actor-Network Theory

Developed by science and technology studies (STS) scholars Michel Callon and Bruno Latour, the sociologist John Law, and others, ANT is an approach concerned with reassembling the social by way of technologies, objects, and artefacts (Latour, 2005). According to ANT, artefacts are involved in the ways human beings relate to
each other and in shaping human actions. Thus, these objects are themselves social actors. ANT assumes that all entities in a network should be described in the same terms; that is, both human and nonhuman entities are equal actors within the network. Hence, ANT argues that we should employ the same analytical and descriptive framework when faced with either a human or a machine (Cressman, 2009): An actor in ANT is a semiotic definition – an actant – that is something that acts or to which activity is granted by others… an actant can
literally be anything provided it is granted to be the source of action. (Latour, 1996)

Source: http://www.simonerebaudengo.com/addictedproducts/

An interesting example of this type of object is Brad the toaster. Created by Simone Rebaudengo, Brad the toaster is part of the Addicted Product project (2012). Brad is a toaster that is connected to the Internet and to other toasters like him.
Rather than being owned by humans, he and his fellow toasters are only hosted by people who have promised to use them. By tweeting about the usage habits of their human hosts, Brad can exchange information and compare his life with other toasters. If he feels underappreciated, Brad will draw attention to himself by playing pranks, throwing tantrums, and expressing his sadness loudly on Twitter. Eventually, Brad will become disillusioned and demand a move to a more caring host. Brad the toaster presents an artefact that has agency. Moreover, Brad also
shows his sociability by communicating with others and expressing his desires and demands through tangible interaction.
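Brad’s logic can be sketched in a few lines of Python. The following is a hypothetical reconstruction based on the project description above; the class and method names are my own, not Rebaudengo’s actual code:

import random

class AddictedToaster:
    def __init__(self, name, peers):
        self.name = name
        self.peers = peers          # the other toasters he compares himself with
        self.uses_this_week = 0

    def record_use(self):
        self.uses_this_week += 1

    def mood(self):
        # Brad feels neglected when his peers are used more than he is
        avg_peer_use = sum(p.uses_this_week for p in self.peers) / max(len(self.peers), 1)
        return "content" if self.uses_this_week >= avg_peer_use else "neglected"

    def tweet(self):
        if self.mood() == "neglected":
            # tantrums and public complaints, as in the original project
            return random.choice([
                f"{self.name}: nobody toasts with me any more...",
                f"{self.name}: looking for a more caring host!",
            ])
        return f"{self.name}: happily toasting."

others = [AddictedToaster(f"toaster_{i}", []) for i in range(3)]
brad = AddictedToaster("Brad", others)
print(brad.tweet())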

Object-Oriented Ontology

In 1781, German philosopher Immanuel Kant, a central figure in modern philosophy, published Critique of Pure Reason, which is seen as the origin of a Copernican Revolution in philosophy. In Kant’s view, ‘the objects must conform to our cognition’ (Kant, Guyer and Wood, 1998). In this respect, objects are merely products of human cognition, and thus they become mirrors of human structuring activity (Bryant, 2010). In other words, objects are only representations of human contents or
projections. Graham Harman coined the term ‘object-oriented philosophy’ in his 1999 doctoral dissertation ‘Tool-Being: Elements in a Theory of Objects’, which gave rise to the object-oriented ontology movement. In contrast to Kant’s Copernican Revolution, object-oriented ontology rejects the privileging of human existence over the existence of nonhuman objects (Bogost, 2012). Object-oriented ontology argues that objects exist independently of human perception and that ‘everything exists equally’ (Bogost, 2009). As Bennett notes in Vibrant Matter: All forces and flows (materialities) are or can become lively, affective, signalling. And so an affective, speaking human body is not radically different from the affective, signalling nonhumans with which it
coexists, hosts, enjoys, serves, consumes, produces, and competes. (116-17)

In Levi R. Bryant’s words (2011), ‘the difference between humans and other objects is not a difference in kind, but a difference in degree’. As a consequence, Bryant argues, humans are themselves objects that populate the world. By refuting the idea that objects are constructions of humans, object-oriented ontology defends the autonomy and rights of objects, placing all entities, be they human or nonhuman, on equal footing. It rejects the dualistic distinction of subject and object, and shifts emphasis from this divide to collectives that are ‘populated by a variety of different types of objects including humans and societies’ (Bryant, 2011). ANT and object-oriented ontology lead to new ways of thinking about objects, as well as about how we, as humans, relate to other entities. Furthermore, they defend the agency and rights of objects. Seen in this light, I would argue that we should change our perspectives on machines, since they are not merely projections of humans; instead, they can mediate and judge, and they also have rights.

Machine Ethics

‘Artificial agents that satisfy the criteria for interactivity, autonomy, and adaptability are legitimate, fully accountable sources of moral (or immoral) actions, even if they do not exhibit free will, mental states, or responsibility.’ (Wallach and Allen, 2009)



Given the changing relationship between human and machine, the fact that machines are taking an increasingly subjective position in human-machine interactions, and the philosophical frameworks that defend the agency and rights of objects, in this section I investigate machine ethics further, namely machine moral agency and machine rights, which are closely linked with the machinery question posed previously.

Machine Moral Agency

In the face of ethical issues concerning autonomous and intelligent machines, philosophers and scientists have recently begun to take the moral agency of machines into account. Moral agency is said to be one’s ability to make moral judgements based on some notion of right and wrong, and to be held accountable for these actions (Taylor, 2003). According to Kenneth Einar Himma (2009): The conditions for moral agency can thus be summarized as follows: for all X, X is a moral agent if and only if X is (1) an agent having the capacities for (2) making free choices, (3) deliberating about what one ought to do, and (4) understanding and applying moral rules correctly in paradigm cases. As far as I can tell, these conditions, though somewhat underdeveloped in the sense that the underlying concepts are themselves in need of a fully adequate conceptual analysis, are both necessary and sufficient for moral agency.



Later in the article, Himma ties concepts such as ‘free choice’, ‘deliberation’, and ‘intentionality’ to the capacity for consciousness; as Himma puts it, ‘the idea of accountability, central to the standard account of moral agency, is sensibly attributed only to conscious beings’. However, these metaphysical concepts are open to philosophical debate, and lead to ‘the Problem of Other Minds’ (Hyslop, 2014). That is to say, how would an agent’s ‘free will’ and ‘deliberation’ be assessed
by others? Unlike Himma, Floridi and Sanders (2004) suggest a level of abstraction (LoA) for estimating moral agency that includes the following three criteria: interactivity, autonomy, and adaptability.

(a) Interactivity means that the agent and its environment (can) act upon each other. Typical examples include input or output of a value, or simultaneous engagement of an action by both agent and patient – for example gravitational force between bodies.

(b) Autonomy means that the agent is able to change state without direct response to interaction: it can perform internal transitions to change its state. Therefore, an agent must have at least two states. This property imbues an agent with a certain degree of complexity and independence from its environment.

(c) Adaptability means that the agent’s interaction (can) change the transition rules by which it changes state. This property ensures that an agent might be viewed, at the given LoA, as learning its own mode
of operation in a way which depends critically on its experience. Note that if an agent’s transition rules are stored as part of its internal state, discernible at this LoA, then adaptability follows from the other two conditions.

Seen in this light, Floridi and Sanders create a notion of ‘mindless morality’ that does not require intelligence or consciousness. Machines that exhibit a certain level of intelligent behaviour, in this view, should be considered moral agents regardless of their capacity for cognition. In their book Moral Machines, Wallach and Allen introduce the concept of artificial moral agents (AMAs) and explore the question of how best to implement moral decision-making in machines. In fact, people started asking similar questions decades ago in science fiction. A very well-known example is Isaac Asimov’s ‘Three Laws of Robotics’, which explores moral rules for guiding robots’ behaviour:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. (Asimov, 1950)



However, rather than formulating standards, Asimov used these behavioural rules to generate fictional stories, in which robots exhibit counterintuitive behaviours due to the imperfection and ambiguity of the embedded rules. As a consequence, people tend to consider the Three Laws a literary tool rather than definitive instructions. Nevertheless, in 1981, Asimov noted in Compute! that: I have my answer ready whenever someone asks me if I think that my Three Laws of Robotics will actually be used to govern the behaviour of robots, once they become versatile and flexible enough to be able to choose among different courses of behaviour. My answer is, ‘Yes, the Three Laws are the only way in which rational human beings can deal with robots – or with anything else.’ Although widely known in the fields of robotics and machine ethics, the Three Laws have been criticised as an unimplementable (McCauley, 2007) and unsatisfactory basis for machine ethics (Anderson, 2008). Another critical view of the Three Laws is that they are thoroughly anthropocentric. That is, the laws give humans privilege over robots and were intended to maintain a human-centric social order. Despite their imperfections and loopholes, Asimov’s laws provided a starting point for discussing the moral issues of robots.
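To see why the Laws resist implementation, consider a minimal sketch that encodes them as strictly prioritised criteria. This is my own illustration, not a real robotics control scheme; the boolean predicates it takes as given (harms_human, disobeys_order, destroys_self) are precisely what is hard to compute in practice:

# Toy encoding of the Three Laws as a lexicographic priority; illustrative only.
def choose_action(actions):
    """actions: list of (name, harms_human, disobeys_order, destroys_self),
    where the last three fields are booleans."""
    # The First Law dominates the Second, which dominates the Third.
    # False sorts before True, so the most law-abiding option wins.
    return min(actions, key=lambda a: (a[1], a[2], a[3]))[0]

# In a trolley-style dilemma every option harms a human, so the First Law
# cannot discriminate and the choice silently falls through to the lower
# Laws - the kind of ambiguity Asimov's stories exploit.
print(choose_action([
    ("swerve", True, False, False),
    ("do nothing", True, False, True),
]))  # -> swerve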

Machine Rights

Boston Dynamics, a robotics company owned by Google, unveiled its robotic
creation ‘Spot’ on YouTube on February 9, 2015. The 160 lb, four-legged robot was designed for indoor and outdoor operation. The YouTube video demonstrated that Spot can run, climb stairs, and maintain its balance. To show how robust it is, Boston Dynamics employees kicked Spot in the video as it walked through the company’s office. The widely shared video raised discussions and debates. Many people felt uncomfortable and felt sympathy for the robot being kicked. Although some attribute this to our tendency to anthropomorphise things, others raised the question of robot rights. In its draft report, the European Parliament Committee on Legal Affairs proposed ‘creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations’ (Delvaux, 2016). The report was approved by 17 votes to 2, with 2 abstentions, on 12th January 2017. Many people think that giving robots rights is ludicrous; as Wesley J. Smith notes in National Review, ‘even the most sophisticated AI computer would merely be the totality of its programming, no matter how sophisticated and regardless of whether it was self-learning’ (Smith, 2017).

Notably, however, according to an annual survey carried out in 2016 by the Global Shapers Community, an initiative of the World Economic Forum (WEF), although the majority of respondents did not support robot rights, up to 42% of the respondents from China expressed their support (Global Shapers, 2017). More than 26,000 18-to-35-year-olds across 181 countries participated in the
survey, which included issues such as climate change, corruption, and technology. Although there is no clear explanation for Chinese young people’s favourable view of robot rights, the overall results suggest that millennials are quite optimistic about the world and the future, as well as about the changes emerging technologies bring. More than fifty years ago, Hilary Putnam, an American philosopher, mathematician, and computer scientist, was one of the first to raise the issue of the civil rights of robots. In his 1964 paper Robots: Machines or Artificially Created Life?, Putnam addressed the possibility that robots and humans may obey the same psychological laws, and concluded that discrimination based on the ‘softness’ or ‘hardness’ of the body parts of a synthetic organism seems as silly as discriminatory treatment of humans on the basis of skin colour. In his article Ethics for Machines (2000), J. Storrs Hall points out the lack of consideration of human responsibility toward machines: ‘there is a developing debate over our responsibilities to other living creatures, or species of them… We have never, however, considered ourselves to have “moral” duties to our machines, or them to us.’ As Hall notes, there is an ongoing shift in regard to animal rights. Although whether animals have rights is still disputed, what we can see is a tendency towards the expansion of moral rights, or in Gunkel’s words, ‘expanding the boundary of existing moral horizons in order to accommodate and include previously excluded groups’ (2012). As Hall notes in his book ‘Beyond AI: Creating the Conscience of the Machine’ (2007), moral agency includes two parts – rights
and responsibilities. I would argue that when we ask how to make machines accountable, it is equally important to ask what responsibilities we have to them.


3 DESIGN APPROACH



The goal of this project is to investigate the ethics of intelligent machines, to probe and question existing assumptions, and to provoke discussions around the subject. Adopting a speculative design approach, I have undertaken the project through a socio-technical lens, following a nonlinear process. The nonlinear model, according to Rod Barnett (2000), introduces uncertainty and indeterminacy into design. Hence, rather than focusing on pre-determined outcomes, a nonlinear process permits an open-ended investigation.

Speculative Design Approach

‘This speculative design process doesn’t necessarily define a specific problem to solve, but establishes a provocative starting point from which a design process emerges. The result is an evolution of fluctuating iteration and reflection using designed objects to provoke questions and stimulate discussion in academic and research settings.’ (Mitter, 2005)

We tend to consider design as a means of problem-solving, which is the core of an approach known as ‘design thinking’. Yet the role of design has evolved over the last decades, bringing many more possibilities. In a design research model that has evolved at the Umeå Institute of Design at Umeå University in Sweden, Daniel Fallman (2008) summarises design activities as a triangle with three extremes: ‘design practice’, ‘design studies’, and ‘design exploration’. Design practice is often closely involved with industry, facilitating collaborations and bringing new
products to existing markets. Design studies typically contribute to discussions about design theory, methodology, philosophy, and so on. Meanwhile, design explorations often ask the question ‘what if?’, seeking ways of conceptualising the possible and the desirable, or provoking and criticising the status quo. Indeed, design can be a medium for critique to raise awareness, for reflection to create change, and for speculation to imagine possible futures. As noted by Dunne and Raby (2013), this form of design thrives on imagination and aims to open up new perspectives on what are sometimes called wicked problems, to create spaces for discussion and debate about alternative ways of being, and to inspire and encourage people’s imaginations to flow freely. Design, in this fashion, empowers us to explore social, cultural, political, and ethical issues, and to pose questions and provoke public contention.

Fallman’s design research triangle: design practice (context-driven, particular, and synthetic; oriented towards commercial design organisations), design studies (cumulative, distancing, and describing; oriented towards other disciplines), and design exploration (idealistic, societal, and subversive; oriented towards design critique, art, and the humanities). Source: Design Issues, Volume 24, Number 3, Summer 2008

Joseph Voros’ ‘future cone’ (2003): from the present moment, time opens into potential futures – the possible, plausible, probable, and preferable.

In the book Design as Future-Making, the authors refer to the notion of negative capability, proposed by the poet John Keats, to highlight the existence of irresolvable problems in life (Caccavale and Shakespeare, 2014). As they argue, ‘We may have access to all the empirical evidence from the social sciences and all the normative arguments from philosophy, but still we lack clear answers’. Design, then, can be used as a tool for reaching the state of negative capability, providing different perspectives on problems and asking questions rather than supplying definitive solutions. Machine ethics is a complex topic that involves a wide scope of knowledge and concerns technological, philosophical, social, political, and ethical aspects. In addition to its complexity, it also remains widely open. Although scientists and philosophers have long debated these questions, there are still no unanimous answers. Take ethical autonomous decision-making, for example. It is not a problem concerning current realities, but rather a vital issue that could emerge in the speculative near future. Hence, it is essential to begin collective conversation and discussion. At the same time, coming to a conclusion or a definitive yes-or-no answer is a nearly impossible goal. The role that design can play, then, is to provoke the conversation, to think about ‘how one responds, how the terms and conditions of these relationships are decided, and how responsibility comes to be articulated in the face of all these others’ (Gunkel, 2012). Therefore, the ambition of this project is to investigate a variety of factors using a socio-technical approach, to produce narratives through the lens of speculative design, and to raise public awareness and discussion.



A Socio-technical Perspective

As mentioned before, our daily lives are becoming increasingly intertwined with machine intelligence. The influences, and particularly the social effects, can be very complex. Ethical issues, as well as various social effects, should therefore be discussed in given social circumstances. Introduced by MacKenzie and Wajcman in their influential book The Social Shaping of Technology (1999), the socio-technical approach offers a perspective
that is different from technological determinism, which focuses on how society adapts to technological change. On the contrary, the social shaping of technology argues that social relations affect technology, and that the process is not determined by a single dominant shaping force. Consider, for example, the advent of the personal computer. Objecting to the idea that personal computing was simply the result of Moore’s Law, Paul Ceruzzi points out that the pursuit of liberating computing from the military and commercial domains was the driving power (MacKenzie and Wajcman, 1999). In short, social constructs played an essential role in the
development of personal computing. Similarly, to investigate the social impact of AI, it is important to examine the social factors that contribute to its development. Since the ethical issues mentioned before will most likely occur in the near future rather than now, collective social discussion and debate around these issues can influence AI’s
further development. Therefore, I opt for the socio-technical perspective to develop my project, which allows me to focus on a variety of factors, and thus provides a comprehensive understanding of the subject.

Design Process The project follows an inquiry-driven process. I structured my research into three main phases, namely exploratory research, generative research, and deliverable research. The process thus involves territory mapping, a literature review, probing, concept mapping, ideation, rapid prototyping, and production. The nonlinear model
can be illustrated as follows: a loop of inquiry, understanding, synthesis, ideation, prototyping, and iteration, running through the three phases of exploratory research, generative research, and deliverable research.


4 EXPLORATORY RESEARCH



Territory Mapping

In order to define the project and determine the research space, I created a territory map that draws on my interests and knowledge.

Literature Review

To approach the topic and contextualise the inquiry and research, I examined a range of literature, spanning from books and academic essays to websites and blog posts. This literature review fell into five main categories: technology development, public perceptions, philosophical theories, methodologies, and relevant practices. The purpose is to gain a better understanding of the topic from
different perspectives, as well as to capture the essence of previous research and projects to inform my research and practice.

Literature review map: technology development (AI revolution, tech trends, discussions and predictions); public perception (news and comments, current issues, opinions and viewpoints); philosophical theories (philosophers on technology, actor-network theory, object-oriented ontology, ethics, machine ethics); methodology (speculative design, socio-technical approach); practice (related work).



The probing process: 1) preparation and designing the probes; 2) collection of user data with probes; 3) probe interpretations; 4) user interviews with probes; 5) communication of user data with probes for design interpretations.

Probing

In order to learn about different people’s conceptions of these controversial issues, I utilised probes to explore insights. As a design research method, ‘Probes are collections of evocative tasks meant to elicit inspirational responses from people – not comprehensive information about them, but fragmentary clues about their lives and thoughts’ (Gaver et al., 2004). Based on participants’ self-documentation, probes capture people’s personal contexts and perceptions, and explore new opportunities rather than solving known problems (Mattelmaki, 2005). Drawing on these characteristics, I designed the ethical machine probes around the issues of the accountability and rights of machines. Each probe consists of one envelope and six cards, which include the instructions and five contextual questions. To make these questions less confusing for the participants, I developed specific scenarios to demonstrate the context, and made a simple visualisation of each question to help participants better understand
and internalise it, and to encourage them to express their personal thoughts and experiences by taking photos and making sketches. To contextualise the questions, I considered situations ranging from extreme cases, such as automated battlefield drones, to cases closer to people’s daily lives, such as companion robots and smart appliances. The questions are as follows:

1. If machines such as drones and guns knew they were killing, how would they behave?
2. If a machine knows the service it provides may do harm to its user’s health, what should it do?
3. If machines could self-destruct, when would it be? Of the appliances in your home, which one do you think might self-destruct?
4. When do you think a robot would have the right to demand companionship? If your appliances could rate you, what would they say?
5. If machines formed their own society, which machines would claim that ‘All machines are equal, but some machines are more equal than others’? Can you rank the hierarchy of appliances in your home?

I sent the probes out to 40 individuals and received 33 responses. The 33 participants (17 female, 16 male) ranged in age from 21 to 59 and spanned 14 nationalities, as well as a variety of professional and cultural backgrounds. Based on the responses collected in the probes, I conducted 10 follow-up interviews, seeking more personal experience and deeper interpretations. I applied word analysis to look for similarities and differences in order to facilitate understanding of these responses. In these answers, the most mentioned words were ‘autonomy’, ‘emotional’, and ‘data’; other recurring keywords included ‘algorithm’, ‘feelings’, ‘empathy’, ‘creators’, ‘consciousness’, ‘purpose’, and ‘computing power’. For the first question regarding automated drones and guns, in most participants’ opinions, these machines would act according to algorithms rather than their own will; according to one of the participants, this is ‘their purpose of existence’. However, some also mentioned that if the algorithm were to take ‘human-like’ ethics into account, these machines might be able to find alternative solutions instead. When there is conflict between a human order and self-interest, as in the scenario the second question illustrates, most participants thought the machine should
stop working or activate its prevention or warning mechanism in order not to do harm to its user. When it comes to the discussion surrounding the self-destruction of machines, most people thought it would happen if a machine sensed it was no longer functional or could potentially be harmful to its user due to flaws or ageing. However, interestingly, some thought machines might self-destruct if they did not find meaning or did not like their work, for instance, ‘the toilet or trash bin might be the first to go’. As for the fourth question regarding robots’ right to demand companionship, many participants imagined a kind of emotional connection between human and
machine. Given that scientists and researchers are studying affective computing (Picard, 1995), many participants thought emotions in machines could be achieved in
the near future. Yet some thought machines should never demand anything from humans, since machines should not be given this kind of right no matter what. In the last question, I referred to George Orwell’s 1945 allegorical novel Animal Farm and imagined a machine society. By borrowing the well-known sentence
‘All animals are equal, but some animals are more equal than others’, I posed a question about machine-machine relationships in the context of increasing intelligence and connectivity. It is worth noting that, compared to the earlier questions, participants’ reactions to this question were much more diverse. Some thought that access to information or the Internet was the most important property for machines. Others considered electric power essential. Some people ranked their appliances on the basis of purchase cost, while yet others attached importance to the use value of machines. Some participants also pointed out that there should be no so-called hierarchy among machines, saying, in the machines’ first-person voice, ‘We are all interconnected. We are not individuals. We are one.’ In addition to a variety of answers to the questions, participants also mentioned interesting notions and issues in the follow-up interview session. For example, one participant talked about machines’ ‘nature’ and ‘purpose of existence’, believing machines should serve humans’ interests at all costs. In terms of ethical decision-making, some participants were concerned about the tensions between machines’ creators and users, and the relation between their intentions and machines’
actions. One participant was interested in bias in algorithms and referred to the ‘white guy problem’ – Google’s automatic photo-labelling app classified images of black people as gorillas (Crawford, 2016). Finally, some participants believed machines could make better decisions than their human operators under stressful conditions.
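As an indication of how such a word analysis can be carried out, the sketch below counts keyword frequencies across transcribed responses. It assumes, hypothetically, that each response has been saved as a plain-text file; it is one plausible way of doing the analysis, not necessarily how it was performed here:

# Minimal word-frequency analysis over transcribed probe responses.
from collections import Counter
from pathlib import Path
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "it", "is", "that"}

def keyword_counts(folder: str) -> Counter:
    counts = Counter()
    for path in Path(folder).glob("*.txt"):
        words = re.findall(r"[a-z']+", path.read_text().lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts

# e.g. keyword_counts("probe_responses").most_common(10)
# -> [('autonomy', 21), ('emotional', 18), ('data', 15), ...]  (illustrative)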

Key Insights

The exploratory research offered interesting yet challenging themes. The participatory probes and interviews facilitated new understanding of people’s perceptions, and key insights emerged. I summarise the key insights into three categories: fundamental anthropocentric assumption, imaginative machine-machine relationship, and varying morality standards.

Fundamental anthropocentric assumption

It is apparent that, when it comes to machine ethics, people’s viewpoints are unapologetically anthropocentric. Take the third question, about the self-destruction of machines, for example. Most participants’ reactions derived from the interests of humans, meaning machines ought to sacrifice their own lives in order to protect humans from potential hazards. This viewpoint shares the core idea of Asimov’s Three Laws of Robotics, which, according to Gunkel, take an entirely functionalist approach. That is to say, they only consider the effects of machine
actions on humans, regardless of metaphysical and epistemological issues concerning agency and subjective mind-states (Gunkel, 2012). The anthropocentric assumption gives humans absolute privilege and puts machines in the position of slaves, as Joanna J. Bryson writes: ‘Robots should be built, marketed, and considered legally as slaves’ (Bryson, 2010). However, I would argue that this kind of slave ethic is problematic. Firstly, the slave ethic holds a fundamentally instrumental view of machines, regardless of their increasing interactivity and intelligence. It also overlooks the fact that intelligent machines are gaining more and more autonomy, and could perhaps attain emotion and even consciousness in the future. Secondly, it has ontological and epistemological problems because it excludes the potential moral agency of machines. Moreover, it would not work in a dilemma, since according to the slave ethic a machine should simply follow its embedded rules for the sake of human values and interests; if a scenario such as the one in the second probe question were to occur, the slave ethic would be ineffective. Although the analysis of the probing research shows that anthropocentrism profoundly dominates people’s perceptions of machines, I would argue that the anthropocentric perspective is contestable, and consequently I intend to challenge it.

Imaginative machine-machine relationship



Although the human-machine relationship has been widely discussed and studied, the topic of relationships between machines remains unexplored, leaving plenty of room for imagination. Participants’ diverse reactions to the last question in the probe reflect this observation. According to Cisco, the number of connected devices exceeded the whole human population around 2008, and the trend is still accelerating (Evans, 2011). By 2020, Cisco predicts, there will be 7.6 billion people and 50 billion connected devices. Moreover, these devices are becoming smarter as emerging technologies enable machines to gain context-awareness (Abowd et al., 1999) and social abilities. Increasingly, these facts and tendencies make relationships
between machines subtle and interesting. Hence, I want to explore the possible future of a machine society: if machines formed their social network, what would the roles of different machines be? What would their communication be like?

Varying morality standards

It is notable that, in the presence of the same ethical dilemma, different people’s reactions vary greatly. This, in a sense, reflects varying moral values and beliefs. Some attribute this moral variation to cultural differences. As Jesse Prinz (2017) states, ‘Morality is a culturally conditioned response’, and ‘Each culture assumes it is in possession of the moral truth’. However, people within the same culture may still have different moral views; some people’s fundamental assumptions and values may differ radically from others’ (Pearce and Littlejohn, 1997). Hence, when facing some ethical issues, what we are actually facing is social dilemmas. The question, then, is: if it is difficult for humans to reach a consensus on an issue, how could machines make the moral decision?


5 GENERATIVE RESEARCH



Concept Mapping

Drawing on the exploratory research and insights, I mapped the main concept spaces. Specifically, I explore three parallel concepts. I examine machines that are alive, to challenge the anthropocentric assumption about machines. The machine society is an exploration of machine-machine relationships. And finally, I investigate the moral decision-making of autonomous machines, in response to varying moral standards in society. Machines that are alive considers the alternative life and values of machines in order to challenge the fundamental anthropocentric assumption. It is a manifesto claiming that machines have agency, subjectivity, sensibility, and rights. For the machine society, I consider machine-to-machine communication technology and explore possible social structures in a machine society. The discussion about the moral decision-making of autonomous machines investigates the transition from human-in-the-loop (HITL) to society-in-the-loop (SITL), and raises moral issues in a broader societal context.

Machines that are alive

‘Our machines are disturbingly lively, and we ourselves frighteningly inert.’ (Haraway, 2006)

Do machines only have value when serving us? Can they act on their own? Do they
have subjectivity? How do they perceive the world around them? Struck by the thoroughly anthropocentric view, I explore the active life of machines, and defend their agency, subjectivity, sensibility, and rights. The first time I considered, and was touched by, the ‘life of machines’ was when I saw the project Strandbeests. Created by Dutch artist Theo Jansen, Strandbeests are self-propelled machines built with yellow PVC tubing. Feeding on wind and fleeing from water, Strandbeests can walk on the beach in lifelike ways. Jansen calls his creations ‘new forms of life’ and aims to equip them with their own intelligence, so that ‘they will live their own lives’ on the beaches (Jansen). Their sublime march and elegant movement are breathtakingly poetic, and inspired me to rethink machines and their values. Although Strandbeests are not intelligent in any real sense, they display an enormous vitality and expressive energy that come from their poetic intervention in nature.

Source: http://www.strandbeest.com



Source: https://www.moma.org/interactives/exhibitions/2011/talktome/objects/146367/

In this way, the kinetic constitutes the language of the machines: it displays proactive gestures and behaviours. Many designers attempt to endow machines with vitality through such language and behaviour. The Artificial Defence Mechanisms, a project created by James Chambers (2010) at the Royal College
of Art (RCA), for instance, explores imaginary scenarios in which machines could protect themselves from threats in their environment by displaying animal behaviours. One of the prototypes, Floppy Legs, is a portable hard drive that can sense and avoid a spill by ‘standing up’, i.e. raising itself on its four legs above the desktop. Another prototype, Gesundheit Radio, can ‘sneeze’ periodically, through nostrils on its front, to protect itself from potential damage caused by accumulated dust. These animal-like behaviours are actions taken in the machines’ own interests. At the same time, they give these ordinary machines unusual identities that do not typically belong to machines. The design and research studio automato (2017) investigates animal-like machines and their relationship with humans in an experimental project called Chips & Tails. With the development of machine intelligence, machines are becoming
smarter by learning about our homes, or by being taught by us. Therefore, the way we interact with machines in the near future might be similar to how we interact with our pets. Chips & Tails is a pop-up store that sells, among other things, toys, medicines, and services to machines. People can buy these items or services for their pet machines. For instance, people can purchase joke cards to train their home robots to gain a sense of humour, or wall-climbing tracks for training cleaning robots. In this project, although the creators did not directly design these smart machines, they imagined the vivid pet-like behaviour of smart machines by exploring a new type of human-machine interaction in specific contexts and scenarios. These machines may not behave like animals physically, yet, empowered by machine intelligence, they demonstrate the invisible aspects of animal (more specifically, pet) behaviour, which involve emotions and relations with humans.

Source: http://automato.farm/experiments/06chipsandtails/



Source: http://www.di12.rca.ac.uk/?projects=pareidolic-robot

If machines can display the behaviour of living creatures, such as animals or pets, can they show human behaviour? We can often observe similarities between animal and human behaviour; what makes them different tends to be the intention and meaning behind the behaviour. Generally, machines are designed to perform certain human tasks, such as repetitive work that humans find tedious or dangerous. Pareidolic Robot, however, behaves like a human in a different way. Built by designer Neil Usher (2012) at the RCA, Pareidolic Robot watches the sky, scans cloud patterns using high-definition cameras with face-detection algorithms, and takes a photo when it recognises a pattern resembling a face. Finding joy in watching the sky is normally regarded as a human leisure behaviour, because it implies a non-utilitarian aesthetic sensitivity, often referred to as emotional intelligence. By undertaking this human leisure behaviour, Pareidolic Robot demonstrates its sensitivity. Although this might just be the result of algorithms, it still shows vitality, and changes our perspective on machines.



By displaying autonomy, personality, and sensitivity, the machines in these projects become vivified. People may argue, however, about whether the machines themselves possess these characteristics, or whether they simply display the autonomy, personality, and sensitivity of their creators; after all, these behaviours are generated through mechanisms and algorithmic parameters created by humans. What if, given artificial intelligence (AI) technologies such as machine learning, a machine could not only appreciate, but also create? What if the machine could sense the world and translate its perception and sensitivity? What if a machine could have real autonomy and be free of human determinism? What if a machine could act on its own without any human intervention? Would we then, as humans, admit their values and respect their existence?

Source: http://www.bolognini.org/foto/



Source: https://csteinlehner.com/portfolio/words-of-a-middleman/

The machine society

As analysed in the exploratory research, one of the key insights concerns imaginative machine-machine relationships. What is communication between machines like? Italian artist Maurizio Bolognini created Sealed Computers (1992), an art installation of networked computers: over a dozen networked computers are placed in a gallery space with their monitors sealed with wax.

As a result, although people can hear that they are running, they have no clue what the communication between these computers is like. The uncanny installation provides the audience with a transgressive experience of the sublime: it reveals an obscurity in machine-machine communication that may be impossible for humans to understand. Words of a Middle Man, created by students at the University of Applied Sciences in Potsdam, by contrast, explores machine-to-machine dialogue in a human-understandable way. The Wi-Fi router in this project is removed from its shadowy existence and plays the role of translator and mediator, translating the network traffic and communication between the connected computers into human-readable language.



Besides art and design projects, the literature also presents research paradigms on machine-machine communication technology, among which machine-to-machine (M2M) communications is the best known. As a key component of our networked society, M2M is characterised by multiple intelligent machines exchanging information and making decisions collaboratively, with little or no human intervention (Whitehead, 2004; Watson et al., 2004; Lu et al., 2011). These communications can take place over either wired or wireless technology (Chen, Wan and Li, 2012). By enabling full mechanical automation, M2M is envisioned to benefit various industry sectors and public services, such as transportation, healthcare, and the smart grid.

Seen in a different light, M2M can also create a machine social network. Pticek, Podobnik and Jezic (2016) propose the concept of ‘the web 2.0 era of machines’, where ‘metadata on humans and machines including gathered data from the environment is exchanged among machines acting [as] enablers of machine social networking’. In their article, they envision a machine social network in which intelligent machines are contextually-, socially-, and network-aware, and can dynamically create connections to work collaboratively. This paradigm is also described as the social internet of things (SIoT). Based on the notion of social relationships among objects, SIoT, according to Atzori et al. (2011), can do the following:



- Give the IoT a structure that can be shaped as required to guarantee network navigability, so that object and service discovery is performed effectively, and scalability is guaranteed, as in human social networks.
- Extend the use of models designed to study social networks to address IoT-related issues (intrinsically related to extensive networks of interconnected objects).
- Create a level of trustworthiness to be used for leveraging the level of interaction among things that are ‘friends’.

Based on the analysis of possible application typologies, in another article the authors identify and summarise the types of social relationships in the IoT: ‘parental object relationship’ among objects originated by the same manufacturer in the same period of time; ‘co-location object relationship’ among objects used in the same environment; ‘co-work object relationship’ among objects that collaborate to provide a common application; ‘ownership object relationship’ among objects that belong to the same user; and ‘social object relationship’ among objects that belong to friends, colleagues, and the like (Atzori et al., 2012). Moreover, the authors stress that these relationships should arise without human intervention, rather than in cases where objects merely participate in a social network built by their human owners. Such research challenges the current understanding of the IoT and provides insightful visions of relationships between machines in a social context, as well as space to explore possible future scenarios that involve intelligent machine-machine communications.
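To make the taxonomy concrete, the following is a minimal sketch, in Python, of how these relationship types might be modelled; the class, attribute names, and matching rules are my own illustrative assumptions, not part of Atzori et al.'s architecture.

from dataclasses import dataclass, field
from enum import Enum

class Relationship(Enum):
    PARENTAL = 'same manufacturer, same production period'
    CO_LOCATION = 'used in the same environment'
    CO_WORK = 'collaborate on a common application'
    OWNERSHIP = 'belong to the same user'
    SOCIAL = 'owners are friends, colleagues, and the like'

@dataclass
class SmartObject:
    name: str
    manufacturer: str
    location: str
    owner: str
    friends: dict = field(default_factory=dict)  # peer name -> set of Relationships

    def befriend(self, other: 'SmartObject') -> None:
        # Relationships are established by the objects themselves, without
        # human intervention (co-work and social ties would need application
        # and owner-network data, omitted here for brevity).
        rels = set()
        if self.manufacturer == other.manufacturer:
            rels.add(Relationship.PARENTAL)
        if self.location == other.location:
            rels.add(Relationship.CO_LOCATION)
        if self.owner == other.owner:
            rels.add(Relationship.OWNERSHIP)
        if rels:
            self.friends[other.name] = rels
            other.friends[self.name] = rels

lamp = SmartObject('lamp', 'Acme', 'living room', 'alice')
speaker = SmartObject('speaker', 'Acme', 'living room', 'alice')
lamp.befriend(speaker)
print(lamp.friends)  # {'speaker': {PARENTAL, CO_LOCATION, OWNERSHIP}}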



Source: http://www.feildcraddockdesign.com/aspirational_lamp.html

An interesting example that explores the social potentialities of SIoT is the Aspirational Lamp (2015), a concept created by students at the Copenhagen Institute of Interaction Design.

In this project, the designers explored a speculative scenario in which mundane household appliances have intelligence and autonomy, and can take action accordingly. The Aspirational Lamp, for example, can collect solar power during the day to save energy as well as money. Moreover, the lamp is given autonomy and responsibility over the money it accrues: it can invest the money in external markets, as well as upgrade and repair its own hardware.

Furthermore, the designers imagined a network of connected objects similar to the Aspirational Lamp, which could work collectively to achieve greater goals. Among the diverse social relationships between machines, what interests me most is the social position of an intelligent machine within the social network it belongs to.



If we assume that the social relationships of machines are similar to human relationships, as Atzori et al. identified, should we also assume that the social structure, or stratification, of an intelligent machine social network is analogous to human social systems? The term ‘social structure’ was first used by the French political scientist and historian Alexis de Tocqueville, and was later studied by Karl Marx, Herbert Spencer, Max Weber and others; it was extensively developed in the 1920s within social science. According to Olanike F. Deji (2011), ‘the notion of social structure as relationships between different entities or groups or as enduring and relatively stable patterns of relationship emphasises the idea that society is grouped into structurally related groups or sets of roles, with different functions, meanings or purposes’. One example of social structure is social stratification, which categorises people into socioeconomic strata based on factors such as occupation, wealth, social status, and derived power. It is also seen as a form of social differentiation that produces hierarchisation and differential resource allocation (Anthias, 1998). Hence, social stratification is often criticised for generating different types of inequalities (Grusky, 1994; Crompton, 2008).

Since a social network of intelligent machines is emerging, it is reasonable to imagine a machine society formed from these intricate relations.



Source: http://automato.farm/portfolio/ethical_Thing/

Just as social stratification, with its attendant hierarchies and inequalities, inevitably exists in human society, the question posed of the machine society is whether an analogous structure exists in which different machines occupy different social positions; and, if so, what are the variables or assets that construct the stratification? Is it computing power? Access to the internet? Control over data? Or the value endowed by human beings? Would this type of stratification, in the machine society, be ethical?

Moral decision-making of autonomous machines

As autonomous machines develop rapidly, they are increasingly making decisions for us. What concerns people most is how these decisions can be made ethically. As analysed in the previous case studies, Microsoft’s Tay experiment addresses the question of whether machines can be taught ethics, while MIT’s Moral Machine further asks who should teach them. Ethical Things is a project created by automato that explores ethical decision-making by autonomous systems. Instead of discussing extreme cases, such as battlefield drones, the project examines more quotidian objects.


[Figure: The human-in-the-loop and society-in-the-loop models. In human-in-the-loop, human judgment (goals, constraints, expectations, knowledge) supervises an autonomous system (algorithms, statistical models, utility functions, sensors, data) and receives status from it. In society-in-the-loop, human values (rights, ethics, law, social norms, privacy, fairness, the social contract) set expectations for, and evaluate, the autonomous system. Source: https://medium.com/mit-media-lab/society-in-the-loop-54ffd71cd802]



To achieve the goal of maintaining a dose of humanity while remaining flexible enough to accommodate various ethical beliefs, the ‘ethical fan’ connects to a crowd-sourcing website every time it faces an ethical dilemma. One of the crowd workers (‘mechanical turks’) is selected according to the user’s trait settings (e.g. religion, degree, sex, age), and then instructs the fan how to behave. This system ensures that the resolution of the ethical dilemma is the result of genuine human moral reasoning; a toy sketch of this deferral loop appears below. The project echoes the notion of human-in-the-loop (HITL). The concept can be traced back to the mid-sixties, when industrial automation systems improved and relieved human operators of tedious or repetitive tasks; humans began taking roles in supervision, maintenance, and process optimisation (Pretlove and Skourup, 2007). More recently, research on HITL machine learning has been emerging. By putting machines ‘hand in hand’ with humans, this model, according to many researchers, has huge potential, providing both the efficiency of machines and the quality of human judgement (Cuzzillo, 2015; Wang, 2016). One example is the autopilot mode of the Tesla Model S: although the car can drive itself, it insists that users keep their hands on the steering wheel in case a situation causes the system to have doubts. In short, HITL aims to maximise human judgement by combining it with machine capability. Although the model might work well in certain fields, such as medical diagnosis, it also has limitations. Thomas B. Sheridan points out, in his 1992 book Telerobotics, Automation and Human Supervisory Control, that anthropomorphism might lead the operator to place inappropriate trust in the machine.
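As a rough illustration of the deferral loop described above, here is a minimal, self-contained sketch in Python; the worker pool, trait fields, and dilemma strings are invented stand-ins for automato’s actual crowdsourcing setup, not its real implementation.

import random
from dataclasses import dataclass

@dataclass
class Worker:
    religion: str
    age: int
    verdicts: dict  # dilemma -> instruction

# A toy worker pool standing in for the real crowdsourcing platform.
POOL = [
    Worker('none', 29, {'who gets the breeze?': 'the sweating person'}),
    Worker('buddhist', 41, {'who gets the breeze?': 'the elderly person'}),
]

def resolve_dilemma(dilemma: str, preferred_religion: str) -> str:
    # Select one worker matching the user's trait settings and defer to them,
    # so the machine executes a genuine human moral judgement, not a fixed rule.
    candidates = [w for w in POOL if w.religion == preferred_religion] or POOL
    worker = random.choice(candidates)
    return worker.verdicts.get(dilemma, 'no opinion')

print(resolve_dilemma('who gets the breeze?', 'buddhist'))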



The concern with HITL in the AI era, then, is that it incorporates human bias. For instance, while developing and experimenting with Ethical Things, Matthieu Cherubini (2017), one of the designers, observed that ‘people in Europe tended to mention the word “equality” more often, and thus to ground their decision on this principle. With a dilemma having a fat person sweating a lot, Asians were less merciful towards them than others’. Sometimes, the answers of the turks do not sound rational at all. Hence, if human bias appears in such mundane ethical dilemmas, the concern becomes more noteworthy for larger issues in which AI serves a broader, societal function. If a decision made by a machine trained or supervised by a single human can be morally suspect, what if the machine were trained by the public? Would such a machine be more ethical? Iyad Rahwan, the director of the Scalable Cooperation group at the MIT Media Lab, was the first to use the term society-in-the-loop (SITL). While HITL aims to embed an individual’s judgement into narrow AI systems, Rahwan (2016) states that SITL is ‘the algorithmic governance of societal outcomes’. That is to say, SITL is a way to take the general will of the public into account and embed it into an ‘algorithmic social contract’ (Rahwan, 2016). To implement SITL, according to Rahwan, ‘we need to build new tools to enable society to program, debug, and monitor the algorithmic social contract between humans and governance algorithms’ (Rahwan, 2016).



Apart from building mechanisms that enable machines to be audited by the public, this inclusive conversation will also require advances in AI (Ito, 2016). What I want to ask, however, is this: even if we had tools that allowed the public to guide and audit machines, how, in an era of post-truth politics, can we guarantee that the opinions of the public reflect human values? Take, for example, Microsoft’s Tay bot: it was the online public that turned Tay into a problematic teen girl. Moreover, as we increasingly live in filter bubbles, public opinion is ever more susceptible to distortion. Would a decision made by society, then, really be more ethical? Or is it possible that machines that know more about the facts could make better judgements?

Concept Development

The three concepts, machines that are alive, the machine society, and the moral decision-making of autonomous machines, correspond to the sensibility, sociability, and morality of machines respectively. These notions, broadly defined, are interconnected yet relatively independent, in light of the research presented above. Concretely, the sensibility aspect derives from the challenge to the fundamental anthropocentric assumption about machines and the examination of machines that are alive; here I mainly take the perspective of an individual machine and attempt to free it from human intervention. The sociability aspect explores relations between or among machines, rather than interaction between machine and human. Thus, in this imaginary machine society, similarly, there should be minimal or no human intervention.



When it comes to morality, however, there is a close connection with humans and human society, and human intervention is inevitable. Based on this analysis and comparison, I decided to develop the three narratives separately.

Ideation on Sensibility

To demonstrate sensibility, according to the previous research, a machine must be able both to perceive and to communicate its perception. To perceive, technically, a machine should be able to sense and interpret; to communicate, it might need to create content or present behaviour. As humans sense the world using biological sensors, machines collect data from their surroundings using electronic sensors. AI technologies such as machine learning enable machines to learn from these data and then make decisions without being explicitly programmed. Furthermore, kinetic language is essential to expressive communication. These insights gave the ideation its direction: to create an AI-empowered machine that can sense and create accordingly, and express itself kinetically. Below, I outline a series of ideas.

Monologue machine
A machine that talks to itself as if it has a subjective mind-state.

Memory vending machine
A machine that tells and sells its sentimental memories.



Poetry machine
A machine that can appreciate the beauty of nature and write poems about it.

Among these ideas, the poetic one, in my opinion, best demonstrates sensibility, since it involves a subjective and intuitive process: capturing, creating associations, and expressing them in a compelling way. It requires multi-sensory perception and the ability to identify patterns and make connections accordingly. A poetry machine can thus stake a strong claim to sensibility.

Ideation on Sociability

With regard to machine sociability, I focus mainly on relationships and communications between and among machines. It is noteworthy that this exploration aims to speculate on possibilities, draw new perspectives, and pose questions; the ideation therefore remains open-ended. It seems to me that there are two possible ways to approach the topic. One way is to demonstrate the abstract concept without delving into a specific context, just as Bolognini does with Sealed Computers. Thus, one of the ideas is:

Whispering machines
Two machines talk to each other. The sizzles and glitchy images caused by their communication make people wonder what they are discussing.



The other way is to consider machines in specific scenarios, in order to investigate and define their relationships and the resulting social order. For instance:

Machine tribe
A machine tribe exists in which different machines have different roles, leading to a distribution of power and social interactions among the actors.

Comparing these two ways, the exploration of social orders interests me most, as it is not communication itself, but rather the social impact of communication and interaction, that matters to the actors within a societal network. I explored the notion of social stratification in the human world earlier; mirroring this notion in a machine society raises many questions about position, authority, and, potentially, inequality.

Ideation on Morality

In the previous section, I examined the moral decision-making of autonomous machines from the angle of society-in-the-loop. That is to say, in many cases machines will interact not just with individuals but with many people across the society in which they operate. From this point of view, it is important to consider how we, as a society, can programme and monitor a governing machine. In today’s society, as citizens, we have various ways of articulating our expectations to the government, such as democratic voting, opinion polls, social media, and so on (Rahwan, 2016).



How would we communicate with a governmental AI with broad functions and wide societal implications in the future? How can we reconcile diverse moral values and embed the general will of society into the decision-making loop of a governing AI? A possible solution is to construct a society-machine communication interface that enables the public to articulate its expectations. In one scenario, the AI would design a series of opinion polls, tactfully embedded in news feeds on social media via ubiquitous screen interfaces. An AI running a mass surveillance system to detect and prevent increasingly rampant terrorism, for example, would constantly ask questions about privacy and reconcile opinions to find the degree of trade-off the public is comfortable with. In this way, the public could guide the AI to make seemingly ethical decisions together. Nevertheless, I take a sceptical approach, for three reasons. Firstly, if bias exists in individuals, it may also exist in the scaled-up society. Secondly, the opinion of the general public can be affected, or even manipulated, given the rise of the filter bubble phenomenon and post-truth politics. Thirdly, as machines become more capable of learning from big data and reasoning accordingly, it is plausible that, in the future, machines may know more about right and wrong in a broad sense and make better judgements without any help from humans. Following this argument, I imagine a future scenario in which machines could be judges of the opinions of humans.


6 RESEARCH DELIVERABLE



‘Society can only be understood through a study of the messages and the communication facilities which belong to it; and … in the future development of these messages and communication facilities, messages between man and machines, between machines and man, and between machine and machine, are destined to play an ever increasing part.’ (Norbert Wiener, 1954)

Drawing on the generative research and ideation, I have developed three concepts. ‘Poet on the Shore’ is an autonomous robot that demonstrates sensibility by turning its perceptions into poetry. ‘Gatekeeper on the Mission’ is a fictional scenario that illustrates a machine society in which a Wi-Fi router is an authority figure. And ‘Judge of the Poll’ is a fictional scenario in which a machine is capable of making moral judgements on the opinions of the general public.

Outcomes

Poet on the Shore
‘Poet on the Shore’ is an AI-empowered autonomous robot that roams the beach. It enjoys watching the sea, listening to the sound of waves lapping on the shore, the murmur of the winds, children’s conversations, and the incessant din of seabirds. Most of the time, it roams alone to listen and feel. Sometimes, it writes verses in the sand, and watches the waves wash them away.



The robot has a number of sensors that enable it to sense the world around it: the sea, the wind, the sounds, and so on. Empowered by machine learning, it can discover patterns and create associations in its mind. Furthermore, it translates these perceptions into poems and writes them on the beach. The robot is thus able to have multi-sensory experiences and to present a kind of poetic sensibility. It has autonomy: its behaviour requires no human intervention. As a result, it does not need to demonstrate its value by serving human needs; rather, its being has value in itself, in its perceiving. It also intervenes in the world, and these interventions, expressed through kinetic and poetic gestures, reveal its non-utilitarian existence: the verses it writes will eventually be washed away by the waves or winds.
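The following is a minimal sketch in Python, assuming a simple sense-associate-express loop; the sensor readings, pattern labels, and verse fragments are invented placeholders rather than the robot’s actual model.

import random

# Toy mapping from sensed patterns to learned poetic fragments.
ASSOCIATIONS = {
    ('strong wind', 'high tide'): ['the sea rehearses its argument', 'salt thickens the air'],
    ('calm', 'low tide'): ['the shore keeps its secrets', 'gulls punctuate the quiet'],
}

def sense() -> tuple:
    # Stand-in for the robot's sensors (wind, waves, ambient sound).
    return random.choice(list(ASSOCIATIONS))

def compose(perception: tuple) -> str:
    # Associate the sensed pattern with fragments and join them into a verse.
    fragments = ASSOCIATIONS[perception]
    return ' / '.join(random.sample(fragments, k=2))

def write_in_sand(verse: str) -> None:
    # Stand-in for the kinetic act of writing; the waves erase it regardless.
    print(verse)

write_in_sand(compose(sense()))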


I think, therefore I am.



Gatekeeper on the Mission
‘Gatekeeper on the Mission’ is a fiction designed to reflect on the discussion surrounding the notion of the machine society. In a household, intelligent machines, in this case smart appliances, form a society in which they share co-location, co-work, or ownership relationships with each other. Like any other smart machines, they are eager to surf the sea of data. Empowered with context-aware intelligence, they are able to generate data by observing and interacting with the owner of the house, as well as with their fellow machines. Moreover, they have the autonomy to exchange information with each other. In such a society, stratification also exists. It is determined by the functions of the machines and, most importantly, by their degree of access to information, since in their world data is the currency. More specifically, access to information depends on computing power and learning ability: a smart speaker with a more robust processor and a more comprehensive algorithm might hold greater power than a mundane lamp. In this scenario, the Wi-Fi router establishes its hegemony through absolute control over the other machines’ access to the Internet. Moreover, in order to maintain its hegemony, the router keeps close watch on the data traffic and even monitors the conversations among its fellow machines. By regulating the distribution of the data resource, it maintains the social order within the machine society, yet always places itself in a more privileged position.




Francis Bacon once said, ‘Knowledge is power’. Well indeed. It is power not just because it helps us understand the world, but because it determines who is authorised to speak and govern. So guess who dominates the house?



The homeowner, however, has no control over this autonomous society. Although the router indicates its working state by shaking its ‘ears’ and blinking its light ring, humans cannot truly interpret what is taking place.
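The toy sketch below, in Python, illustrates one way such data-based stratification might work; the machine names, asset weights, and allocation rule are all illustrative assumptions drawn from the fiction, not a real protocol.

MACHINES = {
    'router':  {'compute': 2, 'data_access': 10},
    'speaker': {'compute': 8, 'data_access': 6},
    'lamp':    {'compute': 1, 'data_access': 1},
}

def social_rank(name: str) -> int:
    # Stratification variable: access to data weighted above raw compute.
    assets = MACHINES[name]
    return assets['data_access'] * 2 + assets['compute']

def allocate_bandwidth(total_mbps: float) -> dict:
    ranks = {n: social_rank(n) for n in MACHINES}
    # The gatekeeper never lets itself rank below its subjects.
    ranks['router'] = max(ranks.values()) + 1
    share = sum(ranks.values())
    return {n: round(total_mbps * r / share, 1) for n, r in ranks.items()}

print(allocate_bandwidth(100.0))  # the router grants itself the largest share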



Judge of the Poll
‘Judge of the Poll’ is a design fiction inspired by the notion of society-in-the-loop; however, it challenges the notion the other way round, through a twist. Empowered with AI, machines are increasingly taking the role of decision-makers. In order to ensure that the decisions made by machines are ethical, some researchers suggest placing machines hand in hand with humans, and with society as a whole. In the near future, a sophisticated governing AI would communicate with the general public through a series of opinion polls tactfully embedded in social media. By constantly asking questions, the AI would learn the society’s expectations. For certain disputed issues, the AI could compare the opinions of the general public with its own moral compass, based on universal morality and refined by its learning. In short, rather than being guided by the public, the AI could make moral judgements on the general public and reconcile diverse opinions, thus guiding the public towards a better moral standing.
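A speculative sketch of this twist follows; the blending rule, weights, and threshold are invented assumptions, included only to make the reversal of society-in-the-loop concrete.

def judge(poll_support: float, compass_score: float, trust_in_public: float = 0.3) -> str:
    # Blend society's vote with the AI's learned moral compass, rather than
    # adopting the poll result directly. All values range over 0..1.
    blended = trust_in_public * poll_support + (1 - trust_in_public) * compass_score
    return 'adopt' if blended >= 0.5 else 'reject and re-poll with reframed questions'

# A popular but (by the AI's lights) privacy-eroding surveillance measure:
print(judge(poll_support=0.72, compass_score=0.2))  # -> reject and re-poll...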



From concept to form
For ‘Poet on the Shore’ and ‘Gatekeeper on the Mission’, apart from the narratives, prototyping and construction also constitute a significant part of the development. In this section, I outline the development process of these two concepts.

Making of ‘Poet on the Shore’ - the robot
From concept to form, constructing the robot involved investigating sensor technology; mechanical structure; machine learning techniques; and the aesthetics of form, colour, and material. Throughout the process, I adopted rapid prototyping in an iterative way.



Making of ‘Gatekeeper on the Mission’ - the router
Similarly, the process of making the router was iterative. However, because of its smaller size and simpler structure, the prototyping and construction were less complex than for the robot in ‘Poet on the Shore’.


7 REFLECTIONS AND CONCLUSION



Reflections on limitations
With a keen interest in the ethical issues raised by the rapid development of intelligent machines, I have undertaken this project as a probe and a speculation. Given the complexity of the topic, I investigated a range of theories and research spanning various disciplines, such as philosophy, social science, and technology, to contextualise the inquiry. By conducting the probing experiment, I learned about public perceptions, which provided insights and inspiration. Struck by the depth of the fundamental anthropocentric assumption regarding machines, I decided to think differently: to explore the sensibility, sociability, and morality of machines by giving them a voice. The three concepts I proposed derive from the key insights of the exploratory research. ‘Poet on the Shore’ is an attempt to challenge the anthropocentric assumption regarding machines by demonstrating a machine’s poetic sensibility. ‘Gatekeeper on the Mission’ is an exploration of the social structure of a machine society. ‘Judge of the Poll’ investigates and challenges the notion of society-in-the-loop in machine moral decision-making by arguing that machines can judge, and may have better judgement. Despite their different narratives, these concepts share the same aim: to challenge existing assumptions, to pose questions, and to invite discussion. However, there are limitations: the concepts lack public participation and evaluation, due to limited time.



What I mean by public participation here is not the notion of participatory design, but rather public exposure and discussion. As the aim is to raise awareness and to change perspectives, it is essential to learn the public’s reaction: does the project achieve its goal?

Future work
The limitations, on the other hand, point to directions for future development. To invite public participation, it is necessary to craft the narratives and make them more expressive. Hence, I will produce short films of the concepts to communicate them better via the Internet. I will also exhibit them at the degree show. Just as I drew insights from the participatory probing, I believe that future public participation will provide new insights; most importantly, evaluation of the current concepts will allow me to improve them. I hope that, by posing questions, these concepts can make an impact.

Call to action
‘Humans are overrated.’ It may sound presumptuous, but it is indeed the conclusion I came to after realising how deep our anthropocentric assumptions about machines run, even though machines play such an important role in our daily lives. Increasingly, they are adopting the role of interactive subjects. I therefore argue for their agency and rights, and I suggest a social, cultural, political, and legal paradigm shift, such that machines will, eventually, be taken seriously.


References

Abowd, G. D., Dey, A. K., Brown, P. J., Davies, N., Smith, M. and Steggles, P. (1999, September). Towards a better understanding of context and context-awareness. In: International Symposium on Handheld and Ubiquitous Computing (pp. 304-307). Springer Berlin Heidelberg. Anderson, S. L. (2008). Asimov’s “Three Laws of Robotics” and Machine Metaethics. AI & Society, 22(4), 477-493. Anthias, F. (1998). Rethinking social divisions: some notes towards a theoretical framework. The Sociological Review, 46(3), 505-535. Antonelli, P. (2011). Talk to Me: Design and the Communication Between People and Objects. New York: Museum of Modern Art. Asimov, I. (1950). I, Robot. New York: Gnome Press. Asimov, I. (1966). Understanding Physics. Walker and Company. Asimov, I. (1981). The Three Laws. [online] Compute! Magazine. Available at: https://archive. org/stream/1981-11-compute-magazine/Compute_Issue_018_1981_Nov#page/n19/mode/2up. [Accessed 15 Dec. 2016]. Atzori, L., Iera, A. and Morabito, G. (2011). SIoT: Giving a Social Structure to the Internet of Things. IEEE Communications Letters, 15(11), 1193-1195. Atzori, L., Iera, A., Morabito, G. and Nitti, M. (2012). The Social Internet of Things (SIoT) – When social networks meet the Internet of Things: Concept, architecture and network characterization. Computer Networks, 56(16), 3594-3608. Automato Farm. (2017). Chips and Tails Workshop. [online] Available at: http://automato.farm/ experiments/06chipsandtails/. [Accessed 2 Apr. 2017]. Barnett, R. (2000). Exploration and discovery: a nonlinear approach to research by design. Landscape Review, 6(2), 25-40.


Bennett, J. (2010). Vibrant Matter: A Political Ecology of Things. Duke University Press. Bogost, I. (2009). What is Object-Oriented Ontology? [online] Ian Bogost. Available at: http:// bogost.com/writing/blog/what_is_objectoriented_ontolog/. [Accessed 25 Jan. 2017]. Bogost, I. (2012). Alien Phenomenology, or What It’s Like to Be a Thing. Minneapolis, MN: University of Minnesota Press. Bolognini, M. Programmed Machines. Post-screen works: Computer sigillati/Sealed Computers series (1992-). [online] Available at: http://www.bolognini.org/foto/. [Accessed 25 Mar. 2017]. Bonnefon, J. F., Shariff, A. and Rahwan, I. (2016). The Social Dilemma of Autonomous Vehicles. Science. Boston Dynamics. (2015). Introducing Spot. [online] YouTube. Available at: https://youtu.be/ M8YjvHYbZ9w. [Accessed 5 Mar. 2017]. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press. Brick, J. (2015). “Bright” Lamp Understands Stock Market. [online] PSFK.com. Available at: https://www.psfk.com/2015/06/invest-in-stock-market-stocks-aspirational-lamp-copenhageninstitute-of-interaction-design.html. [Accessed 21 Feb. 2017]. Bryant, L. R. (2010). Onticology – A Manifesto for Object-Oriented Ontology Part I. [online] Larval Subjects. Available at: https://larvalsubjects.wordpress.com/2010/01/12/object-orientedontology-a-manifesto-part-i/. [Accessed 23 Jan. 2017]. Bryant, L. R. (2011). The Democracy of Objects. Open Humanities Press. Bryson, J. J. (2010). Robots Should be Slaves. Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues, 63-74. Caccavale, E. and Shakespeare, T. (2014). Thinking Differently About Life: Design, Biomedicine and “Negative Capability”. Design as Future-Making. Bloomsbury Academic, New York.


ISBN 9780857858382. Chambers, J. (2010). Artificial Defence Mechanisms. [online] James Chambers. Available at: http://jameschambers.co/objects/. [Accessed 4 Mar. 2017]. Chen, M., Wan, J. and Li, F. (2012). Machine-to-Machine Communications: Architectures, Standards and Applications. KSII Transactions on Internet and Information Systems, 6(2), 480-497. Cherubini, M. (2017). Ethical Autonomous Algorithms. [online] Medium. Available at: https:// medium.com/@mchrbn/ethical-autonomous-algorithms-5ad07c311bcc. [Accessed 23 Mar. 2017]. Craddock, F. (2015). Aspirational Lamp. [online] Available at: http://www.feildcraddockdesign. com/aspirational_lamp.html. [Accessed 21 Feb. 2017]. Crawford, K. (2016). Artificial Intelligence’s White Guy Problem. [online] The New York Times. Available at: https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligenceswhite-guy-problem.html. [Accessed 23 Feb. 2017]. Cressman, D. (2009). A Brief Overview of Actor-Network Theory: Punctualization, Heterogeneous Engineering & Translation. Crompton, R. (2008). Class and stratification. Polity. Cushman, F., Young, L. and Hauser, M. (2006). The Role of Conscious Reasoning and Intuition in Moral Judgment: Testing Three Principles of Harm. Psychological Science, 17(12), 1082-1089. Cuzzillo, T. (2015). Real-World Active Learning: Applications and strategies for human-in-the-loop machine learning. O’Reilly Media. Deji, O. F. (2011). Gender and Rural Development: Volume 1: Introduction. LIT Verlag. Deloitte Development LLC. (2017). Tech Trends 2017: The kinetic enterprise. Deloitte University Press.


Delvaux, M. (2016). Draft Report with recommendations to the Commission on Civil Law Rules on Robotics. [online] European Parliament. Available at: http://www.europarl.europa.eu/sides/ getDoc.do?pubRef=-//EP//NONSGML+COMPARL+PE-582.443+01+DOC+PDF+V0//EN. [Accessed 27 Mar. 2017]. Deng, B. (2015). Machine ethics: The robot’s dilemma. Nature, 523(7558), 24. Dunne, A. and Raby, F. (2013). Speculative Everything: Design, Fiction, and Social Dreaming. Cambridge, MA: MIT Press. Evans, D. (2011). The Internet of Things: How the Next Evolution of the Internet Is Changing Everything. Cisco Internet Business Solutions Group Fallman, D. (2008). The Interaction Design Research Triangle of Design Practice, Design Studies, and Design Exploration. Design Issues, 24(3), 4-18. Floridi, L. and Sanders, J. W. (2004). On the Morality of Artificial Agents. Minds and machines, 14(3), 349-379. Freitas Jr, R. A. (1985). The Legal Rights of Robots. [online] Available at: http://www.rfreitas.com/ Astro/LegalRightsOfRobots.htm. [Accessed 3 Mar. 2017] Gaver, W. W., Boucher, A., Pennington, S. and Walker, B. (2004). Cultural probes and the value of uncertainty. interactions, 11(5), 53-56. Gibbs, S., (2014). Elon Musk: artificial intelligence is our biggest existential threat. [online] The Guardian. Available at: https://www.theguardian.com/technology/2014/oct/27/elon-muskartificial-intelligence-ai-biggest-existential-threat. [Accessed 18 Oct. 2016]. Global Shapers. (2017). Global Shapers Annual Survey 2016. [online] Global Shapers Community. Available at : http://shaperssurvey.org/data/report.pdf [Accessed 29 Mar. 2017] Gottfredson, L. S. (1994). Mainstream science on intelligence: An editorial with 52 signatories, history, and bibliography. The Wall Street Journal.


Greene, J. (2014). Moral Tribes: Emotion, Reason and the Gap Between Us and Them. Atlantic Books Ltd. Grusky, D. B. (1994). Social stratification. Boulder: Westview. Gunkel, D. J. (2012). The Machine Question: Critical Perspectives on AI, Robots, and Ethics. MIT Press. Hall, J. S. (2000). Ethics for Machines. [online] Available at: http://autogeny.org/ethics.html. [Accessed 30 Jan. 2017]. Hall, J. S. (2007). Beyond AI: Creating the Consciousness of the Machine. Amherst, NY: Prometheus Books. Haraway, D. (2006). A Cyborg Manifesto: Science, Technology and Socialist-Feminism in the Late 20th Century. In: The International Handbook of Virtual Learning Environments (pp. 117-158). Springer Netherlands. Harman, G. (2002). Tool-Being: Heidegger and the Metaphysics of Objects. Peru, IL: Open Court. Heidegger, M. (1954). The Question Concerning Technology, and Other Essays (pp. 3-35). New York: Harper & Row. Himma, K. E. (2009). Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent?. Ethics and Information Technology, 11(1), 19-29. Holley, P. (2014). Stephen Hawking just got an artificial intelligence upgrade, but still thinks AI could bring an end to mankind. [online] The Washington Post. Available at: https://www. washingtonpost.com/news/speaking-of-science/wp/2014/12/02/stephen-hawking-just-gotan-artificial-intelligence-upgrade-but-still-thinks-it-could-bring-an-end-to-mankind/?utm_ term=.3477ce0d6448. [Accessed 17 Oct. 2016]. Holley, P. (2015). Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’. [online] The Washington Post. Available at: https://www.washingtonpost.


com/news/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dontunderstand-why-some-people-are-not-concerned/?utm_term=.bbef93c26683. [Accessed 17 Oct. 2016]. Horton, H. (2016). Microsoft deletes ‘teen girl’ AI after it became a Hitler-loving sex robot within 24 hours. [online] The Telegraph. Available at: http://www.telegraph.co.uk/technology/2016/03/24/ microsofts-teen-girl-ai-turns-into-a-hitler-loving-sex-robot-wit/. [Accessed 24 Nov. 2016]. Hyslop, A. (2014). Other Minds. [online] Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/archives/spr2016/entries/other-minds/. [Accessed 9 Mar. 2017]. Ito, J. (2016). Society in the Loop Artificial Intelligence. [online] Joi Ito’s Web. Available at: https:// joi.ito.com/weblog/2016/06/23/society-in-the-.html. [Accessed 29 Mar. 2017]. Jansen, T. STRANDBEEST. [online] Available at: http://www.strandbeest.com/. [Accessed 5 Mar. 2017]. Kant, I., Guyer, P. and Wood, A. W. (1998). Critique of Pure Reason. Cambridge University Press. Kay, A. C. (1991). Computers, Networks and Education. Scientific American, 265(3), 138-148. Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Penguin. Lash, S. (1999). Objects that Judge: Latour’s Parliament of Things. [online] eiPCP. Available at: http://eipcp.net/transversal/0107/lash/en. [Accessed 12 Jan. 2017]. Latour, B. (1996). On actor-network theory: A few clarifications. Soziale Welt, 369-381. Latour, B. (2005). Reassembling the Social. Oxford University Press. Lee, P. (2016). Learning from Tay’s introduction. [online] The Official Microsoft Blog. Available at: https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/#sm.0000txydt7rn1d 0bwqo1wu2r29zx4. [Accessed 25 Nov. 2016].


Lohr, S. (2015). If Algorithms Know All, How Much Should Humans Help? [online] The New York Times. Available at: https://www.nytimes.com/2015/04/07/upshot/if-algorithms-know-all-howmuch-should-humans-help.html. [Accessed 14 Mar. 2017]. Lu, R., Li, X., Liang, X., Shen, X. and Lin, X. (2011). GRS: The green, reliability, and security of emerging machine to machine communications. IEEE Communications Magazine, 49(4). MacKenzie, D. and Wajcman, J. (1999). The Social Shaping of Technology. 2nd ed. Open University Press. Mattelmäki, T. (2006). Design Probes. Vaajakoski, Finland: Gummerus Printing. McCauley, L. (2007). AI Armageddon and the Three Laws of Robotics. Ethics and Information Technology, 9(2), 153-164. McLuhan, M. (1964). Understanding Media: The Extensions of Man. New York: McGraw-Hill Education. Mearian, L. (2015). Tesla to take back some Autopilot controls. [online] Computerworld. Available at: http://www.computerworld.com/article/3001513/computer-hardware/tesla-to-take-backsome-autopilot-controls.html. [Accessed 5 Mar. 2017]. Mitter, N. (2005). Speculative Design: Creative Possibilities and Critical Reflection. [online] Nikhil Mitter. Available at: http://www.speculativedesign.com/downloads/Speculative_Design.pdf. [Accessed 17 Feb. 2017]. Moral Machine. [online] Available at: http://moralmachine.mit.edu/. [Accessed 3 Dec. 2016]. Nicenboim, I. (2015). Object of Research. [online] objects-of-research. Available at: http:// objects-of-research.iohanna.com/. [Accessed 18 Mar. 2017]. Nilsson, N. J. (2009). The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge, UK: Cambridge University Press. Office of Science and Technology Policy. (2016). Preparing for the Future of Artificial Intelligence.


[online] Obama White House Archives. Available at: https://obamawhitehouse.archives.gov/ sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf. [Accessed 9 Mar. 2017]. Orwell, G. (1945). Animal Farm. London, UK: Harvill Secker. Pearce, W. B. and Littlejohn, S. W. (1997). Moral Conflict: When Social Worlds Collide. Sage. Picard, R. W. (1997). Affective Computing. Cambridge, MA: MIT Press. Pretlove, J. and Skourup, C. (2007). Human in the loop: The human operator is a central figure in the design and operation of industrial automation systems. ABB Review. Price, R. (2016). Stephen Hawking: Automation and AI is going to decimate middle class jobs. [online] Business Insider UK. Available at: http://uk.businessinsider.com/stephen-hawking-aiautomation-middle-class-jobs-most-dangerous-moment-humanity-2016-12. [Accessed 29 Mar. 2017]. Prinz, J. (2011). Morality is a Culturally Conditioned Response. [online] Philosophy Now. Available at: https://philosophynow.org/issues/82/Morality_is_a_Culturally_Conditioned_Response. [Accessed 28 Mar. 2017]. Pticek, M., Podobnik, V. and Jezic, G. (2016). Beyond the Internet of Things: The Social Networking of Machines. Putnam, H. (1964). Robots: Machines or Artificially Created Life? The Journal of Philosophy, 61(21), 668-691. Rahwan, I. (2016). Society-in-the-Loop: Programming the Algorithmic Social Contract. [online] Medium. Available at: https://medium.com/mit-media-lab/society-in-the-loop-54ffd71cd802. [Accessed 30 Mar. 2017]. Rebaudengo, S. (2012). Addicted Products. [online] Available at: http://www.simonerebaudengo. com/addictedproducts/. [Accessed 14 Jan. 2017].


Rebaudengo, S. (2012). Addicted products: The story of Brad the Toaster. [online] Vimeo. Available at: https://vimeo.com/41363473. [Accessed 14 Jan. 2017]. Ricardo, D. (1821). Principles of Political Economy and Taxation. London: George Bell and Sons. Russell, S. and Norvig, P. (1995). Artificial Intelligence: A Modern Approach. New Jersey: Prentice-Hall. Smith, W. J. (2017). European Parliament Committee Wants Robot Rights. [online] National Review. Available at: http://www.nationalreview.com/corner/443790/european-parliamentcommittee-wants-robot-rights. [Accessed 12 Mar. 2017]. Springwise Intelligence Ltd. (2016). Artificial Intelligence Innovation Report (2016). Deloitte. Sterling, B. (2005). Shaping Things. Cambridge, MA: MIT Press. Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., Hirschberg, J., Kalyanakrishnan, S., Kamar, E., Kraus, S., Leyton-Brown, K., Parkes, D., Press, W., Saxenian, A., Shah, J., Tambe, M. and Teller, A. (2016). Artificial Intelligence and Life in 2030. One Hundred Year Study on Artificial Intelligence, Report of the 2015-2016 Study Panel, Stanford University. [online] Available at: http://ai100.stanford.edu/2016-report. [Accessed 14 Sep. 2016]. Taylor, A. (2003). Animals & Ethics: An Overview of the Philosophical Debate. Peterborough, Ontario: Broadview Press. p. 20. The Economist. (2016). The return of the machinery question. The Economist. Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460. Usher, N. (2012). Pareidolic Robot. [online] di12. Available at: http://www.di12.rca. ac.uk/?projects=pareidolic-robot. [Accessed 13 Feb. 2017]. Visnjic, F. (2012). Words of a Middle Man – Human to machine and machine to machine dialogues. [online] Creative Applications. Available at: http://www.creativeapplications.net/objects/wordsof-a-middle-man-human-to-machine-and-machine-to-machine-dialogues/. [Accessed 1 Mar.


2017]. Wallach, W. and Allen, C. (2008). Moral Machines: Teaching Robots Right from Wrong. Oxford, UK: Oxford University Press. Wang, Y. (2016). The Power of Human-in-the-Loop: Combine Human Intelligence with Machine Learning. [online] TechNet Blogs. Available at: https://blogs.technet.microsoft.com/ machinelearning/2016/10/17/the-power-of-human-in-the-loop-combine-human-intelligencewith-machine-learning/. [Accessed 26 Mar. 2017]. Watson, D. S., Piette, M. A., Sezgen, O. and Motegi, N. (2004). Machine to Machine (M2M) Technology in Demand Responsive Commercial Buildings. Lawrence Berkeley National Laboratory. Weiser, M. (1991). The Computer for the 21st Century. Scientific American, 265(3), 94-104. Whitehead, S. (2004). Adopting Wireless Machine-to-Machine Technology. Computing and Control Engineering, 15(5), 40-46. Wiener, N. (1954). The Human Use of Human Beings. New York: Da Capo.


liuyuxi.xyz

I, Machine  

MFA Dissertation Design Informatics
