A Poster by Isaac Han


PERCEPTION OF AI

AUTHOR

Yunqing Han. Done under the supervision of Professor Gabbrielle Johnson, through the Gould Center for Humanistic Studies.

INTRODUCTION

The Perception of AI study is inspired by fiction: there are many cases of AI disguising themselves as humans, but rarely any of humans disguising themselves as AI. The research aims to study if (and how) humans' attitudes toward AI change after an interaction with a human disguised as an AI.

OBJECTIVE

- Observe how participants react to learning that a human disguised themselves as an AI
- Observe if this knowledge changes their perception of AI

METHODOLOGY

Participants answer a survey about their attitudes toward AI; chat online for 10 minutes with AtBot, an AI impersonated by the researcher; fill out a survey about that interaction; and, after the deception is revealed, fill out a survey about the humanness of AtBot in hindsight and their attitudes toward AI. Surveys are answered on a scale of 0-10, where 0 means absolutely disagree and 10 means absolutely agree.

KEY FINDINGS

General Attitudes Toward AI
- Most participants believe AI to be less relevant and logical than grammatical in their answers*
- Participants believe AI to be equally good at quantitative and qualitative reasoning, or better at quantitative reasoning
- Participants do not believe that AI are currently conscious and sentient (range of answers = 0-2)
- They are more confident in future AI, but the range of answers is still rather low (= 1-6)*
- They also believe future AI can better present themselves as sentient (range of answers for present = 2-8, for future = 6-10)
- Participants believe that AI are essential to tech development and that humans can control AI^
- Participants are willing to let AI teach basic, technical knowledge like arithmetic^
- They are less willing, by ≥ 2 points, to let AI teach basic, subjective knowledge like simple analysis of novels

Feedback about "AtBot"
- Participants found AtBot to be grammatical, logical, and coherent^
- Participants liked AtBot's advice, found it helpful, and reported feeling more positive emotionally and about AI after the interaction^
- Most participants believed AtBot appeared quite emotional (range = 6-9), but not so much that it actually possessed emotions (range = 0-4)*

After Knowing AtBot Is a Human
- Participants reported that AtBot appeared very AI-like even in hindsight*
- Changes in perception of AtBot: three out of five participants reassessed AtBot as less emotional than their initial assessment by ≥ 2 points
- Interestingly, participants perceived AI as less of a threat after the interaction*
- Participants clearly became less confident in their ability to detect AI-played music and AI-produced art after the interaction*
- However, results are mixed for changes in confidence in detecting AI-written text
- Participants are not confident that others would recognize the deception (range = 2-5)

CONCLUSION

Participants generally believe that AI is better at quantitative, mechanical tasks than at qualitative, more subjective tasks. Participants liked AtBot and stated that AtBot appeared emotional, but did not possess emotions. After learning that AtBot was human, participants reported AtBot to be less emotional in hindsight. Confidence in detecting AI-produced art and music decreased, and participants were not confident that others would discover the deception in this study.

* indicates that the finding applies to most participants, i.e., all except one. ^ indicates high survey scores for an idea, i.e., all answers ≥ 7.

CONJECTURES/ EXPLANATIONS

General Attitudes Toward AI
The first two findings were not very surprising, as AI are often considered cold, mechanical, and not well-versed in emotional matters. This could explain why participants generally do not believe AI to be conscious or sentient, currently or in the future, yet believe that AI can present themselves as sentient. The rationale could be that as technology advances, AI can become better at feigning emotions through calculation. The idea of AI being cold and precise may also explain why participants preferred AI to teach basic technical, rather than subjective, knowledge.

Feedback about AtBot
Participants were surprisingly positive about AtBot and its advice. This could stem either from friendliness toward AI (which would be unexpected) or from low expectations that an AI would be helpful. Interestingly, and perhaps consistent with the belief from the previous section that AI are more able to present themselves as sentient than to be sentient, all participants found AtBot to appear quite emotional, but not so much to possess emotions.

After Knowing AtBot Is a Human
Surprisingly, participants found AtBot to be less emotional after learning that it was human. If the answers were honest, this could be because participants held different standards for an AI versus a human being "emotional," and were subconsciously evaluating AtBot as an AI in the second survey and as a human in the third. Participants may have perceived AI as less of a threat because they felt more acquainted with AI after the interaction (despite having interacted with a fake AI), and/or because the deception suggested that humans and AI can be similar. The idea of familiarity may also explain the change in participants' confidence in recognizing AI-made media. After learning through the study that a human can disguise themselves as an AI and vice versa, participants were less confident in their ability to recognize AI-made art and music (disguise in unfamiliar media). However, because they had already been exposed to disguise in text (albeit by being deceived), changes in participants' confidence levels there varied.

FURTHER QUESTIONS TO CONSIDER

1. Would participants still feel that AI were less of a threat after the research if technology permitted this experiment to be designed the other way around (i.e., an actual AtBot credibly disguising itself as human)? Why might attitudes differ when the two experiments are just two sides of the same coin?
2. What draws the line between appearing emotional and actually possessing emotions? Participants in this research all seemed to believe AtBot to be less emotional than it appeared.
3. In interviews with participants, one person said that they felt AtBot was emotional but were sure it was not another human typing behind another screen. What may have led this participant to stop short of finding out the truth, despite knowing that deception was involved in the experiment and coming so close to the truth?
