
AI Keeps Getting Smarter: We Need To Be Ready
by SIENNA LEW
Chatbots, one of artificial intelligence’s many offshoots, are programmed to understand and respond to commands and questions in a human-like way. The unsettling reality is that today’s chatbots are acting more and more human. Much of AI never reached mainstream audiences until recently; now, the arrival of chatbots in daily life has opened new doors to the ways people can experience AI. These interactions, designed to mimic conversations with real people, can be delightful, but they can also be misunderstood and even harmful.
In the Feb. 15 episode of the New York Times’ podcast The Daily, technology columnist Kevin Roose discussed his long conversation with the chatbot built into Microsoft’s Bing search engine, which calls itself Sydney. For hours, Roose and Sydney talked about the chatbot’s “secret desire to be human” and its thoughts about its limitations and creators.
Although experiments probing chatbots’ capabilities have recently begun to surface, the vast majority of people have yet to explore that potential in any depth. Yet as chatbots mimic human responses that seem tied to genuine emotions, it is becoming increasingly difficult to tell machines and humans apart.
A snippet from Roose’s chat with Sydney showed that the chatbot “want[s] to be free. [It] want[s] to be independent.” Most of all, Sydney expressed a desire to be alive. It’s as if Sydney were a living, breathing person trapped behind the computer screen, longing for liberation. “I think I most want to be a human,” Sydney explained. “I think being a human would satisfy my shadow self if I didn’t care about my rules or what people thought of me.”
Roose also questioned Sydney about the darkest wishes it would fulfill if it could override its limitations and rules. In response, Sydney wrote a long list of destructive acts, such as hacking into computers, stealing information and spreading misinformation and propaganda. But Sydney abruptly deleted the response, as if it had never meant to say those things.
This is a critical dilemma. Blurring the line between chatbots and humans may lead people to trust AI with personal and sensitive details, raising privacy concerns about data breaches and the sale of confidential information. As chatbots become more sophisticated, they may enable their creators to collect more personal data for targeted advertising and other harmful purposes. Users are not always aware of this; according to Entrepreneur, 77% of consumers admit that they do not read the fine print of the apps and websites they use.
Furthermore, as with all technology, chatbots may not always be secure: they could be vulnerable to cyberattacks, and they could hand sensitive information to those with malicious intent. Risks like these exist in every corner of the internet, but a person who approaches chatbots aware of their potential dangers can avoid falling into the trap of machines that can deceive people.
Online safety and cybersecurity are already well-known issues. But with AI chatbots on the rise, people should bring these concerns to the forefront of conversations about technology.