
The True Danger of AI
from Today in Tech
by LASA Ezine
Picture by mikemacmarketing on Wikimedia Commons
How the most dangerous thing about Artificial Intelligence is the opposite of what people think
By Jackson Edwards
You just got back home after a long, tiring day. You sink into a chair and begin to browse the endless pages of content on your favorite streaming service. There are so many good options today. And there were good options yesterday. And the day before that. It is like it knows what you want before you even know that you want it.

Artificial Intelligence, or AI, is everywhere. It is the autocomplete on our phones, it is in digital assistants like Siri, and it even controls your TV show recommendations and the advertisements that pop up on your screen. You are surrounded by it constantly, even when you do not realize it. AI is an increasingly important topic in our world. The problem is, there are a lot of misunderstandings about what AI is capable of.

Roy Keyes, a data scientist with experience in AI, explained that “Artificial Intelligence is trying to get computers to do things that we consider require human-like thinking. So that could be how to play games, how to make certain kinds of decisions, how to do things that are complicated. Specifically we think of it as doing things that are not super simple.”

A “narrow” or “specific” AI is trained to perform only one task as efficiently as it can, such as playing a game or identifying an animal in a picture. In contrast, a general AI would be one that is able to apply itself to nearly any task. This kind of AI has not been achieved yet.
Keyes makes the comparison that, like a human, a narrow AI can play a board game, but humans “can also tie their shoes, walk, and do all sorts of stuff,” whereas an AI cannot accomplish nearly as much.

Keyes said, “One of the big milestones was actually when DeepMind, which [is owned by] Google, was able to build a program that could beat the world’s best players in the game ‘Go’.” Go is an ancient Chinese board game that is still enjoyed today, and it has an enormous number of possible moves and strategies. DeepMind created AlphaGo, an AI trained to play the game. In 2017, AlphaGo beat Ke Jie, then the world’s number one player. “So that was a really big deal, [but] there is still a long way to go before you would get what we call general artificial intelligence,” said Keyes.

One of the biggest misunderstandings about AI is that “AI” and “robot” mean the same thing. An AI is a computer program, which means it does not have a physical form. It can exist in many different places and control many different things.
A robot is what one sees in movies: something physical, usually metal or plastic, that is controlled by a computer program. The two can exist together or separately; robots can be controlled by basic computer programs, and an AI can operate a purely digital service. That is not to say they cannot be combined, as the program controlling a robot can be an AI.

Another very big misconception about AI is that AI knows what it is doing. Researchers and developers have often tried to sell their AIs as more powerful than they really are. However, as computer science teacher and AI enthusiast Anita Johnson puts it, “A lot of people think that somehow the computer is intelligent.” In reality, all AI does is learn and replicate patterns. AlphaGo may be able to play Go better than a human, but it does not know what a board game is.

Imagine an AI that is trained to identify cats. It might learn that ears, fur, and whiskers mean there is a cat in the picture, and it could identify cats very well because of that. However, the AI is never going to understand what a cat, fur, whiskers, or paws actually are. All this AI does is classify collections of pixels.
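To see how little “understanding” is involved, here is a minimal sketch in Python of what a detector like that actually computes. Everything in it is made up for illustration: the “images” are fake 8x8 grids of random numbers, and a bright pixel in each top corner stands in for the “ears” the model latches onto.

```python
# A minimal sketch of a "cat detector" that only matches pixel patterns.
# The data is fake: 8x8 grids of random numbers, where a bright pixel in
# each top corner plays the role of the "ears" the model learns to spot.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fake_image(is_cat):
    """Build one fake 8x8 image, flattened to the 64 numbers the model sees."""
    img = rng.random((8, 8))
    if is_cat:
        img[0, 0] += 1.0  # left "ear"
        img[0, 7] += 1.0  # right "ear"
    return img.flatten()

labels = rng.integers(0, 2, size=500)           # 1 = "cat", 0 = "not cat"
images = np.array([fake_image(y) for y in labels])

# The classifier learns which of the 64 numbers predict the label...
model = LogisticRegression(max_iter=1000).fit(images, labels)
print("training accuracy:", model.score(images, labels))
# ...and that is all. It has no concept of cats, ears, fur, or pictures.
```

The model scores well, yet at no point does anything resembling a cat enter the picture: it only ever sees lists of numbers.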
Keyes said, “A lot of the confusion comes about because [AI] does do interesting stuff, ..., but there’s a lot of mystery as to what’s really going on. And it’s also not explained very well. So that sometimes makes it seem more powerful than it really is at the moment.”

This leads people to think that AI is much closer to human intelligence than it really is, and from there it is easy to jump to the “Terminator scenario.” The Terminator scenario starts with AI becoming more intelligent than humans. Then, the AI is able to create more of itself. Finally, it decides that it does not like or need humans, and sets out to eradicate them. That is not remotely close to what AI can do.

In reality, what we should be worried about is AI doing exactly what we tell it to do. The data that goes into training an AI determines what you get as the result. “There’s really nothing beyond human intelligence that happens in artificial intelligence. So it’s a way to have machines make decisions based on what we know. That’s grossly oversimplified, but I think that might be the easiest way to say it,” Johnson said. “AI is very good at recognizing patterns. And so if we give it data that has a certain pattern, it will do that.” If you feed Harry Potter into a writing AI, you will get Harry Potter back out. If you feed in ice cream names, you get ice cream names back out.
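As a toy illustration of “pattern in, pattern out,” here is a sketch of a word-level Markov chain, one of the simplest possible writing AIs. The ice cream names it trains on are invented for this example; the point is that everything it can ever produce is a recombination of its training text.

```python
# A toy "writing AI": a word-level Markov chain. It can only ever recombine
# the words it was trained on, so ice cream names in means ice cream names out.
import random
from collections import defaultdict

training_text = """vanilla bean swirl
chocolate fudge swirl
strawberry cheesecake crunch
chocolate peanut butter crunch
vanilla caramel fudge"""

# Record which word follows which in the training names.
follows = defaultdict(list)
for line in training_text.splitlines():
    words = line.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

# Generate a "new" name by replaying the learned pattern.
random.seed(1)
word = "chocolate"
name = [word]
while word in follows:
    word = random.choice(follows[word])
    name.append(word)
print(" ".join(name))  # always a recombination like "chocolate fudge swirl"
```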
A concept image of a neural network, a certain kind of AI.

Frequently, an AI does not even fully understand its own goal, because its creators do not tell it exactly what it should do. Janelle Shane, author of “You Look Like a Thing and I Love You” and the AI Weirdness blog, is an expert on this subject. Shane remarks, “There was a student who tried to train an AI to flip a pancake in a frying pan by having it maximize the amount of time the pancake spent in the air. So the PancakeBot used the frying pan to launch the pancake away across the room so it would stay in the air as long as possible.” There have been various AIs that were intended to build a virtual body that could travel or jump as far or as long as possible in a virtual world. Instead of learning to walk or jump, some of those AIs assembled their bodies into tall towers with one very long leg and simply fell over, because growing tall and toppling was the quickest way to cover distance. An AI’s misunderstanding of its goal frequently leads to unintended consequences.
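The pancake story is easy to reproduce in miniature. The sketch below is a made-up, one-line-of-physics version of the same trap: the reward measures only airtime, so the strategy that scores best is the one that hurls the pancake across the room.

```python
# A toy version of the PancakeBot trap: the "reward" measures only airtime,
# so the best-scoring strategy launches the pancake instead of flipping it.
# The physics is one line of projectile math, purely for illustration.

G = 9.8  # gravity, in m/s^2

def airtime(launch_speed):
    """Seconds a pancake stays aloft when tossed straight up at launch_speed m/s."""
    return 2 * launch_speed / G

# Candidate strategies and how hard each one throws the pancake.
strategies = {"gentle flip": 1.0, "hard toss": 5.0, "launch across room": 20.0}

# Pick whichever strategy maximizes the stated reward. Nothing in the
# reward ever mentions the pancake landing back in the pan.
best = max(strategies, key=lambda name: airtime(strategies[name]))

for name, speed in strategies.items():
    print(f"{name}: {airtime(speed):.2f} s in the air")
print("Reward-maximizing strategy:", best)  # "launch across room" wins
```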
Biased training data leads to the same kind of problem. Keyes said, “What you do is you feed [AI] tons and tons of data from the real world. And then they kind of learn how to make the decisions. And then what they’ll do is take existing societal biases, and then they just further or even make those biases worse by making decisions around [them].” In 2014, Amazon technicians started building an AI that could scan resumes and job applications. It was taught by looking at the decisions humans had made when faced with different applications. By 2015, the technicians realized that the AI they had built had learned to be sexist: resumes containing the word “women’s” were penalized. Eventually, Amazon disbanded the team and stopped the project. Keyes said, “There have been some examples around [asking AI], ‘Should this person be released from jail?’ and then if it’s based on their demographic information, maybe the decision would be biased.”

“AI is going to be more and more important in society. It’s very crucial that people understand it better [and] understand the potentials and the limitations,” said Keyes. Researchers market and showcase AI as if it were more intelligent than it really is. In actuality, “AI is much closer in computing power to a worm than to a human being,” said Shane. It understands nearly nothing about what it is doing. It does not know which behaviors are good or bad, only what works and what it has already seen. Shane said, “People are trusting AI to make important decisions about hiring people, giving people parole, and monitoring people taking tests, without checking to see if the decisions it’s making are correct and fair.” If what an AI sees is bias, it will copy that bias. If it is told to help companies, it will do so, no matter the moral implications.

As Johnson said, “the most dangerous thing about artificial intelligence is not that it’s gonna do something we don’t tell it to do, it’s [that it’s] going to do exactly what we tell it to do. I think people are dangerous enough.” AI will do exactly what humans tell it to. That is the true danger of AI.