17 minute read

Octavia Clemons

OCTAYVIA CLEMONS

“Listen to My Ethics”

TED Talk

Audio Podcast

One can access Clemons’s talk here: https://bellevueuniversitymy.sharepoint.com/:u:/r/personal/kcarter_bellevue_edu/Documents/Oct ayvia%20Bellwether/Octayvias%20pROJECTt1111.mp3?csf=1&web=1&e =SHtv9J

Hey, it’s Octayvia Clemons, telling you to go to my podcast, Listen to My Ethics. It is a podcast that breaks down the most controversial, trendy technology and cybersecurity issues. It started out as a final project that became so much more. So don’t talk; and just listen to my podcast. Thank you.

Hi, guys, welcome to my podcast, Listen to My Ethics. Today, we’re gonna basically talk about artificial intelligence, and kind of the ethical surrounding of artificial intelligence. I’m not gonna lie to you: I didn’t know much about artificial intelligence before I started this podcast. But I am in my master’s course. And this is kind of like a final project for us. So I kind of, we got to choose, you know, what we want to talk about. And what’s ethical about it? I got into AI because of us discussing question, I did. I didn’t know much. So I, my teacher suggested, you know, to go look at a TED talk, you know, someone to explain it in depth. And I did find this TED Talk. And the speaker, she explained artificial intelligence so beautifully. She explained, not only what people normally think of, but she also explained what it actually is, how humans are such a huge influence on artificial intelligence. And we don’t even know that because of our pre-bias of, you know, the technology. So, before I get into all that, I want to kind of give everyone a general official definition of artificial intelligence; I decided to look up. And it says, Intel intelligence demonstrated by machines, which is unlike natural intelligence that’s displayed from humans and animals, which involves, you know, consciousness and emotions. Now, I didn’t want to just go off with that one definition. Even though it’s the official, I kind of wanted to just kind of play around and, you know, look at every what everyone’s opinion on artificial intelligence is. So another definition is shot down, which, which is unofficial. It said, it’s a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks typically required by human intelligence, which that definition is just crazy. It’s just, I just, the way it’s worded is it made me really kind of see that there’s two sides to how people can describe artificial intelligence. But honestly, no matter what word you choose to describe artificial intelligence, it’s basically our approach to science that goes into every realm and goes into multiple branches of not just knowledge, but technology. So let’s, let’s talk about how people view artificial intelligence, for you know, this podcast, I want to kind of talk to other people, you know, about their opinion. So I asked, either friends, you know, or, you know,

non-friends about what their opinions are. And I got back some of the craziest answers. I got people saying that if we allow technology to, you know, become just like us, it’s going to take over the world. It’s going to, you know, call off our walls, were gonna gang up on all of our technology, and we’re all going to be defensive, like the humans are the ones that are going to be the victims. If we keep teaching computers and technology to, you know, be smart, which is crazy, because, you know, I, you know, you see it in the movies, you see it all the time, how a machine learns how to be smart. It got jealous of the humans and ends up, you know, turning, you know, the world into a technology-owned and odd dictatorship, you know, against the humans. But I didn't think that I didn't think that was what, you know, AI is going to do, but I did have some issues or concerns about it when I first started, you know, learning about it, you know, just because if we do end up teaching, you know, intelligence or technology to be smart, what is going to happen to humans, like what's going to happen to jobs. You know, and how are we going to be able to provide and, you know, just just more of like a materialistic type or, you know, environmental type, you know, concerns in my head, but like I told you all before, you know, when I listened to that TED Talk, she explained it so well. She explained how Artificial Intelligence is actually pretty stupid. Her words. So basically, artificial intelligence only does what we tell it to do. There’s no other way around it. We are the dictators; we are the ones that are in control. And they’re throughout the whole process. The only issue with humans is it seems, cuz, you know, everyone has cybersecurity knows it, you know, which is it could be a debatable slogan. But the slogan basically is humans are the weakest link in cybersecurity, which, in some cases, that may be true, in some cases, it may be not. But for artificial intelligence, it kind of is true. When you are in a relationship with someone, your communication is key, as the same way it is with artificial intelligence. You tell your computer, or your artificial intelligence, what to do. You tell your AI, what you want to do, how not even how it should be done more of just what you want it to do, and they should find the solutions how to get done, but you need to be specific in what you tell it to do. Because they can think of multiple situations to get something done. It’s honestly, it’s the quickest way. So one of the examples that she used, she used, was they had a program, and it was a robot. And he not not just not an AI robot, but just a robot, it was a computer, and, like, if you, if you type that in, like code, code-wise, like if you type hand manually code, you know, the solution, you know, everyone knows how to do computer science, right? Everyone knows how to code. You, you tell it what you want it to do, how to do it, and how to get it. And if this doesn’t happen, then try this, you know, you have to you explain it step by step. But with AI, you don’t. It cuts out the middleman: it cuts out the extra steps that you need to

tell you, tell it what you want it to do, and how does, not how to physically do it, but you tell it how to do it, and it will find solutions to do it. So in this example, that she used the robot, we should tell the AI to, like, you know, move the robot from one side of the computer string to another open in the middle of the computer screen, I forget to say is like huge lake of water, right. And there’s two sides of the land. So she’s telling the robot to find a way to get from point A to point B. She, like I said, you didn't tell the robot how to do it, or the AI how to do it, or how you want to accomplish it. So they created their own simulation of it. So, what the AI did was made their whole robot super, super huge. So, when the robot would step in the water, it wouldn’t drown. It would just basically be stepping in a puddle. But that’s not how they wanted the AI to do it. So, communication is so important when you’re dealing with AI. Now, I decided to look up different, or I’m sorry, a lot of concerns. When it comes to AI, like what is everyone’s concern? And it kind of was, you know, the latter, like, you know, they don’t want AI to take over the human race. Which it’s hard to do that, like, you know, let’s be honest. Like, I don’t, I don’t see that happening. But one of them was the self-driving cars. They think that that is a huge ethical issue. And I, I don’t agree wholeheartedly. I, it’s kind of, it’s kind of both, I disagree and agree at the same time. I can see the side where they’re concerned about it. And I can see the side where they’re not really that concerned about it. So one of the, one of the examples I want to use about, you know, AI in cars is the Tesla. We all know the Tesla, we all want the Tesla. I want the Tesla, basically, wanted it when they were first testing out AI in the Teslas. There were so many trials and errors, so many, I mean, most of them people don’t even know about. So, as everyone knows, the test, like, can be a self-driving car, which is super cool. And honestly, I don’t have the money, but I would love one. Um, but one of the issues that I ran into is that during the testing and during the, I guess the creation of the car, they put the AI through simulations of driving, and that’s how the AI knows how to drive, how to turn or switch lanes, how to break things like that. But during the testing, they basically only showed the AI the highway; they didn’t show the AI, you know, the side roads or, you know, like neighborhood streets. So during the testing, they are doing a real-life event. I guess the AI was driving on a side road or in a neighborhood, and it didn't see a truck coming. So the truck hit the AI because the AI didn't know what to do. All right, at the Tesla, because the AI in the Tesla didn’t know what to do. When people wanted to—Tesla was asked about it. They basically said, “Well, during the testing, all we showed was the AI on a highway.” And everyone knows how it is to be on the highway, right? Like, you go straight, there’s no one coming from each side. I mean, there’s people on

the side of you, but no one’s coming at you on the side. So everyone’s straight. So all I knew was that a truck should be straight, either straight ahead of them, or straight behind them, or straight to the side of them, or straight to the you know, to the other side, but it shouldn’t be coming at an angle towards them, you know. So obviously, that is a huge communication issue. That is something that they will definitely have, like they will, they have definitely changed it. But that was something they definitely had, like, you know, taken concern about is that you have to be very specific, super specific, really, like about everything. Another one is, you know, I have a really cool relationship with one of my professors. She’s so awesome. And we were on Zoom one day and group talking about, you know, this topic and how I wanted to kind of present this podcast to people. And so she was telling me about, and she also sent me a lot of articles, which I will read one of the articles here soon because of the fact that it’s just so relevant to what we’re talking about. And also it’s so recent, it’s really recent, actually happened on February 19. But we’ll get into that later. But one of the examples that me and my professor, we’re, talking about on the Zoom call is hand sanitizers. Right, hand sanitizers. Um, first of all, did not know there was artificial intelligence in hand sanitizers. I’m not gonna lie to you. I know that sounds really like, you know, bad of me, because not only am I, you know, trying to get a career, my career, you know, advanced in cybersecurity. I’m going to try a master’s for, I kind of dedicated my life to it, but I did not know like in it, it stuns me that I did not know this. But one time, I guess, during a testing process, I think in India, there was this hand sanitizer incident, right. So, basically, this man, which he was darker skinned. He put his hand under the hand sanitizer, custom hand sanitizer, and it wouldn’t do it for him. But someone with a fair skin, not fair skin, but lighter skin, you know, I guess white skin, they put their hand under there, and it worked perfectly for them. But it’s not saying that the AI is racist or the AI is prejudiced towards people because really, technology is neutral. Technology is something that has no bias, has no reflections of people. So basically, during the testing process, though, because like I said, communication is key. So during the testing process of this hand sanitizer, they, they didn’t choose someone with darker skin. They didn’t choose people that have a tan to their skin. They chose light, light, you know, faired skin colors, so when the man put his hand under for the AI, or for the hand sanitizer, the AI didn’t know the difference. I didn’t know that was, that was what he was supposed to do. It’s not technology’s fault. You know, and that’s, and that brings me to another topic. I know, I’m just rambling here, y’all. But when I get into the topic of it, when I started discussing, I’ve kind of lights a fire under my belt, you know, really,

it’s something that I want to, I just I want to just discuss. I want to be able to communicate how I feel about things. You know, like I said, we are, we’re doing good and bad. I love technology. Like I said before, it’s neutral, it is something that is not going to choose to be a way because of the way it feels. Because you know, like we said earlier, it has no emotions, it has no consciousness. So what it’s doing is, is going off of what we want it to go off of, you know. So my dad recently just had, my mom and dad just recently had surgery, and what are their biggest concerns, was AI robots were doing the surgery, and they are super scared because they just didn’t understand what the process was, how they’re going to be able to, you know, like it, like, they’ree just scared like it’s not a human. It’s not someone that you could, you know, see. It’s a robot, something that’s new, it’s different. But that’s what a lot of people are upset about is, you know, robots in health care. And it’s crazy to me that they’re upset about it. Because if anything, you would think that robots would be more precise in health care. Now, I’m not saying that we should get rid of doctors, by any means, you know, but to have AI able to help you in situations, you couldn’t ask for anything better. And like, the way I kind of prepared them for it, you know, is like, you know, I discussed, you know, what I was going to talk about also, what I discussed, you know, just about AI. It’s no matter if it goes right or wrong. It’s always, it’s always a human of why, like the human is the reason why it’s happening. And I don’t want to say bad things, bad about humans. But again, like the technology does what we tell it to do. So why is that? They’re like, “Oh, well, it’s ethically bad.” It’s, you know, something that, sorry, it’s something that we should, you know, just keep as a human thing, but when you think about it, AI is going to be able to fix the problem that was questioned in the first place. Like, if there’s a question about something, AI is going to solve it without any judgment. And that kind of brings me to my next topic, if, you know, judgment of AI, like I said, neutral, neutral, neutral, like, there was, I guess, a hospital. I don’t want to name the hospitals; I don’t want anyone to get in trouble or anything like that. But there was a hospital that used AI after a while. And kind of one of the biggest, you know, scandals about the hospital was, you know, a lot of black men would go. They’re saying that their knees hurt. And the human doctor would say, “Oh, no, you’re fine. You know, I checked you out, you’re good, you’re good.” And then, you know, if a white male came in saying his knees hurt, you know, the doctor would be like, “Okay, well, let’s find a way, a solution to this problem.” I’m not saying every person’s like that. And I think every hospital is like that. But there are biases in this world that we have to be aware of. So they changed, you know, some of it out with or some of the technology, some of the, you know, the, I guess systems in the hospital with

AI. And they fixed, you know, the black man’s problem, or the, you know, they fixed the black man’s knee. They didn’t see then, see race, they just saw a knee had a problem, and we’re going to fix it. But, well, we need to learn as humans is to communicate. If we can’t communicate to ourselves or to our partners, or to our families, how are we going to be able to communicate to technology to advance in this world? I mean, I know I have communication issues. I’m gonna sit there and say that right now, but I’m also not trying to create an AI robot either. But if I did, it’s all about communication. It’s all about the way we talk, you know. Not only do we say we need to say things will be, you have meaning behind those things, we need to have an understanding, a clear definition of what we want to happen.

If we are able to communicate, if we’re able to put everything into a clear perspective, imagine the possibilities that we can come to. And like I said, it’s, it’s not that AI will take over our society. I guess, it is here to help build our society. Now, there are things like, you know, turning the robots into, like, you know, actual humans. And, you know, I know there was one for like a pregnant AI like that those things are unnecessary. Those things are, they’re pushing the line of ethical boundaries. But when you’re using it in actual meanings and actual things, imagine the possibilities we could have, it just, it’s, it’s beyond me. It’s crazy. It’s just, it’s wild. Oh, my goodness. So we’re basically coming to the end of the pot, the first podcast, you know, my first episode. Come back for more if you like, you know, what I discussed, how I talked, you know, things like that. But I kind of wanted to end it on kind of like a recommendation. A note of what we should do, what we should be aware of. Like I said, you see artificial intelligence in so many different ways. But the way I see it, like I said that the podcast is called Listen to My Ethics. So this is the way I see it. I see it as an opportunity. I see it as something that we can learn from, learn to adapt to, learn to use it. For great potential, what we need to learn as humans is just like I said, communication, we learn how to talk. We need to learn how to talk like in multiple aspects, not just technology. But that’s a whole other, you know, career field. But technology-wise, we just we need to learn how to communicate, it is there to help. So why not receive it—or not receive the help? Why? Why make it more than what it is, if you learn how to communicate, you can unstop, you can be unstoppable in this world. So that’s pretty much it, y’all. I had a great time talking to y’all. I had a great time discussing my ethics, my point of view. It’s just been it’s been great. I did want to leave off with kind of, like, a small joke, you know, to those who say, “Oh, no, I’m gonna just, I’m gonna keep thinking the way I think, you know. AI is going to basically

destroy us. We’re in the world, you know, blah, blah, blah.” If you think that, then you know, I recently heard that Mars just opened their doors for you know, people to come in there. So good luck, a new trip. And enjoy it.

Enjoy technology.

Enjoy life.

Enjoy happiness.

Thank y’all.

*A transcript reflects spoken language present in the corresponding video, so editing is minimized to reflect what was spoken as opposed to what is grammatically accurate.