An Ambiguous Dilemma


I am a human. A thinking human, I’d like to believe. I am not a robot. But I understand if you, dear reader, have a hard time believing that. Just last week you read an op-ed by a robot, GPT-3, and if you’re reading this on the internet, the odds that I am human are pretty low. After all, much of the internet is populated by robots. In fact, a report by the security giant Imperva found that in 2020, 37% of all internet traffic was created by bots, down from 51% in 2016.

We interact with these bots all the time, even if we don’t recognise them as such. The relatable shower-thoughts we retweet, the effusive comments under our favourite influencer’s new YouTube video, the restaurant reviews that sound just a little too similar to each other: many of them are written by robots. The bots that have been “created in our image,” as GPT-3 puts it, have influenced our behaviour through their actions just as much as we have influenced theirs through programming. The cycles the internet sees at regular intervals, from scandals about public figures’ past tweets to viral videos of pets acting remarkably human, incite us to behave in a more ‘bot-like’ manner. We participate in inane challenges, make comments we know will garner more retweets, or call for someone to be cancelled over some imagined infraction.

These interactions are rooted in predictability, something at the core of robotics: we program machines to behave a certain way every single time. If this, then that. This predictability calms us and quells our fears of the unknown. We know the ‘then’ to every ‘if’. If robots behave only in the ways we program them to, then we have nothing to fear from them. Yet the unintended consequences of robots’ programmed actions trouble this dynamic. As we behave in more predictable ways under the influence of bots, we can’t help but wonder whether robots may behave in unpredictable ways under ours.

These blurred lines between human and machine are bound to make us uneasy as we struggle to distinguish organic interactions online from bot-driven ones. The unease turns to fear as we wrestle with the potential threat that AI poses to humanity. Though this dynamic may be rooted in our complex online world, one where robots can write op-eds, the fear of technology is as old as technology itself. I can’t say definitively whether there were Homo erectus warning their fellow early humans about fire rising up in revolt against them, but the fear runs deep. The invention of microwave ovens created panic in the US at a time when fear of radiation was at its strongest during the Cold War.
Half a century later, this sentiment is echoed in conspiracy theories about 5G radiation. In 1999, the United States Federal Reserve printed $50 billion in extra currency in anticipation of a complete technological collapse due to the Y2K bug (which never came). Today, countries such as China have banned cryptocurrency for fear of it destabilising our current economic systems.

Now the most pressing fear is of AI-powered factories and services rendering numerous labour positions obsolete. This harkens back to the Luddite attacks on stocking-frames in the 1810s. Marx argued that “the production of too many useful things results in too many useless people,” and while he could not have predicted AI, his words are hauntingly relevant in our hyper-capitalist world. But as we grapple with this impending threat of mass unemployment, we must first learn how to navigate the relationship between man and machine, one that evolves faster than we can pin it down.

My stance on this might be coloured by my own bias as a human being, but I believe that while man is no longer the master of the machine, neither is machine the master of man. Though it might be tempting to imagine a world in which we are slaves to robots, or robots are our slaves, we must not forget that the tools we create have always had a hand in our evolution. Whether it’s a rough stone chopper a million years ago, the first plough, the printing press or the telegraph, we have evolved together with what we invent. Robots enhance our lives while we, in turn, provide them with the programming and data they need to improve. This unique relationship need not be adversarial.

As disconcerting as it might feel to read an op-ed written by a robot, we must remember that the control, ultimately, lies with us. Robots, even seemingly free-thinking ones like GPT-3, act only according to their programming. They do not have the capacity to deceive. So, when a robot tells you that they are a robot, you can trust them. GPT-3 was kind enough to sign his op-ed with his true identity, as he is programmed to do. I can’t help but toy with the possibility that, at some point in the future, other robots might not be so forthcoming.

Dear reader, I have spent most of this piece convincing you that there is nothing to fear from AI. I truly believe that we need not worry about robots revolting against us, but even I can’t help but get carried away in the rhetoric sometimes. After all, I’m only human…or am I?

