SHOULD WE BE AFRAID OF AI?

How scared should we be of the Bard?

That, strangely, is the question that has been on my mind this past week – no, not because of an upcoming exam on Shakespeare, but rather, because that is what Google has named its new artificial intelligence chatbot.

Released two months ago, Google’s new offering follows similar, much-hyped products from Microsoft and OpenAI that also work by taking your questions and returning answers. The reason for the excitement is that, unlike a Google search, AI chatbots like Bard can sometimes do an uncanny job of synthesizing information and spitting out something almost like meaning.

No longer do you type in a question like “when did the Second World War start” and then trawl through results for an answer of 1939. Now you ask an AI chatbot “why did the Second World War start?” and it will give back a response about the invasion of Poland and post First World War tension in Europe. That is a sea change.

Bard is perhaps a little more circumspect than its rivals. After Microsoft’s bots were prompted into spitting out unnerving or flat-out inflammatory responses, Google seems bent on preventing Bard from, say, advising you how to make a bomb or wading into incendiary topics.

But we suddenly find ourselves at the precipice of a new era of technology that threatens to be just as profound in its effects as the internet or the smartphone, if not more so. And I can’t help but wonder at the question: how scared should we be?

Here’s the thing: AI chatbots can at least appear to make sense of things, put ideas together and even write songs or poetry in ways that aren’t altogether terrible. They can be used to make things, too. One recent example saw someone sketch the idea for a website on a napkin, and an AI bot was able to produce a working website from it.

On one hand, in response to an undeniably novel set of technologies, we have seen Silicon Valley suggest that we are but a few years away from a genuine artificial intelligence that would not only be classified as sentient but would vastly supersede our abilities as humans.

Yet on the other, there is a chorus of technology watchers sounding the alarm – warning that the technology is being released recklessly, is unreliable and will exacerbate misinformation, polarization and more.

The whole mess is being made worse by the rapid pace of change. Just a year or two ago, it seemed impressive when software could recognize objects in pictures.

Now you can ask for an image of Donald Trump playing Barack Obama at basketball and you’ll get a reasonably convincing but entirely fake picture. Again: sea change.

Having thought about it and read reactions endlessly, here is what I believe: somehow, AI is both overhyped and underestimated at once.

It is overhyped because the belief, prevalent among technology boosters, that what we call AI will soon become an actual intelligence is overblown. It’s not that it is not advancing incredibly quickly; it’s that actual intelligence and sentience are far more complicated than merely putting bits of information together. They require intent, ego and self-awareness.

What we have seen so far from AI tech is nothing like that, and any eerie humanlike qualities it may appear to have are all anthropomorphic projection. AI doesn’t actually “say” or “do” anything; it is a series of highly complex inputs and outputs that merely appear to, and there is thus far no sign that is going to change any time soon.

This is to say nothing of how riddled with errors its answers can be, how it amplifies misinformation in authoritative-sounding tones, or how, since it is trained on what already exists online, it can replicate existing bias.

Yet even in its current state, it is not hard to see how transformative AI might be.

Simple kinds of human thought and work – analyzing data or statistics, forming basic pieces of writing, performing routine tasks – all of these are conceivably doable by AI within the next few years.

This is profound. It is not that we are about to be replaced by technology. It’s that each major iteration of technology restructures fundamental parts of how we relate to one another and the world. The printing press changed how we thought of the self and the nation. The TV changed how we thought of mass culture. And AI may well change that delicate balance between what we think people are good at and what we believe technology is useful for.

How scared should we be of AI? If the worry is some sci-fi dystopia about a superintelligence deciding to eradicate humanity, then likely not too much. How, where and to what ends we deploy technology is up to us, and even with AI, that is still true. Claims of humanity’s obsolescence are, at least for now, wildly overblown.

Yet all the same, critics who dismiss AI as some silly fad – a glorified spell checker or autocomplete, as some have said – are missing something significant. It’s not that machines can truly think. It’s that they can get close enough. And in that simple distinction lies a coming cataclysm. Fear – and the willingness to do something about it – is exactly the right response.