
Horror Scope!

Mark Gash predicts a future without critical thinking

Many years ago, I worked as a Sub-Editor for The Press Association, where one of my less mundane jobs was writing the horoscope listings for most of the major national newspapers here in the UK.

The deal was that celebrity astrologers - Jonathan Cainer, Mystic Meg - would send us briefing notes for a month’s worth of horoscopes, covering the main points, such as relationships, work, money and health for each zodiac sign.

I would then take these notes and craft individual horoscopes for every star sign, for every day of the week, in the style of whatever news publication I was working on.

However, the reality was that going through the supplied notes from the “fortune-telling experts” was a lengthy pain in the arse for a 23-year-old on a low graduate wage. So I skipped that part and just made up whatever horoscopes I liked. For a year.

I still think back and laugh at the thought of people on the daily commute, excited to read what their stars had in store for them, and potentially basing their day’s decisions on the nonsense I wrote. I apologise to Claire in Procurement for her never finding that tall, dark, handsome stranger in the copy room, to Jim in HR for getting him to waste his wage on all those scratch cards and to Jasmine, who never did get that promotion I promised. But hopefully, I put the fear of God into Mike, and he continues to check his balls every morning for lumps.

My point is, I had a platform where I could write drivel (nothing much has changed…) and people were possibly making life-changing decisions based on the factually inaccurate information I presented to them. Sound familiar? *Cough* ChatGPT *Cough* Gemini…

Now, admittedly, if the Horoscope page byline had read - Mark Gash, Lazy Graduate, then readers perhaps wouldn’t have put much stock in my words. Maybe they would have employed a bit of critical thinking that would have led them to surmise that I was, at best, winging it, or at worst, maliciously taking the piss. But because they were fooled into believing the predictions were coming from an authority on the subject, a world-renowned celebrity astrologer, they were primed to believe the lies I fed them.

And that’s the problem with AI. Google is a household name, so surely their Large Language Model, Gemini, is the fount of all knowledge? Ask it a question, and its answer must be correct, right?

We live in an age of instant gratification. Need a recipe? A translation? A complex calculation? A few clicks and the answer is yours. Our hyper-connected world has fostered a culture of immediate access, where information, once the product of lengthy fact-finding and truth-sifting during a bout of 2 am insomnia, is now dispensed instantly. And the rise of LLMs has only amplified this phenomenon, injecting a dangerous dose of perceived authority into the mix.

A good old-fashioned search engine marathon (what’s a library, eh?) provided a pick ‘n mix of information where you’d have to use your brain to sift the cola bottles from the white mice, before deciding which of the sugar-coated treats you were going to shove down your throat. LLMs take all the effort and fun out of pick ‘n mix, instead handing you a sealed bag of Haribo and assuring you that it’s filled with all of your favourites.

This shift is subtle but profound. We’re not just accessing information; we’re accepting interpretations, syntheses, and even creative outputs generated by an algorithm. The convenience is undeniable, but at what cost?

The concern is that, little by little, day by day, we’re increasingly outsourcing our critical thinking. If the answer is always a prompt away, why bother with the messy, time-consuming process of analysis, evaluation, and independent thought? Why question the seemingly authoritative response served up by a sophisticated AI? This isn’t a philosophical musing - this is already happening - in fact, it’s probably already happened to you, and the implications are potentially devastating.

The office round the corner from you...

Blind Trust

Danny in Accounts usually checks the weekly figures himself but reckons he’ll give AI a chance to prove what it can do. In week one, he checks the AI output against his own figures - it’s spot on. So he uses AI again in week two, and again, the figures are correct. By week four, Danny doesn’t bother checking the AI data - he knows it’ll be fine. Six months go by and Danny has pretty much forgotten how to check the figures manually, but that’s okay, because AI never fails. Until, like a condom, it does, and ChatGPT’s misplaced decimal point costs the company, and Danny, dearly…

Echo Chamber Effect

Alex in HR needs to hire a new Accounts Manager to replace Danny. But Alex is too busy creating AI-generated online Health and Safety courses to spend time writing a job description for the role. So she asks AI to write a job description that will inspire the perfect candidate to apply for the position. The LLM aggregates everything it knows about the stereotypical Accounts Manager, summarises market trends, reinforces existing biases and overlooks crucial Diversity, Equality and Inclusion guidelines. It spits out a job description and helpfully posts it on a career site, without needing Alex to check it over.

Reputational Damage

The company receives a hefty fine for advertising for a “Young, dynamic white British male with 5 years’ experience in accounts.” Alex joins Danny at the job centre, and the company employs a PR firm to try to salvage their reputation.

Okay, so I’m having a bit of fun with the above but, like Bart Simpson, I’ll eat my shorts if at least one of those scenarios hasn’t already played out somewhere in the world. The inability to discern fact from fiction, bias from objectivity, and logical reasoning from algorithmic mimicry can have severe consequences. But the real danger comes when we stop trying to.

When we become so blasé, so unaware of our laziness, that we automatically trust Artificial Intelligence to process information and formulate opinions, are we not relinquishing our intellectual autonomy? Are we not accelerating the very scenario we fear - the replacement of human roles by AI?

The fact is, AI, or what passes for it, is already here. For all the outcry from ageing artists, writers, course authors and coders that we should heavily regulate, if not outright ban, AI, your kids have been asking Alexa to play Taylor Swift songs for the past five years, and there’s no going back from that. As adults, we need to educate ourselves to understand the risks and employ some of that common sense we all used to have, to use LLMs, and whatever comes next, in a balanced and responsible way (and yes, I spat out my tea as I wrote that). It’s too late for us - we just need to deal with it.

But for the Taylor-loving kids, whose minds are too immature to separate reality from YouTube, we need to get our act together and start to prioritise critical thinking as a core competency in schools. Young people need to be taught that AI is a tool, not a substitute, for human intelligence.

As a 45-year-old man staring down the barrel of another mortgage hike, I wish my school had taught me about interest rates, credit and debt, rather than the assassination of Franz Ferdinand. Schools today need to consider swapping out some of their irrelevant subjects to educate students on how to evaluate sources, identify bias, analyse arguments and formulate their own independent conclusions, and to be able to apply this not only across academic subjects, but in everyday life.

I said that I still laugh at people who believe their horoscopes, but perhaps that’s unfair of me - like an LLM, I skimmed the “facts” provided to me by the “experts” (sorry, can’t help myself) and delivered something passable that the readers would consume. Who knows, maybe like a broken clock, my predictions were occasionally right, and I made somebody’s day.

But whether we’re talking horoscopes or AI, bring your wits, your scepticism, plus a healthy dose of critical thinking, and hopefully nobody gets hurt. The question isn’t whether AI will replace our roles. It’s whether we’ll allow it to replace our minds.

Now go buy yourself a lottery ticket, but beware of that tall, dark, handsome stranger - it might be Google in disguise.

Mark Gash Writer. Designer. AI Image Prompter. Mark is a jack of all these trades but only a master of Dirtyword. And toy collecting.

Connect with him here: https://www.linkedin.com/in/markgash/
