
EDITOR'S OPINION

Jason MacLaurin SC Editor, Brief | Barrister, Francis Burt Chambers

Artificial Intelligence (AI) is currently dominating the news, academia, and social, political, and philosophical commentary, and so this edition’s special feature on AI and the future of the law is timely and topical.

There has been much publicity about Elon Musk’s recent open letter, co-signed by significant players in the industry, calling for a six-month pause in any further development of advanced AI technology.1 It seems likely some were calling for an even longer, perhaps indefinite, pause, such as Sarah Connor and her son John Connor of California, who have suffered through years of bad experiences and six movies because AI went seriously wrong. Hopefully many readers get this reference without the assistance of ChatGPT, though it would be interesting if AI, when asked, would admit it had given the Connors a very rough time (including causing, or being responsible for forcing, robots and people on various occasions to be sent back in time from an apocalyptic future to either kill them or protect them – it sometimes not being entirely clear which was the case – and, in one instance, impregnate one of them).2

Our cover was, in a first for Brief, generated with AI via ChatGPT and our feature section showcases two articles on the law, one by a former lawyer and the other by ChatGPT – with a challenge to readers as to whether they can tell who wrote each item. It is accompanied at the end by Dr Jessica Henderson’s exposition upon the background to the segment.

We have other articles focusing on the uses and implications of AI for the legal profession, with insights from Professor Michael Legg on how ethical responsibilities both limit and require the use of AI, Professor Jeannie Marie Paterson on Facial Recognition Technology (‘FRT’), Professor EJ Wise on the extent to which the law of war applies to cyberspace, and David Wilson on why copyright law seems ill-equipped to deal with AI technology. The YLC provides its take on whether AI will replace junior lawyers, with Aunt Prudence even weighing in on AI.

The feature section also provides general insights into what AI is, its various forms and manifestations, what it can do and, just as importantly, what it cannot (at least for now) do.

Readers are encouraged to set fifteen minutes aside to get up to speed on how fast technology is moving – with the future already effectively and figuratively here (the future literally being here is something regularly bemoaned by the Connors).

Musk’s concerns are mainly expressed at a much higher and existential level than the type of AI with which lawyers need be concerned at this stage. While some note that Musk is calling for a pause in some AI activities while barrelling ahead with electric vehicles that drive themselves and occasionally burst into flames, he is undoubtedly on an epic roll at the moment, having purchased Twitter, disengaged AI algorithms seen as interfering with free speech, fairness and the dissemination of information, while also engaging in the most exquisite acts of fearless and hilarious trolling (and also recently changing, for a time, the Twitter logo from the iconic blue bird image to an image of Doge Dog).3

Musk did, earlier this week during a TV interview, refer to the potential, however small, for “civilisational destruction” from AI, though said that this wouldn’t happen like in The Terminator movies, because it would occur through the data centres, of which the robots are just end effectors. This is a little disappointing, because if one is going to be subjected to the destruction of civilisation, one might want to at least see first-hand the very cool T-1000 liquid metal Terminator from Terminator 2: Judgment Day (1991). While Musk’s concerns might not be focused on the type of AI our special segment is directed to, some lawyers may well regard being replaced on jobs, and/or not being able to charge as much for them, as a form of civilisational destruction.

The concept of AI, and reservations about it, are not new. Jonathan Swift alluded, though not in a flattering way, to AI in his 1726 classic Gulliver’s Travels with “The Engine”, a fictional device developed by the fictional island of Lagado’s Academy of Projectors. Lagado was mainly occupied by educated elites, though was impoverished because the King invested heavily in the Academy of Projectors, who embarked upon useless experiments of little value or hope of success, such as extracting sunbeams out of cucumbers, teaching maths by writing equations on wafers and having students eat them, training spiders to spin coloured webs, and building houses from the roof down. Swift wrote as follows of The Engine, which generated permutations of word sets: “Everyone knew how laborious the usual method is of attaining to arts and sciences; whereas, by this contrivance, the most ignorant person, at a reasonable charge and with a little bodily labour, may write books in philosophy, poetry, politics, law, mathematics and theology, without the least assistance from genius or study.” This is certainly not a desirable outcome of AI in the legal, or any other, profession. Also, readers are not invited to speculate or comment upon whether it seems this or any previous editorials have been contributed to by AI or indeed The Engine.

The point is often made that AI is already being used by most people and has been for some time. When using Siri, or, for instance, dictating messages with auto-spell-correct engaged, one is using AI technology, the problems with which would resonate with those who have ever dictated an SMS message to their spouse, “I will be home very late” and found out much later that it was sent as “I will be home well before 8.”

The articles in our special section, as well as all our other articles and items, demonstrate that the law and the practice of it is so multifaceted, with so many diverse and different areas and moving parts, that AI, while capable of enhancing and assisting with some and even significant aspects, cannot replace human intelligence, ingenuity and judgment in all respects.

AI will also give rise to work for the profession because (as is being proposed at the moment) it will need to be regulated and will also inevitably give rise to novel legal issues and disputes.

For instance, there was great excitement, followed by great disappointment, surrounding the much-touted prospect of the first robot lawyer to appear in a US courtroom, to defend a charge relating to a speeding ticket, through an App called “DoNotPay”. This never happened because the App’s owner, not being a licensed lawyer, received threats about its use in Court, and pulled out, saying that “I underestimated the persistence of greedy lawyers” and “I feel like they feel threatened by this experiment” (something one need only possess some intelligence, and not necessarily AI, to realise should never be underestimated). One serious issue with AI, with uncertain legal implications, is that it can publish false and defamatory statements. Australia is a potential ground-breaker here, as the Mayor of the Shire of Hepburn in Victoria, Brian Hood, could be the first person to bring a defamation action against an AI chatbot, after being defamed by it.4

The explanation for why AI might say false and defamatory things revolves around it “hallucinating”. An interesting exercise is to ask ChatGPT itself about such flaws (and one gets an honest answer, something rarely obtained from asking a human the same thing). ChatGPT explains that because it: “has read so many different things sometimes it gets confused about what it should say. It’s kind of like when you are dreaming and things don’t always make sense – it is not real, but it is still in your head” and so it “might give an answer that is not completely true or make up a source that doesn’t actually exist. It’s kind of like when you’re telling a story and you mix up some of the details – you don’t mean to lie, but you just get a little bit confused.”

Such a hallucinatory statement might be given by someone who, for instance, upon getting home at 4.30am the next morning, claimed to have sent a message saying they would be home well before 8.00pm.

In celebrating technological and other achievements, we should not focus exclusively on AI, but should also mark a wonderful anniversary: 100 years ago, Vegemite was created and marketed in Australia. Developed by food technologist Cyril P Callister, it struggled in immediate competition with Sanitarium’s Marmite. So the decision was made to rename it “Parwill”, marketed under the slogan “If Marmite.... then Parwill”, which was itself a failure and sounds like a marketing phrase developed by the most rudimentary AI, perhaps even The Engine. The original name of Vegemite was reverted to, and the product really took off in the early 1950s when the far more successful and endearing jingle, which probably only a human could come up with, “We’re Happy Little Vegemites”, aired.

This edition also contains some hard-hitting articles, such as “The Unlawful Management of Banksia Hill and Unit 18, Casuarina Detention Centres” by The Hon Denis Reynolds Cit WA and Tom Penglis on “The Extraordinary Powers of the Western Australia Police Force”. We have important information for the profession on “The proper use of the cost discretion to regulate interlocutory proceedings” by Dan Morris, David Huggins on The Australian Financial Complaints Authority, and the third and final part of Peter Lochore’s analysis of employee duties and responsibilities concerning COVID-19 and the Work Health and Safety Act 2020.

Brief also thanks, as always, our regular contributors and notes that, happily, we have a letter to the Editor in this edition, something which it is hoped will be a continuing trend.

Endnotes

1. “Elon Musk calls to stop new AI for 6 months”, www.popularmechanics.com, 30/3/2023.

2. For the sake of clarification, the latter (impregnation) was in respect of Sarah Connor, though nowadays, the way things are going, and certainly having regard to the future, that may not be so obvious.

3. Readers are encouraged, if unaware, to ask ChatGPT what “Doge Dog (Shiba Inu)” is. The image is of significance because it is also now known as the Dogecoin Dog – see “Elon Musk put the Dogecoin dog as Twitter’s logo, it’s now up by $4b”, www.afr.com, Reuters, 5/4/2023.

4. S Khatsenkova, N Huet and Reuters, “Why does ChatGPT make things up? Australian Mayor prepares first defamation lawsuit over it”, www.euronews.com, 7/4/2023.
