
FICTION TO FACT: THE RISE OF DEEPFAKES AND THEIR LEGAL IMPLICATIONS IN AUSTRALIA

BY ZACHARY BOSWELL

INTRODUCTION

In an increasingly digital age, individuals and communities have the power to reach anyone through the internet, convey anything, and showcase the real experiences of people all over the world. But what happens when a video isn’t real at all? When something that doesn’t exist has the power to influence millions of people?


A deep learning fake (‘deepfake’) is a form of digital media, typically an image or short video, that has been manipulated using artificial intelligence (‘AI’).1 The altered videos usually depict persons doing or saying something they have not done or said,2 yet they are often virtually indistinguishable from real footage.

This piece will explore who benefits from the distribution of deepfakes and the scope of the problem, and propose some solutions Australia could adopt to address the issue.

WHO BENEFITS?

It is difficult to identify a specific person or group who benefits from the distribution of a deepfake, not least because the media’s creator is inherently difficult to identify.3 Whilst it would be easy to claim that the creator is the one who benefits, this is not always the case.

Towards the end of March 2023, images depicting former US President Donald Trump being arrested prior to his indictment began circulating. These deepfakes were initially created by journalist Eliot Higgins using Midjourney,4 a highly capable generative AI tool. Despite Higgins clearly stating that the pictures were created using generative AI,5 other sources went on to reshare the images and claim they were authentic, highlighting how easily deepfakes can cause misinformation to run rampant.

Ukrainian President Volodymyr Zelensky was also the subject of a deepfake when a video depicting him telling Ukrainian soldiers to lay down their arms was posted to Facebook in early 2022.6 Though the video was low quality and quickly deleted, a deepfake such as this can have incredibly dangerous consequences, potentially affecting a country’s future. Had it been sufficiently convincing, it may have turned the tide of the Russia-Ukraine war in Russia’s favour.

The potential benefits vary depending on the type and target of the deepfake. While the intended outcome of the distribution may also be a factor, not all deepfakes are intended to cause harm. Thus, the better question to consider is not who benefits, but rather the scope of the harm caused by the distribution of a deepfake and what consequences can arise.

¹ Ted Talas, ‘Real or (deep)fake? Responding to the legal challenges created by the emergence of deepfakes’ (2022) 18(9) Privacy Law Bulletin 181 (‘Talas’).

² Dean Gerakiteys and Natalie Coulton, ‘Is that you? Deep dive into deepfakes part 1: What is a deepfake?’, Clayton Utz Knowledge (Web Page, 2 March 2023) <https://www.claytonutz.com/knowledge/2023/march/is-that-you-deep-dive-into-deepfakes-part-1-what-is-a-deepfake>

³ Dean Gerakiteys, Lex Burke and Natalie Coulton, ‘Is that you? Deep dive into deepfakes part 2: Legal issues and regulatory landscape’, Clayton Utz Knowledge (Web Page, 24 March 2023) <https://www.claytonutz.com/knowledge/2023/march/is-that-you-deep-dive-into-deepfakes-part-2-legal-issues-and-regulatory-landscape>; see also John Channing Ruff, ‘The Federal Rules of Evidence are Prepared for Deepfakes. Are you?’ (2021) 41(1) Review of Litigation 103 <https://www.proquest.com/docview/2635270713?accountid=17095&parentSessionId=GNG14%2F1BDnX86yBg17Qo6B7OB31vogHvb2Ndi-TyXh%2F4%3D&pq-origsite=primo>

4 Ultan Byrne, ‘A Parochial Comment on Midjourney’ (2023) 21(1) International Journal of Architectural Computing <https://doi-org.ezproxy.lib.uts.edu.au/10.1177/14780771231170271>

5 ‘AI-generated images of Trump being arrested circulate on social media’, AP News (online, 22 March 2023) <https://apnews.com/article/fact-check-trump-NYPD-stormy-daniels-539393517762>

6 Jane Wakefield, ‘Deepfake presidents used in Russia-Ukraine war’, BBC Technology (online, 18 March 2022) <https://www.bbc.com/news/technology-60780142>

SCOPE

On the surface, deepfakes might appear to affect little more than a small group of people. The harm caused by a revenge porn deepfake, a type of deepfake that typically depicts an individual naked or engaged in sexual activity,7 might be viewed as affecting only the person depicted. However, the problem runs much deeper and can carry heightened, long-term consequences. In the case of the Zelensky deepfake, a single fabricated video had the potential to damage a nation’s political identity at the height of a crisis. Moreover, the fact that a deepfake can be generated from a single image and used to ‘humiliate, shame and harass individuals’ is telling of the wide scope of the problem.8

In Australia, the bar for a defamation claim to succeed is incredibly high, with the quality of the deepfake and its likeness to the victim among the considerations in determining whether a claim succeeds.9 As a result, victims may be unsuccessful irrespective of the harm caused.

Alternatively, victims of a revenge porn deepfake may be able to pursue criminal charges. In New South Wales, under Division 15C (Recording and distributing intimate images) of the Crimes Act 1900, s 91N(1) defines ‘image’ as ‘a still or moving image whether or not altered’.10 This was illustrated in the New South Wales District Court case of R v PW,11 in which the accused was convicted of 31 offences, including offences involving child pornography he had digitally altered.12

The number of deepfakes will likely continue to increase as technology develops, creating more victims, a problem exacerbated by the inaction of many jurisdictions. With pornographic content accounting for 96% of deepfake videos online in 2019 and attracting over 130 million views,13 deepfakes cannot be dismissed as a minor problem or passing trend, and relevant legislation should be enacted to combat this growing threat.

POTENTIAL SOLUTIONS

Australia has yet to take a direct legislative position on deepfakes. Other jurisdictions, however, have implemented specific legislation to combat the issue.

From 10 January 2023, the Cyberspace Administration of China began enforcing regulations on deep synthesis, which includes AI-generated media,14 to rein in ‘one of the most explosive and controversial areas of AI advancement’.15 Similar legislation could be enacted in Australia, particularly a provision like Article 6, which in translation states that ‘any organisation and individual shall not use [deepfake] services to produce, copy, publish, and disseminate information prohibited by laws … [or] to engage in activities prohibited by laws’.16

Moreover, the Chinese legislation includes methods to track the creators of deepfakes, requiring users to verify their identity before using a deepfake service provider.17 Whilst this will not stop creators with their own specialist equipment, it is undoubtedly a step in the right direction and could drastically reduce the spread of misinformation wrought by deepfakes. The legislation also reflects the view that service providers and regulators bear, at a minimum, partial responsibility for any harm caused.

Although Australia has recognised the issue in the form of a position statement from the eSafety Commissioner,18 the government has taken no legislative position, instead claiming to protect the public through approaches such as raising awareness and supporting people who have been targeted. Though image-based abuse breaches Part 6 of the Online Safety Act 2021,19 administered by the eSafety Commissioner, a perpetrator will only incur a civil penalty, as imprisonment is not available.20 This reactive approach does not minimise the irreparable harm caused and risks exacerbating the complications of deepfake media.

Australia does have existing legislation under which victims of deepfakes could theoretically take legal action. However, the difficulty of identifying the creator, together with specific legislative requirements, means that victims are unlikely to see justice done. For example, a victim whose face was deepfaked onto another individual’s body could not claim damages for copyright infringement, as only the owner of the original creative work may bring the claim.21

CONCLUSION

In summary, deepfake service providers are easily accessible and are not adequately regulated to guarantee compliance with Australian law. The laws that ‘protect’ victims of image-based abuse cannot continue to operate only in response; this issue requires efficacious preventative measures. Ultimately, Australia has not yet met the challenge posed by deepfakes. It will have to do so, and no doubt will, for necessity is the mother of invention.

7 Tyrone Kirchengast, ‘Deepfakes and image manipulation: criminalisation and control’ (2020) 29(3) Information and Communications Technology Law 308 <https://www-tandfonline-com.ezproxy.lib.uts.edu.au/doi/pdf/10.1080/13600834.2020.1794615?needAccess=true>

8 Alexander Ryan and Andrew Hii, ‘Deepfakes: the good, the bad and the law’ (2021) 18(7) Privacy Law Bulletin 128, 129 (‘Ryan and Hii’).

9 Talas (n 1).

10 Crimes Act 1900 (NSW) s 91N(1).

11 [2019] NSWDC 963.

12 Ibid [62], [78], [80].

13 Henry Ajder et al, ‘The State of Deepfakes: Landscape, Threats and Impact’ (Report, September 2019) <https://regmedia.co.uk/2019/10/08/deepfake_report.pdf>

14 [Internet Information Service Deep Synthesis Management Regulations] (People’s Republic of China) Cyberspace Administration of China, Order No 12, 10 January 2023 (‘Deep Synthesis Regulations’).

15 Karen Hao, ‘China, a Pioneer in Regulating Algorithms, Turns Its Focus to Deepfakes’, Wall Street Journal (online, 8 January 2023) <https://www.wsj.com/articles/china-a-pioneer-in-regulating-algorithms-turns-its-focus-to-deepfakes-11673149283>

16 Deep Synthesis Regulations (n 14) art 6.

17 Ibid art 10.

18 ‘Deepfake trends and challenges – position statement’, eSafety Commissioner (Web Page, 23 January 2022) <https://www.esafety.gov.au/industry/tech-trends-and-challenges/deepfakes>

19 Online Safety Act 2021 (Cth) pt 6.

20 Ibid.

21 Ryan and Hii (n 8).