
Deepfakes


Much has been written about the threat deepfakes pose to civil discourse, public trust, and political processes. But these manipulated videos can also cause untold emotional and reputational harm at the individual level, and overwhelmingly target women. In this article, Kelsey Farish – one of Europe’s leading legal experts on deepfakes and manipulated media – discusses a few key points to be aware of.

A deepfake is commonly defined by academics and technical experts as a piece of AI-generated audiovisual media which purports to show someone doing something they did not do, but in a manner that is so realistic that the human eye cannot easily detect the fake. In plain English, the word “deepfake” is typically used as a catch-all to describe face-swapping videos.

When the algorithm used to generate deepfakes was first shared online in 2017, it was released as a free software tool that anyone with a bit of technical knowledge could use. Principally, it was used to insert the faces of women celebrities into pornographic films, transforming them into unwilling participants in a novel form of image-based sexual abuse (the preferred terminology, which encompasses harms such as “revenge porn”).

Although many deepfakes remain in the realm of sexually explicit videos, they can be used in any context. Some are completely innocent and humorous, or used as a form of political satire. Some deepfakes have even been employed in a therapeutic context, for example to allow individuals to virtually say goodbye to deceased loved ones. In the medical field, Alzheimer’s patients may benefit from deepfake technology, which can enable them to engage with younger versions of themselves and of family members.

The above is mentioned because it is important to contextualise the current deepfake ecosystem. Like any technology, deepfakes are not inherently “bad” or “dangerous” – although they can be used for deceptive and harmful purposes, they can also be put to beneficial ends. This duality makes regulating deepfakes very difficult in practice, especially given the ease with which they can be created. Furthermore, and as many lawyers will appreciate, just because a deepfake is offensive does not necessarily mean that it is actionable as a criminal or civil offence. For example, a parody deepfake of a politician may be crude or distasteful, but the creator’s right to freedom of expression may still be protected.

Today, a fairly believable deepfake can be generated using just one photograph of the intended target. In addition to deepfake mobile apps, specialist freelancers even sell bespoke deepfakes for as little as £5 per video on marketplaces such as Fiverr. As of June 2020, almost 50,000 publicly available deepfakes had been detected; by December 2020, that number had nearly doubled. It goes without saying that the age of the deepfake is only just beginning. So, here are a few important things to remember:

1. You needn’t be a celebrity to be at risk.

Deepfakes can be used by anyone with a motive. This could include a colleague who seeks to hamper your professional ambitions, or an (ex-)partner who submits falsified evidence to a family court. More recently, we have even seen the case of a parent seeking to damage the reputation of her teenage daughter’s cheerleading rivals. As with all forms of defamation or harassment suffered online, anyone can be the victim of an unwanted deepfake, irrespective of their celebrity status or public profile.

2. Deepfakes are a gendered issue.

As explained above, anyone can theoretically become a victim of a deepfake. That said, women nevertheless account for 90% of the victims of deepfakes and other forms of image-based sexual abuse. Sir Tim Berners-Lee, the inventor of the World Wide Web, has stated that he believes the “crisis” of gendered abuse online “threatens global progress on gender equality.” Several campaigns and advocacy groups, including #MyImageMyChoice, are calling for tougher laws and policies on this important issue.

3. Convincing deepfakes can be made using only one image of the victim.

Unless you have absolutely no photographs of yourself online, it would be difficult to argue that you are truly immune from deepfake threats. For many reasons, having photos of yourself online (for example, on LinkedIn or your company’s website) is an important and beneficial aspect of modern life – and even just one of these images can be used as the source to generate a fairly realistic deepfake.

4. Be mindful of the quantity of images you share.

Notwithstanding the above, it is always best practice to post personal images (i.e. those just intended for friends and family) only to private accounts, or otherwise limit the quality and quantity of any such images shared publicly. It is also prudent to consider blocking or unfriending accounts which you know may pose a threat.

5. Consider what you might be “teaching” the algorithm.

Deepfakes are so-named because they rely on deep learning, a type of artificial intelligence which “learns” by being “trained” on a data set – which in the deepfake scenario, means images of people. Accordingly, sharing images which show an individual at different ages, such as those used for flashback memes, can provide algorithms with valuable information regarding how people’s faces change as they age. This makes deepfakes purporting to show someone at a different life stage all the more realistic. On a related note, it is generally best to carefully consider the relative pros and cons before sharing images of a child’s face.

6. Deepfakes may be difficult to control by way of legislation, but social media companies will remove deepfakes that violate their T&Cs.

Legislation specific to deepfakes does not yet exist in the United Kingdom. However, several popular social media companies and websites have officially banned deepfakes from their platforms, or otherwise regulate their dissemination. These platforms include Facebook, Instagram, Twitter, PornHub, and Reddit. If you find yourself the victim of an unwanted deepfake posted to one of these platforms, use the platform’s take-down procedures or otherwise flag the content as harmful: you should receive a response within a few days.

7. Depending on the content of the deepfake, a criminal or civil offence may have been committed.

As discussed above, deepfakes are not themselves “bad” – but they may, by virtue of their content, constitute a civil or criminal offence. Taking these in turn, civil offences may include defamation, malicious falsehood, misuse of private information, copyright infringement, passing off, and civil harassment. Relevant criminal offences include those under the Malicious Communications Act and the Computer Misuse Act, as well as hate speech and criminal harassment. Furthermore, the United Kingdom is currently reviewing how its legislation can be updated with respect to the sharing of intimate images without consent. This specifically includes potential revisions to the voyeurism and exposure offences under certain existing statutes, as well as to the common law offence of outraging public decency.

It remains an open question whether – and if so, to what extent – deepfakes can ever be fully removed from the online ecosystem. Given their potential for creative and beneficial uses, as well as their ability to evade detection, this seems unlikely. As such, it is incumbent upon us to become better educated about how to mitigate risk – and to call out or otherwise report harmful content when we see it. ■

Kelsey Farish, Associate, DAC Beachcroft

Pictured: deepfake targets including Scarlett Johansson, Gal Gadot and Emma Watson (image-based sexual abuse), and Tom Cruise (satire).

Sensity is a leader in deepfake research. It reports that of the 85,000 deepfakes that have been detected, more than 90% depict non-consensual pornography featuring women.
