Deepfakes: Technology and Danger Behind Them

· What if you were able to take someone’s face and put it over yours?

· What if you could take someone’s voice?

· What if you could create the face of a person that doesn’t exist?

All these questions are answered by a form of technology that has become more relevant than ever over the past few years: deepfakes. While some see this software as a major advancement in AI, others see it as a tool with dangerous ramifications in a world where misinformation is more widespread than ever.

Firstly, let’s look at how deepfakes are created, using the first example of taking someone’s face – in this case, swapping the faces of two celebrities: Person A and Person B. Several face shots of the two people are run through an encoder – an AI algorithm that compares the two faces to find similarities, reducing them to their shared common features and compressing the images. Then, algorithms called decoders are taught to recover the faces from the compressed images, with one decoder trained to recover each face. Finally, the face swap is done by feeding encoded images into the “wrong” decoder – for example, Person A’s face is fed into the decoder trained for Person B. The decoder therefore reconstructs Person B’s face with the expressions and orientation of Person A. For videos, this process must be done on every frame to make the swap look convincing.
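
Below is a minimal sketch of that shared-encoder, twin-decoder idea in Python (PyTorch). The layer sizes, the 64×64 image size and all the names are illustrative assumptions made for brevity – this is not the architecture of any real deepfake tool, just the “wrong decoder” trick expressed in code.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Compresses a 64x64 RGB face into a small latent vector of shared features.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # Reconstructs one specific person's face from the shared latent vector.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()      # one shared encoder for both people
decoder_a = Decoder()    # would be trained to reconstruct Person A
decoder_b = Decoder()    # would be trained to reconstruct Person B

# After training, the swap is the "wrong decoder" trick from the paragraph above:
frame_of_a = torch.rand(1, 3, 64, 64)     # stand-in for one video frame of Person A
swapped = decoder_b(encoder(frame_of_a))  # Person B's face, with A's expression and pose
```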

Furthermore, making entirely fake faces is done using a GAN (Generative Adversarial Network), which pits two AI algorithms against each other. The first, known as the generator, is fed random noise, which it turns into an image. The synthetic images created are then mixed into a stream of real images of people and fed into the second algorithm, the discriminator, which tries to tell the real images apart from the fakes. This process is repeated countless times, and the feedback each algorithm receives on its performance means both improve over time, taking the synthetic images from looking nothing like a face to looking hyperrealistic.
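
The toy PyTorch loop below shows that adversarial contest at miniature scale: a generator turns noise into “images” while a discriminator learns to separate them from “real” ones. The network shapes, learning rates and the stand-in random data are all assumptions for illustration – real face generators train on large photo datasets.

```python
import torch
import torch.nn as nn

latent_dim = 64
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, 64 * 64 * 3), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(64 * 64 * 3, 128), nn.LeakyReLU(0.2),
                              nn.Linear(128, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(16, 64 * 64 * 3)   # stand-in for a batch of real face photos
    noise = torch.randn(16, latent_dim)  # random noise in, synthetic image out
    fake = generator(noise)

    # The discriminator learns to label real images 1 and synthetic ones 0.
    d_loss = loss_fn(discriminator(real), torch.ones(16, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # The generator learns to make the discriminator call its fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```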

While most people may have seen deepfakes used harmlessly in memes, novelty mobile applications and Kendrick Lamar’s latest music video, they have gained notoriety recently because of the problems they raise. Firstly, the ethics of deepfakes are questionable, as they can involve using someone’s face or voice without their consent. A prime example of this was when the makers of a documentary about the late chef Anthony Bourdain caused controversy by using a deepfake of his voice to read out letters he wrote.

However, other stories have proven the extent of the havoc they could cause: a few months ago, the Ukrainian TV network Ukrayina 24 was hacked and aired a deepfake video of President Zelensky “talking of surrendering to Russia” [5]. Fortunately, it wasn’t convincing enough to fool the Ukrainian people, but if the software drastically improves over the next few years, people could fall for such videos. As a result, authorities are concerned about the grave danger deepfakes could pose to the world. For instance, the FBI put out a notification in 2021 stating that Russian and Chinese agents “are using synthetic profile images derived from GANs” [3], because the images were traced to “foreign influence campaigns” [3].

Even with these efforts to detect them, it will only get harder to spot deepfakes because the technology is constantly improving. For instance, US researchers found in 2018 that deepfake faces blink abnormally: the photos fed into the algorithms showed people with their eyes open, so blinking was never learnt. However, blinking deepfakes appeared soon after the research was published. This demonstrates “the nature of the game” [4]: weaknesses are fixed as soon as they are revealed. So, can the deepfake community make their software undetectable? Can authorities and tech firms create a truly foolproof detection system before then? It’s still a tight race, and I fear what could happen if the developers take the lead.
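
As a concrete illustration of that 2018 blinking research, here is a hedged Python sketch of how such a check might work: count blinks from a per-frame “eye aspect ratio” (EAR) signal and flag clips that blink far less than people normally do. The EAR values are assumed to come from a separate face-landmark detector, and every threshold here is an illustrative assumption, not the researchers’ actual method.

```python
def count_blinks(ear_per_frame, closed_threshold=0.2, min_closed_frames=2):
    """Count runs of closed-eye frames (low EAR) as blinks."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_threshold:
            run += 1              # eye currently closed
        else:
            if run >= min_closed_frames:
                blinks += 1       # a completed closed-eye run counts as one blink
            run = 0
    if run >= min_closed_frames:  # flush a blink that ends the clip
        blinks += 1
    return blinks

def looks_suspicious(ear_per_frame, fps=30, min_blinks_per_minute=4):
    """Flag clips whose blink rate falls far below a typical human rate."""
    minutes = len(ear_per_frame) / (fps * 60)
    return count_blinks(ear_per_frame) < min_blinks_per_minute * minutes

# Example: a 10-second clip at 30 fps where the eyes never close.
always_open = [0.3] * 300
print(looks_suspicious(always_open))  # True: no blinking at all is a red flag
```

As the paragraph above notes, a check like this stopped working almost as soon as it was published, which is exactly why no single heuristic can be relied on for long.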

However, the issues deepfakes could cause can be mitigated. For instance, according to the FBI report, there are ways to spot deepfakes using “visual indicators” [3]. These can include “flickering around the edges” [4], patchy skin tones or bad lip-syncing. In addition, a bill called the DEEP FAKES Accountability Act was introduced in the United States in 2019, which would regulate the use of unapproved deepfakes by requiring irremovable digital watermarks. Furthermore, many tech firms and universities have funded research into detecting deepfakes. For example, in 2019, Microsoft, Facebook and Amazon all backed the Deepfake Detection Challenge, in which teams around the world competed to create a deepfake detection system.

References

[1] 60 Minutes (2021). How synthetic media, or deepfakes, could soon change our world. [online] YouTube. Available at: https://www.youtube.com/watch?v=Yb1GCjmw8_8&t=57s&ab_channel=60Minutes [Accessed 1 May 2022].

[2] David, D. (2021). Council Post: Analyzing The Rise Of Deepfake Voice Technology. [online] Forbes, 10 May. Available at: https://www.forbes.com/sites/forbestechcouncil/2021/05/10/analyzing-the-rise-of-deepfake-voice-technology/?sh=3974db866915 [Accessed 10 May 2022].

[3] Federal Bureau of Investigation (2021). Private Industry Notification: Malicious Actors Almost Certainly Will Leverage Synthetic Content for Cyber and Foreign Influence Operations. 210310-001. Washington, D.C.: FBI. Available at: https://www.ic3.gov/Media/News/2021/210310-2.pdf [Accessed 2 May 2022].

[4] Sample, I. (2020). What are deepfakes – and how can you spot them? [online] The Guardian. Available at: https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them [Accessed 1 May 2022].

[5] Wakefield, J. (2022). Deepfake presidents used in Russia-Ukraine war. [online] BBC News. Available at: https://www.bbc.com/news/technology-60780142 [Accessed 5 May 2022].

Ansh Bindroo 11CSI
