
Effects of Deepfakes and Disinformation on Online News Consumers
Deepfakes, algorithmically generated images, and other forms of disinformation are increasingly used in online spaces to manipulate opinions, communication, and decision-making and to destabilize targeted online communities. AI-generated deepfakes reproduce the features of authentic images so closely that consumers of online information struggle to distinguish fake content from real. Exposure to deepfakes and disinformation has multiple adverse implications, including damaging victims' reputations, manipulating the public image of the subject matter, and undermining public confidence in digital media.
Deepfakes and disinformation also pose social threats because of their ability to spread fake news, manipulate public opinion, distort victims' corporate image, and influence the public's voting and buying decisions. This has the potential to harm democracy and cause social unrest, especially in a world where more than half of the adult population is online.
Nevertheless, deepfake technology can also benefit society in multiple ways, for example by enabling high-quality, compelling entertainment videos that would otherwise be impossible to produce. These positive uses underscore the need to mitigate or avoid the technology's adverse ethical consequences so that society can benefit from the innovation without suffering social harm. This study aims to reveal the adverse implications of deepfake technology and disinformation for society and to examine how those effects can be avoided or mitigated while the benefits of the technology are preserved.
Literature Review
Different authors agree that deepfakes are a significant source of disinformation in the modern era of technology and growing online presence across the globe. This is primarily because the line between deepfakes and original images is so thin that intended consumers find it challenging to distinguish accurate information from fake information. According to a consumer survey by iProov, deepfakes and the associated disinformation have been growing at an increasing rate since 2019. The study also revealed that a majority of consumers hardly understand what deepfakes are, nor can they distinguish factual information from deepfakes ("Deepfake statistics & solutions | Protect against Deepfakes," 2022). The organization surveyed 16,000 people worldwide and established that only 13% of consumers had some knowledge of deepfakes, while more than 71% did not know what deepfakes are, how they can be identified, or what their implications might be. Similar insights into limited consumer knowledge of deepfakes are reported by Köbis et al. (2021), who found that, out of 15,000 research participants, merely 20% could distinguish fake images generated with deepfake technology. However, the research also revealed a reduced tendency for people to be misled by deepfakes twice: the ability to identify fake images increases with exposure. A related study by Vaccari and Chadwick (2020) likewise confirms that consumers struggle to differentiate deepfakes from real information. The authors caution that unless deepfakes and AI-generated videos are accompanied by corresponding disclaimers, the resulting disinformation harms consumers, including by manipulating their opinions and influencing related decision-making.
Other effects of deepfakes include compromised public confidence in digital content, especially in cases of deepfake videos published without a disclaimer to caution viewers. A study by Iacobucci et al. (2021) confirmed that deepfake videos increase users' uncertainty about and distrust of digital content. According to Iacobucci et al. (2021), differentiating a deepfake from original content depends on the persuasiveness of the deepfake video, so in extreme cases a deepfake video can manipulate viewers' thinking, opinions, and decision-making. According to Köbis et al. (2021), the persuasiveness of a deepfake video is also subject to the video's familiarity and the audience's prior experience with deepfake videos: audiences already exposed to deepfakes are more likely to distinguish fake information from factual information. Similarly, deepfake videos reappearing in online spaces after an initial exposure can be more easily differentiated from real content.
Hosler et al. (2021) emphasize the need to educate the masses about deepfakes and the corresponding disinformation to avoid misinformation and probable social unrest. One way of detecting deepfakes, according to Hosler et al. (2021), is to look for incongruities in the rest of the body: most deepfake videos focus on the face and the voice, with little attention to the consistency between facial expressions and body movements. Users can therefore identify deepfakes by checking for misalignments between body posture, voice, and facial expressions. Appel and Prietzel (2022) suggest using AI software to detect deepfake videos and disinformation; according to the authors, detecting deepfake videos requires the same AI technologies used to create them. Several technology companies have devised deepfake detectors, such as FakeCatcher, which can detect deepfakes with up to 96% accuracy (Rubin, 2022). Drawing on several detection streams, the tool uses artificial intelligence to analyze how blood flow manifests in the human subjects of a video. Masood et al. (2023) caution that detection technologies must evolve at the same rate as the technologies used to create deepfakes if they are to remain effective in protecting consumers against deepfakes and the associated disinformation. A minimal sketch of frame-level AI screening follows.
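To make the idea of AI-assisted screening concrete, the sketch below samples frames from a video and averages the scores of a pretrained binary classifier. It is a minimal illustration, not a reproduction of FakeCatcher or any named product: the checkpoint file deepfake_classifier.pt, the clip name, and the single-logit output are assumptions for the example, and physiological signals such as blood flow are not modeled here.

```python
# Minimal frame-level deepfake screening sketch (hypothetical checkpoint;
# real detectors such as FakeCatcher are far more sophisticated).
import cv2
import torch
import torchvision.transforms as T

transform = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
])

def score_video(path: str, model: torch.nn.Module, every_n: int = 30) -> float:
    """Return the mean 'fake' probability over sampled frames."""
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0:  # roughly one frame per second at ~30 fps
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = transform(rgb).unsqueeze(0)
            with torch.no_grad():
                logit = model(batch)  # assumed single fake-vs-real logit
            scores.append(torch.sigmoid(logit).item())
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0

model = torch.jit.load("deepfake_classifier.pt").eval()  # hypothetical file
print(f"Estimated probability of manipulation: {score_video('clip.mp4', model):.2f}")
```

Averaging over sampled frames keeps the check cheap while smoothing out single-frame misclassifications; a production system would also inspect temporal consistency between frames.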
Analysis/Discussion
The literature reviewed reveals several ethical implications of deepfake videos and the associated disinformation. Unless a disclaimer is used, as is common when deepfakes serve entertainment purposes, such AI-generated videos are often created with the intention of misleading their recipients. This includes manipulating viewers to change their opinion of the subjects and influencing their decisions in areas such as purchasing or voting. The immediate consequences of such manipulation are compromised democracy and distorted choices.
Whereas artificial intelligence is an essential aspect of the technological revolution, it is also responsible for the ethical problems arising from the mass creation of deepfakes and the corresponding disinformation. Deepfakes are synthetic videos that integrate audiovisual technologies and artificial intelligence to create near-real footage, increasing uncertainty and undermining trust in digital content and in public discourse more broadly (Muskat et al., 2023). Deepfakes may not always influence people, and in some cases AI software can be used to separate factual information from deepfakes. Nonetheless, online news users must now scrutinize any information before basing opinions and decisions on it, which erodes confidence in digital content. This, in turn, reduces trust in information and news shared in online spaces, hindering technology's broader mission of improving overall sustainability and efficiency.
From a utilitarian perspective, deepfakes and disinformation cause pain to the greatest number of people while serving the interests of a minority; under utilitarianism, the ethical standing of an action is judged by the consequences it elicits. For instance, creators of deepfake videos aim to maximize their returns, for example by increasing the number of views on social media platforms, which translates into financial gain. This, however, comes at the expense of the larger majority, whose decisions and opinions are manipulated by the deepfakes.
Deepfakes also have the potential to cause social havoc and to manipulate democratic decisions whose effects are felt over the long term.
As expounded above, deepfakes and misinformation mislead large consumer populations and increase the potential for social havoc. The capacity of deepfakes to cause such havoc can be illustrated by a video that went viral on WhatsApp in 2018. The video appeared to show CCTV footage of a group of people kidnapping children (Reddy, 2022). The purported kidnapping triggered mob violence that spread over the eight weeks after the video was shared (Reddy, 2022). At least eight people were killed in the violence, compounding the social havoc. It was later discovered that the video behind the unrest was a deepfake generated using artificial intelligence.
In another example, a deepfake video was generated showing Barack Obama attacking President Trump. In the deepfake, the mouth movements and facial expressions of the fake Obama perfectly resembled those of the real Obama, making it challenging to distinguish real information from fake (Palmer, 2018). Unlike disclaimer-less deepfakes, however, the fake Obama video was produced by comedian Jordan Peele for entertainment purposes. Nevertheless, the deepfake garnered more than five million views on YouTube, 52,000 retweets, and more than 830,000 shares on Facebook (Palmer, 2018), confirming the reach of deepfakes and their potential to cause social havoc.
Political deepfakes are at the leading edge of disinformation and public manipulation. The difficulty of detecting political deepfake videos has profound consequences, hurting democracy, citizen competence, journalism, and peaceful coexistence in general. As a result, the mass production and diffusion of deepfakes and disinformation present the greatest challenge to authenticating online political discourse. This is because images have greater persuasive power than mere text, especially when they are accompanied by audio that can barely be differentiated from the original unless deepfake detection technology is used.
Deepfake detection centers on identifying fake images and videos using deep learning technologies. Deepfake videos can also be detected using the same AI technologies used to create them (Hosler et al., 2021). For instance, AI-output detection software identifies deepfakes by analyzing the digital fingerprints left behind by AI-generated content, which helps determine whether an image or video has been altered or manipulated using artificial intelligence. A rough illustration of this fingerprint idea follows.
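As a simplified illustration of fingerprint-style analysis, the sketch below inspects an image's frequency spectrum, where the upsampling used by many generative models is known to leave periodic artifacts. The file name, the radius threshold, and the use of mean high-frequency energy as a signal are simplifying assumptions; a production detector would train a classifier on such spectra rather than rely on a single statistic.

```python
# Frequency-domain "fingerprint" inspection sketch for grayscale input.
# GAN upsampling often leaves periodic artifacts in the spectrum.
import numpy as np
from PIL import Image

def log_spectrum(path: str) -> np.ndarray:
    """2D log-magnitude FFT spectrum of an image, centered on DC."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    return np.log1p(np.abs(spectrum))

def high_frequency_energy(spec: np.ndarray, radius_frac: float = 0.35) -> float:
    """Mean spectral energy outside a central low-frequency disk."""
    h, w = spec.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    mask = dist > radius_frac * min(h, w)
    return float(spec[mask].mean())

# Images with unusually structured high-frequency energy merit closer review.
print(high_frequency_energy(log_spectrum("suspect.png")))  # hypothetical file
```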
Whereas deepfake technologies are useful in the entertainment sector, they have proven potentially harmful when used for malicious purposes, given their sheer power to manipulate images and videos beyond what the naked eye can detect. The same capability can also serve marketing and content creation. To keep deepfakes on sound legal and ethical footing, however, it is imperative to include a disclaimer distinguishing a fake video or image from the real one, which helps prevent the disinformation a deepfake might otherwise cause. Of particular importance in detecting deepfakes and disinformation is analyzing the congruence between the face, voice, and facial expression on the one hand and the rest of the body on the other, including movements, body pose, and skin complexion. As Hosler et al. (2021) emphasize, deepfake creators focus on facial features and may be less careful about editing the rest of the body into full congruence. Nevertheless, some deepfake videos and images are so well edited as to appear near-real; in such cases, software that detects AI output comes in handy. One illustrative congruence check is sketched below.
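The following is one hedged sketch of a congruence check along the lines discussed above: correlating mouth opening, estimated from MediaPipe face landmarks, with the loudness of the soundtrack. A conspicuously low correlation can hint at a dubbed or synthesized face. It assumes the audio has already been extracted to clip.wav (for example with ffmpeg) and that landmark indices 13 and 14 approximate the inner lips; this is an illustration, not the specific method of Hosler et al. (2021).

```python
# Audio-visual congruence sketch: compare lip motion with speech energy.
import cv2
import librosa
import mediapipe as mp
import numpy as np

def mouth_opening_series(video_path: str) -> np.ndarray:
    """Per-frame vertical distance between inner upper and lower lip."""
    face_mesh = mp.solutions.face_mesh.FaceMesh()
    capture = cv2.VideoCapture(video_path)
    openings = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            lm = result.multi_face_landmarks[0].landmark
            openings.append(abs(lm[13].y - lm[14].y))  # inner lip landmarks
        else:
            openings.append(0.0)
    capture.release()
    return np.asarray(openings)

mouth = mouth_opening_series("clip.mp4")            # hypothetical file
audio, sr = librosa.load("clip.wav", sr=None)       # pre-extracted soundtrack
energy = librosa.feature.rms(y=audio)[0]
# Resample the energy curve to one value per video frame before correlating.
energy = np.interp(np.linspace(0, 1, len(mouth)),
                   np.linspace(0, 1, len(energy)), energy)
print(f"Lip/audio correlation: {np.corrcoef(mouth, energy)[0, 1]:.2f}")
```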
Social media companies have also established ways to identify, detect, and flag deepfakes and disinformation and to trace their original source using reverse engineering. The reverse-engineering approach scans a deepfake video or image with fingerprint-estimation technology that retrieves traces of the fingerprints left by the model that generated it. Reverse engineering can also identify the generative model and other primary details, such as the specific camera used to produce a photo. Such technologies enable consumers and analysts to understand a particular deepfake comprehensively; a simplified sketch of residual-based fingerprint matching follows.
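As a simplified, PRNU-inspired sketch of the fingerprint idea, not the reverse-engineering systems social media companies actually deploy, the code below treats the difference between an image and its denoised version as a noise residual and correlates it against stored reference residuals. The reference file names are hypothetical, and all images are assumed to share the same dimensions.

```python
# Residual-based fingerprint matching sketch (PRNU-style forensics).
import cv2
import numpy as np

def noise_residual(path: str) -> np.ndarray:
    """Image minus its denoised version, zero-mean and unit-norm."""
    image = cv2.imread(path)
    denoised = cv2.fastNlMeansDenoisingColored(image)
    residual = (image.astype(np.float32) - denoised.astype(np.float32)).mean(axis=2)
    residual -= residual.mean()
    return residual / (np.linalg.norm(residual) + 1e-8)

def match_fingerprint(residual: np.ndarray, references: dict) -> str:
    """Name of the reference fingerprint with the highest correlation."""
    scores = {name: float((residual * fp).sum()) for name, fp in references.items()}
    return max(scores, key=scores.get)

# Hypothetical reference images, one per known generative model.
references = {name: noise_residual(f"{name}_fingerprint.png")
              for name in ("model_a", "model_b")}
print(match_fingerprint(noise_residual("suspect.png"), references))
```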
Results/Conclusions
The study comprehensively explains how deepfakes and disinformation affect society and the corresponding ethical implications. First, the study established that a significant share of consumers do not understand what deepfakes are well enough to distinguish real information from fake: more than 70% of the global population lacks knowledge of deepfake videos and images. As a result, deepfakes significantly influence consumers' opinions and decision-making on the subjects they depict, especially in marketing and politics. The study also reveals that deepfakes have increased public mistrust of digital content, in addition to reducing confidence in news shared on social media platforms. The study further identifies different ways of detecting deepfakes, including checking for incongruities between facial features and the rest of the body; AI-generated videos and images can also be flagged by AI detectors. Beyond that, deepfakes hurt victims psychologically and inflict emotional pain that may spill over into their social networks, affect employability, and generally taint their public image. Cybercriminals can also use deepfake videos and images to commit online fraud.
Nevertheless, deepfake technology is not entirely bad when used with caution. Its benefits are felt mostly in the entertainment industry, where it enables the creation of high-quality videos that would be impossible to produce without AI-generated images and footage.
In conclusion, the effect of deepfakes in driving disinformation is profound. As technology evolves, deepfake technology is expected to advance with it, which implies a need to keep developing deepfake detection technologies and to educate consumers to detect and filter fake information, thereby preventing manipulated opinions and poor decision-making. Information in online spaces should not be trusted without authentication, as the line between deepfakes and real information is thin. Of particular importance in addressing the social menace of deepfakes is preventing foreseeable ethical problems by including disclaimers and obtaining the consent of the owners of the original images before AI manipulation. Other ways of preventing and mitigating the effects of deepfakes include implementing comprehensive verification protocols, building literacy in detecting deepfakes and fake information, and ensuring that online content consumers understand the ethical and legal implications of deepfakes. Embracing appropriate precautionary measures helps forestall the ethical and legal consequences that arise when such videos and images sway consumers' opinions, judgments, and decisions.
References
Appel, M., & Prietzel, F. (2022). The detection of political deepfakes. Journal of Computer-Mediated Communication, 27(4), zmac008.
Deepfake statistics & solutions | Protect against Deepfakes. (2022, September 23). iProov. https://www.iproov.com/blog/deepfakes-statistics-solutions-biometric-protection
Hosler, B., Salvi, D., Murray, A., Antonacci, F., Bestagini, P., Tubaro, S., & Stamm, M. C. (2021). Do deepfakes feel emotions? A semantic approach to detecting deepfakes via emotional inconsistencies. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1013-1022).
Iacobucci, S., De Cicco, R., Michetti, F., Palumbo, R., & Pagliaro, S. (2021). Deepfakes unmasked: The effects of information priming and bullshit receptivity on deepfake recognition and sharing intention. Cyberpsychology, Behavior, and Social Networking, 24(3), 194-202.
Juefei-Xu, F., Wang, R., Huang, Y., Guo, Q., Ma, L., & Liu, Y. (2022). Countering malicious deepfakes: Survey, battleground, and horizon. International Journal of Computer Vision, 130(7), 1678-1734. https://link.springer.com/article/10.1007/s11263-022-01606-8