Political threats: With advancements in information and communication technologies and the global prominence of social media, how, when and why individuals communicate and find news sources are inevitably undergoing an unprecedented change. This transformation can be seen all around the world and has influenced the outcome of elections, encouraged popular protests and empowered people to exercise their fundamental rights. At the same time, the prominence of social media can equally make people vulnerable to manipulation through misinformation and disinformation, and has increased the capabilities of both public and private entities to conduct profiling and surveillance operations. The integration of AI into this equation, for instance through the proliferation of deepfakes, will greatly amplify this threat.

As the authors of the 2018 report noted, these categories are not necessarily mutually exclusive. For example, AI-enabled hacking can be directed at cyber-physical systems, resulting in physical harm, and physical or digital attacks can be carried out for political purposes. Moreover, “political” is a complex categorization, particularly in the context of terrorism, where political motivation is closely linked to the general understanding of the concept of terrorism, along with social, ideological, religious and economic factors. In this regard, for the purposes of this report, the analysis of the malicious use of AI for terrorist purposes will consider two primary types of threats, namely cyber threats and physical threats, and will add to the discussion other relevant activities connected with the actions of terrorist groups and individuals, including financing methods, propaganda and disinformation strategies and other operational tactics.
V. FACT OR SCIENCE FICTION?

Having examined some categories of threats posed by the malicious use of AI, this chapter will address whether there is any credible substance to such threats or whether the malicious use of AI for terrorist purposes is little more than science fiction. From the outset, it is important to clarify that no clear evidence of the actual use of AI by terrorist organizations has been identified to date. In fact, in its most recent report, the ISIL/Al-Qaida Monitoring Team observed that “Notwithstanding continuing Member State concerns about abuse of technology by terrorists, especially in the fields of finance, weaponry and social media, neither ISIL nor Al-Qaida is assessed to have made significant progress in this regard in late 2020.”65

There are, however, important caveats to this. AI, as has already been seen, is very much part of daily life and is used by many individuals, often unbeknownst to them. For instance, NLP is the basis of smart assistants such as Apple’s Siri and Amazon’s Alexa, and is used to correct typos in text messages, emails and Word documents. Facial recognition is used to unlock smartphones, and object recognition helps to classify images and improve the results of Google searches. In this regard, the above statement does not exclude the possibility of terrorist groups and individuals having used AI indirectly – for instance, passively or even unwittingly, as described above. Rather, it means that AI has not been used directly, for instance, to specifically improve or amplify an attack.
65 Twenty-seventh report of the Analytical Support and Sanctions Monitoring Team submitted pursuant to resolution 2368 (2017) concerning ISIL (Da’esh), Al-Qaida and associated individuals and entities, S/2021/68 (3 February 2021).