Artificial Intelligence and Human Rights: Contemporary and Future Problems Caused by the Rise of AI

By Leesha Curtis, JS Law and Business & Eoin Jackson, SS Law

Introduction

In recent years, artificial intelligence (AI) has developed beyond a mere figment of science fiction and is now a key part of business and society. AI is currently developing faster than it can be regulated, with legal frameworks becoming obsolete as soon as software is updated. The resulting lack of human rights protections has left consumers open to breaches of their privacy rights and even discrimination. Furthermore, as AI begins to mirror human traits, does it itself become entitled to human rights protections? The unprecedented growth of AI needs to be reconciled with robust human rights frameworks to prevent injustice. While AI has provided us with countless opportunities and more efficient operations, its impact on human rights should not be underestimated.

Contemporary Human Rights Issues with AI

In an increasingly virtual market, AI is synonymous with competitiveness. For many businesses, success is now intertwined with the ability to embed AI in their operations. However, with AI comes data analytics, and with data analytics come questions of privacy rights. In order to maximise AI's profit-making abilities, businesses need to harvest vast amounts of consumer data; profit-making has, consequently, become data-driven. This has implications for the privacy rights of consumers, as regulators are unable to keep up with the increased digitalisation of industries. Data has been described as the "new oil" and is now akin to currency, with consumers divulging their data in exchange for personalised experiences. On a superficial level, this seems mutually beneficial: it positions businesses to provide better services, boosting sales, while giving consumers superior experiences. This can be seen with Netflix and Spotify utilising data and algorithms to create personalised recommendations for users.
However, at what point does personalisation become an infringement of privacy rights? The implications of big data analytics became particularly stark in 2016, when they manifested themselves in the political sphere. Facebook's involvement in the Cambridge Analytica scandal is a prime example of the weaponisation of AI. Here, the data of millions of Facebook users was collected and used for political advertising in the US presidential election and, allegedly, the Brexit referendum. While Facebook was penalised with heavy fines, robust human rights frameworks are a more appropriate means of protecting privacy rights. Such frameworks are needed to ensure that businesses view human rights protection as a necessary part of value creation, rather than as an obstacle to innovation. The reliance on monetary penalties for such breaches demonstrates how underdeveloped this area of law is.

At present, the closest we have to robust protection is the General Data Protection Regulation (GDPR). The Irish Data Protection Commission recently fined WhatsApp for failing to meet the transparency obligations set out in Articles 5(1)(a) and 12-14 of the GDPR. These penalties demonstrate governments' willingness to engage with these issues. Unfortunately, such penalties are often viewed by companies as merely the "cost of doing business." The cur-