Language of the Future: the 2nd annual Adarga Symposium on Artificial Intelligence



“But It’s Just a Machine!”: Bias in the Age of AI

Dr Tom Todd, Data Science Team Lead, Adarga
Dr Colin Kelly, NLP Team Lead, Adarga

AI promises to accelerate and enhance decision-making in all spheres of life, but can we trust AI algorithms to make fair decisions? Biases in AI systems can lead to discrimination, poor performance, and a lack of trust in the system’s results. Tackling bias is a crucial step to enable the adoption of AI, particularly in the high-stakes applications where it can be most valuable.

Before the advent of AI, the source of algorithmic bias in an automated system was clear: it was the programmers who developed it. The decisions they made while solving the problem resulted, intentionally or unintentionally, in an algorithm which gave a biased result. The story with AI is a little less clear. The power of AI is its ability to learn the logic of an algorithm from data (we’ll call this AI-derived algorithm a ‘model’). This means it is no longer necessary for a programmer to meticulously think through each logical step of the algorithm, as would be the case in traditional software development. This is an immensely powerful technique for solving difficult problems quickly, accurately and at scale.

However, it introduces an effect we have not seen before in algorithm development. When logic is inferred from data, any bias in the data will be ‘baked in’ to the model the AI produces. If AI practitioners are not careful, they can easily produce models which perpetuate the current state of the world, unwelcome biases and all.

The aim of this paper is to consider the different kinds of bias we must account for when working with AI and the mitigation strategies we can pursue to avoid them, and to look at how this situation may change in the future. Defining bias as systematic error, or error not borne out of reason, it is easy to see that bias is undesirable and should be avoided.
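
To make this ‘baking in’ effect concrete, the sketch below trains a simple classifier on synthetic, historically biased hiring data. It is a minimal illustration only: the scenario, the library choice (scikit-learn) and every number are invented, not drawn from any real system.

```python
# A minimal sketch of how bias in training data is 'baked in' to a
# learned model. All data and numbers here are synthetic and invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic hiring data: 'skill' genuinely predicts success;
# 'group' (a sensitive attribute, 0 or 1) should not matter.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical labels reflect biased past decisions: candidates from
# group 1 were hired less often at the same skill level.
logit = 1.5 * skill - 1.0 * group
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model faithfully reproduces the historical bias: at identical
# skill, group 1 candidates receive systematically lower scores.
test = np.array([[0.0, 0], [0.0, 1]])   # same skill, different group
print(model.predict_proba(test)[:, 1])  # roughly [0.50, 0.27]
```

Nothing in the training procedure is at fault here; the model has simply learned the past decisions it was shown, disparity included.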



But it is worth considering the various sources of bias: its existence is neither a modern problem nor unique to scientific endeavours. As the saying goes, “history is written by the victors”, and within that we encounter our first form of bias: observer bias. History books tell us the ‘facts’ about our world in the past, but any budding historian learns they must work hard to control for the subjective prism of their sources’ viewpoints. When collecting data to train our algorithms, data scientists must be no less careful.

Confirmation bias, placing greater weight on that which supports one’s preconceptions, can easily be capitalised upon, for good and for ill: if an algorithm has identified that a social media user engages with 5G conspiracy theory articles, who better to supply with further 5G ‘evidence’? (A toy version of this feedback loop is sketched at the end of this section.) We data scientists and AI developers are no less susceptible to confirmation bias; this is why the gold standard for scientific research involves double-blind experiments.

After all, modern machine learning and AI platforms take a snapshot of a slice of the world through the data they ingest and then use it to characterise that world in some way. While we humans have (albeit imperfect) means to overcome our own biases, these machines do not. We can consult other print, television and online news sources, and we possess our own ability to reason about and to test our theories; a pre-built, use-case-specific model or algorithm cannot.
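
As promised above, here is a toy sketch of that engagement feedback loop. Everything in it is an assumption made for illustration: the topics, click probabilities and ranking rule are invented, and it does not describe any real platform’s recommender.

```python
# A toy sketch of how optimising purely for observed engagement can
# feed confirmation bias: content a user already clicks on is shown
# ever more often. All values here are invented.
import random

random.seed(1)
topics = ["sport", "politics", "science", "5g_conspiracy"]

clicks = {t: 1 for t in topics}   # pseudo-counts of past clicks
user_favourite = "5g_conspiracy"  # the user's hidden preconception

for step in range(200):
    # Recommend in proportion to observed engagement (exploitation only).
    total = sum(clicks.values())
    shown = random.choices(topics,
                           weights=[clicks[t] / total for t in topics])[0]
    # The user is more likely to click content matching their preconceptions.
    if random.random() < (0.9 if shown == user_favourite else 0.3):
        clicks[shown] += 1

share = clicks["5g_conspiracy"] / sum(clicks.values())
print(f"Share of clicks on the favoured topic: {share:.0%}")
# Starts at 25% of recommendations; the rich-get-richer loop pushes it
# well beyond that.
```

The loop has no notion of truth or balance; it simply amplifies whatever signal the user already emits, which is precisely the dynamic described above.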