Who is watching the watchers? While machine learning has the ability to greatly enhance efficiencies, spectacular failures have already occurred. Fortunately, systems are being developed to guard against the irregularities that have led to these glitches.
by Paul Stemmet
ONE OF the initial ideas behind artificial intelligence was to address human bias. This was relatively easy in the old days, when computers only did what they were told, but it is getting harder today, with machines sifting through hundreds of thousands of data points to arrive at conclusions. Bias may creep in during many stages of the learning process, starting with poor framing of a problem. This is nicely illustrated in Douglas Adams' The Hitchhiker's Guide to the Galaxy, written back in 1979, in which a supercomputer named Deep Thought is programmed to answer the ultimate question of life, the universe and everything. After seven and a half million years of computing, the computer came up with the answer: "42". Unlike today's machines, Deep Thought was smart enough to realise that the answer was "probably meaningless", because the beings who initiated it never really understood the question. In other words, they might not have known how to frame the question correctly, or did not understand the different ways in which the phrases or terms in the question could be interpreted.
Machine bias

Take the word "fairness" as an example, and the different interpretations of "fairness" when it comes to issues such as gender or racial equality, as well as the way in which "fairness" should be expressed in mathematical terms when writing machine learning algorithms. Different people will have different interpretations of what fairness is, resulting in different machine learning outcomes, as the sketch below illustrates. Bias may also slip in during the gathering and usage of data, when the data itself is biased or the data is insufficient to generate representative conclusions. A big problem with facial recognition programmes, for example, has been that the "sample faces" used to teach systems only reflect the facial appearances of groups that are easily accessible to the programmer.
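To make that concrete, here is a minimal, hypothetical sketch in Python of how two common mathematical readings of "fairness", equal selection rates across groups versus equal treatment of those who actually qualified, can return opposite verdicts on exactly the same predictions. The group labels and numbers are invented purely for illustration.

```python
# Hypothetical sketch: two common mathematical definitions of "fairness"
# applied to the same invented predictions. Numbers are for illustration only.

# Each record: (group, model_prediction, actually_qualified) -- 1 = yes, 0 = no.
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 0, 0), ("group_b", 0, 0), ("group_b", 0, 0),
]

def selection_rate(group):
    """Demographic-parity view: what share of the group gets a positive decision?"""
    rows = [r for r in records if r[0] == group]
    return sum(pred for _, pred, _ in rows) / len(rows)

def true_positive_rate(group):
    """Equal-opportunity view: of those who actually qualified, how many were approved?"""
    rows = [r for r in records if r[0] == group and r[2] == 1]
    return sum(pred for _, pred, _ in rows) / len(rows)

for group in ("group_a", "group_b"):
    print(group,
          "selection rate:", selection_rate(group),          # 0.75 vs 0.25 -> looks "unfair"
          "true positive rate:", true_positive_rate(group))  # 1.0 vs 1.0   -> looks "fair"
```

Which of the two verdicts counts as "fair" is a human judgement that has to be settled before the algorithm is written; the machine cannot discover it on its own.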
In America this has led to programmes easily identifying stereotypical Western faces, but struggling to identify minority groups. Another problem is that most systems are not "self-conscious"; in other words, they have not been programmed to detect bias in their own "reasoning" or processing of information. In most cases, systems are tested for bias before they are deployed, with this testing done by the same company that designed the system (a minimal sketch of such a per-group check is given at the end of this article). This brings us back to the age-old Roman question, "Quis custodiet ipsos custodes?", which can be roughly translated as "who is watching the guards?"

Real life applications

So how does machine bias affect people? While Deep Thought's conclusion about the meaning of life had a negligible impact on the galaxy, machine learning bias can ruin lives and can even be fatal. The year in which The Hitchhiker's Guide to the Galaxy was released was also the year in which the world suffered its first robot fatality: the American car factory worker Robert Williams. While it is still unclear what exactly went wrong, Williams died instantaneously after being crushed by a one-ton transfer vehicle that was part of a robot designed to retrieve parts from a storage rack.
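As a closing illustration, here is a minimal, hypothetical sketch of the kind of pre-deployment check referred to above: breaking a single headline accuracy figure down per group. The group names and numbers are invented and do not describe any real facial recognition system.

```python
# Hypothetical pre-deployment bias check: compare per-group accuracy instead of
# trusting one overall figure. All labels and numbers are invented for illustration.

from collections import defaultdict

# Each record: (group, correctly_identified) -- 1 if the system got it right.
test_results = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals = defaultdict(int)
correct = defaultdict(int)
for group, ok in test_results:
    totals[group] += 1
    correct[group] += ok

overall = sum(correct.values()) / sum(totals.values())
print(f"overall accuracy: {overall:.0%}")  # 70% -- looks respectable on its own
for group in sorted(totals):
    print(f"{group} accuracy: {correct[group] / totals[group]:.0%}")  # 100% vs 40%
```

The specific numbers do not matter; the point is that an aggregate score can hide exactly the kind of group-level failure described above, which is why such audits ideally need eyes other than those of the company that built the system.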
ABOUT THE AUTHOR: Paul Stemmet is the co-founder and CEO of YAP, a company aimed at optimising advertising returns for publishing companies. For more information, contact him at paul@shinka.sh or 0762754279.