Biased Intelligence


Aashka Trivedi | Comps | B.Tech. 4

Humans love being biased. No matter how “politically correct” or “socially woke” you are, you have your reservations about something. Maybe you don’t trust people who put ketchup in their Maggi, maybe you judge vegans, maybe you (not so) subtly dislike people who act/pray/love in a way that is unfamiliar to you. No matter which way you spin it, human perceptions are inherently biased. But is that a bad thing?


When left to their own devices, individual biases don’t really matter in the grand scheme of things. Every person has always been free to believe whatever they want, and as long as they kept their opinions to themselves, all was well. People who hate carrots simply didn’t eat carrots. Then came the Internet. The Internet offered a platform with a reach hitherto unheard of, giving it the ability to amplify one’s personal bias strongly enough to start affecting collective reasoning. This meant that not only could people profess their hatred of carrots, but a bunch of carrot-haters could come together and convince others about the perils of this vile orange root vegetable. Moreover, for the most part, the Internet allows one to say whatever they want with almost no delay or forethought, and one may choose to do so anonymously. This lack of personal confrontation has made it easier to argue against, and harder to empathize with, different perspectives.

The ease with which a distorted view can be presented on a wide scale seems bad enough, but in reality, modern society faces a much bigger problem. Almost all modern technologies depend massively on data, and now we are not only analyzing that data, but feeding it to Machine Learning systems to teach them how to make the “best” decision. But where does this data come from? From a place where data is freely and abundantly available: the Internet. While many Artificial Intelligence applications were built with the hope that they would prevent unfair human perspectives from assimilating into society on a wider scale, the data presented to AI systems is not without prejudice of its own, resulting in seemingly “biased” algorithms. The major difference here is the rate and scale at which an unfair algorithmic perspective can spread, and how catastrophically it can affect the public at large.
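To see how prejudice in the data becomes prejudice in the model, consider a deliberately tiny sketch. Every name and number below is invented for illustration; real systems are vastly more complex, but the mechanism is the same: a model trained on skewed historical decisions learns the skew, not the merit.

```python
from collections import Counter

# Invented historical hiring records: (group, was_hired).
# The past decisions are skewed; the "model" will faithfully learn that skew.
history = ([("male", True)] * 80 + [("male", False)] * 20
           + [("female", True)] * 20 + [("female", False)] * 80)

def train(records):
    """'Learn' the historical hiring rate per group, i.e. memorize the past."""
    hired = Counter(group for group, was_hired in records if was_hired)
    total = Counter(group for group, _ in records)
    return {group: hired[group] / total[group] for group in total}

def recommend(model, group):
    """Recommend a candidate whenever the learned hiring rate exceeds 50%."""
    return model[group] > 0.5

model = train(history)
print(recommend(model, "male"))    # True: the data's skew becomes the rule
print(recommend(model, "female"))  # False: same candidate, different group
```

Nothing in this code mentions bias; it only counts. The unfairness lives entirely in the training data it was handed.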

Recently, Amazon’s artificially intelligent hiring algorithm was found to be biased against female candidates. A few years before that, an MIT researcher made headlines when she claimed that the facial recognition software of a prominent company couldn’t identify her or any of her dark-skinned friends. In 2016, it was found that LinkedIn’s search engine may be gender biased: if a woman’s name was searched, a similar-looking male name would pop up as a suggestion. None of these algorithms were specifically designed to be biased; they were merely emulating the data given to them, data that was presented as ideal. This gives us something additional to think about. If human perception can unconsciously change the neutrality of algorithms so drastically, what would happen if algorithms were purposely written to mirror the skewed perceptions of a group of individuals?
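Name-suggestion behaviour like LinkedIn’s can emerge from nothing more sinister than frequency counts over skewed logs. A hypothetical sketch (the names and counts are made up):

```python
from collections import Counter

# Invented query logs: one name simply appears far more often than a
# similar-looking one, because past users searched for it more.
search_logs = ["andrew"] * 900 + ["andrea"] * 100
counts = Counter(search_logs)

def suggest(prefix):
    """Suggest the most frequently logged name starting with this prefix."""
    matches = [name for name in counts if name.startswith(prefix)]
    return max(matches, key=lambda name: counts[name]) if matches else None

# Typing "andre" always surfaces the majority name, burying the other.
print(suggest("andre"))  # 'andrew'
```

The suggestion engine is "neutral" in intent, yet it systematically amplifies whichever name history happened to favour.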

Think of Google. When you search for something on the site, information is displayed to you according to relevance only; there is no apparent partiality with respect to what is shown or how results are ordered. This is because Google maintains (or is supposed to maintain) what is known as Search Neutrality. Now imagine that whenever you searched for a certain orange politician, you only got negative news about him. No matter how staunchly you think you support the guy, it’s going to start getting tough to avoid at least a few internal conflicts. Unfortunately, applications like search engines or automated news providers don’t even need to be this blatantly partisan to change communal perceptions; even a few tweaks in their algorithm would be damaging enough. This is because people tend to draw different conclusions from information depending on how it is presented to them.
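How little a "tweak" it takes can be made concrete with a toy ranking function. The headlines, relevance scores, and the hidden sentiment weight below are all invented; real ranking systems weigh hundreds of signals, but the principle scales:

```python
# Invented search results: (title, relevance, sentiment), where
# sentiment < 0 means negative coverage. A neutral engine ranks by
# relevance alone; a small hidden weight on sentiment reorders everything.
results = [
    ("Politician unveils new policy", 0.90, +0.4),
    ("Politician mired in scandal",   0.85, -0.8),
    ("Politician praised by allies",  0.80, +0.6),
]

def rank(items, sentiment_weight=0.0):
    """Order results by relevance plus an optional sentiment nudge."""
    score = lambda item: item[1] + sentiment_weight * item[2]
    return [title for title, *_ in sorted(items, key=score, reverse=True)]

print(rank(results))                         # neutral: by relevance alone
print(rank(results, sentiment_weight=-0.2))  # negative story floats to the top
```

A weight of 0.2, invisible to the user, is all it takes to decide which story greets them first.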

This highlights a major problem. Biased algorithms have a huge effect on our personal perceptions. But because human biases are used to train these algorithms, our perceptions in turn affect how unbiased an algorithm behaves. This Catch-22 situation leaves one to ponder what is easier: changing our inherent perspectives, or remaining stoic in the face of what is shown to us. I’ll let you decide.
