This understanding of algorithms is incredibly naïve and fails to acknowledge how such algorithms are developed in the first place. No algorithm writes itself, and the goal of achieving objectivity by substituting an algorithm for human judgement often overlooks this fact. Though the Apple Card is the most recent case in the news, issues of gender bias in algorithms go well beyond it. Another tech giant, Amazon, is guilty of discriminating against women through an algorithm it developed as a hiring tool. Amazon's goal was to automate the hiring process: ideally, its product would read every resume submitted for a job posting and select the top candidates, saving time and effort. In practice, the algorithm learned to favor male candidates based on differences in the language they used, especially in hiring for technical jobs. Because the training set relied on successful past applicants, and because men far outnumber women in technical fields, the phrasing common in men's resumes was deemed to be of higher quality than that of their female counterparts. Amazon eventually pulled the initiative completely after realizing how spectacularly it was failing.
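To see how this kind of failure happens mechanically, here is a minimal toy sketch, not Amazon's actual model, using scikit-learn and made-up resume snippets with deliberately biased "hired" labels. The token "women" and the resume texts are hypothetical; the point is only that a classifier trained on historically skewed outcomes will turn words that act as proxies for gender into penalties.

```python
# Toy sketch (NOT Amazon's system): a text classifier trained on
# historically biased hiring outcomes learns to penalize tokens that
# act as proxies for gender.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: past resumes and whether the applicant
# was hired. The labels reflect a male-dominated hiring history, so
# "women's" co-occurs mostly with rejections.
resumes = [
    "captain of chess club, built distributed systems in java",
    "women's chess club captain, built distributed systems in java",
    "led robotics team, machine learning research assistant",
    "women's coding society president, machine learning research",
    "designed backend services, led migration to the cloud",
    "women's hackathon organizer, backend services experience",
]
hired = [1, 0, 1, 0, 1, 0]  # biased historical outcomes

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the proxy token "women" is negative:
# historical discrimination has become part of the scoring rule.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])

# Two otherwise similar resumes now receive very different scores.
test = vectorizer.transform([
    "chess club captain, java developer",
    "women's chess club captain, java developer",
])
print(model.predict_proba(test)[:, 1])  # second resume scored lower
```

Nothing in the sketch "intends" to discriminate; the bias enters entirely through the training labels, which is exactly the trap described above.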
The Bottom Line: The common problem with these poorly designed algorithms is that they learn biases present in their training sets as a result of preexisting social discrimination, and then reinforce them. As we rely more and more on algorithms to do work for us, it is important to recognize that, time and time again, they fail to remove human biases. Instead, they simply automate them.
Read More About It:
Algorithms of Oppression by Safiya Noble
Automating Inequality by Virginia Eubanks
Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech by Sara Wachter-Boettcher