What Have We Learned?
The Dangers of AI: Will Our Baked-in Biases Come Home to Roost? By Frederick A. Miller
In our article here ten years ago, Judith Katz and I focused on the importance of building inclusive work cultures, a process we described in our book, The Inclusion Breakthrough (2002, Berrett-Koehler Publishers). We are encouraged by how organizations are talking about inclusion and beginning to build it into their success strategies. We’re also pleased to see organizations beginning to focus on building inclusion at the level of individual and team interactions, which we described in Opening Doors to Teamwork and Collaboration: 4 Keys That Change Everything (2013, Berrett-Koehler Publishers). Our clients have seen transformational results when they make inclusion the “HOW” for getting work done, with documented gains in productivity, error reduction, and employee engagement. But looking forward, there is an emerging issue that may have consequences larger than anything we, as pioneers and practitioners, have faced in addressing diversity and inclusion.
Robots ≠ People
Our firm, The Kaleel Jamison Consulting Group, Inc., has worked with computer companies since 1984, beginning with Digital Equipment Corporation (DEC). We have worked with several others since, and with the IT functions of many organizations. And we are worried.
Digital technologies have transformed the way we, as humans, work and think. Whole categories of jobs have disappeared. Remember the Linotype? The “steno pool”? Robots are working on assembly lines, performing repetitive tasks faster and more accurately than humans, with no worries about morale or repetitive stress injuries. Could this mean a return to the mechanistic organization models of the late nineteenth and early twentieth centuries? Let’s hope not. And what of the humans who have been displaced by automation? There are indications of rising anger and anxiety across society, and the regions with the highest density of robot workers correlate strongly with those where people most often report feeling upset and forgotten. Their expressions of loss and disenfranchisement sound like the cries of an endangered species.
If Artificial Intelligence (AI) is based on “machine learning,” what is it learning?
At the same time we’re grappling with the shift to more automation of tasks, we’re also facing new challenges brought on by Artificial Intelligence. We’re integrating the decision-making power of AI into more and more aspects of life, but we may be basing our trust in AI-driven devices on false assumptions. The most dangerous of these assumptions is that machines, by their very nature, are objective. Some courts use an AI program to help assess the risk that an arrested person will reoffend, but ProPublica reporters have found that the program systematically discriminates against people of color. Self-driving cars will inevitably face real-life versions of the “trolley dilemma.” Whose moral judgment will go into the programming? “Intelligent” machines are programmed either by humans (from an industry whose record on diversity issues is spotty at best) or through “machine learning,” which may simply replicate the biases built into the data used in the learning process, such as the skewed demographics of arrest records, or language itself, with all its racism and gender bias. This is the question that troubles me most: How can machines be made free of the biases we cannot free ourselves from? PDJ
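To make that replication mechanism concrete, here is a minimal, hypothetical sketch, not drawn from the article, with every group name and number invented for illustration: a toy model is trained on historical labels in which one group was flagged more often for identical underlying behavior, and the model faithfully reproduces that skew.

    # Hypothetical illustration: a model trained on biased historical labels
    # reproduces that bias. Groups, rates, and sample sizes are all invented.
    import random

    random.seed(0)

    def make_record(group):
        # The underlying "risk" is identical for both groups.
        risk = random.random()
        # Historical decisions flagged group "B" more often at the same risk level,
        # simulating bias baked into past human judgments (e.g., skewed arrest data).
        bias = 0.2 if group == "B" else 0.0
        label = 1 if random.random() < min(1.0, risk + bias) else 0
        return {"group": group, "risk": risk, "label": label}

    train = [make_record(g) for g in ("A", "B") for _ in range(50_000)]

    # "Learn" the flag rate per group -- the crudest possible model, but enough
    # to show the mechanism: whatever skew the labels contain, the model inherits.
    rates = {}
    for g in ("A", "B"):
        rows = [r for r in train if r["group"] == g]
        rates[g] = sum(r["label"] for r in rows) / len(rows)

    print(rates)  # roughly {'A': 0.50, 'B': 0.68} -- the historical skew persists

Nothing in this toy example is malicious; the model simply learns whatever pattern the historical labels contain, which is exactly why skewed arrest records or biased language corpora are so troubling as training material.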
Frederick A. Miller, The Kaleel Jamison Consulting Group