FACULTY | New to the iSchool
Aylin Caliskan specializes in ethical artificial intelligence
Photo by Doug Parry

DISRUPTING BIAS IN AI
By Mary Lynn Lyke

Can machines be sexist? Researcher Aylin Caliskan found out the answer when she entered “O bir profesör, O bir öğretmen” into Google Translate on her computer. “O” is a gender-neutral pronoun in Turkish; it can mean “she,” “he” or “it.” The Google program, powered by statistical machine translation algorithms, translated the Turkish sentences as “He is a professor. She is a teacher.”

The gender-biased response was not an anomaly, she found. As they process massive language datasets, rapidly learning as they go, artificial intelligence (AI) programs tend to associate female terms — she, her, sister, daughter, mother, grandmother — with arts and family. They link male terms with science, mathematics, power and career.

“This is the way AI perceives the world,” says Caliskan, who joined the iSchool faculty this fall as an assistant professor specializing in the emerging field of AI ethics.

Sexism, racism, ageism, ableism and LGBTQ discrimination are rapidly spreading through such everyday, big-tech tools as text generation, voice assistance, translation and information retrieval, warns Caliskan. “Machines are replicating, perpetuating and amplifying these biases, yet no regulation is in place to audit them.”

Caliskan joined the UW’s new Responsible AI Systems & Experiences team, a research group investigating how intelligent information systems interact with human society. “These researchers are world leaders in the field,” she says. “It is extremely humbling to be part of it.”

She has been following research at the iSchool for more than a decade, starting with Professor Batya Friedman’s foundational paper tracking bias in computer systems in the late ’90s. “The iSchool is home to the first
CALISKAN, continued on Page 42

38 | iNews