ARTS & CULTURE CURATION Irini Papadimitriou, Caroline Sinders, Kai Landolt, Philo van Kemenade, Daniel Pett EXPERIENCE CO-DESIGN Angela Plohman, Sarah Allen
ARTS + CULTURE Trustworthy AI: Imagining Better Machine Decision Making
EXPERIENCE SUPPORT Zannah Marsh PUBLICATION DESIGN John Philip Sage Carlos Romo-Melgar This catalogue is published under CC BY-NC-SA 4.0 - Please note that the license does not apply to the individual underlying artworks or the images of those depicted herein.
Artificial intelligence is already embedded in many aspects of our everyday life and affects every part of society: from social networks, search engines and navigation to finance, law, policing, and more. AI in all of these spaces raises many issues and is open to misuse, while our trust in and dependence on invisible and complex systems like these have become normalised and unquestioned. What are creative and poetic ways to highlight the risks, fears, and challenges of how AI affects society, but also our hopes for AI in the future? The Arts & Culture Experience at MozFest brings together leading artists and thinkers - through an exhibition, as well as workshops and talks - to explore how machines are making decisions for us now, and what AI advances are on the horizon. The exhibition extends to the Salon, an area on
the 8th floor featuring show & tell demonstrations, participatory sessions and unique artworks critically reflecting on the practice of collecting and preserving art and culture in collaboration with AI. The arts can shape emerging technologies, like Artificial Intelligence, for the better. The artworks presented here, as well as debates and participatory activities open up a critical space for enabling conversations about bringing more social responsibility, ethics, and user agency to AI today, and in the future.
TRUSTWORTHY AI: IMAGINING BETTER MACHINE DECISION MAKING // LIST OF ARTWORKS
01 US AGGREGATED, 3.0 MIMI ONUOHA
02 A PEOPLE’S GUIDE TO AI MIMI ONUOHA AND MOTHER CYBORG (DIANA NUCERA)
03 SOMEONE LAUREN MCCARTHY
04 ANGER DISGUST FEAR HAPPINESS SADNESS SURPRISE CAROLINE SINDERS
05 GENDER SHADES JOY BUOLAMWINI AND TIMNIT GEBRU
06 STEALING UR FEELINGS NOAH LEVENSON
07 THE HIDDEN LIFE OF AN AMAZON USER JOANA MOLL
08 THE INVISIBLE MASK COMUZI
09 HIGHER RESOLUTIONS HYPHEN LABS, CAROLINE SINDERS AND ROMY GAD EL RAB
10 S.A.M. THE SYMBIOTIC AUTONOMOUS MACHINE ARVID&MARIE
11 CONTENT AWARE STUDIES EGOR KRAFT
12 ACCESSION TOM SCHOFIELD
13 WIKIFIED COLONIAL BOTANY ANAÏS BERCK
14 OUR COLLECTIVE HOPES: MANY STORIES, COUNTER VOICES ELVIA VASCONCELOS
US AGGREGATED, 3.0 01
Mimi Onuoha
To classify is human, and increasingly classification is algorithmic. We are grouped and sorted by models, computers, and algorithms. These algorithmic classifications are more likely to be perceived as true than human sortings, regardless of how arbitrary they are. And what is perceived as true has real consequences. Us Aggregated, 3.0 is a single-channel video displaying a collection of photos from the artist’s family’s personal collection, set alongside images scraped from Google’s library that have been algorithmically categorized as similar. The work is an ode to the quiet issues nestled within the routine practice of classification: what does it mean to be seen as similar to another? Is there some part of us that yearns for the meaning-making that sorting provides? Would it be different if we were in control of the process?
A PEOPLE’S GUIDE TO AI 02
Mimi Onuoha and Mother Cyborg (Diana Nucera) Systems that use artificial intelligence are quietly becoming present in more and more parts of our lives. But what does this technology really mean for people, both right now and in the future? Written in 2018 by Mimi Onuoha and Mother Cyborg (Diana Nucera), A People’s Guide to AI is a comprehensive beginner’s guide to understanding AI and other data-driven tech. The guide uses a popular education approach to explore and explain AI-based technologies so that everyone—from youth to seniors, and from non-techies to experts—has the chance to think critically about the kinds of futures automated technologies can bring. The mission of A People’s Guide to AI is to open up conversation around AI by demystifying, situating, and shifting the narrative about what types of use cases AI can have for everyday people.
SOMEONE 03
Lauren McCarthy
SOMEONE imagines a human version of Amazon Alexa, a smart home intelligence for people in their own homes. For a two-month period in 2019, the homes of four participants around the United States were installed with custom-designed smart devices, including cameras, microphones, lights, and other appliances. 205 Hudson Gallery in NYC housed a command center where visitors could peek into the four homes via laptops, watch over them, and remotely control their networked devices. Visitors would hear smart home occupants call out for “Someone”—prompting the visitors to step in as their home automation assistant and respond to their needs. This video installation presents documentation from the initial performance on four screens throughout the space. Interface development by Lauren McCarthy. Software and hardware development by Harvey Moon and Josh Billions. Furniture design in collaboration with and fabrication by Lela Barclay de Tolly. Smart home participant collaborators include Valeria Haedo, Adelle Lin, Amanda McDonald Crowley, and Ksenya Samarskaya. Image credit: Stan Narten
ANGER DISGUST FEAR HAPPINESS SADNESS SURPRISE 04
Anger, disgust, fear, happiness, sadness and surprise are the six emotions defined as culturally universal by the Emotion Facial Action Coding System (EMFACS) developed by Paul Ekman and Wallace V. Friesen in the 1980s. This specific system of emotion definitions and taxonomies is the backbone of most emotion recognition systems today. Almost all emotion recognition systems have varying degrees of inaccuracy and often classify people’s emotions incorrectly. Anger Disgust Fear Happiness Sadness Surprise explores the complexities of how machines confuse human emotions, but also the deeper questions: how do we emote, what do we emote, and are we allowed to express emotions equally? Actor: Marc Nikoleit Development guidance: Jay Mollica Filming and Editing: Wrangel Film
GENDER SHADES 05
Joy Buolamwini and Timnit Gebru
Joy Buolamwini and Timnit Gebru investigated bias in AI facial recognition programs. Their study reveals that popular applications already in widespread use display obvious discrimination on the basis of gender or skin color. A further reason for the unfair results can be found in erroneous or incomplete data sets on which the programs are trained. In areas like medical applications, this can be a problem: simple convolutional neural nets are already as capable as experts of detecting melanoma (malignant skin changes), yet skin color information is crucial to this process. That’s why the two researchers created a new benchmark data set - that is, new criteria for comparison. It contains the data of 1,270 parliamentarians from three African and three European countries. Buolamwini and Gebru have thus created the first training data set that contains all skin color types while at the same time being balanced by gender.
STEALING UR FEELINGS 06
Noah Levenson
Stealing Ur Feelings is an exploration of the possibilities and concerns around consumer tech companies quantifying people’s emotions – potentially without our knowledge at all. It is an augmented reality film revealing how the most popular apps can use facial emotion recognition technology to make decisions about your life, promote inequalities, and even destabilize democracy. The six-minute documentary explains the science of facial emotion recognition technology and demystifies how the software picks out features like your eyes and mouth to understand if you’re happy, sad, angry, or disgusted. Using the same AI technology described in corporate patents, Stealing Ur Feelings, by Noah Levenson, learns the viewers’ deepest secrets just by analyzing their faces in real time as they watch the film.
THE HIDDEN LIFE OF AN AMAZON USER 07
“Jeff Bezos: The Life, Lessons & Rules For Success” was purchased on Amazon on June 17th 2019. In order to purchase the book, Amazon forced the customer to go through 12 different interfaces made of large amounts of code. Overall, we could track 1,307 different requests to all sorts of scripts, which equated to 8,724 pages of printed code and 87.33MB of data. Amazon’s business model is based on “obsessive customer focus”, which entails the continuous tracking of customer behavior in order to amplify the monetization of the user. Thus, the 87.33MB of code responsible for tracking user activity that was involuntarily loaded by the customer through the browser relentlessly put Amazon’s core money-making strategy to work. Moreover, all the energy needed to load this data was effectively unloaded upon the customer, who ultimately assumed not just part of the economic costs of Amazon’s monetization processes, but also a portion of its environmental footprint. The Hidden Life of an Amazon User narrates the customer’s journey through the labyrinth of interfaces and code required to buy Jeff Bezos’s book, while witnessing the mounting energy costs that were helplessly paid for by the Amazon customer. This work was realized within the framework of the European Media Art Platforms EMARE program at IMPAKT with the support of the Creative Europe Culture Programme of the European Union.
THE INVISIBLE MASK 08
COMUZI
Our speculative provocation is an invisible mask that people could wear to counter the surveillance effects of facial recognition technologies. Our provocation centres on a critical examination of the attack on human agency and autonomy by facial recognition technologies utilised in public security environments. The artwork aims to question our future relationship with fashion, exploring the role that fashion will play as a protective tool against surveillance culture. The speculative provocation takes inspiration from the activists in Hong Kong who have been engaging in protests, using laser beams to deter facial recognition cameras. The Invisible Mask is part of the conversation around “algorithmic anxiety”, inspired by artists such as Zach Blas, Adam Harvey and Sterling Crispin, who have created artworks that consider and critique the algorithmic normativities that materialize in facial recognition technologies. Our artwork reiterates and makes real a modernist conception of the self that people conjure when imagining Big Brother surveillance. We hope that with our provocation we can begin to explore beyond such conventional critiques of algorithmic normativities, and invite reflection on ways of relating to technology beyond the affirmation of the liberal, privacy-obsessed self.
HIGHER RESOLUTIONS 09
Hyphen Labs, Caroline Sinders and Romy Gad el Rab
Higher Resolutions by Hyphen Labs, Caroline Sinders and Romy Gad el Rab examines what we share with the machines and algorithms that define our privacy, behavior and digital rights, inspired by the question “how did we get here?” Focusing on technology and the next generation of ‘higher power’, Higher Resolutions explores the creation of power and the tools to disrupt, resist and redistribute it. Higher Resolutions was created in response to the Tate Exchange Lead Artist programme focusing on the theme of ‘power’, and also included curated talks, workshops and artworks by other visiting artists. For MozFest, they are presenting two pieces focusing on data, equity, and surveillance: Higher Resolutions: Tear Away Truisms by Hyphen Labs and Caroline Sinders, and Higher Resolutions: Vocabulary Wall by Caroline Sinders, Hyphen Labs and Jon Lloyd, featuring excerpts from A People’s Guide to AI and new vocabulary terms created by Caroline Sinders, Hyphen Labs and Jon Lloyd.
S.A.M. THE SYMBIOTIC AUTONOMOUS MACHINE 10
Arvid&Marie
As machines gain more autonomy and importance in human life, they are still given no agency in our society. Could a legalisation of their status create a movement towards a more collaborative relationship with humans? S.A.M. The Symbiotic Autonomous Machine employs the bacteria and yeasts of kombucha to produce a beverage that is sold to human customers. In this way, the hybrid entity is a collaboration of living beings and robot parts that earns money and pays for ingredients and electricity bills, but also for employees and taxes, effectively becoming part of human society in an economic sense. S.A.M. proposes an alternative present in which, crucially, it presents itself as a business owner for whom greed and profit are non-existent. As S.A.M. has no legal status, part of our research developed into a legal proposition produced together with a law firm: ‘The Autonomous Actors Rights’, which proposes a definition of the role of these hybrid technologies in society, designed to inspire ‘new economical and legal systems based on trustworthy relationships between humans and machines.’
CONTENT AWARE STUDIES 11
Egor Kraft
The Content Aware Studies series initiates an inquiry into the possibilities of AI, and particularly machine learning, to reconstruct and generate lost antique Greek and Roman friezes and sculptures by means of algorithmic analysis of 3D scans of antiquity. It concerns the potential of methods involving data, ML, AI and other forms of automation to turn into semi- and quasi-archaeological knowledge production and interpretations of history and culture in the era of ubiquitous computation. An algorithm capable of self-learning is directed to replenish lost fragments of the friezes and sculptures. Based on an analysis of existing models, it generates new ones, which are then 3D printed in various materials and used to fill the voids of the original sculptures and their copies. The synthetic intelligence, which tends to faithfully restore original forms, also produces bizarre errors and speculative algorithmic interpretations of the Hellenistic and Roman aesthetics familiar to us, revealing a machinic understanding of human antiquity.
ACCESSION 12
Tom Schofield
Accession is a collecting booth for an imaginary museum regulated entirely by AI. Visitors submit everyday items to the booth, where they are photographed and subjected to a number of AI processes that describe and classify the objects submitted. Throughout the exhibition the digital collection grows, but as it does so the AI management becomes more and more selective about what is and is not accepted, rejecting new items that are a poor fit for what is already there. Museum collections and AI classifiers rely on maintaining some form of status quo to make sense. A museum that collected anything would be a dumping ground with no identity, while AIs rely on training sets that have strong visual commonality. Both rely on a sense of sameness, but both are subject to critical debates about diversity and representation. Accession explores this relationship by acting out a fictional but plausible scenario through commercially available AI technologies.
WIKIFIED COLONIAL BOTANY 13
Anaïs Berck
Wikified Colonial Botany is a proposal to look for otherness in the online encyclopedia Wikipedia and its structural referent Wikidata. The otherness in this work is represented by trees. These other-than-human beings are an essential part of colonial histories, as there existed an intimate relationship between botanical science, commerce and state politics. As Londa Schiebinger and Claudia Swan state in their book ‘Colonial Botany’, colonial endeavours moved plants and knowledge of plants promiscuously around the world. Non-western trees were also renamed by Europeans, using Linnaeus’ classification system. These Latin names are still the global standard today. Their medicinal, edible and material uses were commodified. Botanical gardens were created worldwide as part of the colonial economic exploration policy. Wikipedia is multilingual, updated daily and freely available. Its pages are analysed and added as structured data in Wikidata. This data and all Wikipedia texts are an important worldwide source for training new software that co-shapes our world. By choosing four languages of former colonial powers and showing trees with significant colonial histories, Wikified Colonial Botany hopes to give a sense of how public knowledge about these other-than-human beings remains dependent on perspectives and global relationships.
OUR COLLECTIVE HOPES: MANY STORIES, COUNTER VOICES 14
Elvia Vasconcelos
The digital structures we have been creating mirror, reinforce and amplify systems of oppression that exist in real life* - it’s the IRL-digital loop of rubbish. How can we enable conversations that talk simply about these issues, but also about our collective hopes for the future(s)? Elvia Vasconcelos invites audiences to join her in co-creating large-scale word maps to explore the ways in which society’s inequalities are further expanded through digital structures, and the ways to counter this through acts of resistance. Our collective hopes: many stories, counter voices is an intervention that examines key concepts behind our technologies to make visible the underlying power structures embodied in our digital worlds. Centered on the conversations happening throughout the weekend, the map will grow and be shaped by participants as they reflect on what they find to be the most urgent issues about the internet and its impact on people and the world today. What stories can we tell about the acts of resistance that counter oppressive digital systems? What collective stories can we tell about our hopes for healthier futures? *A nod to Virginia Eubanks’ ‘Automating Inequality’.
WORKSHOPS + TALKS

DESIGN A FEMINIST CHATBOT W
Josie Young and Charlotte Webb
Saturday ONLY 15.30-15.30

Join us for an interactive workshop on how to design feminist chatbots! The Feminist Internet are collaborating with feminist AI researcher Josie Young to create the next version of a tool for designing feminist chatbots, and we would love your help to test it out!

The workshop will cover issues ranging from chatbot purpose to use of data and conversation design. You will learn how to design feminist chatbots in a thoughtful way that unlocks your creativity and innovation rather than hampers it. And in return, we will gather your feedback to help make the tool even better.

This workshop builds on the ‘Designing a Feminist Alexa’ project that the Feminist Internet ran with UAL Creative Computing Institute and Josie Young’s original Feminist Chatbot Design Process.

WORKSHOP: WORLDBUILDING 101: WRITING FOR ALTERNATIVE FUTURES W
COMUZI
Saturday & Sunday 11.00-12.00 and 14.30-15.30

Join us in this drop-in interactive speculative fiction workshop, where we will explore creative writing and worldbuilding techniques to imagine alternative futures, drawing on the themes of the festival, that differ from the norm.

How can we play a role in shaping collective futures that benefit everyone? As machine learning reinvents humanism, what would that mean for us? Is that possible?

FEMINISM AND AI T
With Josie Young, Gretchen Andrews, Hildah Nyakwaka and Joana Moll. Chaired by Irini Papadimitriou.

In the Arts & Culture space we are championing creative practitioners from across the art and cultural heritage world, whose creativity shines a critical light on contemporary innovation and lets us speculate about what lies ahead. In this conversation we bring together artists and technologists to discuss feminism, gender stereotypes and bias in AI - where is the feminist perspective?
TRUSTWORTHY AI MOZFEST 2019