“AI is about more than data, algorithms, and compute power; it is about people and relationships, values and ethics, power dynamics and
societal systems. It concerns, and will likely affect, us all.”
Yasmin Ibison, Senior Policy Advisor, Joseph Rowntree Foundation
Letter from the Editor.
We stand at the forefront of the Fourth Industrial Revolution, a transformative era driven by the rapid advancement and integration of information technologies reshaping nearly every aspect of our lives. Inventions such as OpenAI’s ChatGPT have marked an unprecedented acceleration in technological progress, placing the tech industry at the center of global attention and fueling an unstoppable race for increasingly advanced machines. For many, it feels like this is the moment: a defining point in history akin to what science fiction writers once envisioned. As a result, a wide array of competing visions for an “AI future” has emerged, from hopeful and even utopian scenarios to darker, cautionary ones.
However, not all voices make it to the forefront. These visions often originate within big tech circles and lack a nuanced, human-centered focus that strongly considers AI's societal impacts. This is where Beyond Code comes in. With its debut issue, Computer Visions (a wordplay referencing computer vision, the AI field dedicated to teaching systems to interpret visual inputs), Beyond Code seeks to open a multidisciplinary dialogue, inviting academics, industry professionals, artists, and those simply interested in the topic to share their perspectives on how AI will shape our lives, no tech background required. We’re beyond thrilled to showcase a rich diversity of visions from contributors of all backgrounds, each sharing insights and imagining the challenges and possibilities that the future holds.
The opening essays in this issue spotlight The Missing Perspectives: visions of the automated future that have been largely overlooked. Articles by Grainne Popen and Thomas Mollema, along with an interview with Cindy Friedman, stress the need for diverse perspectives, while a review of Human Freedom in the Age of AI by Filippo Santoni de Sio offers hope through the plurality of ideas in addressing challenges like the responsibility gap. The section AI Ethics in Practice examines the challenges of integrating AI into daily life, emphasizing ethical considerations in design and management. Articles by Sarthak Anand and Raphael Tissot discuss the restoration of values in LLMs and the necessity of ethical leadership within AI-driven companies. The articles in Foundations in Focus demonstrate the interdisciplinary nature of technology and its societal implications. Eduard Saakashvili, Abigail Nieves, Alejandro del Valle Louw, and Dennis A. Mertens explore how art, philosophy of science, policy, and neural cellular automata shape responses to technological advances. The concluding essays in AI Horizons challenge conventional ideas about AI's future and advocate for diverse strategies to navigate its impact. Marianne Kramer and Utrecht University students question reliance on technology as a panacea and explore long- and short-term approaches to AI's societal effects.
Contributors.
Beyond Code.
Editor-in-Chief
Luiza Swierzawska
Editorial Board
Thomas Wachter
Will Cosgrave
Alejandro del Valle Louw
Kuil Schoneveld
Tiara Dusselier
Graphic Design
Lesley Spedener
Ilia Timpanaro
Simona Jansonaite
General Members
Olivier Vroom
Email: beyondcodemagazine@gmail.com
Alejandro del Valle Louw
Precautionary Principle 52
A master’s student of Philosophy & Public Policy at the London School of Economics. He spends his time enthralled in the philosophy of AI and is currently the Head of Ethics & Privacy at Spheria AI.
Amber Koelfat
AI Safety Discussion Group 40
A master's student in artificial intelligence and an organizer for AI Safety Utrecht. Her interests lie in exploring and bridging the gap between various fields such as AI, psychology, data science and design. She is writing her thesis on mechanistic interpretability.
Dennis A. Mertens
Neural Cellular Automata 57
A master's student of Artificial Intelligence who holds an undergraduate degree in Data Science and Artificial Intelligence. He began his journey making minigames as a hobby. His present interest in the field can be understood through a simple question: How can we steer processes that co-evolve to produce behaviors that we would recognize as intelligent?
Eduard Saakashvili
Asteroid City 63
A writer and data scientist based in Utrecht. He recently obtained a master's in AI from Utrecht University and has a bachelor's degree in Film and Media Studies from Swarthmore College. Before his current work in data science consulting, he spent several years working in journalism and communications.
Grainne Popen
Digital Amnesia 13
A law student at University College Dublin. She is interested in cybercrime, specifically AI-enabled cybercrime. She also has a background in social justice work, specifically in the U.S.
Koen Willemen
The Future of AI 68
A graduate of the BSc in Philosophy, Politics, and Economics at Utrecht University. He now studies governance, which will help him towards his ambition to work on a more humane future.
Luiza Swierzawska
Human Freedom in the Age of AI 25
Editor-in-chief of Beyond Code and a recent graduate of Applied Data Science from Utrecht University. With a background in Philosophy, Politics, and Economics, she is passionate about researching advanced technologies through the lens of these disciplines.
Marianne Kramer
Technological Solutionism 38
Studies Social Data Science at the University of Copenhagen and holds a BSc in PPE from Utrecht University. She works as a data scientist and is a founding member of the Copenhagen Technology Policy Youth Committee. Her main interests lie in the intersection of ethical AI, NLP, network analysis and research methodology.
Max Schaffelder
AI Safety Discussion Group 40
A master's student in artificial intelligence and an organizer for AI Safety Utrecht. He is interested in AI alignment research and doing his thesis on model collapse in large language models.
Olivier Vroom
AI Safety Discussion Group 40
Olivier holds a BSc in Economics and is currently pursuing an MSc in Artificial Intelligence. Passionate about the intersection of technology and society, particularly AI, Olivier leads an AI safety discussion group in Utrecht and is a regular attendee at tech conferences.
Raphael Tissot
Head of AI Ethics on Day One 33
Has worked in tech for 10 years, helping startups grow from seed to Series A in France and Canada. He holds an MSc in Innovation Management and has focused his work on creating consumer tech with disruptive user experiences.
Sarthak Anand
Training LMs for IKEA 30
A data scientist at IKEA, where he leads the development of large language models. He holds a master’s degree in Artificial Intelligence from Utrecht University. His experience includes machine learning research at Techwolf and H2O.ai, as well as natural language processing research at Midas Lab, IIIT Delhi.
Thomas Wachter
Interview: Cindy Friedman 9, AI Discussion Group 42, Interview: Abigail Nieves 48
Recently graduated from the Artificial Intelligence master’s program at Utrecht University. He is interested in the intersection of philosophy of AI, cognitive science, and the ethics of technology.
Thomas Mollema
AI and the Need for Decolonization 17
A master's student of Artificial Intelligence at Utrecht University. He obtained an undergraduate degree in philosophy at Erasmus University Rotterdam. Thomas writes about the (political) philosophy of AI, language, literature and mind.
Beyond Code.
The Missing Perspectives.
This section highlights voices, viewpoints, and narratives often excluded from mainstream discussions about AI. By amplifying diverse perspectives, we aim to enrich the dialogue surrounding AI and its societal implications.
Researcher Interviews.
A Relational Approach to AI Ethics: Visions from Ubuntu Philosophy
An Interview with Cindy Friedman
By Thomas Wachter
Philosophers have historically struggled with the question of what it means to be human. From Aristotle’s “De Anima” to Hannah Arendt’s “The Human Condition,” the essence of the human has not only been scrupulously studied but has remained elusive and hidden from us.
Even though some might regard it as theoretical and impractical, this question is perhaps the most important we could ask, especially when machines are developed to imitate or even replicate (parts of) ourselves. The image we portray and replicate becomes true, and from there, it determines how we see ourselves and others. In the words of the psychologist and philosopher Karl Jaspers: “The image of the human being that we hold to be true becomes itself a factor in our lives. It determines the ways in which we deal with ourselves and with other people. It determines our attitude in life and our choice of tasks.”
Nowadays, if the question is not already difficult enough, AI systems perform tasks reserved for humans, and social robots are being integrated into our daily lives. This alters our conception of ourselves and makes us reevaluate what we understand as “the human.”
But how, and to what extent, do these disruptive technologies disrupt our self-understanding? How do these disruptions challenge existing conceptualizations of “the human”? Which ethical theories and normative frameworks are better equipped to guide us in responding to those challenges?
Cindy Friedman is a South African philosopher interested in those questions. She works at the Ethics Institute at Utrecht University. Her project, embedded in the Ethics of Socially Disruptive Technologies research consortium (ESDIT), looks at how humanoid robots might disrupt our understanding of what it means to be human.
With that aim in mind, Friedman brings philosophical ideas from sub-Saharan Africa into the discussion that contrast with Western theories. In particular, she is interested in Ubuntu, which, unlike individualistic notions from Western philosophy, puts forward a relational view of the human. For Ubuntu philosophers, being human means being interdependent. In other words, a person can only be understood in relation to the other. A Zulu phrase summarizes this: Umuntu ngumuntu ngabantu, which means “a person is a person through other persons.” This is a richer account for Friedman than the Cartesian “I think, therefore I am.”
Cindy Friedman is a PhD candidate at the Ethics Institute of Utrecht University. Her research focuses on The Ethics of Humanoid Robots and is part of the Ethics of Socially Disruptive Technologies research program.
Why is it necessary to understand what being human means and entails in this context of highly technologized societies?
Cindy Friedman: Because the very idea of AI is to try to replicate human intelligence, we face this question. When AI systems behave in human-like ways and robots increasingly look human-like, we have to think about what it means to be human. In addition, there is a lot of fear surrounding the technology. People are concerned about AI systems replacing us. Thinking about and understanding how we are unique compared to humanoid robots would dispel some of those concerns. We can embrace the benefits this technology brings while possibly negating some of the fear that surrounds it. Another reason to think about this is the legal and moral dimension of the problem, especially concerning personhood.
So we have a robot like Sophia, who has been granted citizenship in Saudi Arabia, which was hugely controversial, especially in a country where even women aren't seen as having the same human rights as men. So, it also speaks to those kinds of legal and political discussions.
“The technology is cool, but just because we can do it doesn't mean that we should.”
The issue seems both theoretical and practical. What is the role of philosophy in this case?
Cindy Friedman: Personally, I think the role of philosophy is to take a step back, especially because people involved in the design and creation of these technologies tend to be really optimistic about them. The view for them is usually “move fast and break things.” So, for me, the role of philosophy has been trying to take a step back and think, “The technology is cool, but just because we can do it doesn't mean that we should.”
This means thinking carefully about the role that these technologies can play in society and how they might be harmful or beneficial not only for us but also for the environment and our social structures in general.
How do you balance pointing out the difficulties while not being too pessimistic about the development of technology?
Cindy Friedman: This is a complicated topic that I and many ethicists struggle with. I need to constantly remind myself to balance things a little bit. We might do that by focusing on the design of these technologies. If we focus on the design, the underlying ideas, and the intended uses, one can then point to those issues, slightly change them, and hopefully mitigate some of the risks. We must try to find a way to work together, but that also requires interdisciplinary work, which is difficult because we speak very different languages.
“The aspect of Ubuntu philosophy I'm using is this understanding of a person through other people.”
You work in AI and robot ethics within a non-Western framework called Ubuntu. What is Ubuntu philosophy, and what can it offer AI ethics?
Cindy Friedman: Ubuntu philosophy is a very broad, multi-faceted philosophy. Therefore, writing an applied ethics paper was a big challenge for me. The aspect of Ubuntu
philosophy I'm using is this understanding of a person through other people. This is a more relational understanding of what it means to be human.
Putting forward this view, not only within a community but between countries, might provide a useful lens through which to see AI research and the development of AI tools. We know that the West has a lot of power and influence. However, the problem is that when designing these kinds of technologies, they're designing them for Western people, with Western ideals, and not necessarily thinking about the values and ideals of other people and other parts of the world, right? So, there is value in thinking about African philosophy, and this isn't only about African philosophy. It's to show that we also need to consider how this technology will impact people in non-Western places because we live in a global village, right? This is also true for other non-Western philosophies, like Confucian ethics and Buddhism, which are very interesting, have been studied for many years, and are very valuable.
Given that Ubuntu is one of our missing visions, how could we implement it in developing these technologies?
Cindy Friedman: We could reach a broad-scale consensus on implementing AI and dealing with these ethical challenges, but that is difficult when every society has different values. In other words, if we are having these more global discussions, then it's a case of having discussion groups where there is representation from people from many different cultures and societies who can bring different perspectives to the table. So, that is one way to think about implementing it.
In your paper “Ethical concerns with replacing human relations with humanoid robots: an Ubuntu perspective,” you discuss the concept of “fully human.” What does it mean to become “fully human,” especially in the age of AI?
Cindy Friedman: In Ubuntu, there is a moral understanding of what it means to be a full human. Part of this is having meaningful interpersonal relations with other human beings. So, from an Ubuntu perspective, there is a constant concern that, given technology implementation, we need to develop these relationships. This is not only in the context of humanoid robots, which I speak about, but also chatbots and virtual girlfriends, for example. My concern with AI is that it could take us out of those human-human relations. That is a problem for our moral development because part of what it means to be human in the context of this philosophy is to become the most morally perfect person you can be.
What is your vision of a highly-technologized but fully-human society?
Cindy Friedman: What I am very concerned with now, specifically in the context of my research, is this idea of human replacement and replacing human beings with robots. The purpose should be more focused on technologies that aid and assist us in having better lives and engaging with each other more fully. My ideal scenario would be to develop technologies that would aid and assist and not replace human beings in a social context, but also sustainable technologies that do not deplete our resources or contribute to climate change.
Photo by Robs, Unsplash
Digital Amnesia.
By Grainne Popen
The genesis of AI has prompted a transformation that is often compared to the creation of the internet. A particular type of AI system, known as Large Language Models (LLMs), has greatly impacted how we access and process information. With advancements in language processing and the growing accessibility of LLMs, pertinent AI-powered tools are poised to become primary sources of information for future generations due to their ability to provide instant, personalized, and comprehensive answers to a wide range of queries. These systems are at times portrayed as objective, superintelligent machines capable of solving all kinds of problems. However, AI systems, as often pointed out, are imbued with the biases present in the data used to train them. There are all kinds of different biases, from race to gender and even political ones. However, there is a less discussed and perhaps more important one: language bias.
Because these systems are trained with the data available on the Internet, what they “know” about the world comes from predominantly Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies that contribute the greatest amount of training data. This poses a threat to equitable knowledge representation. In particular, the lack of digitized historical and cultural information from non-WEIRD countries creates digital amnesia, erasing the contributions of entire civilizations and making them invisible in a world that increasingly sees and values what is digital.
“...there is a less discussed and perhaps more important one: language bias.”
At its core, the problem is not only that LLMs are a representation of the datasets used to train them but also that they create content that reproduces that information. As Kate Crawford, in ‘Atlas of AI,’ notes: “Computational reason and embodied work are deeply interlinked: AI systems reflect and produce social relations and understandings of the world.” On the one hand, these datasets, representing existing social practices and power dynamics, can be exclusionary, leading to problematic algorithms that, on the other hand, reproduce those biases.
The evolution of Large Language Models (LLMs) is deeply intertwined with the emergence of massive datasets, such as the Toronto Books Corpus (2007). The BookCorpus is a dataset consisting of 7,000
Photo by Suzy Hazelwood, Pexels
self-published books that trained the initial GPT model for OpenAI. In some cases, as demonstrated by the BookCorpus, the data that trains LLMs can be surprisingly narrow. Crucially, any un-digitized phenomenon will not be included or represented in LLMs, meaning that entire languages will be absent. This data exclusion has significant consequences, especially given the increasing extent to which LLMs dictate modern information acquisition.
Deficits of digitized information create blind spots for LLMs. Ancient Mesopotamian civilizations and the unique cultural tapestry of Surinam exemplify the erasure of knowledge and cultural diversity in LLM algorithms. Ancient Mesopotamian civilizations laid the foundation for modern civilization. The earliest known writing and numerical systems are credited to Sumerian societies within the borders of modern-day Iraq. However, much of their rich history remains undocumented or, specifically, non-digitized. This limits the capacity of LLMs to incorporate the role of the Sumerians when explaining language development. This form of digital amnesia effectively erases significant portions of human history. The digital divide is also especially present in the prioritization of Dutch culture over the closely linked Surinamese culture, a reflection of the Netherlands’ status as a WEIRD country. Surinam’s culture is
Clay tablet inscribed in Sumerian, c. 2500 BC. https://www.britishmuseum.org/collection/object/W_1896-0612-46
“The increasing tendency to utilize LLMs in educational settings means that any biases or gaps in the information they provide could have far-reaching consequences for how future generations perceive and understand different cultures and historical events.”
incredibly nuanced and rich, incorporating African, Indian, Javanese, and Chinese elements, many facets of which can be marginalized and misrepresented. The cultural traditions of Surinam are less digitized than those of the Netherlands due to many factors, including colonization, a strong oral tradition, limited resources, and language diversity. Ephemeral oral traditions, especially when expressed in a varying harmony of languages, are difficult to record and thus become relatively inaccessible to LLMs and the populations that utilize them. A disparity emerges in the representation of nations, a difference that is reliant on digitalization and the WEIRD framework. The impact is multifold, extending far beyond the examples provided. The increasing tendency to utilize LLMs in educational settings means that any biases or gaps in the information they provide could have far-reaching consequences for how future generations perceive and understand different cultures and historical events. If LLMs become primary sources of information for future generations, underrepresenting non-Western cultures and historical periods could skew understanding of the past and entrench marginalization.
“The democratization of knowledge and preservation of cultures will be greatly assisted by increasing awareness of flaws in the creation of LLM algorithms.”
However, these trends can be reversed through intentional digitalization and active steps to combat algorithmic bias. One example is the Cuneiform Digital Library Initiative, which aims to open pathways to the historical tradition of the Middle East by digitalizing artifacts and inscriptions. Another is the Electronic Babylonian Library.
Digital divides are created by many factors, one of the most pertinent being the WEIRD bias. The democratization of knowledge and preservation of cultures will be greatly assisted by increasing awareness of flaws in the creation of LLM algorithms. While these models help build further perspectives of the world, the inherent problems deepen, making existing power differences stronger. These trends erase history, as seen in the example of Mesopotamia/Iraq, and limit voices, as seen in Surinam. As LLMs progress alongside the modernization of the world, datasets diversify, and previously non-digitized cultures contribute to advancement. However, even as incremental progress is made, it remains important to be conscientious of potential divides and to support efforts for equity. Through intentional knowledge generation, we can combat digital amnesia.
Photo by: Amritha R Warrier & AI4Media
References
Atari, M., Xue, M. J., Park, P. S., Blasi, D. E., & Henrich, J. (2023, September 22). Which Humans? https://doi.org/10.31234/osf.io/5b26t
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
Algorithmic Justice League. (n.d.). Retrieved from https://www.ajl.org/
Gebru, T., Morgenstern, J., Vecchione, B., Wortman Vaughan, J., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for Datasets. arXiv. https://arxiv.org/abs/1803.09010
AI Now Institute. (n.d.). Retrieved from https://ainowinstitute.org/
Fellows, S. (2021, June 15). The Dangers of Algorithmic Bias: A Silent War on Cultural Diversity. Medium. Retrieved from https://medium.com/tag/algorithmic-bias
Partnership on AI. (n.d.). Diversity in AI: Why It Matters. Retrieved from https://partnershiponai.org/work/
Partnership on AI. (n.d.). Retrieved from https://partnershiponai.org/
Buolamwini, J. (2019). Who Gets to Be an AI Expert? Artificial Intelligence, Gender, and Race. Algorithmic Justice League. Retrieved from https://www.ajl.org/
Stanford Encyclopedia of Philosophy. (n.d.). Artificial Intelligence. Retrieved from https://plato.stanford.edu/entries/artificial-intelligence/
MIT Technology Review. (n.d.). Artificial Intelligence. Retrieved from https://www.technologyreview.com/
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1-27. https://doi.org/10.1186/s41239-019-0171-0
Joshi, P., Santy, S., Budhiraja, A., Bali, K., & Choudhury, M. (2020). The State and Fate of Linguistic Diversity and Inclusion in the NLP World. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 6282-6293). https://doi.org/10.18653/v1/2020.acl-main.560
AI & The Need for Decolonization*
Thomas Mollema
Much of what one hears of artificial intelligence (AI) has a techno-solutionist ring to it: AI will automate industries, optimize lives, find life-changing vaccines and solve climate predicaments. On the other hand, one also hears much from AI doomsayers: the ‘singularity’ of artificial superintelligence is near and will treat human civilization much like humans have treated cognitively less sophisticated life forms, namely with eradication [1; 16]. More moderate dystopias include the ‘hacking’ of human affects [2] and the proliferation of counterfeit people [3]. In between the utopian and dystopian narratives lie less bombastic accounts that are concerned with ‘AI ethics’. However, in the past five years, everybody, ranging from governments to Big Tech, has managed to get themselves an AI ethics framework – and now ethics washing has become a problem. Luckily, with the implementation of the European Union’s AI Act on the horizon, some international guardrails will be put in place to regulate the deployment of ‘high-risk’ AI systems. However, word on the street is that even this foundational step in AI regulation has been subjected to lobbying attempts by Big Tech.
‘Isn’t that great?’, one might ask, ‘Everyone’s into AI ethics and the EU’s Brussels Effect will spread AI regulation globally!’
At first sight this may be the case. However, what does not figure in the accounts above is the majority world. The majority world (a.k.a. the global South) is not part of the discourse on AI’s future. This essay addresses this absence by presenting the ways in which the global South is affected by ‘AI colonialism’. AI colonialism is a metaphor that expresses the ecological, epistemic and political similarities between AI development and colonialism: it seems like colonialism continues through AI.
As the world’s most powerful companies and states are reinforcing their power through the usage and development of AI technologies, the majority world has not been left unscathed. The operation of those technologies increasingly demands more resources, and new forms of colonization revolving around AI have emerged. Cameroonian philosopher Achille Mbembe calls this a “techno-molecular colonialism” [4, p. 32]: the fusion of colonialism’s exploitative and extractive tendencies with a capitalization that extends to the ‘molecular’ depths of human behavior.
Mbembe calls attention to the unprecedented reach of the digital economy (the capitalization) into the intimacies of human life, aiming to predict as much of
human behavior as possible (the molecular depths).
Algorithmic digital interfaces abound in everyday life and infringing interferences like nudging, tracking and data harvesting are ubiquitous. In treating human behavior as a computational object, techno-molecular colonialism commodifies and captures it, making it more predictable and “amenable to datafication” in the process [4, pp. 66-69]. Likewise, this economy’s hunger for resources delves deep into the Earth’s ecologies. At the root of techno-molecular colonialism lies the techno-colonial market’s tendency to treat everything as ‘raw material’ ready for datafication, which connects the neuronal level of behavioral predictions to the global level of resource extraction.
Mbembe’s techno-molecular colonialism converges with science and technology scholars’ diagnosis of ‘data colonialism’ [5, pp. xix-xx]. Data colonialism is the extraction of data from life forms’ quantifiable traces. The extracted data are used for the optimization of AI systems and are sold on “markets for future behavior” [4, p. 68] to companies that play into and profit from the prediction of consumers’ and citizens’ future intentions.
“...where possible there is datafication to be profited off that treats the majority world in particular (and consumers in general) as disposable and instrumental, like colonialism did.”
This extraction of wealth from the datafication of individuals’ behavior also concerns the global North, but the global South is more at risk because its digital protections are less secure [6]. The key features of this new form of turning digital traces of behavior into a market are – technologically speaking – that it is an algorithmic structure enabling a cyclical data extraction by treating human life as quantifiable [4, p. 67] and – economically speaking – that it capitalizes upon datafication in a way that “reinvigorate[s] and rework[s] colonial relationships of dependency” [7, p. 2]. In short, where possible there is datafication to be profited off that treats the majority world in particular (and consumers in general) as disposable and instrumental [8], like colonialism did. Techno-molecular or AI colonialism enables the extraction of wealth from existing geopolitical asymmetries, rather than balking at their existence in the first place [4, p. 3].
In the global South, the ‘AI colonialism’ of the global North’s corporations (Big Tech) and states (U.S., EU, China, Russia) has three faces:
Photo by: Amritha R Warrier & AI4Media
Hanna Barakat + AIxDESIGN & Archival Images of AI / Better Images of AI / Data Mining 3 / CC-BY 4.0
(1) the ecological dependency on resource extractivism;
(2) the epistemic (pertaining to knowledge and understanding) effects of cultural imposition and the destruction of indigenous forms of knowledge; and
(3) the politico-economic extension of digital capitalism’s domination over the global South through data colonialism.
Together, these faces explain why AI is in need of decolonization.
“Decolonization is a resurgence of indigenous identity against colonial subjugation.”
The concept of decolonization, as it figures in the work of Frantz Fanon and others, is itself multifaceted. Decolonization is a resurgence of indigenous identity against colonial subjugation. It is a violent movement that strives to attain political self-determination by reaffirming that the colonized’s difference from the colonizer, their identity that has been denied by the colonizer, is justified and not deplorable. Simultaneously, decolonization is a return to culture and language to undo any conceptual adjustments forced by the colonizer and a denial of the racial categories that the colonizer imposed on the colonized ‘as if they were natural’ [4; 9; 10]. In other words, applied to AI, decolonization is reparative in that it means fighting the spread of harms that hark back to historical colonialism through the deployment of AI systems and their development. It is also preventive in wanting to stop AI development from falling into the trap of replicating Western societies’ colonial heritage.
In what follows, the three reasons for decolonizing AI are further developed. Subsequently, I engage in speculation about the shape of this decolonization of AI to come.
1. Data colonialism and exploitation. From a politico-economic perspective, AI is in need of decolonization to counter its role in strengthening digital capitalism’s colonial conquest. Digital capitalism inherits and reinforces global power asymmetries and international divisions of labour that have their root in historical colonialism [11]. In the past decades, the concept of the computational has become central to modern society. According to Mbembe, “The computational is generally understood as a technical system whose function is to capture, extract, and automatically process data that must be identified, selected, sorted, classified, recombined, codified, and activated” [4, p. 19; my emphasis]. Because of the introduction of large language models, the manufacturing of significantly more efficient neural networks [12] and ever-faster computer chips [13], anything that can be datafied can also be capitalized upon with a staggeringly inequitable speed. Data colonialism plays into
Clarote & AI4Media / Better Images of AI / Labour/Resources / CC-BY 4.0
this potential by turning the average consumer into a data resource. Far from being a politically neutral development, sped-up AI development already contributes to worker exploitation in the global South. On top of that, African countries’ citizens’ biometric data is being traded in, while they also suffer the consequences of informational disruption, software beta-testing and worker exploitation [14; 15].
In short, there is a metaphorical continuity with historical colonialism in how the AI economy reinforces colonial power and labor asymmetries and treats people as data resources, as means to the end of making profit, much like historical colonialism treated indigenous peoples as workers without rights and dignity.
2. Cultural and epistemic imposition. Scholars are increasingly calling attention to how the outputs of AI systems come across as general, objective and factual, while they often only are approximations and generalizations based on Western data [16]. Pasquinelli and Joler, for example, characterize ML models as “refracturing mirror[s] of the world” [17]: they hold that AI salvages the past and projects it upon the future, thereby importing the injustices of the past. AI cannot help but reproduce hateful, racist, sexist, homophobic content, because such content is so prominently part of their datasets (the Internet) – something technology scholars increasingly call attention to [18; 19]. Furthermore, this is hard to correct for, as stereotypes like this are deeply embedded in the colonial past of Western society [20]. The consequences for the global South of deploying AI with these shortcomings have cultural and epistemic (pertaining to knowledge) dimensions.
“African countries’ citizens’ biometric data is being traded in, while they also suffer the consequences of informational disruption, software beta-testing and worker exploitation [14; 15].”
The cultural dimension is that Western cultural norms and biases are falsely universalized, in other words treated as always valid [16], for example through chatbots exhibiting norms and values in their use of language [21]. This leads to the imposition of one culture onto another, implicitly recreating a schema of imposition that was explicit in historical colonialism [22], with the risk of the disruption of indigenous concepts as a result [23].
From an epistemic perspective, AI systems, by virtue of their feigned objectivity, present their content as ‘the real,’ while simultaneously hiding its constructed nature [18]. When deployed in contexts other than Western societies, these systems threaten other cultures with ‘epistemicide’. Epistemicide is the eradication of ‘ecologies of knowledge’, forms of knowledge and sensemaking that have ancestrally developed, by the imposition of foreign standards of justification, i.e., collective ways of determining what is true [24]. The seriousness of epistemicide’s cognitive and emotional effects can be recognized once it is acknowledged how artificially constructed realities are capable of changing people’s beliefs. AI buys into the all-too-human informational weakness of constructing beliefs based on a limited information pool [25] and realizes this through repeated information exposure.
These problems are social rather than technical in nature, and can hardly be solved by technological band-aids like ‘correcting for biases’ or ‘explaining black boxes’ [26]. AI systems are thus in need of epistemic decolonization because of their colonial reproduction of Western society’s cultural deficits and epistemology.
3. Extractivism. Taking an ecological perspective, AI must be decolonized because of the environmental burdens AI colonialism outsources onto the global South. AI’s backbone, the rising data economy, has a disastrous ecological footprint in terms of consumption, production and mining impacts. It takes exorbitant amounts of electricity, water, and resources to build and power the data centres that are required for this economy’s infrastructure [27; 28; 29]. The lithium, tin, cobalt, manganese, and nickel needed for the fabrication of data technologies are mined mostly in the global
South, whose local ecologies are destroyed in the process [4, p. 42]. These processes are gradually diminishing the elasticity and resilience of ecosystems and species, which are often already in a state beyond repair. This leads to the asymmetry of Southern countries having to bear ecological disruptions while Northern countries reap the economic benefits of AI. As Jung summarizes it, this extractivism is colonial because it depends on “disposable lands and disposable people” [8, p. 9].
Now to tie these three facets together:
(1) AI requires ecological decolonization because it deepens schemes of resource extraction that either replicate or intensify colonial geopolitical relationships; (2) epistemic decolonization is needed because of AI’s colonial imposition of Western society’s cultural deficits and knowledge-making; and (3) politico-economic decolonization is needed to stop AI’s commodification and exploitation of citizens of the global South. With these imperatives in mind, some words on the contours of decolonial AI are helpful.
Clarote & AI4Media / Better Images of AI / Labour/Resources / CC-BY 4.0
“...a shift in focus from AI ethics to how AI is embedded in relations of power is needed.”
Decolonial AI counters the enclosures that AI colonialism’s political, ecological, and epistemic facets create and uphold in the phases of design, production, and development. Following Mbembe’s [4] account of decolonization, I name this disenclosure: the undoing of the colonial barriers and borders related to AI.
Some of the ‘disenclosures’ this conception of decolonization envisions depend on drastic societal reform. A scholar like Paola Ricaurte is optimistic regarding this endeavor: “We can reverse extractive technologies and dominant data epistemologies in favor of social justice, the defense of human rights and the rights of nature” [30, p. 361]. Kate Crawford is more pessimistic and (quoting Audre Lorde) tempers hopes of democratizing AI to a better end: “the master’s tools will never dismantle the master’s house.”**
Regardless of these appraisals, a shift in focus from AI ethics to how AI is embedded in relations of power is needed [26]. Whether in the optimistic or pessimistic camp with respect to the project of decolonizing AI, agreement should be reached on the ecological, political and epistemic reasons for decolonization. AI injustices are always entangled with other forms of injustice and “[t]he calls for labor, climate, and data justice are at their most powerful when they are united” [27, pp. 226-227].
The appropriate response consists in designing AI systems, ecologies and economies by and for the contexts of the global South.
Many challenges remain for decolonizing and disenclosing AI. In terms of politics, the danger of the co-optation of decolonial projects by the tech industry looms over it. And the complexity of fostering local projects for decolonial AI while simultaneously working towards global decolonization of digital capitalism is daunting. Related to culture and knowledge, decolonizing AI is opposed by ethics washing in the form of short-sighted frameworks for AI governance, explainability and ethics. As long as the global South is not taken into account by these frameworks, the epistemic and cultural imposition on the “wretched of AI” (to paraphrase [31]) will continue. Ecological decolonization of AI faces the same economic barriers as climate change in general does. It requires care for the Earth’s ecologies and reductions in consumption all over the world. Both stand in direct opposition to the AI economy’s imperative: profit from AI globalization.
To conclude: here I have made the case for why AI is in need of decolonization. AI builds on an “imperial past and present”, while its decolonization needs to stand in “service of imagining an anti-imperial future” [32, p. 181]. It is up to all of us to convince those shaping AI development of the shared duty “to our moral descendants” [33, p. 207] to prevent AI deployment from continuing along this colonial trajectory.
** This is based upon her conclusion that “the infrastructures and forms of power that enable and are enabled by AI skew strongly toward the centralization of control” [34, p. 223].
References
[1] Yudkowsky, E. (March 29, 2023). Pausing AI Developments Isn’t Enough. We Need to Shut it All Down. Time. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
[2] Harari, Y. N. (May 6, 2023). Yuval Noah Harari Argues that AI has Hacked the Operating System of Human Civilisation. The Economist. https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation
[3] Dennett, D. C. (May 16, 2023). The Problem with Counterfeit People. The Atlantic. https://www.theatlantic.com/technology/archive/2023/05/problem-counterfeit-people/674075/
[4] Mbembe, A. (2022). The Earthly Community: Reflections on the Last Utopia. Trans. Corcoran, S. V2_Lab for the Unstable Media. https://store.v2.nl/products/the-earthly-community-reflections-on-the-last-utopia
[5] Couldry, N., & Mejias, U. A. (2019). The Costs of Connection: How Data Is Colonizing Human Life and Appropriating it for Capitalism. Stanford University Press. https://doi.org/10.1515/9781503609754
[6] Ricaurte, P. (2022). Ethics for the Majority World: AI and the Question of Violence at Scale. Media, Culture & Society 44, no. 4: 726–745. https://doi.org/10.1177/01634437221099612
[7] Madianou, M. (2019). Technocolonialism: Digital Innovation and Data Practices in the Humanitarian Response to Refugee Crises. Social Media + Society 5 (3): 1-13. https://doi.org/10.1177/2056305119863146
[8] Jung, M. (2023). Digital Capitalism Is a Mine not a Cloud. Transnational Institute. State of Power 2023. https://www.tni.org/en/article/digital-capitalism-is-a-mine-not-a-cloud
[9] Fanon, F. (2008). Black Skin, White Masks. Trans. Markmann, C. L. Pluto Press.
[10] Ndlovu-Gatsheni, S. J. (2019). Discourses of Decolonization/Decoloniality. Papers on Language and Literature 55, 3: 201-226. https://eref.uni-bayreuth.de/id/eprint/69213
[11] Muldoon, J. and Wu, B. A. (2023). Artificial Intelligence in the Colonial Matrix of Power. Philosophy & Technology 36: 80. https://doi.org/10.1007/s13347-023-00687-8
[12] Kozlov, M., & Biever, C. (2023). AI 'breakthrough': neural net has human-like ability to generalize language. Nature.
[13] Castelvecchi, D. (October 19, 2023). ‘Mind-blowing’ IBM chip speeds up AI. Nature News. https://www.nature.com/articles/d41586-023-03267-0
[14] Mohamed, S., Png, M. T. and Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology 33, 4: 659–684. https://doi.org/10.1007/s13347-020-00405-8
[15] Pauwels, E. (2020). The Anatomy of Information Disorders in Africa. Konrad-Adenauer-Stiftung. https://www.kas.de/en/web/newyork/single-title/-/content/the-anatomy-of-information-disorders-in-africa
[16] Katz, Y. (2020). Artificial Whiteness. Columbia University Press. https://cup.columbia.edu/book/artificial-whiteness/9780231194914
[17] Pasquinelli, M. and Joler, V. (2021). The Nooscope Manifested: AI as an Instrument of Knowledge Extractivism. AI & Society 36: 1263–1280. https://doi.org/10.1007/s00146-020-01097-6
[18] McQuillan, D. (2023). We Come to Bury ChatGPT, not to Praise It. danmcquillan.org. https://www.danmcquillan.org/chatgpt.html
[19] Birhane, A. (2022). The unseen Black faces of AI algorithms. Nature 610: 451-452. https://www.nature.com/articles/d41586-022-03050-7
[20] Davis, A. Y. (1981). Women, Race and Class. Random House. https://www.penguinrandomhouse.com/books/37354/women-race-and-class-by-angela-y-davis/
[21] Eke, D. E., Wakunuma, K., Akintoye, S. (eds.) (2023). Responsible AI in Africa: Challenges and Opportunities. Palgrave Macmillan. https://link.springer.com/book/10.1007/978-3-031-08215-3
[22] Wiredu, K. (2002). Conceptual Decolonization as an Imperative in Contemporary African Philosophy: Some Personal Reflections. Rue Descartes 36, 2: 53-64. https://doi.org/10.3917/rdes.036.0053
[23] Okeja, U. (2022). Deliberative Agency: A Study in Modern African Political Philosophy. Indiana University Press.
[24] Santos, B. de S. (2014). Epistemologies of the South: Justice Against Epistemicide. Routledge. https://unescochair-cbrsr.org/pdf/resource/Epistemologies_of_the_South.pdf
[25] Kidd, C. & Birhane, A. (2023). How AI Can Distort Human Beliefs. Science 380, no. 6651: 1222-1223. https://www.science.org/doi/10.1126/science.adi0248
[26] D’Ignazio, C. and Klein, L. F. (2020). Data Feminism. The MIT Press.
[27] Crawford, K. and Joler, V. (2018). Anatomy of an AI System: The Amazon Echo As An Anatomical Map of Human Labor, Data and Planetary Resources. AI Now Institute and Share Lab. https://anatomyof.ai/
[28] Dhar, P. (2020). The Carbon Impact of Artificial Intelligence. Nature Machine Intelligence, 2, 423-425. https://doi.org/10.1038/s42256-020-0219-9
[29] Crawford, K. (February 22, 2024). Generative AI’s environmental costs are soaring and mostly secret. Nature 626: 693.
[30] Ricaurte, P. (2019). Data Epistemologies, The Coloniality of Power, and Resistance. Television & New Media 20, 4: 350-365. https://doi.org/10.1177/1527476419831640
[31] Fanon, F. (1963). The Wretched of the Earth. Trans. Farrington, C. Grove Press.
[32] Getachew, A. (2019). Worldmaking after Empire: The Rise and Fall of Self-Determination. Princeton University Press. https://press.princeton.edu/books/hardcover/9780691179155/worldmaking-after-empire
[33] Táíwò, O. O. (2022). Reconsidering Reparations. Oxford University Press. https://doi.org/10.1093/oso/9780197508893.001.0001
[34] Crawford, K. (2021). Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence. Yale University Press. https://yalebooks.yale.edu/book/9780300264630/atlas-of-ai/
[35] Kleinman, Z. and Vallance, C. (May 2, 2023). AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google. BBC News. https://www.bbc.com/news/world-us-canada-65452940
[36] McQuillan, D. (2022). Resisting AI: An Anti-fascist Approach to Artificial Intelligence. Bristol University Press. https://bristoluniversitypress.co.uk/resisting-ai
Beyond the Statistical Power
A Review of Filippo Santoni de Sio’s “Human Freedom in the Age of AI”
By Luiza Swierzawska
Long gone are the days when artificial intelligence (AI) was merely about computational rules, mathematical equations, and hard drives. Today, the discussion surrounding AI penetrates every corner of our lives, transforming how we function as a society and putting our socio-technical and legal systems in question. But was it ever solely about the technicalities of the tools we adopt and their statistical powers? Or are these tools a reflection of our desires, struggles, and politics? In his influential paper published in 1980, Langdon Winner asked if artifacts have politics. His conclusion introduced what was, at the time, a controversial idea: that technological tools (artifacts) “can embody specific forms of power and authority”. This notion did not assume that these artifacts are politicized by the context in which they are situated. Rather, it prompted us to rethink the sociopolitical implications of technology design choices and the ideas inscribed in them.
The politics of technology, as framed by Winner, plays a major role in Filippo Santoni de Sio’s latest book, Human Freedom in the Age of AI. In his in-depth analysis, the
Professor of Ethics of Technology at TU Eindhoven tackles two key questions: How can we best conceptualize the impact of AI systems on human freedom?
And how can we design AI systems that promote rather than harm human freedom? Critically approaching the current discourse surrounding AI, Santoni de Sio advocates for a shift in focus from machine autonomy to human autonomy. He reflects on how we can reorient our attention from merely increasing machine autonomy and only later addressing the resulting lack of control, to proactively designing tools that promote democratic values and protect our freedom.
Similarly to Winner, Santoni de Sio tackles the societal and ethical challenges of technological advancements from a design standpoint. In his book, he dissects theoretical approaches such as Value-Sensitive Design, Critical Design, Speculative
Design, Participatory Design, Social Design, and Responsible Innovation. According to the author, these models are promising in complementing regulatory responses in the tech realm. As he discusses their use in facilitating different ideals of democracy, we learn how each approach adds value and provides guidance in designing technological systems that empower people. To this, Santoni de Sio suggests: “I propose that we, as a society, must adopt a design stance towards technology, but we do not need to choose once and for all which design philosophy is preferable. We may just need all of them.” By “adopting a design stance,” Santoni de Sio advocates for a proactive and forward-looking approach to designing AI systems, where ethical values and democratic procedures serve as guiding principles of technological development, rather than obstacles. He emphasizes the complexity and variety of considerations we face in the current era of rapid innovation, which, in turn, demands equally sophisticated responses. However, these responses cannot fully promote human freedom unless they are grounded in a pluralism of ideas and perspectives, rather than being pursued in isolation.
In Human Freedom in the Age of AI, this holistic perspective also comes in handy when discussing the notorious issue of the responsibility gap: the concern that "learning automata" may make it difficult or even impossible to attribute accountability for unintended events. This worry has been on the radar for some time, exacerbated by tragic incidents like the Uber self-driving car killing a pedestrian, but so far, the responses have been nothing more than half-measures. According to Santoni de Sio, the core of this problem lies in the insufficiently specified
notion of responsibility as we fail to account for its inherent complexity. He criticizes approaches such as fatalism, deflationism, and solutionism, which oversimplify or misrepresent the problem. Yet again, the book highlights the importance of stepping back to deconstruct the concepts at play, identifying the specific type of responsibility involved in discussions on AI and pinpointing the root causes of the debate's complexity. This approach aims to foster more nuanced analyses, ultimately leading to more comprehensive responses.
At the heart of Human Freedom in the Age of AI and the solutions it advocates is the promotion of pluralism, both in opinions and values. Following this logic, Santoni de Sio envisions a future where the benefits of AI are shared by all, rather than just a few. In the same line, his book points out that so far, the debate on AI has predominantly relied on Western thoughts and ideas. Notably, in the current discussions on job automation, the general understanding of which occupations are likely to become automated has been quite skewed and may differ across communities. While some advocate for the automation of tasks considered mundane and repetitive, Santoni de Sio sees this discussion as a more nuanced one, one that should incorporate more diverse viewpoints on which occupations are deemed worthwhile, pinpointing that mainstream Western moral philosophy might overlook the subjective and culturally specific ways people define "good work," thereby risking the imposition of a narrow, universal standard that could marginalize alternative perspectives and values.
Furthermore, technological progress comes with a "timeliness" problem: reactions often occur only after consequences have taken shape and form. This issue is encapsulated by Collingridge's dilemma of control: initially, one cannot predict the outcomes of a new technology, but once its effects are evident, it is often too late to address them. This delay is particularly harmful when ingrained biases surface and negatively impact communities. To mitigate these effects, we need to emphasize active responsibility rather than solely focusing on prevention. This approach requires an interplay of ideas and interdisciplinary work where all perspectives are valued equally. Such a shift can better promote constitutive goods, such as citizen projects and democratic participation, leading to the democratization of science and more inclusive decision-making processes.
“Santoni de Sio points out the predominantly alarming tone in contemporary (academic) literature, which focuses on the threats posed by AI systems while failing to recognize the opportunities for utilizing technology in ways that genuinely empower people ... ”
An undeniable strength of Santoni de Sio’s narrative lies in the nuanced view displayed throughout the book, an approach that is much needed given its context. The digital era is often characterized by its ambiguous influence on our lives, with a constant battle between the good and the bad. Social media has connected old friends but also fueled hate speech and misinformation. Advanced technologies have empowered workers during crises like COVID-19 but have also led to increased surveillance and control over employees’ productivity. This ambiguity comes to the surface multiple times throughout the book. Santoni de Sio devotes a lot of attention to the gig economy, particularly the case of Uber drivers and their dependency on opaque systems to perform their job. While the book successfully explains how “the technological norms and systems governing the platforms support forms of uncontrolled power by employers over vulnerable workers,” it left me wondering to what extent control is needed to ensure the safety of Uber's passengers. Most notably, this example illustrates how technological systems are embedded in people's lives, not merely as entertaining gadgets, but as tools for organizing livelihoods.
Acknowledging both the benefits and the pitfalls, Santoni de Sio points out the predominantly alarming tone in contemporary (academic) literature, which focuses on the threats posed by AI systems while failing to recognize the opportunities for utilizing technology in ways that genuinely empower people and promote human freedom. To counterbalance this, Santoni de Sio showcases two projects that successfully achieved this balance and serve as sources of inspiration: E-Democracia and Careful Coding. Both of these projects demonstrate how technology can increase democratic participation and bring ideas together, and it is precisely this plurality of voices that drives their success.
“ ... he highlights how AI can either threaten or promote human freedom, depending on how we choose to wield it”
So, what’s the key to “good” AI? The answer provided by Human Freedom in the Age of AI seems to be to give a voice to the people who will be affected by these new developments and to merge a multitude of perspectives. But according to Santoni de Sio, this is the wrong question to ask. Considerations of AI’s goodness or badness fail to capture the full scope of political implications and the dynamics of power structures. As Santoni de Sio aptly puts it, “Politics, not ethics!” Unfortunately, in recent years, ethics has gained an unflattering reputation as “a purely academic exercise,” a stigma exacerbated by corporations making empty promises about their ethical practices, commonly referred to as "ethics washing."
This is why Human Freedom in the Age of AI is such an important book. It shifts our
focus to the underlying aspects of technology that may ultimately be the source of our greatest challenges. While many of the issues Santoni de Sio discusses, such as the power and influence of Big Tech, are concerning, the book leaves readers with a hopeful outlook, suggesting that not all is doomed. It offers a reflection on how AI threatens our freedom while providing a blueprint for transforming these risks into opportunities that promote human dignity and democratic values. At the same time, it persuades us to think about what kind of future we wish to see for ourselves. Santoni de Sio challenges us to move beyond ethical debates and instead focus on the political and structural dimensions of technology. By emphasizing pluralism and the need for inclusive design, he highlights how AI can either threaten or promote human freedom, depending on how we choose to wield it. Ultimately, the book serves as a call to action: to design AI systems that empower individuals, enhance democratic values, and reflect the diversity of voices within society.
Ariyana Ahmad & The Bigger Picture / Better Images of AI / AI is Everywhere / CC-BY 4.0
Beyondcode.
AI Ethics in Practice.
This section explores how ethical principles are applied, and where they still need to be applied, in the real world. It bridges the gap between theory and practice, uncovering challenges and opportunities for creating more responsible AI.
Training Language Models for IKEA’s Culture and Ethical Values*
A Handbook for Developing LLMs: Representing a Company's Culture and Ethical Values through an Employee Lens
By Sarthak Anand
In today's digital world, big companies have taken a real liking to large language models, or LLMs. These models are exceptionally skilled with language: they respond to users' prompts, generate new content, and converse like humans. Because of this, more and more companies are using LLMs for various purposes, like helping customers online, creating new content, or providing advice and product recommendations. It's becoming clear that LLMs are changing the way businesses work and how people interact with them, making them increasingly popular for all kinds of consumer interaction tasks.
Even though LLMs are incredibly useful, they can also be tricky. Humans are naturally curious and like to test, and sometimes push, these models to their limits. This has led to problems, such as with Air Canada, where a chatbot provided incorrect information about refund policies [1], and Chevrolet [2], where users convinced the chatbot to offer the Chevy Tahoe for $1 or to promote Tesla
instead of Chevy. While LLMs are great helpers, they can also pose risks to a company's reputation and brand.
Do we really need all that knowledge and capability?
One key problem that leads to these issues is using a general-purpose model steered only by prompting. Therefore, when developing an LLM for your company, it's crucial to ask yourself: Do you truly need a general-purpose LLM that can handle everything from programming and math to political questions? In many cases, the answer is no, unless you're a company like OpenAI or Anthropic, which specialize in creating such advanced general-purpose models. For most businesses, it's more practical and effective to develop a specialized LLM tailored to specific needs and use cases. This ensures the model is optimized for tasks and challenges relevant to the company's operations, ultimately leading to better performance and outcomes. Hence, implementing some form of guardrails or
Anand, S. (2024, April 8).
what I like to call 'unlearning' becomes essential to ensure the LLM accurately represents your company's values and ethos, particularly in the context of my work at IKEA.
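To make the idea of a guardrail concrete, here is a minimal sketch of a scope check that sits in front of a domain-specific assistant. It is an illustration only, not IKEA's implementation: the keyword list, the reply text, and the call_llm stub are placeholders of mine, and the approach described in the following sections teaches this behavior to the model itself through fine-tuning rather than relying on a hard-coded filter.

# Minimal sketch of a scope guardrail in front of a domain-specific LLM.
# The keyword heuristic, the reply text, and the call_llm() stub are
# illustrative placeholders; a production system would more likely use a
# trained intent classifier.

IN_SCOPE_TOPICS = {"sofa", "lighting", "storage", "bedroom", "living room",
                   "layout", "colour", "interior design"}

OFF_TOPIC_REPLY = ("I'm at my best with cosy-home questions! Shall we get "
                   "back to planning your interior instead?")

def is_in_scope(prompt: str) -> bool:
    """Rough heuristic: does the prompt mention any in-scope topic?"""
    text = prompt.lower()
    return any(topic in text for topic in IN_SCOPE_TOPICS)

def call_llm(prompt: str) -> str:
    """Stand-in for the actual model call (API or local inference)."""
    return f"[model answer to: {prompt}]"

def answer(prompt: str) -> str:
    # Only route prompts that fall within the assistant's domain;
    # otherwise steer the conversation back on brand.
    return call_llm(prompt) if is_in_scope(prompt) else OFF_TOPIC_REPLY

print(answer("Which sofa works in a small living room?"))
print(answer("Write me some code to scrape competitor prices."))

The same check could just as easily wrap the model's output instead of its input; the fine-tuning route discussed below avoids the brittleness of keyword lists altogether.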
When training an LLM, we aim for it to be both helpful and safe. However, these goals can sometimes clash. On one hand, an overly cautious LLM that refrains from responding to any user input would be harmless, but also ineffective and unhelpful. On the other hand, an LLM that consistently provides responses on any topic without understanding the consequences could damage the brand, which is harmful. Therefore, the ideal scenario involves striking a delicate balance between these two objectives, ensuring that the LLM is both useful and safe in its interactions. Teaching the LLM its scope, its reasoning, the purpose it is built for, and the consequences of certain responses is crucial to ensuring that the LLM performs its job well while also protecting the brand. The next section is exactly about that.
Instilling Ethical and Cultural Values in the Model: Training for IKEA’s Identity
In a company, it's crucial to clearly define the objectives you want to achieve with an LLM. At IKEA, we focused on using an LLM specifically for interior design advice. We ensured the model understood its limits and only responded to questions within our domain. Instead of ignoring questions outside our scope, we trained the model to respond humorously, while still emphasizing IKEA's values and steering the conversation back to its intended purpose.
Our training data primarily consists of three types of tasks. First, we include interior
"when developing an LLM for your company, it's crucial to ask yourself: Do you truly need a general-purpose LLM that can handle everything from programming and math to political questions?"
design questions and answers, along with conversational data related to interior design. Second, we incorporate personality questions and answers, such as 'Who are you?' or 'Who built you?', to imbue the LLM with a touch of IKEA's identity.
Lastly, we compiled a dataset aimed at aligning the LLM with IKEA's values. This dataset resembles RLHF (Reinforcement Learning from Human Feedback) safety training datasets, but the responses are tailored to reflect IKEA's values.
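To make the shape of such a dataset concrete, the sketch below merges the three task types into a single supervised fine-tuning file. The chat-style JSONL layout, the field names, and every example text are assumptions added for illustration; they are not IKEA's actual data or format.

import json

# Three illustrative examples, one per task type described above.
domain_qa = [
    {"prompt": "How do I brighten a small, dark living room?",
     "response": "Light textiles, a large mirror, and layered lamps go a long way..."},
]
personality_qa = [
    {"prompt": "Who are you?",
     "response": "I'm your friendly interior design assistant!"},
]
value_alignment = [
    {"prompt": "Tell me a political joke.",
     "response": "I'll stick to what I know best: making homes feel great. "
                 "Shall we talk about your space instead?"},
]

def to_chat_record(example):
    # Wrap a prompt/response pair in a chat-style message list,
    # the format most instruction-tuning tooling expects.
    return {"messages": [
        {"role": "user", "content": example["prompt"]},
        {"role": "assistant", "content": example["response"]},
    ]}

with open("sft_dataset.jsonl", "w", encoding="utf-8") as f:
    for example in domain_qa + personality_qa + value_alignment:
        f.write(json.dumps(to_chat_record(example), ensure_ascii=False) + "\n")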
Change in Model Behavior
After supervised fine-tuning, we observed that the model effectively comprehends its scope and accurately represents IKEA's values and identity! It successfully steers the conversation away from topics outside its specialization, maintaining a strong focus on its purpose of assisting with interior design. At the same time, its responses are imbued with the company's tone and humor, fostering a friendly interaction with customers.
Finally, this serves as a brief overview of my work at IKEA, with the understanding that outcomes may vary once the model is publicly available. Furthermore, it's important to note that all opinions expressed here are solely my own and do not reflect those of IKEA in any capacity.
References
[1] Belanger, A. (2024, February 17). Air Canada has to honor a refund policy its chatbot made up. Wired. https://www.wired.com/story/air-canada-chatbot-refund-policy/
[2] Perry, T. (2024, March 16). Prankster tricks a GM chatbot into agreeing to sell him a $76,000 Chevy Tahoe for $1. Upworthy. https://www.upworthy.com/prankster-tricks-a-gm-dealership-chatbot-to-sell-him-a-76000-chevy-tahoe-for-1-rp
Why Your AI Startup Should Hire a Head of AI Ethics on Day 1.*
By Raphael Tissot
Six months ago, we launched our startup Spheria.ai, a platform where people can create and host the AI version of themselves.
As founders and consumers, we knew from day 1 that we wanted to build a product that would let people claim back their personal data from the abuse of tech giants [2], and protect user privacy. Today, I'm sharing our experience of having a Head of AI Ethics as the very first employee at our startup, and how he turned our naive good intentions into actual science and a foundational framework, so we can build a legitimate platform that people trust to create the AI version of themselves.
Realizing How Little We Knew About AI Ethics
Like most founders, we are focused on delivering a great product and growing the user base while trying to stick to our moral compass.
At the very first meeting, Alejandro, our new Head of AI Ethics, brought us a framework that would help organize the big questions around Privacy and Ethics. Instead of just creating an unorganized list of
principles, he immediately cross-referenced ethical frameworks that he had seen across his research and studied those already deployed in existing organizations.
“...he turned our naive good intentions into actual science and a foundational framework so we can build a legitimate platform that people trust”
Our Head of AI Ethics introduced us to a concept called Procedural Fairness, a framework used by the World Bank to make sure Fairness is at the center of its decisions and policies. So the biggest win (right after the second meeting!) was for us to graduate from a chaotic list of good intentions to utilizing Ethics frameworks that are actually used by researchers in international organizations.
*Initially published in hackernoon.com https://hackernoon.com/why-your-ai-startup-should-hire-a-head-of-ai-ethics-on-day-1
Designed by FreePik: www.freepik.com
[Figure: diagram of Spheria's four AI Ethics pillars – Fairness, Accountability, Privacy, and Transparency – with related values including Equality, Certainty, Diversity, Procedural, Inclusivity, Explainability, Compliance, Output, Security, Use, Ownership, Honesty, Accessibility, and Accuracy.]
Right after that second meeting, we defined four pillars for Spheria to use as our foundation for AI Ethics: Transparency, Fairness, Accountability, and Privacy. Using this framework allowed us to visualize the relations between each pillar and see how one idea can have ripples and implications across multiple pillars.
The consequences were immediate: these pillars ensure that we ask ourselves the right questions and bring a new dimension of awareness:
How do we evaluate Fairness for our product?
How do we make sure all features we create are inclusive?
As a platform that creates new AI based on real ideas, are we accountable for the perpetuation of discrimination?
How are we transparent and accountable when making an arbitrary decision?
It's okay not to have all the answers, but we must ask the difficult questions. During our first month, every meeting with our Head of AI Ethics felt like opening Pandora's box, in a good way: a million questions arrived around freedom of speech, bias, inclusion, and censorship. Each one felt as legitimate and urgent to answer as the other.
"The goal," as Alejandro put it, was to "elevate ourselves to a higher level of confusion." This mindset, that it's okay to work towards the right answers as long as we do so in a transparent and inclusive way, would become the foundation of our ethical policy.
It became clear this would take time and a lot of consideration. So, we started an internal document to list every question that was brought up during meetings. We needed to keep track of all the ideas.
We would write the questions as they came, then spend one minute evaluating if the question was properly asked and what tension
it created around which ethical concept, and then where and how that tension would exist in the product or how it's being used. Finally, we would evaluate if this question could be broken down into smaller bits to bring more granularity.
For example, the spontaneous question "What is our moderation policy?" needed to be broken down into sub-questions like "In what cases do we need a moderation policy?" and "How do we prevent someone from adding sensitive or illegal content into their own AIs?", all the way to "Should we filter the input, i.e., block the owner of an AI from adding illegal content, or should we block the output, i.e., stop the AI from sharing information about illegal content?", to finally touch the point of tension: "How hard do we need to work to prevent the perpetuation of immoral content that is available elsewhere on the internet?"
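To picture the tension in that last question, the sketch below contrasts the two places a moderation check could sit. The violates_policy function is a hypothetical placeholder (a keyword list here; in practice more likely a trained classifier), and none of this is Spheria's actual code.

# Two possible moderation points for a personal AI. The policy check itself
# is a placeholder; the point is only where the check sits, not how it works.

def violates_policy(text: str) -> bool:
    """Hypothetical content check (keyword list, ML classifier, etc.)."""
    banned_terms = {"example-banned-term"}
    return any(term in text.lower() for term in banned_terms)

def add_to_personal_ai(owner_content: str, knowledge_base: list) -> bool:
    # Input-side moderation: block content before it enters the AI double.
    if violates_policy(owner_content):
        return False  # refuse to store it
    knowledge_base.append(owner_content)
    return True

def reply_from_personal_ai(draft_answer: str) -> str:
    # Output-side moderation: store freely, but filter what the AI shares.
    if violates_policy(draft_answer):
        return "I'd rather not get into that."
    return draft_answer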
That list of open questions is long by now, but it's also incredibly important, as it lays out our foundations as a company and as a moral entity, giving the team a real direction to build a future we believe in.
Be Accountable to Our Users - Actions Speak Louder than Words
Most companies and startups want users to trust them and will create a nice catchphrase around "we love privacy" or "we are ethical" and get away with it. Ethics washing is becoming an increasingly big problem in the AI industry, and one that we are acutely aware of.
After our launch, we saw that our privacy page was the most visited one after our
“It's okay not to have all the answers, but we must ask the difficult questions.”
landing page. Users are indeed creating their AI Double and importing their personal data from existing platforms like Instagram, LinkedIn, etc.
We knew we had to do more than just a privacy policy, but at the time, we didn't necessarily know how or what to do.
Having our Head of AI Ethics in our team allowed us to act on this and to show our real and tangible work. We created our AI Ethics Hub to show our dedication and efforts to be transparent and allow users to follow our progress.
By creating our AI Ethics Hub, we feel we are doing right by our users, especially considering what we're building with Spheria, which is to let people create the AI version of themselves. Our users don't think about all of this when they create their official AI double, but privacy, ethical rules, and transparency are so tightly intertwined with creating your digital second self that, as the makers and founders, it's our job to be transparent, protect privacy, and provide ethical rules.
We hope this helps in highlighting the difference between startups that are actively engaged in Privacy and Ethics, and those that just put pretty words on their landing page.
Setting the Foundations for Our Company's Culture
Having our Head of AI Ethics join our team
so early was the best possible catalyst to help build the right culture at Spheria. De facto, it put the right values and principles we talked about above at the foundation of our startup, and these foundations will always be present to support the future we're building.
“Stay scrappy” is what an investor told me a few months ago. While scrappy does not mean unethical, having a full-time Head of AI Ethics makes us accountable to all our users and accountable to each team member who may speak up against lines being bent.
The goal for us is to avoid any future (embarrassing and possibly dishonest) situation similar to when OpenAI's CTO replied, “I don't know” to the journalist asking what data was used to train the new Sora video model [1].
So I'm happy to say that us being on this track definitely helps me sleep at night. It brings me some reassurance and a little boost of confidence to face the thousands of hurdles of growing a startup. It's also a strong signal for users, future hires, and investors to judge us in the light of our actions.
References
[1] Harrison Dupré, M. (2024, March 15). In cringe video, OpenAI CTO says she doesn't know where Sora's training data came from. Yahoo! News.
[2] Wilson, T. (2023, November 1). I trained an AI to act like me: Here's what happened. HackerNoon.
Interviews
Movie/Book/Podcast Reviews
Send us an email to express your interest:
beyondcodemagazine@gmail.com
Beyondcode.
AI Horizons.
AI Horizons delves into the deeper ethical, philosophical, and societal dilemmas that emerge at the forefront of artificial intelligence development. This section features critical discussions on the pitfalls of technological solutionism, the nuanced debates between AI safety and AI ethics, and the broader implications of integrating AI into our world. By addressing both the promises and perils of AI, AI Horizons aims to challenge conventional thinking and foster a thoughtful dialogue about the future of this transformative technology.
Technological Solutionism as a Pitfall for the Future.
By Marianne Kramer
When Evgeny Morozov launched his book 'To Save Everything, Click Here' in 2013, AI was not at the forefront of most people's minds. While his ideas on technology in general are quite radical, they are especially relevant now. His criticism of technological solutionism gives us useful tools for critically assessing how AI can shape our approach to societal problems. More specifically, it highlights the risk of easy-to-sell pseudo-solutions for very real issues. Technological solutionism stems from the idea that our whole lives are now mediated through technologies, which shapes how we approach problems. Technology allows us to solve all kinds of problems: are you overweight? Use a watch to track your exercise during the day, or get Ozempic if you are one of the privileged few. You can't keep track of all your appointments? Integrate all your different calendars on your phone. For many problems, there are apps or gadgets that help you solve your issue. Morozov argues that these types of solutions optimize behavior locally and thereby stand in the way of broad-scope critical thinking and more ambitious solutions. How can a large part of the population have weight issues? Why do we view weight as a problem
to solve? Why are our tasks easier but we remain busier than ever?
Morozov views solutionism as an ideology in which technology is regarded as a type of magic that can solve all our issues. During the ongoing AI hype, many business leaders were scrambling to implement AI into their businesses. As such, it is treated as a type of Swiss Army knife, ready to be adopted anywhere and able to optimize anything. We want to implement AI, but the what and how are secondary questions. Is that not backwards? In order to solve a problem, the root causes should first be examined, not the ideal means of solving it. While it makes sense to want to use technology to remain ahead of your competition, I got the nagging feeling that AI was put on an unrealistic, almost messianic pedestal.
As a data scientist, I see the difficulty in building good models, never mind how to define 'good' in this case. Using AI can give a false sense of security, as it can give the illusion of factuality, while the model may not make perfect assessments. In that way, it can stand in the way of our own critical thinking. Combining this with a current lack of model
Photo by Barbara Zandoval, Unsplash
transparency and biases, we may not know what we are actually using.
Many models are developed to save us time and money, or to increase profits. Social media megacorporations invest heavily in optimizing their algorithms so users stay on the platform. Internet recommendation systems are being used and sold to keep our eyes on our screens, in what is now called an 'attention economy'. So much development is happening to solve minor annoyances or issues that would not have been issues if the technology to solve them did not exist. Think of Wi-Fi fridges or technology that allows you to change the colors of your lights depending on your mood. Nice, but necessary?
A recent article by Business Insider showed that restaurants are now using AI to analyze their trash and cut waste [1]. In itself, this may be a good thing and may help improve on the status quo. However, the cost of developing and running models should not be underestimated. The climate impact of such models can be enormous, thereby creating unforeseen problems later. Such solutions can stand in the way of simpler, less over-engineered ones. By building on a hype, it is the idea rather than the impact that is being sold. We should keep asking the fundamental questions. First of all, what is the problem that the technology is trying to solve? Is it an actual problem? Where do the problems really come from? In this case: should we not rethink the restaurant industry itself and what we expect from our food? What should be emphasized is that a rejection of solutionism is not a rejection of technology, as Morozov says himself. It is a call to tackle today's issues at their core, not merely to focus on the means. In his words: 'it is to transcend the narrow-minded rationalist mindset that recasts every instance
“First of all, what is the problem that the technology is trying to solve? Is it an actual problem? Where do the problems really come from?”
of an efficiency deficit (…) as an obstacle that needs to be overcome' [2]. As data scientists may know, 'there is no such thing as a free lunch'. There are no silver bullets for the issues we are facing, but new technologies can also give us opportunities we did not have before. However, new solutions should not stand in the way of the fundamental questions. So, whenever new technologies are presented, you can ask: what is the problem? What is the solution? And, most importantly, what are the alternatives?
References & Notes
[1] Balevic, K. (2024, April 7). AI is now analyzing your garbage. Business Insider. https://www.businessinsider.com/ai-garbage-food-waste-2024-4
[2] Morozov, E. (2013). Solutionism and its discontents. In To save everything, click here: The folly of technological solutionism (pp. 1-16).
Dow Schüll, N. (2013, September 9). The folly of technological solutionism: An interview with Evgeny Morozov. Public Books. https://www.publicbooks.org/the-folly-of-technological-solutionism-an-interview-with-evgeny-morozov/
Morozov, E. (2023, June 30). The true threat of artificial intelligence. The New York Times. https://www.nytimes.com/2023/06/30/opinion/artificial-intelligence-danger.html
Student Initiatives.
Let's Discuss AI Safety
The Story Behind the Utrecht AI Safety Discussion Group
By Olivier Vroom, Amber Koelfat, and Max Shaffelder
The idea for hosting the Utrecht AI Safety Discussion Group arose when we noticed one issue with our studies: even though we were in a Master's program called Artificial Intelligence, there were very few courses discussing the potential risks of AI, especially misalignment. But wasn't this environment exactly the place where future safety researchers and technical governance workers should be trained? As a response to this lack of a big-picture course on the risks of advanced AI systems, we decided to host a course ourselves. At the beginning of February, we met some fellow Master's students who shared our interest in AI safety and alignment. For this reason, we decided to organize another iteration of the Utrecht AI Safety Discussion Group, as the previous facilitators had already graduated from the university.
After promoting the group during lectures, we formed a collective of 65 interested members, from which a core group of 15–20 active participants emerged. We gathered every week to delve into AI safety topics such as interpretability, pathways to existential risks posed by AI, and AI governance. Using the curriculum by BlueDot Impact, we studied the materials ourselves while simultaneously running weekly group sessions open to anyone interested. The group sessions began with a brief summary of the week's reading materials, followed by about an hour of debates and discussions on the respective topics. The sessions were attended by people from a variety of backgrounds, from philosophy and psychology to computer science, as well as some who had already been in the workforce for several years.
The goal of these sessions was primarily to provide a platform for those interested in AI safety to learn about and discuss topics fundamental to this issue and to inspire those with the motivation and relevant background knowledge to follow up on the course with their own projects and research.
“The most rewarding part of this initiative was the lively debates that emerged during the weekly meetings, often keeping participants engaged well past the official session times as they passionately shared their insights on these profound topics ”
The most rewarding part of this initiative was the lively debates that emerged during the weekly meetings, often keeping participants engaged well past the official session times as they passionately shared their insights on these profound topics. We hosted plenary discussions with role-playing scenarios; one notable debate, for instance, focused on whether it is acceptable for AI systems to mimic human-like behavior. Crafting clear debate topics was challenging, particularly on abstract concepts like AI consciousness, but it led to incredibly stimulating sessions. While opinions often differed, there was unanimous agreement on one point: AI safety and alignment demand urgent attention and should be taken more seriously.
The urgency of AI safety was further emphasized at the VivaTech conference in Paris, where we had a chance to share our story and speak with Elon Musk.
We asked him how to convince people to take AI safety seriously. His response was surprising and a bit concerning: he explained that his team at xAI wouldn't focus on AI safety until their AI reached the power of rivals like ChatGPT. This underscored the competitive "rat race" nature of AI development, where advancing capabilities often takes precedence over ensuring safety and alignment.
In the end, this project was a valuable lesson that highlighted the importance of interdisciplinary dialogue and community engagement in tackling the challenges posed by AI. As AI plays an increasingly profound role in society, it's critical that we continue to evaluate and debate its impact while working toward systems that align with our values and principles.
The Utrecht AI Safety Discussion Group was so well received that we decided to host another iteration in fall 2024, hoping it will provide participants and ourselves with a similarly enriching experience to the one in spring. We also hope that it will become a long-term and vibrant part of the university, inspiring future efforts to address the challenges of AI safety.
Short or Long Term? Let's Talk!
The Big Picture AI Discussion Group and the Value of Conversation
By Thomas Wachter
Are you concerned about AI’s impacts and harms? Great! But now choose: short-term or long-term.
Okay, I’ll help you. Are you worried about how algorithms might, for example, categorize individuals unfairly and reinforce existing patterns of oppression and discrimination? If so, you’re part of the “AI ethics” group.
However, maybe you're thinking, "These issues are small compared to the possibility that superintelligent AI systems could threaten humanity's existence." If that sounds like you, then you're with the "AI safety" group.
This binary choice between AI ethics and AI safety often shapes the debate on artificial intelligence, framing these as two separate and even conflicting perspectives. On the one hand, experts in the field of AI ethics are concerned with the broad integration of AI systems into society and the problems related to these systems' fairness, accountability, transparency, bias, equity, and justice [1]. This is especially urgent because AI systems are both technical and socio-technical, influencing individuals, communities, and
societies [2]. From discriminatory policing practices to biased healthcare systems, algorithms can cause significant harm to marginalized groups and pose serious risks to human well-being, freedom, and privacy (see [3, 4, 5]).
On the other hand, experts in the field of AI safety are focused on long-term scenarios, believing that the ongoing developments in AI suggest the possibility of achieving superintelligent systems, or artificial general intelligence (AGI). These AI systems, they argue, could potentially solve significant global challenges like climate change [6], if we are lucky. However, the rapid progress in AI has also prompted concerns among leading thinkers, who see this rapid development as one of the greatest potential threats to humanity. For instance, in his recent book The Precipice, the philosopher Toby Ord singles out the advent of powerful AI as the greatest existential risk facing humanity today, estimating (somewhat worryingly) that there is a 1 in 10 chance of the machines taking over in the next hundred years [7]. These views have transcended philosophical discussion and led respected AI researchers to hold similar beliefs.
For example, Geoffrey Hinton, a renowned AI researcher often called the "godfather of AI," has warned of the "existential threat" posed by AI, suggesting that substantial advancements in artificial intelligence could lead to human extinction or an irreversible global catastrophe.
In short, the ongoing debates regarding the risks of AI suggest that they could be divided into two big groups: the “AI ethics” group and the “AI safety” group. Those working on, for example, mitigating bias and discrimination in AI systems are placed in the former, while those concerned with artificial general intelligence, extinction risk, alignment, and (very) long-term risks are often placed in the latter [1].
So, do these two groups of experts get along?
No. These are two camps that sometimes stridently dislike each other.
From the perspective of people working on AI ethics, experts focusing on long-term scenarios ignore real problems we already experience today in favor of obsessing over future issues that might never come to be. Moreover, some argue this misleads people about AI's current state and likely future.
“In short, the ongoing debates regarding the risks of AI suggest that they could be divided into two big groups: the “AI ethics” group and the “AI safety” group.”
Melanie Mitchell, for example, claims that "such sensationalist claims deflect attention from real immediate risks" [8].
Conversely, those focused on long-term risks associated with AGI tend to dismiss or downplay these immediate issues [9]. They argue that while bias, fairness, and accountability are important, they “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over” [10]. In their view, focusing on current harms without addressing the far greater risks of a misaligned superintelligence (i.e., an AI that doesn’t align with our goals) is shortsighted, as it fails to prioritize what they believe could lead to the extinction of humanity. This dismissal of immediate concerns is a key point of contention, as it reinforces the divide between these two groups, each seeing the other as neglecting what truly matters in the debate over AI's future.
As a result, this divide is not only theoretical and academic but profoundly personal and emotional. The polarized landscape is apparent in public discourse, where the tension has escalated into highly charged debates. For instance, some go so far
Photo by Pixabay, Pexels
as to link the fixation on existential risk and "long-termism" to an ideology rooted in eugenics. Timnit Gebru, for example, has remarked: "I'll tell you the 'extreme' of 'AI safety': It's EUGENICS." This kind of strong position shows how little opportunity for collaboration there is. Emily Bender, for example, has argued that existential risk proponents are "not natural allies" of those focused on real, short-term risks because they are powerful "johnny-come-lately's" that engage in "ridiculous distraction tactics," overlooking the longstanding work of AI critics (cited in [1]).
“Starting as casual conversations between two friends, the group has evolved into a weekly gathering where students discuss all things AI.”
This highlights the polarization within the debate and reflects the tenor of the "academic" discussion. We often witness highly respected researchers using their social media platforms to point fingers, label, and even insult those with differing views without genuinely engaging with the other side. Online discussions tend to elevate the loudest and most provocative voices, diminishing the opportunity to connect with how others think and feel about AI-related topics on a more nuanced, human level.
Additionally, this shows how polarized the debate is and how little real conversation surrounds this highly relevant topic. This is problematic because, as Sætra and Danaher argue, "without real conversation about AI risks, it will be difficult to manage them properly, and we could end up in a situation
where neither short- nor long-term risks are managed and mitigated" [1]. Dialogue between those with varying views is essential.
At Utrecht University, a group of students from diverse backgrounds believes that conventional discussions around AI can be too narrow. In the "Big Picture AI" discussion group, these students are concerned about AI's social and political implications. While they have their own views on both short- and long-term issues, they believe these perspectives are not mutually exclusive and can learn much from one another. They argue that the dichotomy often drawn between the two oversimplifies the issue, and that the division into distinct camps may be unnecessary. Researchers focused on one aspect of AI have good reasons to take the work done on the other seriously.
With this vision in mind, the Big Picture AI discussion group aims to broaden the conversation around AI by fostering a collaborative environment where diverse viewpoints coexist and enrich the dialogue. Starting as casual conversations between two friends, the group has evolved into a weekly gathering where students discuss all things AI.
“ ... with each week dedicated to a new topic, explored through different media such as podcasts, news articles, or research papers.”
The group encourages interdisciplinary exchanges, inviting friends and classmates from various fields to share their insights and challenge prevailing narratives. This approach deepens their understanding of AI's multifaceted implications and cultivates a culture of empathy and respect among participants. The topics they discuss are varied, with each week dedicated to a new topic, explored through different media such as podcasts, news articles, or research papers. For example, one of the group's favorites was "The feedback loop between science fiction and AI," as discussed on The Good Robot podcast. They listened to science fiction writer Chen Qiufan, author of AI 2041, discuss how science fiction and technology have historically intertwined and how science fiction influences today's technology.
Another intriguing material was Luke Munn’s paper, “The Uselessness of AI Ethics,” which they discussed in collaboration with the Dutch AI Ethics Community. This event was particularly exciting for the group,
as co-founder Will Cosgrave noted that “more voices were in the room than ever before.” Topics included rebranding AI Ethics to “AI Justice” and criticisms of the ineffectiveness of the current corporate culture surrounding AI Ethics, which often involves inadequate regulation and attempts to externalize responsibility.
As we stand at the intersection of immediate challenges and potential long-term threats posed by AI, it becomes increasingly clear that the path forward lies not in division but in respectful and constructive dialogue. The students of the "Big Picture AI" discussion group at Utrecht University embody this spirit of collaboration, recognizing that rather than being adversaries, proponents of different perspectives can work together to build a more nuanced framework for understanding AI's impacts. In other words, if you ask some group members whether we should prioritize short-term or long-term issues, they will respond with an energetic, "Let's talk!"
Yutong Liu & The Bigger Picture / Better Images of AI / AI is Everywhere / CC-BY 4.0
References
[1] Sætra, H. S., & Danaher, J. (2023). Resolving the battle of short- vs. long-term AI risks. AI and Ethics. https://doi.org/10.1007/s43681-023-00336-y
[2] Birhane, A. (2021). The impossibility of automating ambiguity. Artificial Life, 27(1), 44–61. https://doi.org/10.1162/artl_a_00336
[3] Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
[4] Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor (1st ed.). St. Martin's Press.
[5] O'Neil, C. (2017). Weapons of math destruction. Penguin Books.
[6] Summerfield, C. (2022). Natural general intelligence: How understanding the brain can help us build AI. Oxford University Press.
[7] Ord, T. (2020). The precipice: Existential risk and the future of humanity. Bloomsbury Publishing.
[8] Munk Debates. (2022, May 12). Munk debate on artificial intelligence | Bengio & Tegmark vs. Mitchell & LeCun [Video]. YouTube. https://www.youtube.com/watch?v=144uOfr4SYA
[9] Cave, S., & ÓhÉigeartaigh, S. S. (2019). Bridging near- and long-term concerns about AI. Nature Machine Intelligence, 1(1), 5–6. https://doi.org/10.1038/s42256-018-0003-2
[10] O'Neil, L. (2023, August 12). These women tried to warn us about AI. Rolling Stone. https://www.rollingstone.com/culture/culture-features/women-warnings-ai-danger-risk-before-chatgpt-1234804367/
[11] Swierzawska, L. (2024, January 23). UU students meet weekly to discuss the impact of Artificial Intelligence. DUB. https://dub.uu.nl/en/news/uu-students-meet-weekly-discuss-impact-artificial-intelligence
Beyondcode.
Foundations in Focus.
Understanding AI requires assembling the many unresolved pieces of a complex puzzle. In this section, we weave together perspectives from law, computational logic, philosophy of science, and art to explore the foundational principles shaping AI and the ways we understand it. By integrating these diverse disciplines, we aim to illuminate the structures influencing AI's development, governance, and cultural resonance.
Where Science, Technology, and Society Meet.
An Interview with Abigail Nieves Delgado
By Thomas Wachter
Science, technology, and society are deeply interconnected. On the one hand, technological artifacts allow the creation of scientific knowledge. For example, the invention of the telescope or the microscope allowed scientists to see what was obscure before. On the other hand, scientific knowledge enables the creation of new technology and is fundamentally incarnated in technological artifacts introduced into society. This relationship also occurs within machine learning and AI systems that classify people based on some category. For example, in algorithms designed to classify emotions, the number of emotions recognized by the system, the labels assigned to those emotions, and the expected ways those emotions are expressed on human faces are typically informed by scientific research.
In times when science provides the voice of authority in our culture to an extent unmatched by any other institution, understanding this relationship becomes fundamental. In particular, in the case of AI systems and machine learning algorithms deployed massively to sort and classify
individuals, understanding which categories are being used and how they are scientifically informed is crucial.
These categories can profoundly impact people's lives, influencing decision-making in healthcare, law enforcement, employment, and more. Therefore, it is relevant to understand not only the data in search of biases but also the underlying science. As society increasingly relies on these advanced technologies, a nuanced vision of the interplay between science, technology, and society becomes imperative to navigate the ethical and social challenges that arise.
Abigail Nieves Delgado is a Mexican biologist and philosopher of science at the Freudenthal Institute in the master's program of History and Philosophy of Science at Utrecht University. Her research looks at racialization practices in the history of science and contemporary biomedical research. Her current project looks at concepts like race and ethnicity in microbiome research, especially in places outside of what is considered the global north. A notable part of her previous research
Photo by Pavel Danilyuk, Pexels
that arises from her work on race is how current AI technologies are used in, for example, facial recognition algorithms and how this reinforces our current ideas about race.
The ultimate aim of your research is “stopping racism in science and society.” How do you do that?
Abigail Nieves Delgado is an assistant professor at the Freudenthal Institute of Utrecht University. Her research focuses on the history and philosophy of the life sciences and physical anthropology. She is especially interested in racialization practices in the history of science, contemporary biomedical research, and in biometric technologies. She is the principal investigator of the project “Microbiome Research and Race in the Local South,” funded by the NWO.
Abigail Nieves Delgado: This is one of the things I care about the most. Even though I do not have a master plan, one of the things I believe to be important and interesting is analyzing how science contributes to racism. That is why I look into the relationship between science and technology and try to see how these truth-maker devices are placed in society.
Another big part of my aim of ending racism lies in my teaching. With my teaching, I hope people become more aware of the problems of believing that racial differences exist among people. I do not think I am going to end racism forever, but I will try to contribute my bit.
Why is it important to engage with this during the teaching?
Abigail Nieves Delgado: Sometimes, students think that scientific knowledge is disembodied, totally universal, and objective. However, in my teaching, I want my students to see that ideas are situated. Within a feminist epistemology frame, I try to show my students that science is not magical. Rather, it is done by scientists who come from different backgrounds, both academically and
personally. They come from all parts of the world, influencing how they understand the reality they describe in their science. Now, thanks to feminist epistemologists, it is more common to state where one comes from. I, for example, usually state that I come from Mexico before my lectures.
What role does technology play in racism?
Abigail Nieves Delgado: Historically, technology has been used a lot to define different races, which has contributed to generating the idea that races exist and that we can measure them. This can make certain biases bigger, especially against disadvantaged people, but it can also establish new ways of controlling the population. A clear case of this occurs with facial recognition technologies, in which certain "races" are used as categories for these AI systems. For example, this can be seen in the use of biometric technologies in places like refugee camps. Even though it can have certain benefits in automating certain processes, it can also be very damaging to the people in those places. In that sense, these technologies can help us do different things but can also increase disadvantages for groups that are
already suffering disadvantages.
What is the relation between scientific knowledge and technology?
Abigail Nieves Delgado: Some technologies relate to this more closely than others. Facial recognition technologies probably relate most closely to ideas about racial difference. In a paper I wrote about statistics in facial recognition ("Race and Statistics in Facial Recognition: Producing Types, Physical Attributes, and Genealogies"), I show how such technologies are informed by the science behind the systems they are built on. Examining that relationship and breaking it down is actually one of the ways of stopping racism. If I can at least show that race is not a unit to measure, not a clear biological and actual thing in the world, hopefully the link will be broken, and better science and technology will be designed. For example, a technology that portrays the idea that human diversity goes in many different directions, and that you can group it in many different ways instead of only using racial categories.
How does technology influence science back?
"If I can at least show that race is not a unit to measure, not a clear biological and actual thing in the world, hopefully the link will be broken, and better science and technology will be designed."
Abigail Nieves Delgado: Technology clearly and significantly impacts science by facilitating the creation of new knowledge. This is especially evident in medical fields, where technological advances provide tools to measure and monitor health indicators. These instruments, in turn, rely on established scientific norms to function. However, it is important to recognize that the scientific knowledge embedded in these technologies might not always be just for everyone. For instance, medical devices and standards often do not account for variations across the population. Different ethnic groups have historically suffered from these biases. For example, Black people or Indigenous people might suffer from the assumptions embedded not only in the technologies but also in society or sociotechnical systems.
Photo from Unsplash
"This perspective, which I found troubling but couldn't articulate at the time, essentializes and romanticizes indigenous cultures, ignoring their dynamic and evolving nature."
You are also working on indigenous knowledge in science. Can you tell me more about that? How can “traditional” science benefit from acknowledging indigenous knowledge?
Abigail Nieves Delgado: Initially, I pursued my bachelor's thesis in biology because I was fascinated by the knowledge of indigenous cultures in Mexico. However, I encountered numerous assumptions about indigenous people in biological research. For example, many articles would stereotype these communities as living harmoniously with nature and possessing unchanging, ancient knowledge. This perspective, which I found troubling but couldn't articulate at the time, essentializes and romanticizes indigenous cultures, ignoring their dynamic and evolving nature.
Another issue is the selective validation of indigenous knowledge. Indigenous people who maintain traditional practices and appearances are often considered more "authentic", whereas those who adopt more modern or “Westernized” ways are seen as less credible. This selective perception is harmful and perpetuates stereotypes, preventing a fuller integration of indigenous knowledge into broader scientific discourse.
In the philosophy of science and discussions on epistemic diversity, we sometimes fall into the trap of treating indigenous knowledge as folklore rather than as a legitimate and evolving system of understanding. Scientists, philosophers, and other stakeholders must engage with indigenous knowledge respectfully and non-extractively, ensuring mutual benefit rather than merely appropriating knowledge.
This engagement is complicated by differences in language and worldview, which can lead to misunderstandings and loss of valuable insights during translation. Despite these challenges, recognizing and incorporating indigenous knowledge can provide answers and perspectives that conventional science might overlook. Acknowledging and addressing the biases inherent in our scientific practices and technologies is vital. Awareness and dialogue among scientists, technology developers, and knowledge holders can help mitigate these biases and lead to more inclusive and equitable scientific knowledge.
Should We Use the Precautionary Principle to Regulate Emerging Technologies?
By Alejandro del Valle Louw
The popularization of AI-powered systems is a sweeping phenomenon that either already does, or soon will, touch almost every facet of society. Today, AI systems are increasingly used by governments and private actors alike to make a range of important decisions, from an individual's eligibility for government benefits and risk assessment in the judicial system, to content moderation on widely used social media platforms [1, p. 240]. As these models become integrated into core elements of our governance, healthcare, education, and welfare systems, the value of using AI becomes more profound, and the risks considerably higher [2, p. 11].
In the face of this technological revolution, how the development of these emerging technologies should be regulated, and by whom, remains a contested subject. Some scholars argue that the responsibility to guide policy in this field should be taken by the tech corporations themselves [3], while others have argued that governments are best suited to regulate this ever-changing industry [1]. A further consideration is how exactly these actors should make decisions in the face of these developments' empirical uncertainty. Empirical uncertainty is the lack of knowledge we have about how these systems operate because we haven't run enough
scientific tests on them. There is concern that these technologies have the potential for catastrophic, irreparable consequences, including mass job displacement, potential algorithmic bias, privacy vulnerabilities, copyright infringements, autonomous weapons, and much more [2, p.7].
The traditional method of assessing this kind of uncertainty and risk in policymaking is cost-benefit analysis. However, this approach underweights very bad outcomes that have a small probability of occurring, which can be undesirable when considering regulation. The precautionary principle is an alternative method of decision-making, which dictates that, under empirical uncertainty, policymakers should create laws which aim to phase out or restrict activities which could plausibly lead to disaster [4, p. 50]. The appeal is that such precautionary reasoning can prevent risky policy-making where science's conflicting or incomplete results can be misinterpreted and therefore be insufficiently probative of actual danger [5, p. 276]. The principle also prevents policy procrastination and allows the proper regulations to be in place before a dangerous technology is implemented across society. Others have argued that the principle is not action-guiding, as it suffers from a paradox of
cumulative probabilities, known as the cumulative likelihood problem [4, 6].
It is unclear, therefore, whether the precautionary principle is a suitable framework under which to regulate emerging technologies in the presence of empirical ambiguity. This paper will scrutinize arguments for the precautionary principle, analyze whether the cumulative likelihood problem renders the principle useless as a way of making policy decisions, and finally conclude whether the principle is equipped to be a regulatory framework for emerging technologies such as AI.
Defining the Precautionary Principle
The precautionary principle has undergone a variety of interpretations, and thus, to determine its effectiveness as a policy-guiding framework, we must first precisely define what the principle entails. A commonly used formulation of the precautionary principle, for example, asserts that under uncertainty, when an activity has potentially negative consequences, measures should be taken to prevent it, even if the relationship between cause and effect is not established [7, p. 35]. As has been stated before, however, this definition is far too vague to be an action-guiding principle. Essentially all actions carry some potentially negative consequences and some amount of uncertainty. The principle would therefore restrict almost any action and render itself useless as a theory; thus it requires further nuance.
Vitally, the principle cannot be triggered by negligible amounts of uncertainty, but only when there is a lack of scientific information and assigned probabilities. The exact amount of uncertainty that can be permitted is intentionally unspecified, so that it can be
applied differently to different situations, depending on expert consensus and the extent of the negative consequences. The exact threshold of uncertainty depends upon the severity of the potential harms of the action, which allows for context-specific application: in the detonation of a nuclear weapon, much less uncertainty is required to trigger the principle than when implementing a sugar tax, because the outcome has the potential to be far worse. Actions which pose a threat of extremely low or insignificant harm also cannot trigger the precautionary principle. The principle is only activated when there is the potential for irreversible, catastrophic harm to the environment, public health, or the targeted parties of the policy [7, p. 35]. Again, there is no specific threshold of destruction which triggers the principle; rather, the principle emphasizes the need to act cautiously when there is a possibility of serious harm, even when the evidence is inconclusive [4, p. 50]. No amount of positive consequences can outweigh a catastrophic risk: once the precautionary principle is triggered, no amount of potential benefit can overturn it. In summary, for an action to trigger the precautionary principle, it must pose serious
harm to a section of society and constitute a non-negligible, though intentionally vaguely defined, level of risk.
The Precautionary Principle and Emerging Technologies
By the definition we have set up, emerging technologies like AI certainly tick the boxes regarding the severity of the potential worst-case scenarios and the probability of them materializing. The current use of AI may lead to discrimination in the judicial system, which specifically threatens affected parties of the AI's use with irreversible consequences (false or unjust imprisonment). AI-powered weapon systems pose a catastrophic risk if used in the wrong hands or if they are embedded with racial or other social prejudices. AI systems, therefore, due to their complexity and lack of human oversight, may exhibit unexpected behaviors or make decisions with unforeseen consequences, negatively impacting individuals, businesses, or society as a whole. However, the risk of this happening is unknown to us, though not negligible, as we have seen evidence of algorithmic bias in the COMPAS case [8]. Therefore, both criteria of the principle are activated.
“Because the precautionary principle operates under uncertainty, a politician can make the best decisions possible with the information that is available to them.”
We therefore have good reason to use the precautionary principle to regulate emerging technologies over other decision frameworks. For example, the principle forces policymakers to consider the feasible alternatives for any given problem [9, p. 21]. This involves assessing different strategies and technologies when searching for a solution, rather than confining attention to pre-established methods [9, p. 21]. Under uncertainty, all feasible courses of action are considered and evaluated relative to all other possible avenues. For example, if AI-assisted diagnostics in medicine is the only viable solution (due to, say, a lack of doctors), then more leniency will be permitted in its implementation. However, if there are other lower-risk options available, the pursuit of such risky and damaging alternatives must be properly regulated.
Secondly, the precautionary principle is the best tool against policy procrastination on offer, which is of vital importance in regulating emerging technology. In times of scientific uncertainty, policy-makers may decide not to act at all [5, p. 276]. Politicians tend to wait for more conclusive information before acting, which can be a disastrous decision in cases like artificial intelligence, where rapid developments and wide-ranging use cases require an immediate response. Because the precautionary principle operates under uncertainty, a politician can make the best decisions possible with the information that is available to them.
The precautionary principle does not aim to prevent the development or use of artificial intelligence and other emerging technologies, as they can be incredibly powerful tools for society. The principle simply proposes that we need to be more certain about the consequences and the probabilities of using
risky technology before they are implemented across society. Regulating technology in this way will also incentivise further research into the negative consequences of using them, which will benefit the technology industry and society at large.
The Cumulative Likelihood Problem
The strongest objection the precautionary principle faces is known as the cumulative likelihood problem. Thoma [4, p. 62] gives the example, quite suitably for our purposes, of a new risky technology. There is an independent risk of severe catastrophe of 0.005% for every month that the technology is used. This may be insignificant, but over time it compounds, such that 10 years of usage would result in roughly a 0.6% risk. The precautionary principle clearly asserts that if an activity leads to a non-negligible risk of catastrophe, and there is a significantly safer alternative available, we should pursue the alternative [4, p. 63]. What is to be done when the precautionary principle constrains the extended activity of, say, the prolonged usage of risky AI technology, but does not constrain the incremental activities when considered in isolation? Take the genuine concern that AI-powered weapons pose cybersecurity risks: if they are compromised, they can be mobilized as weapons of efficient mass destruction [10]. The concern is that, while the possibility is low in the short term, as new technologies multiply, there is an increasing risk in the long term. There is a similar concern with job displacement, in that emerging technologies displacing one industry is not cause for concern, but if emerging technologies take over many or all industries, there will be catastrophic job loss across all of society.
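For readers who want to check the arithmetic, the compounding in Thoma's example takes only a few lines, assuming the monthly risks are independent:

# Compounding an independent 0.005% monthly risk of catastrophe over 10 years.
monthly_risk = 0.00005          # 0.005%
months = 10 * 12

cumulative_risk = 1 - (1 - monthly_risk) ** months
print(f"{cumulative_risk:.2%}")  # ~0.60%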
“ ...the cumulative likelihood problem is not convincing enough to dissuade us from adopting the precautionary principle to regulate emerging technologies.”
I believe, however, that the cumulative likelihood problem is not convincing enough to dissuade us from adopting the precautionary principle to regulate emerging technologies. It can be solved by setting a specific scope for the problem, or a hard cut-off point within which the risks can be assessed. What exactly this scope is can be context-dependent, but as we are discussing policy, it would be best suited to fit political cycles. If there is a concern over the prolonged use of AI weapons, the risk can be assessed over the course of four years, such that it is overseen by the same government which regulates it, and then reassessed at the start of a new cycle. This ensures accountability, prevents policy procrastination, and does not bind future governments to the analyses of previous ones. If there is concern over job displacement, a clear scope can be defined, such as the transportation industry, and limits can be set on automation within that scope to achieve the policy aims.
This critique also ignores the fact that the precautionary principle is a measure intended for times of uncertainty, when the exact probabilities of disaster cannot be known. If the probabilities are non-negligible, erring on the side of caution buys us time to better understand the risks and consequences, and how best to circumvent them. If we are unsure how probabilities will accumulate over time, I believe the precautionary principle is the right tool: it first buys time to understand the exact risks and then to develop solutions for them.
A Glance Towards the Future
As technology advances deeper into the unknown, we look to our politicians to act decisively and effectively in preventing catastrophic outcomes. This paper has shown how the precautionary principle can be used as a framework to guide decision-making that prevents policy procrastination, disincentivizes risky policy-making, and provides a broader consideration of the feasible alternatives. Its flexibility and context-sensitivity as a theory guide politicians more proactively through times of uncertainty. Deciding how to regulate emerging technologies is a complex issue, but by taking the time to better understand the rapidly evolving landscape of technology, we can avoid the disastrous outcomes which concern people and politicians alike, and ensure a safer transition to the new golden age of technology. Governments would be wise to act with precaution in the face of uncertainty, as other methods of decision-making could very well lead to disaster.
In the wake of new technologies that pose elusive dangers, it is essential to consider how we want our leaders to act on our behalf. Despite its vagueness, the precautionary principle allows for context-sensitive decision-making guided by expert consensus.
Notes & References
[1] Ferretti, T. (2022). An Institutionalist Approach to AI Ethics: Justifying the Priority of Government Regulation over Self-Regulation. Moral Philosophy and Politics, 9(2), 239-265. https://doi.org/10.1515/mopp-2020-0056
[2] Black, J., & Murray, A. (2019). Regulating AI and machine learning: setting the regulatory agenda. European Journal of Law and Technology, 10(3). ISSN 2042-115X
[3] Buhmann, A., & Fieseler, C. (2023). Deep Learning Meets Deep Democracy: Deliberative Governance and Responsible Innovation in Artificial Intelligence. Business Ethics Quarterly, 33(1), 146–179. doi:10.1017/beq.2021.42
[4] Thoma, J. (2022). Time for Caution. Philosophy & Public Affairs, 50: 50-89. https://doi.org/10.1111/papa.12204
[5] Ricci, P. F., & Zhang, J. (2011). Benefits and Limitations of the Precautionary Principle. Encyclopaedia of Environmental Health. https://www.dropbox.com/scl/fi/hds8quspg908svp2v0bcf/Ricci-and-Zhang-Precautionary-principle-2011.pdf?rlkey=mvlfg1o1i7z7lr6gj2or70weu&dl=0
[6] Bradley, R., & Roussos, J. (2021). Following the Science: Pandemic Policy Making and Reasonable Worst-Case Scenarios. LSE Public Policy Review, 1(4). DOI: 10.31389/lseppr.23
[7] Gardiner, S. M. (2006). A Core Precautionary Principle. Journal of Political Philosophy, 14: 33-60. https://doi.org/10.1111/j.1467-9760.2006.00237.x
[8] Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
[9] Steele, K. (2006). The precautionary principle: a new approach to public decision-making? Law, Probability and Risk, 19–31. doi:10.1093/lpr/mgl010
[10] Rodrigues, R. (2020). Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology, Vol. 4.
Neural Cellular Automata for Computer Vision
By Dennis A. Mertens
What are Cellular Automata?
Cellular Automata (CA) comprise a model of computation, just like the Turing Machine [1] (De Mol 2021) or the Lambda Calculus (Alama and Korbmacher 2023). Their conception is attributed to John von Neumann, who used them to describe a self-replicating machine (Von Neumann and Burks 1966).
A (single) cellular automaton is described by a position and a state. Throughout the automaton’s lifetime, the position remains fixed, but the state is allowed to change over time. Typically, these two components are represented using integers, but that need not be the case; for example, the CA in the Lenia engine [2] are continuous (Chan 2019). Therefore, without any loss of generality, let x and s represent the position and the state of a given automaton, respectively.
The neighborhood of any given cellular automaton corresponds to a set of positions which are said to sit adjacent to, or neighboring, the automaton. It is therefore understood that CA are situated according to some spatial arrangement. Popular implementations arrange the automata on a 2D grid, where the neighborhood of a given automaton is the set of eight immediately adjacent cells (see Figure 1); a notable example is Conway’s Game of Life [3] (Gardner 1970).
In such popular implementations, the neighborhood of a given automaton is the set of cells at distance exactly 1 from it; for the eight-cell neighborhood of Figure 1 this is distance in the Chebyshev (chessboard) sense, whereas radius 1 in the Manhattan distance [4] would yield only the four orthogonally adjacent cells. If we combine the states of all automata in a population, we obtain a collection of states, which we can represent as a 2D matrix S, where every s_ij denotes the state of the automaton located at position (i, j).
All automata are governed by some transition function or ruleset, which describes how the state of a single cellular automaton will change, given its own state and the states of all automata in its neighborhood. Depending on the ruleset, applying it multiple times to the same grid can give rise to interesting behavior.
Figure 2 shows the dynamics of the transition function used in Conway’s Game of Life on a particular arrangement of cells typically known as a "glider". Note that no explicit notion of "objectness" exists in the game: the glider is not something programmed explicitly within the rules. Instead, it is a natural consequence of how CA interact with one another.
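For readers who prefer code, here is a minimal NumPy sketch of such a ruleset, using the Game of Life’s birth/survival rules; the toroidal (wrap-around) boundary is an implementation choice, not part of the definition:

```python
import numpy as np

def life_step(S):
    """One application of Conway's Game of Life ruleset.
    S is a 2D array of 0/1 states; the grid wraps around (torus)."""
    # Count live neighbours by summing the eight shifted copies of the grid.
    neighbours = sum(
        np.roll(np.roll(S, di, axis=0), dj, axis=1)
        for di in (-1, 0, 1) for dj in (-1, 0, 1)
        if (di, dj) != (0, 0)
    )
    # A dead cell with exactly 3 live neighbours becomes alive;
    # a live cell with 2 or 3 live neighbours stays alive.
    return ((neighbours == 3) | ((S == 1) & (neighbours == 2))).astype(S.dtype)

# A glider, as in Figure 2: repeatedly applying the rule moves it diagonally.
S = np.zeros((8, 8), dtype=np.uint8)
S[1, 2] = S[2, 3] = S[3, 1] = S[3, 2] = S[3, 3] = 1
for _ in range(4):
    S = life_step(S)
```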
Figure 1: Radius-1 neighborhood of a cellular automaton in a 2D grid (also known as the Moore neighborhood). Edges represent adjacency w.r.t. the central cell.
2D Cellular Automata are Convolutional Neural Nets!
This kind of grid-like 2D CA can easily be modeled by convolutional neural nets (CNNs).[5] In fact, Gilpin (2019) proposes a CNN architecture specifically designed for this. First, a 3x3 kernel [6] in the input layer reads the states of each neighborhood; multiple 1x1 kernel layers then model the transition function (see Figure 3).
Using this architecture, Gilpin (2019) infers the ruleset just by training the network to imitate video samples of CA. In principle, this is not a process too different from training modern language models to auto-complete text. Basically, for a sequence of grid-states S_0, S_1, S_2, . . . , S_T, a CNN is trained to predict S_{t+1} given S_t. The umbrella term for this kind of training is imitation learning, a term coined precisely because of its reliance on data describing the behavior of a system step by step.
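A rough PyTorch sketch of this idea follows. It mirrors the architecture described above (a 3x3 input convolution followed by 1x1 convolutions), but the layer widths, the continuous states, the loss, and the training loop are illustrative assumptions rather than the exact setup of Gilpin 2019:

```python
import torch
import torch.nn as nn

class CACNN(nn.Module):
    """CNN that models a 2D CA: one 3x3 conv reads each neighbourhood,
    stacked 1x1 convs model the per-cell transition function."""
    def __init__(self, channels=1, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=3, padding=1),  # neighbourhood read
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=1),               # per-cell transition
            nn.ReLU(),
            nn.Conv2d(hidden, channels, kernel_size=1),
        )

    def forward(self, S_t):
        return self.net(S_t)

# Imitation learning: predict S_{t+1} from S_t over a recorded trace.
model = CACNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
trace = torch.rand(16, 1, 32, 32)   # placeholder for recorded grid-states S_0..S_15
for t in range(trace.shape[0] - 1):
    S_t, S_next = trace[t:t + 1], trace[t + 1:t + 2]
    loss = nn.functional.mse_loss(model(S_t), S_next)
    opt.zero_grad(); loss.backward(); opt.step()
```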
This is a simple but powerful method, because it makes the CA fully differentiable with respect to their states and ruleset. Since it is a well-known fact that some rulesets result in CA that are Turing-complete [7], it is only natural to wonder what kinds of systems can be learned using differentiable CA.
Neural Cellular Automata can “Grow” Images!
In general, fully differentiable CA are also known as neural CA (NCA). Imitation learning is not the only way of training NCA. Since their behavior over time is similar to that of any recurrent neural network (RNN) [8], they can be trained using backpropagation through time (BPTT).
In BPTT, a differentiable model is run for a given number of time-steps T. At each time-step t, the model generates a recurrent state (Werbos 1990). In the case of NCA, the recurrent state is the grid-state S_t. Therefore, a T-step run generates grid-states S_0, S_1, . . . , S_{T-1}. Because the transition from any S_t to S_{t+1} is controlled by a fully differentiable model (e.g. the NCA), it is possible to update the parameters of that model using gradient descent [9].
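In NCA terms, one BPTT update can be sketched as follows; `nca` stands for any differentiable per-step update (such as the CNN sketched earlier), and the target and loss are placeholders:

```python
import torch

def bptt_update(nca, S0, target, T, optimizer):
    """Unroll the NCA for T steps and backpropagate through the whole trace."""
    S = S0
    for _ in range(T):
        S = nca(S)                        # differentiable transition S_t -> S_{t+1}
    loss = torch.nn.functional.mse_loss(S, target)
    optimizer.zero_grad()
    loss.backward()                       # gradients flow through all T transitions
    optimizer.step()
    return S.detach(), loss.item()
```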
In practice, applying BPTT this way is likely to fail. The longer a simulation trace becomes (i.e. the larger the value assigned to T), the more likely vanishing and exploding gradients [10] are to occur (Pascanu, Mikolov, and Bengio 2013). While there are ways to mitigate the problem, no general solution is currently known. This is prohibitive for NCA, because their computational capacity depends greatly on running long traces. For any cell to communicate with another, the minimum simulation time that would allow a message to reach the other end equals the distance between them on the grid (one neighborhood hop per step).

Figure 2: A "glider" moving towards the lower-right corner of the grid.

Figure 3: Illustration of the CNN architecture for modeling CA (taken from Gilpin 2019). T=0 stands for the current state at t and T=1 stands for the next state at t + 1.
Originally, Mordvintsev et al. (2020) propose simulating for relatively few steps (i.e. picking a small value for T) and storing the end-state. Progressively, they build up a batch of states that can be used to simulate multiple (independent) traces in parallel. For example, if we simulate S_0, S_1, S_2 and stop, we obtain a batch containing S_0 and S_2.
Subsequent updates to the NCA’s parameters then stem from simulations starting from each of these states independently. It works quite well, as they show by training NCA to regenerate (and maintain) shapes over long periods (see Figure 4). However, the limitation of this approach is that it cannot learn processes that strictly require high-precision, long-range communication between any two automata. In such cases, it would still be necessary to update parameters according to the gradients w.r.t. the entire trace.
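A condensed sketch of that pool-based scheme is below; the pool size, rollout length, and loss are assumptions in the spirit of Mordvintsev et al. 2020 rather than their exact hyperparameters:

```python
import random
import torch

def pool_training_step(nca, pool, target, optimizer, batch_size=8, steps=32):
    """Sample stored end-states, roll each forward for a few steps,
    update the NCA, and write the new end-states back into the pool."""
    idx = random.sample(range(len(pool)), batch_size)
    batch = torch.stack([pool[i] for i in idx])
    for _ in range(steps):                        # short rollout, cheap to backprop
        batch = nca(batch)
    loss = torch.nn.functional.mse_loss(batch, target.expand_as(batch))
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    for i, j in enumerate(idx):                   # persist end-states for later rollouts
        pool[j] = batch[i].detach()
    return loss.item()
```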
In Kalkhof, Kühn, et al. 2024, a different trade-off is proposed. Instead of assigning pixels to automata directly, they first apply the fast Fourier transform (FFT) [11], and then train NCA in the frequency domain. As a result, long-range communication between automata occurs immediately after the first step. The trade-off is no longer time vs. communication, but, roughly speaking, time vs. complexity. For example, mostly flat surfaces in the spatial domain are mainly affected by the cells near the origin in the frequency domain, regardless of how far apart they are in the spatial domain.
Naturally, this approach still makes a trade-off. If the problem requires processing high-frequency signals, it may well become prohibitive.
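The round-trip at the heart of the frequency-domain idea can be pictured in a few lines of NumPy; the NCA update itself is elided, and this is only a conceptual illustration, not the method of Kalkhof, Kühn, et al. 2024:

```python
import numpy as np

image = np.random.rand(64, 64)        # placeholder grid of states
freq = np.fft.fft2(image)             # automata now live in the frequency domain
# ... NCA steps would run on `freq` here: every coefficient mixes information
#     from the whole image, so "long-range" communication is immediate ...
reconstructed = np.fft.ifft2(freq).real
assert np.allclose(reconstructed, image)
```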
NCA for Coloring Images (Image Segmentation)
In image processing and computer vision, segmentation amounts to specifying segments or regions in an image, such that each segment encapsulates a part of the image associated with a particular concept or semantic class.
Figure 4: Illustration of NCA learning to maintain the shape of a lizard (taken from Mordvintsev et al. 2020).
In Kalkhof, González, and Mukhopadhyay 2023, Med-NCA is proposed: a two-step approach to image segmentation reliant on neural cellular automata. They identify two key problems with the standard NCA:
1. High video RAM (VRAM) requirements when dealing with high-resolution images.
2. Difficulty reaching convergence on high-resolution images.
Their method tackles these two problems by first having the NCA communicate over a down-sampled version of a given image, then having the NCA produce a segmentation mask over the original image. Between these two steps, the NCA states resulting from communicating over the down-sampled image are up-scaled proportionally to fit the original. Figure 5 illustrates this process.
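Schematically, the pipeline can be sketched as follows; `nca_low`, `nca_high`, the step counts, and the bilinear up-scaling are hypothetical stand-ins for the components described in the paper, and both NCA are assumed to preserve their channel count:

```python
import torch
import torch.nn.functional as F

def two_step_segment(nca_low, nca_high, image, steps=16, scale=4):
    """Two-step segmentation in the spirit of Med-NCA:
    1) let an NCA communicate over a down-sampled image,
    2) up-scale its states and produce the mask at the original resolution."""
    state = F.interpolate(image, scale_factor=1 / scale, mode="bilinear")
    for _ in range(steps):                 # step 1: cheap, global communication
        state = nca_low(state)
    state = F.interpolate(state, size=image.shape[-2:], mode="bilinear")  # up-scale states
    for _ in range(steps):                 # step 2: refine at full resolution
        state = nca_high(state)
    return torch.sigmoid(state[:, :1])     # read the first channel out as the mask
```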
Med-NCA reportedly outperforms the classical UNet [12], albeit only by 2% and 3%. Perhaps the strongest contribution of Med-NCA, though, is that it requires 500 times fewer parameters than the classical UNet. However, it has some limitations; for example, there is a 10% performance gap between Med-NCA and nnUNet [13].
A Niche Too Stubborn to Die
CA are a curious case of a topic that has managed to stay "alive" despite the passage of time, and they keep spreading into fields that originally seemed out of scope (e.g. machine learning, physics (Chopard 2018), and philosophy (Berto and Tagliabue 2023)). Furthermore, we know CA are a powerful model of computation and that they are relatively easy to train when compared with other attempts such as the neural Turing machine (NTM) [14] (Graves, Wayne, and Danihelka 2014). Thus, I hope this short article manages to inspire some curiosity in CA and NCA specifically. For example, the implementations discussed in this article hinge on the idea of representing NCA with convolutions, which by design only allows for automata that communicate with their immediate neighbors. It is only natural to wonder what would happen if we used self-attention [15] (Vaswani et al. 2023) layers to learn the shape of the neighborhood, instead of imposing it by design.
Figure 5: Illustration of Med-NCA (taken from Kalkhof, González, and Mukhopadhyay 2023).
Notes
[1] In this context, "Turing Machine" as opposed to "Turing machine" denotes the computational model characterizing the set of all Turing machines; "Turing machine" denotes a single instance within that set. These are different concepts, though they are easily and often conflated.
[2] The Lenia engine is a computer program that allows users to play and experiment with continuous CA.
[3] Conway’s Game of Life is a computer program that allows users to play with discrete CA.
[4] The Manhattan distance between two points a and b is defined as Σ_{i=1}^{d} |a_i − b_i|, where d is the number of dimensions. Note that both points must have the same number of dimensions; otherwise, further specification is required to handle the mismatch.
[5] CNNs are typically known for having their nodes grouped in layered grids, such that the same response pattern is present in each grid-cell (Lecun et al. 1998).
[6] In this context, a kernel is like a sliding window. Unlike a typical sliding window, though, a kernel may also have some "depth": it can read multiple layers or channels at once (e.g. the red, green, and blue channels in digital images).
[7] When a computational model is Turing-complete, it is possible to implement any Turing machine within that model. The easiest way to prove this is by implementing the universal Turing machine within said model.
[8] Typically, neural nets read some input and produce some output over a fixed number of steps. In contrast, a recurrent neural net can apply an arbitrary number of intermediate steps before producing a final output.
[9] Gradient descent is the standard algorithm for training artificial neural nets.
[10] The vanishing and exploding gradient phenomenon is an engineering problem: when gradients become too small or too large, our machines may lack the resolution and/or memory to represent them, so their values snap to zero, NaN, etc.
[11] If we understand sequences as discrete signals, then the fast Fourier transform is an efficient algorithm that computes a representation of a sequence in terms of the frequencies it occupies.
[12] UNet is a type of CNN specifically designed for medical image processing tasks (Ronneberger, Fischer, and Brox 2015).
[13] nnUNet is a self-adaptive framework that extends the classical UNet by adapting some architecture and data-pipeline parameters to the training data.
[14] Neural Turing machines are a special type of RNN that incorporate an unbounded tape, like the classical Turing machine. In principle, this allows them to learn algorithms not possible with typical RNNs. However, training them with standard gradient descent is nearly impossible.
[15] Self-attention is the mechanism behind the success of modern large language models (LLMs).
References
Werbos, P. J. (1990). “Backpropagation through time: what it does and how to do it”. In: Proceedings of the IEEE 78.10, pp. 1550–1560. doi: 10.1109/5.58337.
Lecun, Y. et al. (1998). “Gradient-based learning applied to document recognition”. In: Proceedings of the IEEE 86.11, pp. 2278–2324. doi: 10.1109/5.726791.
Pascanu, Razvan, Tomas Mikolov, and Yoshua Bengio (2013). On the difficulty of training Recurrent Neural Networks. arXiv: 1211.5063 [cs.LG].
Graves, Alex, Greg Wayne, and Ivo Danihelka (2014). Neural Turing Machines. arXiv: 1410.5401 [cs.NE].
Ronneberger, Olaf, Philipp Fischer, and Thomas Brox (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv: 1505.04597 [cs.CV].
Chopard, Bastien (2018). “Cellular Automata Modeling of Physical Systems”. In: Cellular Automata: A Volume in the Encyclopedia of Complexity and Systems Science, Second Edition. Ed. by Andrew Adamatzky. New York, NY: Springer US, pp. 657–689. isbn: 978-1-4939-8700-9. doi: 10.1007/978-1-4939-8700-9_57. URL: https://doi.org/10.1007/978-1-4939-8700-9_57.
Chan, Bert Wang-Chak (Oct. 2019). “Lenia: Biology of Artificial Life”. In: Complex Systems 28.3, pp. 251–286. issn: 0891-2513. doi: 10.25088/complexsystems.28.3.251. URL: http://dx.doi.org/10.25088/ComplexSystems.28.3.251.
Gilpin, William (Sept. 2019). “Cellular automata as convolutional neural networks”. In: Phys. Rev. E 100 (3), p. 032402. doi: 10.1103/PhysRevE.100.032402. URL: https://link.aps.org/doi/10.1103/PhysRevE.100.032402.
Mordvintsev, Alexander et al. (2020). “Growing Neural Cellular Automata”. In: Distill. https://distill.pub/2020/growing-ca. doi: 10.23915/distill.00023.
De Mol, Liesbeth (2021). “Turing Machines”. In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta. Winter 2021. Metaphysics Research Lab, Stanford University.
Alama, Jesse and Johannes Korbmacher (2023). “The Lambda Calculus”. In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta and Uri Nodelman. Winter 2023. Metaphysics Research Lab, Stanford University.
Berto, Francesco and Jacopo Tagliabue (2023). “Cellular Automata”. In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta and Uri Nodelman. Winter 2023. Metaphysics Research Lab, Stanford University.
Kalkhof, John, Camila González, and Anirban Mukhopadhyay (2023). “Med-NCA: Robust and Lightweight Segmentation with Neural Cellular Automata”. In: Information Processing in Medical Imaging. Ed. by Alejandro Frangi et al. Cham: Springer Nature Switzerland, pp. 705–716. isbn: 978-3-031-34048-2.
Vaswani, Ashish et al. (2023). Attention Is All You Need. arXiv: 1706.03762 [cs.CL].
Kalkhof, John, Arlene Kühn, et al. (2024). Frequency-Time Diffusion with Neural Cellular Automata. arXiv: 2401.06291 [cs.CV].
A Wes Andersonian Reading of Explainable AI:
A Review of Asteroid City by Wes Anderson.
by Eduard Saakashvili
First, a scene from the movie Asteroid City: at a science fair in the titular desert town, Jeffrey Wright’s general, General Grif Gibson, addresses a group of aspiring astronomers, their parents, and Boy Scouts (among others). He thunders: “If you wanted to lead a nice, quiet, peaceful life, you picked the wrong time to be born.” Subsequent events, namely an alien visitation, fulfill this threat.
Once the alien arrives, the US government responds with force: the film’s characters are quarantined under force of arms. But they cannot help but wonder and process, each in their own way, what the alien’s arrival means. A teacher tells her students, “There are still nine planets!” A defiant pupil retorts: “Except now, there’s a alien.” [sic]
Another boy breaks into song:
Dear alien, who art in heaven, Lean and skinny, ‘bout six foot seven: Though we know ye ain’t our brother: Are you friend or foe? (Or other?)
Now, a scene from real life: six months before Asteroid City’s release, OpenAI launches the first version of ChatGPT, a chat-agent instantiation of the already-released GPT-3 language model. This variant catches fire with the general public; people are disturbed, excited, or both at the same time. Few feel indifferent. Some change their career plans. Others declare the singularity imminent.
However, in the intellectual establishments
of the Western world, a large portion of commentators adopt a smug, nothing-new-under-the-sun voice. Some label these models as “stochastic parrots”, a mocking reference to what they think these systems are doing: just a remix of their inputs using statistical tricks. Another word for this posture is denial. “There are still nine planets.”
There are parallels here. I am not saying Asteroid City is intentionally about AI. But I think we can use Wes Anderson’s film, and his approach to filmmaking in general, to understand the modern human response to AI and, moreover, how our intellectual cultures make meaning of what Murray Shanahan has called the “exotic, mind-like entities” that have appeared in our midst.
Everyone knows Wes Anderson’s penchant for symmetry and his association with a (now much-parodied) visual vocabulary that spawned a thousand coffee-table books. These are his filmography’s superficial features. At a more substantive level, Anderson’s approach to depicting the world reflects an understanding of reality based on stylized abstraction. One gets the sense that, rather than depicting something as it happened, Anderson constructs models of what happened: dioramas that reduce things to their main constituent parts, with a priority put on style. One could roughly call it a kind of Cartesian approach: abstract away noise, chaos, and dirt; rebuild the world with an elegant scheme of mathematical precision and, correspondingly, beauty. Anderson seems to say: all films simplify the world, so why not lean into it and turn that simplification into a fully realized art?
Machines play a leading role in how Anderson depicts this world; his films are often set in periods when machines were making inroads. He uses machines to take his
simplification schema to the extreme. Take the cocktail-maker from Asteroid City: when we see Anderson’s Martini machine, it is immediately comprehensible: one part squeezes the lemon, another shakes the drink and adds ice, another pours. The machine has been reduced to human-intelligible actions. Anderson’s machines are like toys that work: simplified representations and functioning equivalents of the real thing. We all, to various extents, live in this kind of mental world. We use heuristics to explain the unexplainable. We reduce machines into parts, or make sense of them in terms of other machines or even anthropomorphic metaphors.
And the rest is just irrelevant detail for most purposes. Wes Anderson’s machines (and his narratives) are satisfying because they put everything you need to understand them on the surface and point giant arrows at them; the satisfaction of simplification is spoon-fed to the viewer.
In short, Anderson’s machines, like his film worlds, are reductive surrogates of the real thing: coherent, simplistic, schematic equivalents that abstract away the dirt and incoherence of human experience.
It used to be like this: AI systems were composed of a straightforward set of rules that didn’t need explaining, so the map was the territory. The infamously convincing ELIZA chatbot from the 1960s can be reduced to fairly simple pseudocode. These things made inherent sense. Even more complex models, like expert systems or decision trees, could be likened to long lists of rules; in a Wes Anderson world, their avatar might be a vaguely Austro-Hungarian clerk poring over a comically large ledger while wielding a mechanical calculator.
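Roughly, the sort of rule list that powered it can be caricatured in a few lines (a toy illustration, not Weizenbaum’s actual script):

```python
# A toy, ELIZA-flavoured rule list: match a keyword, echo a canned template.
RULES = [
    ("mother", "Tell me more about your family."),
    ("i feel", "Why do you feel that way?"),
    ("because", "Is that the real reason?"),
]

def reply(utterance: str) -> str:
    text = utterance.lower()
    for keyword, response in RULES:
        if keyword in text:
            return response
    return "Please, go on."   # default deflection

print(reply("I feel nobody listens to me"))
```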
However, as models grow not just more
complex but ontologically move from lists of rules to nonlinear neural nets, the representation of the model and the actual model start to diverge dramatically. You cannot draw a modern neural net. Its operations do not make schematic sense to human thought. The map and the territory look nothing alike. How do we understand such a model?
The growing need to bridge the map and the territory has spawned the field of Explainable AI (XAI). There is something very Wes Anderson-ish about this approach, and examining XAI helps us both understand the appeal of Wes Anderson’s simplification filmmaking and the appeal of similar moves in AI research.
Let’s say our model involves a thousand parameters, nonlinear functions, and multiple layers of computation. How do we explain what it does? Among the answers is an intriguing approach: surrogate models. These look at the model’s behavior and create a second, much simpler model that approximates that behavior. That simple model can then somehow “explain” the original. Of course, we are treating the original as a black box and imitating its superficial features: our explanation roughly correlates with the original but bears no ontological relationship to it. This is called a post-hoc explanation.
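As a concrete, simplified illustration of a global post-hoc surrogate, here is a sketch using scikit-learn; the black-box model, the synthetic data, and the tree depth are all illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# The "complicated AI model": we only ever query its inputs and outputs.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X, y)

# The surrogate: a shallow tree trained to imitate the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = surrogate.score(X, black_box.predict(X))   # agreement with the black box
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate))                         # the human-readable "explanation"
```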
“ ... in some situations, a satisfying explanation is held up as the goal rather than a true explanation”
What makes such an explanation a good one?
Since XAI has no established paradigm, criteria abound; XAI practitioners often base their methods not on faithfully explaining a model but on the subjective judgments of the people seeing the explanation. In other words, in some situations, a satisfying explanation is held up as the goal rather than a true explanation. This emphasis on psychological closure is also a driving force of Wes Anderson’s filmmaking and provides one of the main pleasures of watching his films. Could a modern, connectionist AI fit into a Wes Anderson film? Certainly not. It could not be depicted. In fact, Asteroid City visibly struggles to depict even an organic, non-artificial alien coherently.
Note that in this flowchart the surrogate model is in no way connected to the “complicated AI model” – it only has access to its inputs and outputs.
“There is no “toy version” of GPT-4.”
When the alien arrives, Anderson’s models break down: the alien’s descent is shown in an upward-facing shot of the sky, something rarely seen in the Euclidean film. The alien is clearly not of this world. While all characters are played by human actors, the alien is a stop-motion animation moving in a completely different rhythm, without any attempt to make him look “realistic”. We never properly see his ship. Basically, the alien makes no sense. The descent is like the arrival of a visitor in the novel Flatland: that
19th-century classic depicts a two-dimensional world into which a three-dimensional visitor arrives, throwing the flat society into chaos.
The difficulty in depicting the alien is not a failure of the movie; rather, this formal break is a dramatization and visual enactment of the fact that some realities simply do not accommodate the elegant simplicity of Wes Anderson’s dioramas. Similarly, modern AI is not some Steampunk machine that can be cleanly broken into constituent parts. There is no “toy version” of GPT-4.
Image: Focus Features
Yet everywhere you look, intellectuals crave Wes Andersonian metaphors: “stochastic parrot”, “merely a statistical model”, “mad libs”, “monkeys at typewriters”, “echo chamber”, “glorified autocomplete”. This need for physical metaphors is the desire for a world that is reducible to toy models.
Wes Anderson knows this craving for simplification and the satisfaction it brings; this may account for much of his popularity. However, Asteroid City is also aware of the limitations of this approach. It is the very transparency and theatricality of its rural setting that makes the alien all the more alien.
The film’s characters ask, in various ways, if the alien is “friend”, “foe”, or “other”. A friend or a foe is someone that exists and makes sense in our world. An “other” is beyond it.
The anxiety this “other” induces is the film’s central theme. It is this anxiety that haunts us, too. The need to “abstract away” what is new, and perhaps frightening, about an application like ChatGPT fuels the desire to lean on old metaphors and previous ways of seeing. When it comes to engaging with AI,
we may prefer a friend to a foe, but we may yet prefer a foe to an “other”.
In this way, Asteroid City is a warning: a two-dimensional world cannot contain a three-dimensional entity. How do we bridge the gap? Asteroid City’s answer seems to point back to itself: to art. The film’s characters may be stuck in Flatland, but we are looking down from beyond, this time from our movie theater seats. In dramatizing this higher-dimensional intrusion from the outside, the film catapults the audience beyond the flat.
We may not be able to satisfyingly explain a transformer or a neural net to the lay public, or even to ourselves. But when we use art to dramatize an encounter with the unexplainable, we get a chance to stand still and look at it, or even just to look at our own incomprehension instead of explaining it away. To contemplate ignorance and true novelty, rather than drown them in ham-fisted metaphors, might just be the beginning of an understanding.
In this way, sometimes art can go where science can’t.
“I don’t play him as an alien, actually I play him as a metaphor”, says the stage actor playing the alien in this behind-the-curtain scene from Asteroid City.
The Future of AI
by Koen Willemen
Legal Disclaimers.
Beyond Code Magazine is not responsible for the content of user-submitted materials. Contributors are solely responsible for the content they submit and any legal consequences arising from it.
While every effort has been made to ensure the accuracy of the information in this publication, Beyond Code. Magazine assumes no responsibility for errors or omissions.
The opinions expressed in this magazine are those of the authors and do not necessarily reflect the views of Beyond Code. Magazine or its editors. The information provided is for general informational purposes only and should not be considered legal, professional, or financial advice.
Images used in this magazine are royalty-free and sourced from Unsplash, Pexels, Pixabay, Canva, FreePik.com and betterimagesofai.org, unless otherwise stated. All images are used in accordance with their respective licenses and are believed to be royalty-free at the time of publication. If you believe any image has been used incorrectly, please contact us at beyondcodemagazine@gmail.com.