Mutopia
The journey from 2021 to 2022
Machine Learning
Posthumanism
— Anne Galloway
What if we deny that human beings are exceptional? What if we stop speaking and listening only to ourselves?
Speculative Design
— Anthony Dunne, Fiona Raby
We need more pluralism in design, not of style but of ideology and values.
— Patrick Hebron
Working with AI is a lot less like working with another human and a lot more like working with some kind of weird force of nature.
Today, technology becomes part of ourselves as an extension of the human body.
It’s no longer relevant to distinguish humans from non-human entities.
As cognification is becoming the predictable future...
Creativity isn’t exclusively human.
What’s the potential of collaborating with artificial intelligence?
What’s the possible future for creativity?
CONTENTS

Chapter 1: Thesis project
Machine learning experiment / 24
Identity design / 52
Exhibition / 68
Framework / 86

Chapter 2: Process & development
Big Data, big design / 94
Identity development / 114
Machine learning experiment process / 122
Installation development / 138

Chapter 3: Research & strategy
Are we already transhuman? / 155
Why is posthumanism relevant? / 173
Post, post, post / 197
Rethinking progress / 207
Design, fiction, and social dreaming / 225
From human-centered to posthuman-centered / 243
Tools are an extension of the human body / 251

Chapter 4: Citations & biography
Citations / 281
Biography / 283
Against the backdrop of rapid technological advancement, the boundaries between technology and humans are dissolving. It’s time for us to rethink our relationship with synthetic intelligence. By collaborating with machines, designers are empowered to transcend creative limitations by leveraging the potential of machine learning as a customizable tool to visualize the unseen.
Thesis Statement
Against the backdrop of rapid technological advancement, the biosphere and technosphere are merging into one new hybrid body, with technology becoming part of ourselves as an extension of the human body. The distinction between natural and synthetic is becoming undefinable. As it becomes less relevant to separate the “human” realm from the “non-human,” what will our relationship with synthetic intelligence look like? And how does this emerging relationship impact the future of creativity?

My interest in posthumanism and passion for mythical bestiaries led me to explore the creation of new hybrid creatures. I see this as an opportunity to surpass my imaginative limitations with the help of AI, and as a way to critique human-centrism and normative notions of humanness. Posthumanists challenge the idea of human-centrism and propose expanding the notion of the human from something completely closed, already formed, and static into an open and fluid concept. The humanistic sense of the “human” and its instrumentalized view of the “nonhuman” as a tool confines us within the narrow limits of human intelligence. As multispecies anthropologist Anne Galloway writes: “What if we deny that human beings are exceptional? What if we stop speaking and listening only to ourselves?” In light of the rapid evolution of new forms of artificial and synergetic life, rather than alienating it, how can we embody it and flourish as new forms of beings?

As Kevin Kelly says, everything that’s already been electrified will also be cognified, and the cognification of things can be viewed similarly to the electrification of things that took place during the industrial revolution. As human society is highly connected and things tend to share similar patterns, we can assume the same will happen in the creative field. From bare hands, to pens and brushes, to design software, to generative programming tools, it’s reasonable to think that cognified tools will be the next step for creativity.

The goal of my thesis project is to explore the creative potential of working with machines from the perspectives of posthumanism and speculative design, and to provide a framework for graphic designers to reference as they explore this new way of creating. The non-deterministic aspects of machine learning can bring unexpected results that lead to new discoveries. Standing on the shoulders of ML allows designers to exceed our imaginative limitations, and these unique potentialities and serendipities can greatly increase the depth and texture of our work. One of artificial intelligence’s strengths is its capacity to be customized to a specific individual’s needs: rather than forcing us to fit the architecture of pre-existing software, collaborating with AI opens up new possibilities for creative expression. Moreover, human perception is often biased, selective, and malleable; our desires can affect what we see by shaping the way we process visual information (a psychological phenomenon called motivated perception). Designers’ inability to be objective can be harmful to the larger society, but ML’s tendency to reinforce the biases in its datasets can surface things we were unable to notice, helping us re-examine them and move toward a better position.

Graphic design is highly responsive to social, cultural, and commercial issues. As graphic designers, we constantly need to learn new things and adapt to an ever-changing world in which media and modes of communication shift rapidly. All prepackaged software has an inner structure that can constrain our imagination and creativity. Designers sometimes pursue “effects” over design, grow used to working within the realm of a given piece of software, and lose the ability to think beyond its capabilities. It would be a shame to see future design so closely associated with the prevailing aesthetic of the tools we are currently using. The old economy pursued consistency and efficiency for the sake of mass production and profit. As technology keeps progressing and individuality becomes scarce, maybe it’s time for us to express our uniqueness and embrace these individualized modes of making. Instead of trying so hard to train AI to mimic humans, it seems wiser to consider it an existence completely different from us. As Patrick Hebron comments, “The world is full of human thinkers. And if we want human thinking, we should probably go to humans for it.”

My thesis includes a series of experiments with machine learning. I started by training on the word ‘mutant’, using 300 fonts from my local laptop and ArtCenter Archetype Press. Then I moved to image creation: 1849 images of animal skeletons from different species were collected for training. Later, inspired by evolution theory and speculative design, I became fascinated by the idea of using machine learning to visualize the unknown common ancestor of two species. My project has no intention of predicting the real past (which would be illogical from a biological standpoint); the idea is to encourage people to open up larger possibilities for design beyond functionality (problem solving) and aesthetics (styling). In this experiment, multiple datasets were curated, including fungi, butterflies, and snails. The fungi model was trained first; then the new hybrids were cultivated by training a new dataset on the finished fungi model. Around 3600 images were used for this training. Based on my experience with machine learning, I attempted to summarize the workflow in a diagram to help future graphic designers better understand how to use ML and to frame the designer’s role in the process.

Machines are not perfect and might never be perfect. It is easy to fall into the trap of the superficial aesthetic tropes commonly associated with technology. Both human intelligence and artificial intelligence are equally important. It’s crucial to have methodologies for understanding the conceptual models we work with, so we can be more involved in the process and have more control over the result. The biggest challenge of working with machines is balancing the relationship between unexpected results and predictable outcomes. Although machine learning is not the absolute truth, it provides a new way of seeing and thinking. We are given the chance to see ourselves as participants in a more complex human and nonhuman system and are empowered to work in a more flexible and individualized way. This is not only valuable for creativity, but also helps us rethink our complex relationship with nonhuman entities economically, ecologically, and morally. In the future, I’d like to provide a contextual framework for working with ML and to develop it into a system and repository where people can share their experiences and experiments. Since the realm of ML is full of new possibilities and unknown discoveries, it would be meaningful to have a platform where creators can collaborate in the expansion of this emerging area of practice.
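To make the first experiment concrete, here is a minimal, hypothetical sketch of how a type dataset like the ‘mutant’ one could be assembled, assuming Pillow and a local folder of .ttf/.otf font files; the folder names and image size are illustrative, not the settings actually used in the project.

# Hypothetical sketch: rendering the word "mutant" in every available font
# to build a small image dataset for training. Assumes Pillow is installed
# and that font files (.ttf/.otf) sit in a local "fonts" folder.
from pathlib import Path
from PIL import Image, ImageDraw, ImageFont

WORD = "mutant"
OUT_DIR = Path("dataset_type")
OUT_DIR.mkdir(exist_ok=True)

for i, font_path in enumerate(sorted(Path("fonts").glob("*.[to]tf"))):
    try:
        font = ImageFont.truetype(str(font_path), size=200)
    except OSError:
        continue  # skip fonts Pillow cannot parse
    canvas = Image.new("L", (1024, 512), color=255)  # white grayscale canvas
    draw = ImageDraw.Draw(canvas)
    # center the word using the bounding box of the rendered text
    left, top, right, bottom = draw.textbbox((0, 0), WORD, font=font)
    x = (canvas.width - (right - left)) / 2 - left
    y = (canvas.height - (bottom - top)) / 2 - top
    draw.text((x, y), WORD, fill=0, font=font)
    canvas.save(OUT_DIR / f"mutant_{i:03d}.png")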
[1] Armstrong, H., & Dixon, K. D. (2021). Big Data, big design: Why designers should care about artificial intelligence. Princeton Architectural Press.
[2] Dunne, A., & Raby, F. (2014). Speculative everything: Design, fiction, and social dreaming. MIT Press.
[3] Kelly, K. (2017). The inevitable: Understanding the 12 technological forces that will shape our future. Penguin Books.
[4] Jain, A. (2021, June 10). Calling for a more-than-human politics. Medium. Retrieved March 23, 2022, from https://medium.com/@anabjain/calling-for-a-morethan-human-politics-f558b57983e6
2022 Spring
chapter 1
THESIS PROJECT
Machine learning experiment
Everything that’s already been electrified will also be cognified and the cognification of things can be viewed similarly to the electrification of things that took place during the industrial revolution.
—Kevin Kelly
Process overview
1. Type: the word ‘mutant’, 3000 training steps
Dataset: 300 fonts from my local laptop and ArtCenter Archetype Press
Feedback: the dataset is too small and lacks diversity

2. Image: skeletons, 3000 training steps
Dataset: 1849 images of animal skeletons from different species
Feedback: the dataset images are too different

3. Image: skeletons + fungi, 3000 training steps
Dataset: 1706 images of fungi plus 1849 images of animal skeletons
Feedback: the dataset images have too many features

4. Image: fungi, 3000 training steps
Dataset: 1706 images of fungi

5. Image: snail + fungi, 1500 training steps
Dataset: 1706 images of fungi plus 745 images of snails

6. Image: fungi + butterfly, 1500 training steps
Dataset: 1706 images of fungi plus 667 images of butterflies
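The feedback notes above are about the datasets themselves, so one practical step sits before any training: making the collected images uniform. Below is a minimal, hypothetical sketch of that curation step using Pillow; the folder names and the 512-pixel target are assumptions, not the project’s actual settings.

# Hypothetical dataset-curation sketch: center-crop and resize every collected
# image to the same square size so the training set is more uniform.
from pathlib import Path
from PIL import Image

SRC, DST, SIZE = Path("raw_images"), Path("dataset_images"), 512
DST.mkdir(exist_ok=True)

for path in sorted(SRC.glob("*")):
    try:
        img = Image.open(path).convert("RGB")
    except OSError:
        continue  # skip files that are not readable images
    side = min(img.size)  # largest centered square that fits
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side)).resize((SIZE, SIZE))
    img.save(DST / (path.stem + ".png"))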
SPECULATIVE DESIGN
Design is used as a tool to create not only things but ideas, and a means of speculating about how things could be.

MACHINE LEARNING
Deep learning: learning the features of data and finding patterns.

EVOLUTION THEORY
All species are related and gradually change over time.
A way to visualize the common ancestor over one billion years ago.
What would their common ancestor look like?
Method

Step 1
Training the fungi dataset:
dataset 1 (fungi) → 3000 training steps → model 1 (fungi)

Step 2
Training a new dataset on the previous model (crossing):
model 1 + dataset 2 → 1500 training steps → model 2 (new hybrid)
Waiting for the right time to stop training.
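As a rough illustration of Step 2, here is a generic PyTorch-style sketch of loading the finished fungi model and continuing training on a second dataset. The model class, its training_step objective, and all paths are placeholders; the project itself may well have used a GAN toolchain rather than this bare loop.

# Generic fine-tuning sketch: resume from the fungi checkpoint (model 1) and
# keep training on a second dataset so the two feature sets blend (model 2).
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def fine_tune(model, ckpt_path, data_dir, steps=1500, lr=1e-4, device="cpu"):
    model.load_state_dict(torch.load(ckpt_path, map_location=device))  # model 1
    model.to(device).train()
    # data_dir must contain at least one subfolder of images (ImageFolder convention)
    loader = DataLoader(
        datasets.ImageFolder(data_dir, transforms.Compose([
            transforms.Resize((512, 512)), transforms.ToTensor()])),
        batch_size=8, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    step = 0
    while step < steps:  # "waiting for the right time to stop training"
        for images, _ in loader:
            if step >= steps:
                break
            loss = model.training_step(images.to(device))  # placeholder objective
            opt.zero_grad()
            loss.backward()
            opt.step()
            step += 1
    torch.save(model.state_dict(), "model2_hybrid.pt")  # model 2 (new hybrid)
    return model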
THE LATENT SPACE
A representation of compressed data.
A 512-dimensional space.
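As a small illustration, interpolating between two points in that 512-dimensional space is what produces the latent-space walking videos shown later. The sketch below assumes a trained generator function that maps a latent vector to an image; that function is a placeholder, not part of the project files.

# Minimal "latent space walk" sketch: interpolate between two 512-d latent
# vectors and decode each intermediate point into an image.
import numpy as np

def latent_walk(generator, z_start, z_end, n_frames=60):
    """Yield frames along a straight line between two latent vectors."""
    for t in np.linspace(0.0, 1.0, n_frames):
        z = (1.0 - t) * z_start + t * z_end  # linear interpolation
        yield generator(z)                   # decode one intermediate image

# usage: two random latent codes, assuming the model expects shape (512,)
z_a, z_b = np.random.randn(512), np.random.randn(512)
# frames = list(latent_walk(my_generator, z_a, z_b))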
Identity design
Logotype
Grid system
Graphic language
Exhibition
User scenario
State 1
State 2
size-changing pattern wrapping around 4 sides
logo across 4 sides
State 3
State 4
4 sides showing process from dataset to the scene
4 sides showing 4 different scenes
User scenario
B A C
D
92 inch
36 inch
State 3
A
Dataset images: part of the data images used for training.
B
Training results: part of the training results from my experiments.
C
Latent space walking video: a latent-space walk generated from 2 selected images.
D
Creatures in environment: an imaginary scene where hybrid creatures coexist and live together.
Framework
Relevance with graphic design
Visualizing the unknown
Beyond human perspectives

[Framework diagram] Data scientists and designers/artists handle data curation and data processing, then model creation. The designer feeds data into and selects models for the ML model; the machine learns and adapts and returns training results and iterations; the designers/artists input feedback and keep developing the work. Machine perspectives act as a source of inspiration alongside human creativity.
chapter 2
PROCESS & DEVELOPMENT
Reading material
Machine learning has already transformed the design profession. How do we use it ethically? Helen Armstrong | 2021
Why should a designer care about machine learning (ML)? Fair question, right? ML consists of algorithms—in essence a set of task-oriented mathematical instructions—that use statistical models to analyze patterns in existing data and then make predictions based upon the results. They use data to compute likely outcomes. But what do these algorithms and predictions have to do with you? The answer grows more self-evident by the day. Machine learning is everywhere and has already transformed the design profession. To be honest, it’s going to steamroll right over us unless we jump aboard and start pulling the levers and steering the train in a human, ethical, and intentional direction. Here’s another reason you should care: you can do amazing work by tapping the alien powers of nonhuman cognition. Machine learning has changed the way humans relate to machines by enabling them to communicate with tech via language, gesture, movement, emotion, etc. These same capabilities will enable designers to engage with creative tools in more intuitive ways, supplanting the mouse, trackpad, and touchscreen. Simply asking software to perform an action— rather than clicking and dragging through a menu to find the right tool—will, for example, allow designers to bypass hours of busywork, not to mention perusing dense tutorials. Perhaps the very concept of a “tool” will grow irrelevant. The more natural and personalized the interaction, the more creative software might feel like an extension of ourselves—and our individual creative approaches—rather than a separate clunky software package. Concept-i. Design and technology studio Tellart worked with Toyota’s Advanced Design team to create the user experience for this emotionally intelligent autonomous concept car. Through this work, Tellart delves into the warm relationships we might form with objects that can get to know us over time. Courtesy Princeton Architectural Press. Candy Hearts. Janelle Shane’s experiments playfully tease out all the ways algorithms can “get things wrong.” 99
She jumps unafraid into the algorithmic training process, in this instance training a neural network to generate candy heart messages. Shane has also trained algorithms to invent recipes, paint colors, pick-up lines, and cat names. Courtesy Princeton Architectural Press. AImoji. Process Studio (Martin Grödl, Moritz Resl) created these AI-generated emojis for the Vienna Biennale for Change 2019, using a Deep Convolutional Generative Adversarial Network (DCGAN). According to the studio, “with each AImoji, new, hitherto unknown ‘artificial’ emotions come to life that challenge us to interpret and interact with them.” Courtesy Princeton Architectural Press. Escape the Cubicle If design tools combine relational interaction with artificial intelligence’s (AI) growing awareness of context, designers could, at long last, escape desk and screen. We could design for the world as we stand in the world, creating while situated in augmented physical space. Silka Sietsma, Head of Emerging Design at Adobe, asserts, “We’re at the event horizon of a new era of spatial computing—a world where digital experiences mesh with physical reality. Immersive, 3D technologies like AR (augmented reality) and VR (virtual reality), along with voice and embedded sensors, are all converging into a new medium, powered by artificial intelligence.” It is within this new medium—this confluence of physical and digital, that our future design practice will evolve. How else will ML impact design practice? Patrick Hebron, author of Machine Learning for Designers, suggests that we consider the future in terms of “scaffolding complexity.” While reflecting on CAD systems and the future of creative tools, he points out: “These tools make it possible to conceive of systems that are too grand and complex for any one individual to
keep all of their big picture goals and specific details in mind simultaneously.” Hebron also notes, “The Florence Cathedral took about one hundred and forty years to go from initial conception to project completion. A much more complicated and recent building, the Burj Khalifa, took about five years.” Our tasks and aesthetic goals, he asserts, will continue to evolve as ML enables us to enter terrain that we could not even envision without AI. “Machine intelligence,” he explains, “will enable creatives to do even more and to think even bigger.” Return of the Centaur In such a vision, humans and intelligent machines work together to arrive at solutions unattainable by either alone. We can refer to this as a centaur: part human, part machine intelligence, one entity. Rather than automating away the designer, designers join forces with AI, augmenting their abilities with ML—a fusing of intelligences. Matt Jones of Google AI argues that to truly take advantage of AI, we must accept its alien nature. Alternative models of “intelligence” exist already in the natural world—specialized forms of cognition distributed across organisms, nerve cells, and root-fungi networks rather than centralized in a single human brain. Myriad recent books have raised popular awareness of these alternatives, such as Peter Godfrey-Smith’s book, Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness and Peter Wohlleben’s treatise, The Hidden Life of Trees. As posited by posthuman theorists like N. Katherine Hayles and Donna Haraway, we should expand our understanding to other forms of cognition as we coevolve with our tools. In essence, we must recognize that integrating ML into design practice will not feel like adding a supersmart fake human to our creative team but, instead, will be something else entirely. Like bacteria, trees, or earthworms, AI will think differently than we do. The Combo, Please Since the 1960s, we have imagined that AI will take over form-making, serving up a multitude of form variations from which a designer can simply choose—a fast forwarding of the design process. But, it turns out, the most powerful application of ML is not speeding up our process to arrive at the same kind of conclusions. The most powerful applications combine machine intelligence with human intelligence to take us along new paths entirely. Artificial intelligence researcher Janelle Shane puts it simply: “Working with AI is a lot less like working with another human and a lot more like working with some kind of weird force of nature.” This force of nature will tirelessly work toward exactly the goal that we give it, so we have to figure out the right goal. And we shouldn’t expect it—or want it—to solve the problem like a human. Shane points to a project by David Ha, a researcher at Google Brain, in which Ha asked an AI to assemble some parts into a robot to move from Point A to Point B. Rather than solving the problem by assembling a nimble robot, as Ha intended, the AI combined the parts into a tower that could just fall over and land on Point B. As Hebron comments, “The world is full of human thinkers. And if we want human thinking, we should probably go to humans for it. There are a lot of them.” If we don’t waste time trying to force AI to think like a human, we can arrive at Point B—and Points C, D, and E—in fresh, alien ways.
Working with AI is a lot less like working with another human and a lot more like working with some kind of weird force of nature.
Novel AI strategies, however, mean little without perspective and purpose. Humans do need to be part of the equation. Remember the human-AI chess teams that triumphed over solo humans or solo AI competitors? The confluence of human and machine is key. As Shane explains, “The AI has no understanding of the consequences.” Humans bring that understanding to the equation. We human designers must be there to frame the right problems—the problems that will move us toward future points that truly benefit humanity. The Future? The future is…fraught. Our profession stands on the cusp. Designers must strive to understand ML capabilities, so that we can engage with it as a design material and a creative force. If we do not, we will fall victim to it. We will create within the parameters that the technology sets for us, rather than the other way around. Through ML we have amazing potential to provide emotional insight to those on the autism spectrum, to reduce gender and racial bias in hiring and lending practices, to springboard creatives into unexpected, wicked problem-solving spaces. However, we can also do the exact opposite—exploit the vulnerable, bias the future by relying upon the past, replace humans by automating away skills that we want and need to maintain autonomy and agency, relegate essential choices to a technology that has no understanding of human consequence. Questions around AI and humanity have been hotly debated since at least the middle of the twentieth century. As design professor and historian Molly Wright Steenson points out, “If we understand that we’ve been asking these questions for a long time, we might have better expectations about how hard it is to find answers.” These are complex questions with wide implications. The terrain is tricky. The future is uncertain. Exciting? Yes. Terrifying? Yes. We have many critical choices ahead. Let’s take on those choices together, thoughtfully, one design at a time.
Lee Sedol
AlphaGo
Who’s afraid of machine learning? Helen Armstrong | 2021
To understand this technology, designers need to get into the weeds—just a little. There are three main types of machine-learning algorithms: supervised learning, unsupervised learning, and reinforcement learning. Let's start with supervised learning. SUPERVISED LEARNING Supervised learning relies upon a full set of labeled data. (Labeled data means data that has been tagged, effectively placing the data into categories. Think spreadsheets of information.) Algorithms examine this labeled data, learn from patterns in the data, and then make predictions. It's the most prevalent form of machine learning today; therefore, it's worth examining closely. Researchers would choose supervised learning to take on problems of classification or problems of regression. If we want the algorithms to predict the discrete category or "class" that new data will fall within, we would use classification. If we want the algorithms to predict outputs related to a real-valued number, we would use regression. If you are feeling overwhelmed with technical jargon right now, bear with me. I'll explain each of these. CORN SNAKE OR COPPERHEAD? Let’s start with a supervised learning strategy
that uses classification. Imagine that snakes begin to infest your neighborhood. Neighbors begin posting pictures online to determine whether the particular snake in their own yard is a copperhead—poisonous—or the similar-looking corn snake—harmless. Their kids play outside, and they need to know. You decide to design an app to solve this problem. To develop the app, you need a system for differentiating snake species. First, you gather training data: a set of snakes individually labeled as either a copperhead or a corn snake. This data composes our “ground truth.” We can also refer to these two labels as classes. Next, we need to determine features that the system can use to distinguish one class from the other. In this example, let’s identify two features: length and mass. (Note that this is only for the sake of our example. Do not actually use these features to classify snakes and risk the life of a kid.) A data scientist, developer, or designer might determine these specific features, or a team of experts in a particular domain might select the features and label the data. This is called annotation. In our example, we might bring in a team of expert herpetologists. The herpetologists would create a document with a set of rules for annotation. This process can be
straightforward, but it can also be quite controversial. Participants might disagree over which features to use or what dataset is most appropriate. Domain-specific experts might also be worried about sharing confidential knowledge about the data with the data science team running the ML algorithms. Let’s assume that our process goes smoothly. Once the features are selected, we clean the resulting labeled data, ridding it of errors, inconsistencies, missing data, and duplicates. We then randomize the order of the data and look for bias in the training examples. If there are too many copperheads used in the training data, more copperheads will subsequently be identified in the system, biasing it toward copperheads. We’ve seen this play out in numerous real-world examples when, for instance, an ML system has been trained using too many examples of one skin tone and not enough of others. We then set aside a portion of the data to use to evaluate our system later. Now the fun begins. We choose a learning algorithm and instruct that algorithm to build a model from the training set. In this example, we need to teach the system to separate the data into two different categories—or classes—of snake species, so utilizing a classification model makes sense. Note that this approach, like all ML, builds a statistical model. We can’t achieve 100 percent accuracy with any model, but we will work to select and tweak our model to be as accurate as possible. ML, like design, is iterative. To increase the predictive power, we will need to refine the model throughout the process. There are multiple strategies that a classification model could employ to achieve our goal, such as Decision Tree, K-Nearest Neighbor, and Naive Bayes. In this instance, let’s select a Decision Tree strategy. When we run this selected strategy, the learning algorithms will, in essence, create a model that draws a line through the data creating a boundary separation. Based upon their features, some of the snakes will fall under the class of copperhead and some will fall under the class of corn snake. The algorithms will then check the results against the training data for accuracy and redraw the line repeatedly to find the optimal boundary separation—the position in which the most snakes are accurately
classified under their respective snake species. Humans oversee this process, working to improve the results. As noted previously, the training data matters. We may need to add or delete training examples, particularly outliers, to increase accuracy. Or we may need to adjust the features. We may even try a different classification strategy, like K-Nearest-Neighbor or Naive Bayes. Each strategy will have its own advantages and trade-offs. When we feel confident about the results, we can run the model on the labeled data that we set aside earlier for the evaluation phase. Depending on the results, we can keep working on the model or accept the accuracy rate. Eventually, we should be able to run any copperhead or corn snake through this system and identify its species at a high rate of accuracy, not 100 percent, but a high rate. We call the computer’s decision that a certain snake is a copperhead or corn snake a prediction. Let’s step back and appreciate the beauty of the resulting system. We no longer need human judgment to determine the species of snake. And we don’t need to program manual rules. In addition, if we have built-in feedback, the algorithms can learn from each misclassification and continually improve the results once the system is in play. Phew, the kids—and the
STATISTICAL MODEL: a mathematical representation of data based on relationships observed within the data. OUTLIERS: atypical values within a dataset; values that lie outside the distribution pattern.
Labeled Data

Snake         Length   Mass
Copperhead    11.7     3.4
Copperhead    10.2     4.4
Copperhead    12.0     3.0
Corn Snake    12.8     5.0
Corn Snake    12.4     4.8
Corn Snake    11.9     4.5
corn snakes—are safe! In this example we used two classes—copperhead and corn snake—and two features—mass and length. Keep in mind that predictive algorithms can use many, many classes and hundreds if not thousands of features. REGRESSION Regression differs from classification because it allows us to explore values—and thus make predictions—in between and beyond discrete classes. Regression can do this because the output is numerical (or continuous). We wouldn’t pick a regression model for classifying snakes because, in our example, we want each snake to be identified as either one species or another. We would use regression in situations in which we want the system to identify values in between or beyond what we initially defined in the training data. Such values might include the future price of something, customer-satisfaction level, or the grades of a student. In each of these instances, the algorithms would use the relationship between the continuous number and some other variable(s) to make predictions. We used a classification strategy to organize input data (our snakes) into discrete classes. Using regression, we want the algorithms to predict a specific numerical value rather than a class. Models commonly used in regression include linear models, polynomial regression models, and, for more complex regression problems, neural networks. For example, we might use regression to predict the future price of a house depending upon the rate of local job growth, or the level of customer satisfaction depending upon wait time, or the grade of a student in relation to the hours spent working in a studio. Let’s envision the process of running a model to predict a student’s grade (a numerical value between 0 and 100). First we would label training examples of student grades, the dependent variable, paired with an independent variable, such as the amount of time physically spent weekly in the studio. Once we established some labeled examples, we could train the ML algorithms to predict the future grade of any student based on the independent variable—the amount of studio time. Now, think bigger. Just as with classification,
Classification: sorting items into categories
Regression: identifying real values (dollars, weight, etc.)
predictive algorithms could compute a problem much more complex than our simple example. Instead of one variable, in this case time spent in the studio, there could be many. Note that even though with regression the algorithms can make value predictions beyond those explicitly expressed in the training data (i.e., we don’t have to provide examples of every possible grade/hour combination), they are doing so based on labeled data and defined variables. When we get to unsupervised learning, this will change.
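To ground the two supervised-learning strategies just described, here is a minimal scikit-learn sketch, assuming the library is installed: a decision-tree classifier fitted on the toy snake table from earlier in this chapter, and a simple regression predicting a grade from weekly studio hours. The snake numbers come from the table; the grade/hour pairs are invented purely for illustration.

# Minimal supervised-learning sketch: classification and regression.
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

# classification: discrete classes (copperhead vs. corn snake)
X_snakes = [[11.7, 3.4], [10.2, 4.4], [12.0, 3.0],   # copperheads
            [12.8, 5.0], [12.4, 4.8], [11.9, 4.5]]   # corn snakes
y_snakes = ["copperhead"] * 3 + ["corn snake"] * 3
clf = DecisionTreeClassifier().fit(X_snakes, y_snakes)
print(clf.predict([[12.6, 4.9]]))   # predicts a class for an unseen snake

# regression: a continuous value (grade) from an independent variable (hours)
X_hours = [[2], [5], [8], [10], [12]]   # weekly studio hours (invented)
y_grade = [62, 71, 80, 88, 93]          # observed grades (invented)
reg = LinearRegression().fit(X_hours, y_grade)
print(reg.predict([[9]]))           # predicts a grade in between the examples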
DEPENDENT VARIABLE: a variable whose value depends on that of another. INDEPENDENT VARIABLE: the input for a process that is being analyzed (the feature).
SUPERVISED LEARNING: KEY TAKEAWAYS • requires labeled training data • has a clearly defined goal: “Computer, look for these particular patterns in the data so that you might predict x.” • the most prevalent form of ML today UNSUPERVISED LEARNING In unsupervised learning, an expert does not label the training data or provide the features/variables. Instead, the algorithms parse through input data looking for regularities or patterns that have not been prespecified. So, for example, instead of feeding our algorithms lots of images of snakes, already labeled as corn snake or copperhead, we might just give the system lots of snake images of both species and ask the
Unsupervised learning
Unsupervised learning algorithm
algorithms to look for patterns to differentiate between the two. The algorithms themselves would then identify variables to reveal patterns. We wouldn’t necessarily know how the algorithms selected the variables—and there could be thousands. We could even ask the algorithms to look for interesting patterns with less of a prescribed goal in mind. Supervised learning requires a supervisor—someone to label the data. Unsupervised learning doesn’t. Bypassing the supervisor can result in striking, unexpected outcomes. When might humans use unsupervised learning? We might select unsupervised learning in situations in which the outcome is unclear or the analysis process too complex for a human to determine key differentiating labels and variables. Such complex situations might include identifying a human face, fraud detection, or predicting consumer behavior. Unsupervised learning also plays a key role in deep learning—to be discussed later. When machine learning algorithms detect complex patterns using thousands of variables, humans sometimes struggle to understand how the machines arrived at their predictions. We call this the Black Box
Problem. This might not be a big deal when a targeted ad miscalculates our interests. However, if we were denied parole based on a machine prediction of recidivism, it would suddenly be a huge deal—particularly if no one could explain how the decision was made. In addition to successfully analyzing highly complex situations, unsupervised learning algorithms can cut down on the time and expense necessitated by labeled data. Remember that supervised learning algorithms require labeled data and identified features for training. Cheaper, more easily acquired, unlabeled data can be fed into unsupervised learning algorithms. All that unstructured, multimodal data—images, sounds, movement—referenced earlier becomes rich fodder for these systems. Common strategies in unsupervised learning include clustering and dimensionality reduction. Let’s look at clustering. CLUSTERING Using this strategy, algorithms compute clusters or groups within the input data, i.e., data points with similar features are grouped together and data points with dissimilar features apart. Popular clustering algorithms
UNSUPERVISED LEARNING: KEY TAKEAWAYS • can detect patterns in situations too complex for human analysis • does not require labeled data and/or identified features/variables • can arrive at unexpected insights
AGENT: the one that decides what action to take in response to rewards or punishments. ENVIRONMENT: the surroundings or conditions within which an agent takes action.
Two types of hierarchical clustering
Divisive
REINFORCEMENT LEARNING In both supervised and unsupervised learning, algorithms make predictions based on training data. We can think of this as historical data because it already exists in the world. Knowledge of the past dictates predictions of the future. We talked about some of the negative implications of this in chapter three. Here, let’s just consider the following: “Would you want to always be judged by your past behavior?” In contrast, reinforcement learning algorithms do not make predictions based on historical data. Instead, these algorithms build a prediction model on the fly by interacting with an environment using trial and error. Here’s how it works: an agent tries out possible actions within an environment. Each interaction with the environment produces insight—which acts as the input data. The agent
then uses this data to iteratively adjust their actions to achieve a specific goal: a reward. Think of a video game as an easy analogy for reinforcement learning. In such a game, players move through the game environment. By “playing,” or interacting within that environment, they learn which actions move them toward the goal of winning and which do not. They repeat actions that work, using the resulting insight to inform subsequent behavior. This form of ML is most akin to the way we humans might acquire knowledge. To learn to roller skate, we put on skates and try. Each time we fall, we learn something about staying upright. We achieve our goal—moving around on skates while maintaining our balance—by putting this new knowledge into action. In 2017, DeepMind employed reinforcement learning combined with deep convolutional networks, using a program called AlphaGo, to beat a human champion at the nuanced game of Go. Later the same year, they released a more advanced program called AlphaZero that mastered chess, shogi, and Go.
Agglomerative
include K-Means, Mean Shift, DBSCAN, Expectation-Maximization Clustering using Gaussian Mixture Models, and Agglomerative Hierarchical Clustering. A retail company might use clustering to segment their customers. They could provide existing customer data points as input—age, zip code, gender, purchase history, etc.—and then run clustering algorithms to establish new groupings. The algorithms might discover customer segments that a designer or marketer would not typically envision. Or the model might reveal outliers that could inspire niche markets. We could also use clustering to identify future trends like the professional turnover rate in a particular industry. Darktrace, a cybersecurity company, uses unsupervised learning to learn patterns of normal behavior in a system, and then looks for anomalies—suspicious behavior—that could be cyberattacks. Rather than rely upon labeled data built from accumulated knowledge of past security threats, the system can look for threats that companies haven’t yet identified. Clustering can reveal unexpected insights that push beyond the typical human perspective.
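As a tiny illustration of the customer-segmentation idea above, here is a minimal scikit-learn sketch of K-Means clustering. Every value in it is invented for the example, and the three “segments” emerge only from the data; they are never labeled in advance.

# Minimal unsupervised-learning sketch: K-Means clustering of unlabeled
# customer data points into three segments.
import numpy as np
from sklearn.cluster import KMeans

# each row: [age, yearly purchases, average order value] -- no labels given
customers = np.array([
    [22, 40, 15], [25, 38, 18], [23, 45, 14],     # frequent small orders
    [47,  6, 220], [52,  4, 260], [49,  5, 240],  # rare large orders
    [35, 18, 60], [38, 20, 55],                   # in-between behavior
])
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)                  # which cluster each customer fell into
print(kmeans.predict([[30, 25, 40]]))  # assign a new customer to a segment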
According to DeepMind, the style and complexity of each game determined the training time: nine hours for chess, twelve hours for shogi, and thirteen days for Go. Compare this to the years of intensive training that mastery requires of a human. Now imagine this technology applied to larger high-impact problems that require strategic action—global warming, for instance. Prior to this moment in 2017, according to Pedro Domingos, author of The Master Algorithm, “The supervised-learning people would make fun of the reinforcement-learning people.” DeepMind set everyone straight. Because reinforcement learning algorithms can interact with an unpredictable digital or physical environment, we often see them employed in gaming AIs, logistics, resource management, robotics, and autonomous car navigation systems. Researchers can conveniently run these algorithms through millions of virtual simulations before taking their products into the world. Computer scientist Mark Crowley, of the University of Waterloo, currently trains virtual fires, using reinforcement
learning so that he might predict the path of future wildfires. Uber refines their self-driving AI platform by putting it through thousands of virtual simulations, using predictive algorithms to play the self-driving AI against an equally intelligent environment. The training period of these algorithms can be lengthy—typically longer than that of supervised learning systems despite the breakneck speed of AlphaZero—but the ability to respond to environments in real time can trump that substantial training time. Common reinforcement learning strategies include Deep Deterministic Policy Gradient, Q-Learning, State-Action-Reward-State-Action, and Deep Q-Networks. Note that in each of these strategies, as in unsupervised learning, no supervisor oversees the process. Because there is no supervisor, the supervisor cannot introduce bias into the system. To be clear, this does not eliminate all possibilities for bias, but it does get rid of one common avenue. This lack of a supervisor also means that reinforcement learning can introduce alien strategies for “winning,” i.e., achieving the
Components of reinforcement learning: Agent, Environment, State, Action, Reward
If in the old view programmers were like gods, authoring the laws that govern computer systems, now they’re like parents or dog trainers. And as any parent or dog owner can tell you, that is a much more mysterious relationship to find yourself in.
reward, and then, in turn, teach these unusual strategies back to humans. In the previous example of AlphaGo and AlphaZero, Go champions studied the winning strategies employed by these programs and copied the tactics in their own subsequent games. Some of the tactics, however, were unusable because, try as they might, human players found them incomprehensible. With less overt bias and the introduction of mind-boggling new strategies, reinforcement learning is pulling in big research dollars right now. REINFORCEMENT LEARNING: KEY TAKEAWAYS • gains insight through trial and error • model interacts with an environment sequentially over time • eliminates supervisor bias • can introduce alien approaches that prove useful to humans CONCLUSION Although we considered each ML category separately, many researchers mix these
categories together in practice. Semi-supervised learning refers specifically to blends of supervised and unsupervised learning, but researchers can also use all three categories to achieve their goals. This process can be messy and complex, requiring a deep knowledge of mathematics and statistics and a big helping of intuition. Training a set of algorithms requires a different skill set than programming explicit logic-based instructions. This shift from programming to training produces a relationship with machines that is less clear cut and more difficult to control. Jason Tanz, the site director of Wired, explains, “If in the old view programmers were like gods, authoring the laws that govern computer systems, now they are like parents or dog trainers. And as any parent or dog owner can tell you, that is a much more mysterious relationship to find yourself in.” Designers thrive in this kind of liminal space. Working with data scientists, we can begin prototyping user experience and interface possibilities that map out such emerging relationships and possibilities between human and machine.
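For readers who want to see the trial-and-error loop from the reinforcement-learning section in code, here is a minimal tabular Q-learning sketch on a toy one-dimensional corridor. It illustrates the agent/environment/state/action/reward vocabulary only, not the deep networks behind AlphaGo or AlphaZero, and every number in it is an assumption chosen for the example.

# Minimal tabular Q-learning sketch: an agent learns, by trial and error,
# to walk from state 0 to the goal state in a tiny corridor.
import random

N_STATES, GOAL, ACTIONS = 6, 5, (-1, +1)   # move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # explore sometimes, otherwise exploit the best known action
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else -0.01  # reaching the goal is rewarded
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# after training, greedy actions lead straight to the goal
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])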
SEMI-SUPERVISED LEARNING: a training approach in machine learning that combines supervised and unsupervised methods. DEEP CONVOLUTIONAL NETWORK: a specific kind of deep neural network that has a convolution layer.
Identity development
(1)
(2)
(3)
(4)
(5)
(6)
(7)
(8)
(9)
( 10 )
( 11 )
( 12 )
( 13 )
( 14 )
( 15 )
( 16 )
( 17 )
( 18 )
( 19 )
( 20 )
( 21 )
( 22 )
( 23 )
( 24 )
( 25 )
( 26 )
( 27 )
( 28 )
( 29)
( 30 )
( 31 )
( 32 )
( 33 )
( 34 )
( 35 )
( 36 )
( 37 )
( 38 )
( 39 )
Machine learning experiment
Part of the dataset of 1849 animal skeleton images
Trained on a dataset of 1706 images, 3000 training steps
Part of the dataset of 1849 animal skeleton images
Training result, 3000 training steps
Part of the dataset of 1706 fungi images
Trained on a dataset of 596 images, 3000 training steps
Trained on a dataset of 1706 images, 3000 training steps
Method
dataset 1: skeletons
dataset 3: skeletons + fungi
dataset 2: fungi
Trained on a dataset of 1706 images, 3000 training steps
3000 steps training
result
Part of the butterfly dataset
Part of the snail dataset
Installation development
Installation development
Projection Testing
Adding the environment
Form development: inspiration
Form development
Content testing
Sequence study
Material testing
Logotype testing
2021 Fall
chapter 3
RESEARCH & STRATEGY
What does it mean to be human?
Intelligent / Universal values / Self-reflective / Self-awareness / Emotional / Reason / Independent thinking / Autonomy / Differences / Conscious / Be flawed and imperfect / Sensitive / Rational / Contradictory / Ethical / Human forms / Shared thinking patterns
Are we already transhuman?
APPLE WATCH: health tracking (lifespan), access information
WIRELESS EARBUDS: enhance communication, access information
CONTACT LENSES: improve eyesight
iPhone: enhance communication, access information...
Why is posthumanism relevant?
Posthumanism, Transhumanism, Antihumanism, Metahumanism, and New Materialisms Differences and Relations Francesca Ferrando | 2013
Abstract: "Posthuman" has become an umbrella term to refer to a variety of different movements and schools of thought, including philosophical, cultural, and critical posthumanism; transhumanism (in its variations of extropianism, liberal and democratic transhumanism, among others); the feminist approach of new materialisms; the heterogeneous landscape of antihumanism, metahumanism, metahumanities, and posthumanities. Such a generic and all-inclusive use of the term has created methodological and theoretical confusion between experts and non-experts alike. This essay will explore the differences between these movements, focusing in particular on the areas of signification shared by posthumanism and transhumanism. In presenting these two independent, yet related philosophies, posthumanism may prove a more comprehensive standpoint to reflect upon possible futures. Keywords: Posthumanism; transhumanism; antihumanism; metahumanism; new materialism; technology; future; posthuman; transhuman; Cyborg.
Introduction

In contemporary academic debate, “posthuman” has become a key term to cope with an urgency for the integral redefinition of the notion of the human, following the onto-epistemological as well as scientific and bio-technological developments of the twentieth and twenty-first centuries. The philosophical landscape, which has since developed, includes several movements and schools of thought. The label “posthuman” is often evoked in a generic and all-inclusive way, to indicate any of these different perspectives, creating methodological and theoretical confusion between experts and nonexperts alike. “Posthuman” has become an umbrella term to include (philosophical, cultural, and critical) posthumanism, transhumanism (in its variants as extropianism, liberal and democratic transhumanism, among other currents), new materialisms (a specific feminist development within the posthumanist frame), and the heterogeneous landscapes of antihumanism, posthumanities, and metahumanities. The most confused areas of signification are the ones shared by posthumanism and transhumanism. There are different reasons for such confusion. Both movements arose more specifically in the late Eighties and early Nineties,1 with interests around similar topics. They share a common perception of the human as a non-fixed and mutable condition, but they generally do not share the same roots and perspectives. Moreover, within the transhumanist debate, the concept of posthumanism itself is interpreted in a specific transhumanist way, which causes further confusion in the general understanding of the posthuman: for some transhumanists, human beings may eventually transform themselves so radically as to become posthuman, a condition expected to follow the current transhuman era. Such a take on the posthuman should not be confused with the post-anthropocentric and post-dualistic approach of (philosophical, cultural, and critical) posthumanism. This essay clarifies some of the differences between these two independent, yet related movements, and suggests that posthumanism, in its radical onto-existential re-signification of the notion of the human, may offer a more comprehensive approach.
(FIG 1) The Vitruvian Man is a drawing made by the Italian polymath Leonardo da Vinci in 1490.
Transhumanism

1.
I should clarify that both movements can be traced earlier than that. The closest reference to transhumanism as the current philosophical attitude can be found in Julian Huxley, “Transhumanism,” in Julian Huxley, New Bottles for New Wine: Essays, London: Chatto & Windus 1957, pp. 13-7. In postmodern literature, the terms “posthuman” and “posthumanism” first appeared in Ihab Habib Hassan, “Prometheus as Performer: Toward a Posthumanist Culture?,” The Georgia Review 31/4, pp. 830-50; and Ihab Habib Hassan, The Postmodern Turn: Essays in Postmodern Theory and Culture, Columbus, OH: Ohio State University Press, 1987.
2.
An international group of authors crafted the Transhumanist Declaration in 1998 which is now posted at http:// humanityplus.org/philosophy/transhumanist-declaration/. The first two of the eight preambles state: “(1) Humanity stands to be profoundly affected by science and technology in the future. We envision the possibility of broadening human potential by overcoming aging, cognitive shortcomings, involuntary suffering, and our confinement to planet Earth. (2) We believe that humanity’s potential is still mostly unrealized. There are possible scenarios that lead to wonderful and exceedingly worthwhile enhanced human conditions.” Last accessed November 14, 2013
3.
See Ronald Bailey, Liberation Biology: The Scientific and Moral Case for the Biotech Revolution, Amherst, NY: Prometheus, 2005.
4.
See James Hughes, Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future, Cambridge, MA: Westview Press, 2004. [Henceforth cited as CC]
5.
Max More, Principles of Extropy, Version 3:11, 2003, http://www.extropy.org/principles.htm. Last accessed November 14.
6.
James Hughes sees in the Transhumanist Declaration the moment when the legacy with the Enlightenment was explicitly affirmed: “With the Declaration transhumanists were embracing their continuity with the Enlightenment, with
The movement of transhumanism problematizes the current understanding of the human not necessarily through its past and present legacies, but through the possibilities inscribed within its possible biological and technological evolutions. Human enhancement is a crucial notion to the transhumanist reflection; the main keys to access such a goal are identified in science and technology, 2 in all of their variables, as existing, emerging and speculative frames—from regenerative medicine to nanotechnology, radical life extension, mind uploading and cryonics, among other fields. Distinctive currents coexist in transhumanism, such as: libertarian transhumanism, democratic transhumanism, and extropianism. Science and technology are the main assets of interest for each of these positions, but with different emphases. Libertarian transhumanism advocates free market as the best guarantor of the right to human enhancement. 3 Democratic transhumanism calls for an equal access to technological enhancements, which could otherwise be limited to certain socio-political classes and related to economic power, consequently encoding racial and sexual politics.4 The principles of extropianism have been delineated by its founder Max More as: perpetual progress, self-transformation, practical optimism, intelligent technology, open society (information and democracy), self-direction, and rational thinking. 5 The emphasis on notions such as rationality, progress and optimism is in line with the fact that, philosophically, transhumanism roots itself in the Enlightenment,6 and so it does not 177
(FIG 2) The Great Chain of Being: God, Angelic beings, Humanity, Animals, Plants, Minerals.
democracy and humanism” (CC 178). Similarly, Max More explains, “Like humanists, transhumanists favor reason, progress, and values centered on our well being rather than on an external religious authority. Transhumanists take humanism further by challenging human limits by means of science and technology combined with critical and creative thinking” (PE n.p.). [A considerable amount of transhumanist literature is published online, and so, like in this case, the specific page number of the references cannot be listed.]

7.
Bradley B. Onishi, “Information, Bodies, and Heidegger: Tracing Visions of the Posthuman,” Sophia 50/1 (2011), pp. 101-12.
8.
Rooted in Plato, Aristotle, and the Old Testament, the Great Chain of Being depicted a hierarchical structure of all matter and life (even in its hypothetical forms, such as angels and demons), starting from God. This model, with contextual differences and specificities continued in its Christian interpretation through the Middle Ages, the Renaissance, until the eighteenth century. One classic study on this subject is by Arthur O. Lovejoy, The Great Chain of Being: A Study of the History of an Idea, Cambridge, MA: Harvard University Press, 1936.
9.
Francesca Ferrando, “The Body,” in Post- and Transhumanism: An Introduction, eds. Robert Ranisch and Stefan L. Sorgner, Vol. 1 of Beyond Humanism: Trans- and Posthumanism, Frankfurt am Main: Peter Lang Publisher, forthcoming.
10.
See N. Katherine Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, Chicago, IL: University of Chicago Press 1999, p. 20: “The thirty million Americans who are plugged into the Internet increasingly engage in virtual experiences enacting a division between the material body that exists on one side of the screen and the computer simulacra that seem to create a space inside the screen. Yet for millions more, virtuality is not even a cloud on the horizon of their everyday worlds. Within a global context, the experience of virtuality becomes more exotic by several orders of magnitude. It is a useful corrective to remember that 70 percent of the world’s population has never made a telephone call.”
expropriate rational humanism. By taking humanism further, transhumanism can be defined as “ultra-humanism.” 7 This theoretical location weakens the transhumanist reflection, as argued anon. In the West, the human has been historically posed in a hierarchical scale to the non-human realm. Such a symbolic structure, based on a human exceptionalism well depicted in the Great Chain of Being, 8 has not only sustained the primacy of humans over nonhuman animals, but it has also (in)formed the human realm itself, with sexist, racist, classist, homophobic, and ethnocentric presumptions. In other words, not every human being has been considered as such: women, African-American descendents, gays and lesbians, differently-abled people, among others, have represented the margins to what would be considered human. For instance, in the case of chattel slavery, slaves were treated as personal property of an owner, to be bought and sold. And still, transhumanist reflections, in their “ultra-humanistic” endeavors, do not fully engage with a critical and historical account of the human, which is often presented in a generic and “fit-for-all” way.9 Furthermore, the transhumanist perseverance in recognizing science and technology as the main assets of reformulation of the human runs the risk of technoreductionism: technology becomes a hierarchical project, based on rational thought, driven towards progression. Considering that a large number of the world’s population is still occupied with mere survival, if the reflection on desirable futures was reduced to an overestimation of the technological kinship of the human revisited in its specific technical outcomes, such a preference would confine it to a classist and techno-centric movement.10 For these reasons, although offering inspiring views on the ongoing interaction between the biological and the technological realm, transhumanism is rooted within traditions of thought which pose unredeemable restrictions to its perspectives. Its reliance on technology and science should be investigated from a broader angle; a less centralized and more integrated approach would deeply enrich the debate. In this sense, posthumanism may offer a more suitable point of departure.
Posthumanist Technologies
If posthumanism and transhumanism share a common interest in technology, the ways in which they reflect upon this notion are structurally different. The historical and ontological dimension of technology is a crucial issue, when it comes to a proper understanding of the posthuman agenda; yet, posthumanism does not turn technology into its main focus, which would reduce its own theoretical attempt to a form of essentialism and techno-reductionism. Technology is neither the “other” to be feared and to rebel against (in a sort of neoluddite attitude),
(FIG 2) 1579 drawing of the Great Chain of Being from Didacus Valades, Rhetorica Christiana.
nor does it sustain the almost divine characteristics which some transhumanists attribute to it (for instance, by addressing technology as an external source which might guarantee humanity a place in post-biological futures). What transhumanism and posthumanism share is the notion of technogenesis.11 Technology is a trait of the human outfit. More than a functional tool for obtaining something (energy; more sophisticated technology; or even immortality), technology arrives at the posthumanist debate through the mediation of feminism, in particular, through Donna Haraway’s cyborg and her dismantling of strict dualisms and boundaries, 12 such as the one between human and non-human animals, biological organisms and machines, the physical and the nonphysical realm; and ultimately, the boundary between technology and the self. The non-separateness between the human and the techno realm shall be investigated not only as an anthropological 13 and paleontological issue,14 but also as an ontological one. Technology, within a posthumanist frame, can be gleaned through the work of Martin Heidegger, specifically in his essay “The Question Concerning Technology,” where he stated: “Technology is therefore no mere means. Technology is a way of revealing.”15 Posthumanism investigates technology precisely as a mode of revealing, thus reaccessing its ontological significance in a contemporary setting where technology has been mostly reduced to its technical endeavors. Additional relevant aspects to be mentioned in relation to posthumanism are the technologies of the self, as defined by Michel Foucault.16 The technologies of the self dismantle the separation self/others through a relational ontology,17 playing a substantial role in the process of existential revealing, and opening the debate to posthuman ethics and applied philosophy. Posthumanism is a praxis. The ways the futures are being conceived and imagined are not disconnected from their actual enactments: in the posthuman post-dualistic approach, the “what” is the “how.” For instance, posthumanism takes into account space migration but, in its post-modern and post-colonial roots, cannot support space colonization, a concept which is often found in transhumanist literature. This is a good example of how transhumanism and posthumanism may approach the same subject from different standpoints and theoretical legacies.
11.
See N. Katherine Hayles, “Wrestling with Transhumanism,” in H+: Transhumanism and its Critics, eds. Gregory R. Hansell, William Grassie, et al., Philadelphia, PA: Metanexus Institute 2011, pp. 215-26.
12.
Donna Haraway, “A Manifesto for Cyborgs: Science, Technology, and Socialist-Feminism in the 1980s,” in The Gendered Cyborg: A Reader, eds. Gill Kirkup, Linda Janes, Kath Woodward, and Fiona Hovenden, New York, London: Routledge 2000, pp. 50-7.
13.
See Arnold Gehlen, Man in the Age of Technology, trans. Patricia Lipscomb, New York: Columbia University Press, [1957] 1980.
14.
See André Leroi-Gourhan, L’Homme et la Matière, Paris: Albin Michel, 1943; also André Leroi-Gourhan, Gesture and Speech, trans. Anna Bostock Berger, Cambridge, MA: MIT Press, 1993
15.
Martin Heidegger, The Question Concerning Technology and Other Essays, trans. William Lovitt, New York: Harper Torchbooks [1953] 1977, p. 12.
16.
Michel Foucault introduced this notion in his later work. Shortly before his passing in 1984, he mentioned the idea of working on a book on the technologies of the self. In 1988, his essay “Technologies of the Self” was published posthumously, based on his seminar at the University of Vermont in 1982: Technologies of the Self: A Seminar with Michel Foucault, eds. Luther H. Martin, Huck Gutman, and Patrick H. Hutton, Amherst, MA: University of Massachusetts Press, 1988, pp. 16-49.
17.
See Karen Michelle Barad, Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning, Durham: Duke University Press, 2007.
(FIG 3) 100 Antitheses of Cyberfeminism.
18.
For a historical and theoretical account on cultural posthumanism see Judith Halberstam and Ira Livingston, eds., Posthuman Bodies, Bloomington: Indiana University Press 1995; Neil Badmington, ed., Posthumanism, New York: Palgrave 2000; Andy Miah, “Posthumanism in Cultural Theory,” in Medical Enhancement and Posthumanity, eds. Bert Gordijn and Ruth Chadwick, Berlin: Springer 2008, pp. 71-94.
Although the roots of posthumanism can already be traced in the first wave of postmodernism, the posthuman turn was fully enacted by feminist theorists in the Nineties, within the field of literary criticism—what would later be defined as critical posthumanism. Simultaneously, cultural studies also embraced it, producing a specific take which has been referred to as cultural posthumanism.18 By the end of the 1990s (critical and cultural) posthumanism developed into a more philosophically focused inquiry (now referred to as
19.
See Rosi Braidotti, The Posthuman, Cambridge, UK: Polity Press, 2013.
20.
Gianni Vattimo, The End of Modernity: Nihilism and Hermeneutics in Postmodern Culture, trans. Jon R. Snyder, Baltimore: The Johns Hopkins University Press, 1988.
21.
In every civilization, while new information is achieved, other information is lost, so that the lost information, once retrieved, becomes new again. Psychoanalyst Immanuel Velikovsky actually defined the human species as that species which constantly loses memory of its own origins. See his Mankind in Amnesia, New York: Doubleday, 1982. Furthermore, consider the parallels between Western scientific discoveries and traditional Eastern spiritual knowledge drawn, for instance, by physicist Fritjof Capra in his influential work The Tao of Physics: An Exploration of the Parallels between Modern Physics and Eastern Mysticism, Boston, MA: Shambhala, 1975.
22.
Francesca Ferrando, “Towards a Posthumanist Methodology: A Statement,” Frame. Journal For Literary Studies, 25/1 (2012), Utrecht University, pp. 9-18. See bell hooks, Feminist Theory: From Margin to Center, Boston, MA: South End Press, 1984.
23.
By the late 20th century, our time, a mythic time, we are all chimeras, theorized and fabricated hybrids of machine and organism.
By Donna Haraway, A Cyborg Manifesto.
philosophical posthumanism), in a comprehensive attempt to re-access each field of philosophical investigation through a newly gained awareness of the limits of previous anthropocentric and humanistic assumptions. Posthumanism is often defined as a post-humanism and a post-anthropocentrism:19 it is “post” to the concept of the human and to the historical occurrence of humanism, both based, as we have previously seen, on hierarchical social constructs and humancentric assumptions. Speciesism has turned into an integral aspect of the posthuman critical approach. The posthuman overcoming of human primacy, though, is not to be replaced with other types of primacies. Posthumanism can be seen as a post-exclusivism: an empirical philosophy of mediation which offers a reconciliation of existence in its broadest significations. Posthumanism does not employ any frontal dualism or antithesis, demystifying any ontological polarization through the postmodern practice of deconstruction. Not obsessed with proving the originality of its own proposal, posthumanism can also be seen as a post-exceptionalism. It implies an assimilation of the “dissolution of the new,” which Gianni Vattimo identified as a specific trait of the postmodern. 20 In order to postulate the “new,” the centre of the discourse has to be located, so that the question “New to what?” shall be answered. But the novelty of human thought is relative and situated: what is considered new in one society might be common knowledge in another. 21 Moreover, hegemonic perspectives do not explicitly acknowledge all the resistant standpoints which coexist within each specific cultural-historical paradigm, thus failing to recognize the discontinuities embedded in any discursive formation. What posthumanism puts at stake is not only the identity of the traditional centre of Western discourse—which has already been radically deconstructed by its own peripheries (feminist, critical race, queer, and postcolonial theorists, to name a few). Posthumanism is a post-centralizing, in the sense that it recognizes not one but many specific centers of interest; it dismisses the centrality of the centre in its singular form, both in its hegemonic and in its resistant modes. 22 Posthumanism might recognize centers of interest; its centers, though, are mutable, nomadic, ephemeral. Its perspectives have to be pluralistic, multilayered, and as comprehensive and inclusive as possible. As posthumanism attracts more attention and becomes mainstream, new challenges arise. For example, some thinkers are currently looking to embrace the “exotic” difference, such as the robot, the biotechnological chimeras, the alien, without having to deal with the differences embedded within the human realm, thus avoiding the studies developed from the human “margins,” such as feminism or critical race studies. 23 But posthumanism does not stand on a hierarchical system: there are no higher and lower degrees of alterity, when formulating a posthuman standpoint, so that the non-human differences are as compelling as the human ones. Posthumanism is a philosophy which provides a suitable
Posthumanism historically developed out of the feminist reflection in the Nineties.
Cyberfeminists focused on technology as a liberating force.
(FIG 4) A Cyberfeminist Manifesto for the 21st Century by VNS Matrix.
Cyberfeminism: Rooted in breaking the binary & reclaiming space
(FIG 5) Lynn Hershman Leeson, “Seduction”, 1985. Black and white photograph.
(FIG 6) Lynn Hershman Leeson, Giggling Machine, Self Portrait as Blonde, 1968, wax, wig, makeup, feathers, plexiglass, wood
Cyberfeminists of today practise one of feminism’s central tenets—to escape a patriarchal, universalist logic and be aware of the specifics and marginalities of people’s identities and experiences.
The cyborg is “a creature in a post-gender world”. Technology was redrawing the boundaries of identity.
(FIG 9) Lynn Hershman Leeson, Faces Water Woman, 2020, archival digital print
(FIG 7) Shu Lea Cheang, Brandon, Bigdoll interface, collaboration with Jordy Jones and Cherise Fong, 1998
The colonized and the enslaved, the marginalized and the non-citizen, the woman and the animal: all of them are made into Other than rational man.
way of departure to think in relational and multi-layered ways, expanding the focus to the non-human realm in postdualistic, post-hierarchical modes, thus allowing one to envision post-human futures which will radically stretch the boundaries of human imagination.
New Materialisms
New materialisms is another specific movement within the posthumanist theoretical scenario.24 Diana Coole and Samantha Frost point out: “the renewed critical materialisms are not synonymous with a revival of Marxism,”25 but, more literally, they re-inscribe matter as a process of materialization, in the feminist critical debate. Already traceable in the mid to late Nineties in the emphasis given to the body by corporeal feminism, 26 such a rediscovered feminist interest became more extensively matter-oriented by the first decade of the twenty-first century. New materialisms philosophically arose as a reaction to the representationalist and constructivist radicalizations of late postmodernity, which somehow lost track of the material realm. Such a loss postulated an inner dualism between what was perceived as manipulated by the act of observing and describing, as pursued by the observers, and an external reality, that would thus become unapproachable. 27 Even though the roots of new materialisms can be traced in postmodernism, new materialisms point out that the postmodern rejection of the dualism nature/culture resulted in a clear preference for its nurtural aspects. Such a preference produced a multiplication of genealogical accounts investigating the constructivist implications of any natural presumptions, 28 in what can be seen as a wave of radical constructivist feminist literature related to the major influence of Judith Butler’s groundbreaking works. 29 This literature exhibited an unbalanced result: if culture did not need to be bracketed, most certainly nature did. In an ironic tone, Karen Barad, one of the main theorists of new materialisms, implicitly referring to Butler’s book Bodies that Matter, 30 has stated: “Language matters. Discourse matters. Culture matters. There is an important sense in which the only thing that does not seem to matter anymore is matter.”31 New materialisms pose no division between language and matter: biology is culturally mediated as much as culture is materialistically constructed. New materialisms perceive matter as an ongoing process of materialization, elegantly reconciling science and critical theories: quantum physics with a poststructuralist and postmodern sensitivity. Matter is not viewed in any way as something static, fixed, or passive, waiting to be molded by some external force; rather, it is emphasized as “a process of materialization” (BM 9). Such a process, which is dynamic, shifting, inherently entangled, diffractional, and performative, does not hold any primacy over the materialization, nor can the materialization be reduced to its processual terms.
24.
The term was coined independently by Rosi Braidotti and Manuel De Landa in the mid-Nineties. See Rick Dolphijn and Iris van der Tuin, New Materialism: Interviews & Cartographies, Ann Arbor, MI: Open Humanities Press, 2012. For the problematization related to the use of the adjective “new” in this context, see Nina Lykke, “New Materialisms and their Discontents,” paper presented at Entanglements of New Materialism, Third New Materialism Conference, Linköping University, May 25-26, 2012.
25
Diana H. Coole and Samantha Frost, “Introducing the New Materialisms,” in New Materialisms: Ontology, Agency, and Politics, eds. Diana H. Coole and Samantha Frost, Durham, NC: Duke University Press 2010, pp. 1-45, here p. 30.
26
See Elizabeth A. Grosz, Volatile Bodies: Toward a Corporeal Feminism, Bloomington, IN: Indiana University Press, 1994. Furthermore, Vicky Kirby, Telling Flesh: The Substance of the Corporeal, New York: Routledge, 1997.
27
One of the proponents of this type of radical constructivism was philosopher Ernst Von Glasersfeld, who elaborated on his theory of knowing, among other texts, in Radical Constructivism: A way of Knowing and Learning, New York: Routledge Falmer, 1995.
28
For a critique of constructivism and representationalism from a posthumanist perspective, see John A. Smith & Chris Jenks, Qualitative Complexity: Ecology, Cognitive Processes and the Re-Emergence of Structures in Post-Humanist Social Theory, Oxon: Routledge 2006, pp. 47-60.
29
See Veronica Vasterling, “Butler’s Sophisticated Constructivism: A Critical Assessment,” Hypatia 14/3 (August 1999), pp. 17-38, here p. 17: “During the last decade, a new paradigm has emerged in feminist theory: radical constructivism. Judith Butler’s work is most closely linked to the new paradigm. On the basis of a creative appropriation of poststructuralist and psychoanalytical theory, Butler elaborates a new perspective on sex, gender and sexuality. A well-known expression of this new perspective is Butler’s thesis, in Bodies that Matter (1993) that not only gender but also the materiality of the (sexed) body is discursively constructed.”
30
Judith Butler, Bodies that Matter: On the Discursive Limits of Sex, New York: Routledge, 1993. [Henceforth cited as BM]
31
Karen Barad, “Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter,” Signs: Journal of Women in Culture and Society, 28/3 (2003), pp. 801-31, here p. 801.
Antihumanism, Metahumanism, Metahumanities, and Posthumanities
There are significant differences within the posthuman scenario, each leading to a specialized forum of discourse. If modern rationality, progress and free will are at the core of the transhumanist debate, a radical critique of these same presuppositions is the kernel of antihumanism, 32 a philosophical position which shares with posthumanism its roots in postmodernity, but differs in other aspects.33 The deconstruction of the notion of the human is central to antihumanism: this is one of its main points in common with posthumanism. However, a major distinction between the two movements is already embedded in their morphologies, specifically in their denotation of “post-” and “anti-.” Antihumanism fully acknowledges the consequences of the “death of Man,” as already asserted by some post-structuralist theorists, in particular by Michel Foucault.34 In contrast, posthumanism does not rely on any symbolic death: such an assumption would be based on the dualism dead/alive, while any strict form of dualism has been already challenged by posthumanism, in its post-dualistic process-ontological perspective. Posthumanism, after all, is aware of the fact that hierarchical humanistic presumptions cannot be easily dismissed or erased. In this respect, it is more in tune with Derrida’s deconstructive approach than with Foucault’s death of Man. 35 To complete a presentation of the posthuman scenario, metahumanism is a recent approach closely related to a Deleuzian legacy; 36 it emphasizes the body as a locus for amorphic re-significations, extended in kinetic relations as a body-network. It should not be confused with metahumanity, a term which appeared in the 1980s within comics narratives and role-playing games, 37 referring to superheroes and mutants, and it has since been employed specifically in the context of cultural studies. Lastly, the notion of posthumanities has been welcomed in academia to emphasize an internal shift (from the humanities to the posthumanities), extending the study of the human condition to the posthuman; furthermore, it may also refer to future generations of beings evolutionarily related to the human species.
32
It is important to note that Antihumanism is not a homogeneous movement. On this aspect, see Béatrice Han-Pile, “The ‘Death of Man’: Foucault and Anti-Humanism,” in Foucault and Philosophy, eds. Timothy O’Leary and Christopher Falzon, Malden, MA: Wiley-Blackwell, 2010.
33
Here, I will mostly focus on the philosophical current developed out of the Nietzschean-Foucauldian legacies. For an account on the antihumanist perspective rooted in Marxism and developed by philosophers such as Louis Althusser and György Lukács see Tony Davies, Humanism, New York, NY: Routledge 1997, pp. 57-69.
34
Michel Foucault, The Order of Things: An Archaeology of the Human Sciences, trans. Alan Sheridan, New York: Pantheon Books, 1971.
35
See Jacques Derrida, Of Grammatology, trans. Gayatri Chakravorty Spivak, Baltimore: Johns Hopkins University Press 1976.
36
Jaime del Val and Stefan Lorenz Sorgner, “A Metahumanist Manifesto,” The Agonist: A Nietzsche Circle Journal, IV/II (Fall 2011), http://www.nietzschecircle.com/agonist/2011_08/metahuman_manifesto.html, last accessed November 16, 2013.
37
The term “metahuman” was specifically utilized in the comics series released by publisher DC Comics (New York).
(FIG 5) Michel Foucault: “death left its old tragic heaven and became the lyrical core of man: his invisible truth, his visible secret.”
Conclusion
The posthuman discourse is an ongoing process of different standpoints and movements, which has flourished as a result of the contemporary attempt to redefine the human condition. Posthumanism, transhumanism, new materialisms, antihumanism, metahumanism, metahumanity and posthumanities offer significant ways to rethink possible existential outcomes. This essay clarifies some of the differences between these movements, and emphasizes the
39
Special thanks to Helmut Wautischer and Ellen Delahunty Roby for comments on earlier drafts of this essay.
similarities and discrepancies between transhumanism and posthumanism, two areas of reflection that are often confused with each other. Transhumanism offers a very rich debate on the impact of technological and scientific developments in the evolution of the human species; and still, it holds a humanistic and humancentric perspective which weakens its standpoint: it is a “Humanity Plus” movement, whose aim is to “elevate the human condition.” 38 On the contrary, speciesism has become an integral part of the posthumanist approach, formulated on a post-anthropocentric and post-humanistic episteme based on decentralized and non-hierarchical modes. Although posthumanism investigates the realms of science and technology, it does not recognize them as its main axes of reflection, nor does it limit itself to their technical endeavors, but it expands its reflection to the technologies of existence. Posthumanism (here understood as critical, cultural, and philosophical posthumanism, as well as new materialisms) seems appropriate to investigate the geological time of the Anthropocene. As the Anthropocene marks the extent of the impact of human activities on a planetary level, the posthuman focuses on de-centering the human from the primary focus of the discourse. In tune with antihumanism, posthumanism stresses the urgency for humans to become aware of belonging to an ecosystem which, when damaged, negatively affects the human condition as well. In such a framework, the human is not approached as an autonomous agent, but is located within an extensive system of relations. Humans are perceived as material nodes of becoming; such becomings operate as technologies of existence. The way humans inhabit this planet, what they eat, how they behave, what relations they entertain, creates the network of who and what they are: it is not a disembodied network, but (also) a material one, whose agency exceeds the political, social, and biological human realms, as new materialist thinkers sharply point out. In this expanded horizon, it becomes clear that any types of essentialism, reductionism, or intrinsic biases are limiting factors in approaching such multidimensional networks. Posthumanism keeps a critical and deconstructive standpoint informed by the acknowledgement of the past, while setting a comprehensive and generative perspective to sustain and nurture alternatives for the present and for the futures. Within the current philosophical environment, posthumanism offers a unique balance between agency, memory, and imagination, aiming to achieve harmonic legacies in the evolving ecology of interconnected existence.39
38.
The Humanity+ website (http://humanityplus.org), which is currently the main transhumanist online platform, states: “Humanity+ is dedicated to elevating the human condition. We aim to deeply influence a new generation of thinkers who dare to envision humanity’s next steps.”

We need to open up the meaning of the identity concept towards relations with a multiplicity, with others, in opposition to the idea of identity as something completely closed, already formed, and static. We are subjects under construction; we are always becoming something.
By Rosi Braidotti

In the traditions of ‘Western’ science and politics—the tradition of racist, male-dominant capitalism; the tradition of progress; the tradition of the appropriation of nature as resource for the productions of culture; the tradition of reproduction of the self from the reflections of the other—the relation between organism and machine has been a border war.
By Donna Haraway, A Cyborg Manifesto.
Post-modern

We can be posthuman now. “Posthuman” is an approach, a perspective. The posthuman is an open frame not only onto the future and the present, but also onto the past.

We are not posthuman yet; some of us may be posthuman in the near future. We used to be human; some of us are transhuman now.

Progress
Rationality
The Enlightenment
New Materialism
Posthuman
Antihumanism(s)
Metahumanism(s)
Humans / Transhuman
Human(s)
Post-humanism
Deconstruction of the human
Human is not one, but many...
We need to decentralize the human from the focus of the discourse.
Singularity
Cultural posthumanism
Libertarian transhumanism
Democratic transhumanism
Philosophical posthumanism
Extropian transhumanism
Anthropos (human)
If we keep dualism as an approach (social and technological) intact, we will have other forms of discrimination.
Problem: the dualistic creation of identity. We define who we are in separation from others.
Technology
Human
Post-dualism
Science
Human enhancement
Transhumanism(s)
How can we decentralize our location as a species, and also as individuals?
Post-anthropocentrism
Where did the notion of the human come from? When did the human become human?
Posthumanism(s)
Critical posthumanism
Donna Haraway
Ursula Le Guin
Rosi Braidotti
Cecilia Åsberg
Stacy Alaimo
Cornelia Sollfrank
David Pearce
Nick Bostrom
Ray Kurzweil
Francesca Ferrando
Bruce Sterling
Sandy Stone
Post! post! post!
Posthumanism Francesca Ferrando
Why
Posthumanism is a theoretical frame, as well as an empirical one, which can apply to any field of enquiry, starting from our location as a species, to the individual gaze. Posthumanism addresses the question «who am I?» in conjunction with other related questions, such as: «what am I?» and «where and when are we?» The existential aspects are not disjunct from political and spatiotemporal elements. On one side, such an approach has a festive element: the loneliness of the Western subject is lost in the recognition of the others as interconnected to the self. On the other side, the awareness of distributed agency in the evolving body of spacetime becomes infinitely resonant, as does each existential performance: there is no absolute «otherness»; we exist in a material net in which everything is actually connected and potentially intra-acting. Such an awareness generates theoretical as well as pragmatic considerations. In the 21st century the impact of anthropocentric habits on earth has become so massive that geologists are addressing the present era as the Anthropocene, in which human actions are seriously affecting the ecosystem. In the past, humans were not recognized as agents directly causing climate change. It is now common knowledge that the earth is collapsing under the massive quantity of non-recycled garbage produced daily, the high emission of atmospheric greenhouse gases and the level of pollution introduced into the natural environment. The way the majority of current human societies are performing
their material interaction in this world is based on anthropocentric premises, which are leading to a point of no return in ecological and sustainable terms. Since everything is connected, this damaged balance is also directly affecting human well-being: an example can be seen in the alarming global rise in cancer rates. Humanism may not be of help in changing such a direction; posthumanism, on the other hand, can be the turning point, by bringing to the discussion crucial notions such as speciesism, entanglement and non-human agency, among others.
Who
What about humans? Are they going to be advantaged by a posthuman approach? If you think that sexism, racism and any other form of discrimination are impediments in the realization of desirable futures, the answer is: yes. Posthumanism was born out of the feminist reflection and nurtured by the studies of the differences. The Seventies called for a revisitation of the notion of the human by acknowledging that, in the Western tradition, only a specific type of human had been recognized as such: he had to be male, white, Western, heterosexual, physically able, propertied and so on. Such a revisitation called for a recognition of all the «other» humans, who had been left out. Western hegemonic accounts had mostly been dualistic, based on opposites which can be exemplified in the classical pairs: nature/culture, female/male, black/white, gay/hetero etc. Such a dualistic approach reflected the historical praxis of war
Anzaldúa, Gloria 1987. Borderlands/La Frontera: The New Mestiza. San Francisco: Aunt Lute Books.
Barad, Karen 2007. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham et al.: Duke University Press.
Braidotti, Rosi 1994. Nomadic Subjects: Embodiment and Sexual Difference in Contemporary Feminist Theory. New York: Columbia University Press.
Braidotti, Rosi 2013. The Posthuman. Cambridge, UK et al.: Polity Press.
Butler, Judith 1999 [1990]. Gender Trouble: Feminism and the Subversion of Identity. New York et al.: Routledge.
Crenshaw, Kimberle 1989. «Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics». The University of Chicago Legal Forum 139–167.
Einstein, Albert 1905. «Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?» Annalen der Physik 18 (13): 639–643.
Ferrando, Francesca 2013. «Posthumanism, Transhumanism, Antihumanism, Metahumanism, and New Materialisms: Differences and Relations». Existenz 8 (2): 26–32.
(«us» against the «enemy») and the political habit of contrast, instead of emphasizing necessary social performances of survival, such as coexistence and symbiosis. In the Nineties, the feminist discourse focussed around the notion of borders, which was essential in determining such opposites: how to define what is nature and what is culture? What is female, what is male; what is black, what is white? The answer was: these realms cannot be clearly separated. The feminist discourse developed around notions such as la mestiza (Anzaldúa 1987), the nomadic subject (Braidotti 1994), the cyborg, as the hybrid which has no origin (Haraway 1985). Judith Butler (1990) focussed on the role of culture in the reiteration of constructed notions of nature. By the end of the Nineties, the next step was taken within the field of New Materialisms, which is a specific branch of
posthumanism: nature and culture were recognized as intrinsically entangled (Barad 2007). Donna Haraway thus spoke of «natureculture» (2003), to emphasize that the two terms are not discernible. The posthuman begins its reflection from this hybrid ontological acknowledgement, starting by revisiting the human realm: once we have recognized that the human is not one but many, what is the human anyway? What is the non-human?
When
Feminism is embedded in the genealogy of the posthuman. Posthumanism historically developed out of the feminist reflection in the Nineties. For instance, the key text which brought posthumanism to broad international attention is «How We Became Posthuman» (1999), written in a critical feminist tone by N. Katherine Hayles. Here, it is important to open a
A closed form of feminism, which does not take into account other forms of discriminations such as racism, ableism or ageism, structurally undermines its recognitional intent.
Racism: White & Colors
Exceptionalism: Extraordinary & Normal
Ableism: Ability & Disability
Ageism: Young & Old
Exclusivism: Same & Different
Modernism: Modern & Traditional
Monism: One & Others
Colonialism: Power & Powerless
Feminism: Male & Female
Speciesism: Humans & Animals
parenthesis. Posthumanism is often simplistically assimilated to a philosophical approach focussed on the latest developments of science and technology. This is due to the fact that the term «posthuman» is used as an umbrella term to include different movements and perspectives (Ferrando 2013). Two movements, in particular, are often confused: transhumanism and (critical, cultural and philosophical) posthumanism. Transhumanism recognizes science and technology as the main assets of reformulation of the notion of the human, and employs the notion of the «posthuman» to name an era in which such reformulations will have irredeemably impacted the evolution of the human, giving rise to the posthuman. Posthumanism, on the other hand, sees the posthuman as a condition which is already accessible, since we have never been human: «human» is a human concept, based on humanistic and anthropocentric premises. Going back to the relation between feminism and posthumanism: can a feminist lose sight of sexism by opening the lens to a posthuman sensitivity? Expanding the perspective by detecting other forms of discrimination in the constitution of social and material narratives does not mean losing the hard-won critical standpoint of feminism. Quite the opposite. As Crenshaw (1989) has clearly argued, feminism can only be intersectional: a closed form of feminism, which does not take into account other forms of discriminations such as racism, ableism or ageism, structurally undermines its recognitional intent.
Any form of discrimination is a potential carrier for any other forms of discrimination, and it is related to all forms of discrimination: sexism is not separated from speciesism, biocentrism and so on; thus, it cannot be approached in isolation. For instance, Braidotti notes how the trafficking of animals precedes the one of women: «Animals are also sold as exotic commodities and constitute the largest illegal trade in the world today, after drugs and arms, but ahead of women» (Braidotti 2013, 8). In this concrete case, speciesism and sexism are working along similar lines. All forms of bias come from a hierarchical social and cultural episteme, whose origin can be traced, in the West, to the symbolic structure of the Great Chain of Being (Scala Naturae). Rooted in Plato, Aristotle and the Old Testament, the Great Chain of Being depicted a hierarchical structure of all matter and life (even in its hypothetical forms, such as angels and demons), starting from God (Lovejoy 1936). Posthumanism deconstructs any ontological hierarchy; a multidimensional network depicts more closely what is at stake, even if there is no representational autonomy. The ways the physical realm is conceived and perceived are not separated from the ways it is becoming, since the projection of such forms is simultaneously affecting the sensitive interconnected vibrational body of spacetime.
Where
Existence is entangled, symbiotic, hybrid. There are no clearly defined borders which allow
Haraway, Donna 1985. «A Cyborg Manifesto: Science, Technology and Socialist-Feminism in the Late Twentieth Century». In: Donna Haraway 1991: Simians, Cyborgs, and Women: The Reinvention of Nature. New York: Routledge.
Haraway, Donna 2003. The Companion Species Manifesto. Chicago: Prickly Paradigm Press.
Hayles, N. Katherine 1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago et al.: The University of Chicago Press.
Heidegger, Martin 1977 [1953]. The Question Concerning Technology and Other Essays. Trans. William Lovitt. New York: Harper Torchbooks.
Nietzsche, Friedrich W. 1976 [1883–5]. «Thus Spoke Zarathustra». In: Friedrich W. Nietzsche: The Portable Nietzsche. Trans. Walter Kaufmann. New York: Penguin Books.
Lovejoy, Arthur O. 1936. The Great Chain of Being: A Study of the History of an Idea. Cambridge, MA: Harvard University Press.
The question, “What is it to be human?” is not just narcissistic, it involves a culpable obtuseness. It is rather like asking, “What is it to be white?” It connotes unearned privileges that have been used to dominate and exploit. But we usually don’t recognize this because our narcissism is so complete.
fixed notions of being. The way matter appears on the large scale might be misleading, if taken as its ultimate state: on a subatomic level, everything is in constant vibration. As famously demonstrated by Einstein (1905), matter and energy are equivalent. Energy is intrinsically relational, and matter is irreducible to a single determined entity; any reductionist approach has scientifically failed. From a physics perspective, anything which has mass and volume is considered matter: humans, for instance, are made out of matter, as well as robots. Let’s now go back to our initial question: «who am I?» We are material networks of relations, fluctuant becoming in symbiotic interaction with the «others», the environment, our surroundings; we are constant potentials. In Nietzschean terms: we are «a bridge» («Thus Spoke Zarathustra», 7). Human existence is related to any other form of existence; nothing, in this dimension, is completely autonomous or totally independent. In this sense, the field of epigenetics is significant, with its emphasis on the heritable changes in gene expression caused by mechanisms which are external to the underlying DNA sequence. Posthumanism approaches the potentials opened by biotechnology, nanotechnology, cybernetics, robotics and space migration, in an ontological way, through Heidegger: technology is «no mere means», but «a way of revealing» (1953:12). We can thus talk of technologies of existence. Posthumanism has to do with theoretical philosophy as well as with applied ethics. More extensively, posthumanism can be perceived as a path of knowledge, which may eventually turn into full awareness: we literally are what we eat, what we think, what we breathe, what and who we connect to. Currently, posthumanism seems the most open and sensitive critical frame to approach intellectual tasks, as well as daily practices of being. Since any existential performance has interconnected agency, posthumanism will add to your perspective as much as your perspective will add to the posthuman shift. More than an exchange («ex» comes from Latin, meaning «out»), it is an intra-change, a fluid entanglement of being, an expansion of material awareness, a fractal movement of energy which will have simultaneously affected your existence as well as the evolution of spacetime. This is why I think posthumanism is something you want to know about.
Rethink progress
For the majority of the 20th century, progress has been measured by increased speed and efficiency—faster, better, stronger—but the side effects have left us fatter, sadder and more exhausted. Our definition of success needs to be recalibrated.
How coloniality manifests in AI
Algorithmic discrimination and oppression: The ties between colonial racism and algorithmic discrimination are perhaps the most obvious: algorithms built to automate procedures and trained on data within a racially unjust society end up replicating those racist outcomes in their results. But much of the scholarship on this type of harm from AI focuses on examples in the US. Examining it in the context of coloniality allows for a global perspective: America isn’t the only place with social inequities. “There are always groups that are identified and subjected,” Isaac says. Beta testing: AI systems are sometimes tried out on more vulnerable groups before being implemented for “real” users. Cambridge Analytica, for example, beta-tested its algorithms on the 2015 Nigerian and 2017 Kenyan elections before using them in the US and UK. Studies later found that these experiments actively eroded social cohesion and disrupted the Kenyan election process. This kind of testing
echoes the British Empire’s historical treatment of its colonies as laboratories for new medicines and technologies. Ghost work: The phenomenon of ghost work, the invisible data labor required to support AI innovation, neatly extends the historical economic relationship between colonizer and colonized. Many former US and UK colonies—the Philippines, Kenya, and India—have become ghost-working hubs for US and UK companies. The countries’ cheap, English-speaking labor forces, which make them a natural fit for data work, exist because of their colonial histories. AI governance: The geopolitical power imbalances that the colonial era left behind also actively shape AI governance. This has played out in the recent rush to form global AI ethics guidelines: developing countries in Africa, Latin America, and Central Asia have been largely left out of the discussions, which has led some to refuse to participate in international data flow agreements.
The result: developed countries continue to disproportionately benefit from global norms shaped for their advantage, while developing countries continue to fall further behind. International social development: Finally, the same geopolitical power imbalances affect the way AI is used to assist developing countries. “AI for good” or “AI for sustainable development” initiatives are often paternalistic. They force developing countries to depend on existing AI systems rather than participate in creating new ones designed for their own context. The researchers note that these examples are not comprehensive, but they demonstrate how far-reaching colonial legacies are in global artificial intelligence development. They also tie together what seem like disparate problems under one unifying thesis. “It enables us a new grammar and vocabulary to talk about both why these issues matter and what we are going to do to think about and address these issues over the long run,” Isaac says.
Superintelligence: Paths, Dangers, Strategies, By Nick Bostrom
New Dark Age, By James Bridle
Technically Wrong, By Sara Wachter-Boettcher
From Apple to Anomaly, By Sarah Cook, Alona Pardo, and Trevor Paglen
Invisible Women: Data Bias in a World Designed for Men, By Caroline Criado Perez
The Vulnerable World Hypothesis, By Nick Bostrom
Algorithms of Oppression, By Safiya Umoja Noble
Data Feminism, By Catherine D’Ignazio and Lauren Klein
The Age of Surveillance Capitalism, By Shoshana Zuboff
Ways of seeing in the digital age
§ From Apple To Anomaly Given that Artificial Intelligence (AI) has until so recently been the stuff of science fiction, it is astonishing to see how quickly it has become part of the mundane fabric of our lives, whether we are curating our social media feeds or shouting at Alexa. As machines are trained ‘to see’ without human intervention, artist Trevor Paglen’s new commission at the Barbican Centre’s Curve explores how artificial intelligence technologies are being trained to categorise objects and people, highlighting the hidden prejudices and bias inherent in AI.
(FIG 1) Trevor Paglen, “From ‘Apple’ to ‘Anomaly’”. Barbican Curve, London. 2019
“Machine-seeing-for-machines is a ubiquitous phenomenon,” Paglen has commented, “encompassing everything from facial-recognition systems conducting automated biometric surveillance at airports to department stores intercepting customers’ mobile phone pings to create intricate maps of movements through the aisles. But all this seeing, all of these images, are essentially invisible to human eyes. These images aren’t meant for us; they’re meant to do things in the world; human eyes aren’t in the loop.” When entering the exhibition, the vast mosaic of 35,000 images displayed floor to ceiling along the entire length of the gallery’s curved wall creates a striking visual effect. Moving closer we can see that the installation brings together snapshot-sized photographs clustered around specific keywords. The words and images are sourced from ImageNet AI, a dataset created by US researchers a decade ago that consists of over 14 million images taken from Flickr and roughly 21,000 crowdsourced categories, which are used to train the AI systems that surround us.
It starts off promisingly enough, particularly around the first few words on the wall such as APPLE, LICHEN or SUN. However, the further we immerse ourselves into this avalanche of images, the more unsettling it becomes. We realise that images of INVESTORS mainly show white men, while CONVICT or BAD PEOPLE predominantly feature Black people. According to ImageNet AI, ARTIST MODELs are mainly half-naked Asian women, and ImageNet’s differentiation of what a WINE LOVER looks like in comparison to an ALCOHOLIC is highly questionable, since it mainly seems to rely on the depicted drink. Surprisingly, photographs of Barack Obama turn up in a remarkable number of categories, for example under POLITICIAN, OLIGARCH, RACIST, DRUG ADDICT or TRAITOR. Throughout his artistic career, Paglen has developed a longstanding interest around issues of surveillance, CIA black sites, drone warfare or the apparatus and essence of America’s security system. In From Apple to Anomaly the artist powerfully illuminates what he calls “the deep forms of bias, prejudice, and cruelty that can be built into machine learning systems that classify people”, encouraging us to ask questions about the supposedly neutral applications of AI and machine learning technologies. At the same time, Paglen also highlights the superiority the human brain still retains over any form of AI, such as the ability to understand nuance. For example, words such as SPAM can have different meanings but, as we can see from the images in the exhibition, to ImageNet AI it only means canned cooked pork.
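To make concrete how such category labels get attached to images, the following is a minimal, illustrative sketch (not part of Paglen’s installation or the ImageNet project itself) of querying an off-the-shelf ImageNet-trained classifier. It assumes PyTorch and torchvision are available; the file path is a placeholder. Whatever the photograph shows, the model can only answer from its fixed list of categories.

# Illustrative sketch: a model pretrained on ImageNet assigns every image to its fixed vocabulary.
# Assumes PyTorch and torchvision are installed; "some_photo.jpg" is a hypothetical path.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT                  # weights pretrained on ImageNet
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()                   # the resizing/normalization the model expects
labels = weights.meta["categories"]                 # the only "words" the model can see with

image = preprocess(Image.open("some_photo.jpg")).unsqueeze(0)   # add a batch dimension
with torch.no_grad():
    probs = model(image).softmax(dim=1)[0]

# The output is always a ranking over the same preset categories, however unsuitable they are.
for p, idx in zip(*probs.topk(5)):
    print(f"{labels[int(idx)]}: {p.item():.1%}")

The point is not the code itself but what it makes visible: the taxonomy is decided before the image is ever seen, which is exactly the tension the installation stages.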
(FIG 2)
From ‘Apple’ to ‘Anomaly’ invites the viewer to consider that the world of images has grown distanced from human eyes as machines have been trained to see without us.
(FIG 3)
Paglen often refers to this new state of machine-to-machine image making as “invisible images”, in light of the fact that this form of vision is “inherently inaccessible to the human eyes”.
(FIG 4) René Magritte, This is not an apple, 1964.
Right at the beginning of the exhibition space, prominently displayed on a separate standing wall, Paglen has included an art historical reference: Rene Magritte’s Ceci n’est pas une pomme (This is not an apple), 1964. Nevertheless, ImageNet AI has included it in its APPLE category as it cannot recognise it as a Surrealist artwork or a philosophical discussion on semiotics. Paglen’s reference to Art History suggests that artists have always questioned ways of seeing and image making. His critical investigations into the relationship between vision, power and technology show the crucial role artists can play today by highlighting the problems of perception that have political and social implications that will influence our daily lives, now and into the future.
§ Interview: Apple of My Eye
What draws you to explore these hidden power structures?
Put simply, I’m interested in learning how to see. We see with our eyes but also, increasingly, with the technologies we build, whether that’s cameras or sensors or drones or artificial intelligence systems. We also see with our cultural backgrounds, we see from the moment in history that we’re in, and we see through the lens of economic systems that we’re embedded within. I don’t think you can easily pick these things apart.
Where did the interest in AI come from?
I was working on a film about Edward Snowden, called Citizenfour, which developed out of my work trying to understand how mass surveillance infrastructures operate. I had been looking at institutions like the NSA, or the GCHQ in the UK – these huge, global, essentially military, surveillance agencies – and came to realise there were other institutions that are a hundred times bigger called Google, Amazon and Microsoft. They’re very similar in many respects, especially in the way they collect and use data. From there, you encounter AI pretty quickly – it’s an essential part of data collection at that scale.
What should we know about AI?
First of all, the word intelligence is quite misleading. AI is statistics; it’s non-linear algebra, which begins to demythologise AI – it immediately removes these conversations about AI being able to take over the world by itself. On the other hand, AI is everywhere and it is increasingly built into our infrastructures, with the people that run these systems extracting as much data as possible about our daily lives, and, of course, the goal of extracting all that data is ultimately to make money. All that information about your behaviours and habits is sold to insurance agencies that will modulate your insurance premiums and credit agencies to modulate your credit ratings. AI is not passive, but actively
(FIG 4) Prototype for a Nonfunctional Satellite, 2013. Part of a series exploring the idea of launching a decorative sculpture into the night sky.
sculpts our lives in ways that will financially benefit the massive corporations that operate at this scale. I’ve been looking at training images – the images that are fed into AI systems in order to teach them how to recognise different objects. Say you want to distinguish between apples and oranges, you can build what’s called a neural network and give it thousands of pictures of oranges and thousands of pictures of apples, and it will ‘learn’ to identify what an apple is and what’s an orange. This is an incredibly simplistic example – in real life it happens at a much larger scale. The piece at the Barbican draws from one of the most prominent training sets, ImageNet, which was developed by Stanford University and published in 2009 – a set of 15 million images organised into about 20 thousand categories. In the words of the set’s founders, it’s an attempt to map out an entire world of objects – it’s crazy how extensive it is. The work consists of a montage, going from concepts that we think are not particularly controversial, like an apple – a concrete noun – to something like an apple picker, a more ambiguous concept: Your definition of an apple picker might be completely different to mine. The piece explores how machine learning systems are being trained to see. The training sets establish ‘natural’ or absolute definitions of things that are entirely historically constructed – for example gender, when the only way to know someone’s gender is to ask them. So the method of classifying in this way becomes problematic very quickly.
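Paglen’s apples-and-oranges example can be sketched in a few lines of code. What follows is a hypothetical, simplified illustration rather than his or ImageNet’s actual setup: it assumes PyTorch and torchvision, and the folder layout (data/train/apple, data/train/orange) and the training settings are placeholders. The network “learns” only whatever distinctions the labeled folders encode.

# Simplified sketch of training a classifier on labeled photos, in the spirit of the
# apples-vs-oranges example. Assumes PyTorch/torchvision; folder names and settings are illustrative.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)   # class = folder name
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)                                  # small, untrained network
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))     # e.g. apple vs. orange
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                         # a few passes over the labeled images
    for images, targets in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        optimizer.step()

# The model now reproduces the categories it was given; it has no notion of an "apple"
# beyond the statistics of the training photographs.

The categories come first and the seeing comes after, which is why, as Paglen argues, the ambiguity of a concept like “apple picker” is already a political problem before any image is classified.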
How does that take form in your recent work?
I mentioned that we might both agree on the concept of an apple, but actually the first piece in the Barbican show, before you see anything, is an image of a Magritte painting that says ‘Ceci n’est pas une pomme’, and yet it has been classified by the AI system as an apple. Who gets to decide what an apple is? Is it the artist, the viewer or the machine learning system?
What will be the implications of AI for art?
You don’t know! I think I’m one of the few people to have actually opened up these training sets and looked at the images and how they are categorised. When you do, you see the assumptions—the categorisation—which is really very regressive; a kind of physiognomy. These images, these taxonomical structures are being built into infrastructures around us all the time, and they operate autonomously; you are not able to challenge how you, or anything else, is being seen.
Will the primacy of the artist not always be there?
You can see how, the worse the assumptions are, the more potential there is to harm the most vulnerable sections of society. There’s an urgent need for people to do more work opening up these systems, to try and understand how they work technically, but also the kind of politics that are being built into them. Technologies are never neutral – they actively shape society according to certain rules. There are always winners and losers.
(FIG 4)
Design, fiction, and social dreaming
What role can design play when existing systems are reaching their effective limits?
(FIG 1)
How can we expand and refresh the role of design beyond the constraints of problem-solving?
(FIG 2)
Anthony Dunne & Fiona Raby
We believe that by speculating more, at all levels of society, and exploring alternative scenarios, reality will become more malleable.
Beyond radical design? Anthony Dunne & Fiona Raby
Dreams are powerful. They are repositories of our desire. They animate the entertainment industry and drive consumption. They can blind people to reality and provide cover for political horror. But they can also inspire us to imagine that things could be radically different than they are today, and then believe we can progress toward that imaginary world.1 It is hard to say what today’s dreams are; it seems they have been downgraded to hopes—hope that we will not allow ourselves to become extinct, hope that we can feed the starving, hope that there will be room for us all on this tiny planet. There are no more visions. We don’t know how to fix the planet and ensure our survival. We are just hopeful. As Fredric Jameson famously remarked, it is now easier for us to imagine the end of the world than an alternative to capitalism. Yet alternatives are exactly what we need. We need to dream new dreams for the twenty-first century as those of the twentieth century rapidly fade. But what role can design play? When people think of design, most believe it is about problem solving. Even the more expressive forms of design are about solving aesthetic problems. Faced with huge
challenges such as overpopulation, water shortages, and climate change, designers feel an overpowering urge to work together to fix them, as though they can be broken down, quantified, and solved. Design’s inherent optimism leaves no alternative but it is becoming clear that many of the challenges we face today are unfixable and that the only way to overcome them is by changing our values, beliefs, attitudes, and behavior. Although essential most of the time, design’s inbuilt optimism can greatly complicate things, first, as a form of denial that the problems we face are more serious than they appear, and second, by channeling energy and resources into fiddling with the world out there rather than the ideas and attitudes inside our heads that shape the world out there. Rather than giving up altogether, though, there are other possibilities for design: one is to use design as a means of speculating how things could be—speculative design. This form of design thrives on imagination and aims to open up new perspectives on what are sometimes called wicked problems, to create spaces for discussion and debate about alternative ways of being, and to inspire and encourage people’s imaginations to flow freely. Design speculations can act as a catalyst for collectively redefining our relationship to reality.
Probable/Plausible/Possible/Preferable Being involved with science and technology and working with many technology companies, we regularly encounter thinking about futures, especially about “The Future.” Usually it is concerned with predicting or forecasting the future, sometimes it is about new trends and identifying weak signals that can be extrapolated into the near future, but it is always about trying to pin the future down. This is something we are absolutely not interested in; when it comes to technology, future predictions have been proven wrong again and again. In our view, it is a pointless activity. What we are interested in, though, is the idea of possible futures and using them as tools to better understand the present and to discuss the kind of future people want, and, of course, ones people do not want. They usually take the form of scenarios, often starting with a what-if question, and are intended to open up spaces of debate and discussion;
therefore, they are by necessity provocative, intentionally simplified, and fictional. Their fictional nature requires viewers to suspend their disbelief and allow their imaginations to wander, to momentarily forget how things are now, and wonder about how things could be. We rarely develop scenarios that suggest how things should be because it becomes too didactic and even moralistic. For us futures are not a destination or something to be strived for but a medium to aid imaginative thought—to speculate with. Not just about the future but about today as well, and this is where they become critique, especially when they highlight limitations that can be removed and loosen, even just a bit, reality’s grip on our imagination. As all design to some extent is future oriented, we are very interested in positioning design speculation in relation to futurology, speculative culture including literature and cinema, fine art, and radical social science
Dunne & Raby, Needy Robot, 2007, from Technological Dreams No.1: Robots, 2007. Photography by Per Tingleff.
1. Stephen Duncombe, Dream: Re-imagining Progressive Politics in an Age of Fantasy (New York: The New Press, 2007), 182.
It’s becoming clear that many of the challenges we face today are unfixable and that the only way to overcome them is by changing our values, beliefs, attitudes, and behavior.
cerned with changing reality rather than simply describing it or maintaining it. 2 This space lies somewhere between reality and the impossible and to operate in it effectively, as a designer, requires new design roles, contexts, and methods. It relates to ideas about progress—change for the better but, of course, better means different things to different people. To find inspiration for speculating through design we need to look beyond design to the methodological playgrounds of cinema, literature, science, ethics, politics, and art; to explore, hybridize, borrow, and embrace the many tools available for crafting not only things but also ideas— fictional worlds, cautionary tales, what-if scenarios, thought experiments, counterfactuals, reductio ad absurdum experiments, prefigurative futures, and so on. In 2009, the futurologist Stuart Candy visited the Design Interactions program at the Royal College of Art and used a fascinating diagram in his presentation to illustrate different kinds of potential futures.3 It consisted of a number of cones fanning out from the present into the future. Each cone represented different levels of likelihood. We were very taken by this imperfect but helpful diagram and adapted it for our own purposes. The first cone was the probable. This is where most designers operate. It describes
2. For example, see Barbara Adam, “Towards a New Sociology of the Future” (Draft). Available at http://www.cf.ac.uk/socsi/futures/newsociologyofthefuture.pdf. Accessed December 24, 2012.
3. For more on this, see Joseph Voros, “A Primer on Futures Studies, Foresight and the Use of Scenarios,” Prospect, the Foresight Bulletin, no. 6 (December 2001). Available at http://thinkingfutures.net/wp-content/uploads/2010/10/A_Primer_on_Futures_Studies1.pdf. Accessed December 21, 2012.
what is likely to happen unless there is some extreme upheaval such as a financial crash, eco disaster, or war. Most design methods, processes, tools, acknowledged good practice, and even design education are oriented toward this space. How designs are evaluated is also closely linked to a thorough understanding of probable futures, although it is rarely expressed in those terms. The next cone describes plausible futures. This is the space of scenario planning and foresight, the space of what could happen. In the 1970s companies such as Royal Dutch Shell developed techniques for modeling alternative near-future global situations to ensure that they would survive through a number of large-scale, global, economic, or political shifts. The space of plausible futures is not about prediction but exploring alternative economic and political futures to ensure an organization will be prepared for and thrive in a number of different futures. The next cone is the possible. The skill here is making links between today’s world and the suggested one. Michio Kaku’s book Physics of the Impossible 4 sets out three classes of impossibility, and even in the third, the most extreme—things that are not possible according to our current understanding of science— there are only two, perpetual motion and
Walter Pichler, TV Helmet (Portable Living Room), 1967. Photograph by Georg Mladek. Photograph courtesy of Galerie Elisabeth and Klaus Thoman/ Walter Pichler.
4. Michio Kaku, Physics of the Impossible (London: Penguin Books, 2008).
The futures cone: Probable, Plausible, Possible, and Preferable futures fanning out from the Present.
precognition, which, based on our current understanding of science, are impossible. All other changes—political, social, economic, and cultural—are not impossible but it can be difficult to imagine how we would get from here to there. In the scenarios we develop we believe, first, they should be scientifically possible, and second, there should be a path from where we are today to where we are in the scenario. A believable series of events that led to the new situation is necessary, even if entirely fictional. This allows viewers to relate the scenario to their own world and to use it as an aid for critical reflection. This is the space of speculative culture—writing, cinema, science fiction, social fiction, and so on. Although speculative, experts are often consulted when building these scenarios, as David Kirby points out in a fascinating chapter about distinctions between what he calls speculative scenarios and fantastic science in his book Lab Coats in Hollywood; the role of the expert is often not to prevent the impossible but to make it acceptable.5 Beyond this lies the zone of fantasy, an area we have little interest in. Fantasy exists in its own world, with very few if any links to the world we live in. It is of course valuable, especially as a form of entertainment, but for us, it is too removed from how the world is. This is the space of fairy tales, goblins, superheroes, and space opera. A final cone intersects the probable and plausible. This is the cone of preferable futures. Of course the idea of preferable is not so straightforward; what does preferable mean, for whom, and who decides? Currently, it is determined by government and industry, and although we play a role as consumers and voters, it is a limited one. In Imaginary Futures, Richard Barbrook explores futures as tools designed for organizing and justifying the present in the interests of a powerful minority.6 But, assuming it is possible to create more socially constructive imaginary futures, could design
5. David Kirby, Lab Coats in Hollywood: Science, Scientists, and Cinema (Cambridge, MA: MIT Press, 2011), 145–168.
6. Richard Barbrook, Imaginary Futures: From Thinking Machines to the Global Village (London: Pluto Press, 2007).
7. This history is very well documented; for example, see Neil Spiller, Visionary Architecture: Blueprints of the Modern Imagination (London: Thames & Hudson, 2006); Felicity D. Scott, Architecture or Techno-utopia: Politics after Modernism (Cambridge, MA: MIT Press, 2007); Robert Klanten et al., eds., Beyond Architecture: Imaginative Buildings and Fictional Cities (Berlin: Die Gestalten Verlag, 2009); and Geoff Manaugh, The BLDG BLOG Book (San Francisco: Chronicle Books, 2009); see also http://bldgblog.blogspot.co.uk. Accessed December 24, 2012.
help people participate more actively as citizen-consumers? And if so, how? This is the bit we are interested in. Not in trying to predict the future but in using design to open up all sorts of possibilities that can be discussed, debated, and used to collectively define a preferable future for a given group of people: from companies, to cities, to societies. Designers should not define futures for everyone else but, working with experts, including ethicists, political scientists, economists, and so on, generate futures that act as catalysts for public debate and discussion about the kinds of futures people really want. Design can give experts permission to let their imaginations flow freely, give material expression to the insights generated, ground these imaginings in everyday situations, and provide platforms for further collaborative speculation. We believe that by speculating more, at all levels of society, and exploring alternative scenarios, reality will become more malleable and, although the future cannot be predicted, we can help set in place today factors that will increase the probability of more desirable futures happening. And equally, factors that may lead to undesirable futures can be spotted early on and addressed or at least limited.

Beyond Radical Design?

We have long been inspired by radical architecture and fine art that use speculation for critical and provocative purposes, particularly projects from the 1960s and 1970s by studios such as Archigram, Archizoom, Superstudio, Ant Farm, Haus-Rucker-Co, and Walter Pichler.7 But why is this so rare in design? During the Cold War Modern exhibition at the Victoria and Albert Museum in 2008 we were delighted to finally see so many projects from this period for real. The exuberant energy and visionary imagination of the projects in the final room of the exhibition were incredibly inspiring for us. We were left wondering how this spirit could be reintroduced
Today designers often focus on making technology easy to use, sexy, and consumable, but how about design that is used as a tool to create not only things but ideas as a means of speculating about how things could be—to imagine possible futures.
to contemporary design and how design’s boundaries could be extended beyond the strictly commercial to embrace the extreme, the imaginative, and the inspiring. We believe several key changes have happened since the high point of radical design in the 1970s that make imaginative, social, and political speculation today more difficult and less likely. First, during the 1980s design became hyper-commercialized to such an extent that alternative roles for design were lost. Socially oriented designers such as Victor Papanek who were celebrated in the 1970s were no longer regarded as interesting; they were seen as out of sync with design’s potential to generate wealth and to provide a layer of designer gloss to every aspect of our daily lives. There was some good in this—design was embraced by big business and entered the mainstream but usually only in the most superficial way. Design became fully integrated into the neoliberal model of capitalism that emerged during the 1980s, and all other possibilities for design were soon viewed as economically unviable and therefore irrelevant. Second, with the fall of the Berlin Wall in 1989 and the end of the Cold War the possibility of other ways of being and alternative models for society collapsed as well. Market-led capitalism had won and reality instantly shrank, becoming one dimensional. There were no longer other
social or political possibilities beyond capitalism for design to align itself with. Anything that did not fit was dismissed as fantasy, as unreal. At that moment, the “real” expanded and swallowed up whole continents of social imagination marginalizing as fantasy whatever was left. As Margaret Thatcher famously said, “There is no alternative.” Third, society has become more atomized. As Zygmunt Bauman writes in Liquid Modernity, 8 we have become a society of individuals. People work where work is available, travel to study, move about more, and live away from their families. There has been a gradual shift in the United Kingdom from government that looks after the most vulnerable in society to a small government that places more responsibility on individuals to manage their own lives. On the one hand this undoubtedly creates freedom and liberation for those who wish to create new enterprises and projects but it also minimizes the safety net and encourages everyone to look out for him- or herself. At the same time, the advent of the Internet has allowed people to connect with similar-minded people all over the world. As we channel energy into making new friends around the world we no longer need to care about our immediate neighbors. On a more positive note, with this reduction in top-down governing, there has been a corresponding shift
away from the top-down mega-utopias dreamt up by an elite; today, we can strive for one million tiny utopias each dreamt up by a single person. Fourth, the downgrading of dreams to hopes once it became clear that the dreams of the twentieth century were unsustainable, as the world’s population has more than doubled in the last forty-five years to seven billion. The great modernist social dreams of the post-war era probably reached a peak in the 1970s when it started to become clear that the planet had limited resources and we were using them up fast. As populations continued to grow at an exponential rate we would have to reconsider the consumer world set in motion during the 1950s. This feeling has become even more acute with the financial crash and the emergence since the new millennium of scientific data suggesting that the climate is warming up due to human activity. Now, a younger generation doesn’t dream, it hopes; it hopes that we will survive, that there will be water for all, that we will be able to feed everyone, that we will not destroy ourselves.
But we are optimistic. Triggered by the financial crash of 2008, there has been a new wave of interest in thinking about alternatives to the current system. And although no new forms of capitalism have emerged yet, there is a growing desire for other ways of managing our economic lives and the relationship among state, market, citizen, and consumer. This dissatisfaction with existing models coupled with new forms of bottom-up democracy enhanced by social media make this a perfect time to revisit our social dreams and ideals and design’s role in facilitating alternative visions rather than defining them. Of being a catalyst rather than a source of visions. It is impossible to continue with the methodology employed by the visionary designers of the 1960s and 1970s. We live in a very different world now but we can reconnect with that spirit and develop new methods appropriate for today’s world and once again begin to dream. But to do this, we need more pluralism in design, not of style but of ideology and values.
United Micro Kingdoms’ Digiland. Courtesy Dunne & Raby. Photo: Tommaso Lanza
8. Zygmunt Bauman, Liquid Modernity (Cambridge, UK: Polity Press, 2000).
Traditional vs. Speculative design

            Traditional design                     Speculative design
Attitude    Normative                              Critical
Mindset     Pragmatic                              Idealistic, futuristic
Purpose     Make business successful               Spur debate, challenge the status quo
Goal        Provide answers by solving problems    Find problems or explore questions
Intent      Serve a user in seriousness, provide clarity    Provoke an audience
Tools       Traditional                            Storytelling, fiction films, etc.
It’s closer to literature than social science, emphasizes imagination over practicality, and asks questions rather than provides answers.
The value is not what it achieves or does but what it is and how it makes people feel, especially if it encourages people to question, in an imaginative, troubling, and thoughtful way, everydayness and how things could be different.
From human-centered to posthuman-centered
Posthuman-centered design Haakon Faste
Futurist experts have estimated that by the year 2030 computers in the price range of inexpensive laptops will have a computational power that is equivalent to human intelligence. The implications of this change will be dramatic and revolutionary, presenting significant opportunities and challenges to designers. Already machines can process spoken language, recognize human faces, detect our emotions, and target us with highly personalized media content. While technology has tremendous potential to empower humans, soon it will also be used to make them thoroughly obsolete in the workplace, whether by replacing, displacing, or surveilling them. More than ever designers need to look beyond human intelligence and consider the effects of their practice on the world and on what it means to be human. The question of how to design a secure human future is complicated by the uncertainties of predicting that future. As it is practiced today, design is strategically positioned to improve the usefulness and quality of human interactions with technology. Like all human endeavors, however, the practice of design risks marginalization if it is unable to evolve. When envisioning the future of design, our social and psychological frames of reference unavoidably and unconsciously bias our interpretation of the world. People systematically underestimate exponential trends such as Moore’s law, for example, which tells us that in 10 years we will have 32 times more total computing power than today. Indeed, as computer scientist Ray Kurzweil observes, “We won’t experience 100 years of technological advances in the 21st century; we will witness on the order of 20,000 years of progress (again when measured by today’s rate of progress), or about 1,000 times greater than what was achieved in the 20th century.” Design-oriented research provides a possible means to anticipate and guide rapid changes, as design, predi-
cated as it is on envisioning alternatives through “collective imagining,” is inherently more future-oriented than other fields. It therefore seems reasonable to ask how technology-design efforts might focus more effectively on enabling human-oriented systems that extend beyond design for humanity. In other words, is it possible to design intelligent systems that safely design themselves? Imagine a future scenario in which extremely powerful computerized minds are simulated and shared across autonomous virtual or robotic bodies. Given the malleable nature of such super-intelligences–they won’t be limited by the hardwiring of DNA information–one can reasonably assume that they will be free of the limitations of a single material body, or the experience of a single lifetime, allowing them to tinker with their own genetic code, integrate survival knowledge directly from the learnings of others, and develop a radical new form of digital evolution that modifies itself through nearly instantaneous exponential cycles of imitation and learning, and passes on its adaptations to successive generations of “self.” In such a post-human future, the simulation of alternative histories and futures could be used as a strategic evolutionary tool, allowing imaginary scenarios to be inhabited and played out before individuals or populations commit to actual change. Not only would the lineage of such beings be perpetually enhanced by automation, leading to radical new forms of social relationships and values, but the systems that realize or govern those values would likely become the instinctual mechanism of a synchronized and sentient “techno-cultural mind.” Bringing such speculative and hypothetical scenarios into cultural awareness is one way that designers can evaluate possibilities and determine how best to proceed. What should designers do to prepare for such futures? What methods should be applied to their research and training? Today’s interaction designers shape human behavior through investigative research, systemic thinking, creative prototyping, and rapid iteration. Can these same methods be used to address the multitude of longer-term social and ethical issues that designers create? Do previous inventions, such as the internal combustion engine or nuclear power, provide relevant historical lessons to learn from? If little else, reflecting on super-intelligence through the lens of nuclear proliferation and global warming throws light on the existential consequences of poor design. It becomes clear that while systemic thinking and holistic research are useful methods for addressing existential risks, creative prototyping or rapid iteration with nuclear power or the environment as materials is probably unwise. Existential risks do not allow for a second chance to get it right. The only possible course of action when confronted with such challenges is to examine all possible future scenarios and use the best available subjective estimates of objective risk factors. Simulations can also be leveraged to heighten designers’ awareness of trade-offs. Consider the consequences of contemporary interaction design, for example: intuitive interfaces, systemic experiences, and service
(FIG 1) Perceptual Robotics Research: research collaborations with the PERCRO Perceptual Robotics Laboratory, 2007–2010.
Perceptual robotics focused on the design of immersive multimodal experiences and virtual environments.
Exoskeletal robots used to convey sensations of force during teleoperation or virtual manipulation procedures.
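A quick check of the Moore’s-law figure quoted above, assuming the conventional doubling period of roughly two years (the essay itself does not state one):

\[
2^{\,10 \text{ years} \,/\, 2 \text{ years per doubling}} = 2^{5} = 32
\]

Five doublings in a decade yield roughly 32 times today’s total computing power, which is the figure the essay cites.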
To reduce risk, posthuman-centered design should focus on changing the relationship of “use” between humans and machine systems.
Diagram: two models of the interface. In the first, the human uses the system; user activity and perception pass through an interface of system input and system display. In the second, the relation reverses: the system uses the human, and the interface is marked as useless.
Art is useless, but it doesn’t mean it has no function. Karl Marx envisioned a future state of “freedom beyond necessity,” much like a posthuman era of robotic automation. He called this dream “The Abolition of Labor.” The abolition of labor is not the abolition of highly developed technological production, but rather the abolition of instrumental technological production. Instrumental production uses others as “instruments.” Deinstrumental production, such as artistic activity, has no purpose outside of itself.
economies. When current design methods are applied to designing future systems, each of these patterns can be extended through imagined simulations of posthuman design. Intuitive human-computer interfaces become interfaces between post-humans; they become new ways of mediating interdependent personal and cultural values—new social and political systems. Systemic experiences become new kinds of emergent post-human perception and awareness. Service economies become the synapses of tomorrow’s underlying system of techno-cultural values, new moral codes. The first major triumph of interaction design, the design of the intuitive interface, merged technology with aesthetics. Designers adapted modernism’s static typography and industrial styling and learned to address human factors and usability concerns. Today agile software practices and design thinking ensure the intuitive mediation of human and machine learning. We adapt to the design limitations of technological systems, and they adapt in return based on how we behave. This interplay is embodied by the design of the interface itself, between perception and action, affordance and feedback. As the adaptive intelligence of computer systems grows over time, design practices that emphasize the human aspects of interface design will extend beyond the one-sided human perspective of machine usability toward a reciprocal relationship that values intelligent systems as partners. In light of the rapid evolution of these new forms of artificial and synergetic life, the quality and safety of their mental and physical experiences may ultimately deserve equal if not greater consideration than ours. Interaction design can also define interconnected networks of interface touch-points and shape them into choose-your-own-adventures of human experience. We live in a world of increasingly seamless integration between Wi-Fi networks and thin clients, between phones, homes, watches, and cars. In the near future, crowdsourcing systems coupled with increasingly pervasive connectivity services and wearable computer interfaces will generate massive stockpiles of data that catalog human behavior to feed increasingly intuitive learning machines. Just as human-centered design crafts structure and experience to shape intuition, post-human-centered design will teach intelligent machine systems to design the hierarchies and compositions of human behavior. New systems will flourish as fluent extensions of our digital selves, facilitating seamless mobility throughout systems of virtual identity and the governance of shared thoughts and emotions. Applying interaction design to post-human experience requires designers to think holistically beyond the interface to the protocols and exchanges that unify human and machine minds. Truly systemic post-human-centered designers recognize that such interfaces will ultimately manifest in the psychological fabric of
Era of brain communication hacks
Era of direct brain communication
In the future, the world will operate as one big brain.
New systems will flourish as fluent extensions of our digital selves.
post-human society at much deeper levels of meaning and value. Just as today’s physical products have slid from ownership to on-demand digital services, our very conception of these services will become the new product. In the short term, advances in wearable and ubiquitous computing technology will render our inner dimensions of motivation and self-perception tangible as explicit and actionable cues. Ultimately such manifestations will be totally absorbed by the invisible hand of post-human cognition and emerge as new forms of social and self-engineering. Design interventions at this level will deeply control the post-human psyche, building on research methodologies of experience economics designed for the strategic realization of social and cognitive value. Can a market demand be designed for goodwill toward humans at this stage, or does the long tail of identity realization preclude it? Will we live in a utopian world of socialized techno-egalitarian fulfillment and love or become a eugenic cult of celebrity self-actualization? It seems unlikely that humans will stem their fascination with technology or stop applying it to improve themselves and their immediate material condition. Tomorrow’s generation faces an explosion of wireless networks, ubiquitous computing, context-aware systems, intelligent machines, smart cars, robots, and strategic modifications to the human genome. While the precise form these changes will take is unclear, recent history suggests that they are likely to be welcomed at first and progressively advanced. It appears reasonable that human intelligence will become obsolete, economic wealth will reside primarily in the hands of super-intelligent machines, and our ability to survive will lie beyond our direct control. Adapting to cope with these changes, without alienating the new forms of intelligence that emerge, requires us to transcend the limitations of human-centric design. Instead, a new breed of post-human-centered designer is needed to maximize the potential of post-evolutionary life.
Tools are an extension of human body
Human Machine
The first technology or tool wasn’t the spear or sword carried by a hero, but rather the sacks and bags that women used to transport food for their families.
Part of the beauty of human and machine systems is their shared inherent fallibility.
If we don’t waste time trying to force AI to think like a human, we can arrive at Point B—and Points C, D, and E—in fresh, alien ways.
Sougwen Chung: human and machine collaboration Sarah L. Kaufman | 2020
NEW YORK—Sougwen Chung looks down at her silent, stubborn collaborator with a mix of affection and mild vexation. “I need to debug the unit,” says the 35-year-old artist. “It won’t cooperate with me today.” She strokes the silver-and-white contraption as if she’s soothing a child. Clearly, it is more to her than a “unit.” It’s a robotic arm that paints, powered by artificial intelligence. Meet Doug. Full name: Drawing Operations Unit, Generation Four. Chung uses it and other robots in her performance-based artworks. She and the robots paint together on large canvasses, part team effort, part improvised dance. In pre-coronavirus days, Chung led these AI-assisted painting performances in front of a live audience, on a stage or in a gallery setting. At London’s Gillian Jason Gallery, a series of four of Chung’s robot collaborations is priced at 100,000 pounds, or more than $131,000, per image. Yet with the pandemic, Chung is no longer performing live. She streams her robot collaborations from her studio into exhibit spaces — in August, the Sorlandets Kunstmuseum in Norway hosted several of these transmissions, where they became temporary video installations. Chung has carved out her niche in the expanding world of AI art. Much of this world is focused on the digital side: graphics, pixels, software. But Chung’s work is different. She’s interested in a human-machine partnership, and what that feels like in the body. “I’m interested in the physical world as well as digital,” she says, “not so much an emphasis on pixel manipulation. How these systems can feed back into our everyday lives, and in muscle memory and physical space.” This is
why she likes people to see how she and the robots paint together. She calls her work “embodied AI,” and it’s her body she’s talking about — bending or kneeling, wielding her brush on the canvas with her robots, responding to their movements as they respond to hers. Chung has designed and programmed about two dozen Dougs, at a cost of up to $8,000 per unit. She uploaded the early ones with 20 years worth of her drawings, making them experts in her gestures. Doug 4 is even more intimately tied to Chung: It connects to her brain-wave data, and this influences how the robot behaves. When she and her robots paint, they are closely linked through a shared bank of knowledge, and through live, in-the-moment visual and movement cues, just as dancers or musicians are. In Chung’s Brooklyn studio, Doug’s arm bends over a sheet of paper on a table, with its front tip poised just above the surface, ready to be fitted with a brush. Smooth and organic-looking, this Doug could be taken for a biomorphic sculpture. You could say it’s both art object and art maker. Paintings that Chung has created with AI systems hang on the sun-washed walls of her studio: spiraling clouds of blue and white; tendrils that spring forth and recede; fluid lines worming together in an undulating web. Some recall thick-inked calligraphy, the jottings of a secret language. They look like the work of a single artist. But are they? That depends on how you think about AI. It’s a term that even Chung hesitates to embrace. “We don’t have human intelligence figured out,” she says. “That lack of specificity is not the best way to think about a complex set of systems.” She prefers to call her robots collaborators. They don’t fully replicate the human creative process, of course, but neither are they simply spitting out copies of the data Chung feeds them. Instead, they can generate interpretations — for example, expanding upon a data set of Chung’s
How do we reimagine these boundaries and differences that are supposed to exist between humans and machines?
I wanted to try something less about robots executing an existing code and more about working together.
drawings to make their own designs. They can also respond spontaneously to Chung’s lines and brushstrokes, creating a feedback loop with her of improvised, communal creation. Chung made the paintings on her walls with mobile Dougs, Generations Two and Three, that scoot around on wheels with their brushes, trailing paint. (First, she had to figure out how to keep their wheels from slipping on it.) Many of these floor-based units rest on shelves and tables around the studio. They’re round and Roomba-size, topped with coiled wires, small motors and a compact computer device known as a Raspberry Pi. Built into the front of each one is a short, stiff chalk brush, like a shaving brush. Dressed all in black — snug T-shirt, harem pants — Chung looks more like a dancer than a techie, with her slender physique and expressive, delicate hands. She worries about sounding “too much like a nerd” as she points out the robots’ features. Although she speaks softly and has a calm demeanor, Chung is a bit of an adrenaline junkie. She’s okay with chaos, happy to throw control to the winds. Why else would she choose this path, turning away from safe, contemplative work in her studio to build a career out of risky group projects in public view, with unpredictable algorithms and glitch-prone, high-maintenance machines (looking at you, robotic-arm Doug) that require constant calibrations? For Chung, perseverance while dealing with technical glitches is nothing new. She grew up at the intersection of art and technology; her father was an opera singer and her mother a computer programmer. Born in Hong Kong, they emigrated to Toronto, where Chung was born. She studied violin, taught herself to code and began designing websites
in grade school. She was also fond of drawing, though back then she didn’t envision a career as an artist. Still, she liked her work enough to hang on to her early sketches, and to everything since. (This is a highly organized person.) “The drawing practice,” she says, “is something I’ve always kept with me, my whole life.” It was during a research fellowship at MIT Media Lab that Chung discovered robotics. Here was a way to bridge science and art, and build on her sketching. “I was interested in the physical embodiment, and what it would feel like to evolve my own drawing practice,” she says, “and I hadn’t seen robots used collaboratively at that time. I wanted to try something less about robots executing an existing code and more about working together.” In 2014, she launched the machine collaborations that eventually included AI. “It was just this strange experiment,” Chung says, thinking back on the first AI system she built and coded. “What would it be like to have a drawing collaborator that was a nonhuman machine entity? What would that do for my process?” It sounds like a logical progression — from child artist and coder to professional artist building her own robots. Yet Chung says none of this seemed very clear as she was feeling her way into this new realm. “I stumbled into my path,” she says. What pushed her forward wasn’t so much the technology, fascinating as it was, but the rush of performance. That’s what she had loved about playing violin as a child. “I wanted to bring the body back into the creative process, the muscle memory and gesture that were missing from my practice, and that energy you create with the audience.” Maya Indira Ganesh, a technology researcher at Leuphana University in Lüneburg, Germany, says Chung’s work stands out because she rejects prevailing notions of
robots and AI, and she’s comfortable with her own fallibility. “What Sougwen does is say, ‘How do we reimagine these boundaries and differences that are supposed to exist between humans and machines?’” Ganesh says, speaking by phone from Berlin. In galleries, “most of the AI art you see is usually simple and straightforward, like watching computation happen. It’s the fetishization of the machine. We think these systems should be perfect and seamless. But Sougwen is very skilled with this technology, and she talks about her works in progress and in process. … She’s showing us that the human is very much a part of the process.” The process. Think experimental theater. Typically, Chung and Doug perform in a darkened gallery space, with spectators (pre-pandemic) gathered around a canvas illuminated on the floor. There’s often music and atmospheric lighting, and the robot is sort of crawling around. “Painting, painting,” says Chung, delivering a firm correction with a smile. Of course. It’s painting. (One of the Dougs, perched beside Chung’s laptop as we watch videos of her performances, still has dried blue and white paint stuck to its little brush.) Doug dashes off gleaming streaks of color, and Chung counters with her own, and so on, artist and machine taking turns reading each other’s painted expressions and building on them. The robot is guided by an AI system known as recurrent neural networks. “It’s more of a call-and-response,” Chung says. “I can input different line strokes and the machine can respond to it. So it’s really about that interaction. But it’s also not about making machines do a thing. You know what I mean? It’s always about that feedback loop in that collaboration.” Interaction. Collaboration. Chung’s language reveals how she thinks about AI. It’s not her slave. She’s not always the boss. “I think a lot about narratives that we tell ourselves about technology and why we have those narratives,” she says. “And I think they’re really influenced by science fiction and pop culture. And that tends to be hypermasculine, hyper-dystopian. That’s why we have all these really sensational stories about AI, like, is it going to take over humanity?
Where do we get that from? We get that from ‘The Matrix.’ “That’s not a narrative that I subscribe to,” she continues. “I think it creates a very adversarial, power-driven dynamic with technology.” In the performances, everything comes together: her tech expertise, her art, the full-body experience. After all the programming and calibrating, it is through these improvised painting experiences with her AI collaborators that Chung has regained the flow state she loved as a musician. “Where you don’t have to think about commas in your code,” she says, “but you can just be in it. … There was this sense of exploration and wonder that I was navigating. It felt very vital and alive, like dance.” Her work continues to evolve. In a recent project, she uploaded her robots with publicly available surveillance footage of pedestrians crossing New York City streets. She extracted specific data streams, to capture the physical motion of pre-coronavirus New York crowds. With the robots, she turned this digitized bustle into brushwork. In future projects, Chung hopes to bring the public into her process and even onto her canvas, to draw alongside the robots. “I’m curious about exploring what the machine would draw like,” she says, “if we all contributed to a drawing set.” Ultimately, Chung wants to use AI technologies to bring people together. Yet now that the coronavirus has us all practicing social distancing, she sees other opportunities: ways for viewers to experience AI art-making remotely, such as the streamed performances. “Picasso used the tools of his day,” she says. “I’m interested in using the technologies that define our current moment, as a way of understanding how they work in our lives. The modern human is surrounded by smart technology and phones and machines, and I want to use them as a source of inspiration, looking to what future art practices could be.” “There’s always a potential for failure,” Chung says. “With this dynamic that I’ve been exploring, it’s about the unexpected. And that keeps me really interested.”
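The “call-and-response” loop Chung describes can be sketched in a few lines. The following is a minimal, hypothetical illustration of the general idea rather than her actual system: the model size, the toy training data, and the respond helper are invented for demonstration, and a small LSTM stands in for whatever recurrent architecture Doug uses. The network learns to predict the next pen offset (dx, dy) from a stroke and then continues a human “call” with a machine “response.”

```python
# Sketch only: a toy stroke-continuation RNN, not Sougwen Chung's system.
import torch
import torch.nn as nn

class StrokeRNN(nn.Module):
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)  # predict the next (dx, dy) offset

    def forward(self, strokes, state=None):
        out, state = self.lstm(strokes, state)
        return self.head(out), state

def respond(model: StrokeRNN, prompt: torch.Tensor, steps: int = 32) -> torch.Tensor:
    """Read a human-drawn stroke (prompt: [T, 2] offsets) and generate a machine 'response'."""
    model.eval()
    with torch.no_grad():
        _, state = model(prompt.unsqueeze(0))   # encode the human gesture
        point = prompt[-1:].unsqueeze(0)        # start from the last offset
        response = []
        for _ in range(steps):                  # autoregressive continuation
            pred, state = model(point, state)
            point = pred[:, -1:, :]
            response.append(point.squeeze(0))
    return torch.cat(response, dim=0)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = StrokeRNN()
    # Toy training data: a noisy circular gesture standing in for an archive of drawings.
    t = torch.linspace(0, 6.28, steps=65)
    xy = torch.stack([torch.cos(t), torch.sin(t)], dim=-1)
    offsets = (xy[1:] - xy[:-1]).unsqueeze(0)   # [1, 64, 2] sequence of (dx, dy)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()
    for _ in range(200):                        # teacher-forced next-offset prediction
        pred, _ = model(offsets[:, :-1])
        loss = loss_fn(pred, offsets[:, 1:])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    machine_stroke = respond(model, offsets[0, :16])  # "call" with the first 16 offsets
    print(machine_stroke.shape)                       # -> torch.Size([32, 2]): the "response"
```

In this framing the prompt is the human gesture and the generated continuation is the machine’s reply; a real system would be trained on an archive of drawings and would feed the predicted offsets to a plotter or robotic arm, closing the feedback loop the article describes.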
IS NOT FOR APPLE
IS FOR ALGORITHMS
Touchdesigner
Processing
iPhone
iPhone
Adobe Illustrator
Adobe Photoshop
Adobe Photoshop
Adobe Photoshop
Figma
Procreate
Cinema 4D
Runway ML: AdaIN Style Transfer
Runway ML: Adaptive Style Transfer
Runway ML: Adaptive Style Transfer
Runway ML: Photo Sketch
Runway ML: Shape Machine Gan
Runway ML: SPA
Runway ML: SPADE
Runway ML: SPADE COCO
Runway ML: Cartoon Gan
“The medium is the message,” and it shapes society: our tools affect how we approach problems and how we navigate challenges, and, depending on their feature set, they have the power to either amplify or diminish our creativity.
chapter 4
CITATIONS & BIOGRAPHY
Citations
[1] Armstrong, H., & Dixon, K. D. (2021). Big data, big design: Why designers should care about artificial intelligence. Princeton Architectural Press.
[2] Dunne, A., & Raby, F. (2014). Speculative everything: Design, fiction, and social dreaming. MIT Press.
[3] Kelly, K. (2017). The inevitable: Understanding the 12 technological forces that will shape our future. Penguin Books.
[4] Jain, A. (2021, June 10). Calling for a more-than-human politics. Medium. Retrieved March 23, 2022, from https://medium.com/@anabjain/calling-for-amore-than-human-politics-f558b57983e6
[5] Armstrong, H. (2016). Digital design theory: Readings from the field. Princeton Architectural Press.
[6] Haraway, D. J. (2018). Cyborg manifesto. Camas Books.
[7] Ferrando, F., & Braidotti, R. (2021). Philosophical posthumanism. Bloomsbury Academic.
[8] Braidotti, R., & Hlavajova, M. (2020). Posthuman glossary. Bloomsbury Academic.
[9] Zuboff, S. (2020). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
[10] Bridle, J. (2019). New dark age: Technology and the end of the future. Verso.
[11] Bratton, B. H. (2016). The stack: On software and sovereignty. The MIT Press.
[12] Cook, S., Pardo, A., & Paglen, T. (2019). From Apple to anomaly. Barbican Books.
[13] Raina, A. (2020, February 11). Speculations on the posthuman age. RISD. Retrieved March 24, 2022, from https://www.risd.edu/news/stories/graphic-design-faculty-anastasiia-raina-on-posthumanism-and-design
[14] Gray, C. H. (1995). The cyborg handbook. Routledge.
[15] Bostrom, N. (2017). Superintelligence: Paths, dangers, strategies. Oxford University Press.
[16] Perez, C. C. (2021). Invisible women: Data bias in a world designed for men. Abrams Press.
[17] Noble, S. U. (2018). Algorithms of oppression. NYU Press.
[18] Wachter-Boettcher, S. (2018). Technically wrong: Sexist apps, biased algorithms, and other threats of toxic tech. W. W. Norton & Company.
[19] D’Ignazio, C., & Klein, L. F. (2020). Data feminism. The MIT Press.
[20] Zuboff, S. (2020). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
[21] McLuhan, M., & Fiore, Q. (1967). The medium is the massage. Random House.
[22] Hiesinger, K. B., Fisher, M. M., Byrne, E., López-Pastor, M. B., Ryan, Z., Blauvelt, A., Barton, J. R., Zhang, E. Y., Horvat, S., Cogdell, C., LeBrón, M., Gorbis, M., Syms, M., Latour, B., Wood, D., Jackson, N., Bove, V. M., Telhan, O., Yuan, L. Y., … Fanning, C. (2019). Designs for different futures. Philadelphia Museum of Art.
[23] Klanten, R. (n.d.). Data flow. Gestalten.
[24] Richardson, A. (2017). Data-driven graphic design: Creative coding for visual communication. Fairchild Books.
Acknowledgement
The ideas in my thesis have taken shape through conversations and exchanges with many people. I would like to particularly thank my thesis pod leader Brad Bartlett; instructors Miles Mazzie, Roy Tatum, Julian Stein, and Geoffrey Brewerton; and all the pod members: Finch Liu, Li Tong, Yan Yan, Gloria Tang, Joey Yang, Ellis Yu, and Elle Han. I’m also extremely grateful to Yuqin Ni and Momo Jiang for their encouragement and support throughout the whole journey. I feel very lucky and privileged to have worked with these supremely intelligent and talented people.
Biography
Xinyi Shao
thexinyishao@gmail.com
Xinyi Shao is a graphic designer from China, currently based in Los Angeles. She earned her master’s degree from ArtCenter College of Design and a bachelor’s degree in finance from the South University of Science and Technology. She is curious about the world and interested in learning new things. Her work emphasizes typography, motion, and generative design.