
Common Ground

Where do humans and machines overlap? Can we ever understand AI? Should we be listening to trees? Composer Holly Herndon contemplates the questions raised by our near future with technologist James Bridle

Holly Herndon is the American composer, performer and musical futurist who has just released her third album, PROTO, with the help of both a vocal ensemble – hearkening back to her gospel and folk singing roots in East Tennessee – and a self-built, nascent artificial intelligence called Spawn, which uses machine learning to interpret vocal characteristics and composition techniques. In 2018, Herndon released Godmother, the track that introduced Spawn to the world. It was a collaboration between Herndon and Jlin, whose music was used as one of the datasets from which the track was generated.

James Bridle is a British artist, writer and technologist whose first book – New Dark Age, about technology, knowledge and the end of the future – was published last year, followed this year by a BBC Radio 4 series, New Ways of Seeing. The pair met for the first time this summer in Berlin for an expansive conversation about AI, the nature of intelligence, and its implications for evolution, culture, governance and more.

We're more willing to grant intelligence to things that we've built ourselves than to non-human species

–JAMES BRIDLE

JB: What did you think was going to happen before you started this process [of building Spawn, the baby AI that features on new album PROTO]?

HH: I thought it would be easier. I thought we would get results sooner. It took way longer and it sounded way shittier. Then it became this process of negotiating not just expectation, but allowing it to reveal something of itself to me that I wouldn't always find pleasing and being OK with that, and still trying to find something interesting and different about it.

JB: In one project, I set out to build my own self-driving car – a process of hiring a car, sticking a bunch of webcams and sensors onto it, and building an app for my phone, which was velcroed to the steering wheel. With all those sensors, you're basically training a neural net to drive like you.
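
Bridle doesn't spell out his training setup here, but the approach he describes – logging camera frames alongside a human driver's steering and training a network to imitate them – is usually called behavioural cloning. A minimal sketch in Python (PyTorch; the network shape, names and signals are illustrative assumptions, not his actual rig):

```python
# A toy behavioural-cloning setup: camera frame in, steering angle out.
# Everything here is illustrative, not Bridle's actual hardware or code.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Tiny convolutional regressor: webcam frame -> steering angle."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # collapse to one vector per frame
        )
        self.head = nn.Linear(32, 1)   # single output: the steering angle

    def forward(self, frames):            # frames: (batch, 3, H, W)
        x = self.features(frames).flatten(1)
        return self.head(x).squeeze(1)     # (batch,) predicted angles

def train_step(model, optimizer, frames, human_angles):
    """One gradient step toward imitating the human driver's steering."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(frames), human_angles)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeated over hours of logged driving, steps like this are what "training a neural net to drive like you" amounts to: the network never learns rules of the road, only a statistical echo of one driver's habits.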

The other thing I made was a piece I called “The Autonomous Trap”, which was a salt circle. I haven't tried it with a real [self-driving car], but it should work, in that the car has to obey the rules of the road, so it can't cross that line. The day after I did it, someone sent me a video of a Tesla on a highway in California just after they'd salted the roads, and this Tesla was jumping lanes. So it obeys salt lines.

But subsequently, looking back on it as a visual artwork, the point was to make something that affected the car but was also visible to humans. What I got out of that project was finding not even what we have in common, but just what is visible, audible, sensible to both sides. We're starting to recognise this thing we call machine intelligence as something fundamentally different from human ways of thinking. Like, where's the common ground?

HH: There's an interesting overlap there with some of the work we did at Martin-Gropius-Bau. We staged a public training performance involving music and theatre and sets. Basically, I had the public repeating speech and song, and then we trained Spawn on the various responses. People had beer bottles in their hands, and we had them use their keys to make sharp transient sounds. Spawn really likes that sound – it's something she responds to – and then I respond to how Spawn responds, because I get this very information-rich response. Spawn was struggling with the reverb in that space, because the waveforms were becoming smoother, so the differences between the samples were more difficult to make sense of – but with sharp transients, the difference in wave shape is easy to see. So it was this area of us coming together, like you're saying: something the neural net is responding to and the human ear is responding to as well.
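
Herndon's point about reverb smoothing the waveforms can be illustrated with a toy measurement. A rough NumPy sketch (all signals and numbers invented for illustration): the peak-to-RMS “crest factor” of a sharp key click collapses once the same click is smeared by a reverb tail, which is roughly why reverberant samples were harder for Spawn to tell apart.

```python
# Toy illustration of reverb smoothing a transient. A dry click has a
# huge peak relative to its average energy (an obvious spike in the
# wave shape); convolving it with a decaying noise "tail" smears that
# energy over time and flattens the envelope.
import numpy as np

rng = np.random.default_rng(0)
sr = 22050
click = np.zeros(sr)
click[100] = 1.0                                    # key-on-bottle transient
tail = rng.standard_normal(sr // 2) * np.exp(-np.linspace(0, 6, sr // 2))
wet = np.convolve(click, tail)[:sr]                 # the same click in a hall

def crest_factor(x):
    """Peak amplitude over RMS: high = spiky, low = smeared."""
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

print(f"dry click crest: {crest_factor(click):7.1f}")   # very spiky
print(f"wet click crest: {crest_factor(wet):7.1f}")     # far flatter
```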

It was a little bit different with the voice model we were doing, but it was interesting to hear [Spawn] perform Jlin's music, because it was trying to make sense of what my human voice would do when presented with these very transient-heavy percussion sounds that my voice would never normally perform. I've tried to see if I could perform Jlin's track Godmother and, of course, I can't go that fast. It's this fun idea of a next version of my voice that's more flexible, faster, has infinite range and never needs to breathe. If we can get it to work, there's the possibility we would be able to perform through anything that we could model, which is wild. And it wouldn't necessarily have to be a vocal input. You could have a drummer performing through whatever model you imagine, like through a crowd. These are the kinds of sonic images I've been dreaming of for a long time. Like, “Wouldn't it be amazing if I could sing through the voice of a thousand people?” And we're inching towards that. You can do some of these things with digital tricks: doubling and chorusing. But I've hit the limit of what I can do with digital manipulation, and this opens up a whole new box to play with.
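
The “digital tricks: doubling and chorusing” she mentions have a simple core. A hedged NumPy sketch of one common approach (parameters and function names are assumptions, not her toolchain): layer copies of a voice behind slowly modulated delays, so each copy drifts against the original like an extra singer.

```python
# A minimal chorus/doubling effect: each fake "singer" is the same
# voice read through a slowly wobbling delay line, so the copies
# drift slightly out of time and tune with the original.
import numpy as np

def chorus(voice, sr, n_voices=8, base_delay=0.020, depth=0.004):
    """Layer modulated-delay copies of one voice to fake an ensemble."""
    rng = np.random.default_rng(1)
    n = len(voice)
    t = np.arange(n) / sr
    out = voice.copy()
    for _ in range(n_voices):
        rate = rng.uniform(0.1, 0.5)                     # LFO speed in Hz
        delay = base_delay + depth * np.sin(2 * np.pi * rate * t)
        idx = np.clip(np.arange(n) - delay * sr, 0, n - 1)
        out += np.interp(idx, np.arange(n), voice)       # resampled copy
    return out / (n_voices + 1)                          # keep level sane

sr = 22050
t = np.arange(sr) / sr
solo = np.sin(2 * np.pi * 220 * t)       # stand-in for a sung note
ensemble = chorus(solo, sr)              # eight phantom doubles
```

As she says, tricks like this hit a ceiling: the copies are all echoes of one performance, whereas a learned voice model can produce phrasing the original singer never could.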

JB: I'm particularly interested in this relationship you have with a technology and intelligence you constructed yourselves because everyone's obsessed with the idea of AI just at the moment we're starting to acknowledge the intelligence of other things more generally. We're more willing to grant intelligence to things that we've built ourselves than to non-human species, even though it's increasingly obvious that primates, cephalopods and trees have forms of intelligence that we should maybe be listening to. So how do we take this sudden decentring of the human with regard to AI? It's like a Copernican moment when suddenly we have to acknowledge there are other forms of intelligence present. And then suddenly go, "Oh shit, there have been incredible amounts of intelligence here all along, and we've completely ignored them.”

HH: That's actually one of the reasons why we chose the child metaphor [for Spawn]. We were looking at Donna Haraway's writing about the kind of kinship she feels in her non-human pet relationships. I think using this metaphor of the child confused a lot of people, because they thought we were saying Spawn was a human baby. We're like, "No, this is an inhuman child. It's a nascent intelligence that we're trying to raise as a community and impact at the foundational level." I don't know if you've read Reza Negarestani's Intelligence and Spirit. Basically, he taps into this idea of the history of human intelligence as a great collective project, and maybe rational thought sits somewhere outside of humanity in some way. We're able to tap into it with our own human intelligence, but other species are tapping into it in their own way; maybe it's something that's outside of ourselves. But I think you're right. It makes total sense that we would be coming to that conclusion while we're coming to this new obsession as well. It's like part of the public consciousness has opened up to accept those ideas now.

If we've learned anything from the last several decades, it’s that capitalism is not equipped to deal with issues of climate change or ethics

–HOLLY HERNDON

JB: There's a slightly guilty teleological bent to my thinking: I always think there's some weird purpose inside things. Purpose doesn't necessarily imply a smooth progression to some glorious future, but that there's some reason to the things that we pick up, and that they shape us deeply. I'm reading Kim Stanley Robinson's Science in the Capital series. It's full of wonderful stuff, including a passage where he's talking about these ancient axe heads, the tools of Homo erectus – the predecessor of our species, Homo sapiens. These tools have been described by certain anthropologists as part of the rapid evolutionary growth of the brain that got us from Homo erectus to Homo sapiens.

HH: Entrainment!

JB: Right. Tools guide evolution on a deep level. Perhaps one of the things humans have done recently is figure out that we can take conscious control over those tools. And if tools shape our evolutionary direction, then we should make more conscious decisions about what our tools do and actually decide that direction for ourselves. We can pick which way we want to go. Because our tools have been developed deliberately to grab our attention, to steal our time, to make us work harder – all of these things. Those are evolutionary redirecting steps. If we want to go in another direction, we have to make very serious choices about which tools we use and how we design them.

HH: I think this becomes increasingly complicated as we look at things like AI. You're saying you've built things from the ground up, but at a certain level there is an abstraction there. There's a reason why AI is called a black box technology: it's really difficult to understand every single decision in its inner workings. So whether we even have the capacity to understand where this is going is an open question.

JB: Which is why, going forward, our framework for interaction cannot be based on understanding. Our desire to know these systems intimately is a sublimated desire for control – as if, if only we understood everything better, through surveillance or whatever, then we would actually have control over them. Our desire for control is precisely the problem. We are never going to understand the world in its entirety. We're not going to be able to talk to trees in the way that we talk to one another. But we need to be able to live meaningfully and equitably with them and with all other natural systems. And the same is true of these insanely complex technological systems that we're developing. They extend beyond our own individual understanding, and that goes all the way up to the people who work for the companies developing them. They only understand small [parts of the systems]. They don't have a complete, compact understanding, and it certainly doesn't extend beyond the black box into the social systems in which these things are embedded. No one has the overview. So the aim cannot be a kind of deep, complete technical understanding, but a way of actually living amongst and alongside one another – and ways of forming consensus, acknowledging that we cannot know these things entirely.

HH: Yeah, I agree. I guess the question is how to shape policy and the direction of this development without that mastery. We've seen in the States over the last several years a slight shift in the dialogue around Silicon Valley ideology. I see this optimistic option, but then, even watching the Congressional hearings with Zuckerberg and just seeing the lack of technical understanding... that's when I become pessimistic. I wonder: without a broader political will, how will these things be shaped? Because right now they're being shaped by the market, by capitalism. And if we've learned anything from the last several decades, it's that capitalism is not equipped to deal with issues of climate change or ethics. So how do we find the collective will to shape these conversations from the protocol level, from the very foundational level, to steer the ship?

JB: The other thing is that we also need to start acknowledging the intelligence and cognition of the non-human – machines, plants, animals, trees and everything else. So the next logical step is that true cognitive diversity goes beyond the human, which means future governments have to include genuine consensus with things outside the human: machine intelligence as well as plants, animals and trees.

HH: It sounds super psychedelic, the implementation of that. [laughs]

JB: I'm like, “Here's a bunch of scientific research from the last 20 years. And so we need to start talking to trees.”

PROTO is out now on 4AD | New Dark Age is out now, published by Verso
