
Artificial Intelligence: Expanding Musical Dimensions


Artwork by Alicia Hayden


To some, the Symphonie Fantastique is a masterpiece composed by French musician Hector Berlioz in the 1800s. To others, such as those in the Sofia Symphonic Orchestra, the piece presents an opportunity to merge the classical with the modern and create a hybrid composition known as ‘Symphonic Fantasy in A Minor, Op. 24: I Am AI.’ This transformation was accomplished by merely one addition to the orchestral and production team - AIVA, a learning technology designed to assist with composing original and personalised music soundtracks. Remixing established musical pieces is one of the latest feats of artificial intelligence, which is gradually assuming the role of both partner and mentor to the human artist.

Artificial intelligence (AI) is rapidly becoming a critical character in many disciplines. From contemplating the ethical boundaries of machine learning to implementing robotic surgery in medicine, the applications and implications of AI continue to proliferate. Most recently, much attention has been given to the role of AI in creative endeavours and, specifically, its breakthroughs within the music industry. To obtain a deeper glimpse into technology’s transfiguring influence on music, we should examine how AI is affecting three critical areas of music-making: audio mastering, interactive composition technology, and music synthesis.

Last November, TORCH (The Oxford Research Centre for the Humanities) hosted a panel discussion on the revolutionising role of artificial intelligence in creative pursuits. This event explored the interweaving, perhaps symbiotic, relationship between technology and art. Panellist Emily Howard, Professor of Composition at the Royal Northern College of Music, discussed how her PRiSM team (Practice and Research in Science and Music) explored the ways AI can learn text as part of a melody. In particular, they trained an AI on 19th-century English text. ‘We interacted with this AI by prompting it so that it would generate further text, growing in sophistication over a period of weeks,’ Howard described.


In a video clip, Howard demonstrated the AI’s nearly flawless continuation of an operatic piece after the vocalist had stopped singing.

Mastering is a step in the audio post-production process that involves preparing and transferring audio to a data storage device, known as the master. The goal of mastering is to make the listening experience sound cohesive and balanced by skilfully adjusting a song’s sonic elements, such as frequency ranges, loudness, and spacing between tracks. This process requires acute listening and a specialised mastering expert. Nevertheless, automated options have been developed over recent years to offer professional mastering without the costs and work of human engineers.

New Realms of Music Mastering and Synthesis

One example is LANDR, a web service whose algorithm analyses a song uploaded by the user, applies a set of post-production tools, such as compressors to shape the song’s dynamic range, and exports the result.

One drawback of the system is its rigidity – if the user is unsatisfied with the end mix, the AI cannot make corrections in a timely manner. Despite these shortcomings, digital mastering systems remain the more affordable option for users who are unable to call upon a traditional mastering engineer.
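The two steps at the heart of such automated mastering - taming a track’s loudest peaks and then raising the whole signal to a consistent target level - can be sketched in a few lines. This is a didactic illustration only, not LANDR’s actual algorithm; the threshold, ratio, and target values are arbitrary assumptions.

```python
# Toy automated-mastering chain: a simple compressor followed by peak
# normalisation. Samples are floats in the range [-1.0, 1.0].

def compress(samples, threshold=0.6, ratio=4.0):
    """Reduce the portion of each sample's magnitude above the threshold."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

def normalise(samples, target_peak=0.9):
    """Scale the whole track so its loudest sample sits at target_peak."""
    peak = max(abs(s) for s in samples) or 1.0
    gain = target_peak / peak
    return [s * gain for s in samples]

track = [0.1, -0.95, 0.4, 0.7, -0.2]   # a tiny stand-in for real audio
mastered = normalise(compress(track))
```

A real mastering engine works on millions of samples, applies equalisation and multi-band processing, and measures loudness perceptually rather than by raw peak, but the shape of the pipeline - analyse, adjust dynamics, export at a consistent level - is the same.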

Another novel approach is interactive composition technology (ICT), a fluid concept that relies on an improvisational ‘conversation’ between computer and performer. ICT begins by allowing the computer program to listen to the performer, capturing abstract details such as scales, rhythms, harmonies, and general structure. The machine classifies this data to learn what the performer means by terms like ‘frantic’ or ‘sombre.’ This communication is a form of nuanced musical notation - not in the conventional sense of visual notes on a sheet, but rather in a way that allows the AI to guide the performer towards the intentions of the original composer. Computer and artist continuously present and interpret, blending into one performative entity. PRiSM’s operatic project and the AIVA technology are applications of this concept.

Music synthesis is another critical technique, referring to the generation of sound from scratch. AI’s profound impact on music synthesis is notably illustrated by a recent research project developed by Google.

In 2016, the company launched ‘Magenta,’ an endeavour that aims to expand the abilities of AI in creating songs, images, drawings, and other forms of art. Programmers trained the NSynth (neural synthesizer) algorithm, which uses neural networks to create sounds at the base level of individual samples. The Magenta team constructed the NSynth dataset by collecting a large assortment of musical notes sampled from single instruments across a range of pitches. NSynth also builds on WaveNet, a model that learns codes representing the space of instrument and human sounds and uses them to generate speech mimicking the range, timbre, and fluidity of human voices.

Dilemmas in the Age of AI

The impact of artificial intelligence has spurred the formation of an entire industry dedicated to its services towards music, including software such as IBM Watson Beat, Melodrive, and Amper Music. In applications such as LANDR, however, we see that these technologies are often a trade-off between time and quality, method and constraint.
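Generating sound ‘at the base level of individual samples,’ as WaveNet and NSynth do, means predicting each new sample from the ones before it. The toy loop below illustrates that autoregressive idea; where the real systems use a deep neural network conditioned on thousands of past samples, this sketch substitutes a hand-picked two-tap linear filter chosen so the loop continues a sine tone. All the specific values are illustrative assumptions.

```python
# Toy sample-by-sample (autoregressive) audio generation: each new sample
# is predicted from a fixed window of the previous ones.
import math

def generate(seed, coeffs, n_samples):
    """Extend `seed` by n_samples, each predicted from the last len(coeffs)."""
    signal = list(seed)
    for _ in range(n_samples):
        window = signal[-len(coeffs):]
        nxt = sum(c * s for c, s in zip(coeffs, window))
        signal.append(nxt)
    return signal

# Seed with the first two samples of a 440 Hz sine at a 16 kHz sample
# rate; this classic recurrence then reproduces the rest of the tone:
# sin((t+1)w) = 2*cos(w)*sin(t*w) - sin((t-1)*w)
freq, sr = 440.0, 16000
w = 2 * math.pi * freq / sr
seed = [math.sin(w * t) for t in range(2)]
coeffs = [-1.0, 2 * math.cos(w)]
audio = generate(seed, coeffs, 100)
```

A neural model replaces the fixed coefficients with a learned, nonlinear predictor, which is what lets it produce timbres and voices rather than a single pure tone - but the generation loop itself is the same one sample at a time.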

Many controversies surround the increasing use of AI, which is often regarded as an unwelcome, manufactured intrusion into areas that have long relied on dexterity and skill, such as agriculture. Some may view creations like ‘Symphonic Fantasy’ as an affront to beloved classical works. With each new AI instrument, questions arise: Who is creating the database? Whose intellectual property is the music, the code, the machine? There may be issues of cultural bias if machines are trained only on popular Western music; there could be liability dilemmas where the AI fails to recognize a copyright and mistakenly mixes in part of an existing song.

From a broader standpoint, consumers may also fail to consider the more sinister implications of AI in weaponry and visual recognition software. It is common to assume that an AI’s effects mimic the machine itself: inanimate, neutral, and objective. Unfortunately, ethical boundaries begin to blur when AI is constructed with a political or social motive. The musical world is not immune to these possibilities.

Despite its controversies and moral dilemmas, artificial intelligence continues to alter societal dimensions while inspiring constructive discussion on what the future of science will entail. In the realm of music, new techniques incorporate the precision and rhythmicity of machine learning with the emotion and dynamism of live performers. Technologies such as LANDR, Google Magenta, and WaveNet are gradually bridging the divide between human- and machine-generated sound.

Still, public unease threatens the technology’s permanence in music. Perhaps AI will be another adaptation in our macro-evolutionary timescale, or perhaps history will record it as a moral failing of humanistic values. In the current moment, however, AI is revolutionizing science and society in ways we have only just begun to explore.

Isra Hanain is studying for an MSc in Medical Anthropology at Green Templeton.