Visual Music Masters


Contents

Introduction 7

I. History
1 The Beginnings 10
2 Music and Early Twentieth-Century Painting 18
3 Luminous Instruments 28
4 Abstract Films 42
5 Music and Abstract Art 66
6 Electronic Art 76

II. Contemporary Developments
7 Animations and Videos 90
8 Installations 106
9 Performances 124
10 Language 136

Appendix
Synesthesia 142

References 150
Bibliography 156
Websites 168
Index of Names 175
Photo credits


Introduction

The association between images and music has a long history. It aroused the curiosity of many artists and thinkers of the past, stimulated artistic creativity in the twentieth century, and continues to be a topic of great interest today. This book aims to take stock of the situation, now that abstract audiovisual art, having reached maturity, is enjoying a season of renewed vitality. Analog and especially digital electronic technology has in fact favored the growth of this art form by providing access to a wide variety of new solutions, including flat screens, projections of colossal dimensions, stereoscopic images, spatialized sounds, real-time interactivity, and much more. There are different notions of what constitutes visual music. According to one view, it is exclusively the simultaneous creation of abstract images and music from a common source. According to a second view, either the music or the images are crafted first, and the other art form is created later, sometimes on the basis of specific correspondences between the visual and aural world, and sometimes through free associations. Thirdly, visual music can also be seen as an abstract expression that is not accompanied by sounds of any type, but is set in relation to musical concepts.

The chapters that follow reflect the different techniques of realization and presentation of the works, retracing the thought of the protagonists and the results of their artistic research. Visual Music Masters is subdivided into two sections: History and Contemporary Developments. The first part illustrates the evolution of visual music, from the early experiments of past centuries to the electronic artworks of the 1970s. The second part offers a comprehensive view of the recent expressions of audiovisual art, and ends with a chapter dedicated to the various aspects inherent in its language. There is also a scientific perspective on the link between images and music: synesthesia. This bizarre and multiform neurological phenomenon is discussed in the Appendix.



advantage of an ocular harpsichord was to create ephemeral colors, unlike those in a painting, which typically remain static. The Jesuit priest extended these concepts in several directions, hypothesizing a “music” of smell, of taste and of touch, in this sense acting as a true precursor of a broader concept of multimedia. He also claimed it could be possible to create a sound prism, able to subdivide a chord into its individual notes, but failed in his attempt to construct one. Father Castel’s idea of designing an ocular harpsichord was welcomed with enthusiasm in the Mercure de France by a certain Rondet, who also offered advice about how to realize it. Rondet thought one could build small windows illuminated by a light inside the harpsichord, each featuring a small screen that lifted when a key was pressed. The result could then be improved with the installation of various mirrors. It is not actually clear how seriously Castel took the idea of constructing this instrument, and at any rate he did not appear to want to build it in series, as would be conceivable today. In time, he started to doubt that violet was really the fundamental color of the chromatic scale, since it was obtained from the blending of red and blue. In 1734 he therefore began a series of experiments on colors that were published in the form of a letter to Montesquieu, who had urged him to share his ideas with the public. Considering the similarity between sounds and colors in a very literal way, Castel observed that there was a continuum of tones between two sounds an octave apart, but that only twelve notes were distinguishable; colors must then follow the same principle. Since red, blue and yellow were definitely part of the scale, in order to create a chromatic scale it would have been sufficient to find the intermediate colors between these hues. In short, he was able to envision a scale of twelve colors. Father Castel, however, did not take the analogy with music to its furthest possibilities; he did not create two modes, one major and one minor, as in music, but concentrated on a single, absolute scale. In addition, he proposed that the tonic corresponded to blue, the interval of a third to yellow, and the fifth to red. The reason for the correspondence between tonic and blue was, in his view, that blue was the fundamental color of the world, as demonstrated by the sky. Castel then used the different degrees of brightness of each color to establish a symmetry with the various musical octaves, so each color had twelve different shades.
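Spelling out the arithmetic implied here (each of the twelve colors of the scale paired with twelve degrees of brightness, one for each octave):

\[
12 \ \text{colors} \times 12 \ \text{degrees of brightness} = 144 \ \text{keys}
\]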

Fig. 1.3 - Johann Gottlob Krüger, project for an ocular harpsichord. In Miscellanea Berolinensia, 1743.



As a result, the ocular harpsichord was supposed to have 144 keys. In the same year, 1734, the Jesuit priest completed a first prototype that was presented to several notable people, including Montesquieu. Not much is known about it, except that its maker seems to have opted for an instrument with both musical and visual capabilities. A letter written by composer Georg Philipp Telemann in 1739 provides evidence as to the existence of this instrument, which the German composer seems to have appreciated very much. Telemann wrote: “To have it sound a tone, one touches a key with a finger and presses it, and thereby a valve is opened that produces the chosen tone. […] At the same time, when the key opens the valve to produce the tone, Father Castel has fitted silken threads or iron wires or wooden levers, which by push or pull uncover a colored box, or a ditto panel, or a painting, or a painted lantern, such that at the same moment when a tone is heard, a color is seen.”1 Apparently, a 1755 version of this instrument was equipped with colored glass panes illuminated by a hundred candles, while yet another version used ribbons of various hues. Unfortunately there are no illustrations of this instrument, but Castel’s ingeniousness is apparent even without them. In 1743 Johann Gottlob Krüger wrote a document entitled De novo musices, quo oculi delectantur, genere (On a New Type of Music, for the Pleasure of the Eyes), in which he claimed that although Castel’s instrument was able to represent a melody with colors, it could not in any way provide a visual rendering of the harmony. More importantly,

Krüger observed that while analogies between sound and light could exist, and the pleasure one drew from harmony of colors and harmony in music could be similar, the phenomenon remained exclusively psychological [Fig. 1.3]. While Father Castel searched for a presumed objective correspondence between physical phenomena, Krüger emphasized the perceptual and therefore subjective dimension of the similarity between sounds and colors. These different conceptions were reflected in the approaches of subsequent artists, who can be summarily categorized as absolutists and relativists. The work of Ernst Chladni is particularly relevant in this regard. Continuing the experiments that Robert Hooke had carried out in the seventeenth century, in 1787 Chladni published the results of his research in Entdeckungen über die Theorie des Klanges (Discoveries on the Theory of Sound). In this book he illustrated the objective relationship between sounds and the visual patterns that formed on a metal sheet strewn with sand and made to vibrate. Indeed, the sound frequencies created visual patterns, with the grains of sand arranging themselves on the

Fig. 1.4 - Ernst Chladni, sound patterns, 1787



instruments, for instance yellow for trumpet, orange for viola or blue for cello [Fig. 2.4]. In addition, he catalogued his pictorial compositions into melodic and symphonic categories, according to their complexity, bearing witness to the marked musicality of his thought. According to the Russian artist, the universe, conceived in a metaphysical sense, was composed of vibrations that manifested themselves through color and sound: “Color, which itself affords material for counterpoint, and which conceals endless possibilities within itself, will give rise, in combination with drawing, to that great pictorial counterpoint, by means of which painting also will attain the level of composition and thus place itself in the service of the divine, as a totally pure art.”4

Kandinsky was in contact with the composer and painter Arnold Schoenberg, whose music and portraits he appreciated. In 1911, after attending a concert of the Austrian composer’s atonal music, he painted Impression 3 (Konzert) (Impression 3 [Concert]) [Fig. 2.5]. Unlike other artists, Kandinsky was attracted to contemporary music, breaking ties with the past and searching for a new artistic frontier. His greatness was also manifested in his will to move radically away from classical values and seek new horizons. Around 1910 František Kupka, who was undoubtedly acquainted with synesthesia, showed a marked interest in the application of music and dynamism to painting. He stated: “By using a form in various dimensions and arranging it according to rhythmical considerations, I will achieve a ‘symphony’ which develops in space as a symphony does in time.”5 In addition, according to Kupka “the public need to add to the action of the optic nerve those of the olfactory, acoustic, and sensory ones.”6 At the same time, this artist was also conscious of the inherent differences between music and images, and clearly advocated a sophisticated interpretation of the issue: “Chromatism in music and musicality of color has validity only as metaphor.”7 As early as 1911, the futurists Umberto Boccioni, Gino Severini and Luigi Russolo, the latter also the inventor of several noise-generating musical instruments, created paintings with explicit references to music, for example, respectively, La strada entra

Fig. 2.5 - Vasily Kandinsky, Impression 3 (Konzert) (Impression 3 [Concert]), 1911

Fig. 2.6 - Anton Giulio Bragaglia, Dattilografa (Typist), 1911
Fig. 2.7 - Giacomo Balla, La mano del violinista – Ritmi del violinista (The Violinist’s Hand – Violinist’s Rhythms), 1912





3 Luminous Instruments

During the twentieth century visual music definitely took off, primarily following two paths: that of abstract cinema, which will be considered in the following chapter, and that of audiovisual concerts featuring instruments that were the extension of the color organ projects. The evolution of technology allowed for more exciting solutions than in the past, which also led to richer artistic results. There were two approaches to audiovisual concerts, which sometimes intersected: one emphasized audiovisual expression, while the other involved moving abstract images devoid of sound. According to the artists who used the latter approach, the visual element was the music, and therefore adding music was unnecessary. Between 1913 and 1923 the painter Morgan Russell designed the Kinetic-Light Machine, with which he intended to create abstract compositions that would freely evolve over time, without any specific correspondences. According to Russell, projections had to be accompanied by slow music that would express the gradual changes of light. In addition, Russell maintained that the two elements, musical and visual, should not be in total synchrony but rather in dialogue, an avant-garde point of view also shared by Man Ray and Hans Richter. At a later date, his friend and colleague Stanton MacDonald-Wright also created a Synchrome Kineidoskope. In 1916 Charles F. Wilcox patented a method of musical composition based on color, specifically on the relation between a sound’s and a color’s vibrations. A few years later, Alexander Burnett Hector conceived an audiovisual instrument comprising a considerable number of light bulbs immersed in colored aniline totaling 40,000 W, with which he performed various concerts in Australia. Vladimir Baranoff-Rossiné, who created abstract paintings as early as 1910 and was part of the Russian avant-garde, introduced an original variant. In the early 1920s he invented a particular instrument, the Piano Optophonique, that “played” a disc of colored glass through which a ray of white light passed, further modulated by prisms, lenses and mirrors. The result was a constantly changing projection of colors, accompanied by sounds [Figs. 3.1 and 3.2]. Unlike the creators of color organs, Baranoff-Rossiné thought that the issue was not to represent notes or sounds, but to invent a different way of making art, namely of creating moving abstract images. According to Baranoff-Rossiné, establishing arbitrary

Fig. 3.1 - Vladimir Baranoff-Rossiné, discs for Piano Optophonique, circa 1920–24



correspondences was not appropriate, not least because the results achieved with this approach had so far failed to convey the emotional content of music. In the following years and up until 1939, Baranoff-Rossiné gave numerous concerts in Russia and in France, in which another element of analogy with music was continuous reinterpretation. In 1919 pianist Mary Hallock-Greenewalt invented an instrument called the Sarabet, used together with music, which consisted of a sliding rheostat that could control the reflection of seven colored lights. She too maintained that creating precise associations between sounds and colors was not important, since personal influences were decisive and it was therefore impossible to conceive a language that could be universal. Hallock-Greenewalt was among the first to create hand-painted films, although these were not made to be projected but rather to be played through the Sarabet. In addition, she devised a dedicated notation system. In her 1945 book Nourathar. The Fine Art of Light-Color Playing she related her experiences in audiovisual art, which she denominated nourathar, a term that combined the Arabic nouns nour (light) and athar (essence). In this text Hallock-Greenewalt concluded that her art was also useful in that it improved the viewer’s health.1 In 1920 painter Adrian Bernard Leopold Klein invented an instrument to project colored lights. This projector was activated by a two-octave keyboard and was based on Klein’s own theory of colors, which was illustrated in his book Colour Cinematography

Fig. 3.2 - Vladimir Baranoff-Rossiné, Piano Optophonique, 1922–23



Fig. 4.24 - Stan Brakhage, The Dante Quartet, 1987



Delicacies of Molten Horror Synapse (1991) and Black Ice (1994), which features a very effective zoom effect. He also wrote several books, among them Metaphors on Vision (1963), A Moving Picture Giving and Taking Book (1971) and Telling Time: Essays of a Visionary Filmmaker (2003), which was published posthumously. More recently, Brakhage created three films with Phil Solomon: Elementary Phrases (1994), Concrescence (1996) and Seasons… (2002). For his part, Solomon made several films on his own, for example The Secret Garden (1988) and Psalm III: “Night of the Meek” (2002), characterized by a very personal style. An entirely different method was followed by Peter Kubelka, an artist who created several experimental films and who expressed himself in the most varied ways, for example through cuisine, music, architecture and writing. In 1958 he designed the Invisible Cinema, a soundproof movie theater with solid black armchairs, walls, floors and ceiling, and partition panels between viewers that forced them to focus their attention on the screen. Kubelka’s work was notable especially for his original approach to this medium, which was referred to as “metric film.” He in fact measured each part of a film in relation to the work’s overall duration, and was particularly careful to make every part communicate with all the other parts. For example, in Adebar (1957) the film segments were thirteen, twenty-six and fifty-two frames long. After Adebar, the complexity of Kubelka’s expression increased remarkably, to the point where he conceived segments of a single frame. According to the artist, each frame had to play an essential part in the film, and if

Fig. 4.25 - Peter Kubelka, Arnulf Rainer, 1960



the volume was changed by modifying the exposure. Harmony and counterpoint were obtained through multiple exposures and other more or less complex techniques. Shortly thereafter, Evgeny Sholpo and the composer Georgy Mikhailovich Rimsky-Korsakov, the grandson of the more famous Nikolay, developed a musical instrument called the Variophone [Fig. 5.4], constructing it after having analyzed several sounds with an oscilloscope. The Variophone worked slightly differently from the preceding instruments, as it featured a series of paper discs with geometric patterns. The discs rotated in synchrony with the film, and the result was a wider range of timbres compared to the method used by Avraamov. Around the same time, Boris Yankovsky invented the Vibroexponator, also based on drawings, but in this case animated and not recorded frame by frame. For his part, Nikolai Voinov created the Nivotone, which apparently was the most flexible instrument at the time in terms of sound generation. Edwin Emil Welte implemented yet another idea with his Licht-Ton Orgel (Light-Tone Organ, 1936 [Fig. 5.5]). This bizarre instrument functioned with optical discs and followed a principle that had already been partly explored in 1927 by Pierre Toulon with the Cellulophone. Each disc contained eighteen waveforms and thus made it possible to produce different timbres for each tone. In the early 1930s Arnold Lesti and Frederick Sammis invented a Radio Organ of a Trillion Tones. Also notable was Sammis’ Singing Keyboard (1936), which could record sound waves on 35 mm film, and then activate and play them via a keyboard. Norman McLaren also created a series of animations in which the sound was obtained through optical synthesis. In particular, the first works in which he intervened directly on the celluloid’s optical soundtrack were Rumba (1939), a film never completed, Allegro (1939), Dots (1940) and Loops (1940). McLaren systematically perfected this technique and further elaborated Pfenninger’s methods, which he knew well, finally covering the drawings with grids in such a way that he could obtain not only different timbres, but also portamenti, glissandi, vibratos and even microtones. Among the notable films made using this sound production technique were Now is the Time (1951), which was also stereoscopic, Neighbours (1952), Two Bagatelles (1952), Twirligig (1952), A Phantasy (1953), and especially the aforementioned Synchromy (1971 [Fig. 5.6]). John Whitney Sr. and his brother James invented a truly original system of sound production using pendulums, whose oscillations, by nature sinusoidal, were recorded onto a film’s optical track, which generated sounds when the film was played back [Fig. 5.7]. The Whitney brothers carried

Fig. 5.4 - Evgenij Sholpo, discs for Variophone, 1930
Fig. 5.5 - Edwin E. Welte, a disc for Licht-Ton Orgel (Light-Tone Organ), 1936



out numerous experiments and perfected their technique to the point of achieving a continuum between rhythm, pitch and timbre: a fundamental concept of contemporary music. Other artists then continued to generate sounds with the aid of optical media. In 1957 Kurt Kren created Versuch mit synthetischem Ton (Test) (Experiment with Synthetic Sound [Test]), scratching the film’s optical track. In 1958 Yevgeny Murzin developed a synthesizer called ANS, based on five optical discs [Fig. 5.8]. Each disc contained a hundred and forty-four tracks, so that one could obtain sounds formed by seven hundred and twenty sinusoidal components. Subsequently, Barry Spinello also made a foray into the creation of “optical” sounds. Following his predecessors’ techniques and expanding upon them, between 1967 and 1971 he completed the films Sonata for Pen, Brush and Ruler, Soundtrack [Fig. 5.9], and Six Loop-Painting without using a camera or sound technology. The methods Spinello used were drawing directly on the celluloid and manipulating the audio track with self-adhesive acetate or tape. Spinello also established that in order to obtain effective rhythms, one had to work with multiples of two (2-4-8-16-32) or three (3-6-12-24) frames. In the more strictly musical context, some composers stood out for their unconventional ideas and measured themselves against the visual world in different ways. Olivier Messiaen was an undoubtedly unorthodox artist who, like Scriabin, had synesthetic experiences. He was exposed to

Fig. 5.6 - Norman McLaren, Synchromy, 1971



hands on a full-fledged audiovisual instrument. Theoretically it would have been possible to generate images and music simultaneously, but in practice the physical distance between the machines thwarted the realization of such a project. In time, Spiegel abandoned improvisation and returned to composing. Michael Scroggins was one of the members of Single Wing Turquoise Bird, and like Wiseman also studied video with Paik and Abe. His interactive performances and abstract videos in immersive environments were presented in some of the most important international museums and venues. Study No. 6 (1983), Power Spot (1989), and 1921 > 1989 (1989) are his most prominent films. Between 1974 and 1979 Robert Watts, Bob Diamond and David Behrman created an audiovisual installation called Cloud Music. In this work, a black and white camera recorded the sky, a video analyzer identified the change in brightness at six points of the image and the corresponding passage of clouds, and sounds were generated by a synthesizer built specifically for the occasion. Each variation in brightness produced a harmonic change in the music. In 1977 Gary Hill made Electronic Linguistic, a black and white video in which he visualized electronic sounds in an original and often asynchronous manner. Other works by Hill related to language. During the 1960s and 1970s electronic art developed remarkably, shifting increasingly from the analog to the digital domain. Due to the technological limitations of the time, however, digital animations could not be visualized directly and had to be transferred to film in order to be shown; unlike analog electronic images, they were therefore hybrids. Among the first abstract films made with a computer, besides those of John Whitney Sr., was the remarkable Cibernetik 5.3, made by John Stehura between 1965 and 1969 [Fig. 6.9]. The digital images, and others obtained photographically, were first rendered in black and white and then printed in color on celluloid, with one pass for each primary color. Stehura himself wrote the image-generating program in the FORTRAN language. Initially Stehura wanted to use the same algorithms to produce the music, but later chose

Fig. 6.9 - John Stehura, Cibernetik 5.3, 1965–69



selections from Tod Dockstader’s Quatermass (1964). Stan VanDerBeek was also drawn to computers, and collaborated with John Cage, Merce Cunningham, Allan Kaprow, Claes Oldenburg and Nam June Paik. Among other things, he designed and built the Movie-Drome (1963–65 [Fig. 6.10]), a hemisphere for the screening of films, and envisioned the use of various forms of technology in the context of art. This artist considered computers as amplifiers of the human imagination, and telephone lines as a tool to create artistic works at a distance, possibly with the participation of the public. In both cases his vision was admirable, anticipating the development of computer networks and their implications in the field of interactivity. Many of VanDerBeek’s works do not belong to the repertoire of Abstractionism, but rather to that of Surrealism or Dadaism. Beginning in the second half of the 1960s, however, he created numerous abstract digital films, starting with Collideoscope (1968) and a series of eight animations called Poem Fields (1967–69), made with Ken Knowlton. The Poem Fields featured long sequences, which were originally in black and white and later colored through an optical process. Other films followed, including: Man and His World (1967); Moirage (1967), a study on optical illusions; Ad Infinitum, for three screens, with synthetic and microscopic images; Symmetricks (1972), another electronic work, in black and white, accompanied by Indian music and made during a residency at MIT’s Center for Advanced Visual Studies; Who Ho Ray No. 1, based on the shapes

Fig. 6.10 - Stan VanDerBeek’s Movie-Drome, 1963–65

of sound patterns; and Euclidean Illusions (1980), developed together with Richard Weinberg while Stan VanDerBeek was artist-in-residence at NASA, and featuring music by Max VanDerBeek. Beginning in the late 1960s, the artist Lillian Schwartz collaborated with various avant-garde musicians including Max Mathews, Jean-Claude Risset and Pierre Boulez, and was part of the organization Experiments in Art and Technology. Still very active today, Schwartz is a versatile artist whose works are often seen in 3D and are based on a wide variety of techniques, one of which, called 2D/3D pixel shifting, she invented herself. Schwartz created several abstract 16 mm films accompanied by electronic music, including: Pixillation (1970), with music by Gershon Kingsley and digital images, together with more traditional techniques; UFOs (1971), with music by Emmanuel Ghent; Apotheosis (1972), produced with images



derived from medical examinations and featuring electronic music by F. Richard Moore; Mutations (1972), based on music composed by Jean-Claude Risset in 1969 and made with footage of growing crystals, digital images and lasers; and Papillons (Butterflies, 1973), made with John Chambers and based on the visualization of mathematical functions. Among her more recent animations are Before Before (2012), EXALT (2013), and The Beauty of Excess (2013). Her award-winning films have been screened the world over. Also among the first artists to use computers was Larry Cuba, who in 1974 created the animation First Fig. After collaborating with John Whitney Sr. on Arabesque, Cuba attracted attention with other animations: 3/78 (1978), Two Space (1979 [Fig. 6.11]) and Calculated Movements (1985), the first of which he produced with the then popular programming language GRASS, written by Tom DeFanti. In this regard, it is worth noting that in this period the appearance of affordable instruments was accompanied by an increased availability of high-level programming languages, that is, languages closer to natural language, as well as easy-to-use interfaces, making computers much more accessible. 3/78 and Two Space were created in black and white and were based on sets of points that spread in two-dimensional space according to the artist’s choreography. In keeping with the Californian trend of that time of looking towards Asian cultures, these animations were accompanied, respectively, by traditional Japanese and Javanese music. Calculated Movements was made with a different language, called ZGrass, and a technique based on volumes rather than dots. It employed two shades of gray along with black and white, a step ahead compared to the early years, when only a few could access a wide array of colors. Conditions were quite different when artists had access to more powerful machines, such as those in the computing centers of universities, which featured millions of colors. A case in point is the work of Yoichiro

Fig. 6.11 - Larry Cuba, Two Space, 1979



Kawaguchi, who was able to create particularly sophisticated and colorful animations. Kawaguchi’s animations introduced another innovative feature: three-dimensionality. The works Growth (1982) and Morphogenesis (1984) were the first in a series that continued through the years, and constituted an initial step toward a new genre, that of 3D animation, based on the perspective projection of a virtual world generated according to the rules of Euclidean space. Kawaguchi created complex abstract images that simulated viscous liquids and evolving blobs. It is worth noting in this regard that the interest in organic forms and in the interaction between art and biology still constitutes one of the most stimulating branches of contemporary research. A pioneer in many ways, Kawaguchi was also interested in the relationship with music, although not in the absolute sense pursued by other artists, as witnessed by the DVD Luminous Visions (1998 [Fig. 6.12]), a later production of a psychedelic nature featuring music by Tangerine Dream.

Fig. 6.12 - Yoichiro Kawaguchi, images from Luminous Visions, 1998



Fig. 7.3 - Brian O’Reilly, Half-life I, 1999



manipulated his film footage to create Half-life (1999 [Fig. 7.3]), Sculptor (2001), Volt air (2001–2003), Fluxon (2002), Nanomorphosis (2003) and Pictor alpha (2003). Roads and O’Reilly are also active as performers and more recently presented Flicker Tone Pulse (2009), a collection of seven new audiovisual works. O’Reilly also collaborates with other artists, among them Darren Moore, with whom he formed the duo Black Zenith. This duo has staged performances and presented installations mostly in Southeast Asia, Japan and Australia. Notable by Black Zenith are NOCTURNAL BLUE (2012) and Indefinite Divisibility (2013). Earlier works by O’Reilly include: octal_hatch (2003), with synthetic sounds generated through the UPIC system invented by Iannis Xenakis; Scan Processor Studies (2006–2007), made with material created by Woody Vasulka with the Rutt/Etra Scan Processor; Weather Mechanics (2007), a multiscreen performance with the Sandin Image Processor; Spectral Strands (2007), a series of videos based on the performances of violist Garth Knox, featuring compositions by Giacinto Scelsi, Gérard Grisey, Salvatore Sciarrino, Michael Edwards and Kaija Saariaho, which partly relate back to the Vasulkas’ Violin Power; and transient (for Koji Tano) (2012–13), featuring sounds by Zbigniew Karkowski. A somewhat similar approach was pursued by certain artists who directly visualized sound. Between 1992 and 1995 Jøran Rudi created When Timbre Comes Apart [Fig. 7.4], an interpretation of sound waves conceived

Fig. 7.4 - Jøran Rudi, When Timbre Comes Apart from Routine Mapping, 1992–95



Fig. 7.18 - Marcel Wierckx, Zwarte Ruis Witte Stilte (Black Noise White Silence), 2006



a non-synchronic structure between the visual and aural elements, and A Thousand Scapes (2009). By contrast, in the video Field Notes From a Mine (2012), images and sounds are generated from data gathered in Africa. In a completely different vein is Walzkörpersperre (2013), made in collaboration with Gert-Jan Prins, in which luminous beams driven by synthetic sound are projected onto a vertical cement wall. Also by van Boven are the performances Point Line Cube Cloud (2008), Shadow Optics (2011), and On Growth and Data (2013), all in black and white. Lastly, Black Smoking Mirror (2014), a performance centered on the principle of reflection, features a flammable screen marked by the burns of a laser beam. Black Smoking Mirror, Walzkörpersperre and the experimental project Deep Space

Ceramics together constitute the trilogy Noise & Matter. Other noteworthy artists include Rafael Balboa, Liubo Borissov, David Brody, Stephen Callear, Chris Casady, Alison Clifford, Phil Docken, Barbara Doser, Jeffers Egan, Jim Ellis, Brian Evans, Thorsten Fleisch, Harvey Goldman, Joe Gilmore and Paul Emery, Michel Gagné, Ian Helliwell, Andrew Hill, Eytan Ipeker, Hiromi Ishii, Wilfried Jentzsch, Elsa Justel, Jaroslaw Kapuscinski, Takashi Makino, Anne-Sarah Le Meur, Lia, Yoel Meranda, Bonnie Mitchell, Scott Nyerges, Simon Paine, Rey Parla, Richard Reeves, Billy Roisz, Tim Skinner, Vibeke Sorensen, George Stadnik, Marcelle Thirache, Shawn Towne, Chiaki Watanabe, Jennifer West, Andy Willy, Amy Yoes and Dmitry Zakharov.

Fig. 7.19 - Martijn van Boven, Interfield, 2007



Fig. 8.3 - Granular~Synthesis, FELD, 2000

Fig. 8.4 - Ulf Langheinrich, HEMISPHERE, 2006



(2004), light projects to be applied on an architectural scale. Camera Musica (2000), by Gerhard Eckel, is a CAVE-like stereoscopic virtual environment with eight-channel spatial sound and a vibrating floor. In this work the music is linked to the viewers’ movements as they cross the partially transparent virtual cubes. In 2001 Eckel presented the installation LISTEN. Also of interest are the artworks by Squidsoup, for instance the stereoscopic audiovisual composition Driftnet I (2007 [Fig. 8.5]), the result of research previously seen in altzero from 1999, in which users are invited to fly by moving their arms like the wings of a bird to navigate the three-dimensional space. Freq : 2 (2006) is part musical instrument and part composition, a work in which the shadows of viewers/participants are transformed into audible sounds. The Stealth Project (2008) and Discontinuum (2009) were both created on the 3D LED grid system NOVA, while Surface (2010) uses the Ocean of Light hardware to fashion interactive sculptures based on light and sound. Scapes (2011) is a work in five movements with ambient music by Alexander Rishaug, in which each movement presents volumetric representations of sound using a three-dimensional grid of lights. Volume 4096 (2012) features an immersive ensemble of lights as well, while Submergence (2013) is a hybrid environment where virtual and physical worlds coincide. In 2014 Squidsoup designed Wavelight and then Aeolian Light, which continues the path started with Submergence, followed in 2015 by

Fig. 8.5 - Squidsoup, Driftnet I, 2007

Fig. 8.6 - Carsten Nicolai and Marko Peljhan, polarm [mirrored], 2010



Fig. 8.15 - Edwin van der Heide, Saltwater Pavilion, 1997–2002

Notable among her numerous audiovisual works are: Untitled (1994), Smoke Screen (1995), SWELL (1995), Double Take (1996), Blue Blow (1997), Happy Happy (1997), A Sailor’s Life is a Life for Me (1998), Sun Porch Cha-Cha-Cha (1998), The TV Room (1998 [Fig. 8.14]), Phase = Time (1999), SpaceGhost (1999), Anything You Can Do (2000), Loop (2000), Stiffs (2000), X-Room (2000), One saw; the other saw (2001), sin(time) (2001), and Starry Eyes (2002), featuring a 20 m × 10 m LED screen. Other later works, for example The Wreck of the Dumaru (2004), Sharpie (2009), Premature (2010) or Pillow Flight (2014), do not feature sound. Water Pavilion (1997–2002) was conceived by Edwin van der Heide in collaboration with NOX, Lars Spuybroek, Oosterhuis Associates, Victor Wentink and Arjen van der Schoot. It is an installation in which lights, projections, sounds and the building itself interact with each other. Subdivided into Freshwater Pavilion and Saltwater Pavilion [Fig. 8.15], this work features sixty independent loudspeakers and aims to achieve a sonic architecture rather than simply sounds set inside a space. The music has a generative nature and is therefore always different. Also notable by van der Heide are Laser/Sound Performance (2004), LSP (2012), again a laser-based performance where image and sound are equally important, and the multi-display audiovisual environment DSLE -3- (2012). In addition to producing visuals for such bands as U2, The Chemical Brothers and Massive Attack, as well as collaborating with the world of fashion, the collective

Fig. 8.16 - United Visual Artists, Volume, 2006



UVA (United Visual Artists) has created numerous installations. Kabaret’s Prophecy (2004) was a curving LED wall used every night by different VJs in the eponymous London nightclub; the visual patterns created by the wall were the club’s main light source. Interactive Installation Prototype (2006) combines an LED array and a 3D camera that senses the gestures and motion of the user, affecting the images and sounds. Volume (2006 [Fig. 8.16]), a field of forty-eight luminous, sound-emitting columns, was made with the musical contribution of Massive Attack and was shown for the first time in 2006 in the garden of London’s Victoria and Albert Museum. Here too, the work reacts to the movement of visitors. In 2008 UVA produced Contact, a responsive floor surface that interacts with users, generating audiovisual forms. Chorus (2009) comprises a series of pendulums that swing inside a space emitting lights and sounds, and is thus simultaneously an installation, a composition, and a musical instrument. Orchestrion (2011), a collaboration with composer Mira Calix, is a 64 cubic meter structure that is both a platform for

performance and a stand-alone sound and light sculpture. The idea of this work then evolved into Conductor, in which light forms are combined with an original score by Scanner, and finally culminated in Origin (2011). Momentum (2014) is an installation that UVA created especially for the Barbican’s exhibition space The Curve in London, transforming it into a continually evolving spatial instrument in which pendulums project lights and shadows onto the surrounding environment. Great Animal Orchestra (2016) is an immersive installation that celebrates the work of Bernie Krause, who over the years has recorded the sounds of over fifteen thousand individual species. Other interesting abstract installations by UVA are instead without sound; among them are Canopy (2010), Always/Never (2012), Fragment (2013), Grey Area (2013), Continuum (2013), Vanishing Point (2014), Blueprint (2014) and Formation (2014). LIFE – Fluid, Invisible, Inaudible… (2007 [Fig. 8.17]), by Ryuichi Sakamoto and Shiro Takatani, features a 3 × 3 grid of square acrylic aquariums suspended 2.4 m above the ground and containing water and

Fig. 8.17 - Ryuichi Sakamoto and Shiro Takatani, LIFE - Fluid, Invisible, Inaudible..., 2007



an artificial fog machine. Images projected from above and modulated by the kinetic patterns of the mist and the water are reflected onto the floor. Speakers are also affixed to the aquariums: the outcome is a continually evolving audiovisual space in which each element is at once a focal point and part of a harmonious whole. Takatani is also the author of several installations and video performances. Julius Stahl is the creator of numerous installations featuring minimal visual and sound elements, rigorously in black and white. Among these are Skizzen (Sketches, 2009), made with wires, paper, light and sounds, and Flaechen (Areas, 2009–10 [Fig. 8.18]) and Flaechen 2 (Areas 2, 2010), in which resonating walls of vertical wires (and paper in Flaechen 2), together with lights, create elegant convolutions and a play of light and shadow. Quader (2014) is in some ways similar, composed of eight hundred and thirty-six wires onto which sine tones are transferred, creating a moving structure of visible waveforms. By the same author are Holz (Wood, 2008), Feld III (Field III, 2011), Wellenfelder (Wavefields, 2012), Fragment (2013), Cluster (2014), Linien (Lines, 2014), Cluster (2016) and Feld V (Field V, 2016), eight installations using resonant objects. Untitled (2014), Licht I (Light I, 2016) and Licht II (Light II, 2016) are luminous objects that highlight the relationship between sound and space. Photogramme I (2012), Rauschen (2013), Photogramme II (2014), Phonographien II (2014) and Untitled (2014) are other installations in which shadows are projected on resonating objects.

Fig. 8.18 - Julius Stahl, Flaechen (Areas), 2009–2010



The collaborative work by Wolfgang Bittner, Lyndsey Housden, Yoko Seyama and Jeroen Uyttendaele, Plane Scape (2010 [Fig. 8.19]), presents a forest of elastic rubber bands stretched from the ceiling to the floor, and focuses on the relationship between image, music and space. The space defined by the rubber bands also acts as a three-dimensional screen that reflects the projected images. The installation is accompanied by sound piped through six speakers. Also by Yoko Seyama are the project Sentient Architecture (2009), again with Lyndsey Housden, and the installation Light work #6: In Soil (2009), which presents images comparable to Thomas Wilfred’s, but featuring a sometimes irregular and vibratory movement. Also by Seyama is SAIYAH _ Scene E (2014), a synesthesia-evoking light performance, with music by Benjamin Staer. The Prayer Drums (2010 [Fig. 8.20]), by Louis-Philippe Demers, Armin Purkrabek and Phillip Schulze, is a work influenced by scenography and inspired by the prayer wheels sometimes seen in Buddhist monasteries, and is composed of a wall comprising more than a hundred tiles. When a tile is spun, it becomes colored and generates a sinusoidal sound or noise; given the number of tiles, the soundscape generated can be highly complex. The system also interprets the visitors’ movements, while a multichannel system diffuses the sounds in space. Robert Seidel, creator of the videos E3 (2002), with music by Michael Engelhardt, and _grau (2004), featuring music by Heiko Tippelt and Philipp Hirsch, has also

Fig. 8.20 - Louis-Philippe Demers, Armin Purkrabek and Phillip Schulze, The Prayer Drums, 2010
Fig. 8.21 - Robert Seidel, vellum, 2009


Fig. 8.19 - Wolfgang Bittner, Lyndsey Housden, Yoko Seyama and Jeroen Uyttendaele, Plane Scape, 2010


Sherwin interprets the projector as a performative instrument, bringing cinematographic screening closer to the condition of live music. Ken Jacobs also created performances with projectors, though his images are not abstract. An exception was Celestial Subway Lines/Salvaging Noise (2005), a phantasmagoric live performance, in collaboration with musicians John Zorn and Ikue Mori, in which Jacobs used a modified version of the magic lantern. In his many performances André Gonçalves has often used electronic circuits, electric motors and various controlling devices. Most interesting among them is for Super 8 Projector and Analog Synthesizer (2009 [Fig. 9.4]), a performance-installation in which a Super 8 projector was modified so that the amplitude of the sound signal controls both light intensity and projection speed: each event is different and is based on a destructive process of the film.

Contemporary technology is more frequently used, however, and there are innumerable examples of artworks featuring it. Beginning in the 1980s, Benton C Bainbridge conceived numerous videos and live performances using optical devices, analog and digital electronic instruments, as well as software written specifically for real-time image management, allowing him to mix and modify video footage in real time, drawing on cues from the music and the environment, and creating captivating visual works. In 1998, with THE POOOL, a group that included himself and videomakers Angie Eng and Nancy Meli Walker, he conceived the performances Parascape and is warm, presenting the former at The Kitchen and the latter at the Whitney Museum in New York. Noteworthy among his numerous later performances are: NNeng (1999); live @ Test-Portal 2003; El Malogrado (The Loser, 2004), for three video channels; and live @ sickness (2004), made with the Rutt/Etra Scan Processor. Since 2007, Bainbridge has been active as visual designer for “One Step Beyond” at New York’s Museum of Natural History, transforming the museum’s hall with projections, mirrors, lights and laser beams. His more recent works are Rainbow Vomit (2010), with projections on a cubic structure designed by MOS; Dialed In (2011), with percussionist Bobby Previte; Daytime (2012), with V Owen Bush and Steve Nalepa; Contemporal Collage (v. 2) (2014 [Fig. 9.5]); Horizon (2014), with Sofy Yuditskaya; Gems (2014); Ghost Komungobot (2014), with the musician Jin Hi Kim; and

Fig. 9.4 - André Gonçalves, for Super 8 Projector and Analog Synthesizer, 2009



Fig. 9.5 - Benton C Bainbridge, Contemporal Collage (v. 2), 2014 (detail)

Moving Paintings (2015) and the generative installation Observatory / Lisa Joy (2016). D-Fuse is an artists’ collective founded by Michael Faulkner in the mid-1990s that has also produced numerous installations and performances. D-Fuse released several DVDs, including D-Tonate (2003), featuring a compilation of films whose rough edits had been previously sent to musicians worldwide, inviting them to create their soundtrack. Light Turned Down (2001), composed in collaboration with Robin Rimbaud (aka Scanner), is centered on the

relationship between light and sound. In 2004, Scanner collaborated with the Alter Ego Ensemble on a live remix of works by the composer Salvatore Sciarrino. The Desert Music (2006 [Fig. 9.6]) accompanies the eponymous composition by Steve Reich with projections on a double screen, while D-Fuse also provided visuals for the musician Beck during the Beck World Tour. Data Flow (2009) is a performance exploring hidden data on the Internet, while Particle #2 (2010) is concerned with processes of abstraction starting from urban landscapes.



and natural/urban landscapes; Bleeding (2012), a synesthetic and synchronic performance for two screens, one of which is painted with phosphorescent pigments; and Punto Zero (Zero Point, 2007–14), a performance that develops from initially simple to highly complex and structured correlations, defining a full-fledged sensory grammar. More recently Otolab created Fields (2015 [Fig. 9.8]), exploring a universe of metal spheres held together by a magnetic field; Syn (2016), an imaginary journey towards a transcendental state of consciousness; Resilience (2016); and Schism (2016), which explores the rhythmic patterns and perceptual effects of two graphic elements placed in relation to the spatialized sound. The brothers Juha and Vesa Vehviläinen, forming the duo Pink Twins, are the creators of many performances, often characterized by particularly dense textures and sound masses. Their works include Purple Drain (2002), Goth (2003), Splitter (2006), Pulse (2006), Goth (Smooth) (2006), Pulse + (2007), which is now available as an app for Apple devices, Appetite For Construction (2008), Module (2008), and Defenestrator (2008). Among their most recent creations are Amalgamator (2012), Miracle (2012), Parametronomicon (2015–2016 [Fig. 9.9]) and Overlook (2017). The duo formed by Boris (audio) and Brecht (video) Debackere has also created some interesting live-cinema installations and performances: Rotor (2005 [Fig. 9.10]), which applies classic notions of film to the production of images and music, and Probe (2008), an interactive

Fig. 9.8 - Otolab (synthetic images by Fabio Volpi), Fields, 2015



installation that refers to the cinema experience to create a generative audiovisual journey. Similarly, Cory Metcalf and David Stout, forming the duo NoiseFold, have proposed interesting live cinema installations and performances featuring sensors and other devices. Their installations include Blue Plot (2004), Three Tortures: Babel, Iriscan, Death by Image: a Sonic Execution (2007), i I i (2008), NADA (NOTHING, 2009), El Umbral (The Threshold, 2011), and MELT (2013). The duo’s most compelling performance is probably NoiseFold 2.0 (2009–11), which continues the research they started with NoiseFold 1.0 (2006–08) and Alchimia (2008). Metcalf and Stout define it as a “parthenogenesis machine,” because it is able to generate a wide array of audiovisual behaviors and organisms, including exotic bio-mimetic forms. NoiseFold 2.0 makes use of a complex multimedia system capable of live-mixing an indeterminate sequence of animated chapters, which are modified by infrared sensors, microphones, pedals and control surfaces and which automatically generate sounds. Thus the artists interact with visual forms in order to forge and develop the aural content of each performance. The constantly shifting artificial living

Fig. 9.10 – Boris and Brecht Debackere, Rotor, 2005

structure produced is then presented on a multiscreen video panorama. Also worthy of mention is their performance Emanations (2012), with cellist Frances-Marie Uitti. Of particular interest is the oeuvre of Paul Prudence, who since 2007 has conceived various live works, most of which use generative software: ryNTH[n1] (2007), dedicated to labyrinths; Structure-W (2007), an evolving architecture inspired by gyroscopes and anti-gravity mechanics; Talysis II (2007–2009), in which signal feedback produces symmetrical tessellations and hyperbolic geometry; Fast Fourier Radials (2008), a three-dimensional spectrographic visualization of sound; Rynth[n3] (2010), a performance for a planetarium dome featuring essential shapes; Parhelia (2009–10), about a weather event; Bioacoustic Phenomena (2010), in which cellular entities grow in response to sound vibrations; Structure-M11 (2012), derived from industrial

Fig. 9.9 - Pink Twins Live, 2014



sounds; Sphaerae.Acoustic.Study (2013), a version of Hydro Acoustic Study (2010) for a hemispherical dome; Arc. Cyclotone (2013), a version of Cyclotone (2012) for a giant screen, which was followed by Cyclotone II (2015); Apeiron (2013), an audiovisual work for Musion’s holographic projection system; Quanta (2014), on the famous physics theory; ungear moi (2014), an interpretation of a piece by COH; the colorful Parhelia (2010–12), displaying constantly evolving geometric shapes; and finally Chromophore (2013 [Fig. 9.11]), featuring a bidirectional communication system between images and sound, and its extensions Lumophore (2013), made for the planetarium in Kaluga, Russia, and Lumophore II (2015).

Other artists have no link to the world of VJs and follow an independent course. One example is Randy Jones, who has been active in this field since the 1980s. Jones at the time developed a device, based on two sticks, with which he could interact with and control audiovisual events. More recently, Jones also worked on Jitter, a video and 3D graphics extension to the Max/MSP environment. Six Axioms (2007) is probably his most representative work. In 1996 Pete Rice presented an interactive system for sound production called Stretchable Music, based on the creation of two-dimensional synthetic objects, each representing a sound. The sounds can be transformed in real time by stretching, compressing and modifying the different visual shapes in various ways. Following the experience gathered in some of his previous works, including Yellowtail (1998) and Floccus (1998), Golan Levin presented Audiovisual Environment Suite (2000), a groundbreaking set of seven interactive systems that allow people to create and perform abstract animation and synthetic sound via a graphic tablet, and therefore through gestures. Levin produced the impressive performance Scribble (2000 [Fig. 9.12]), in collaboration with Gregory Shakar and Scott Gibbons, using this synthetic audiovisual instrument, and later created other installations and interactive environments, including Messa di Voce (2003), which was also presented as a performance. High-resolution synthetic animations were employed by the composer Richard Lainhart in No Other Time (2009),

Fig. 9.11 - Paul Prudence, Chromophore [xCoAx version], 2014



a performance for a large reverberating space, electronic music and projections. Lainhart is also the author of various animations, including Pneuma (2008), a sequence of patterns produced with cellular automata software. In this work the music was improvised live with an analog synthesizer. In 2004 Andrew Garton presented audiovisual performances using the Unreal Engine, developed by Epic Games in 1998 and one of the various software frameworks used to create video games. This is of particular interest, since video game rendering engines permit technically complex solutions and can be used to step out of the typical expressive canons of this form of entertainment, for example to create live performances, or animations called machinima. Also worth noting in this context is Rez, a video game conceived with synesthetic intents by Tetsuya Mizuguchi with United Game Artists, and released by Sega in 2001 for its Dreamcast platform and for the Sony PlayStation 2. A high-definition version for the Microsoft Xbox 360 was released in 2008. In the early 2000s the trio S.S.S Sensors_Sonics_Sights, formed by Cécile Babiole, Laurent Dailleau and Atau Tanaka [Fig. 9.13], created visual music in real time using a theremin as well as various sensors that can detect arm movements or the distance between the performer and an instrument. The aim was to achieve a three-part audiovisual conversation among the members of the group. Similarly, in 2010 Camille Barot and Kevin Carpentier conceived Immersive Music Painter, providing a stereoscopic interactive

environment and surround sound with which one can create audiovisual events in real time. Other relevant artists in this field include Katrin Bethge, Clinker, R. Luke DuBois, Ellen Fellmann, Al Griffin, Max Hattler, Ryo Ikeshiro, Adam Kendall, The Light Surgeons, Jarryd Lowder, Mia Makela, Olga Mink, Stefan Mylleager, Joshue Ott, Franz Rosati, Billy Roisz, Mark Rowan-Hull, Saul Saguatti, Ben Sheppee, Kurt Laurenz Theinert, Vello Virkhaus and Visual Systeemi.

Fig. 9.12 - Golan Levin, Scott Gibbons, Gregory Shakar, Scribble, 2000
Fig. 9.13 - A performance by S.S.S Sensors_Sonics_Sights



George Lakoff and Mark L. Johnson (1980), metaphors are mappings between different domains, like abstract images and music. Metaphor mappings are unidirectional, that is, A → B but not B → A. In a general sense, metaphors are used to make understandable events that would otherwise not be so. For example, defining a sound as “cold” (or bright, dry, etc.) is useful in that it relates a sound whose characteristics are difficult to describe to something universally known, such as cold. The metaphor “this sound is cold” is a simplification, and it follows that many, even very diverse, sounds can be considered “cold.” The advantage is that communication and conceptualization are facilitated; indeed, the non-physical is usually conceptualized as physical, which is closer to the material world and therefore easier to comprehend. According to Lakoff and Johnson, metaphors have been traditionally understood strictly in terms of language, and not as something that structures our thought and everyday activity. It is reasonable to suggest that words alone cannot modify the world around us, but that changes in our conceptual system influence the way in which we perceive the world. In addition, Lakoff and Johnson maintain that “the experience of time is a natural kind of experience that is understood almost entirely in metaphorical terms”5 and that “we speak in linear order; in a sentence, we say some words earlier and others later. Since speaking is correlated with time and time is metaphorically conceptualized in terms of space, it is natural for us to

Fig. 10.1 - Nicholas Negroponte, Different kinds of film: personalized, conversational, navigational, synthetic. From The Impact of Optical Videodiscs on Filmmaking, 1979



Introduced here is the theme of the linearity of language in audiovisual art, in other words of a form with a beginning, a development, and an end. It was natural and understandable for many abstract audiovisual works to follow this approach, since music, taken as their point of reference, is an art that mostly develops linearly. This view, however, began to be disputed, especially after the appearance of the first installations. Sound in fact influences the perception of an image in many ways, modifying the perception of movement and speed, and defining time, that is, creating a definite temporal sequence. The predictability of music also influences the perception of images. A musical composition is usually like a frozen memory, and listening to it is akin to reliving and reexperiencing a memory in a linear way. Sometimes, however, the principle of causality that connects one event to another disappears, as Umberto Eco (1989) had already indicated in the early 1960s. It is also interesting to note what Rudolf Arnheim stated in this regard: “Together, the sequential and non-sequential media interpret existence in its twofold aspect of permanence and change”.7 The linearity or non-linearity of a work is also partly tied to its technological medium. In 1979 Nicholas Negroponte published the paper “The Impact of Optical Videodiscs on Filmmaking.” Videodiscs introduced an important novelty compared to celluloid film and videotapes, namely non-sequential access to frames, which on this optical medium could be located reasonably quickly. This meant having access to a tool that allowed one to manage an archive of images and sounds differently than in the past. Negroponte proposed four new types of interactive films made possible by videodisc technology: personalized, conversational, navigational, and synthesized [Fig. 10.1]. Today it is possible to manage an enormous quantity of digital images and sounds, stored on optical, magnetic, and electronic media rather than on now obsolete videodiscs, but the concept remains unchanged. Often the creators of installations, VJs, and live cinema performers use this type of architecture to create unforeseen audiovisual journeys in real time. This type of artistic expression usually does not rely on linear structures, but rather becomes an exploration of a sometimes predetermined, sometimes unpredictable audiovisual environment. This does not mean, however, that these open audiovisual forms, to use Eco's terminology, are better than traditional ones: they are simply different, and address different issues. They represent modes of communication that enrich the expressive possibilities and add greater variety to the creative arsenal.
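To make the idea of non-sequential access a little more concrete, here is a minimal, purely illustrative sketch in Python: a hypothetical archive of tagged clips that can be queried in any order or assembled into an improvised sequence, rather than played back as a fixed reel. The clip names, tags, and class names are invented for illustration and are not drawn from Negroponte’s paper or from any particular artist’s software.

```python
# Illustrative sketch only: a small archive of audiovisual clips with
# non-sequential (random) access, loosely modeling the videodisc idea of
# jumping to any segment rather than playing one fixed linear sequence.
import random
from dataclasses import dataclass


@dataclass
class Clip:
    name: str        # hypothetical identifier for an image/sound segment
    duration: float  # length in seconds
    tags: set        # descriptive tags a performer might select by


class AudiovisualArchive:
    """An archive that can be queried in any order, in real time."""

    def __init__(self, clips):
        self.clips = list(clips)

    def by_tag(self, tag):
        # Non-sequential selection: any clip matching a tag, regardless of
        # its position in the archive.
        return [c for c in self.clips if tag in c.tags]

    def random_journey(self, length):
        # An "unforeseen audiovisual journey": a sequence assembled on the
        # fly rather than fixed in advance.
        return random.choices(self.clips, k=length)


if __name__ == "__main__":
    archive = AudiovisualArchive([
        Clip("waves", 12.0, {"blue", "slow"}),
        Clip("sparks", 4.5, {"bright", "fast"}),
        Clip("drone", 30.0, {"dark", "slow"}),
    ])
    print([c.name for c in archive.by_tag("slow")])        # tag-based query
    print([c.name for c in archive.random_journey(5)])     # improvised path
```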

1 Sekuler and Blake 1985, 104.
2 When a photograph can be heard, 2011.
3 MacDonald 2006, 55.
4 DVD CREATIVE PROCESS 1990, 1:06:42 – 1:07:22.
5 Lakoff and Johnson 1980, 118.
6 Ibid., 126.
7 Arnheim 1971, 308.



Fig. V - Wolfgang Köhler, Baluba and takete, Gestalt Psychology, 1929

At the same time, the issue might be more complex, since synesthesia has been found to skip generations. Often those manifesting this characteristic are left-handed and endowed with an excellent memory. Although the physiological component is undeniable, it is also important to note that the notion of sensory modality is to some degree dependent on culture. For example, the Desana of Colombia generally describe smells in terms of color and temperature, as reported by Constance Classen. One could speak here of a sort of cultural synesthesia, one that would probably be lost on Westerners. On the other hand, there seem to exist mappings that are almost objective, recognized by a great majority of individuals regardless of their culture. An example of such mappings emerged in Wolfgang Köhler’s famous experiment of 1929, according to which the words “baluba” and “takete” correspond, respectively, to sinuous and angular figures [Fig. V]. The same experiment was repeated more recently by Ramachandran and Hubbard using the words “kiki” and “bouba” [Fig. VI].

The extraordinary finding is that ninety-five percent of the individuals interviewed, who were not affected by synesthesia, agreed on the association between word and shape. Ramachandran and Hubbard attribute this phenomenon to the correspondence between the abrupt changes in visual direction of the lines in the left-hand shape and the phonetic inflection of the sound “kiki,” as well as the sharp inflection of the tongue on the palate. One could also maintain, however, that these sounds are signals containing a certain degree of noise, greater in kiki and takete and smaller in baluba and bouba: the corresponding forms would then be a representation of that noise. In addition, the shapes of the letters (supposing they are seen) also suggest a likeness to the drawings, as the letter “k,” present in kiki and takete, is certainly closer to an angular shape than to a rounded one. Research on synesthesia has received a considerable boost from the latest technological developments, such as PET (positron emission tomography) and especially fMRI (functional magnetic resonance imaging). Riuma Takahashi and other researchers have carried out studies with these technologies on sound → color synesthesia. The results of these experiments demonstrate that the area of the brain called V4, which is primarily in charge of color vision, becomes activated under conditions of synesthesia, which would confirm Ramachandran and Hubbard’s cross-wiring theory. Moreover, because the response in area V4 happens very rapidly, one can exclude a top-down, that is, cognitive, process, supporting the view that synesthesia is a predominantly perceptual phenomenon.



Sound → color synesthesia seems to be a typical cross-modal expression, involving the interaction of different cerebral sectors. As was previously mentioned, some individuals see colors of definite hues, while others see more complex abstract images, such as colored globes, lines, or waves. In 2006 Jamie Ward and some of his colleagues carried out an experiment with various timbres and pitches. They came to the conclusion that sound → color synesthesia is probably caused by an exaggeration of mechanisms that are normally present, rather than by special cerebral pathways unique to synesthetes, which would confirm the pruning hypothesis. Other research has found a correlation between synesthesia and perfect pitch, even though, surprisingly enough, individuals with this gift often show less consistency in their choice of correspondences. In addition, they associate notes an octave apart with the same color, although with different degrees of brightness: the higher the sound, the lighter the color. In fact, sound pitch is often related to luminosity. Another test used dyads, or two-note chords, rather than single notes: in this case the individuals reported two or more colors. In yet another experiment, synesthetes were presented with glissandi between two notes rather than fixed tones, and they reported colors corresponding to the intermediate notes. A similar result was obtained using microtones, while slightly out-of-tune notes were perceived as somewhat darker when the sound frequency was lower than that of the tuned note, and lighter when it was higher, confirming a certain consistency in the perceptual phenomenon.

Also noteworthy is the work of William G. Collier and Timothy L. Hubbard, who studied musical scales and modes and the visual brightness that non-synesthetic observers associate with them. The result was that the descending harmonic minor mode was rated as darker than the descending natural/melodic minor mode. In addition, musical keys that started on a higher pitch were perceived as more luminous than those that started on a lower pitch. The data suggest that ratings of the brightness of musical stimuli are influenced by more general aspects of pitch height, pitch distance between successive notes, and pitch contour, that is, the direction of pitch movement, rather than by mode or key. Finally, it would be interesting to conduct research with sounds that do not belong to Western scales, or that are at any rate complex enough that they cannot be related to the tones of any musical scale, such as those that can be created using contemporary technologies. An idea for the future.

Fig. VI - Kiki and bouba, from Vilayanur S. Ramachandran, Edward M. Hubbard, Synaesthesia – A Window Into Perception, Thought and Language, 2001


