LA CANVAS - THE SOUND ISSUE


EDITOR’S NOTE
editor-in-chief REBECA ARANGO

From my desk at the LA CANVAS studio, I can hear an unidentified mid-frequency buzz pulsing at 80 beats per minute. At least an octave higher, there’s the monotonous collective hum of the computers. Together they produce a slightly sinister perfect 5th, garnished by the glittery whisper of ceiling-fan-swirled air (which I imagine resonates more clearly with our canine companions). This is what the world sounds like in 2013, where most of our popular music is made of slices of other music, constructed from 1s and 0s that can be infinitely rearranged, and widely consumed on websites like SoundCloud, where comments cluster around the cliff of the drop.

Unsurprisingly, producing the Sound Issue has given the struggle between man and machine a starring role in our March/April publication. It peeks its head into conversations with DJ/illustrator/tastemaker Franki Chan (p. 18) and Boiler Room founder Blaise Belville (p. 20), but there are two stories in particular in which its whole body seems to surface. Despite speaking different languages of style, hip-hop producer/rapper Chase N. Cashe (p. 16) and sound-art collective Lucky Dragons (p. 36) are both caught moderating the digital conflict. For New Orleans native Chase N. Cashe, there’s a tension between his traditional, acoustic origins and his technology-dependent craft, one that is resolving itself as he crosses over from beatmaker to MC. In the art of Lucky Dragons, the means are at odds with the ends; the challenge is to use electronic sound to foster a physical, collective connection. For years now, the music-industry discussion has centered on the way the Internet has revolutionized distribution. But both of these stories speak to the more immediate ways the art itself has been altered, the intimate parts occurring before and after all that logistical nonsense: creation and perception. The computer is, more often than not, the middleman.
And whether it’s a larynx, a mandolin, or a gramophone, music always has a middleman. Analog or digital, there’s still an artist on one side and an ear on the other. And it seems the longer we cohabitate with technology, the more adept the artist becomes at narrowing the divide between spirit and computer, and the more adept technology becomes at translating what’s human. Take, for example, the song-identifying app Shazam. Like many programs, it aspires to be a human with infinite knowledge and memory. I had previously imagined it working in a simple, linear way: comparing the shape of a recorded sound wave to a library of other shapes on a server somewhere. But that sort of 2D pattern recognition would be nearly impossible, because most of the recorded music we hear in public is compressed; the distinctly mountainous pattern of a wave is distorted, flattened, or cut off, and its signature binary representation is reconfigured. So Shazam must identify music through a more complex process, one more analogous to the function of the human ear, by identifying an acoustic fingerprint: a multi-dimensional picture of a record constructed from a group of perceptual characteristics. Even as all of these measurements together seem to approximate human ability, they are still database-dependent. What Shazam couldn’t possibly do is identify the voice of Bruce Springsteen singing “Happy Birthday” in the other room. But maybe the CIA has software that can.
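The acoustic-fingerprint idea can be sketched in a few lines of code. Instead of matching raw waveform shapes, a recognizer reduces a recording to the positions of its loudest spectral peaks over time, features that tend to survive compression. The sketch below is a toy illustration of that principle (the peak-picking and scoring here are my own simplification, not Shazam’s actual algorithm):

```python
import numpy as np

def fingerprint(signal, window=1024):
    """Reduce a signal to a set of (frame, loudest-frequency-bin) pairs.

    Compression distorts a waveform's shape, but its dominant spectral
    peaks tend to survive -- so we keep only those.
    """
    hashes = set()
    n_frames = len(signal) // window
    for frame in range(n_frames):
        chunk = signal[frame * window:(frame + 1) * window]
        spectrum = np.abs(np.fft.rfft(chunk))
        peak_bin = int(np.argmax(spectrum))  # strongest frequency this frame
        hashes.add((frame, peak_bin))
    return hashes

def similarity(fp_a, fp_b):
    """Fraction of shared (frame, peak) pairs -- a crude match score."""
    return len(fp_a & fp_b) / max(len(fp_a | fp_b), 1)

# A pure tone still fingerprints the same after crude "compression"
# (coarse quantization of the samples), while a different tone does not.
rate = 8000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 437.5 * t)         # 437.5 Hz sits exactly on a bin
compressed = np.round(tone * 8) / 8          # lossy: only 17 amplitude levels
other = np.sin(2 * np.pi * 656.25 * t)       # a different pitch

assert similarity(fingerprint(tone), fingerprint(compressed)) > 0.9
assert similarity(fingerprint(tone), fingerprint(other)) < 0.1
```

A real system hashes pairs of peaks with their time offsets and looks those hashes up in a server-side database, which is exactly the database dependence described above: the fingerprint can only name a song the database already contains.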

