Once More, With Feeling


Once More, With Feeling: Thoughts & Evidence from the Making of MOTIV

Russell Maschmeyer, Interaction Design MFA Candidate, 2011 | School of Visual Arts



CONTENTS

01 Introduction
02 Exploration
03 Investigation
04 Research
05 Evolution
06 Execution
07 Result
08 Conclusion
09 Thank You
10 References

Russell Maschmeyer. Once more, with feeling.



01 Introduction

I’m kind of cheap. I don’t spend much on clothes or housing or vacations. I don’t own a car. But I do spoil myself with one somewhat bizarre hobby: I collect vintage audio recording equipment. I have what amounts to a small but professional studio tucked away in a closet and a small encyclopedia’s worth of sound engineering know-how tucked away in the nooks and crannies of my brain. I collect because I’m fascinated by the tools and processes of music making. Music is magical stuff. Collecting musical technology has taught me—and this is why music is fascinating to me—that musical magic isn’t born of the newest technology. There’s as much magic in a Daft Punk performance as in a Sam Cooke a cappella. What makes music magical is the way it can be bent toward expression.

I knew while preparing (mentally, physically, emotionally) for the thesis year that I had to find a project that would combine my love of music, process, and technology. I wanted my thesis to be something I might never have been able to do outside the safe walls of my graduate studio environment. I wanted to make something real that people could experience firsthand. I wanted people to feel just a little bit of magic. I wanted jaws to drop. I wasn’t thinking small.




02 EXPLORATION

I kept a pocket journal during the early part of my second semester. On the cover I wrote “Thesis Ideas” and jotted down every idea that came to mind. It was informal, but useful. The second entry reads: “making music -> pattern & deviation, surprise.” It was the beginning of what Steven Johnson, author of Where Good Ideas Come From, would refer to as my slow hunch. Over the course of the summer and into the following semester I explored the purpose of pattern and variation in music. Through reading, thinking, and writing I realized I was chipping away at a much larger truth: music and physical movement are inextricably linked. Music instinctively drives us to move, and movement drives our instinctive understanding of music. Our brains are hard-wired for it.

Then I turned my attention to instruments. I began considering the strengths and weaknesses of both digital and traditional instruments when judged against this need for a connection between movement and music. When mastered, traditional instruments become extensions of the musician’s body, allowing musicians to express themselves through movement. New digital instruments seem to lack the same level of connection to physicality. I wanted to find some way to bring a higher degree of expressiveness to digital instruments.



BLOG POST March 29, 2010

First Thoughts on a Thesis, and Evolution

Though work won’t begin until next semester, a lot of us here at chez SVA IxD have been racking our brains all semester to come up with some brilliant thesis ideas.

Slow going at first, but my creative juices got a jump-start down in Savannah during IxD ’10. There were so many brilliant ideas on display it was hard not to come away with a few of your own. Like the effect of hearing a faint tune in the distance and interpreting your own original melody from the fractured bits you hear. That’s happened to you guys too, right?

Initially I thought it might be interesting to do something musical, but I hesitated. I didn’t want to concoct a new instrument interface. I’ve seen that done a lot (see: Tenori-on, Otto, and/or draw your own), and while that would be fun and fascinating in its own way, it felt too obvious. I want this project to guide my career for at least a few years to come. So I wanted a broader scope.

I’ve been thinking about evolution. After hundreds of thousands of years, we’ve become highly physical beings: gangly arms, upright posture, joints and bones that withstand long arduous walks, teeth that can cut and grind nearly anything, hands that can perform millions of coordinated, nuanced manipulations. We’re amazing physical beings. Yet we spend all of our time sitting, staring into projected-light computer terminals, handling virtual objects with a “digital finger” that has less nuance than my pinky toe. We punch 78 or so keys repeatedly to “talk” to friends or to “compose” a piece of music. We’re not using our bodies, and so they are failing us. Bruce Sterling, in his book Shaping Things, puts it pretty well:

“…the heavy duty programmers…are commonly portly guys with wrist supports, thick glasses and mid-life heart attacks. They weren’t born that way. They didn’t get that way by accident, either. They got that way by chronic, repeated abuse. That’s not a digital problem, that’s a physical problem. It’s still about an industrial system that cruelly sacrifices human flesh for the sake of dysfunctional machinery. They sit, they type, they stare in screens. All day, every day.” Bruce Sterling, Shaping Things

Of course we do this to ourselves for a reason. The computer gave us so many opportunities to work abstractly, to expand beyond the physical, to create a document that wastes no paper and can be edited with little to no effort, to model reality. We encased these functions in a form that fit the means of the time: screen, keyboard, mouse. Back then, technology couldn’t be expressive. Since it did the job well enough, we stuck with it—for 50 years now.

But technology has increased exponentially since then, and we have the capability to do so much more. We have the ability to create, touch and manipulate virtual spaces and objects in ways only dreamed of a few years ago, in ways that seamlessly blur the distinction between “IRL” objects and those of the virtual world. We should celebrate and honor our evolution by finding new and more physical ways to handle the digital. We should find ways to de-abstract our computational world in ways that still carry the benefits of virtualization.

So, in a nutshell, that’s what I want to do.


School of Visual Arts. MFA in Interaction Design.

{ Above } Bruce Sterling’s Shaping Things was (and is) an incredibly important book to me as an interaction designer and lover of technology. It brings to light some of the ways our technological existence is misaligned with our evolution as a species.


field notes Page one of my ‘Thesis Ideas’ notebook. The second entry, ‘making music -> pattern & deviation, surprise,’ turned out to be strangely prescient.



BLOG POST April 16, 2010

Every Extension

I’m continuing to work my way through Adam Greenfield’s Everyware, an amazing book that continues to blow my mind.

In “Thesis 43” Greenfield quotes Marshall McLuhan (who incidentally has been named the patron saint of Wired magazine). McLuhan coined the terms “global village” and “the medium is the message,” and wrote Understanding Media: The Extensions of Man in 1964, his seminal and most widely known work. Anyhow, here’s a great sum-up from the Wikipedias:

“McLuhan’s insight was that a medium affects the society in which it plays a role not by the content delivered over the medium, but by the characteristics of the medium itself. McLuhan pointed to the light bulb as a clear demonstration of this concept. A light bulb does not have content in the way that a newspaper has articles or a television has programs, yet it is a medium that has a social effect; that is, a light bulb enables people to create spaces during nighttime that would otherwise be enveloped by darkness. He describes the light bulb as a medium without any content.” Wikipedia, “Marshall McLuhan”

So that’s all some intense background just to get to a great idea. Greenfield quotes McLuhan from Understanding Media. McLuhan brilliantly points out:

“Every extension is [also] an amputation”

Amazing. Here we are, a society hell-bent on extending our reach through phones, through computers, through “seamless integration,” and yet all along the way we’re unwittingly losing perhaps as much as we gain. The mediums we create are built to carry out specific tasks efficiently, but by doing so they have a tendency to restrict our options for accomplishing that task by other means. We begin to learn the “One” way to do it, when in fact there are infinite ways. The medium begins to restrict our thinking, our imagination, our potential. And to the extent that a certain medium is adopted to perform other tasks, it begins to restrict those as well.

This is exactly the case against computers. We’ve shoved a lot in there and not all of it fits very well. The tool has begun to ill-fit the tasks. Our attention is now paid so often to how to learn a new piece of software or hardware. Seems like a lot of wasted energy. This leads me to Principle Number One of my thesis project:

Principle I: Focus on the task, not the tool

Who knows how many principles I’ll have by the end of next year. But that feels like a good start to me.

{ Above } Adam Greenfield’s Everyware: The Dawning Age of Ubiquitous Computing was a source of inspiration, driving me to reconsider the ways we shape our technological ecosystem and awakening me to the amazing work in the field of ubiquitous computing.

{ Below } Marshall McLuhan’s Understanding Media is essential reading for anyone interested in media, technology, or design.



field notes Another page from my ‘Thesis Ideas’ notebook. As I continued to try and escape my innate desire to work in the musical space, my ideas became more and more disjointed.



BLOG POST April 22, 2010

Conducting In the Box

An Early Thesis Statement

Musicality is the intersection between movement and sound. Music and dance have been bound to each other since before our species can remember itself, and the two cannot be split without a loss of efficacy. They are two sides of the same phenomenon—like electricity and magnetism. Musicality is motion. Motion is musicality. To say it plainly: without an understanding of and fluency in motion, one cannot be musical.

A computer’s interface is generally dumb to motion; therefore, it cannot understand or engender musicality. Sure, people make fantastic music on computers, but we wage an interface battle with computers in order to encode our inherent musicality into an inhospitable environment. Even so, more and more musicians and music professionals on both the young and experienced ends of the spectrum have been turning to the computer to inspire, create, and mix their music. It’s called “mixing in the box.” I have to think hard to find a less appealing phrase.

Conductors were the first audio engineers. With a deftly rising, falling, and swooping hand accompanied by a metronomic wand they controlled what listeners heard: volume, speed, dynamics, character, and sometimes content. They shaped raw music into experience, into motion, by using nothing more than motion. Modern engineers do it with mixing boards, pan pots, reverb units, delays, flangers, EQs, compressors, expanders, and—God save us—computers. In order to interface with a single piece of music, an audio engineer may stare at and tweak hundreds of disparate, clunky, asynchronous interfaces, each one a dozen times over. I’d like to investigate the role gesture might play in educating computerized musical engineering interfaces as to the natural motions of music.

Possible applications include:

01 Humanizing a beat sequencer to generate a more dynamic “performance” and a deeper kinetic response to computer-generated beats.

02 Re-imagining the interfaces of a mixing engineer to perform more like those between conductor and orchestra.

03 Approaching audio effects such as compression, EQ, reverbs, delays, et al. using gestural sculpting.

There have been inroads to educating computers about motion in music. Wii Music is a great example of an interface that allows a playful physicality to be returned to a previously arranged piece of music, as participants control dynamics through motion-activated Wiimotes. Tod Machover’s Media Lab group, Opera of the Future, is creating richly physical instruments which can be played intuitively through gesture and other types of physical computing. A central theme in Tod’s work is breaking down barriers for non-musicians with unusual instruments so that they can begin creating rich music without formal training.


{ Left } This statement was written after only a few weeks contemplating my interests. It’s surprising how little I’ve deviated from the core belief: that physical motion is essential to creating expressive music.


{ Right } As usual, a thorough and thoughtful response from Liz Danzico to my early thesis statement.

Thesis Preparation Liz Danzico Feedback: April 26, 2010

For Russ Maschmeyer

This is an exquisite pursuit, one that is rich with possibilities to explore. As an area to pursue, it’s both personal and pragmatic; it’s progressive and has legacy. I’d be enthusiastic to see this progress as a thesis project.

Some people and considerations:

• Control: Both a conductor and a computer relegate control to one or a series of pre-defined individuals. In addition to motion, might you be considering democratizing the process of composition to a larger set of consumers or composers?

• Learning curves: What might the learning curve be for these types of new interfaces? At the intersection of game and movement, there is some Tetris-related research: epistemic action increases with skill (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.51.3279&rep=rep1&type=pdf).

• Kars Alfrink at Leapfrog has been exploring the intersection of interaction design, music composition, and game design. One of the references I see online is to this “opera” that uses large-scale urban games and an opera he composed (http://leapfrog.nl/blog/archives/2009/05/29/announcing-ahybrid-game-opera-for-monster/). While not at the interface level, Kars has been an interaction designer with some depth, and might be an insightful resource for you.

• Amit Pitaru (http://www.pitaru.com/), whose work has focused primarily on art, music, and interactive experiences.

• See also bodies as instruments: “How Bodies Matter: Five Themes for Interaction Design” (http://hci.stanford.edu/publications/2006/HowBodiesMatterDIS2006.pdf)



SUMMER SOMEWHERE ELSE

From June 1st through August 30th of 2010, I took a time-out from New York and set up shop in San Francisco. I was interning at Apple, an amazing experience that allowed me to explore a new field without having to throw my entire being into the work. After the first year of SVA Interaction Design, I definitely needed to recoup some extra mental bandwidth. So, despite working forty-hour weeks and being three thousand miles away from Jessica, it was still a relaxing summer. San Francisco was beautiful and I’ve always enjoyed lonely adventures in strange places. I could have squat-thrusted the weight of a buffalo after walking all those hills.

During these three months I began thinking in depth about my career goals and how my thesis project for the coming year might fit in. I knew that I wanted to spend my life working on technologies that honor human evolution. I want to create experiences that strengthen our physical and mental development while driving social progress. I realized that digital instruments suffer from a lack of fluency in human motion. When I put two and two together, I began to see this thesis focus as a microcosm of a larger, systemic technological issue.



A MAJOR INFLUENCE

While in San Francisco, I spent each weekend morning at Boogaloo’s, a small cafe on Valencia Street in the Mission District. I’d order a heart-healthy breakfast like huevos rancheros or eggs benedict with bacon on the side, and I would dive into Daniel Levitin’s This Is Your Brain on Music. Call me a pusher, but if you’re interested in music on any level, get this book. It took the main stage in shaping my thesis concepts—not to mention my thoughts about music and human evolution on the whole. Fascinating on all accounts.



EXPRESSION IS BORN OF VARIATION

There are enough insights in This Is Your Brain on Music to launch a thousand thesis projects, but the key insight for me was that our neurological interest in music is rooted in expectation and deviation. We build up libraries of music in our brains and we begin to expect certain patterns in the music we listen to, at both a macro and a micro level. Music only gets interesting to us when a performance deviates in a controlled way from our expectations.

“A pianist may play five notes at once and make one note only slightly louder than the others, causing it to take on an entirely different role in our overall perception of the musical passage.” (page 72)

“Groove has to do with a particular performer or a particular performance, not with what is written on paper.” (page 170)

“The genius of his playing is that he keeps us on our mental toes by changing aspects of the pattern every time he plays it, holding just enough of it the same to keep us grounded and oriented.” (page 171)

“Musicians generally agree that groove works best when it is not strictly metronomic—that is, when it is not perfectly machinelike.” (page 171)



MOVEMENT & MUSIC

This passage, spanning pages 210-211 of This Is Your Brain on Music, was one of the most influential for me. The concept of a hidden, primal link between physicality and musical expression—an inborn, physical understanding—was incredibly powerful.

Moreover, the idea that musical expertise can take both a technical and an emotional form was central to my attempts at separating the two. Without this passage I might have ended up somewhere very different.



E-MAIL
From: Nicholas Felton
Date: August 6, 2010
To: Russell Maschmeyer

Howdy Russ! Welcome back to the East Coast (albeit briefly). I was chatting with Jessica last night about SVA stuff. They’ve inquired whether I would like to be an advisor, which I think would be fun - but with the right advisee. I enjoyed our brief conversation earlier this year, and wanted to pitch myself as a potential advisor, if you think I’d be a good match for whatever you have planned.

That is all!

Best,
nf



ON CHOOSING AN ADVISOR

I had a lot of questions about what traits were important in an advisor when it came time to choose. Was the most important thing that my advisor be an expert in the field I was exploring? I found it helps, but it’s not the most important thing. Was it important that my advisor have some celebrity to potentially launch my project into the limelight? Actually, that would have been a negative if he or she was too in demand to meet regularly. Was it important that my advisor have specific technical knowledge? I found a technical understanding was important, but technical skill wasn’t paramount. After all, my advisor wouldn’t be building my project.

The important thing, to me, was connecting with someone I respected, but most importantly: someone I liked and someone who liked me too. You can find technical assistance in a lot of places, but it’s much harder to find a collaborator. That’s the role your advisor should play.

Luckily, Nicholas Felton made this decision easy on me. He asked me if I’d like to be his advisee. I like and respect Nick a lot. So I said ‘of course.’ I did seek the help of a few others over the course of my thesis year, including Robin Bargar (who really is an expert in the field I’m exploring) as well as Larry Legend, not to mention thesis instructors Jenn Bove and Paul Pangaro (it really does take a village). A lot of great people contributed valuably to my thesis project, but not in the way Nicholas Felton did by being there every week, genuinely interested in the problem I was trying to solve and the way I was going about solving it. Hats off to Nicholas. www.feltron.com



BLOG POST September 7, 2010

Extensions & Mastery

Interesting article today in Scientific American concerning our use of tools and the level to which our brain can assimilate tools as parts of the body. This is something I’ve thought a lot about as a musician and interaction designer. The article provides a good jumping-off point for a new thesis principle I’d like to share:

Principle II: The most widely mastered tool is the human body.

“Humans, and some other animals, are able to use tools as additions to the body. When we use a long pole to retrieve an object we couldn’t otherwise reach, the pole becomes, in some sense, an extension of our body.” Patrick Haggard & Matthew R. Longo

“First, from the brain’s perspective, the body is by far the most familiar object in the world: the body, as William James elegantly put it, is ‘always there.’” Patrick Haggard & Matthew R. Longo

For the most part, the instruments we have accrued developed out of historical context and technological constraints. The acoustic guitar, the drum set, various percussive sticks and boxes and tubes, the violin, the piano, etc. have all—after a certain point—stopped progressing with technology. The sound of the instrument and the physical form that produces that sound became cherished tradition.

Most of us have picked up an acoustic guitar at least once. Maybe someone showed you how to play a chord. You pressed down, but the strings hurt your fingers, and when you strummed you heard more muted plunking than you heard notes. If you were stubborn, you picked it up at least a few more times and figured out how to make the chord sound good. If you were really stubborn you learned all the other chords, grew a few calluses on your fingers, and maybe even strummed a few songs you still remember. If you’re among the <1% of the population who reaches musical mastery, you picked up that guitar and didn’t put it down until your fingers were as lingual as your tongue.

So, to the issue of mastering these extensions and making them part of our bodies: if you’re mastering a back-scratcher, it probably won’t take you 10,000 hours. Mastering a guitar, on the other hand, could take you years; the piano, a lifetime. These hurdles drive a lot of people away from creating and participating in music. But there may be a secret to bringing them back: movement. Nearly everyone dances. Why? Because it comes naturally. There’s no learning curve.

I’d like to try and bring these musical extensions we’ve created back into the body. Maybe I’ll fail. Maybe the problem is just too difficult. Maybe to be truly expressive and musical with any tool, including your body, you must spend years obtaining a certain level of mastery. But if the body really is the most widely mastered tool, certainly it’s got to shave some time off that 10,000 hours.

{ Left } My first post of the Thesis Development semester. This article drove ideas about the body being played as its own instrument, as long as someone could build a computer system to sufficiently interpret it.

{ Right } According to Malcolm Gladwell (Outliers), it takes 10,000 hours of experience for someone to become an expert, and music is no exception. My hope was that transforming the body into an instrument might shave off a few of those hours, since any grown person is already a master of their own physicality.



10,000 Hours of practice required to become an expert musician.




03 INVESTIGATION

I was naïve. I assumed that I was the first to discover a link between music, movement, and gestural interfaces. I was spectacularly wrong. I stumbled on more research papers, experiments, instruments, art projects, and exhibitions in the first month alone than I thought possible in a year of searching. A few simple Google searches gave me enough reading to blind me. It was humbling. Before I had even started, I felt left in the dust.

All I could do was dive in, but quickly my worries evaporated. I realized many of these projects addressed different problems or solved similar problems for a different audience. In the evidence from research papers and one particular art piece, David Rokeby’s Very Nervous System, I found validation that my hunches concerning computer vision, motion, and music were well founded. After the release of Microsoft’s Kinect there was an explosion of gestural instrument projects. I spent the better part of two months a nervous wreck, expecting someone to beat me to my thesis concept. Though there was a lot of amazing work, I never saw anything that approached the problem quite the same way.



BLOG POST September 19, 2010

The Body, The Brain & Music

{ Left } Thanks to Ian Curry and Jake Barton of Local Projects for turning me on to David Rokeby’s work. In a lot of ways I’m glad I only found out about David now, because his work was, in many ways, what I had been imagining.

So far I’ve been collecting a lot of anecdotal research and snippets from various books and online sources. Here are a few of the things currently inspiring me. A passage from Dr. Daniel Levitin’s book This Is Your Brain on Music:

“Pitch is so important that the brain represents it directly; unlike almost any other musical attribute, we could place electrodes in the brain and be able to determine what pitches were being played to a person just by looking at their brain activity”

{ Left } Mick Grierson has created a computer program that can literally detect and play the note you think. Amazing! Hit the one-minute mark for a quick thought about using such a program to not only determine notes, but to use the brain as an orchestral conductor of those notes; a more brain-centric take on the main idea I’m trying to get at through my study of body motion.

Daniel Levitin, This Is Your Brain on Music

He takes it a step further. This brainwave translation of sound arose so far back in evolution that most living creatures have the same reaction to pitch and sound that we do. He describes an experiment performed on owls:

“Petr [Janata] placed electrodes in the inferior colliculus of the barn owl, part of its auditory system. Then, he played the owls a version of Strauss’s “The Blue Danube Waltz”… Because the electrodes put out a small electrical signal with each firing—and because the firing rate is the same as the frequency of firing—Petr sent the output of these electrodes to a small amplifier, and played back the sound of the owl’s neurons through a loudspeaker. What he heard was astonishing; the melody of “The Blue Danube Waltz” sang clearly from the loudspeakers…” Daniel Levitin, This Is Your Brain on Music

I mean, come on! Holy shit, right?

Conclusion

I guess what I’m saying is… can’t we do both of these things at once? Create a total brain/body instrument and composition tool? I’ll leave you with a thought from David Rokeby’s 1985 essay, Dreams of an Instrument Maker:

“The unique abilities of these instruments seem to propose a new approach to music, which would use to full advantage their unique potential. By this I do not mean the invention of new structural systems, or complex mathematical tunings, but a renovation of the relationship between music, the composer, and time itself.” David Rokeby, Dreams of an Instrument Maker



Mapping It Out The idea for MOTIV begins to come together as I plan system flows and potential expressive parameters like velocity and note duration, though I was still searching for other potential directions.
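The kind of parameter mapping sketched in these flows can be illustrated with a small, hypothetical example: a function that turns a normalized measure of hand speed into a MIDI note velocity and duration. The function name and the specific ranges are my own illustrative assumptions, not values taken from the notes themselves.

```python
def map_motion_to_midi(hand_speed, min_dur=0.1, max_dur=1.5):
    """Map a normalized hand speed in [0.0, 1.0] to a MIDI velocity
    and a note duration in seconds.

    Faster motion yields louder (higher-velocity), shorter notes;
    slower motion yields quieter, longer ones. Ranges are illustrative.
    """
    speed = max(0.0, min(1.0, hand_speed))            # clamp to [0, 1]
    velocity = int(round(speed * 127))                # MIDI velocity: 0-127
    duration = max_dur - speed * (max_dur - min_dur)  # linear interpolation
    return velocity, duration
```

Feeding a stream of per-frame speed estimates through a mapping like this would tie each triggered note’s dynamic level and length directly to the performer’s motion, which is the expressive link the system diagrams were reaching for.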



BLOG POST September 27, 2010

A Flood of Resources

Up to this point I’ve been a little afraid of doing my prior art/literature research. The nagging fear in the back of my mind has been, “Well, what if someone has already done it?” My Thesis Development instructor, Jenn Bove of Kicker Studio, gave me a swift kick in the pants by confirming that, yes, in fact, people have been working in this space for a long time.

In order to create a project worth anything, I’ll need to perform a lot more research and figure out why my thesis would be different or better than what has come before. So this week I did just that. So much to show, and I’ve only begun to search.

GESTURAL MUSIC ORGANIZATIONS:

• NIME: New Interfaces for Musical Expression Conference (NIME 2011 | Oslo, Norway)
• conGAS: Gesture Controlled Audio Systems Research Collective
• The Center for Computer Research in Music and Acoustics (CCRMA)
• InfoMus Lab, Italy
• MoPrim: Search for Upper-Body Motion Primitives

INTERNET RESOURCES:

• Gestural Music Sequencer
• Aggregat – Gestural Ableton Sequencer
• The Continuum Fingerboard
• AirPiano: Gestural Music Controller
• Hand Gesture Controlled Music Player
• Mouse & the Billionaire :: Gesture Control Exploration
• Keith Price Bibliography: Music-Related Gesture Systems
• Slide Guitar Synth with Gestural Control
• Audio-Input Drum
• String Thing
• Gesture Sound Experiments

ACADEMIC PAPERS:

• Gestural Control of Music Using Vicon 8
• Force Feedback Controlled Physical Modeling Synthesis
• Multimodal Analysis of Expressive Gesture in Music and Dance
• Gesture Control of Music
• Gesture Controlled Musical Instruments
• Instrument Augmentation Using Ancillary Gesture for Sonic Effects
• Communication of Musical Gesture Using the AES/EBU Audio Standard
• Symbolic Objects in a Networked Gestural Sound Interface
• Interactive Sonification of Emotionally Expressive Gestures
• Instrumental Gestures and Sonic Textures
• Looking at Movement Gesture: Drumming and Percussion
• Gesture and Morphology in Laptop Music Performance
• On Development of a System for Gesture Control of Spatialization
• Incubator: A Gestural Audio Tool Using Glove Interface for Live Performance Mixing
• Gesture-Controlled Physical Modeling Synthesis with Tactile Feedback
• Toward an Affective Gesture Interface for Expressive Performance
• Gesture Control of Sounds in 3D Space
• A Wii-Based Gestural Interface for Computer Conducting Systems
• The Sound of One Hand: A Wrist-Mounted Bio-Acoustic Fingertip Gesture Interface
• Effect of Latency on Playing Accuracy of Two Gesture Controlled Continuous Sound Instruments without Tactile Feedback
• Gesture Control of Singing Voice, a Musical Instrument
• Soundstudio4D – A VR Interface for Gestural Composition
• In Search of the Motion Primitives for a Communicative Body Language
• On the Choice of Gestural Controllers for Musical Applications: An Evaluation of the Lightning II and the Radio Baton



{ Top Left } Aggregat, a multi-touch gesture-based digital audio workstation for sound production in Ableton Live 8. The interface allows up to three people to create music collaboratively in a single set.

{ Top Right } AirPiano is a gestural musical interface that controls virtual instruments by placing hands above the board. Distance and x-axis placement are measured to apply pitch and volume.

{ Left } Gestural Music Sequencer is a sequencer that uses computer vision to alter the pitch and volume of the musical output according to the x and y position of the tracked object, usually a light source.
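The caption's mapping is simple enough to sketch. Below is a minimal illustration (not the GMS source; the frame size, musical scale, and camera coordinate convention are my assumptions) of how a tracked object's position could drive pitch and volume:

```python
# Hypothetical sketch of a GMS-style mapping: the tracked object's x position
# selects a pitch from a scale, and its y position sets a volume.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers, one octave

def track_to_note(x, y, width=640, height=480, scale=C_MAJOR):
    """Map a tracked (x, y) position to a (midi_note, volume) pair.

    x spans the scale from left to right; y maps to volume, with the top
    of the frame (y = 0) loudest, as in typical camera coordinates.
    """
    ix = min(int(x / width * len(scale)), len(scale) - 1)
    volume = 1.0 - (y / height)          # 0.0 (bottom) .. 1.0 (top)
    return scale[ix], round(volume, 3)

# e.g. an object tracked near the top-left of the frame:
note, vol = track_to_note(10, 48)
```

A real implementation would smooth the tracked position between frames; this sketch leaves that out for brevity.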

Russell Maschmeyer. Once more, with feeling.



BLOG POST October 2, 2010

New Inspiration: IDMIL I found the Input Devices and Music Interaction Laboratory (IDMIL) in Montreal. They are an amazing source of materials and information. Marcelo Wanderley runs the group at McGill University, and seems to have generated about 80% of the literature on the subject of gestural music interfaces. In fact, he literally wrote the book on my thesis topic, New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. While this example (at right) is slightly more nutty and less consumer-focused than I hope my thesis will be, this is the kind of stuff that makes me feel like I picked the best topic on earth.

{ Above } Immortal-machine for méta-instrument from D. Andrew STEWART on Vimeo. Totally nutty and esoteric performance work. But, God, how awesome is that robo-instrument suit? { Left } New Digital Musical Instruments: Control and Interaction Beyond the Keyboard. I bought this book, but never managed to get around to reading it. One day …




Wrong Directions Notes from early in my development process. At this point I was still kicking around potential ideas within the music space and trying to determine the angle of my approach. For a short while I considered creating a swarm intelligence system for music.



E-MAIL
From: Russ Maschmeyer
Date: October 27, 2010
To: Nicholas Felton
Subject: Holy Shit!

{ Left } ‘Quick Intro to SoundPrism’ on YouTube.com. This little app blew my mind by reducing music making to simple pattern playing. SoundPrism strips away the need for explicit musical knowledge like note names, musical modes or chord construction and simply lets the user play patterns that always seem to sound good. Brilliant design.

From: Nicholas Felton To: Russ Maschmeyer haha! I saw that on your twitter. Still leaves you lots of room to play

From: Russ Maschmeyer To: Nicholas Felton It may even enable me to push more real-time melodic generation and sequencing back into the gestural arena instead of confining it to the screen. Lots to think about.



Presentation Notes Notes from two early presentations in the semester. I had a long battle trying to figure out who my audience or market was. I could never tell if this thing I was going to build would be best for knowledgeable musicians or the average person. I’m still torn.



BLOG POST December 10, 2010

NEW RADICALS

New radical person: Chris O’Shea
New radical organization: OpenNI

Chris O’Shea demonstrates his Air Guitar prototype, using Kinect. I felt as if I was in a bit of a race against the clock when I saw this video a whole month before I would even begin constructing my prototype.

Chris O’Shea demonstrates OpenNI after it was open-sourced by PrimeSense. OpenNI is the driver that allows skeleton tracking (the white lines inside the silhouette).



Motion Primitives Early sketches of potential gestural control systems for music, beginning with a theremin-like concept on the far left and migrating toward more high-level control parameters like tempo on the right.



E-MAIL
From: Nicholas Felton
Date: March 14, 2011
To: Russ Maschmeyer

Clive Thompson (@pomeranian99) 3/14/11 5:40 PM
Cool @danlevitin study: Variations in timing of a piano performance more important than loudness in emotional impact: http://bit.ly/htq7Jv

{ Left } As if I didn’t have enough respect for this man already, Daniel Levitin published this research in February of 2011, showing that the two main performance parameters that impart expression are timing and note velocity. I had arrived at the same conclusion only a few months earlier, without any scientific evidence to back it up. Levitin’s research vindicated my conclusions and allowed me to rest easy, knowing I had chosen to put users in control of the right parameters.
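Levitin's two parameters are easy to demonstrate in miniature. The sketch below is my own illustration (the deviation values are invented, not drawn from his data): starting from a rigidly quantized sequence, a performance deviates per note in timing and in note velocity.

```python
# Toy illustration of the two expressive parameters: per-note onset shifts
# (timing) and loudness changes (note velocity) applied to a quantized line.

def apply_expression(notes, timing_dev, velocity_dev):
    """notes: list of (onset_beats, midi_note, velocity 0-127).
    timing_dev / velocity_dev: per-note deviations, same length as notes."""
    assert len(notes) == len(timing_dev) == len(velocity_dev)
    performed = []
    for (onset, pitch, vel), dt, dv in zip(notes, timing_dev, velocity_dev):
        performed.append((round(onset + dt, 3),          # shifted onset
                          pitch,                         # pitch untouched
                          max(1, min(127, vel + dv))))   # clamped loudness
    return performed

quantized = [(0.0, 60, 80), (1.0, 64, 80), (2.0, 67, 80), (3.0, 72, 80)]
# Push the downbeat slightly late and lean into the final note:
humanized = apply_expression(quantized,
                             timing_dev=[0.02, -0.01, 0.0, 0.03],
                             velocity_dev=[5, -3, 0, 12])
```

The point of the sketch is that the pitches never change: only when and how loudly they sound.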



A Working Model By the start of the Thesis Presentation semester, I had arrived at the above simplified system model. I proposed creating the gestural engine and the visual feedback display. Every other piece had already been developed by the market.




04 RESEARCH

I love interviews. There’s something really pleasant about sitting down with someone, knowing what you’d like to talk about, and having them invite you into their world for a moment so you can see how they do things. Over the course of just a few weeks I conducted three ethnographic interviews and three standard interviews with a mixture of producers and musicians. I asked each of them the following questions:

• What interfaces/devices/programs/instruments do you use?

• How do you go about creating a new musical idea?

• How comfortable are you with leaving things to chance?

• What do you find challenging/annoying about current interfaces? What could be improved?

• What do you appreciate about current interfaces? What do they make effortless?

• When synthesizing sounds, what parameters do you find yourself tweaking most often?

• Describe the process you went through to learn these interfaces.

A very clear pattern emerged: both musicians and producers love digital instruments because they change the ways they approach and think about music. New interfaces or modes of interacting are inspiring. But they also struggle against these tools. They tweak, abuse, and otherwise trick them into producing more expressive, human performances.



Chuck Brody Professional Producer


{ Below } Chuck uses Ableton, a digital audio application, to compose entire songs. The interface works like a band rehearsal for him by allowing him to put in different phrases which he can turn on or off at any given time.

{ Right } Chuck’s audio effects rack, including compressors, equalizers, delays, reverbs and preamplifiers.

{ Below Right } Chuck’s Akai drum machine. Tapping on pads to the right plays drum samples from memory. Expression is limited to the audio samples in the library.

“I think people are experimenting in more of a live feel now, trying to figure out how to bring that classic live feel back into electronic sounds these days.”

“I like to switch my process up and use different sounds and different techniques and interfaces because it takes me in different directions and makes my stuff different.”

“I worked with another producer for a while who loved to cover the computer screen because so many people just watch the screen as they work and they look for things to be wrong visually.”


“Once you’re too focused on making everything perfect it gets to be a little lifeless I think.”



BRIAN Cassagnol HOME studio Producer


{ Below } Brian works entirely on the computer with digital plug-in effects—software versions of the equipment in Chuck’s effects rack.

{ Below Right } An audio channel plug-in designed to recreate the sound of a vintage recording console.

{ Right } Brian’s entire studio toolset consists of a keyboard controller with some drum pads, headphones and his computer. Using today’s virtual instruments, the small keyboard is all Brian needs to perform what would traditionally require an orchestra.

“My low budget setup here is about beating the computer. Technology makes it so that I can access things that were impossible to access 20 years ago, but it also funnels you into a computer-y sound.”

“You find yourself doing funny technique things to get it to sound more real and rewriting parts to sound more real. You have to write to the limitations.”

“They have these ‘humanizer’ settings on drum plugins and it’s like… just being random doesn’t humanize it. You play off of the grid in a very specific way.”


“The interesting thing to me: The whole technology boom with music opened up music creation on this level as a possibility to millions of people who just never would have had access or talent or drive enough to have gotten there.”



OLGA BELL electronic musician



{ Below } Olga enjoys using unconventional instruments like the keytar, a throwback synthesizer from the 80s that recasts the keyboard in the form factor of a guitar.

{ Right } Olga during a performance. As both the lead vocalist and keyboardist she’s tied to cumbersome, immobile instruments during performances.

{ Below Right } A sampler, similar to a drum machine. Olga records her own samples and then triggers them by tapping on the red buttons.

“I love that these things teach you how to think in an entirely different way about music.”

“You know what I think would be terrific? If everything was wireless. You’d think by now everything would be wireless but look at all these fucking cords!”

“The reason we added another person in the band is so that we can minimize how much we have to play to a track. So that we can retain our integrity as people who play instruments.”


“It’s the best feeling in the world when you can command something, a piece of equipment or program, enough to make it really do what you want.”



ALEX FEDER traditional musician



{ Below } Alex playing guitar while using his feet to switch on effect units (stompboxes) at his feet. Guitarists are some of the most mobile musicians.

{ Right } Alex (right) on tour as a guitarist for Enrique Iglesias.

{ Below Right } Alex as lead singer of The XYZ Affair, using hand gestures as a means of communicative expression.

“It’s easy when you’re playing to just feel what’s supposed to happen. It’s a conversation and it’s a live conversation that you’re not thinking about.”

“For me it either grooves or it doesn’t. It either makes you bob your head or it doesn’t make you bob your head. There’s an undeniable thing about that. Part of this is totally illogical. That’s why it’s going to be difficult.”

“Teachers never talked about groove and feel and time. It was always like ‘learn these scales, learn these chords.’ No one talks about this stuff. When you hit a certain point you’re like… ‘Oh, this is what matters.’”


“That’s almost your motto: this focuses not on what you’re playing, but how you’re playing it.”




05 EVOLUTION

It’s a bit silly to suggest that there was an ‘evolution’ stage in this year-long process. Every one of those three hundred and sixty-five days brought with it at least one new evolution. Nevertheless, I’ve tried to consolidate the major mood swings my project had during the Thesis Development class. Remarkably, I strayed little from my original hunches concerning pattern, deviation, and the power of gesture to add intuitive control to expressive parameters. I’ve included diagrams which illustrate my framing of the problem space, the prototypes that led me to the final concept, as well as a few personal epiphanies here and there. This collection of materials ends just past the beginning of the spring semester, as I change direction one last time, pick up a second advisor, Robin Bargar, and organize my goals for the execution of my prototype, MOTIV.



BLOG POST October 25, 2010

A MILE MARKER

Problem
Current digital music technology has little to no understanding of motion; therefore, it cannot understand or engender musicality.

Opportunity
Gesture recognition technology has created new relationship opportunities between musicians and digital music interfaces. These new, exciting relationships allow for a more direct connection between the convenience of computer music creation and the musical nuances of human pattern recognition and improvisation.

After some initial research and exploration, I wanted to synthesize some of my findings. First, I wanted to re-examine my assumption that a purely gestural interface could or would trump any previous interface approach. The more I thought about why gestures were so great, the more I began discovering why solid or tangible interfaces were great, too. So I started making a list (opposite page).

Some thoughts about the current interfaces being used for computer music began to stick out. I realized there were three predominant interfaces: keyboards, samplers, and sequencers. If you plot them (subjectively) across two axes, ease of entry and expressivity, you’ll notice a linear progression. The keyboard, modelled after an analog instrument, is the most expressive, but also incredibly difficult to master. Take anyone who has mastered a keyboard; chances are they first mastered a piano. Drum machines are easier to pick up, though still difficult to master. The easiest interface to pick up belongs to the sequencer and its various iOS and other touch-screen variants. You simply move objects or flip switches and a pattern of samples begins to play regardless of whether you touch anything else or not. Tempo and pattern length are easily selectable. This provides people a simple entry point to begin playing around. Even non-musicians can generate something interesting after only a few minutes of tinkering. The problem, however, is that these interfaces are extremely metronomic and inhuman sounding. The samples are always played on a grid, and adjusting envelope or volume dynamics on the fly is very difficult if not impossible.

“Why not combine the simplicity and learnability of sequencers with the dynamic power of physical gesture?”

This gave me an idea. Gesture is potentially great at setting dynamics for time, envelope, and volume. Why not combine the simplicity and learnability of sequencers with the dynamic power of physical gesture? This would provide a solution that is both highly learnable and expressive. Moreover, with a compact interface like an iPad as a base, users can sketch ideas from anywhere, bring them into the gestural environment—which could be a studio or a live performance stage—and add the dynamics layer on top of the base system.

{ Left } The first iteration of my instrument landscape, mapping expressivity against the ease of learning the instrument.



Picking Sides At first I had grand illusions about gesture being the solution to all music-interface ills, but after thinking and exploring a little bit I discovered that tactile & touch interfaces have their own strengths as well.



A FAILURE TO COMMUNICATE

In October, two presentations into Thesis Development, I realized I had a problem. Not everyone in the classroom had a musical background. Yet here I was, presenting my potential directions and ideas as if everyone knew the difference between a chord and a key. It left many of the students and faculty scratching their heads and turned Q&A sessions into music theory 101. Anti-good.

I knew I had to find some way to communicate my ideas without losing the audience on the ramparts of musical jargon. So I set about making a short set of organizing definitions, using plain language, that I would slip into my next presentation.

I started at square one by creating a simple organizing principle: “Music is made of patterns.” I played the class a clip of a sequenced drum beat. It was a simple, repeating rock beat, the kind you’ve heard in a thousand rock songs. It sounded like music, but it also sounded a bit dull. Then I elaborated: “Interesting music is made of changing patterns.” I played another beat. This one started out the same as the last, but the pattern changed partway through. People perked up, and it was pleasing to see the recognition on my audience’s faces that the beat was suddenly more interesting once a change was introduced.

It wasn’t enough to discuss music at a high level. I had grand plans of making an instrument, so I needed to say a few words about the craft of playing an instrument: musicianship. The way I see it, there are two sides to musicianship. To be a great musician, you’ve got to embody both.

The first side to musicianship is technique. Technique is all the stuff you learn in early music lessons. It’s both a physical understanding of how to produce sound from an instrument as well as a basic understanding of musical rules. It’s knowing what a scale is and how to play it. It’s knowing that if you string together a particular pattern of notes you get Mary Had a Little Lamb, while another string of notes gives you Pachelbel’s Canon in D. It’s being able to play those notes quickly enough so that someone listening could pick out the tune. Technique is knowing what to play. Computers are masters of technique. You tell a computer what order and how fast to play a set of notes and it will perform perfectly every single time. On the other hand, people struggle with technique. It takes most of us months or years before we stop regularly making major mistakes when trying to play. The practice required to achieve good technique is a major hurdle for people who are interested in playing music.

The second side to musicianship is expression. If technique is knowing what to play, expression is knowing how to play it. Great musicians add subtle variations in loudness and timing while performing a piece. These subtle expressive variations are a vehicle for the performer’s emotions, crafted in the moment of performance. They’re improvisations, and the emotions they convey are readily deciphered by even the least musically knowledgeable among us. Computers cannot generate expression in music because they have no understanding of the emotional significance of music. They can add random variations, but they can’t improvise with meaning. On the other hand, people are fantastic at expressive improvisation; even non-musicians. Musical expression is just like dancing. It’s a fundamentally human understanding of how music leads to movement and how movement carries emotional meaning.



BLOG POST November 1, 2010

Two Prototype Methods

This week I set out to draw up a prototyping plan and coordinate some further interview research for my thesis. The prototyping plan outlines a few specific areas of investigation I’d like to explore and some practical methods for exploring them. There are two levels of interaction that I’m looking at: the sequencing interface and the gestural expression layer. Each layer requires different prototype approaches.

The sequencing interface, arguably the most well-trodden layer, is logic-based. There’s a task at hand and likely a common process for accomplishing it. It has a tactile interface; there’s something to look at and touch. But there may be a lot to learn about the sequencing approach from people who haven’t dealt much with sequencing before. Paper prototyping will definitely be the best candidate there.

The gestural layer may be less adherent to a strict process or logic. I’m hoping it has much stronger links to reaction and improvisation. Call and response. What Paul Pangaro would call a “conversation.” There is no tactile interface. This makes prototyping a little trickier. My working theory is that I can use master musicians to simulate the gestural system. By having them react to participant movements (speed up, play harder, etc.) I can roughly suss out how people might want to interact with a musical system that adds expression dynamics based on movement.

Paper Prototype Questions
• Where do users want to start?
• Do users know how to start?
• How do users envision construction taking place?
• How can the sequence grow in complexity?
• Is there a common understanding of what can be constructed?
• How much do people know about beat basics?

Gestural Prototype Questions
• What elements do people feel add expressivity?
• What are the things that people feel they want to change via gesture?
• How do people want to signal tempo change?
• … groove change?
• … legato/glissando vs. new note?
• … harmony?
• … vibrato/tremolo?
• … note value?



The Instrument Landscape

As part of my third presentation I refined my instrument chart to illustrate a simple point: There’s an opportunity for digital instruments to become expressive if they utilize gesture the way traditional instruments do.

{ Chart } The instrument landscape, plotting expressive potential against technique requirements. Traditional instruments charted: violin/viola/cello, guitar, piano, saxophone, flute, drum set, bass guitar, recorder. Digital instruments charted: digital keyboard, drum machine/sequencer, Reactable, sampler, iPhone/iPad apps, Tenori-On, step sequencer. Annotation: “Digital Interfaces Meet Gesture Recognition.”

Typically, instruments fall into one of two categories: digital or traditional. Traditional instruments are acoustic on some level. Even an electric guitar relies on vibrating strings to produce a tone. Digital instruments rely entirely on integrated circuits and digital-to-analog conversion to produce sound. There’s little to no gestural input. The technique required to play a digital instrument is about as low as it can be. If you know how to flick a light switch or turn a knob, you’re golden. On the other hand, traditional instruments require far more technical knowledge to operate. Imagine picking up a guitar or an oboe for the first time. It’s not entirely evident how you’re supposed to play it. There’s almost always some coordination that has to take place between making a vibration (plucking a string, buzzing a reed) and shaping that vibration (holding the string against a fret or pressing certain keys).

Digital instruments have little expressive potential primarily because their interfaces are so boolean. Notes are either turned on or off. This leads to a “set it and forget it” mentality which results in a lack of expressive variation. Traditional instruments measure high in expressive potential because the shaping of the vibration is changed every moment. The guitarist won’t pluck the string the exact same way every time. Great musicians emphasize this inherent variation and use it to their advantage, creating incredibly expressive performances by highlighting differences.

Looking at the chart, a pattern begins to emerge. As instruments garner greater expressive potential, they are saddled with higher and higher technical requirements. My slow hunch told me that giving digital instruments gesture recognition would create a third instrument type: an expression instrument. This expression instrument would utilize the composing strengths of digital interfaces while employing gesture to imbue those compositions with variation and expressive meaning through intuitive movement.



BLOG POST November 15, 2010

A First Prototype

This week we were asked to come up with three concepts for what form our thesis could take. Then we were asked to use prototyping to take one of those concepts a step further. Since I have a pretty solid idea about “what” my thesis is, I decided to focus my brainstorming and prototyping muscle on the interface relationships between the screen system and the gestural system. I got some interesting results.

If I can muster three concepts like this one, I’ll be set. It’s a rhythmic instrument based on a tiered system of intensity. These tiers are embodied as concentric wheels. The Trike concept builds off of the common rotary sequencing model: a playhead, much like a clock hand, spins around a wheel at the assigned tempo; as it crosses paths with strategically placed note objects it triggers the associated note or sample. A good example is the iPhone app Spoke.

Trike builds on apps like Spoke by creating multiple levels of intensity. If your movement is minimal (figure A) it plays only the rhythmic samples in yellow; a simple basic rhythm. As your movement becomes more full-body (figure B) the wheel’s playhead stretches out in direct proportion to play the next tier of samples, increasing the complexity and intensity of the beat. And finally, as your whole body flails rhythmically (figure C) you reach the outer echelon of intensity and trigger more rhythmic elements such as cymbal crashes or synthesizer hits.

Because we can measure movement (the total portion of the body moving), velocity (average speed of body parts just before a sample strike), and the overall tempo of the movement (periodicity between troughs in movement), we can map these three variables to intensity (number of rings playing), note velocity (individual volume of each sample played), and macro-level tempo, respectively. That’s actually a pretty robust and responsive system already.

A couple of other features include the potential to shift samples based on intensity level by drawing connecting lines between the original sample and a displaced sample, as well as the ability to create different rhythmic sections, each with three levels of intensity. So imagine creating one Trike section that you could use for a distinct intro, first verse, and subsequent verses based solely on how intensely you dance to each one, and then a second one for similar but distinct chorus sections. Pretty flexible with minimal setup.

The greatest part, I think, is that it does nothing until you start dancing.

The Trike concept involves three concentric rings which build on rhythmic intensity.
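The three mappings described in the post can be sketched in a few lines. This is a hypothetical illustration, not MOTIV or Trike code; the thresholds, normalization, and scaling are all my assumptions.

```python
# Sketch of Trike's three gesture-to-music mappings:
#   body-movement magnitude -> intensity (how many concentric rings play),
#   average limb speed      -> note velocity (per-sample volume),
#   movement periodicity    -> macro-level tempo.

def trike_parameters(magnitude, speed, period_seconds):
    """magnitude and speed are normalized 0.0-1.0; period_seconds is the
    seconds per cycle of the dancer's movement. Thresholds are invented."""
    if magnitude < 0.33:
        rings = 1            # inner ring only: the basic rhythm
    elif magnitude < 0.66:
        rings = 2            # add the second tier
    else:
        rings = 3            # full-body flailing: all three tiers
    velocity = int(speed * 127)      # MIDI-style 0-127 loudness
    bpm = 60.0 / period_seconds      # periodicity -> tempo
    return rings, velocity, round(bpm, 1)

# Moderate movement at a half-second movement cycle:
rings, velocity, bpm = trike_parameters(0.5, 0.7, 0.5)
```

The appeal of this scheme is exactly what the post says: with magnitude at zero, no rings play, so the instrument does nothing until you start dancing.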



figure A

figure B

figure C

Far Left & Above: The Trike concept paper and physical prototype. A sequencer with three tiers of intensity (yellow, blue, red). As a user’s gestures become more intense, the sequencer adds tiers to its playback, making the rhythms more complex. This concept was the first to include a parameter I called “intensity,” a measure of compositional layering.



{ above and left } Sketches and paper prototypes for Hoop, a tiered gestural sequence performance instrument. { right } Instrument system diagram and gesture map along with an interface wireframe.

PROTOTYPE A: HOOP I started the concept presentation with a slight modification of my previous Trike prototype and thought through the system a bit more concretely. Hoop is a rhythmic sequencer with three levels of intensity. The performer drags and drops drum hits onto the wheel, spacing them so they create rhythms. As the performer dances, the vision system measures the degree of movement and applies it as intensity. If the performer uses small, slow movements, the intensity will be low and the sequencer will only play the hits in the innermost circle. If the dancer moves wildly the sequencer plays across all three intensities, stacking rhythmic elements on top of one another. I began by sketching, and worked toward a paper prototype.



SYSTEM DIAGRAM

The performer dances in front of the camera, which translates the world into pixels. Computer vision algorithms use those pixels to decipher musical parameters like envelope, panning/volume, vibrato/bend, tempo, and slide. Those parameters are then paired with sequenced notes in the sequencer and output as expressive music, which the performer reacts to, and the whole loop begins again.

{ Diagram labels } Input → computer (pixels) → gesture (magnitude, speed, periodicity) → music (intensity, velocity, tempo) → sequencer (note sequence, amplitudes, frequencies) → output.


{ above and left } Sketches and paper prototypes for Aqwire, a linear sequencer for melodies that made use of conducting gestures to shape expression. { right } Instrument system diagram and gesture map along with an interface wireframe.

Prototype B: AQWIRE The second prototype was a linear sequencer concept, better suited for making melodies. The linear array of notes allows you to see the “shape” of your tune. The concept involved stacking various sequenced parts together into a multipart song. Tap to turn notes on and off, and tap different sections to edit or to begin composing for that section. It was a pretty standard set of features for this kind of sequencer. I hoped the gestures would set it apart. Gesturally, Aqwire utilizes conducting gestures to add expressivity. I imagined a user actually holding a baton while using both hands to sculpt musical expression.

Whereas Hoop was aimed more at live performance, Aqwire attempts to solve a composition or audio engineering problem. The user could do multiple “takes” of gesture recording. The takes are represented at right by the wavy colored lines. The user could then fine-tune those gestural performances through touch gestures and create a keenly honed performance.

Aqwire attempts to make good on some of my original concepts, which cast the user as a sort of mixing engineer, but failed as a direction on the whole.
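One way to picture a gestural “take” is as a time-stamped stream of values for a single expressive parameter, recorded once and replayable or editable afterward. The sketch below is my own illustration of that idea; the parameter names come from Aqwire's system diagram, but the data structure is an assumption.

```python
# Sketch of recording and replaying one gestural "take": a list of
# (time, value) points for a single expressive parameter.

class GestureTake:
    """One recorded pass of a single expressive parameter over time."""

    def __init__(self, parameter):
        self.parameter = parameter      # e.g. "panning/volume"
        self.points = []                # (time_seconds, value), in order

    def record(self, t, value):
        self.points.append((t, value))

    def value_at(self, t):
        """Step-hold playback: the most recent recorded value at time t."""
        current = 0.0
        for pt, pv in self.points:
            if pt > t:
                break
            current = pv
        return current

take = GestureTake("panning/volume")
take.record(0.0, 0.2)
take.record(1.5, 0.8)
take.record(3.0, 0.5)
```

Fine-tuning a take, as the post describes, would then just mean editing these points through the touch interface before playback.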


SYSTEM DIAGRAM

The performer conducts in front of the camera, which translates the world into pixels. Computer vision algorithms use those pixels to decipher musical parameters like envelope, panning/volume, vibrato/bend, tempo, and slide. Those parameters are then paired with sequenced notes in the sequencer and output as expressive music, which the performer reacts to, and the whole loop begins again.

{ Diagram labels } Input → computer (pixels) → gesture (speed, hand position, baton position, periodicity, posture) → music (envelope, panning/volume, vibrato/bend, tempo, slide) → sequencer (note sequence, amplitudes, frequencies) → output.


BLOG POST December 8, 2010

New Ideas & Strategy We’ve spent these last five weeks generating tangible ideas (key elements/actors/functions) as well as considering business model contexts (customers/partners/value proposition/key activities/competencies). A lot to consider, and I wish the whole semester had been dedicated to what we’ve done these past five weeks.

Three advancements came out of it:

01 I realized that I want to make an instrument focused on performance, not composition.

02 I realized there was a hole in the tablet instrument marketplace for professional instruments. There are a lot of iPad instruments out there, but they either feel like novelties or powerful applications ill-suited for the environment. I wanted to make an iPad app designed to be used on a stage for expressive, intense performance.

03 Communication between musicians is a killer feature for live performance. The ability to pass performance and expression data back and forth between performers could make things incredibly interesting.

I managed to eke out two concepts: Hoop & Aqwire (discussed in the previous post). I was pretty happy with Hoop, but less so with Aqwire. Caught between Turkey Day, freelance work, and other school work I didn’t have the brainpower I needed to dive in and start ideating. I went into my presentation feeling lukewarm about my results, though I came out of it happy that my educational preamble about music was well received.

Skip ahead to Thursday, December 2nd, Design Management. Our in-class assignment:

“You’ve started a company. Create the first draft of a design brief for your imaginary design team. The project is your thesis. You have two hours.”

In two hours I had to compose a project overview, an audience analysis, a competitive review, a project scope and a set of business goals to explain my thesis idea to a team (albeit make-believe). This imaginary team would have to create my thesis based solely off the brief I provided. Design thinking lightning round. For whatever reason, this framing finally did the trick.

The following morning I had a great conversation with Nicholas Felton about all this. At that point I was still imagining multiple instrument applications, each of which communicate with the others. It seemed daunting as a single thesis project. He pointed out that it was entirely possible to make a single instrument that could be used in a variety of ways (for rhythmic playing or for melodic playing).

So, what am I making? I’m making an iPad-based, sequencer-style, multipurpose instrument that communicates performance data wirelessly with other iPads running the application to enhance communication and performances between multiple musicians. All said and done, I’m pretty happy with where I ended up.



BLOG POST December 21, 2010

Thesis Concept Wrap-Up The day has finally come. It’s the end of the semester and today I deliver the fruit of my research and ideation work: my final concept. After my last presentation, Paul Pangaro asked a simple question that didn’t have a simple answer: “What is music?” I was speechless. I spent the last couple weeks in a back-and-forth with Paul about that question, formulating an answer. He further asked the questions: “What’s the purpose of music? Whom does it serve? What is good music? Why should we care?” And after a little deliberation between Paul and myself, here’s the answer:

Elevator Pitch:

I’m making a digital instrument that uses gestural input to enhance expressive control, enabling computer musicians to perform emotively.

MUSIC IS AN ART INTRICATELY CONNECTED TO EMOTION. Great music is a tool for empathy, for a musician’s emotion to resonate with our own. Great musicians possess greater control of expression. Increasing a musician’s capacity for employing that expression means better music. FOR A MUSICIAN TO INCREASE CAPACITY FOR EXPRESSION: He/she must practice for years on traditional instruments or utilize a device that gives more direct control over expression.

DESIGN PRINCIPLES I came up with a few design principles over the past few weeks, ideas that would help focus my final concept and really make it something I was proud of.
• Standalone interface that’s augmented by gesture
• Built for live performance
• Communication between instruments
• One interface, multiple instrument roles



The Periodic Table

OF Expression { Below } Early iterations on the diagram exploring a means to communicate the dimensions of expression as they might relate to a particular instrument’s role in a composition.

{ Diagram } A Venn diagram with three circles labeled Melodic, Chordal, and Rhythmic, whose regions contain the expressive dimensions: Vibrato & Bend, Articulation, Velocity, Texture, Intensity, and Tempo.

Okay, so maybe it’s a Venn Diagram of Expression, but that doesn’t really have the same panache. The idea was to create a mapping of expression in music. There are three basic roles for instruments in music: Rhythm, Melody, and Accompaniment. That’s an oversimplification, but a useful one. Each of these roles uses different elements of expression to imbue a performance with emotion. Rhythm instruments use tempo, note velocity, and intensity to signal emotion. Melodic instruments use articulation (the length of the note), note velocity, vibrato & note bending. Chordal instruments utilize articulation, intensity, and sound texture or “color” to convey emotion. Connecting these essential elements to gesture will provide a means of direct control to digital musicians.



Making connections Some early concept planning for Trinity. Three distinct instrument roles in one app could communicate, iPad to iPad, expressive values of their respective performers. By sharing expressive values between performers, I thought it might be possible to create uniquely convergent qualities in a multi-user performance.



TRINITY for iPad Trinity for iPad is a new kind of instrument that combines the ease of a sequencer with the expressiveness of gesture. It’s great for composing music, but it’s built for expressive live performance. You can make rhythms in Beat mode, play melodies in Tune mode, or accompany a singer or other musicians in Chord mode. After you’ve chosen Beat, Tune, or Chord, you can create a sequence of beats, notes, or chords on the wheel. Create multiple sequence wheels to create different sections of a song. Hit the play button to test your sequence and rearrange, recompose, or make any other changes you’d like.

When you’re ready to perform, connect wirelessly to a Kinect and a laptop running Trinity’s computer vision application. The camera captures your movements and, while the sequence plays, applies your gestures as the tempo, note velocity, intensity, articulation, texture, vibrato & note bend. The expressive values are tailored to the particular instrument you’re playing.

Trinity is built for group performance and wirelessly shares expressive data between performers in real-time, further enhancing the performance experience. Beat likes to share timing data, Tune likes to share tonal data, and Chord likes to share sound textural data. The experience of Trinity will revolutionize the expressivity of digital music, providing a means for masters and novices alike to construct musical sequences and then impart the beauty of musical expression through gesture.

{ Left } A diagram of Trinity’s three roles: Melodic, Chordal, and Rhythmic. Trinity can be used as a melodic, chordal, or rhythmic instrument. Each one communicates its expressive values to other performers, allowing for a real-time sharing of expression.



{ Right } The performer’s gestures are picked up by the Kinect and interpreted by the Trinity desktop application, which is linked via Bluetooth to the Trinity app for iPad. The iPad app provides the note sequence while the desktop app provides the expressive nuance.



The Ultrasounds

To illustrate a use-case for the Trinity concept I created a story about a band. I hoped to position Trinity as an instrument for those who want to make popular music, but also enjoy thinking about performance and sound-creation in nontraditional ways.

Leila formed The Ultrasounds just over a year ago after bonding with Barry and Patrick over their love of LCD Soundsystem and Daft Punk. They want to make digital sounding dance music with a great stage performance.

Leila is a talented singer and has been performing for three years. She can play the keyboard well enough to write great songs, but she has never mastered any instruments.

Barry’s talent is programming great dance beats using drum machines and samplers. He likes to play extra drums during live shows to add some life to his beats.

Patrick has played cello & violin in classical music groups before, but joined the band because he was interested in trying out some different styles of music.



They’ve written some great songs, but they’ve been frustrated with their stage performances. Most of the on-stage action involves them pushing buttons to trigger sequences.

The crowd always looks pretty bored.

Barry peruses the new releases on the App Store and discovers a new instrument app called Trinity. It uses the Microsoft Kinect and his iPad to create expressive digital music.

Being curious about new digital instruments, he downloads Trinity to his iPad and opens it up. He finds it’s really easy to compose interesting beats.

He decides to try the gestural control and follows the in-app instructions for pairing the Kinect with his iPad.



He does a quick tutorial and he’s amazed! The beat has come alive! The rhythms are responding to his dancing, getting louder and softer, moving faster and slower, even growing more complex as he increases the intensity of his movement.

He tells Patrick about it the next day before rehearsal starts. Since it’s only $14.99 (cheaper than a pack of bass strings!) Patrick downloads Trinity to his iPad as well.

He chooses the “Tune” instrument and starts playing around with the synthesized violin sound. He composes a musical sequence to match Barry’s beat.

Patrick syncs with Barry’s Kinect as well. The instruments share the composition data and synchronize the tempo and key.

Just then, Leila walks in to the rehearsal space…



… and sees Barry dancing up a storm to a beat that seems to move with him. It looks like Patrick is conducting. The music swells and moves with his gestures. It sounds really emotive, and it is mesmerizing to watch.

They tell Leila about Trinity and agree she should buy an iPad just to get this killer instrument app! She likes that she can take her iPad with her to capture musical inspiration.

A few weeks later they debut a new song composed and performed on three iPads running Trinity. The Ultrasounds feel more expressive than ever …

… and it shows in the reaction of the crowd. The audience is dancing up a storm and is mesmerized by the stage performance.

They’ll certainly be composing more songs using Trinity in the future, and may even bring fans up on stage to help them perform the songs!



Gone to Florida. Don’t call. Our second concept presentation signalled the end of Thesis Development and the start of the winter break. I spent the better part of two weeks at my parents’ home in Florida, reflecting on the semester and eating my fair share of home-cooked meals. Never underestimate the power of home cooking to clear your head. I returned to New York after the new year, feeling uneasy about my direction. Paul Pangaro took charge of our Thesis Presentation class and quickly prepared us for a semester of clarifying our ideas and preparing our prototypes. Clearing my head proved incredibly valuable. In the first week of the Spring semester I changed direction and rediscovered my project priorities.



BLOG POST January 19, 2011

A Change of Direction Coming out of last semester, I felt great about where my thesis was, conceptually. When I thought about the possibilities generated by an instrument that could play expression instead of notes I got excited. I still get excited. What was still cloudy and getting even cloudier was how exactly I would prototype something like this.

My naïve plan was to start in iOS, building a rudimentary app using the Stanford MoMu instrument-making library. The goal was to end up with an iOS app I could (hopefully) put into the App Store. Somehow this goal took precedence over getting as far as possible with the prototype concept in general. It was suddenly all-important that it live on an iPad. So much so, in fact, that in the back of my mind I had actually begun considering forgoing the use of Kinect and computer vision altogether.

This week I had a “road to Damascus” moment, realizing how ridiculous that course was. It’s not in the least bit important that the prototype I reveal at the end of this semester live on an iPad. Even in the best of circumstances it might not even make sense on an iPad. Think about it… here you are waving your arms and moving about, and then in the middle of a performance you’ve got to stop and manipulate an interface on an iPad? The best chance of creating a robust system for musical expression lies in measuring the movements of the whole body, or parts of the body in particular, something that computer vision provides a great solution for. The accelerometer in a single iOS device… doesn’t. So I’ve changed my prototyping course. Instead of beginning to work in iOS creating an app, I’ll begin by utilizing a much more flexible set of environments that will get me up and running with the many pieces of this project faster: openFrameworks, Max/MSP, and Kinect.

Prototyping Checklist
• Get Kinect up and running (Check!)
• Get OpenNI up and running for skeleton tracking (Check!)
• Get OpenNI sending OSC values into Max/MSP (Nearly!)
• Experiment with x, y, & z positions, velocities, vectors, accelerations, and movement periodicity to determine the best mappings for adding expression to sequenced pieces within Max/MSP (That’s a big one)
• Concretize and fine-tune expressive mappings (Scrubbing it clean)

Of course that leaves out the actual sequencing interface itself. What will the application look like? In the previous plan it had become the center of the prototype. Don’t get me wrong, I believe a stunning visual design is important. But in terms of demonstrating the concept, I’d much rather have a prototype that someone could actually play—even if it’s ugly—than show them a beautiful app that does almost nothing.

That being said, I do plan on creating a full set of visual designs and user flows for the future consolidated application. Static designs, however, will get done far faster once I know what I’m working with functionally.



BLOG POST January 24, 2011

A Collection of Goals Paul Pangaro put me in touch with Robin Bargar, who I’ll be meeting with later this week to discuss my thesis. I’m really excited for the opportunity as Robin seems uniquely qualified to tell me if I’m completely crazy or not. In preparation, Robin asked me to put together a summary of my goals, both primary and secondary, as well as an articulation of how I plan to accomplish them, how I will know when I’ve accomplished them, and any related questions.

Primer: Music is an art intricately connected to emotion. Great music is a tool for empathy, for a musician’s emotion to resonate with our own. Great musicians possess greater control of expression. Increasing a musician’s capacity for employing that expression means better music.

Hypothesis: For a musician to increase their capacity to control expression, he/she must practice for years on traditional instruments or utilize a device that gives more direct control over expression.

Primary Goal
Create a digital instrument that provides an intuitive enhancement of expressive control.

Plan: Use computer vision to allow for gestural control of the expressive elements of a pre-sequenced musical piece.

Success: Through expressive gestural control, one should be able to generate multiple emotional variations from a single pre-sequenced piece.

Questions: What are the primary musical variables that create expression (tempo, velocity, timing, articulation)? Which gestural mappings to expressive variables are the most intuitive?

Secondary Goal
Create a single interface that can be applied to multiple instrument roles.

Plan: Create an interface whose elements (grid structure, note selection, visual feedback) can be applied easily to melodic, rhythmic, and chordal/accompaniment constructions.

Success: A musician would be able to compose all the musical parts for a composition having only learned a single interface.

Questions: Is melody, rhythm, and chordal/accompaniment a robust enough list of instrument roles? Are the expressive elements different for each role? If so, how do they overlap?

Secondary Goal
Build an instrument for live performance.

Plan: Build a system that is portable, works in a variety of stage environments, is quick to set up, and doesn’t crash or otherwise fail to operate mid-performance. Build a system that feels immediately responsive to a performer’s actions. Create a visual feedback system for the musician, displaying the control parameters and the musician’s real-time input.

Success: Can be carried by a single person. Takes < 10 minutes to set up. Has system fail-safes in place to make sure glitches go as unnoticed as possible. Must have a response time <= 1/16 note at approximately 120 bpm. Visual feedback informs the performance and allows for a better understanding of the correlation between movements and expressive outcome.

Questions: What fail-safes can be put in place to ensure a continuous performance? Latency between movement and its capture and interpretation already seems to be somewhat of a problem. How can I either lessen the latency as much as possible and/or create gestural mappings where latency is less of an issue? Is it important that the performer see their body position reflected in the visual feedback? Or is it more useful if feedback is abstracted into the pertinent variables?

Secondary Goal
Afford constructive communication between instruments.

Plan: Determine which expressive qualities can/should be shared between multiple instrumentalists. Create a system for visual feedback that makes clear the distinction between what is being generated through my gestural input and what is being shared and used from your performance.

Success: Two or more musicians could enhance a total performance by applying the output from one performer’s instrument to the expression of a second, third, or fourth instrument. For example, I could share my note velocity data with you, so that our sequenced melodies stress their notes in the same way. You could share your articulation data so that my notes are staccato and legato when yours are.

Questions: This seems trivial to me on the surface, but is the simplicity of this the key? Is it more useful to keep it simple and allow the way it’s used by musicians to define its worth? Or should I be thinking more deeply about what elements are communicated or how they’re communicated? I.e., is it smart to have a 1:1 pairing? Or could one instrument have a complementary reaction to incoming shared data rather than ape it?
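The response-time criterion above has a concrete number behind it: at 120 bpm a quarter note lasts 60000/120 = 500 ms, so a sixteenth note, and therefore the latency budget, is 125 ms. A trivial helper (mine, not from the thesis code) makes the arithmetic explicit:

```cpp
// Duration of one sixteenth note in milliseconds at a given tempo.
// At 120 BPM a quarter note (one beat) is 500 ms, so a sixteenth
// is 125 ms -- the latency budget named in the success criteria.
inline double sixteenthNoteMs(double bpm) {
    double quarterMs = 60000.0 / bpm; // one beat
    return quarterMs / 4.0;
}
```

Any capture-plus-interpretation pipeline slower than this budget would smear gestures across note boundaries.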



BLOG POST January 31, 2011

Meetings and Adjustments

This past week I had some great conversations and some important realizations.

Requirements On Thursday I made a requirements document, a simple list describing each element that would have to be accomplished for me to complete my thesis. Going into the exercise I imagined I would create a two-part system: a sequencing system and what I was calling the “expression engine”—no relation to the CMS system—which would handle the gestural input. After listing out the requirements for the sequencing system and realizing it was three to four times longer than the expression engine, it suddenly occurred to me that it might not be practical. In a conversation with Nicholas Felton (my advisor) on Friday, I spoke about my misgivings. He encouraged me to make a hard decision and drop it. I thought about it for a second and realized that deciding instead to focus on making the expression engine opens up more interesting possibilities. If it’s not tied to a particular sequencer then perhaps it could be incorporated into the workflows of other sequencers. Perhaps it could become an easily adoptable add-on or augmentation of the devices that musicians are already using. Thinking of the expression engine as a discrete instrument makes it a more portable, compelling idea, and without a pretty sequencer interface to distract the audience on presentation day, it might make the actually significant part of what I’m developing clearer.

Josh Davison On Saturday, I had my second Skype chat with Josh, a Chicago native and computer musician. I caught him up on my thinking and got some great feedback. He helped me realize that a lot of the expressive values I’ve been mulling over (velocity, tempo, texture, intensity, articulation) are more applicable to all instrument roles than I had previously assumed. For instance, I had originally assumed “intensity” was a value best left to chordal and rhythm instruments, but intensity could also be applied to melodic instruments in a few ways. Complementary notes could be added into the melody that match the key and current explicit notes, instruments of different timbres could be added playing in harmony with the current notes, or overtones could be added to the explicit notes, making the basic sound richer in texture. So I’ve re-examined my siloing of expressive values. Perhaps I’ll just give musicians a list of values and they can turn their control of them on and off as they please. It’s perhaps less elegant, but also perhaps more powerful.

Robin Bargar This very morning I met with Robin Bargar, one of the early pioneers in the field of virtual reality, a musician, and Dean of Technology & Design at City Tech in Brooklyn. He’s a very sharp man. He encouraged me to be very up front about who and what I’m making this expression engine for. The field of music and digital music control is large and has a pretty rich history. There’s a lot of scientific research within the field, he pointed out, and it would be important for me to be explicit about my assumptions around what kinds of music or approaches to music this engine is aimed at. In other words, clearly define the constraints (musical or otherwise) under which I’ll test my system to determine success or failure.

He also gave me some insight into the work he was doing in the early ’90s re: virtual reality. He prescribed a technique he has employed: separate the testing of the two parts of the system.

01 The control schema that actually affects the expressiveness of the music
◊ What elements comprise expression (velocity, tempo, articulation)?
◊ What values comprise those elements (velocity is comprised of a single numeric value between 0 and 127, but articulation may be comprised of multiple values which vary depending on the instrument; the same could be said for intensity)?

02 The movement mapping that creates those values
◊ Does the average velocity of my arms at any given moment determine the velocity value? Or should it be the magnitude of movement in all of my joints?

It all comes down to which mapping feels more appropriate in testing.

Robin encouraged me to get part one figured out in a week or so using a simple sequencer setup and direct (we’re talking knobs here) control over value setting. This would help me figure out which elements are good candidates for creating expressive outcomes and what values they require. The next—and much harder—part is figuring out how gestural input can yield a sufficient amount of those values to yield expression.



BLOG POST FEBRUARY 22, 2011

What’s in a Name? I’ve probably lost a few of you along this long and winding road. I’m sorry. Today, I aim to clarify. Below, you’ll find as clear a description as I can make at this point as well as the new (still impermanent) name for my project. Think of it as the next level up from an elevator pitch:

WHAT IS MOTIV?

New digital music making interfaces lack affordances for musically expressive control, depriving musicians of their innate ability to emote through performance. MOTIV gives digital musicians expressive control by interpreting their physical gestures in real-time, on stage, during the playback of a composition. Using an adaptive computer vision system, MOTIV puts musicians in control of the tempo, intensity, note velocities, articulations, pitch bends and vibratos in the moment, giving way to a musical conversation with surprising and expressive results. For those of you who are curious, MOTIV stands for “Musically Oriented Translation of Independent Vectors.”




06 EXECUTION

This was the moment I had been waiting for all these long months. I finally had the green light to dive into the code and start building something for real. I couldn’t have been more excited and I couldn’t have been more nervous. I was embarking on an entirely new programming language, C++, which I had never used before. I had to navigate the perils of installing openFrameworks as well as find some way to hack the Kinect. I knew a little bit about object-oriented programming, having played around in Processing the previous year, as well as having a bit more in-depth knowledge of ActionScript from all my years in advertising. But I had no idea if my meager knowledge would carry me where I needed to go. Thanks to the efforts of the open-source community, the Kinect was hacked days after its release in November of 2010, and the code was made available almost immediately for the hacker community to play with. I owe a huge debt of gratitude to the open-source community and the brilliant coders like Zach Lieberman and Theo Watson who work to make things easier for the greener makers among them. With a kick in the right direction from Robin Bargar at just the right time, it turned out to be a mightily smooth ride.



BLOG POST January 18, 2011

Oh, Glorious Day

{ above } A split-screen view of the first Kinect hack I tried. The left pane is the color-coded depth map seen by the Kinect’s infrared camera. The pane on the right is the output from the RGB camera.

Today I got up and running with Kinect. Tomorrow, who knows? I’m meeting with my technical advisor tomorrow. We’ll discuss options for world domination and fill in the holes in the following equation:

pwn Kinect + [something] = world domination


BLOG POST January 23, 2011

A Skeleton to Call My Own

{ above } Here’s Jessica’s silhouette and digital skeleton. I installed many versions of openFrameworks before finally finding one that worked with Roxlu’s Kinect hack addon.

After about a week of what seemed like slamming my head into an impenetrable wall of code I finally got Roxlu’s ofxOpenNI addon to run! I had another version of OpenNI running in the Processing environment, but running it in openFrameworks gives me A LOT of room to grow.

Next steps include:
• Get ofxOSC addon up and running
• Stream OSC data to Max/MSP using ofxOSC
• Obtain real-time coordinates of skeleton joints (x, y, z space)
• Calculate vectors, velocities, and accelerations of joints
• Create a Max/MSP sequencer patch that receives movement values
• Experiment with mappings of movement values to musical values
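The velocity and acceleration items above boil down to finite differences between successive Kinect frames. Here is a minimal sketch of that math; the names are my own illustration, not the ofxOpenNI API:

```cpp
#include <cmath>

// A joint position in 3D space, as reported once per frame.
struct Vec3 { float x, y, z; };

// Velocity vector between two frames taken dt seconds apart.
inline Vec3 velocity(const Vec3& prev, const Vec3& curr, float dt) {
    return { (curr.x - prev.x) / dt,
             (curr.y - prev.y) / dt,
             (curr.z - prev.z) / dt };
}

// Acceleration is the same finite difference applied to two velocities.
inline Vec3 acceleration(const Vec3& vPrev, const Vec3& vCurr, float dt) {
    return velocity(vPrev, vCurr, dt);
}

// Speed: the magnitude of a velocity vector.
inline float magnitude(const Vec3& v) {
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}
```

Running this per joint, per frame, yields the raw movement values the Max/MSP patch would receive over OSC.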



BLOG POST January 27, 2011

First Expressive Tests

My First Kinect Hack

I’ve made some great coding progress so far utilizing openFrameworks and its ofxOpenNI and ofxOSC addons with a Max patch using the CNMAT objects. It took a week or so to get OpenNI running, but once I did, the final demo came together in a single day! Can’t wait to start devising some expression experiments.

{ above } The first mapping of skeletal joints to expressive parameters. Here my friend, Clint, changes note velocities using the y-position of his hand. { top right } My first attempt at hacking the Kinect using OSCeleton { bottom right } My first test run of openFrameworks with ofxOpenNI which takes the Kinect output and creates a digital skeleton.

ofxOpenNI, Max/MSP Expression Test



BLOG POST FEBRUARY 5, 2011

Max/MSP + Wacom Hijinks

Wacom & Basic MIDI { Left } Using Max/MSP with my Wacom tablet I was able to manipulate MIDI, changing a song’s tempo and pitch bend in real time.

In follow-up to a meeting I had with Robin Bargar on Monday, I spent this week constructing a suitable control environment utilizing Max/MSP and my Wacom tablet. Essentially, I’m using my Wacom pen and the parameters it outputs (x pos, y pos, x tilt, y tilt, pressure, z-axis, etc.) as real-time control knobs to manipulate different expressive parameters (tempo and bend to start). The thought goes: if I can construct a suitable format for controlling expression with this level of control, I’ll have a much better chance of controlling expression using bodily gesture with the Kinect.
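To illustrate the kind of pen-to-MIDI mapping described above (my own sketch, not the actual Max patch, which builds this out of patch objects rather than code): MIDI pitch bend is a 14-bit value from 0 to 16383 with 8192 as the center, so a normalized pen parameter can be scaled onto that range:

```cpp
#include <algorithm>
#include <cmath>

// Map a normalized control value in -1..1 (e.g. Wacom pen x-tilt)
// onto the 14-bit MIDI pitch-bend range: 0..16383, centered at 8192.
inline int toPitchBend(float control) {
    float c = std::clamp(control, -1.0f, 1.0f);
    int bend = 8192 + static_cast<int>(std::lround(c * 8191.0f));
    return std::clamp(bend, 0, 16383);
}
```

A neutral pen position leaves the pitch untouched at 8192; full deflection reaches the top of the bend range.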

Wacom & Robust Virtual Instruments { Left } An extension of the above video. This time the instruments are in Logic and I can control multiple instruments at once.



BLOG POST FEBRUARY 16, 2011

Expressive Control: Velocity, Bend & Vibrato

{ above } With my Wacom prototype I gain control over velocity, bend & vibrato in Max/MSP. Things start to feel expressive.

After a frustrating week trying to get some real-time control over a MIDI file’s tempo during playback I decided it was time to move on, try to keep to my schedule, and begin control work on note velocity, pitch bend, and vibrato. Luckily, I did that all in one night. I have until Monday or so to gain control over articulation. Hopefully I can finish that quickly and get back around to finishing tempo. Here’s the video proof from last night’s work.



BLOG POST FEBRUARY 24, 2011

{ Right } I present the complete Wacom prototype with expressive control over articulation and intensity. I also put together a better continuous algorithm for tempo control during playback.

Basic Control Prototype Complete! By Jove, I’ve done it! With the final element of expressive control under my belt, I’m looking forward to getting the gestural control experiments under way. This is when it starts to get exciting, people! I’ve also hatched a plan to begin working with an electronic musician, or perhaps a few, to create compositions with my tool in mind, to test how they’d like to interact with the application, and to see if anything interesting comes out of it. I’ve got a few folks in mind. More on that when I have it. For now, enjoy the show!
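One plausible reading of a “better continuous algorithm for tempo control” is smoothing, so the playing tempo glides toward the performer’s target instead of jumping on every update. This is my own illustration of that idea (a one-pole filter), not the actual Max patch:

```cpp
// One-pole (exponential) smoothing toward a target tempo.
// alpha in (0..1]: higher = faster response, lower = smoother glide.
struct TempoSmoother {
    double bpm;   // current, smoothed tempo
    double alpha; // smoothing coefficient

    // Call once per control update; returns the new smoothed tempo.
    double update(double targetBpm) {
        bpm += alpha * (targetBpm - bpm);
        return bpm;
    }
};
```

With alpha = 0.5, a jump from 120 to 160 bpm moves halfway each update (140, then 150, …), converging smoothly instead of lurching.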



BLOG POST MARCH 4, 2011

We Are Go for Launch A few things happened this week. Jack Schulze and Matt Jones of BERG stopped by the SVA studio

{ above } I begin connecting my Max/MSP control algorithms to gestural control. Here we control the velocity of the notes played in the MIDI file by mapping them to the velocity of your right hand. Nicholas Felton guest stars.

to lead some students in a week-long workshop. It was amazing. We made nonsensical product drawings, re-imagined the average household thermostat as something you might see in a book about kooky Japanese inventions by making paper prototypes, and did a bit of technological material research to feed our craft. All of it was well worth losing a bit of time to focus on thesis. Nonetheless, I still managed to hit a major milestone in my thesis prototype. I was able to calculate the three-dimensional velocity of a user’s right hand (chosen somewhat arbitrarily… I’ll eventually be performing these calculations on all the virtual joints). I wired that velocity directly to the note velocity parameter of my Max/MSP patch et voilà!
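The hand-velocity-to-note-velocity wiring described above can be sketched in a few lines. This is an illustration under assumed units and thresholds (meters per second, an arbitrary 2 m/s full-scale speed), not the actual patch, which receives these values over OSC:

```cpp
#include <algorithm>
#include <cmath>

// Map the 3D velocity of the right hand onto a MIDI note velocity
// (0..127). maxSpeed is the hand speed that should produce full
// velocity; 2 m/s here is an illustrative guess, not a measured value.
inline int handSpeedToVelocity(float vx, float vy, float vz,
                               float maxSpeed = 2.0f) {
    float speed = std::sqrt(vx * vx + vy * vy + vz * vz);
    float norm  = std::clamp(speed / maxSpeed, 0.0f, 1.0f);
    return static_cast<int>(std::lround(norm * 127.0f));
}
```

A still hand plays the sequenced notes softly; a fast-moving hand drives them to full velocity.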



{ Right } Irvine Brown’s project “Singing Sock Puppets.” A brilliant use of documentation techniques to present an idea simply and effectively.

Also, after speaking with the fine gentlemen of BERG about my thesis, they gave me a lot of great feedback and encouraged me to consider how I could get this idea across in a video, without having to explain it. i.e. How could I do something like this:

Singing Sock Puppet Now, admittedly, that’s a simpler device than what I’m constructing. But you get it almost in the first instant. You get why it’s great by the smile on her face and the way she starts adding her own performative gestures, which really make this gadget sing (forgive me). I don’t need to know how it works. I just see that it does. How can I do that?



BLOG POST MARCH 10, 2010

This Is Getting Intense At the encouragement of the gentlemen of BERG I began thinking about how I might communicate this concept without having to explain it. No epiphanies on that yet, but it did lead me to thinking about the system’s visual feedback. I think the system’s visuals could do a lot to communicate what’s happening without explanation. I started by sketching. There are already some great ideas in there. I’ll be creating a lot more of those little sketches before I’m ready to actually make anything happen, though. Working in openFrameworks on Monday, I successfully determined the velocities of all the virtual Kinect joints, and derived the magnitude of the body’s movement from that information. I also did a bit of visual graphing to understand the patterns of my own movement. Then I managed to actually map the magnitude of my movement to this thing I’m calling “Intensity.” Intensity in my system really just means how densely layered the composition is. The more “intense” a piece becomes, the more layers of instrumentation are added. Enjoy! I’m off to walk around Austin and steel myself for long nights and meeting lots of cool people.
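The intensity idea can be sketched in a few lines. This is my own simplification, with invented thresholds, not the actual MOTIV code:

```cpp
#include <numeric>
#include <vector>

// Sum the per-joint speeds into one body-wide movement magnitude.
double bodyMagnitude(const std::vector<double>& jointSpeeds) {
    return std::accumulate(jointSpeeds.begin(), jointSpeeds.end(), 0.0);
}

// Map magnitude to a number of active instrument layers: the more
// "intense" the movement, the more densely layered the composition.
// The threshold values here are illustrative assumptions.
int activeLayers(double magnitude) {
    if (magnitude > 30.0) return 3;  // full arrangement
    if (magnitude > 15.0) return 2;  // add a second layer
    return 1;                        // base layer only
}
```

Small, contained gestures keep the arrangement sparse; big whole-body movement pulls the full instrumentation in.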


{ Above } I present some rudimentary visual feedback so that users can see the effect of their movements within the system. I also add control over the magnitude of movement and map it to intensity.



Interface Sketches A few of the post-it note sketches I used to quickly ideate some options for the visual feedback system. I wanted to chart the effects of the gestures in a way that directly tied to the body. What would it look like to visualize musical super-powers?



BLOG POST MARCH 12, 2010

Calling All Collaborators

Today I had an epiphany, and it all started with a lie.

This morning I attended what I thought was a presentation on community building at SXSW. It turned out to be less of a presentation, and more of a sharing and discussion group about the communities we were building. Uh oh. I suddenly realized how far I had sat down from the exit doors. I hadn’t come prepared to talk about the community I was building. I wasn’t even building a community! So when it came time for me to share… I told a little lie. Well, I’m building a new kind of instrument, so I’m trying to build a community of musicians to play with it and help build it, together, into something really valuable. It was a little white lie—until I heard myself say it. Then I realized how ridiculous it was that it wasn’t the truth. It immediately dawned on me how transformative turning full-force to the music & developer community could be, especially right now. To date I’ve held back from opening up to others. Until recently I was still forming my vision for the project. Without a clear vision, opening up to a community—any community—would probably have led to confusion and frustration. It might have been exciting in some regards, but would have lacked productive direction. Ultimately it might have killed what I believe could be a fundamentally transformative concept in digital music performance. I needed some time to figure out what it was that I believed in.

Now—having developed some of the rudimentary principles as a prototype, and having a clear vision for MOTIV, I think it’s the perfect time to invite others to play, develop, and otherwise shape the experience. Community is the driving force behind the development of WordPress, openFrameworks, and the Arduino physical computing platform (not to mention all the Kinect hacking happening as well). Each one of these projects relies on the collective shaping power of the community and a directing vision of a core group in communication with that community.

So, today, I’m issuing an open call on meetup.com to musicians and developers alike. Let’s get together, play, and develop MOTIV into an intuitive, powerful, and expressive musical tool. Let’s collaborate! If you have friends who fall into one of those categories, spread the word!



{ Above } The official MOTIV Musicians & Developers meetup.com page, with over 30 members only a few weeks after launch.




07 RESULT

I was giddy when I first connected gesture to the expression control system in Max. It was a major milestone. At the end of that week I videotaped Nicholas Felton controlling the velocity of the notes in a song with just the speed of his arm movements. Within seconds he had the feel of it and was using it expressively. He was improvising. There he was, someone who wasn’t a musician, expressing himself through music. He had a big smile on his face. I would see that same enormous grin over and over as I tested MOTIV out on various audiences. There’s this amazing moment when you see people ‘get it’ and their eyes get wide and they start experimenting with their gestures. It’s brilliant to watch. As I brought in the two other core parameters of expression, intensity and tempo, the experience became even more immersive. Performers were concentrating and conversing with the music. Something I had only hoped for a few months before was happening with every new user.





MOVE THE MUSIC.



{ Diagram } A digital sequence (mysong.mid) and the four expressive parameters: Tempo, Velocity, Attack, Intensity.

Compose music on the digital device you already love—a keyboard, sequencer or a drum machine. Then save it as a digital sequence, like a .midi file.


Open MOTIV and load your digital sequence, then select the expressive parameters you want to control during the performance.



Connect your Microsoft Kinect to your computer and start moving. MOTIV will track your gestures and turn them into musical expression.

MOTIV weds your expression to the sequence in real-time, sending it to your virtual instrument while visualizing your input for the perfect performance.



MOTIVATORS Two users, Jessica and Dave, experiencing MOTIV for the first time. These early test runs were held at the Interaction Design MFA studio. There are some really beautiful moments in both videos when you see the experience come into focus.





TRACKING VELOCITIES

// 01 Set the coordinates for the hand velocity bubble
if (rLimb.end_joint == 15) {
    velocityX = pos[1].X * 2.25;
    velocityY = pos[1].Y * 1.875;
};

// 02 Set the coordinates for the intensity valences
if (rLimb.end_joint == 3) {
    intensityX = pos[1].X * 2.25;
    intensityY = pos[1].Y * 1.875;
};

// 03 If we’ve got a baseline, calculate velocities
if (rLimb.oldPos.X != 0 && rLimb.oldPos.Y != 0 && rLimb.oldPos.Z != 0) {
    // 04 Set the current joint position / point
    XnPoint3D newPos = b.position;
    // 05 Calculate the distance between the old and new points
    float x_diff = fabs(newPos.X - rLimb.oldPos.X);
    float y_diff = fabs(newPos.Y - rLimb.oldPos.Y);
    float z_diff = fabs(newPos.Z - rLimb.oldPos.Z);
    // 06 If there was a change in one of the points...
    if (x_diff != 0 && y_diff != 0 && z_diff != 0) {
        // 07 Find the distance between the old and new point
        double distance3D = sqrt(x_diff*x_diff + y_diff*y_diff + z_diff*z_diff);
        // 08 End the timer
        rLimb.endClock = clock();
        // 09 Calculate the time it took
        double clockDif = (rLimb.endClock - rLimb.startClock) / CLOCKS_PER_SEC * 1000;
        // 10 Calculate the velocity
        rLimb.velocitySon = distance3D / clockDif;
        rLimb.velocityAvg = (rLimb.velocitySon + rLimb.velocityDad + rLimb.velocityGrandad) / 3;
        // 11 Store the average velocity in the array of velocities
        velocities[rLimb.end_joint] = rLimb.velocityAvg;
        // 12 Sum the velocities array
        magnitude = 0;
        for (int k = 0; k < 25; k++) {
            magnitude += velocities[k];
        };
        // 13 Turn the new velocity into the old one
        rLimb.velocityGrandad = rLimb.velocityDad;
        rLimb.velocityDad = rLimb.velocitySon;
        // 14 Start a new timer
        rLimb.startClock = rLimb.endClock;
    };
};
// 15 Turn the new position into the old one
rLimb.oldPos = b.position;



What makes it go openFrameworks is a library of C++ code that makes it easy to create a lot of fun stuff. I dug deep into ofxOpenNI, an addon for openFrameworks which creates a digital skeleton, to figure out how to find the position of the user’s hand in every frame. Once I could track the position over time, I was able to determine the direction and velocity. Then I performed the same set of calculations for every joint to determine the overall intensity of movement.
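Stripped of the OpenNI plumbing, the per-joint velocity estimate boils down to distance over time, smoothed across the last three frames (the velocitySon/Dad/Grandad rotation in the tracking code). A distilled sketch, with names simplified from the actual code:

```cpp
#include <cmath>

// Simplified stand-in for a tracked joint position (the real code
// uses XnPoint3D from ofxOpenNI).
struct Point3D { double x, y, z; };

// Straight-line distance the joint travelled between two frames.
double jointDistance(const Point3D& a, const Point3D& b) {
    double dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Three-sample moving average over successive per-frame velocities,
// mirroring the son/dad/grandad smoothing in the tracking code.
double smoothVelocity(double newest, double previous, double oldest) {
    return (newest + previous + oldest) / 3.0;
}
```

Averaging three frames trades a little responsiveness for a lot of jitter reduction, which matters because the raw skeleton data is noisy from frame to frame.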

Using simple openFrameworks functions I created a visual feedback system to give the user a higher degree of control. I mapped the changing expressive values onto the performer’s body. This environment is the control center for the MOTIV experience, collecting and visualizing the gestural input from the user.




THE DRIVING ENGINE

You’re looking at the main nerve center for MOTIV. If openFrameworks is the gas pedal and dashboard, this is the engine. Variables like tempo, velocity and intensity come in from openFrameworks, get processed, and are matched up with the MIDI sequence here, in Max/MSP. It may look complex, but it breaks down into a few simple parts.

01 First up is the control panel, where the user can turn on or off the parameters of control as well as specify where they’d like the expressive MIDI information sent out. They can route it down any MIDI channel they like. They set the original tempo of the song and click to start the song playing.

02 This is the instrument section. This is where the user chooses what MIDI files to play. This section takes in the raw, unexpressive MIDI and pairs it with the expressive values in real time, creating expressive MIDI.

03 This is an example of a conversion algorithm, a patch that takes the values from openFrameworks and remaps them to values MIDI can understand, usually 0–127. Each of the expressive parameters has a unique patch like this one.

04 This is the tempo calculator. It takes the tempo value being sent from openFrameworks and adjusts the MIDI playback clock so that the MIDI notes are played at the specified tempo.

05 Last is the intensity patch. Think of it like a dam’s spillway. If the intensity values reach a high level, it opens the first chamber and starts letting the MIDI out a new channel. As the intensity climbs higher and higher it opens up the other two spillways, which release MIDI to new virtual instruments.
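The spillway behavior is easy to model outside Max. A hypothetical sketch, with invented threshold values, of how rising intensity opens successive MIDI channels:

```cpp
#include <vector>

// Returns which MIDI output channels are open at a given intensity,
// modeled on the dam-spillway metaphor: each threshold crossed opens
// one more channel, releasing MIDI to another virtual instrument.
// The threshold values are illustrative assumptions, not values from
// the actual Max/MSP patch.
std::vector<int> openChannels(double intensity) {
    std::vector<int> channels = {1};             // base channel always plays
    if (intensity > 50.0) channels.push_back(2); // first spillway opens
    if (intensity > 90.0) channels.push_back(3); // second spillway opens
    return channels;
}
```

Because the gating is one-directional per threshold, the arrangement thickens and thins smoothly as the performer’s intensity rises and falls, rather than instruments cutting in and out at random.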



08 CONCLUSION

MOTIV has been an incredible project to develop. If nothing else, I’ve spent a year creating a project that combines my love of music, process and technology. The way musicians and non-musicians alike have taken to the MOTIV prototype is an incredible validation of my research and the hunches I’ve been nursing to life over the past year, maybe longer. This is the end of the first stage in the life of MOTIV. As it stands, it’s not yet a product, but a platform for discovery. There’s so much to explore as this new opportunity space develops and I’m able to test what works and what doesn’t. The next step is consolidating the code into a single application that can be played with pre-sequenced songs or with instruments like Monome, Tenori-on or iPad via real-time MIDI communication. I hope to open-source the project while continuing to build the adventurous community of musicians and developers that have already gathered around MOTIV. You can always find out what’s new by visiting musicwithmotiv.com.




09 Thank you

Jessica for putting up with grad school. Nicholas Felton for being an amazing collaborator. Robin Bargar for knowing how to proceed. Liz Danzico for pushing in all the right directions. Paul Pangaro for insisting on clarity. Jennifer Bove for insisting on speed. Musicians and producers for sharing your thoughts and time. Larry Legend for hearing a geek out. openFrameworks, Eric St. Onge, & Yang Yang for easing my foray into C++. Everyone who helped in any large or small way.




10 REFERENCES

Steven Johnson, “The Slow Hunch,” in Where Good Ideas Come From (New York: Riverhead Books, 2010), 65.
Bruce Sterling, Shaping Things (Cambridge, MA: Mediawork, 2005), 133.
Adam Greenfield, “Thesis 43,” in Everyware: The Dawning Age of Ubiquitous Computing (Berkeley, CA: New Riders, 2006), 148.
Marshall McLuhan, Understanding Media: The Extensions of Man (New York: McGraw-Hill, 1964).
Wikipedia. “Marshall McLuhan.” Accessed April 16, 2010. http://en.wikipedia.org/wiki/Marshall_McLuhan.
Daniel Levitin, This Is Your Brain on Music: The Science of a Human Obsession (New York: Penguin, 2006).
Patrick Haggard and Matthew R. Longo, “You Are What You Touch: How Tool Use Changes the Brain’s Representations of the Body” on ScientificAmerican.com. September 7, 2010. http://www.scientificamerican.com/article.cfm?id=you-are-what-you-touch.
Malcolm Gladwell, “The 10,000-Hour Rule,” in Outliers (New York: Little, Brown and Company, 2008), 35.
David Rokeby, “A Very Nervous System.” Accessed September 19, 2010. http://homepage.mac.com/davidrokeby/vns.html.
Finn Peters. “Music of the Mind.” March 23, 2010. http://www.youtube.com/watch?v=epT16fbf4RM&feature=player_embedded.
Antonio De Luca. “Aggregat.” July 30, 2010. http://vimeo.com/13759610.
Omeryosha. “AirPiano - Controlling Ableton LIVE.” June 2, 2008. http://www.youtube.com/watch?v=9K10XB1ycT4&feature=player_embedded.
Unearthed Music. “Gesture Music Sequencer.” June 20, 2009. http://vimeo.com/5247458.
D. Andrew Stewart. “Immortal-machine for méta-instrument.” May 30, 2010. http://vimeo.com/12157933.



(cont’d)

Marcel Wanderley, New Digital Musical Instruments: Control & Interaction Beyond the Keyboard (Middleton, WI: A-R Editions, Inc., 2006).
Audanika. “Quick Intro to SoundPrism.” August 11, 2010. http://www.youtube.com/watch?v=385CymvTecU&feature=player_embedded.
Chris O’Shea, “Air Guitar Prototype with Kinect.” December 10, 2010. http://vimeo.com/17669981.
Chris O’Shea, “Testing OpenNI & Kinect.” December 9, 2010. http://vimeo.com/17640133.
McGill University. “Measuring Musical Pleasure.” March 14, 2011. http://www.mcgill.ca/newsroom/news/item/?item_id=172676.
Adafruit Industries. “WE HAVE A WINNER – Open Kinect driver(s) released – Winner will use $3k for more hacking – PLUS an additional $2k goes to the EFF!” November 11, 2010. http://www.adafruit.com/blog/2010/11/10/we-have-a-winner-open-kinect-drivers-released-winner-will-use-3k-for-more-hacking-plus-an-additional-2k-goes-to-the-eff/.
Github. “roxlu/ofxOpenNI.” Last updated January 7, 2011. https://github.com/roxlu/ofxOpenNI.
University of California at Berkeley, Center for New Music & Audio Technologies. “Downloads.” Last updated April 7, 2011. http://cnmat.berkeley.edu/downloads.
Irvine Brown, “Singing Sock Puppets.” Accessed March 3, 2011. http://www.irvinebrown.com/?p=15.




