LMC 6314 / Design of Networked Media
01 / Introduction: Our Domain, Project Goals
02 / Research: Literature Study, Related Works, Survey
03 / Ideation: Design Considerations, Preliminary Ideas
04 / Initial Prototype: JukeMonkey, How it works, Emotokens, Scenarios, Social Implications, Feedback
05 / Final Design
06 / References
OUR DOMAIN Mood & Music: recommending music that a peer group with similar preferences liked
Why is this project interesting? In a social environment where music is played, be it indoors (a cafe, a pub, at home) or outdoors (for example, on a bus), we can observe people listening to their own music on their iPods or mobile phones. This could be because the music being played in the environment does not appeal to the individual's taste, or it could be due to the person's mood at that particular moment. When you're pining for a playlist that fits your current actions, emotions, or even desired emotions, shuffle features and Genius playlists often fall short. We base our concept on the fact that behind every song there is an emotion. There is a strong connection between music and emotions which can be tapped to improve people's experience of music, whether they are in a social setting or in the confines of their home. Along with the research showing that music conveys an emotion to its listeners, it has also been shown that music can produce emotion in the listener. People's tastes in music can also vary with their mood, so a playlist that a person created yesterday may not appeal to the same person the next day, depending on their circumstances. Music recommendations are conventionally based on three approaches (Kuo, Chiang, Shan and Lee, 2005):
One is the content-based filtering approach, which analyzes the content of the music a user liked in the past and recommends similar music
Another is the collaborative filtering approach, which recommends music liked by other users with similar preferences
The third is the hybrid approach, which integrates content and collaborative information for personalized music recommendation
These recommendation approaches are based on users' preferences as observed from their listening behavior. Sometimes, however, the music a user needs is decided by the user's emotion or context. Most people experience music every day with an affective response. For example, you feel cheerful when listening to an excellent performance at a concert. Conversely, you might want to listen to certain kinds of music based on your mood: if you are angry, you might want to listen to hard rock to vent your anger; if you are in a romantic mood, you might want to play soothing, slower-paced songs. We believe that by playing music according to mood, people would have a more engaging and satisfying experience. Consequently, recommending music according to mood better meets users' requirements in some cases.
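To make the contrast between the approaches concrete, here is a minimal sketch of user-based collaborative filtering: recommend to a listener the songs liked by the most similar other listener. All data, names, and the rating scale are hypothetical; this is an illustration of the technique, not the cited system.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two users' song-rating dicts."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[s] * b[s] for s in common)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def recommend(target, all_users, ratings):
    """Recommend songs the most similar user rated highly (>= 4)
    that the target user has not heard yet."""
    others = [u for u in all_users if u != target]
    nearest = max(others, key=lambda u: cosine_similarity(ratings[target], ratings[u]))
    return [song for song, score in ratings[nearest].items()
            if song not in ratings[target] and score >= 4]

# Hypothetical ratings on a 1-5 scale
ratings = {
    "alice": {"song_a": 5, "song_b": 4, "song_c": 1},
    "bob":   {"song_a": 5, "song_b": 5, "song_d": 4},
    "carol": {"song_c": 5, "song_e": 2},
}
print(recommend("alice", ratings.keys(), ratings))  # ['song_d'] -- bob is most similar
```

A content-based variant would compare audio features of the songs themselves instead of rating overlap; the hybrid approach combines both signals.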
PROJECT GOALS What we planned to achieve We set out to create a device that could play music according to the moods and emotions of the people located around it. We sought to provide an easy way for people to listen to music they like without requiring them to have a detailed understanding of music or knowledge of various genres and artists. A group listening to music together might make decisions at several levels while deciding what to play: artist, genre, mood, setting, etc. We decided to focus on one of these factors, through which we could ease the decision-making process while facilitating discussion. We considered evaluating the collective mood of the group from the group members' activities on social networks, based on which the application could generate perfect playlists and hence create an engaging atmosphere for the group. We started exploring different concepts and forms for an application/device which could achieve this effect.
We were also keen on exploring the possible interactions when multiple people with the same device are at the same location. A possible extension of the idea was to consider the intersection of people's tastes as well as moods while playing music.
LITERATURE STUDY How to detect Mood? Mood is a mental state induced by complicated causes. Compared to emotion, mood is a less intense state, but it lasts much longer: days or hours instead of minutes. Mood affects how we behave and make decisions and, more importantly, is an important social signal that others leverage to better interact with us. We often fake our emotions, but we can rarely fake our mood, because mood is a long-lasting mental state. Hence, mood is a tractable target for machine learning and is more useful for enriching context-awareness. We identified the key technical questions that arise in such a problem space: How to automatically determine the mood? How to transmit it? How to adapt the content to the mood? How to personalize the mood information? How to optimize the client application? There has been a lot of research on techniques to tackle these questions, especially on how to track the mood of users actively and on creating models which can effectively map mood to music. MusicSense is one approach that provides contextual music recommendation by automatically delivering music pieces relevant to the context of a Web page as users read it. It measures the contextual relevance between songs and Web pages using emotion as the bridge for the matching, since music is all about conveying composers' emotions, and many Web pages, such as Weblogs, also express the sentiments of their writers. EmotionSense is a mobile-phone-based adaptive platform whose key characteristics include the ability to sense individual emotions as well as activities, verbal interactions, and proximity interactions among members of social groups. Facial expression recognition and audio speech recognition have previously been used to evaluate and establish mood states. Body sensor networks have also been used to track physiological factors (heart rate, blood pressure) and infer mood. Researchers have also identified individual-level diurnal and seasonal mood rhythms in cultures across the globe, using data from millions of public Twitter messages. Most current applications make use of user input of some form to identify mood, for example, user-defined tags and taxonomies, or mood graphs and scales which users can manipulate and customize. In a user study evaluating Mood Player, a music classification system based on mood, all participants agreed that they would like to be able to choose music based on mood, and 11 out of 12 participants preferred mood classification over genre and artist classification. The general consensus was that mood classification is subjective, but it can be useful for finding music that suits one's current state of mind.
Moodagent is an app within Spotify that picks just the right songs to match your current mood. Moodagent knows the mood and musical qualities of almost any track and generates personalized playlists for a user.
Stereomood is a website which gathers songs to add to its database by allowing users to add songs as well as aggregating from popular music blogs. It then allows users to tag the songs with mood tags or activity tags, to help sort them and add them to the correct playlists.
Musicovery is an interactive and customised webradio service. Listeners rate songs, resulting in a personalized programme. Users can also roll over a mood pad to define the mood of the radio.
Songza is a free music streaming & recommendation service. Stating that its playlists are made by music experts, the service recommends various playlists based on time of day and mood or activity. Users can vote songs up or down, and the service will adapt to the user's personal music preferences.
SURVEY Based on the knowledge we gained from the literature study and our initial project goal, we were interested in identifying some key aspects of how people listen to music in a group and to what extent they use social media. We sought to find which applications they preferred in this domain so that we could identify suitable underlying frameworks and APIs that could potentially be used to retrieve or generate playlists based on mood. While deciding on the factors which might significantly influence users' music preferences, we assumed that location/setting might not be a significant factor, but we were interested to find out whether this was true.
Can social media help judge mood?
We conducted an online survey of 10 questions focused on the following aspects: usage of social media, selection of music while in a group, and influences on choice of music.
We received 46 responses in total.
How do people decide what music to play in a group setting?
How often do people speak up to change songs they don’t like?
Music preferences by mood and settings
Key findings from the survey We realized that not everyone uses social media. The majority of participants indicated that they rarely depicted their actual emotions on Facebook and Twitter. This was a key finding which led us to conclude that we could not use social media to effectively and accurately determine a person's mood, so we eliminated this idea and chose another approach to determine mood. We learned that taste in music changes significantly with mood as well as with location/setting: 83% agreed that music changes with mood, and 71% agreed that music varies with the surroundings. We also determined that while playing music in a group, people either depend on one person to lead the group and play music, or the group comes to a consensus on common preferences. The survey questions with genres helped us identify the kind of music people like to listen to in various moods and settings.
DESIGN CONSIDERATIONS
CHALLENGES
Music Selection Our focus was on creating and designing a novel device/application, and we assumed the use of an existing framework which would allow us to select music or generate playlists based on mood. We also formulated the following design considerations for music selection:
Use user input to identify mood and preferences
Use users' current context (mood/location) and associated preferences
Use the collective mood of a group to select music
Use individual users' data to train the classifier and personalize mood inference
Use active feedback to decide: What to play now? What to play next? What not to play?
Interaction
Everyone should have a say, with no bias from others
Provide a simple way for people to reach a common consensus
Everyone should have control over the music being played (universal remote)
Everyone should have control over the volume of the music
Group Detection
Use proximity to determine the group (via WiFi or Bluetooth)
Based on individual interactions with the central device
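The proximity option can be illustrated with a small sketch. Actual WiFi or Bluetooth scanning is platform-specific, so this sketch assumes signal-strength readings (RSSI, in dBm) have already been collected; the -70 dBm cutoff, device names, and function name are all hypothetical.

```python
# Hypothetical cutoff: devices with a signal at least this strong
# are considered close enough to the player to form one group.
NEARBY_RSSI_DBM = -70

def detect_group(rssi_readings, threshold=NEARBY_RSSI_DBM):
    """Return the set of device IDs close enough to count as one group.

    rssi_readings maps device ID -> RSSI in dBm (less negative = closer).
    """
    return {dev for dev, rssi in rssi_readings.items() if rssi >= threshold}

readings = {"phone_a": -48, "phone_b": -65, "phone_c": -82}
print(sorted(detect_group(readings)))  # ['phone_a', 'phone_b']
```

A real deployment would also need to smooth readings over time, since RSSI fluctuates; this shows only the grouping decision itself.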
Form Factor
Mobile Phone
Tangible Device
Emoticon Disk Player
A music player which plays music based on an emotion disk. A group needs to arrive at a common consensus on the mood to select, without discussing specifics like artists/genre. The player generates playlists automatically without requiring constant monitoring. The player has volume controls and the regular play, pause, and skip options.
Using a remote to control a music player, each user in a group can input the desired genre or mood on the remote. The player selects the mood/genre to play based on the majority vote. The interaction is based on a voting mechanism where users provide active feedback and can change their decisions at any point without having to speak up.
Location Based Music
This device consists of several containers, each representing an emotion. Users 'vote' for their mood by placing a pebble into the corresponding emotion tray. The device plays music based on the emotion whose tray holds the most pebbles. The system facilitates participatory music selection and active feedback.
This system plays music appropriate to the environment/location. It detects whether it is stationary or in motion, and hence can determine whether the user is traveling. The system can also determine the type of location (university, workplace, or cafe) and can play music and set the volume appropriate to the setting.
JukeMonkey JukeMonkey presents listeners with music using an intuitive and novel approach, allowing them to have a more enjoyable listening experience regardless of their understanding of the music.
Why & How? After considering the pros and cons of our preliminary concepts, we decided to focus on mood as the determining factor for music selection. There was a certain appeal in using a tangible device to play music, owing to nostalgia for the era when people used radios and cassette and CD players. This motivated us to create a tangible device that would take the user's active input. We believed that the novel disks depicting emotions from one of our concepts would be an interesting and fun way to interact with a tangible device, and thus created disks which we call "emotokens". To implement the device, we used an NFC (Near Field Communication) enabled Android phone, and for the emotokens we used Android NFC tags. Each emotoken has a mood encoded in it, depicted on the tag by a monkey expressing the corresponding emotion. As we aimed to create a minimalistic device, we decided to integrate four basic controls: skip song, change mood, pause song, and change volume. All options except changing the mood are provided as visual icons on the display. To change the mood, users eject the inserted emotoken by lifting a lever; the emotoken then slips out smoothly from a slot on the side of the device.
HOW IT WORKS
Insert an emotoken based on your mood
To change the mood, lift the lever
Remove the emotoken & insert a new one!
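The insert/lift-lever/remove flow above can be sketched as a tiny state machine. This is purely illustrative: the real prototype reads NFC tags on an Android phone, and the class and method names here are hypothetical.

```python
class JukeMonkeyModel:
    """Toy model of the emotoken flow: insert a token to play that mood,
    lift the lever to eject it before inserting a new one."""

    def __init__(self):
        self.current_token = None  # no emotoken inserted yet

    def insert(self, mood):
        """Insert an emotoken; only one can be in the slot at a time."""
        if self.current_token is not None:
            raise ValueError("Eject the current emotoken first (lift the lever)")
        self.current_token = mood
        return f"Playing {mood} playlist"

    def lift_lever(self):
        """Eject and return the current emotoken, leaving the slot empty."""
        ejected, self.current_token = self.current_token, None
        return ejected

player = JukeMonkeyModel()
print(player.insert("romantic"))   # Playing romantic playlist
print(player.lift_lever())         # romantic
print(player.insert("energetic"))  # Playing energetic playlist
```

The single-slot invariant mirrors a lesson from the showcase feedback: the device should not accept multiple tokens at once.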
Framework / API for music selection
JukeMonkey enables individuals or groups to make quick decisions and offers a simple way to play music based on mood. The application on the phone placed inside the device detects the NFC tag encoded with the specific emotion, generates music playlists based on the selected emotoken, and offers ways to easily change the mood and control the music if required.
Last.FM API: Search user tags for the selected mood and retrieve songs attached with those tags
Spotify + Echonest: Use the Echonest Song API to generate a playlist based on mood and Spotify to play the music
Stereomood API: Search user tags for the selected mood and retrieve songs attached with those tags
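As a sketch of the first option, the request below targets Last.fm's public `tag.gettoptracks` method, treating the emotoken's mood as the tag. `YOUR_API_KEY` is a placeholder, the sample response is illustrative (not real data), and the helper names are our own; the field layout follows Last.fm's documented JSON shape but should be checked against the live API.

```python
import urllib.parse

LASTFM_ENDPOINT = "https://ws.audioscrobbler.com/2.0/"

def build_tag_request(mood, api_key, limit=10):
    """Build the URL for Last.fm's tag.gettoptracks method."""
    params = {
        "method": "tag.gettoptracks",
        "tag": mood,          # the mood on the inserted emotoken
        "api_key": api_key,
        "format": "json",
        "limit": limit,
    }
    return LASTFM_ENDPOINT + "?" + urllib.parse.urlencode(params)

def parse_tracks(response_json):
    """Extract (title, artist) pairs from a tag.gettoptracks response."""
    tracks = response_json.get("tracks", {}).get("track", [])
    return [(t["name"], t["artist"]["name"]) for t in tracks]

# Illustrative response shape (not real data):
sample = {"tracks": {"track": [
    {"name": "Song One", "artist": {"name": "Artist A"}},
    {"name": "Song Two", "artist": {"name": "Artist B"}},
]}}
print(build_tag_request("romantic", "YOUR_API_KEY"))
print(parse_tracks(sample))  # [('Song One', 'Artist A'), ('Song Two', 'Artist B')]
```

The Stereomood option would follow the same pattern with a different endpoint; the Spotify + Echonest option instead asks Echonest for mood-matched songs and hands the results to Spotify for playback.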
EMOTOKENS MOOD AS OBJECT
Possible Emotions Aggressive
Couple on a date
Then The couple decides to play music during their date and chooses an existing playlist with romantic tracks. They might alternatively decide on an artist who is most likely to play romantic music, or a genre which they feel is most suited to set the mood for the date, and play music in that genre or by a specific artist. When the decision is based on the artist, the song being played might ruin the ambience, as not all songs by an artist are necessarily romantic. Making a decision based on genre can be overwhelming, as not everyone has that level of understanding of music, and they might not be aware of the various definitions of genres.
Now The couple inserts the "Romantic" emotoken into JukeMonkey, and it automatically generates a playlist of romantic music. The couple can carry JukeMonkey with them wherever they go together.
Family at dinner
Then The family plays music from the radio or from a music channel on television. The music might be disturbing to some members of the family while others enjoy it. The music could be a hindrance to the ongoing conversations, and the person who is speaking may not be heard.
Now The family has a discussion about how everyone's day went and inserts the emotoken suited to the general mood. They enjoy the music while having dinner, and if a person does not like a specific song, he/she can skip it or vote it down by interacting with JukeMonkey. If someone wishes to start a conversation or speak up, they can turn the volume down before talking so that the music does not cause a disturbance and drown out their voice.
Sports locker room
Then One of the team players plays music from his phone. Not everyone likes the artist or the genre of the song being played; some find the song demotivating or depressing to hear just before the game and tell him to turn the volume down or stop playing the music.
Now One of the team players who has JukeMonkey takes it out and inserts the 'Energetic' token. The team listens to the music and is pumped up and motivated while entering the field!
SOCIAL IMPLICATIONS
As a medium of expression JukeMonkey could answer the question "How are you?" when used in a group or by an individual, as the current emotoken gives an indication of the mood of the group or individual. As a consequence, this could spark conversations and influence social behavior. It can thus be used as an artifact for emotional reflection and for gauging the emotions of others in a social context.
For creation of ambience JukeMonkey can be used to set the ambience of a place, for example, using the 'Party' mood to set the mood of a party and the 'Romantic' mood to set the ambience for a candle-light dinner.
As a mood changer Along with the research showing that music conveys emotion to its listeners, it has also been shown that music can produce emotion in the listener. Empirical research has shown how listeners can absorb a piece's expression as their own emotion, as well as have a unique response based on their personal experiences. JukeMonkey can be used by people to elevate their existing mood, or to counteract it by using an emotoken which is the opposite of how they feel, if they wish to change their mood.
Influence the mood of a group Music can also tap into empathy, inducing emotions that are assumed to be felt by the performer or composer. Listeners can become sad because they recognize that those emotions must have been felt by the composer, much as the viewer of a play can empathize with the actors. JukeMonkey could be used by a study group leader to set the mood of the group, for example, one who wishes to motivate the group and ensure that they stay alert.
Motivate action using music Listeners may also respond to emotional music through action. Throughout history, music has been composed to inspire people to specific action: to march, dance, sing, or fight, consequently heightening the emotions in all these events. JukeMonkey could be used by a sports team coach to set the mood to inspire the team and raise team spirit.
Fake emotions Listeners might be inﬂuenced to insert a speciﬁc emotoken or may not reveal their true emotions due to the presence of others.
Mood as an object Objectifying mood might encourage participation. The interaction with the device becomes more playful in nature, and users might exchange, gift, or hide emotokens.
Trust the Device Users may overlook the fact that the device only approximates music selection based on emotion and come to trust it to play music exactly according to the selected mood.
EVALUATION & FEEDBACK What we observed We conducted an evaluation of the device by doing a think-aloud session with 4 participants and observed how they interacted with the device in a group. Below are some of the key points based on the feedback and observations:
Participants felt the device would be useful in children’s education and could be used to teach speech and rhymes.
One participant felt that the device could be used to track emotions based on usage and could be a possible indicator of mental health. For example, someone who chooses to listen to sad and gloomy music continuously could be experiencing depression.
As the device concept is novel, participants were curious to play with it and see what kind of music the device played for each emotoken. The interaction drew more focus than the intention to play and listen to music.
One participant exclaimed, "I would never use the sad token". This indicates that people might not reveal their true emotions to a group unless they are comfortable with it. People might also tend to choose an emotion/mood that they wish to be in rather than the one they are actually experiencing.
All participants agreed that the device was very simple to use and operate.
One participant mentioned that the device could be used as a medium of expression for young children, older adults, and people with speech impairments. They felt that the device might be a subtle way for them to communicate how they were feeling without having to explicitly state it or talk about it.
We also received a lot of positive feedback from participants in the GVU Demo Showcase at Georgia Tech held on April 17, 2013, where the initial prototype was showcased. All participants at the showcase who used JukeMonkey were fascinated by the device and were eager to use the emotokens and listen to the music being played. Many were interested to see how the structure was built, were particularly impressed with the idea of using the lever to eject the tokens, and were genuinely surprised to see a phone inside. Kids who accompanied the participants turned out to be the biggest fans of JukeMonkey and took turns using it. We observed that some participants tried to insert multiple tokens at the same time or one after the other, and realized that either the device should not allow users to insert multiple tokens or we should provide clear instructions to that effect. Some asked us where they could buy one of these, and pushed us to find ways to build a version that could be sold commercially.
FINAL DESIGN A dual approach to decide and play music, alone or in a group. A product with both tangible and digital interactions, customizable yet stand alone, catering to all levels of experience.
The New JukeMonkey After considering the feedback on our initial prototype, we realized that users were delighted to use JukeMonkey and found it very convenient and easy to interact with. They could insert a suitable emotoken and continue with their work without having to constantly decide what music to play. JukeMonkey was created for both individuals and groups, and for individuals it worked really well. In groups, however, some people might not want to speak up in order to change the volume or skip a song, and not everyone might be able to access the device or get a chance to interact with it. This might lead to some disconnect, as the device does not capture the feedback of the entire group at all times, and a single person might end up controlling the device. The initial prototype also did not offer a way to capture active feedback from users on whether they liked or disliked a particular song. In order to give users unbiased control over the music, we decided to provide additional features in the form of a mobile application. Users can still use JukeMonkey whether or not they have the application; those who want their feedback to be considered can use the application to upvote or downvote a song they like or dislike. Users can also vote to skip the song. The application notifies the user that their vote to skip the song has been registered, but the device skips the song only if more than 50% of the users around the device with the application have voted to skip it. All users have complete control over the volume of the device, so that they can lower it while starting a conversation or answering a call without letting the music disturb them. We envision that as a commercial product, JukeMonkey would ship with the mobile application integrated with it. The device is capable of functioning independently of the mobile application, but those who want to be actively involved in the music selection can choose to do so by using the application. We believe that by providing both tangible and digital interactions, we have been able to create an innovative product that offers a novel way to play and listen to music and is capable of making intelligent decisions based on users' feedback.
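The vote-to-skip rule described above (skip only when more than 50% of the app users around the device have voted to skip) can be sketched in a few lines. User names and the function name are hypothetical.

```python
def should_skip(skip_votes, active_users):
    """Skip the current song only when more than 50% of the connected
    app users have voted to skip it. Votes from users no longer nearby
    are ignored."""
    if not active_users:
        return False
    valid_votes = set(skip_votes) & set(active_users)
    return len(valid_votes) / len(active_users) > 0.5

users = {"ann", "ben", "cho", "dev"}
print(should_skip({"ann", "ben"}, users))         # False: 50% is not enough
print(should_skip({"ann", "ben", "cho"}, users))  # True: 75% skips the song
```

The strict majority (rather than a simple tie) prevents half the group from overriding the other half, matching the design intent of unbiased group control.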
REFERENCES
1. Kuo, F. F., Chiang, M. F., Shan, M. K., & Lee, S. Y. (2005, November). Emotion-based music recommendation by association discovery from film music. In Proceedings of the 13th annual ACM international conference on Multimedia (pp. 507-510). ACM.
2. Batson, C. D., Shaw, L. L., & Oleson, K. C. (1992). Differentiating affect, mood, and emotion: Toward functionally based conceptual distinctions.
3. Cai, R., Zhang, C., Wang, C., Zhang, L., & Ma, W. Y. (2007, September). MusicSense: contextual music recommendation using emotional allocation modeling. In Proceedings of the 15th international conference on Multimedia (pp. 553-556). ACM.
4. Rachuri, K. K., Musolesi, M., Mascolo, C., Rentfrow, P. J., Longworth, C., & Aucinas, A. (2010, September). EmotionSense: a mobile phones based adaptive platform for experimental social psychology research. In Proceedings of the 12th ACM international conference on Ubiquitous computing (pp. 281-290). ACM.
5. Vanello, N., Guidi, A., Gentili, C., Werner, S., Bertschy, G., Valenza, G., ... & Scilingo, E. P. (2012, August). Speech analysis for mood state characterization in bipolar patients. In Engineering in Medicine and Biology Society (EMBC), 2012 Annual International Conference of the IEEE (pp. 2104-2107). IEEE.
6. Gluhak, A., Presser, M., Zhu, L., Esfandiyari, S., & Kupschick, S. (2007). Towards mood based mobile services and applications. In Smart Sensing and Context (pp. 159-174). Springer Berlin Heidelberg.
7. Golder, S. A., & Macy, M. W. (2011). Diurnal and seasonal mood vary with work, sleep, and daylength across diverse cultures. Science, 333(6051), 1878-1881.
8. Meyers, O. C. (2007). A mood-based music classification and exploration system (thesis, Massachusetts Institute of Technology).
9. Benta, K. I. (2005, November). Affective aware museum guide. In Wireless and Mobile Technologies in Education, 2005. WMTE 2005. IEEE International Workshop on (pp. 53-55). IEEE.
10. Garrido, S., & Schubert, E. (2011). Individual differences in the enjoyment of negative emotion in music: a literature review and experiment. Music Perception, 28, 279-295.
11. Sloboda, J. A., & Juslin, P. N. (2001). Psychological perspectives on music and emotion. Music and Emotion: Theory and Research, 79-96.
12. Hunter, P. G., Schellenberg, E. G., & Schimmack, U. (2010). Feelings and perceptions of happiness and sadness induced by music: Similarities, differences, and mixed emotions. Psychology of Aesthetics, Creativity, and the Arts, 4, 47-56.
13. Scherer, K. R., & Zentner, M. R. (2001). Emotional effects of music: production rules. Music and Emotion: Theory and Research, 361-387.
LMC 6314 Design of Networked Media Spring 2013 Sanat Rath Sahithya Baskaran