Rebugging Mixed Reality: hear


hear
revolutionizing the way you hear
Alina Götz, Maximilian Steverding
Chair of Architectural Informatics, Department of Architecture, Technical University of Munich

hear spatially, even digitally

Hear and locate people spatially in online conferences. Hear and interact with any virtual and physical sound sources together, spatially.

Chair of Architectural Informatics
Prof. Dr.-Ing. Frank Petzold
Rebugging Reality
Nick Förster, Gerhard Schulz
Alina Götz, Maximilian Steverding
03628061, 03716600

Table of Contents
Introduction
Research
Ideation
Concept
Prototyping
Final Implementation
Discussion
Outlook
Collaboration
Contact

Introduction

The ear is a marvelous sensory piece of equipment for the human body. Though we can close our eyes, the ears are always listening to their surroundings, and a loud noise can wake us even from the deepest sleep. But even though we are exposed to sound every day, we barely notice the subtle changes in the ways we interact with it. Only in rare moments of loudness do we realize that sometimes we want to alter our surroundings in order to adjust the soundscape we are in. Nevertheless, the possibilities to influence the way we experience sound are very limited. Digital media brought a wide range of augmentation for other senses, most noticeably the eyes. The ear has been almost left out, especially within our everyday experience of sound. With hear we want to change the way people experience sound and how they can directly interact with how, what and where they hear. Moreover, the technology already exists. hear uses these technologies in a new and complex system with an intuitive user experience. Welcome to a world of augmented audio.


Problem

Many aspects of life are meant to be simplified by virtual augmentation. Nevertheless, the realm of the senses that is easiest to augment, the auditive, is widely left out of research on augmenting the everyday.

Hypothesis


Altering the soundscape of our everyday surroundings will lead to changes in our behaviour that drastically improve the way in which we are within the world, binding the virtual network of digital urban infrastructure and the natural space of our being closer together.

Solution

Through already existing technologies we can enhance and individualize the way in which we explore sound from different sources, be they natural or virtual, within our everyday life. Imagine sitting in your living room watching TV. Suddenly an important business call comes in. With just a tap you can switch the audio source you are currently listening to, take it with you on your way through your flat, let it go once the call is finished or your attendance is no longer needed, and return to the sound of your natural surroundings.

Research

Most noticeably, sound plays an important role within our everyday communication. Apart from the underlying messages of body language, most of the complex narration and semantics of our daily communication is carried by sound that is emitted by one person, travels through space and eventually enters another's ear. Thus, inventions within the field of communication always result in big leaps for the development of society. As technology advanced, the phone was invented and, with it, people became used to long-distance communication. Not being forced into physical closeness is a great advantage, but it also requires a whole apparatus of infrastructure. With the emergence of wireless technology, the individual became independent of the materialization of this infrastructure (phones) and was able to take certain calls with him- or herself through space. Still, the communication itself is very linear due to the missing spatiality. Therefore we can distinguish two modes of communication. The first is the „traditional“ communication, where all entities are organized within space, and the perception of the communication is altered by your position.




The second is the „linear“ communication, which directly connects two or more entities without organizing them in space. Thus, this communication is not spatial but binary. To overcome the binarity of „linear“ communication, other senses, such as sight, were integrated into these modes of communication: in video chats, visuality was added to the mode of communication. Nevertheless, this did not change their organization. These forms can only be altered by on or off, which lies in the nature of binary communication.

The transformation of the human into a digitally enhanced entity that exists in both physical and virtual space is inscribed in our modern everyday. Most of us are already cyborgs, enhanced by little Babel fish within our ears. Let's enhance this technology and awaken its full potential.

Ideation

What if we could bring spatial organization to digital communication through existing technology? We are already surrounded by devices and software that are able to spatialize sound as well as to track our movement through space solely from the information gathered by our headphones. So why stick to the binarity of „linear“ communication and listening methods? Why not rebug reality by altering the way we communicate digitally, and expand this to how we perceive audio in general?


the auditive cyborg is already a reality


Concept

The concept of this driver is not to create a piece of software or hardware but to implement a standard that is able to reorganize the way different audio sources interact with one another. The spatialization itself happens relative to the user's position. Different audio streams, be they people talking to you, videos you are watching, or the lecture you are currently listening to, are laid out in front of the user. The driver itself organizes the sound around the user. Just like in a room with different audio emitters, the user will hear sounds from different directions and be able to track and interact with them. This will not only include, for example, virtual calls but can also integrate real audio sources within the room, such as TVs, laptops or even physically present persons.
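To illustrate what "organizing sound around the user" could mean, a minimal sketch follows. It is not part of the project's implementation: the names AudioSource, Listener and stereo_gains are hypothetical, and the constant-power panning is a deliberately simplified stand-in for a real binaural or HRTF-based renderer.

# Minimal sketch: placing audio sources around a listener and deriving
# a simple left/right panning from the listener's position and head yaw.
# All names are illustrative; a real renderer would use HRTFs instead.

import math
from dataclasses import dataclass

@dataclass
class AudioSource:
    name: str
    x: float  # position in the room, metres
    y: float

@dataclass
class Listener:
    x: float
    y: float
    yaw: float  # head orientation in radians, 0 = facing +y

def relative_azimuth(listener: Listener, source: AudioSource) -> float:
    """Angle of the source relative to the listener's facing direction."""
    angle_to_source = math.atan2(source.x - listener.x, source.y - listener.y)
    return math.atan2(math.sin(angle_to_source - listener.yaw),
                      math.cos(angle_to_source - listener.yaw))

def stereo_gains(azimuth: float) -> tuple[float, float]:
    """Constant-power pan: sources to the right get more right-channel gain."""
    pan = math.sin(azimuth)                   # -1 (left) .. +1 (right)
    left = math.cos((pan + 1) * math.pi / 4)
    right = math.sin((pan + 1) * math.pi / 4)
    return left, right

# Example: a TV in front of the listener and a caller to the left.
listener = Listener(x=0.0, y=0.0, yaw=0.0)
for src in [AudioSource("tv", 0.0, 2.0), AudioSource("call", -2.0, 0.0)]:
    az = relative_azimuth(listener, src)
    print(src.name, [round(g, 2) for g in stereo_gains(az)])

In this toy reading, the TV straight ahead is rendered equally on both channels, while the call on the left is rendered almost entirely on the left channel; as the listener turns or walks, the same calculation keeps every source anchored to its place in the room.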

Prototyping

The prototype simulates a demo room. Just like within a physical space, for example one's flat or office, different audio sources are playing. Through spatialization, the user is able to differentiate the sources by their position.

In the upper left corner of the screen, the user can see the different commands that he or she can use in order to change the audio that is being heard. The prototype should not be understood as an interface for a possible interaction with the driver; it is rather a simulation of the experience within a real space. The user is able to select different audio sources within the space, just like in the finished project. Moreover, in the finished program the user will be able to switch between selecting the sound sources in his real and his virtual surroundings. The prototype thus convinced us that such a systematic implementation would be a big enhancement of the current technology.



Final Implementation

The final implementation of the system hear will be developed as an audio standard, comparable to Dolby Digital, that enables developers to link their audio sources and software into the spatial organization of that system.

The premise of this audio standard is the use of already existing technology to make implementation as easy as possible. The program consists of two different components.

The first component is a pair of true wireless Bluetooth headphones. They need to be capable of tracking the user's head movement; therefore a gyroscope within the headphones is needed.

The second component is the hear driver. The driver uses the Bluetooth capabilities of devices to create a mesh network that integrates the user's headphones, so re-connecting will be a thing of the past.

The user interaction is done via very simple gestures that can be performed with the headphones. The user will be able to tap to focus on the sound that he or she is currently looking at. If the user wants to take a sound source along the way, the headphones just need to be squeezed.
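How the two components and the two gestures might fit together can be hinted at with a sketch of a hypothetical driver interface. The class and method names (HearDriver, on_head_motion, on_tap, on_squeeze) and the "closest to the facing direction" selection rule are assumptions made for illustration; the actual standard is not specified at this level of detail in this paper.

# Hedged sketch of a hypothetical hear driver: it receives head-tracking
# data from the headphone gyroscope and maps the two gestures onto the
# registered sources. All names and rules are illustrative assumptions.

import math
from typing import Optional

class HearDriver:
    def __init__(self):
        self.sources = {}        # name -> azimuth (radians) around the user
        self.head_yaw = 0.0      # latest yaw reported by the gyroscope
        self.focused = None      # source boosted after a tap
        self.carried = set()     # sources locked to the user after a squeeze

    def register_source(self, name: str, azimuth: float) -> None:
        """A call, video, TV or person is placed at an angle around the user."""
        self.sources[name] = azimuth

    def on_head_motion(self, yaw: float) -> None:
        """Gyroscope update; carried sources turn with the head, others stay put."""
        delta = yaw - self.head_yaw
        self.head_yaw = yaw
        for name in self.carried:
            self.sources[name] += delta

    def _looked_at(self) -> Optional[str]:
        """Source closest to the current facing direction."""
        if not self.sources:
            return None
        return min(self.sources,
                   key=lambda n: abs(math.remainder(self.sources[n] - self.head_yaw,
                                                    math.tau)))

    def on_tap(self) -> None:
        """Tap: focus on the source the user is currently looking at."""
        self.focused = self._looked_at()

    def on_squeeze(self) -> None:
        """Squeeze: take the looked-at source along; squeeze again to let it go."""
        name = self._looked_at()
        if name in self.carried:
            self.carried.remove(name)
        elif name is not None:
            self.carried.add(name)

Under this reading, a tap while facing the incoming call would make the call the focused source, while a squeeze would pin it to the user's movement until it is squeezed again.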

revolutionizing the way you hear: hear, listen, speak, hearphone

single streams

mesh stream

(1) The user is hearing a physical audio source.
(2) When the phone rings, virtual sources are placed around the user, also integrating the physical source.
(3) The user can move freely and take the audio sources along the way. The integrated physical source was locked and is taken along as well.
(4) But the user can also let go of the physical source. It is then returned to its old spot.


(5) Therefore, not only virtual but also physical audio sources can be integrated into the experience.
(6) Even physically present, talking humans can be taken along with the user thanks to the new spatial organisation.
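The sequence above can be read as a small lifecycle for each sound source. The sketch below models it with illustrative states (AMBIENT, PLACED, CARRIED); the state names, the TrackedSource class and the transition rules are assumptions made for illustration, not part of the project's specification.

# Illustrative lifecycle for a single sound source following steps (1)-(4):
# a physical source is ambient, gets placed into the spatial scene when a
# call comes in, can be locked and carried, and returns to its old spot
# when released.

from enum import Enum, auto

class SourceState(Enum):
    AMBIENT = auto()   # (1) heard directly in the physical room
    PLACED = auto()    # (2) integrated into the virtual spatial scene
    CARRIED = auto()   # (3) locked to the user and taken along

class TrackedSource:
    def __init__(self, name, position):
        self.name = name
        self.home = position          # original spot in the room
        self.position = position
        self.state = SourceState.AMBIENT

    def integrate(self):
        """(2) The phone rings: the source is placed into the spatial scene."""
        self.state = SourceState.PLACED

    def lock(self):
        """(3) The user squeezes: the source now follows the user."""
        self.state = SourceState.CARRIED

    def follow(self, user_position):
        if self.state is SourceState.CARRIED:
            self.position = user_position

    def release(self):
        """(4) Let go: the source is returned to its old spot."""
        self.position = self.home
        self.state = SourceState.PLACED

tv = TrackedSource("tv", position=(0.0, 2.0))
tv.integrate(); tv.lock(); tv.follow((3.0, 1.0)); tv.release()
print(tv.state.name, tv.position)   # PLACED (0.0, 2.0)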

via head-tracking headphones

tapping to focus on a specific sound
squeezing to take the sound with you

Discussion

The prototype, of course, is not able to display all interactions of the system because of its limited stage of programming. Therefore, especially the virtual distribution of different entities of communication cannot be fully explored. Nevertheless, the demonstrated usability of altering sound within real space through very simple gestures is seen as a great advantage for the way in which we experience sound. Moreover, sound has been a much too underrated medium through which we change and mediatize our surroundings. More of this will be reflected on within the outlook. Even though the technology to construct such a driver already exists, the hurdles to implementing it as an industry-wide standard are high. But after all, Dolby Digital, for example, became a de facto standard too, also operating within the spectrum of sound.

We hope that through simple gestures the user interaction with the spatial organization will be as natural and as intuitive as possible. Thus we deliberately dispense with a graphical interface for organizing and interacting with the driver. Even though such a graphical interface would probably make mis-organization within the audio sphere more forgiving, it would introduce another level of sensory experience that would weaken the intuitiveness of a solely auditive interface. Moreover, the user would need to always carry a graphical interface along the way, leading to a more inconvenient experience.


Hints that such a development is taking place are, for example, the developments around the AirPods Pro by Apple.


Due to the lack of deep programming knowledge, only the concept, the idea of organizing sounds spatially in order to interact with them, can be described within this paper. Even though we have researched the technological prerequisites that we consider relevant, the coded implementation of such a cross-platform, cross-device integration is beyond our grasp. Nevertheless, technologies like Dolby, Bluetooth etc. exist cross-platform and cross-device. Therefore we believe that such a technology is possible and will be available at some point in the future.

Outlook

Just as the futuristic images of holographic projections or smart glasses alter the way we see, the project hear and its possible outlook will change the way we hear. The system itself will slowly lead to augmentation through headphones that become a daily accessory. Not only will the visual be mediatized but also the auditive. We will radically change the way in which we experience sound, detaching us more and more from the spatiality of physical space. Each of us will create their own soundscapes to experience our own auditive realities.



Collaboration

The collaboration throughout the semester was kind of difficult. Although the workshop was organized very well and the techniques used were well suited to finding our way and developing a concept, working out the concept was hard due to the lack of physical presence.



Organizing working times, especially in a concept-driven project, is essential, because a concept is best developed in direct communication. Thus, working out the concept was a special situation. Furthermore, it would have helped, or clarified the perception of the course, if the goal of creating a new working tool for architects had been outlined from the beginning. It would have been great if a little more input had been provided from the start in order to guide the projects in the expected direction and to not let the aspirations drift too far during the brainstorming phases.


Contact

Maximilian Steverding
4. Mastersemester
03716600

Alina Götz
2. Mastersemester
03628061

revolutionizing the way you hear
