
Left to right: student Marc Green and Professor Damian Murphy

and audio signal processing. An active sound artist, in 2004 he was appointed as one of the UK's first AHRC/ACE Arts and Science Research Fellows, investigating the compositional and aesthetic aspects of sound spatialisation, acoustic modelling techniques and the acoustics of heritage spaces.

AMBEO

AMBEO is a series of systems designed to enhance the Virtual and Augmented Reality experience, aiming to push the boundaries of spatial audio with a mission to create compelling audible AR experiences. The manufacturer's claim is that by blending virtual 3D sound with a user's real acoustic world, and with the help of Sennheiser's unique software and hardware tools, users will be able to take full control of their AR and MR experiences.

Professor Murphy said of the benefits of AMBEO: “It is a novel technology that has real potential as a tool for developing binaural immersive experiences that do not close the subject off from the wider world. There is significant opportunity to develop new augmented audio experiences with better interaction between individuals sharing in the same experience. The AMBEO headset is also a really interesting, compact and creative device for making immersive binaural recordings.”

PHD SOUNDSCAP AR

Marc Green is a current University of York PhD student researching at the AudioLab, who has made extensive use of AMBEO while working under the guidance of Professor Murphy. One of Green's recent publications is 'EigenScape: A Database of Spatial Acoustic Scene Recordings'. Originally trained as a classical pianist, his career has progressed through music production courses and studio work to, at present, high-level sonic research. Much of his work is based around 'Environmental Soundscapes', both in practice and research, and he is currently working on measurement systems, which involves looking into the sonic content of a landscape and how people react to it. He is also investigating how machine learning can be deployed to create new ideas. For his degree, a few years back, Green created a music, sound and visual art work featuring content around human speech, based on hyperlexia, a condition often associated with autism, and for his Master's research he worked on Acoustic Feedback Processing in conjunction with Allen & Heath.

As part of his current PhD studies, Green has developed a new app called Soundscap AR, now available on the Apple App Store. Intended to analyse the sounds of local environments, the app uses machine learning to go beyond traditional decibel level measurement techniques and instead give users readings for how much natural, mechanical or human sound makes up a given sound scene. Green has long argued that current noise level measurements give no information on the actual content of a sound scene; they only indicate how loud an environment is. His intention has therefore been to use machine learning to provide more informative readings, which can also be taken remotely. Soundscap AR uses AR to add a selection of virtual sound sources, or a virtual sound barrier, to a given scene, monitoring how these affect the readings and perception, and the app works best with Sennheiser's AMBEO Smart Headset. AMBEO Smart allows users to hear augmented sound, featuring built-in microphones so users can hear their environment, real and virtual, as though they are not wearing earphones.

Green has been more than happy with AMBEO: “The best thing about the headset is that the microphones are really good quality and very easy to use.” He explains further about his Soundscap AR workflow: “The first thing I did was use an em32 Eigenmike, a surround sound microphone, recording in multiple locations in the north of England, from cities to nature. The em32 is an incredible mic made by mh acoustics, and is composed of multiple professional mics positioned on the surface of a rigid sphere. Based on those recordings, I worked to create a computer-based machine learning system, hoping that the computer would learn what those spaces are and how they behave. The machine learning is based on the idea of creating a device that would react to sound without having to take people to a location, and not necessarily based on a decibel reading alone.”

The second part of the app consists of 'Positive Sound', which overlays sounds made by Green to create a Virtual Sound Objects Soundscape; four layers are currently available as virtual objects: a Virtual Water Fountain, a Virtual Bird Song, an Acoustic Barrier and a Car. Impressively, users can also create their own virtual sound objects. Looking forward, Green is developing various new ideas and projects: “Future ideas will involve using the Eigenmike and new work on source tracking within recording, using more detailed sound measurement. I am also thinking about new ways of creating original audio and music scene generation for the gaming industry that will react to a user's location.”
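The kind of reading the app reports, how much natural, mechanical or human sound makes up a scene, can be illustrated with a toy sketch. Soundscap AR's actual model and label set are not public; this simply shows how per-frame labels from some scene classifier could be aggregated into percentage shares (the function name and labels here are hypothetical):

```python
from collections import Counter

def scene_composition(frame_labels):
    """Turn a sequence of per-frame class labels (hypothetical classifier
    output) into percentage shares of the overall sound scene."""
    counts = Counter(frame_labels)
    total = sum(counts.values())
    return {label: 100.0 * n / total for label, n in counts.items()}

# Toy input: 6 'natural' frames, 3 'mechanical', 1 'human'
frames = ["natural"] * 6 + ["mechanical"] * 3 + ["human"] * 1
print(scene_composition(frames))
# → {'natural': 60.0, 'mechanical': 30.0, 'human': 10.0}
```

A real system would of course derive the per-frame labels from audio features rather than receive them ready-made, but the aggregation step is the same.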
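Green's argument that a decibel reading alone says nothing about content can be demonstrated with a minimal example: two signals of very different character, a pure tone standing in for birdsong and random noise standing in for traffic, can be scaled so a simple RMS level meter reports the same figure for both. This is an illustration of the general point, not the app's actual measurement chain:

```python
import math
import random

def rms_db(samples):
    """RMS level in dB relative to full scale: the 'traditional' kind of
    reading a basic level meter gives, blind to what the sound is."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms)

random.seed(0)
# A tonal signal (stand-in for birdsong) and a noise signal (stand-in for traffic)
tone = [math.sin(2 * math.pi * 3000 * n / 16000) for n in range(16000)]
noise = [random.uniform(-1.0, 1.0) for _ in range(16000)]

# Scale the noise so both signals have identical RMS energy
scale = math.sqrt(sum(s * s for s in tone) / sum(s * s for s in noise))
noise = [s * scale for s in noise]

print(round(rms_db(tone), 1), round(rms_db(noise), 1))
# Both print the same level, despite entirely different content
```

The two readings are identical, which is exactly the information gap that content-based analysis is meant to close.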

www.psneurope.com



PSN Europe 84 - March 2019  

Dani Bennett Spragg - Meet this year's MPG Awards Breakthrough Engineer
