
SOUNDSCENE

Sounds of clarity in a world of noise

Many of us are surrounded by sound in our daily lives, yet somehow our brains are still able to pick out a single voice or particular source of sound, a process called auditory scene analysis. Researchers in the Soundscene project aim to build a deeper understanding of this process, as Professor Jennifer Bizley explains.

There are often multiple sources of sound around us in our daily lives, whether it’s colleagues chatting in the office, traffic noise, or the sort of background hum that you often hear in a busy pub. Different sounds arrive at the ear as a mixture, yet somehow the brain can still pick out a single voice or a particular source of sound, a topic of great interest to Jennifer Bizley, Professor of Auditory Neuroscience at University College London. “We don’t really know how the brain does this. We know that older listeners, and people suffering from hearing loss or certain neurological problems, find this process quite challenging,” she says. This is an issue central to Professor Bizley’s work as the Principal Investigator of the Soundscene project. “We want to improve our understanding of this process. If we can work out how the brain does this, then maybe we can use that knowledge to develop a machine listening device that can successfully pick a voice out of a mixture,” she outlines.

SOUNDSCENE

How does the brain organize sounds into auditory scenes? Real-world listening involves making sense of the numerous competing sound sources that exist around us and engages multiple brain regions. We seek to understand how neural processing within the auditory cortex, prefrontal cortex and hippocampus, and the interactions between these areas, enable listeners to make sense of sound.

Project Coordinator
Jennifer Bizley, D.Phil
Professor of Auditory Neuroscience
UCL Ear Institute
332 Gray’s Inn Road
London WC1X 8EE
T: 020 7679 8934
E: j.bizley@ucl.ac.uk
W: www.dBSPL.co.uk

Jennifer Bizley is Professor of Auditory Neuroscience and a Wellcome Trust and Royal Society Sir Henry Dale Fellow at the UCL Ear Institute.

Soundscene project

This research builds on earlier behavioural and brain-scanning experiments in humans that have shown which areas of the brain are active while listeners interpret sound scenes. “Some of these are areas that you would expect, like the auditory cortex. But then there are other areas, like the hippocampus and the frontal cortex, that imaging studies have shown are also involved,” says Professor Bizley. While functional imaging can show which parts of the brain are involved, it doesn’t really tell researchers how those areas solve a task, so Professor Bizley and her colleagues are using an animal model to gain deeper insights. “We are using state-of-the-art systems neuroscience methods in ferrets, which are really effective as an animal model because they are able to learn behavioural tasks involving auditory scene analysis. We want to understand what role these areas beyond the auditory cortex play in listening,” she explains.

The development of more sophisticated electrodes, capable of recording signals from hundreds of sites in the brain simultaneously, has opened up new possibilities in this respect. Researchers today have access to a lot more data than their predecessors, and Professor Bizley says this data can also be used in interesting ways. “We can begin to understand how neurons interact with one another, and how information is processed and transmitted between different brain regions,” she continues. The ear projects to a region called the cochlear nucleus, which sits in the brain stem, and a series of brain stem nuclei extract different features of sound. “Eventually information ends up in the auditory cortex where neurons can integrate information across time and frequency. We think this is where the brain effectively un-mixes the sound that arrived at the ear back into the separate sources that existed in the world – and from there you can choose a source to listen to.”

This research holds important implications for our understanding of hearing difficulties. Typical age-related hearing loss involves the loss of the cells responsible for detecting sound, which compromises a listener’s ability to perform scene analysis effectively. Another group of people have what are loosely called central auditory processing disorders. “If you sit them in an audiology booth, and you measure their ability to detect pure tones, they’re completely normal. Yet, if you put them in a situation where there is a lot of background noise, they really struggle to pick a voice out,” says Professor Bizley. A deeper understanding of how the brain separates different sources of sound could eventually lead to improved treatment of these disorders, while Professor Bizley says the project’s research also holds relevance in other areas. “For example, the machine listening devices that sit behind things like Siri and Alexa need to pick out a voice from background noise,” she points out.