FEATURE
HEARING WITH THE EARS,
LISTENING WITH THE BRAIN
By Bernhard Ross, PhD, and Claudia Freigang, PhD (Postdoctoral Fellow)
Faculty Affiliations: Senior Scientist, Rotman Research Institute, Baycrest Health Sciences; Associate Professor, Department of Medical Biophysics, University of Toronto; Full Member, Institute of Medical Science
Sensory organs provide our interface to the environment. Hearing, in particular, is crucial for speech communication, and not being able to hear properly has a huge impact on everyday life. Our ears receive the pressure changes of sound travelling through the air, transform the sound into mechanical vibrations at the basilar membrane, and finally into electrical impulses at the auditory nerve. However, to understand a conversation or listen joyfully to a piece of music, we need the brain as an interpreter of what we hear. With our research we try to understand how the brain identifies elementary features of sound, separates sounds originating from different sources, and integrates pieces of sound into meaningful streams of speech and music.

The brain has the spectacular ability to disentangle multiple simultaneously occurring sounds, like those in everyday conversations and environments. We term this ability 'auditory scene analysis' and subsume the underlying brain actions under 'central auditory processing'. We investigate brain activity related to sensation and perception with electroencephalography (EEG) and magnetoencephalography (MEG), as both techniques record neural activity at fine time scales with millisecond resolution. With EEG and MEG we can identify brain activity at each stage of auditory processing, and we use a battery of signal-processing algorithms to extract auditory neural signals from ongoing brain activity.

Each sound we hear elicits a sequence of neural responses along the auditory pathway, reflecting a hierarchy of information processing. Early responses are related to the analysis of the temporal and spectral structure of the sound; later responses may relate to the combination of sounds into a syllable or a word; and even later responses may indicate cognitive processes of decision-making or reacting based on the perceived sound. We focus our analyses on early and later brain responses. Early auditory responses are strictly time-locked to the sound and can be identified by their temporal structure. Later brain responses are generated at more variable latencies, and we identify those signals based on their rhythmic structure using frequency analysis of brain activity. MEG additionally allows us to localize which brain area is involved in generating the neural response.

Aging affects our sensory organs: the range of visual accommodation and tactile acuity are reduced, and we lose sensitivity for hearing, specifically at higher frequencies. Hearing loss progresses gradually and becomes noticeable for most people in their 60s, when high-frequency hearing loss reaches the frequency range of speech. Hearing loss can be compensated for

18 | IMS MAGAZINE SPRING 2016 SENSORY SYSTEMS