The Oxford Handbook of Neurolinguistics
Edited by GREIG I. DE ZUBICARAY and NIELS O. SCHILLER
Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries.
Published in the United States of America by Oxford University Press 198 Madison Avenue, New York, NY 10016, United States of America.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above.
You must not circulate this work in any other form, and you must impose this same condition on any acquirer.
Library of Congress Cataloging-in-Publication Data
LC record available at https://lccn.loc.gov/2018013813
CONTENTS
Preface
Greig I. de Zubicaray and Niels O. Schiller
List of Contributors
1. Neurolinguistics: A Brief Historical Perspective
Sheila E. Blumstein
PART I. THE METHODS
2. Neurolinguistic Studies of Patients with Acquired Aphasias
Stephen M. Wilson
3. Electrophysiological Methods in the Study of Language Processing
Michelle Leckey and Kara D. Federmeier
4. Studying Language with Functional Magnetic Resonance Imaging (fMRI)
Stefan Heim and Karsten Specht
5. Transcranial Magnetic Stimulation to Study the Neural Network Account of Language
Teresa Schuhmann
6. Magnetoencephalography and the Cortical Dynamics of Language Processing
Riitta Salmelin, Jan Kujala, and Mia Liljeström
7. Shedding Light on Language Function and Its Development with Optical Brain Imaging
Yasuyo Minagawa and Alejandrina Cristia
8. What Has Direct Cortical and Subcortical Electrostimulation Taught Us about Neurolinguistics?
Hugues Duffau
9. Diffusion Imaging Methods in Language Sciences
Marco Catani and Stephanie J. Forkel
PART II. DEVELOPMENT AND PLASTICITY
10. Neuroplasticity: Language and Emotional Development in Children with Perinatal Stroke
Judy S. Reilly and Lara R. Polse
11. The Neurolinguistics of Bilingualism: Plasticity and Control
David W. Green and Judith F. Kroll
12. Language and Aging
Jonathan E. Peelle
13. Language Plasticity in Epilepsy
Jeffrey R. Cole and Marla J. Hamberger
14. Language Development in Deaf Children: Sign Language and Cochlear Implants
Aaron J. Newman
PART III. ARTICULATION AND PRODUCTION
15. Neuromotor Organization of Speech Production
Pascale Tremblay, Isabelle Deschamps, and Anthony Steven Dick
16. The Neural Organization of Signed Language: Aphasia and Neuroscience Evidence
David P. Corina and Laurel A. Lawyer
17. Understanding How We Produce Written Words: Lessons from the Brain
Brenda Rapp and Jeremy Purcell
18. Motor Speech Disorders
Wolfram Ziegler, Theresa Schölderle, Ingrid Aichert, and Anja Staiger
19. Investigating the Spatial and Temporal Components of Speech Production
Greig I. de Zubicaray and Vitória Piai
20. The Dorsal Stream Auditory-Motor Interface for Speech
Gregory Hickok
PART IV. CONCEPTS AND COMPREHENSION
21. Neural Representations of Concept Knowledge
Andrew J. Bauer and Marcel A. Just
22. Finding Concepts in Brain Patterns: From Feature Lists to Similarity Spaces
Elizabeth Musz and Sharon L. Thompson-Schill
23. The How and What of Object Knowledge in the Human Brain
Frank E. Garcea and Bradford Z. Mahon
24. Neural Basis of Monolingual and Bilingual Reading
Pedro M. Paz-Alonso, Myriam Oliver, Ileana Quiñones, and Manuel Carreiras
25. Dyslexia and Its Neurobiological Basis
Kaja Jasińska and Nicole Landi
26. Speech Perception: A Perspective from Lateralization, Motorization, and Oscillation
David Poeppel, Gregory B. Cogan, Ido Davidesco, and Adeen Flinker
27. Sentence Processing: Toward a Neurobiological Approach
Ina Bornkessel-Schlesewsky and Matthias Schlesewsky
28. Comprehension of Metaphors and Idioms: An Updated Meta-analysis of Functional Magnetic Resonance Imaging Studies
Alexander Michael Rapp
29. Language Comprehension and Emotion: Where Are the Interfaces, and Who Cares?
Jos J. A. van Berkum
PART V. GRAMMAR AND COGNITION
30. Grammatical Categories
David Kemmerer
31. Neurocognitive Mechanisms of Agrammatism
Cynthia K. Thompson and Jennifer E. Mack
32. Verbal Working Memory
Bradley R. Buchsbaum
33. Subcortical Contributions to Language
David A. Copland and Anthony J. Angwin
34. Lateralization of Language
Lise Van der Haegen and Qing Cai
35. Neural Mechanisms of Music and Language
Mattson Ogg and L. Robert Slevc
Index
PREFACE
GREIG I. DE ZUBICARAY AND NIELS O. SCHILLER
NEUROLINGUISTICS is a highly interdisciplinary field, with influences from psycholinguistics, psychology, aphasiology, (cognitive) neuroscience, and many more. A precise definition is elusive, but often neurolinguistics is considered to cover approximately the same range of topics as psycholinguistics, that is, all aspects of language processing, but approached from various scientific perspectives and methodologies. Twenty years ago, when the first Handbook of Neurolinguistics, edited by Harry Whitaker and Brigitte Stemmer, was published, it was relatively easy to identify the contributions from individual disciplines, with the dominant evidence base and approach being clinical aphasiology. Today, neurolinguistics has progressed such that individual researchers tackle topics of interest using multiple methods, and share a common sense of identity and purpose, culminating in their own society and annual conference. The Society for the Neurobiology of Language will have its tenth anniversary in 2018, and its annual meeting now regularly exceeds 700 attendees.
When we first proposed to collate and edit this Handbook of 35 chapters, we knew we were undertaking a challenging task given the rapid expansion of the field and pace of progress in recent years. We envisaged a mix of chapters from established and emerging researchers, with contributions covering the contemporary topics of interest to the field of neurolinguistics. We wanted more than the mere acknowledgment of the multilingual brain featured in previous handbooks, and to encourage varied perspectives on how language interacts with broader aspects of cognition and emotion. Responses to our invitations were mostly generous. By and large, we believe we have achieved much of what we set out to accomplish.
The scope and aim of this new Oxford Handbook of Neurolinguistics is to provide students and scholars with concise overviews of the state of the art
in particular topic areas, and to engage a broad audience with an interest in the neurobiology of language. The chapters do not attempt to provide exhaustive coverage, but rather present discussions of prominent questions posed by a given topic.
Following an introductory chapter providing a brief historical perspective of the field, Part I covers the key techniques and technologies used to study the neurobiology of language today, including lesion-symptom mapping, functional imaging, electrophysiology, tractography, and brain stimulation. Each chapter provides a concise overview of the use of each technique by leading experts, who also discuss the various challenges that neurolinguistic researchers are likely to encounter.
Part II addresses the neurobiology of language acquisition during healthy development and in response to challenges presented by congenital and acquired conditions. Part III covers the many facets of our articulate brain, its capacity for language production—written, spoken, and signed—again from both healthy and clinical perspectives. Questions regarding how the brain organizes and represents meaning are addressed in Part IV, ranging from word to discourse level in written and spoken language, from perception to statistical modeling. The final Part V reaches into broader territory, characterizing and contextualizing the neurobiology of language with respect to more fundamental neuroanatomical mechanisms.
Our thanks go to the authors of the chapters, without whom the Handbook would not have been possible. Their commitment, expertise, and talent in exposition are rivaled only by their patience with the editorial process. Thanks also go to Peter Ohlin, Hannah Doyle, and Hallie Stebbins at Oxford University Press, who encouraged and ensured the publication of The Oxford Handbook of Neurolinguistics.
Contributors
Ingrid Aichert, PhD, is a speech-language pathologist. She works as a Research Associate in the Clinical Neuropsychology Research Group (EKN) at the Institute of Phonetics and Speech Processing, University of Munich, Germany. Her main areas of research are apraxia of speech and phonological disorders.
Anthony J. Angwin is a Senior Lecturer in Speech Pathology at the University of Queensland. His research, focused primarily within the field of Language Neuroscience, uses neuroimaging and behavioral paradigms to advance current understanding of language processing and language learning in healthy adults and people with neurological impairment.
Andrew J. Bauer received his PhD at Carnegie Mellon University and is currently a Postdoctoral Fellow at the University of Toronto. His research uses machine learning techniques applied to fMRI data to understand where and how knowledge is neurally represented in the brain, and how the brain changes with learning new concepts.
Sheila E. Blumstein is the Albert D. Mead Professor Emerita of Cognitive, Linguistic, and Psychological Sciences at Brown University. Her research is concerned with delineating the neural basis of language and the processes and mechanisms involved in speaking and understanding, using behavioral and neural measures of persons with aphasia and functional neuroimaging. Blumstein’s research has focused on how the continuous acoustic signal is transformed by perceptual and neural mechanisms into the sound structure of language, how the sound structure of language maps to the lexicon (mental dictionary), and how the mental dictionary is organized for the purposes of language comprehension and production.
Ina Bornkessel-Schlesewsky is Professor of Cognitive Neuroscience in the School of Psychology, Social Work and Social Policy at the University of South Australia in Adelaide. She was previously Professor of
Neurolinguistics at the University of Marburg, Germany, and Head of the Max Planck Research Group Neurotypology at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany. Her main research interest is in the neurobiology of higher-order language processing.
Bradley R. Buchsbaum is an Associate Professor in the Department of Psychology at the University of Toronto and a scientist at the Rotman Research Institute at Baycrest. His research focuses on the cognitive neuroscience of memory and language, with special focus on how memory emerges from neocortical representations that underlie perceptual and motor cognition.
Qing Cai is a cognitive psychologist and Professor of Psychology at East China Normal University. Her research focuses on the neural basis of speech and reading, their acquisition in typical and atypical development, as well as their relation to learning, memory, music, and other higher-order cognitive functions.
Manuel Carreiras, PhD, is the Scientific Director of the Basque Center on Cognition, Brain and Language (BCBL), Ikerbasque Research Professor, Honorary Professor of the University College of London, and Visiting Professor of the University of the Basque Country (UPV/EHU). His research focuses on reading, bilingualism, and second-language learning. He has published more than 200 papers in high-impact journals in the field. His research has been funded by various research agencies, including the European Research Council.
Marco Catani is Professor of Neuroanatomy and Psychiatry at King’s College London and Honorary Consultant Psychiatrist at the Maudsley Hospital. He has contributed to the development of diffusion tractography methods applied to the study of white matter connections in the normal brain and in a wide range of neurodevelopmental and neurological disorders.
Gregory B. Cogan is an Assistant Professor of Neurosurgery at Duke University. His research focuses on the neural underpinnings of speech and auditory cognition.
Jeffrey R. Cole is an Assistant Professor of Clinical Neuropsychology in the Department of Neurology at Columbia University Medical Center, and Adjunct Assistant Professor of Psychology and Education at Columbia
University – Teachers College. His clinical practice and research interests focus on patients with complex and medically refractory epilepsies, Wada testing, and cortical language mapping.
David A. Copland is a University of Queensland Vice Chancellor’s Fellow and speech pathologist. He is active in the fields of psycholinguistics, language neuroscience, and clinical aphasia management. He has particular interests in determining the neural mechanisms underpinning aphasia recovery and treatment, in developing better interventions for aphasia, and in understanding subcortical contributions to language as observed in stroke and in Parkinson’s disease.
David P. Corina is a Professor in the Departments of Linguistics and Psychology at the University of California, Davis. He is the Director of the Cognitive Neurolinguistics Laboratory at the Center for Mind and Brain. His research interests include the neural processing of signed and spoken languages and neural plasticity as a function of linguistic and altered sensory experience.
Alejandrina Cristia received her PhD in Linguistics from Purdue University in 2009 and did postdoctoral work on neuroimaging at the Max Planck Institute for Psycholinguistics before joining the French CNRS (Centre national de la recherche scientifique) as a Researcher in 2013.
Ido Davidesco is a Research Assistant Professor at the Teaching and Learning Department at New York University. His research focuses on how brain oscillations become synchronized in classrooms.
Greig I. de Zubicaray is Professor and Associate Dean of Research in the Faculty of Health at Queensland University of Technology, Brisbane, Australia. His research covers brain mechanisms involved in language and memory and their disorders, neuroimaging methodologies, the aging brain and cognitive decline, and most recently, the emerging field of imaging genetics.
Isabelle Deschamps is a Professor at Georgian College, Orillia, Ontario, Canada, and a Researcher in the Speech and Hearing Neuroscience Laboratory in Québec City. Her research focuses on the neural correlates of phonological processes during speech perception and production.
Anthony Steven Dick is Associate Professor of Developmental Science and Director of the Cognitive Neuroscience Program in the Department of Psychology at Florida International University, Miami. His research focus is on the developmental cognitive neuroscience of language and executive function.
Hugues Duffau (MD, PhD) is Professor and Chairman of the Neurosurgery Department in the Montpellier University Medical Center and Head of the INSERM 1051 Team at the Institute for Neurosciences of Montpellier (France). He is an expert in the awake cognitive neurosurgery of slow-growing brain tumors. For his innovative work in neurosurgery and neurosciences, he was awarded Doctor Honoris Causa five times, and he was the youngest recipient of the prestigious Herbert Olivecrona Award from the Karolinska Institute in Stockholm. He has written four textbooks and over 370 publications, with a total of more than 25,000 citations and an h-index of 85.
Kara D. Federmeier is a Professor in the Department of Psychology and the Neuroscience Program at the University of Illinois and a full-time faculty member at the Beckman Institute for Advanced Science and Technology, where she leads the Illinois Language and Literacy Initiative and heads the Cognition and Brain Lab. Her research examines meaning comprehension and memory using human electrophysiological techniques, in combination with behavioral, eye-tracking, and other functional imaging and psychophysiological methods.
Adeen Flinker is an Assistant Professor of Neurology at the New York University School of Medicine. He is the Director of Intracranial Neurophysiology Research at the Comprehensive Epilepsy Center. His research focuses on the temporal dynamics of speech production and perception.
Stephanie J. Forkel is an Honorary Lecturer at the Departments of Neuroimaging and Forensic and Neurodevelopmental Sciences at the Sackler Institute of Translational Neurodevelopment, Institute of Psychiatry, Psychology and Neuroscience at King’s College London. She has a background in psychology and neurosciences, which she currently applies to identify neuroimaging predictors of language recovery after brain lesions using diffusion imaging.
Frank E. Garcea completed his PhD in Cognitive Neuroscience in the Department of Brain and Cognitive Sciences at the University of Rochester in July 2017. He is now a postdoctoral research fellow at the Moss Rehabilitation Research Institute, where he studies language and action representation in brain-damaged individuals.
David W. Green is an Emeritus Professor in the Faculty of Brain Sciences at University College London. Theoretical work and neuroimaging research with neurologically normal participants from young adults to the elderly have been combined with applied research into the neural predictors of speech recovery post-stroke in monolingual and multilingual individuals with aphasia.
Marla J. Hamberger is a Professor of Neuropsychology in the Department of Neurology at Columbia University Medical Center, and Director of Neuropsychology at the Columbia Comprehensive Epilepsy Center. Her research focuses on brain organization of cognitive mechanisms supporting word production using electrocortical stimulation mapping and behavioral techniques in patients who require brain surgery involving eloquent cortex.
Stefan Heim is a cognitive neuropsychologist and neurolinguist. He did his PhD thesis at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany. He is now Professor and Chair of the academic programs for Speech-Language Therapy (BSc, MSc) at Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen University, Germany. His main research focus is on the connectivity and plasticity of the language network in the human brain.
Gregory Hickok is Professor of Cognitive Sciences and Language Science at the University of California, Irvine. He is Editor-in-Chief of Psychonomic Bulletin & Review and author of The Myth of Mirror Neurons (2014).
Kaja Jasińska is an Assistant Professor of Linguistics and Cognitive Science at the University of Delaware. Dr. Jasińska studies the neural mechanisms that support language, cognitive, and reading development across the lifespan using a combination of behavioral, genetic, and neuroimaging research methods. Her research aims to understand how early life experiences can change the brain’s capacity for language and learning, with particular focus on understanding development in high-risk environments.
Marcel A. Just, D. O. Hebb Professor of Cognitive Neuroscience at Carnegie Mellon and Director of its Center for Cognitive Brain Imaging, uses fMRI to study language-related neural processing. The research uses machine learning and other techniques to identify the semantic components of the neural signature of individual concepts, such as concrete objects (e.g., hammer), emotions (e.g., sadness), and quantities (e.g., three). The projects examine normal concept representations in college students, as well as disordered concepts in special populations, such as patients with autism or suicidal ideation.
David Kemmerer’s empirical and theoretical work focuses mainly on how different conceptual domains are mediated by different cortical systems. He is especially interested in the relationships between semantics, grammar, perception, and action, and in cross-linguistic similarities and differences in conceptual representation. He has published over 60 articles and chapters, and also wrote an introductory textbook called Cognitive Neuroscience of Language (2015).
Judith F. Kroll is Distinguished Professor of Psychology at the University of California, Riverside, and former Director of the Center for Language Science at Pennsylvania State University. Her research takes a cognitive neuroscience approach to second-language learning and bilingualism.
Jan Kujala is a Staff Scientist at the Department of Neuroscience and Biomedical Engineering, Aalto University, Finland. He has introduced and actively develops magnetoencephalography (MEG)-based methods for investigating cortico-cortical connectivity and applying them in the language domain.
Nicole Landi is an Associate Professor of Psychological Sciences at the University of Connecticut and the Director of EEG Research at Haskins Laboratories. Dr. Landi’s research seeks to better understand typical and atypical language and reading development using cognitive neuroscience and genetic methodologies.
Laurel A. Lawyer is a Lecturer in Psycholinguistics at the University of Essex. Her work has looked at the intersection of phonological theory and speech perception, as well as aspects of deaf language processing. Her current work investigates morphological decomposition in speech
perception, and ambient language processing in children with cochlear implants and normal hearing adults.
Michelle Leckey is a PhD candidate in the Psychology Department at the University of Illinois, Urbana-Champaign. As a member of the Cognition and Brain Lab, her research uses electrophysiological methods to investigate syntactic processing across the life span, as well as individual differences that impact lateralization of language processing.
Mia Liljeström received her doctoral degree from Aalto University, Finland. She is currently working at the Department of Neuroscience and Biomedical Engineering at Aalto University, Finland, where she combines magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) to study large-scale functional networks underlying language and speech.
Jennifer E. Mack is an Assistant Professor in the Department of Communication Disorders at the University of Massachusetts–Amherst. Her research focuses on the neural and cognitive basis of sentence processing impairments and recovery in aphasia, using methods such as eye-tracking and magnetic resonance imaging (MRI).
Bradford Z. Mahon is an Associate Professor in the Department of Psychology at Carnegie Mellon University. He is Co-Editor-in-Chief of Cognitive Neuropsychology. His research program uses structural and functional magnetic resonance imaging (MRI) and behavioral testing in patients with acquired brain lesions to test cognitive and neural models of normal function, and to develop prognostic indicators of long-term recovery.
Yasuyo Minagawa is a Professor of the Department of Psychology at Keio University. She received her PhD in medicine from the University of Tokyo in 2000. Her research examines the development of perception and cognition, with a focus on speech perception, social cognition, and typical and atypical brain development.
Elizabeth Musz is currently a Postdoctoral Research Fellow in the Psychological and Brain Sciences Department at Johns Hopkins University. Her research uses neuroimaging to study how conceptual information is represented in the brain. She received her PhD in Psychology at the University of Pennsylvania.
Aaron J. Newman, BA (Winnipeg), MSc, PhD (Oregon), is a Professor in the Departments of Psychology & Neuroscience, Pediatrics, Psychiatry, and Surgery at Dalhousie University in Halifax, Canada. His research focuses on the use of behavioral, neuropsychological, and multi-modal neuroimaging methods to study neuroplasticity in language and related systems. He is actively involved in training and research initiatives involving applications and commercialization of neuroscience.
Mattson Ogg is a PhD student in the Neuroscience and Cognitive Science Program at the University of Maryland, College Park. His background as a musician and recording engineer informs his approach to the study of music and language. His specific interests center around how listeners recognize sound sources.
Myriam Oliver, PhD, is a cognitive neuroscientist who obtained her PhD at the Basque Center on Cognition, Brain and Language (BCBL). Currently, she works at the Hoeft Lab for Educational Neuroscience at the University of California, San Francisco. The main aim of her research is to understand how reading structurally and functionally modulates the neural networks of healthy bilinguals and monolinguals.
Pedro M. Paz-Alonso, PhD, is the Principal Investigator leading the Language and Memory Control research group at the Basque Center on Cognition, Brain and Language (BCBL). He received his training at the Center for Mind and Brain at the University of California, Davis, and at the Helen Wills Neuroscience Institute at the University of California, Berkeley. He uses functional and structural MRI to further understand the neurobiological basis of reading, language control, and memory processes and their development over childhood. He was recently awarded the Ramón y Cajal research fellowship.
Jonathan E. Peelle is in the Department of Otolaryngology at Washington University in Saint Louis. His research investigates the neuroscience of speech comprehension, aging, and hearing impairment using a combination of behavioral and brain imaging methods.
Vitória Piai is an Associate Principal Investigator at the Donders Institute for Brain, Cognition and Behaviour of Radboud University and Radboud University Medical Center. Her research focuses on language function in healthy populations as well as populations with speech or language
impairment. She works with a range of behavioral and neuroimaging methods and pays special attention to the intersection of language and other functions, such as executive control, (semantic) memory, and motor control in the case of speaking.
David Poeppel is the Director of the Department of Neuroscience at the Max-Planck-Institute (MPIEA) in Frankfurt, Germany, and a Professor of Psychology and Neural Science at New York University. His group focuses on the brain basis of hearing, speech, language, and music processing.
Lara R. Polse received her PhD and training in Speech Language Pathology from the San Diego State University/University of California, San Diego, Joint Doctoral Program in Language and Communicative Disorders. She is currently a Speech Language Pathologist in Davis, California.
Jeremy Purcell is a cognitive neuroscience Research Scientist in the Cognitive Science Department at Johns Hopkins University. His research uses both cognitive neuropsychology and neuroimaging methods to study the neural bases of orthographic representations in both reading and spelling.
Ileana Quiñones, PhD, is a Postdoctoral Researcher at the Basque Center on Cognition, Brain and Language (BCBL). Her main research interests focus on the characterization of the brain dynamics underlying language comprehension, a theoretical problem with a direct impact on education and social policies. Her research experience includes studies with healthy participants and atypical populations, with different paradigms and with a variety of behavioral and neuroimaging techniques (e.g., EEG, MRI, fMRI, and DTI).
Alexander Michael Rapp, PhD, MD, is a Psychiatrist and Researcher at the University of Tübingen, Germany. His research interests include the functional neuroanatomy of non-literal language in healthy subjects and patients with psychiatric diseases.
Brenda Rapp is a Professor of Cognitive Science at Johns Hopkins University and Editor-in-Chief of the journal Cognitive Neuropsychology. Her research focuses on understanding the nature of the cognitive and neural bases of the orthographic representations and processes that support reading and spelling. To this end, she applies the methods of cognitive neuropsychology, psycholinguistics, and neuroimaging.
Judy S. Reilly, PhD, is a developmental psycholinguist who has worked on affect and language development (spoken and written) in both typical and atypical populations.
Riitta Salmelin is Professor of Imaging Neuroscience at the Department of Neuroscience and Biomedical Engineering, Aalto University, Finland. She has pioneered the use of magnetoencephalography (MEG) in language research, and has a strong track record in multidisciplinary neuroscience research and training. She has edited the handbook MEG: An Introduction to Methods (Oxford University Press 2010) and serves as Associate Editor of the journal Human Brain Mapping.
Niels O. Schiller is Professor of Psycho- and Neurolinguistics at Leiden University. He is Academic Director of the Leiden University Centre for Linguistics (LUCL) and serves on the board of the Leiden Institute for Brain and Cognition (LIBC). His research areas include syntactic, morphological, and phonological processes in language production and reading aloud. Furthermore, he is interested in articulatory-motor processes during speech production, language processing in neurologically impaired patients, and forensic phonetics.
Matthias Schlesewsky is a Professor in the School of Psychology, Social Work and Social Policy at the University of South Australia in Adelaide. He was previously Professor of General Linguistics at the University of Mainz, Germany, and, prior to that, one of the first “Junior Professors” in the German Academic System, with a position at the University of Marburg. His main research interests are in the neurobiology of language and changes in language processing over the life span.
Theresa Schölderle, PhD, is a Research Associate in the Clinical Neuropsychology Research Group (EKN) at the Institute of Phonetics and Speech Processing, University of Munich, Germany. Her main area of research is early-acquired dysarthria. Moreover, she works as a speech therapist in an institution for children and adults with multiple disabilities.
Teresa Schuhmann is an Associate Professor of Cognitive Neuroscience at Maastricht University. Her research focuses on applying various noninvasive neuromodulation techniques in cognitive and clinical neuroscience. Dr. Schuhmann is one of the pioneers in the combination of neuroimaging and
neuromodulation techniques for studying the network dynamics underlying language production.
L. Robert Slevc is an Associate Professor of Psychology, part of the program in Neuroscience and Cognitive Science, and a member of the Maryland Language Science Center at the University of Maryland, College Park. His research focuses on the cognitive mechanisms underlying language processing, music processing, and their relationships in both normal and brain-damaged populations.
Karsten Specht is a cognitive neuroscientist. He is a Professor at the Department of Biological and Medical Psychology at the University of Bergen, Norway, where he became head of the Bergen fMRI group, and he also holds a guest professorship at the Arctic University of Norway in Tromsø. His main research focus is on clinical multimodal neuroimaging, auditory perception of speech and music, connectivity and plasticity of the language network, and rehabilitation from speech and language disorders.
Anja Staiger, PhD, is a speech therapist and neurophonetician. She works as a Research Associate in the Clinical Neuropsychology Research Group (EKN) at the Institute of Phonetics and Speech Processing, University of Munich, Germany. Her main areas of research are speech motor disorders (apraxia of speech and dysarthria).
Cynthia K. Thompson is a Ralph and Jean Sundin Distinguished Professor of Communication Science, Professor of Neurology, and Director of the Center for the Neurobiology of Language Recovery (CNLR) at Northwestern University. Her work, supported by the National Institutes of Health throughout her academic career, examines normal and disordered sentence processing (and recovery in aphasia), using online (i.e., eye-tracking), multimodal neuroimaging, and other methods. She has published her work in more than 150 papers in refereed journals, numerous book chapters, and two books.
Sharon L. Thompson-Schill is the Christopher H. Browne Distinguished Professor of Psychology at the University of Pennsylvania, and the founding Director of mindCORE, Penn’s hub for the integrative study of the mind. Thompson-Schill’s lab studies the biological bases of human cognitive systems. She uses a combination of psychological and neuroscientific methods, in both healthy and brain-damaged individuals, to study the psychological, neurological, and genetic bases of complex thought and behavior, including topics in perception, attention, memory, language, and decision-making.
Pascale Tremblay is Associate Professor of Speech-Language Pathology at Université Laval in Quebec City, Canada, Researcher at the CERVO Brain Research Center, and Director of the Speech and Hearing Neuroscience Laboratory. Her research focuses on the cognitive neuroscience of speech perception and production and on cognitive aging.
Jos J. A. van Berkum is Professor in Communication, Cognition and Emotion at Utrecht University. His research explores the pragmatic aspects of language comprehension, with a particular focus on affective and social factors.
Lise Van der Haegen is a Postdoctoral Researcher at the Department of Experimental Psychology (Ghent University, Belgium), funded by the Research Foundation Flanders. Her research focuses on (a)typical lateralization of language and face processing in left-handers and bilinguals.
Stephen M. Wilson is Associate Professor of Hearing and Speech Sciences at Vanderbilt University Medical Center. His research interests are aphasia and neuroimaging of language processing.
Wolfram Ziegler, PhD, is Professor of Neurophonetics and Head of the Clinical Neuropsychology Research Group (EKN) at the Institute of Phonetics and Speech Processing, University of Munich, Germany. His main areas of research are speech motor control and disorders.
CHAPTER 1
NEUROLINGUISTICS: A Brief Historical Perspective
SHEILA E. BLUMSTEIN
THE past 50 years have witnessed a revolution in our understanding of how the faculties of mind intersect with the brain. A major piece of this endeavor is neurolinguistics, the study of the neural mechanisms underlying language. The scientific field of neurolinguistics was originally defined by Harry Whitaker and remains the centerpiece of the journal Brain and Language, which he founded in 1974. However, the spirit of neurolinguistics predates the 1970s and has been and continues to be the subject of study under the guise of a number of other disciplines, including neuropsychology, aphasiology, psycholinguistics, and the cognitive neuroscience of language. While it is beyond the scope of this chapter to chart the complete history of the study of the brain and language, the chapter provides a retrospective view, focusing on the foundations from which our current knowledge and questions derive, the theoretical underpinnings that still guide much of our current research, what we have learned, and what questions and challenges remain for the future.
There is a long history of examining the effects of brain injury on language. The work of Paul Broca and Carl Wernicke showed that lesions to
particular areas of the brain had specific and different consequences on language behavior. Indeed, classical aphasiology, exemplified by the works of Kurt Goldstein (1948), Henry Head (1926), and Alexander Luria (1966), to name a few, provided detailed descriptions of the clinical syndromes that emerged pursuant to lesions to particular areas of the brain. These clinical syndromes identified a constellation of impaired and spared language abilities centering on the following: speech output, its fluency and articulation; auditory comprehension of sounds, words, sentences; naming; repetition of words and sentences; and secondary language skills including reading and writing. Among these syndromes, it is the language behavior of patients who were clinically classified as Broca’s, Conduction, and Wernicke’s aphasia that served as the primary foundation for the detailed examination of the nature of the underlying language deficits.
From this work emerged the view that language for most individuals was left-hemisphere dominant, and that there was a direct relation between neural areas and language function (i.e., one could “predict” the aphasia syndrome based on lesion site and vice versa). Indeed, the seminal monograph Disconnexion Syndromes in Animals and Man, written by Norman Geschwind (1965), built on the classical work of the nineteenth- and twentieth-century “diagram makers.” The diagram makers, led by Wernicke and Ludwig Lichtheim, identified brain regions that were “centers” specialized for particular language functions, and they mapped out, in the form of diagrams, these centers and the connections between them. In this view, damage to these functional centers or to the connections between them gave rise to the classical aphasia syndromes (for a detailed review, see Levelt, 2013). Lesion localization was limited by the technology of the day, and, as we will see, as that technology advanced, the accuracy of lesion-symptom mapping could not be sustained in its entirety. Nonetheless, the notion that there were functionally specialized neural centers with white and gray matter connections between them was a major advancement in the field, as it became the working model characterizing language deficits in aphasia and language-brain relations more broadly.
While the aphasia syndromes provided a rich tapestry of spared and impaired language abilities, they left open the question of which aspects of linguistic function may be compromised. In particular, linguists have long assumed that language is broken down into a hierarchy of structural components, including sounds (phonetics and phonology), words (the lexicon), sentences (syntax), and meaning (semantics). Each of these components has its own set of properties or representations.
Considering language deficits in aphasia from this linguistic perspective, a different set of questions arises, one that more directly addresses the nature of the deficits themselves. Here, for example, one can ask whether the auditory comprehension deficit in Wernicke’s aphasia reflects an impairment in processing the sounds of speech, in mapping sounds to words, or in processing the meanings of words; or whether the nonfluent, often agrammatic production deficit of Broca’s aphasics reflects an articulatory/phonological impairment, a syntactic impairment, or simply an economy of effort. And more generally, one can ask whether linguistic deficits in aphasia reflect impairments to representations or to the processes that access them. This is not to say that the early aphasiologists made no attempt to “explain” the nature of the deficits giving rise to aphasia. For example, Broca proposed that the third frontal convolution (i.e., Broca’s area) was the “center for articulated speech,” and Wernicke proposed that the auditory comprehension impairment of Wernicke’s aphasics was due to a deficit in “auditory images.” What distinguished these classical approaches to the syndromes of aphasia from the more modern era, however, was that the functional centers were defined descriptively, absent a linguistic theoretical framework, and the hypothesized functions were based on clinical observations rather than tested experimentally.
The modern era in neurolinguistics began with linguistic approaches to aphasia. Two pioneers, Roman Jakobson and Harold Goodglass, most clearly led this new approach to the study of the brain and language. Roman Jakobson was perhaps the first linguist to consider the aphasias from a linguistic perspective. He suggested that the breakdown of speech in aphasia and its development in children reflect phonological universals of language (Jakobson, 1941, translated 1972), and proposed that the output deficits in Broca’s and Wernicke’s aphasia reflect impairments on the syntagmatic and paradigmatic axes of language, with a syntagmatic deficit giving rise to Broca’s aphasics’ syntactic deficit and a paradigmatic deficit giving rise to Wernicke’s aphasics’ paragrammatic deficit (Jakobson, 1956). Interestingly, in present-day parlance, this distinction corresponds to a sequencing disorder for Broca’s aphasics and a word selection disorder for Wernicke’s aphasics. Harold Goodglass, who established the Boston Aphasia Research Center along with the neurologist
Norman Geschwind and led it from the mid-1960s to 1996, was among the first to apply experimental methods drawn from psycholinguistics and cognitive psychology to systematically examine the nature of linguistic deficits in aphasia (Goodglass, 1993). It is this multidisciplinary approach, focusing on the confluence of the study of the classical aphasia syndromes with theoretical approaches to language and experimental methodology, that gave birth to what we now call neurolinguistics.
EARLY APPROACHES TO STUDYING LINGUISTIC DEFICITS IN APHASIA
Neurolinguistic approaches to the study of aphasia were guided by the view that language is hierarchically organized into structural components and that these structural components mapped directly to functionally defined neural regions. Thus, the focus was on conducting parametric studies of linguistic features of the classical aphasias. For Broca’s aphasics, the emphasis was on potential syntactic impairments and phonetic/phonological deficits. Early results suggested that indeed Broca’s aphasics not only had agrammatic deficits in production, but also displayed auditory comprehension impairments when the only cues available were syntactic in nature (Zurif & Caramazza, 1976; Zurif, Caramazza, & Myerson, 1972). Moreover, studies of the acoustic properties of speech production suggested that these patients had articulatory/phonetic planning impairments (see Blumstein, 1981, for review). For Wernicke’s aphasics, the question was whether their auditory comprehension impairments were due to phonological deficits, where phonemic misperceptions might give rise to selecting incorrect words (e.g., hearing bear and pointing to pear), or were due to semantic impairments reflecting deficits in the underlying meaning representations of words or in accessing the meaning of words. Early results suggested that although these patients do have deficits in perceiving phonological contrasts, these deficits did not predict the severity of their auditory comprehension impairments (Basso, Casati, & Vignolo, 1977; Blumstein, Baker, & Goodglass, 1977; but see the following section, “The Modern Era,” in this chapter). Those studies examining word meaning indicated that the underlying representations of words appeared to be relatively spared in aphasia, while access to meaning
and the time course of mapping sounds to word meanings was impaired (Milberg & Blumstein, 1981; Swinney, Zurif, & Nicol, 1989; but see the following section, “The Modern Era,” in this chapter). A broad range of behavioral paradigms has been used in these studies, including speech discrimination, identification, and psychophysical experiments; word and/or picture matching; lexical decision; hierarchical clustering; and grammaticality judgments, to name a few.
Nonetheless, this approach, using aphasia syndromes as the basis for neurolinguistic investigations, was not without its critics. The criticisms came from two directions: one challenged group studies based on the classical aphasia syndromes, and the other challenged the one-to-one mapping between aphasia syndrome and lesion localization. With regard to the former, it was proposed that clinical syndromes do not necessarily cut across linguistic domains, and hence the study of patients grouped by aphasia syndrome may not provide a window into the particular linguistic deficit (Caramazza, 1984, 1986). Moreover, classification is a “messy” thing; there is variability in severity across patients within the defining properties of the syndrome, and some patients do not have all of the symptoms included in the classification schema. Hence, in this view, group studies using the classical aphasia syndromes are by their very nature fundamentally flawed (Schwartz, 1984).
To mitigate this concern, these researchers took a single case study approach where detailed analyses were conducted of particular linguistic deficits. The overarching goal was to use the effects of brain damage as a window into current linguistic theories and theories of language processing (Caramazza, 1986). However, in contrast to classical neuropsychological case studies that described unique behavioral patterns in relation to lesion localization, this approach was typically agnostic with respect to the lesion status of the patient. Hence, while it provided interesting insights into the linguistic nature of deficits in aphasia (see Rapp & Goldrick, 2006, for a review), this case study approach did not consider the neural systems underlying such deficits, and hence provided limited insight into the relation between brain and language.
Turning to lesion localization of the aphasia syndromes, technological advances in neuroimaging over the past 20 years have provided a cautionary tale for the view that there is a one-to-one mapping between aphasia syndrome and lesion. First, the lesions associated with the classical aphasias
were typically described, incorrectly, as exclusively cortical; in fact, the lesions of patients commonly include both cortical and subcortical structures (see Copland & Angwin, Chapter 33 in this volume). Second, lesions of patients with aphasia are rarely focal; rather, they tend to be large, encompassing a number of neural regions. When lesions are focal, patients typically present with transient aphasias, not the chronic syndrome profile of the classical aphasias. Third, there is variability across lesion profiles. As one might expect, no two aphasics have exactly the same lesion profile. As a result, there are differences across aphasics in the degree of damage in a particular area, as well as in the extent to which a lesion extends to other areas of the brain. Finally, research has shown that there is not a one-to-one relation between aphasia syndrome and lesion localization. For example, not all Broca’s aphasics have lesions in Broca’s area (BA45), nor do all patients with damage in Broca’s area present with Broca’s aphasia (see Dronkers, 2000).
These issues notwithstanding, neurolinguistic investigations using the aphasia syndromes have provided the basis for much of the modern era spanning the last 20 years. The aphasia syndromes have provided a theoretical framework for examining linguistic deficits, suggesting that the temporal lobe is involved in accessing the meanings of words, that the superior temporal gyrus is involved in auditory processing of speech, that posterior temporal areas are involved in integrating auditory and articulatory processes, and that the inferior frontal gyrus (IFG) is involved in processing syntax and articulatory planning as well as in selection processes for words. Critically, these studies have shown, across patient groups and lesion sites, similar patterns of performance as a function of linguistic structural complexity. For example, for all patients, structurally more complex sentences are more difficult to comprehend and also to produce, and the perception and production of phonologically similar words result in increased errors compared to words that share few phonological attributes. Despite claims that deficits are “selective” to a particular component of the grammar (e.g., syntax), nearly all aphasics, regardless of syndrome, display impairments that affect multiple linguistic components of language, although the severity of the impairment and the underlying functional impairment may differ.
Taken together, these studies were among the first to suggest that the neural systems underlying different components of the grammar are broadly tuned, recruiting distributed networks rather than functionally encapsulated neural regions (see the following section, “The Modern Era,” for further discussion). Nonetheless, although patients across syndromes may appear to share a common impairment to a particular aspect of language, their patterns of performance may differ, suggesting that different functional impairments emerge as a function of lesion site. For example, both Broca’s and Wernicke’s aphasics show lexical processing impairments. However, when access to words that share phonological attributes and hence are lexical competitors (e.g., hammer vs. hammock) is examined, lexical candidates appear to stay active longer than normal for Wernicke’s aphasics, whereas Broca’s aphasics fail to select the target from among competing lexical alternatives (Janse, 2006; Utman, Blumstein, & Sullivan, 2001; Yee, Blumstein, & Sedivy, 2008). These findings support the structural integrity of the lexical system in these two groups of patients, since lexical access is influenced by whether lexical items are competitors or not, but they suggest different processing impairments in each group, presumably reflecting the distinct functional roles of temporal and frontal lobe structures in lexical access.
THE MODERN ERA
It is difficult to demarcate when the “modern era” began, as science typically progresses incrementally. Nonetheless, there are a number of factors that contributed to what characterizes today’s approach to neurolinguistics. One has to do with the increased influence of computational models that focus on the nature of information flow in the language system (cf. Dell, 1986; Marslen-Wilson, 1987; McClelland, 1988). These models enriched earlier structural models in which components of the grammar were separate, encapsulated modules (cf. Fodor, 1983). Additionally, computational models have been used to characterize language impairments in aphasia in terms of processing deficits (Dell, Schwartz, Martin, Saffran, & Gagnon, 1997; McNellis & Blumstein, 2001; Mirman, Yee, Blumstein, & Magnuson, 2011; Rapp & Goldrick, 2000).
Neurobiologically inspired computational models, beginning with the Parallel Distributed Processing (PDP) models of the 1980s (Rumelhart & McClelland, 1986) and continuing through contemporary models
(Bornkessel-Schlesewsky, Schlesewsky, Small, & Rauschecker, 2015; Horwitz, Friston, & Taylor, 2000; Wennekers, Garagnani, & Pulvermuller, 2006), are a major advance as they seek to develop neurally plausible models of how the brain processes speech and language. For example, the use of neuron-like elements allows for graded responses to stimulus input, and Hebbian learning mechanisms can characterize word learning (Garagnani, Wennekers, & Pulvermuller, 2007). Hebbian learning has also been used to simulate lexically guided tuning effects on speech perception (Mirman, McClelland, & Holt, 2006).
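The core Hebbian principle invoked by these models, that a connection strengthens in proportion to the correlation of pre- and postsynaptic activity, can be captured in a few lines. The sketch below is purely illustrative and is not drawn from any of the cited models; the toy "sound" and "meaning" patterns, the learning rate, and all variable names are assumptions for exposition.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    """One Hebbian step: strengthen w[j, i] in proportion to post[j] * pre[i]."""
    return w + lr * np.outer(post, pre)

# Toy "word learning": repeatedly pair a sound pattern (presynaptic input)
# with a meaning pattern (postsynaptic output).
sound = np.array([1.0, 0.0, 1.0, 0.0])
meaning = np.array([0.0, 1.0, 1.0])

w = np.zeros((3, 4))  # connections from 4 sound units to 3 meaning units
for _ in range(10):
    w = hebbian_update(w, sound, meaning)

# After learning, the sound pattern alone activates the paired meaning units.
print(w @ sound)  # [0. 2. 2.] -- activation mirrors the meaning pattern
```

Because the update is simply an accumulated outer product, co-active unit pairs gain weight while never-paired units stay at zero, which is the graded, correlation-driven behavior the neurobiologically inspired models exploit.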
Without question, the single most important factor that has influenced the current state of the art in neurolinguistics research is technological advances that have revolutionized our ability to map structural and functional properties of the brain. It is beyond the scope of this introductory chapter to review all of the methods, the advantages and disadvantages of each, and their contributions to our understanding of the neural basis of language; suffice it to say that they provide a broad spectrum of tools affording different sources of information about neural activity. To briefly consider a few, functional magnetic resonance imaging (fMRI) has had perhaps the greatest influence because it is a noninvasive procedure that allows for examining neural activation patterns with parametric manipulations of linguistic constructs (see Heim & Specht, Chapter 4 in this volume). Other important methods that are now a part of the toolbox of the neurolinguist include transcranial magnetic stimulation (TMS), which allows for the creation of “virtual lesions” (see Schuhmann, Chapter 5 in this volume), and electrophysiological measures such as event-related potentials (ERPs; see Leckey & Federmeier, Chapter 3 in this volume) and magnetoencephalography (MEG; see Salmelin, Kujala, & Liljeström, Chapter 6 in this volume), which provide crucial information about the time course (in ms) of processing. Enhanced structural information comes from magnetic resonance imaging (MRI), which has provided advances in understanding the structural properties of the normal brain and in mapping lesions using voxel-based lesion mapping (Bates, Wilson, Saygin, Dick, Sereno, Knight, & Dronkers, 2003; see Wilson, Chapter 2 in this volume). Additional insights come from diffusion tensor imaging (DTI), which measures white matter fiber tracts and connections between different parts of the brain (see Catani & Forkel, Chapter 9 in this volume).