How Body Motion Shapes Social Perception

Shreya Sharma

Senior Thesis | 2025

How Body Motion Shapes Social Perception

Abstract:

This study examines the impact of motion and racial identity on empathy and implicit bias using a two-by-two mixed design. Sixty participants from UCLA complete two sessions. In session one, participants perform twenty-four everyday actions, which are motion-captured and later bound to avatars of either their own race or a racial outgroup (White vs. Black). In session two, participants interact in virtual reality with avatars animated by either their own motion or motion recorded from another same-sex individual. Neural activity is recorded using functional near-infrared spectroscopy (fNIRS). Measures of implicit bias (IAT, AMP, Racial Dictator Game), empathy (IOS, IRI), and self-reported affinity toward the avatar are collected before and after the interactions.

Predictions include enhanced neural synchrony in mirroring, motor, and mentalizing networks when participants engage with avatars displaying their own motion. Participants interacting with Black avatars bound to self-motion are expected to exhibit increased empathy toward the racial outgroup, alongside decreased implicit bias. Self-motion is hypothesized to increase affinity for the avatar, extend interaction times, and reduce interpersonal distance. The findings could provide insight into reducing racial bias and using VR to improve empathy and relationships between people of different backgrounds.

Objectives:

This study aims to investigate activity in motor and action-observation networks of the brain during interactions with avatars that perform motions representing oneself or others. It also examines whether these interactions can influence implicit bias and empathy toward individuals from different racial groups.

Introduction:

The discovery of mirror neurons has provided a neural basis for understanding the connection between observation and action. Mirror neurons were first identified in macaque monkeys in the 1990s (Acharya & Shukla, 2012). The researchers were originally studying specialized neurons involved in the control of hand and mouth movements, recording electrical signals from individual neurons while a monkey reached for a piece of food. The same neurons fired both when the monkey picked up the food and when it watched people pick up food. Mirror neurons are thus specialized cells that activate both when an individual performs an action and when that individual observes the same action performed by others. Their discovery added another layer to our understanding of empathy: mirror neurons allow humans to match the emotional states of others, providing a foundation for empathy. Humans have an instinctual drive to connect and share emotions with those around them, which is vital for social bonding and understanding. Empathy is the ability to share and understand the emotions of others. Emotional contagion, the phenomenon in which one person’s emotional state triggers a similar emotional response in another, is often considered the first step in the empathetic process (Gutsell & Inzlicht, 2011).

These mirror neurons are located in key areas of the brain: the premotor cortex, the supplementary motor area (SMA), the primary somatosensory cortex, the inferior parietal cortex, and the prefrontal cortex. The premotor cortex plays an important role in planning and organizing movement, allowing for the anticipation and execution of actions (Sira & Mateer, 2014). The SMA supports motor functions such as self-generated voluntary movement, suppressing inappropriate behaviors, monitoring actions, and sequencing movements (Coull et al., 2016); it is also involved in perceiving the duration of visual stimuli. The primary somatosensory cortex underlies the ability to perceive touch, pressure, pain, and temperature (Moreno & Holodny, 2021). The inferior parietal cortex facilitates the understanding of movement by integrating visual and sensory feedback (Cleveland Clinic, 2023a). The medial prefrontal cortex (mPFC) plays a role in the regulation of emotion, motivation, and sociability, as well as in cognitive processes (Xu et al., 2019).

The activation of mirror neurons within these regions creates a neural link between perception and action, enabling individuals to understand observed actions. This simulation process is fundamental to the development of empathy and social understanding. Prior research has shown that individuals who self-report higher levels of empathy tend to exhibit stronger mirror-neuron activation when observing others (Wallmark et al., 2018).

The initial sharing of emotions is important for empathy but requires the further realization that the emotions are elicited by another person’s experiences. The perception-action model of empathy explains how empathy operates at a neural level. According to this model, when people see someone experiencing an emotion or performing an action, their brains activate neural networks similar to those engaged when they experience the same emotion or perform the same action themselves. This neural activation helps them understand and share the emotional and motivational states of others, leading to empathetic responses. The process of neural simulation reinforces our ability to connect with others emotionally, making empathy a natural part of human interaction. Through the use of functional magnetic resonance imaging (fMRI), this mirroring effect has been documented across a range of emotions and actions, such as feeling pain, experiencing disgust, and understanding intentional movements, suggesting that empathy is deeply embedded in the brain’s functioning. fMRI is a non-invasive diagnostic tool that measures and maps brain activity by detecting changes in blood flow (Cleveland Clinic, 2023b). The principle behind fMRI is that when a particular area of the brain is more active, it consumes more oxygen; to meet this increased demand, blood flow to that region also increases. During an fMRI scan, the individual performs specific tasks or responds to stimuli, allowing researchers or clinicians to observe how different parts of the brain function and interact. This method is especially useful for mapping brain areas responsible for functions like movement, sensation, language, and memory, and it aids presurgical planning, the study of neurological disorders, and basic research.

However, the response of mirror neurons is not the same across all individuals or situations. Research by Gutsell and Inzlicht (2011), for example, has shown that the brain's mirror neuron system may not simulate the actions of someone perceived as different. They tested how empathy varies between an ingroup and an outgroup using electroencephalography (EEG), which measures electrical activity in the brain through electrodes (small metal discs) attached to the scalp (Mayo Clinic, 2024). Brain cells continually produce electrical impulses, which appear in the EEG recording. In their study, participants were shown images of people expressing sadness. These individuals were labeled as belonging either to the participants' own group (ingroup) or to a different group (outgroup). The researchers wanted to see whether the brain’s empathy response differed based on whether the person experiencing sadness was similar to or different from the participant. The EEG recorded the brain’s electrical activity, focusing on alpha waves, which are inversely related to brain activity: when participants’ brains were more active, alpha power decreased, indicating increased neural engagement. The researchers used this data to determine how much participants' brains mirrored the emotions they were observing. Results showed a significant difference in brain activity depending on which group was shown in the picture. When participants observed ingroup members showing sadness, their brains mirrored the sadness, indicating an empathic response; their mirror neuron system was actively simulating the observed emotion. In contrast, when participants viewed outgroup members experiencing sadness, there was significantly less neural activation, suggesting a reduced empathic response and less engagement of the mirror neuron system. The researchers also measured participants’ levels of prejudice to see whether prejudice and empathy were connected. Individuals with higher levels of prejudice exhibited even greater differences in neural responses to ingroup versus outgroup members: those with more prejudice had a noticeably reduced empathic response to outgroup members, showing that their mirror neuron system was less likely to simulate the actions and emotions of people perceived as different. This suggests a potential neural basis for biases in social perception and empathy.
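
To illustrate how this kind of alpha (mu) suppression is typically quantified, the sketch below computes alpha-band power during an observation epoch relative to a baseline epoch and takes the log ratio. The sampling rate, epochs, and electrode choice are hypothetical placeholders rather than the actual analysis pipeline used by Gutsell and Inzlicht (2011).

```python
import numpy as np
from scipy.signal import welch

FS = 512                   # sampling rate in Hz (placeholder)
ALPHA_BAND = (8.0, 13.0)   # alpha/mu frequency range

def band_power(signal, fs, band):
    """Average spectral power of one EEG channel within a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def mu_suppression_index(baseline, observation, fs=FS):
    """Log ratio of alpha/mu power during observation relative to baseline.

    Negative values indicate suppression (greater neural engagement);
    values near zero indicate little mirroring-related activity.
    """
    return np.log(band_power(observation, fs, ALPHA_BAND) /
                  band_power(baseline, fs, ALPHA_BAND))

# Hypothetical usage: two 4-second epochs from a central electrode (e.g., C3).
rng = np.random.default_rng(0)
baseline_epoch = rng.normal(size=FS * 4)
observation_epoch = rng.normal(size=FS * 4)
print(mu_suppression_index(baseline_epoch, observation_epoch))
```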

For this study, functional near-infrared spectroscopy (fNIRS) was used to record brain activity. fNIRS is a portable, non-invasive brain imaging method that measures changes in blood oxygenation, making it somewhat similar to functional magnetic resonance imaging (fMRI) (Karim et al., 2012). Unlike fMRI, however, fNIRS systems are small, relatively inexpensive, and easy to transport, and fNIRS can record brain activity while a person is moving. This provides a unique way to observe brain function in natural settings by monitoring oxygen levels in the brain. In 1977, Frans F. Jobsis first demonstrated a way to measure blood oxygenation in the brain without surgery or inserting anything into the body (Jobsis, 1977). The discovery was groundbreaking because it allowed a non-invasive approach to monitoring brain activity. Since then, fNIRS has been used to study different brain regions, such as the frontal, visual, motor, auditory, and somatosensory cortices, which support thinking, seeing, moving, hearing, and feeling. fNIRS uses near-infrared light and red visible light to image changes in two forms of hemoglobin: oxyhemoglobin and deoxyhemoglobin. Hemoglobin is the protein that transports oxygen in the blood; oxyhemoglobin (HbO2) is the oxygenated form, and deoxyhemoglobin (HHb) is the reduced form carrying no oxygen (Kyriacou et al., 2019). The use of light is similar to how hospitals use pulse oximeters to measure blood oxygen through a clip on the finger. Active areas of the brain require more oxygen, so by measuring how much light is absorbed in different regions, fNIRS can indicate which parts of the brain are more active during tasks (Vazquez, 2010). During an fNIRS recording, flexible fiber cables deliver low levels of light (less than 0.4 watts per square centimeter) to specific positions on the head. These source positions emit light at two or more wavelengths; the different wavelengths are crucial because they allow the light absorbed by oxyhemoglobin and deoxyhemoglobin to be distinguished. When light enters the tissue, it diffuses through the layers of the scalp, reaching the outer five to eight millimeters of the brain’s cortex. The light then reflects back out of the head and is collected by detectors (optodes) placed on the cap, which measure how much light returns from each source position. The amount of light absorbed versus reflected back gives information about the blood oxygen levels in that particular brain area. Because brain tissue scatters light significantly, the path light travels inside the brain is complex; on average, light in the 650-900 nanometer range scatters after traveling only about one-tenth of a millimeter (Karim et al., 2012). By arranging the optical sensors in a specific pattern on the head, researchers can approximate the location of brain activity, and this spatial arrangement helps create a map of active brain regions during various tasks.
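
The conversion from detected light to hemoglobin changes typically relies on the modified Beer-Lambert law: the change in optical density at each wavelength is modeled as a weighted sum of oxy- and deoxyhemoglobin concentration changes, so measuring at two wavelengths lets the two unknowns be solved for. The sketch below illustrates that calculation; the extinction coefficients, pathlength factor, and intensity values are placeholders, not calibrated constants from this study's fNIRS system.

```python
import numpy as np

# Modified Beer-Lambert law: delta_OD(wavelength) =
#   (eps_HbO2 * dHbO2 + eps_HHb * dHHb) * distance * DPF
# With two wavelengths this is a 2x2 linear system in (dHbO2, dHHb).

# Placeholder extinction coefficients [1/(mM*cm)] at ~760 nm and ~850 nm.
EPS = np.array([[1.486, 3.843],    # 760 nm: [HbO2, HHb]
                [2.526, 1.798]])   # 850 nm: [HbO2, HHb]
SOURCE_DETECTOR_DISTANCE_CM = 3.0  # typical optode separation (assumed)
DPF = 6.0                          # differential pathlength factor (assumed)

def hemoglobin_changes(intensity_baseline, intensity_task):
    """Return (dHbO2, dHHb) in mM from light intensities at two wavelengths."""
    intensity_baseline = np.asarray(intensity_baseline, dtype=float)
    intensity_task = np.asarray(intensity_task, dtype=float)
    # Change in optical density at each wavelength
    delta_od = -np.log10(intensity_task / intensity_baseline)
    # Invert the extinction-coefficient system to recover concentration changes
    path = SOURCE_DETECTOR_DISTANCE_CM * DPF
    d_hbo2, d_hhb = np.linalg.solve(EPS * path, delta_od)
    return d_hbo2, d_hhb

# Hypothetical usage: light at both wavelengths dims slightly during a task,
# consistent with increased absorption in an active region.
print(hemoglobin_changes([1.00, 1.00], [0.97, 0.99]))
```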

To set up an fNIRS experiment, a montage is created. A montage is the arrangement of sources and optodes. fNIRS experiments are normally designed with a limited number of sources and optodes, so it is crucial that the optodes be positioned on parts of the scalp that will effectively record activity in the brain regions relevant to the experiment. The montage selection process involves identifying the brain regions of interest and either designing a new montage or reusing a previous one. To find the brain regions of interest, it is common to search the literature for similar study designs and then place optodes over the recommended EEG landmarks. The regions of interest (specified as MNI coordinates) for this study's montage are the primary motor cortex, the inferior parietal lobule (supramarginal gyrus and angular gyrus), the primary somatosensory cortex, the medial prefrontal cortex, and the supplementary motor area.
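
In practice, such a montage plan can be written down as a simple configuration that maps each region of interest to candidate 10-20 landmarks for optode placement. The mapping below is a rough, illustrative approximation based on common 10-20 correspondences; it is not this study's actual optode layout or its MNI coordinates.

```python
# Illustrative montage configuration: each region of interest is mapped to
# approximate 10-20 landmarks near which sources/detectors could be placed.
# These landmark choices are rough approximations for illustration only.
MONTAGE_ROIS = {
    "primary_motor_cortex":         ["C3", "C4"],
    "primary_somatosensory_cortex": ["CP3", "CP4"],
    "supramarginal_gyrus":          ["CP5", "CP6"],
    "angular_gyrus":                ["P3", "P4"],
    "medial_prefrontal_cortex":     ["Fpz", "AFz"],
    "supplementary_motor_area":     ["FCz", "Cz"],
}

def landmarks_for(regions):
    """Collect the unique landmarks needed to cover the requested regions."""
    return sorted({lm for roi in regions for lm in MONTAGE_ROIS[roi]})

print(landmarks_for(["primary_motor_cortex", "supplementary_motor_area"]))
```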

IAT:

The Implicit Association Test (IAT) is a psychological assessment tool developed to uncover implicit biases. Implicit biases are unconscious stereotypes that influence our judgments of and behaviors toward others (American Psychological Association, 2022). These biases can be based on factors such as race, gender, age, or other categories; this study focuses on race. The IAT uncovers implicit biases by recording how quickly and accurately people can categorize different words and images. Unlike surveys or interviews that rely on self-reporting, the IAT reveals hidden biases by examining how easily people associate positive or negative words with different social groups, such as racial groups. Participants in an IAT typically sort words into categories like “good” and “bad” and images into groups such as “White” or “Black”. The test shows that people are faster at making pairings that match their implicit biases. For example, someone who unconsciously associates positive words with White faces and negative words with Black faces will sort those pairings faster. The IAT uses specific stimuli, such as facial images and words, to trigger responses.

In this study, the images were provided by Project Implicit, a research platform (Project Implicit, 2011b). These images are carefully edited to ensure consistency, so that any differences in response times are due to biases rather than variations in the images themselves. The words used in the IAT are divided into positive (e.g., “Joy”, “Happy”, “Laughter”, “Love”, “Glorious”, “Pleasure”, “Peace”, and “Wonderful”) and negative (e.g., “Evil”, “Agony”, “Awful”, “Nasty”, “Terrible”, “Horrible”, “Failure”, and “Hurt”) categories. These words are selected for their strong emotional meanings, making it easier to measure biases through the speed of participants’ responses. This study uses a version of the IAT called the Single-Category IAT (SC-IAT), which focuses on one group, in this case individuals with dark skin, without comparing them to another group. This focus measures biases against people with dark skin more directly. Participants categorize images of dark-skinned faces and words as either positive or negative. The test is split into two parts: one in which participants associate faces with positive words and another in which they associate faces with negative words. This setup balances the test and prevents order effects from influencing the results. Each trial presents either a word or a picture in the center of the screen, with category labels displayed above to indicate which response key to press. Participants have a maximum of 1500 milliseconds to respond; if no response is made within that time, a prompt encouraging faster responses is displayed for 500 milliseconds. Although this time limit rarely affects responses, it instills a sense of urgency and reduces the likelihood of controlled thinking during the task. Correct answers are acknowledged with a green ‘O’ displayed briefly on the screen, while incorrect answers are marked with a red ‘X’.

To minimize response bias, the presentation frequencies of different trial types are adjusted. For example, in one stage, 58% of the correct responses correspond to one response key (‘z’), while the remaining 42% are assigned to the other response key (‘m’); this pattern is reversed in the other stage. A correct response means that the participant pressed the key corresponding to the correct categorization of the given stimulus (word or image) according to the instructions (Project Implicit, 2011a). For example, if the task requires pressing the ‘z’ key for positive words and dark-skinned faces, and the participant categorizes a positive word or a dark-skinned face by pressing ‘z’, that is a correct response. The ratio of the different stimulus types varies across blocks. In the example SC-IAT design this ratio scheme is drawn from, seven target pictures (images of a six-pack of Coca-Cola, two-liter bottles of Coca-Cola, and Diet Coca-Cola) were used alongside positive and negative words: in one stage the stimuli were presented in a 7:7:10 ratio, ensuring that 58% of correct responses were tied to one key and the remaining 42% to the other, while in the second stage positive words were on the ‘z’ key, the target pictures and negative words were on the ‘m’ key, and the ratio was 7:10:7. The object dimension, which refers to the target category being evaluated, is labeled Black in this study, and the evaluative dimension, which refers to the attributes associated with the category, is labeled positive and negative (Epifania, 2022). The test is divided into two blocks of 96 trials each, with 24 practice trials to help participants familiarize themselves with the task. During one block, participants use the same key to categorize words as “good” or “bad” and images of faces as “dark”. The response keys (‘z’ and ‘m’) are counterbalanced so that results are not influenced by the hand or key used; this ensures that the categorization task tests implicit biases rather than motor preferences. The SC-IAT records both accuracy and response times, providing a measure of how quickly and correctly participants can make these judgments. Several modifications were made to the SC-IAT to improve its accuracy. First, the response window was increased from 1500 milliseconds to 2000 milliseconds to give participants more time, reducing errors due to rushed responses. Second, the number of critical trials was reduced from 72 to 48 per block to shorten the overall testing time while maintaining reliability. These changes were informed by previous research and aim to create a more comfortable testing environment that accurately captures implicit bias.
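
One common way to score an SC-IAT is a D-like statistic: the difference in mean response latency between the two critical blocks, scaled by the variability of correct responses, with a penalized latency substituted for error trials. The sketch below assumes a simplified trial table and generic parameter choices; it is a scoring illustration, not necessarily the exact algorithm applied in this study.

```python
import statistics

def sc_iat_dscore(trials, error_penalty_ms=350):
    """Generic SC-IAT scoring sketch (hypothetical trial format).

    `trials` is a list of dicts with keys:
      block:   "compatible" (faces + positive) or "incompatible" (faces + negative)
      rt:      response time in milliseconds
      correct: bool
    Error trials are replaced with the block mean plus a fixed penalty, and the
    block difference is scaled by the standard deviation of all latencies.
    """
    def block_rts(name):
        rts = [t["rt"] for t in trials if t["block"] == name and t["correct"]]
        mean_rt = statistics.mean(rts)
        # Substitute a penalized latency for each error trial in this block
        rts += [mean_rt + error_penalty_ms
                for t in trials if t["block"] == name and not t["correct"]]
        return rts

    compatible = block_rts("compatible")
    incompatible = block_rts("incompatible")
    pooled_sd = statistics.stdev(compatible + incompatible)
    return (statistics.mean(incompatible) - statistics.mean(compatible)) / pooled_sd

# Hypothetical mini data set: a positive score means faster "face + positive" pairings.
demo = [
    {"block": "compatible", "rt": 620, "correct": True},
    {"block": "compatible", "rt": 700, "correct": True},
    {"block": "compatible", "rt": 850, "correct": False},
    {"block": "incompatible", "rt": 760, "correct": True},
    {"block": "incompatible", "rt": 890, "correct": True},
    {"block": "incompatible", "rt": 940, "correct": True},
]
print(round(sc_iat_dscore(demo), 2))
```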

Racial Dictator Game:

The Racial Dictator Game is an adaptation of the dictator game used in behavioral economics and social psychology to study social preferences, fairness, and biases. In the traditional dictator game, one participant (the dictator) is given a sum of money and decides how much to give to another participant (the recipient), who has no input into the decision (Whitt & Wilson, 2007). The amount given reflects the dictator's generosity or fairness. In the Racial Dictator Game, race is introduced as a variable by assigning racial identities to the recipients, often through names, photos, or avatars that signal racial background. This allows researchers to investigate whether implicit or explicit racial biases influence decision-making. Studies using the racial dictator game often find that participants allocate less money to recipients perceived as belonging to a racial outgroup, revealing underlying biases that might not be captured through self-reported measures. For example, research conducted in Bosnia, a post-conflict society, found that ethnic biases significantly influenced allocation decisions, with participants favoring in-group members over out-group members (Whitt & Wilson, 2007).
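
As an illustration of how allocations in such a game can be summarized, the sketch below compares mean offers made to ingroup versus outgroup recipients. The dollar amounts, endowment, and trial format are hypothetical and serve only to show the intended comparison.

```python
from statistics import mean

# Hypothetical allocations (out of a $10 endowment) by recipient group.
allocations = [
    {"recipient_group": "ingroup", "amount": 5.0},
    {"recipient_group": "ingroup", "amount": 4.0},
    {"recipient_group": "outgroup", "amount": 3.0},
    {"recipient_group": "outgroup", "amount": 3.5},
]

def mean_allocation(data, group):
    """Average amount offered to recipients of the given group."""
    return mean(a["amount"] for a in data if a["recipient_group"] == group)

# A positive gap would be consistent with an ingroup-favoring bias.
bias_gap = mean_allocation(allocations, "ingroup") - mean_allocation(allocations, "outgroup")
print(f"Ingroup - outgroup allocation gap: ${bias_gap:.2f}")
```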

AMP:

The Affect Misattribution Procedure (AMP) is a psychological method used to measure implicit bias by capturing how people's emotional reaction to one stimulus influences their evaluation of another, unrelated, neutral stimulus. Like the IAT, it is an indirect measure. Developed by Payne et al. (2005), the AMP presents participants with a prime (such as a picture or word related to a social group, political figure, or concept) followed by a neutral stimulus (often an abstract symbol). Participants are then asked to rate the neutral stimulus as pleasant or unpleasant. Because the prime influences their emotional response to the neutral stimulus, the AMP can reveal implicit biases that individuals may hold.
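
AMP data are commonly summarized as the proportion of "pleasant" judgments of the neutral stimulus following each prime category. The sketch below uses a hypothetical trial format and is a generic scoring illustration, not the exact procedure used in this study.

```python
# Generic AMP scoring sketch: proportion of "pleasant" judgments of the
# neutral symbol, split by prime category. The trial format is hypothetical.
trials = [
    {"prime": "white_face", "judged_pleasant": True},
    {"prime": "white_face", "judged_pleasant": True},
    {"prime": "black_face", "judged_pleasant": False},
    {"prime": "black_face", "judged_pleasant": True},
]

def pleasant_rate(data, prime):
    """Fraction of trials with a 'pleasant' judgment after the given prime."""
    subset = [t["judged_pleasant"] for t in data if t["prime"] == prime]
    return sum(subset) / len(subset)

# A positive value indicates neutral symbols were judged pleasant more often
# after White primes than after Black primes.
amp_effect = pleasant_rate(trials, "white_face") - pleasant_rate(trials, "black_face")
print(round(amp_effect, 2))
```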

Methods:

The study uses a two-by-two mixed design, with race (White vs. Black) as the between-subjects factor and motion (self vs. other) as the within-subjects factor. Participants are recruited for a two-session study that uses motion capture technology and virtual reality simulations. Participants begin by completing pre-task questionnaires to assess baseline levels of empathy, implicit racial bias, and self-consciousness. There are six questionnaires to complete. The Self-Consciousness Scale (SCSR) asks participants how they think others perceive them. The Motivation to Control Racism scale (MTCR) asks participants to rate whether they try to make others know they are not prejudiced. The Multidimensional Assessment of Interoceptive Awareness (MAIA) asks participants to rate whether they notice tension in their bodies. The in-group/out-group bonding questionnaire asks participants to agree or disagree with statements about their feelings toward Black people, White people, and their own race. The Body Perception Questionnaire Short Form (BPQ-SF) asks participants to rate questions about the body. Next, participants put on a Velcro motion-capture suit and hat fitted with reflective marker balls, and their movements are recorded by cameras positioned around the room. The cameras are connected to a computer, which renders the motions onto a 3D skeleton. Participants are asked to perform twenty-four motions for thirty seconds each: dancing, exercising, kicking a ball, having a conversation/speaking, lifting boxes, laughing, getting someone's attention, washing clothes, cleaning, cooking and serving food, acting, building something, baking a cake, hurrying up, teaching, gathering people, pouring drinks, showing excitement, greeting someone, playing tennis, playing guitar, stretching, introducing themselves, and doing jumping jacks. Participants must start and end each action in a T-pose. The actions are motion-captured using Nanthia Suthana's motion capture environment in Semel. Participants are asked to perform the actions naturally; they cannot see their movements while performing them and are not told that they will see these actions in session two.

Between sessions one and two, the participants' recorded actions are bound to avatars using the Unity game engine. All twenty-four motions are bound to an avatar. Because race is a between-subjects manipulation, each participant engages with avatars of only one race (White: ingroup, or Black: outgroup), paired with either self or other motion (the within-subjects manipulation). First, the T-pose is trimmed out of each recording. Then the files are exported, a virtual reality controller is set up, and the motions are screen-recorded. These motions are then used in session two.

After the motions are bound, participants return for session two, in which they interact in virtual reality with avatars that are either White or Black. Their neural activity is recorded using fNIRS, provided by Matt Lieberman's lab. First, participants complete the Implicit Association Test (single-category version), the Racial Dictator Game, and the Affect Misattribution Procedure (AMP). They then complete three questionnaires: the Positive and Negative Affect Schedule (PANAS-SF), which asks participants to score feelings they have experienced over the past week; the Inclusion of Other in the Self (IOS) scale, which asks participants to select the diagram that best describes their relationship with different gender and racial groups; and the BOQ. Next, the participant interacts in virtual reality with an avatar driven by either their own motions or someone else's motions; both avatars are of the same race. Each session with an avatar lasts around twelve minutes, and each action is observed for thirty seconds. After observing each action, the participant is instructed to respond naturally to the avatar. After the first avatar, the participant completes the IAT, Racial Dictator Game, AMP, PANAS, BOQ, and IOS again, along with an additional questionnaire on self-reported affinity and perceived control over the avatar's actions. The participant then interacts with the second avatar, which displays the motion condition not shown in the first interaction, in the same way and with fNIRS recording. After the second avatar, the participant completes the same six surveys once more.
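
As a concrete illustration of how the two-by-two assignment described above could be organized, the sketch below balances avatar race across participants (between subjects) and counterbalances the order of self- versus other-motion avatars within session two. The assignment scheme, participant IDs, and labels are hypothetical; this is not the lab's actual randomization procedure.

```python
RACES = ["White", "Black"]
MOTION_ORDERS = [("self", "other"), ("other", "self")]

def assign_condition(participant_id):
    """Illustrative balanced 2x2 assignment:
    avatar race varies between subjects; motion order is counterbalanced within."""
    avatar_race = RACES[(participant_id // 2) % 2]     # blocks of two per race
    motion_order = MOTION_ORDERS[participant_id % 2]   # alternate self-first / other-first
    return {"participant": participant_id,
            "avatar_race": avatar_race,
            "session2_motion_order": motion_order}

for pid in range(8):
    print(assign_condition(pid))
```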

Predictions:

As the research for this study is still ongoing, no results are available yet. It is anticipated that participants will show stronger neural synchrony when interacting with avatars that use their own movements, suggesting deeper cognitive and emotional engagement. This heightened connection is expected to extend to social measures, particularly when interacting with Black avatars driven by self-motion: in these cases, participants may demonstrate reduced implicit bias and increased empathy toward the racial outgroup, as reflected in both implicit tests and self-reported data. Behavioral responses within the virtual reality environment will provide further insight. Participants will be tracked on how close they stand to the avatar, how long they interact, and how positively they view the avatar's movements. It is expected that participants will naturally gravitate toward avatars reflecting their own motion, resulting in closer proximity, longer engagement, and higher affinity ratings.
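
For the behavioral measures described above, interpersonal distance and interaction time can be summarized directly from position logs. The sketch below assumes a hypothetical log format of timestamped participant and avatar positions; it illustrates the intended measures rather than the study's actual VR tracking code.

```python
import math

# Hypothetical VR log: (time_s, participant_xy, avatar_xy) samples.
log = [
    (0.0, (0.0, 0.0), (2.0, 0.0)),
    (1.0, (0.5, 0.0), (2.0, 0.0)),
    (2.0, (1.0, 0.2), (2.0, 0.0)),
]

def mean_interpersonal_distance(samples):
    """Average Euclidean distance between participant and avatar across samples."""
    dists = [math.dist(p, a) for _, p, a in samples]
    return sum(dists) / len(dists)

def interaction_duration(samples):
    """Elapsed time between the first and last logged sample, in seconds."""
    return samples[-1][0] - samples[0][0]

print(round(mean_interpersonal_distance(log), 2), "m over", interaction_duration(log), "s")
```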

Future directions:

The findings of this study will have important implications for reducing implicit bias and enhancing empathy, but further research is needed to explore and apply them in real-world settings. A limitation of this study is the participant group. Future research should include a more diverse sample across different cultural backgrounds and age groups to ensure that the results are broadly representative; by incorporating individuals from a variety of backgrounds, researchers can examine whether the effects observed here generalize across populations. Further research could also explore different social contexts, such as teamwork or scenarios involving conflict. While this study uses fNIRS, future work could incorporate other neuroimaging methods, such as fMRI and EEG, to gain a more detailed understanding of the neural mechanisms underlying changes in bias and empathy. Given the potential of virtual reality, these findings could be applied to educational and professional settings; future research should explore how such VR-based interventions could be implemented in real-world training programs to reduce implicit bias. By expanding participant diversity and testing different social contexts, future studies can build on the findings of this study to further understand the role of mirror neurons.

Acknowledgements:

I would like to express my gratitude to the Iacoboni Lab at UCLA. Special thanks to Akila Kamabadi for her guidance and support throughout this study. I appreciate all the participants and collaborators who made this study possible.

References

Acharya, S., & Shukla, S. (2012). Mirror neurons: Enigma of the metaphysical modular brain. Journal of Natural Science, Biology and Medicine, 3(2), 118. https://doi.org/10.4103/0976-9668.101878

American Psychological Association. (2022). Implicit bias. American Psychological Association. https://www.apa.org/topics/implicit-bias

Cleveland Clinic. (2023a, January 8). Parietal lobe: What it is, function, location & damage. Cleveland Clinic. https://my.clevelandclinic.org/health/body/24628-parietal-lobe

Cleveland Clinic. (2023b, May 27). Functional MRI – seeing brain activity as it happens. Cleveland Clinic. https://my.clevelandclinic.org/health/diagnostics/25034-functional-mri-fmri

Coull, J. T., Vidal, F., & Burle, B. (2016). When to act, or not to act: That's the SMA's question. Current Opinion in Behavioral Sciences, 8, 14–21. https://doi.org/10.1016/j.cobeha.2016.01.003

Epifania, O. M. (2022, February 15). implicitMeasures. R-Project.org. https://cran.r-project.org/web/packages/implicitMeasures/vignettes/implicitMeasures.html

Gutsell, J. N., & Inzlicht, M. (2011). Intergroup differences in the sharing of emotive states: Neural evidence of an empathy gap. Social Cognitive and Affective Neuroscience, 7(5), 596–603. https://doi.org/10.1093/scan/nsr035

Jobsis, F. (1977). Noninvasive, infrared monitoring of cerebral and myocardial oxygen sufficiency and circulatory parameters. Science, 198(4323), 1264–1267. https://doi.org/10.1126/science.929199

Karim, H., Schmidt, B., Dart, D., Beluk, N., & Huppert, T. (2012). Functional near-infrared spectroscopy (fNIRS) of brain function during active balancing using a video game system. Gait & Posture, 35(3), 367–372. https://doi.org/10.1016/j.gaitpost.2011.10.007

Kyriacou, P., Budidha, K., & Abay, T. Y. (2019, January 1). Optical techniques for blood and tissue oxygenation (R. Narayan, Ed.). ScienceDirect; Elsevier. https://www.sciencedirect.com/science/article/abs/pii/B9780128012383108864

Mayo Clinic. (2024, May 29). EEG (electroencephalogram) – Mayo Clinic. Mayoclinic.org. https://www.mayoclinic.org/tests-procedures/eeg/about/pac-20393875

Moreno, R. A., & Holodny, A. I. (2021). Functional Brain Anatomy. Neuroimaging Clinics of North America, 31(1), 33–51. https://doi.org/10.1016/j.nic.2020.09.008

Payne, B. K., Cheng, C. M., Govorun, O., & Stewart, B. D. (2005). Affect Misattribution Procedure. PsycTESTS Dataset. https://doi.org/10.1037/t04568-000

Project Implicit. (2011a). Frequently Asked Questions. Harvard.edu. https://implicit.harvard.edu/implicit/faqs.html

Project Implicit. (2011b). Project Implicit. Projectimplicit.net. https://www.projectimplicit.net/

Sira, C. S., & Mateer, C. A. (2014, January 1). Frontal lobes (M. J. Aminoff & R. B. Daroff, Eds.). ScienceDirect; Academic Press. https://www.sciencedirect.com/science/article/abs/pii/B9780123851574011489

Vazquez. (2010). Cerebral oxygen delivery and consumption during evoked neural activity. Frontiers in Neuroenergetics. https://doi.org/10.3389/fnene.2010.00011

Wallmark, Z., Deblieck, C., & Iacoboni, M. (2018). Neurophysiological effects of trait empathy in music listening. Frontiers in Behavioral Neuroscience, 12. https://doi.org/10.3389/fnbeh.2018.00066

Whitt, S., & Wilson, R. K. (2007). The Dictator Game, Fairness and Ethnicity in Postwar Bosnia. American Journal of Political Science, 51(3), 655–668. JSTOR. https://doi.org/10.2307/4620090

Xu, P., Chen, A., Li, Y., Xing, X., & Lu, H. (2019). Medial prefrontal cortex in neurological diseases. Physiological Genomics, 51(9), 432–442. https://doi.org/10.1152/physiolgenomics.00006.
