
Issue VIII March 2018

McGill University is located on land which has long served as a site of meeting and exchange amongst Indigenous peoples, including the Haudenosaunee and Anishinabeg nations. The PSI Journal would like to acknowledge these nations as the traditional stewards of the land on which we have the privilege of engaging in academic pursuits.


PSI Ψ McGill Psychology Undergraduate Research Journal Issue VIII March 2018



Psychology is a young and ever-developing science, and it is important to publicize the work done by its young scientists. To engage with the work of undergraduate students is to get a glimpse of the future direction of academia. From questioning current nutrition practices in prisons, to exploring individual experiences of pain, self-compassion, and empathy, this year’s edition of the journal seems to have unintentionally adopted a theme of social change. Perhaps this is an acknowledgement that past research has been inadequate in its scope, or perhaps it is a reflection of current social trends towards inclusivity and holding administrations accountable. Either way, I am heartened by the interpersonal focus of this journal. At a more local level, department publications such as this one are crucial in creating a sense of connection between students and their program. With the PSI Journal now in its eighth edition, I can only hope that this sense of connection holds, and that exposure to the variety of works submitted here provides a picture of how diverse this program is. I would like to thank my editorial team for dedicating time and effort to this project. This could not have happened without your involvement!

Aurélie van Oost
Journal Coordinator


Journal Coordinator: Aurélie van Oost

Managing Editors: Hana Gill, Vaishnavi Kapil

Editors: Nadia Blostein, Michaela Field, Amy Hauer, Dara Liu, Alice Lu, Stephanie Simpson, Grace Zhou

Cover Art: Hayley Mortin


TABLE OF CONTENTS:

Ethical and Cultural Implications of the Medical Framework on Deaf Culture Elif Aksulu


Self-Compassion and its Effect on Self-Regulatory Behaviors Sreya Belbase


Neuroscience of Social and Affective Touch Martin Dimitrov


Is There a Trade-off Between Truth and Well-being? An Investigation of Causation in the Depressive Realism Hypothesis Stephania Donayre


Differences in Interpersonal Behavior among Acquisitive and Protective Self-Monitors Xiaoyan (Rachel) Fang et al.


The Effects of Pain Catastrophizing and the Experience of Pain on Vasovagal Reactions in Blood Donation Aliza Hirsch


How Does a Carbohydrate-Based Breakfast Regimen Affect Aggression Levels in Prison Inmates? Samaa Kazerouni


Pregnancy-Related Chemosignals Produce Analgesia and Increase Corticosterone Levels in Male Mice Rachel Nejade


A Critical Reading of Hall’s Reclaiming Your Sexual Desire Stephanie Simpson


Post-Traumatic Stress Disorder in Canadian Military: The Invisible Wounds and Persistent Neglect of Canadian Military Veterans Julia Tesolin


Parsing Sex and Gender Differences in Empathy Aurélie van Oost




Ethical and Cultural Implications of the Medical Framework on Deaf Culture
Elif E. Aksulu
PSYC 530, Prof. J.C. MacDougall


Since the end of the 20th century, cochlear implants and genetic studies on sensorineural hearing loss have attracted great interest, promising to “cure the problem” of deafness (Hintermair & Albertini, 2005, p. 184). Consequently, United States government funding shifted from social innovations and services towards biological and medical areas. Moreover, the media and some medical professionals and researchers have been shaping the public perception of deafness, marking it as an issue. This problematic framing of deafness has major social, political, and ethical consequences for Deaf culture, sign language, and public debates on this topic (Hintermair & Albertini, 2005, p. 189). The main interest of this paper is to raise awareness of the consequences of the medical approach to deafness, which promotes the “normalization” of stigmatization of the deaf community. This paper will discuss the cultural definitions of deafness, the ethical and cultural implications of the medical approach, and its impact on deaf children. In addition, the current and future implications of cochlear implants and genetic pre-screening will be examined.

Current Research and Future Directions

Cochlear Implants
Cochlear implants are devices that provide a sensation of sound by stimulating the damaged inner ear. They convert sound waves into electrical currents through electrode channels implanted inside the cochlea, stimulating the auditory nerve fibers to convey information to the brain (Ahtari, 2017). Since 1990, with the approval of the U.S. Food and Drug Administration, cochlear implant surgery has been performed on children with profound sensorineural hearing loss (Lane & Bahan, 1998, p. 299). Even though cochlear implants cannot yet provide “typical hearing,” technological advancements improve the quality of sound information a person can perceive by increasing the number of electrode channels placed in the cochlea.


Genetic Studies
Current knowledge on hereditary deafness is extensive (Nance, 2013, p. 11). As Robertson (2003) explains, there are future possibilities of prescreening for deafness-associated mutations (p. 466). One potential PGD (pre-implantation genetic diagnosis) test could screen for the GJB2 mutation, currently known as the most common cause of hereditary deafness. People might then be able to select whether or not they want children with this particular mutation, in order to have a hearing child or a deaf child (Robertson, 2003, p. 470). Since genetic testing for inherited phenotypic traits is seen as controversial, the possible use of PGD is highly debated.

Defining Deafness
Before discussing issues concerning deafness, one has to be aware of the different definitions that shape this topic. From a cultural perspective, being deaf is seen as a cultural variation, whereas from the Western medical perspective, it is seen as a disability (Lane & Bahan, 1998, p. 298).

Cultural View
According to Lane, the Deaf-World can be considered an “ethnic group,” since it possesses the properties social scientists use to define one (Lane, 2011, p. 4). Ethnic groups are usually united by cultural, behavioral, linguistic, or religious practices. In addition, members of the Deaf-World have a collective name, a sense of community, behavior norms, customs, distinct values, social structure, language, art forms, history, and kinship (Lane, 2011, p. 5). American Sign Language (ASL), like any other language, has the function of forming and strengthening a culture, and as a visual language, it contains the memories and symbols of the Deaf-World (Lane, 2011, p. 9). “Having a language changes everything,” as Lane explains; it changes the discourse on deafness to pertain to a linguistic issue rather than a disability issue.


Medical Framework
According to the Western medical perspective, deafness is a deficit that needs to be “cured or fixed.” Consequently, cochlear implant programs promise the “normalization” of a child into the hearing community and the potential for deaf children to adopt spoken language (Hyde & Power, 2006, p. 107). Currently, cochlear implants offer improved sound stimulation, and there are examples of children learning speech through oral training. However, some argue that ASL enables better and richer communication (Sacks, 1986, p. 6).

Children’s Right to an Open Future
Davis (1997) proposes Feinberg’s concept of “the child’s right to an open future” as a constructive ethical framework for the debate between the disability and culture views of deafness (p. 562). That is, ethical challenges are considered “a conflict between parental autonomy and the child’s potential autonomy.” Another evaluation of parents’ decisions uses the Kantian principle of treating the child as an end rather than a means. Therefore, parental decisions have to be directed towards the child’s wellbeing instead of the parents’ own worldview.

Review of Cochlear Implants
Experts cite evidence in favor of a critical period in which speech training in profoundly deaf children with cochlear implants has successful results (Ahtari, 2017). Since this critical period precedes the age at which children develop autonomy and self-expression, implant surgery does have ethical implications. For some deaf individuals, cochlear implants may be considered useful tools. However, it should be acknowledged that even the latest cochlear implants do not provide “typical hearing.” A profoundly deaf child must make a conscious effort to learn each spoken word, through the extensive training mostly given by “oralist” schools.


Expectations put upon a deaf child can result in feelings of inadequacy. Moreover, the time spent in oral education takes away time that could be spent learning the other disciplines that their hearing peers otherwise would. By contrast, it is known that sign language helps deaf children develop a conception of abstract ideas (Sacks, 1986, p. 9).

Review of Genetic Screening
Open discussion of the deliberate transmission of hereditary deafness was sparked when a lesbian couple tried to increase their chances of having a deaf baby by using sperm from a deaf father (Levy, 2002, p. 284). Ethical concerns were raised regarding the pre-genetic screening of embryos for deaf gene mutations. It is important to be aware of the ways in which the deaf community could be denigrated by such a framework (Robertson, 2003). Robertson draws a parallel with the potential dangers of pre-implantation screening for sexual orientation, since pre-implantation screening leads to a decrease in the genetic variation of a population.

Conclusion
This article does not aim to criticize the advancements and discoveries concerning hearing impairments; rather, it is written to raise awareness of the consequences that certain medical frameworks may have on Deaf culture. Just because certain communication methods do not align with the standard ways of spoken language does not mean deaf people should be deemed lesser in society. In fact, sign language has been so well integrated into some communities that one can observe deaf and hearing people colloquially communicating with each other in ASL. It is essential for medical professionals to consider emerging data showing that there is no evidence of social or psychological improvement in children who use cochlear implants (Hyde & Power, 2006, p. 103). Educational strategies that focus primarily on achieving speech may result in


the oppression of a child and a regression of their learning experience (Sacks, 1986, p. 5). The oral approach aims to normalize deaf children into the hearing culture without acknowledging their needs for communication and educational growth. As the Kantian principle suggests, behavior that benefits the child involves treating them according to what would most improve their wellbeing. Problems may arise when certain medical professionals impose their own ethical views on what is “better” for a child. This puts pressure on parents, who are in a vulnerable state because of their responsibility to make decisions that will affect their newborn’s future. However, it is important to note that the extremes of both the cultural and the medical approach have the potential to violate this principle. As mentioned before, policies regarding deafness vary depending on whether a social model or a disability model of deafness is adopted. The approach of “curing deafness” may legitimize many actions taken against sign language and against the protection of the Deaf-World’s culture. Debate held at both ends of the ideological spectrum narrows the options that can be provided to deaf children, and takes away a deaf child’s freedom to experience a rich sense of identity.


References

Ahtari, P. (2017). Cochlear implant and factors that affect its outcomes [PowerPoint slides]. Retrieved from www.mycourses.com

Davis, D. (1997). Genetic dilemmas and the child's right to an open future. Rutgers Law Journal, 28, 549-592.

Hintermair, M., & Albertini, J. (2005). Ethics, deafness, and new medical technologies. Journal of Deaf Studies and Deaf Education, 10(2), 185-192.

Hyde, M., & Power, D. (2006). Some ethical dimensions of cochlear implantation for deaf children and their families. Journal of Deaf Studies and Deaf Education, 11(1), 102-111.

Lane, H. (2011). Ethnicity, ethics and the Deaf-World. Association of Visual Language Interpreters of Canada, 27(2), 4-13.

Lane, H., & Bahan, B. (1998). Ethics of cochlear implantation in young children: A review and reply from a Deaf-World perspective. Otolaryngology–Head and Neck Surgery, 119(4), 297-313.

Levy, N. (2002). Deafness, culture, and choice. Journal of Medical Ethics, 28, 284-285.

Robertson, J. (2003). Extending preimplantation genetic diagnosis: Ethical issues in new uses of preimplantation genetic diagnosis. Human Reproduction, 18(3), 465-471.

Sacks, O. (1986). Mysteries of the deaf. The New York Review of Books. Retrieved




Self-Compassion and its Effect on Self-Regulatory Behaviors
Sreya Belbase
PSYC 499

Content Warning: eating disorders


Abstract

Self-regulation is a skill that impacts the human capacity to function adaptively. As an aspect of self-regulation, emotional regulation is particularly important to developing healthy relationships with oneself. In turn, emotional dysregulation disrupts our ability to respond adaptively to stressful situations, experiences of distress, and/or perceived threats. The notion of the self is susceptible to both internal and external judgment. Individuals who perceive threats to their sense of self may be susceptible to disordered eating, and such pathology is rooted in emotional dysregulation. In contrast, self-compassion is a construct defined by self-kindness, common humanity, and mindfulness. This paper explores the utility of training in this construct for the prevention and treatment of eating pathology. As self-compassion is a relatively new concept, future directions include utilizing self-compassion focused therapy as an adjunct to mindfulness interventions in the domain of maladaptive regulation related to eating pathology, as mindfulness is key to cultivating self-compassion.


Self-Compassion and its Effect on Self-Regulatory Behaviors

Introduction

Self-Regulation
Self-regulation may be defined as the capacity to set a goal, engage in behaviors directed towards fulfilling that goal, monitor progress, and make adjustments upon evaluation of behaviors discrepant with that goal (Terry & Leary, 2011). Self-regulation includes many other qualities, including self-discipline, self-control, and particularly emotional regulation. Emotional regulation is the process whereby individuals are cognizant of their emotions, are capable of controlling the duration and intensity of their emotional responses relative to the intensity of the situation, and have the ability to transform their emotional responses in an adaptive way. Emotional regulation strategies are particularly useful in managing feelings of stress or distressing situations (Neff, 2003).

Emotional Regulation
Emotional self-regulation, like other self-regulatory systems, works like a muscle (Baumeister, Vohs & Tice, 2007). Suppressing emotional responses depletes self-regulatory resources, thereby reducing the resources available for later tasks (Terry & Leary, 2011). However, the self-regulatory system has been identified as one that can be trained. Just as a muscle tires after exertion but, with proper strength training, becomes capable over time of more work with less energy, emotional self-regulatory ability can likewise be trained to respond in adaptive ways (Baumeister et al., 2007). Thus, in individuals susceptible to maladaptive regulatory emotions, behaviors, and thoughts, there is room for intervention.


One domain that has been identified as particularly susceptible to maladaptive emotional regulatory behaviors is disordered eating, or eating pathology (Kelly, Carter & Borairi, 2013). Subclinical and clinical anorexia nervosa (AN), bulimia nervosa (BN), and binge eating disorder (BED) symptomatology may be explained by the model of affect regulation. This theory posits that “criticism and hostility from both others and the self are thought to stimulate anxiety, anger, and/or shame, and promotes self-protective but sometimes maladaptive behaviors” (Kelly et al., 2013). Maladaptive behaviors among individuals with eating pathology include food restriction, and binging and purging (Kelly et al., 2013).

Emotional Dysregulation and Disordered Eating
In individuals with anorexia nervosa, restrictive eating combined with excessive exercise is conceptualized as a maladaptive behavioral response to reduce feelings of shame. Exertion of such rigid eating habits is thought to be protective of the self; however, shame resurfaces with the “anorexia voice” symptom characteristic of the disorder, thereby perpetuating excessive self-control (Kelly et al., 2013). Although the case of anorexia nervosa appears to reflect a capacity for high self-control (which often bolsters feelings of pride in such individuals, thus reinforcing maladaptive behaviors), it may instead be viewed as the inability to disengage from an unhealthy goal. Motivational research has identified disengaging from goals as being equally as important as engaging in goal-directed behavior (Baumeister et al., 2007). The effects of AN on affected individuals are startling; they are susceptible to an array of health problems, including increased risk for early mortality (Godsey, 2013). Furthermore, although AN behaviors appear to reflect extreme self-control, they in fact reflect poor emotional regulation, characterizing these restrictions as an inability to disengage from a goal. Additionally, it has been found that self-imposed starvation among this population eventually leads to binge eating in 30-50% of patients


referred for treatment, and that this breakdown begins about 9 months after intense dieting regimens (Proulx, 2007), evidence of a faulty regulatory system. Bulimia nervosa (BN) and binge eating disorder (BED) are both characterized by binge eating. Binge eating refers to the rapid consumption of food within a discrete time period (e.g., 2 hours), in amounts much larger than normal consumption during a similar time frame, combined with a sense of loss of control over eating (O’Reilly, Cook, Spruijt-Metz & Black, 2014). In the case of BN, the individual fears weight gain; however, in response to emotional distress, they engage in binge eating to soothe and cope with such feelings. Although feelings such as anxiety, anger, and depression are momentarily relieved, these same emotions return after a binge, and individuals may engage in purging behaviors (such as the use of laxatives, or excessive exercise) to prevent weight gain (Proulx, 2007). Similarly, individuals with BED engage in binge episodes to relieve or “escape” from distress. Poor emotional regulation resulting in eating as a gateway to relieving stress has been well established as escape theory (O’Reilly et al., 2014). However, rather than a soothing or nurturing process, eating and one’s relationship with food becomes an experience of struggle “marked by intense approach and avoidance” (Kristeller & Wolever, 2011). Ultimately, a cycle is perpetuated in which an individual experiences anxiety or fear, eats in response to such feelings, and feels momentarily soothed before the feelings of harsh self-criticism and shame return, which then catalyzes another binge episode (Webb & Forman, 2013). The behaviors associated with BN and BED are associated with poor psychosocial outcomes, including reports of decreased self-esteem, increased dietary restraint, increased body dissatisfaction, fat self-perception, and anxiety and depression symptoms (Webb & Forman, 2013).


It must be noted that symptoms of eating pathologies such as AN, BN, and BED are often experienced at a subclinical level. Nevertheless, such symptomatology is a risk factor for developing the disorder. Subclinical symptoms include body dissatisfaction, negative body image, shame, self-criticism, and low self-esteem: all factors related to self-evaluative anxiety (Neff, Kirkpatrick & Rude, 2007). As has been identified, emotional dysregulation is one of the overarching systems that hosts the symptoms associated with eating pathologies. Hence, intervening in self-regulation is necessary. Since emotional regulation and self-regulation processes are capable of change to cultivate healthier relationships with goals, potential interventions that both prevent the onset of disordered eating and treat it must be examined. Recently, there has been much traction and a body of empirical evidence supporting the use of Eastern-philosophy based practices to cultivate a healthier relationship with the self. One of these practices is self-compassion (Germer & Neff, 2013). The remainder of this paper will focus on the construct of self-compassion and how it may be considered for managing maladaptive emotional regulation and eating pathology.

Self-Compassion

Definition of Construct
Despite its roots in Eastern philosophy, self-compassion is a relatively new concept in Western clinical psychology. The term has been operationalized in the last decade as consisting of mindfulness, self-kindness, and common humanity. These aspects are not viewed as distinct personality traits, but rather as something that can be cultivated with practice (Neff, 2003). Mindfulness is a quality of consciousness “that is characterized by continually attending to one’s moment by moment experiences, thoughts, and emotions, with a non-judgmental approach” (O’Reilly et al., 2014). Mindfulness is conceptualized in contrast to over-identification. Over-identification refers to the process through which individuals ruminate and become carried away and consumed by their feelings. Thus, over-identification prevents the individual from cognitively restructuring their thoughts and viewing their emotions more objectively or alternatively, without attacking their sense of self. In contrast, mindfulness permits the individual to view their situation in a less reactive way, with a more “equilibrated mental perspective” (Neff, 2003). The non-judgmental and accepting stance associated with mindfulness extends to the self-compassion aspects of self-kindness and common humanity. Individuals preoccupied with self-perceptions are often much more critical of themselves than of others. They are less understanding towards themselves when experiencing suffering or distress, and beat themselves up over it (Germer & Neff, 2013). Such behavior is overly judgmental and threatens the sense of self even more, exacerbating anxious, depressive, and angry feelings. In contrast, allowing oneself to account for common humanity and treat oneself with self-kindness permits a more respectful stance towards the self. Self-kindness involves treating oneself with the same warmth one would extend to others, and with understanding rather than punishment during times of adversity. Similarly, common humanity requires acknowledgement that the individual and their experience of distress are part of the human condition; suffering is universal, and no one is perfect (Neff, 2003). Therefore, the construct of self-compassion is expected to be inversely correlated with feelings of over-identification, self-judgment, and isolation, feelings linked to self-esteem (Neff, 2003).

Self-Compassion versus Self-Esteem
Self-esteem is a construct with a vast body of literature reflecting that it is associated with positive psychological outcomes.
It is defined broadly as a “positive global appraisal of one’s self-worth” (Kelly, Vimalakanthan & Carter, 2014). However, this construct also has a growing body of literature indicating that it is maladaptive, as it is based on self-evaluation, judgment, and


comparisons to determine self-worth (Neff, 2003). Research shows that self-esteem is correlated with narcissism, illusory beliefs, and defensiveness in the face of failure (Kelly et al., 2014), and that an overemphasis on evaluation and “liking the self” may lead to self-absorption, self-centeredness, and a lack of concern for others (Neff, 2003). In contrast, self-compassion is not evaluative or comparative, and is thus considered a more stable and unconditional skill that can be fostered with practice, whereas self-esteem is recognized as a trait, one that is highly resistant to change (Neff, 2003). In the context of eating pathology, people with eating disorders, body dissatisfaction, or maladaptive eating habits tend to be self-critical and shame-prone, and research shows that self-criticism is highly correlated with eating disorder symptomology (Gale, Gilbert, Read & Goss, 2014). Such shame is also highly related to negative global self-evaluations, ultimately stemming from self-critical comparisons of the self to others (Neff, Kirkpatrick & Rude, 2007). As a result, self-compassion is being explored as a construct to consider evoking instead of self-esteem. Self-compassion has been proposed to enable recognition of the human condition as one that is not flawed, thereby “enabling a situation in which the sense of self or self-esteem maintenance softens or disappears” (Neff, 2003). Ultimately, this permits the development of a sense of self that is capable of extending kindness not contingent on downward or upward social comparisons to puff oneself up or put oneself down (Neff et al., 2007). In a recent study, it was found that self-compassion, as opposed to self-esteem, was associated with significantly less anxiety after a task inducing participants to consider their greatest weaknesses (Neff et al., 2007).
In another study, it was found that greater self-compassion predicted subsequent reports of decreased disordered eating, and that when controlling for self-esteem, self-compassion remained a significant predictor of disordered eating (Breines, Toole, Tu & Chen, 2014). Therefore, research


suggests that self-compassion may buffer against self-evaluative situations, preventing the onset of the feelings of shame and anxiety that often precede episodes related to emotional dysregulation in patients suffering from eating disorders. Furthermore, research is now inching towards the field of self-compassion and its effects on regulatory behaviors, including eating behaviors.

Self-Compassion in Research
Self-regulatory behaviors, including goal setting in the pursuit of health, may undermine well-being if goal pursuit is maladaptive due to emotional dysregulation. This is often the case for individuals suffering from eating pathologies. However, by adopting a self-compassionate approach, individuals may pursue goals that promote comfort and satisfaction, rather than goals that are extreme or dangerous to their health (Terry & Leary, 2011). There has been evidence supporting compassion-focused therapy. One preliminary study examined the introduction of compassion-focused therapy into a standard program of cognitive behavioral therapy (CBT) for patients suffering from eating disorders. It was found that adding compassion-focused therapy to CBT yielded significant improvements in eating-disorder related outcomes (Gale, Gilbert, Goss & Read, 2014). In this particular study, outcomes for patients suffering from bulimia nervosa showed the most progress; and while patients suffering from anorexia nervosa did improve, their improvements were not statistically significant. Another study utilized compassion-focused therapy as an intervention for eating disorder patients, with two main aims. The first was to explore whether early changes in self-compassion would predict change in eating disorder symptoms, while the second examined whether these early changes in self-compassion would predict change in shame over time, while controlling for early change in eating disorder symptoms (Kelly et al., 2013).
Here it is important to note the relevance of shame in eating pathology. According to the affect regulation model of eating disorders,


perceived threats to the self are internalized. To alleviate such feelings of distress, restrictive habits or eating binges are induced. Although such behaviors are momentarily soothing, they perpetuate the feelings of shame associated with negative internalized thoughts on body dissatisfaction (Kelly et al., 2013). Therefore, if a self-compassion focused intervention has significant effects on the reduction of eating disorder symptoms, it should also reduce shame. This study found exactly that. Results showed that participants who experienced a relatively large early increase in self-compassion during the intervention had significantly decreased eating disorder symptoms over the remaining 12 weeks of treatment. Additionally, these participants showed significant decreases in the experience of shame over time, and their reduction in shame was faster than for participants who did not experience early increases in self-compassion; the latter did not show reductions in shame either. Therefore, the results of this preliminary study suggest that shame may be a significant contributor to the maintenance of eating disorder pathology (Kelly et al., 2013). Considering the results of this study and the previous one, compassion-focused therapy shows promising results for the improvement of eating disorder related outcomes (Gale et al., 2014; Kelly et al., 2013). Other studies have examined self-compassion’s effect on body dissatisfaction and poor body image. Body dissatisfaction and poor body image may be experienced at both subclinical and clinical levels of disordered eating, and may be operationalized as a “negative evaluation of one’s body that involves a perceived discrepancy between an individual’s assessment of their actual and ideal body” (Albertson, Neff, & Dill-Shackleford, 2014). Self-evaluative feelings of dissatisfaction, combined with shame, are risk factors for developing disordered eating pathology.
Body dissatisfaction may result in unhealthy goals or unattainable ideals, thereby perpetuating unhealthy weight control behaviors. When these goals fail, this may lead to self-destructive emotional


responses and maladaptive affect regulation (Breines, Toole, Tu & Chen, 2014). Therefore, self-compassion may be useful in preventing unhealthy symptoms from exacerbating and developing into a clinical disorder. In one study of participants with subclinical eating disorder symptoms, poor body image, and body dissatisfaction, a compassion-focused intervention resulted in significant gains in self-compassion, which in turn decreased body dissatisfaction and body shame while increasing body appreciation. These changes were significantly different from pretest results, and gains were maintained at a follow-up 3 months later, suggesting that a brief self-compassion intervention may have long-term effects (Albertson et al., 2014). Another study examined shame and its impact on body dissatisfaction. It was found that self-compassion mediated the relationship between shame and body dissatisfaction, with increased self-compassion predicting decreased shame and less body dissatisfaction (Ferreira, Pinto-Gouveia & Duarte, 2013). In a different study, it was hypothesized that self-compassion would encourage acceptance of imperfections and reduce body shame in participants with severe body dissatisfaction and poor body image. It was found that self-compassion was negatively associated with two traits of perfectionism, which is highly correlated with eating disorders: self-compassion was negatively correlated with self-criticism and with a perceived discrepancy between performance and standards (Breines et al., 2014). These results are particularly important as they imply that goal-discrepant behaviors and harsh self-criticism, facets of negative self-evaluation and poor self-regulation, may be altered or buffered by a more self-compassionate outlook. A recent meta-analytic study examined whether self-compassion was protective against eating disorders and poor body image.
This study found that the greatest predictor of disordered eating was fear of self-compassion, whereas patients who exhibited less
fear and more self-kindness had decreased reports of eating disorder symptoms (Braun, Park & Gorin, 2016). This finding is consistent with the notion that higher levels of self-compassion early in treatment may lead to better gains, suggesting that self-compassion may be a trainable skill that can prevent the onset of disordered eating. Other studies on self-compassion have focused specifically on binge eating and uncontrollable eating behaviors, as these reflect an attempt to escape from feelings of distress per the affect regulation model (Webb & Forman, 2013). Such behaviors are found at both the clinical and subclinical level, indicating that self-compassion focused interventions may be useful for both prevention and treatment of psychopathology. One study drew upon the finding that individuals who binge eat report a lack of constructive self-regulatory resources to manage emotional distress: participants suffering from binge eating disorder reported “less emotional clarity and access to adaptive emotional regulation strategies” compared to non-binge eaters (Lavender, Gratz & Tull, 2011). Another study examined the relationship between self-compassion and binge-eating severity among an at-risk student population and found that self-compassion positively covaried with unconditional self-acceptance while negatively correlating with deficits in the ability to respond adaptively to negative emotional states (Webb & Forman, 2013). This finding supports the hypothesis that self-compassion is a more stable and unconditional trait than self-esteem, just as unconditional self-acceptance contrasts with severe self-evaluation and promotes the self-kindness characteristic of self-compassion. Studies have also examined disordered eating with regard to the “disinhibition effect” (Adams & Leary, 2007).
The disinhibition effect refers to the paradoxical finding that individuals will consume more calorie-dense foods if they have already consumed a calorie-dense food. This is paradoxical, as it would be expected that individuals with disordered eating would be more likely
to restrict eating after consuming a food that would otherwise be restricted. Instead, it has been found that these individuals will consume more after such a preload. This may be explained by the abstinence violation effect: the mentality that, having already violated one's regimen by eating something restricted, one might as well take the opportunity to eat more restricted foods (Adams & Leary, 2007). This is consistent with the finding that “food cravings have been shown to lead to obsessive thoughts about food and impulsive consumption of craved foods” (O'Reilly, Cook, Spruijt-Metz & Black, 2014). Thus, when participants are presented with an opportunity to consume a craved restricted food, emotional regulation and self-control resources weaken, and impulsive behaviors further drive consumption of food to escape negative self-thoughts and feelings (Adams & Leary, 2007). The impact of self-compassion on the disinhibition effect has been examined in a study that hypothesized that self-compassion would serve as a buffer against emotional dysregulation following negative events. The induction of a self-compassion exercise before a preload significantly reduced the amount subsequently consumed by a group of restrictive eaters, in comparison to restrictive eaters who received the preload without the exercise (Adams & Leary, 2007). In this study, self-compassion was effective in attenuating the effects of the preload on negative self-thoughts, thereby preventing the disinhibition effect.

General Discussion and Conclusion

There has been a growing body of literature supporting the use of self-compassion for both the prevention and treatment of disordered eating behaviors and poor body image, suggesting it is an effective means of harnessing a positive view of the self. Self-compassion consists of cultivating self-kindness, mindfulness, and an acceptance of common humanity (Neff, 2003).
These three traits enable an individual to feel unconditional positive self-
regard and respond non-judgmentally to adversity, stress, and negative events. Such unconditional positive regard has been shown to be effective among individuals with disordered eating, as the induction of self-compassion reduces self-criticism, shame, and body dissatisfaction. It has also been shown to reduce binge eating severity. Nurturing self-compassion in lieu of self-esteem has been established as particularly important for this population because it enables one to disengage from self-evaluations and approach eating disorder evoked distress in a more mindful, understanding way (Gale et al., 2014). Moreover, self-compassion has been found to be particularly effective in affect regulation. Social mentality theory posits that the self-compassionate system deactivates the threat system (which is involved in poor affect regulation, resulting in avoidance-based mechanisms to relieve distress such as binge eating) and instead promotes the activation of a self-soothing system. The self-soothing system in turn permits “greater capacities for intimacy, effective affect regulation, and successful coping with the environment” (Neff, Kirkpatrick & Rude, 2007). Such qualities have been found to buffer binge eating and reduce experiences of shame (Gale et al., 2014; Kelly et al., 2013). Therefore, by eliciting the emotional approach coping strategies key to the practice of self-compassion in response to distress, individuals are able to be more mindful, non-judgmental, and understanding of their emotions. Nonetheless, it must be acknowledged that there are limitations to the available research on self-compassion.

Limitations and Future Directions

Self-compassion research has been primarily qualitative and is lacking randomized clinical trials. Therefore, although there is evidence supporting self-compassionate induction to alleviate the emotional dysregulation associated with eating pathology, that evidence is minimal.
Many of the studies examined rely on participants' self-reports of symptoms. However, among a population that
experiences shame or is highly impacted by negative self-evaluations and perceived threats to their sense of self, it is important to consider that reported reductions in pathological symptoms may be biased and may reflect the experimenter expectancy effect. Additionally, the concept of self-compassion has only been operationalized in Western clinical psychology within the past decade; thus, it is relatively new, and more research is needed to determine causality. Nonetheless, that is not to discount self-compassion focused research. The self-compassion scale developed by Kristin D. Neff has demonstrated strong internal and test-retest reliability, as well as convergent and discriminant validity (Neff, 2003). Evaluation of the scale shows that self-compassion is strongly related to positive psychological health outcomes, including decreased self-criticism, depression, and anxiety, and increased life satisfaction, connectedness, and emotional intelligence (Neff, 2003). These indicators have been associated with disordered eating pathology as a function of emotional dysregulation; thus, self-compassion must still be further explored. One way to bridge this gap in the research is by adding self-compassion focused therapies to mindfulness-based interventions. Mindfulness has been identified as one of the core characteristics of compassion, and a “certain degree of mindfulness is needed in order to allow for enough mental distance from one's negative experiences so that feelings of self-kindness and common humanity can arise” (Neff, 2003). As mindfulness and self-compassion are both concepts rooted in Eastern philosophy, it is understandable that they are interrelated and that mindfulness may be a precondition for self-compassion. Mindfulness was recognized as effective by Western clinical psychological research earlier than self-compassion and has gained particular traction in the field of eating behaviors.


Meta-analyses of mindfulness-based interventions for the treatment of binge eating disorder show moderate support for the reduction of emotional eating urges and their occurrence (O'Reilly et al., 2014). This suggests that such interventions provide individuals with the skills necessary to cope adaptively with distress and to mitigate avoidance of distress through impulsive eating. Therefore, by inducing mindfulness-based interventions, participants can pave the way to a more self-compassionate life. In another systematic review, mindfulness meditation significantly decreased binge eating and emotional eating among individuals susceptible to these behaviors, consistent with the previous finding (Katterman, Kleinman, Hood, Nackers & Corsica, 2014). Another meta-analytic systematic review examined prospective design studies and likewise found that mindfulness-based therapies may be effective in the treatment of eating disorders, with positive outcomes for all disorders examined, including AN, BN, and BED (Wanden-Berghe, Sanz-Valero & Wanden-Berghe, 2011). Therefore, there exists an empirically supported body of evidence for mindfulness-based therapies that cultivate a “non-judgmental, decentered approach to internal experience” (Wanden-Berghe et al., 2011). There have also been well-established studies reflecting the validity of mindfulness-based interventions.
These include a study that examined the relationship between general mindfulness and reduced calorie consumption (Jordan, Wang, Donatoni & Meier, 2010); a study that examined the association between eating pathology and facets of trait mindfulness, as assessed by the Five Facet Mindfulness Questionnaire (FFMQ) (Lavender, Gratz & Tull, 2011); a qualitative mindfulness-based eating disorder (M-BED) intervention and its impact on women's self-awareness, affect regulation, and impulse control (Prolux, 2007); a study examining dispositional mindfulness and its impact on engaging in disordered eating behaviors (Wanden-Berghe et al., 2011); and an investigation of mindful eating and serving size moderation among a non-clinical sample (Beshara, Hutchinson &
Wilson, 2013). Comprehensively, these studies show that individuals high in trait mindfulness experience their environments with less hostility and negativity, enabling both positive and negative thoughts and feelings to occur without judgment. This indicates that such individuals are overall less likely to be impacted by affect dysregulation, as they are in a more equilibrated mental state (Jordan, Wang, Donatoni & Meier, 2010). Individuals without mindful qualities who were exposed to a brief mindfulness intervention demonstrated a reduction in calorie consumption, suggesting that mindfulness is associated with greater capacity for self-regulation (Jordan et al., 2010). Other studies supporting the idea that mindfulness promotes self-regulation have shown that the mindfulness traits of non-reactivity, acting with awareness, and non-judgment were significantly negatively correlated with eating pathology (Lavender et al., 2011). In the M-BED study, participants who previously reported extremes in thoughts, feelings, and behaviors felt more self-aware, self-accepting, and hopeful post-intervention (Prolux, 2007). Thus, mindfulness may support feelings of acceptance and foster common humanity, which is key to the development of self-compassion. Lastly, non-disordered individuals who exhibited high trait mindfulness were less likely to engage in eating disorder related behaviors, supporting the idea that mindfulness may be preventative or act as a buffer against the poor eating behaviors associated with maladaptive regulation (Wanden-Berghe et al., 2011). This is further supported by evidence that enhancing mindful eating skills improved serving size moderation among a lay population, reflecting the positive impact of mindfulness-related psychoeducation in preventing the manifestation of disordered eating (Beshara et al., 2013).
Although these studies have shown extensive support for mindfulness-based interventions in eating regulation, one specific intervention must be highlighted as crucial to the development of self-compassion. Mindfulness-
Based Eating Awareness Training (MB-EAT) is founded on the notion that mindfulness meditation enables individuals to train themselves toward a non-judgmental, more self-aware state that catalyzes the capacity to regulate emotions effectively, allowing them to become more aware of their eating patterns and ultimately cultivate self-acceptance (Kristeller & Wolever, 2011). Two randomized clinical trials have shown that MB-EAT is a highly effective intervention: participants suffering from binge eating disorder showed treatment gains directly as a function of the amount of time they spent on mindful meditation, and this predicted improvement on indicators of self-regulation. MB-EAT is a promising intervention with regard to emotional dysregulation in eating pathology, as many of its components foster traits associated with self-compassion, including self-kindness and common humanity. Therefore, mindfulness-based interventions may benefit from adding components of self-compassion to their methods, as these two closely related constructs can together improve health outcomes and cultivate a positive, accepting view of the self.

Conclusion

Emotional regulation is a core component of self-regulation. In psychopathology, emotional regulation is particularly maladaptive in the context of eating pathology, which is characterized by a highly negative self-evaluation that perpetuates feelings of shame, dissatisfaction, and threat. In response, individuals may try to escape such distressing feelings through eating, in an attempt to activate the soothing system. Ultimately, this exacerbates feelings of shame and induces other maladaptive emotional regulation behaviors. A proposed means of both preventing and treating such dysregulation is the cultivation of self-compassion.
Evidence supports that self-compassion focused interventions reduce distressing symptoms associated with eating pathology, while also potentially buffering against maladaptive emotional dysregulation. One facet of self-compassion
is mindfulness, a construct with a much larger evidence-based literature, and it is viewed as a precursor to self-compassionate qualities such as self-kindness and common humanity. Thus, future research may consider supplementing mindfulness-based interventions with these latter two characteristics of self-compassion, as randomized clinical trials in mindfulness-based research may already support these constructs even though they are not operationalized as part of mindfulness.


Works Cited

Adams, Claire A., and Mark R. Leary. "Promoting Self-Compassionate Attitudes Toward Eating Among Restrictive and Guilty Eaters." Journal of Social and Clinical Psychology 26.10 (2007): 1120-144. Web.
Albertson, Ellen R., Kristin D. Neff, and Karen E. Dill-Shackleford. "Self-Compassion and Body Dissatisfaction in Women: A Randomized Control Trial of a Brief Meditation Intervention." Mindfulness (2014): n. pag. Web.
Baumeister, Roy F., Kathleen D. Vohs, and Diane M. Tice. "The Strength Model of Self-Control." Current Directions in Psychological Science 16.6 (2007): 351-355. Web.
Beshara, Monica, Amanda D. Hutchinson, and Carlene Wilson. "Does Mindfulness Matter? Everyday Mindfulness, Mindful Eating and Self-Reported Serving Size of Energy Dense Foods Among a Sample of South Australian Adults." Appetite 67 (2013): 25-29. Web.
Braun, Tosca D., Crystal L. Park, and Amy Gorin. "Self-Compassion, Body Image and Disordered Eating: A Review of the Literature." Body Image (2016): 117-31. Web.
Breines, Juliana, Aubrey Toole, Clarissa Tu, and Serena Chen. "Self-Compassion, Body Image, and Self-Reported Disordered Eating." Self and Identity 13 (2014): 432-48. Web.
Ferreira, Clàudia, José Pinto-Gouveia, and Cristiana Duarte. "Self-Compassion in the Face of Shame and Body Image Dissatisfaction: Implications for Eating Disorders." Eating Behaviors (2013): n. pag. Web.
Gale, Corinne, Paul Gilbert, Ken Goss, and Natalie Read. "An Evaluation of the Impact of Introducing Compassion Focused Therapy to a Standard Treatment Programme for People with Eating Disorders." Clinical Psychology and Psychotherapy 21 (2014): 1-12. Web.


Germer, Christopher K., and Kristin D. Neff. "Self-Compassion in Clinical Practice." Journal of Clinical Psychology 69.8 (2013): 856-67. Web.
Godsey, Judi. "The Role of Mindfulness Based Interventions in the Treatment of Obesity and Eating Disorders: An Integrative Review." Complementary Therapies in Medicine (2013): 431-38. Web.
Jordan, Christian H., Wan Wang, Linda Donatoni, and Brian P. Meier. "Mindful Eating: Trait and State Mindfulness Predict Healthier Eating Behavior." The Cupola: Scholarship at Gettysburg College, Oct. 2010. Web.
Katterman, Shawn N. "Mindfulness Meditation as an Intervention for Binge Eating, Emotional Eating and Weight Loss: A Systematic Review." Eating Behaviors 15 (2014): 197-204. Web.
Kelly, Allison C., Jacqueline C. Carter, and Sahar Borairi. "Are Improvements in Shame and Self-Compassion Early in Eating Disorders Treatment Associated with Better Patient Outcomes?" International Journal of Eating Disorders (2013): n. pag. Web.
Kelly, Allison C., Kiruthiha Vimalakanthan, and Jacqueline C. Carter. "Understanding the Roles of Self-Esteem, Self-Compassion and Fear of Self-Compassion in Eating Disorder Pathology: An Examination of Female Students and Eating Disorder Patients." Eating Behaviors 15 (2014): 388-91. Web.
Kristeller, Jean L., and Ruth Q. Wolever. "Mindfulness-Based Eating Awareness Training for Treating Binge Eating Disorder: The Conceptual Foundation." Eating Disorders: The Journal of Treatment and Prevention 19 (2011): 49-61. Web.


Kristeller, Jean, Ruth Q. Wolever, and Virgil Sheets. "Mindfulness-Based Eating Awareness Training (MB-EAT) for Binge Eating: A Randomized Clinical Trial." Mindfulness (2013): n. pag. Web.
Lavender, Jason M., Kim L. Gratz, and Matthew T. Tull. "Exploring the Relationship Between Facets of Mindfulness and Eating Pathology in Women." Cognitive Behaviour Therapy 40 (2011): 174-82. Web.
Magnus, Cathy M.R., and Kent C. Kowalski. "The Role of Self-Compassion in Women's Self-Determined Motives to Exercise and Exercise Related Outcomes." Self and Identity: Psychology Press 9 (2010): 363-82. Web.
Neff, Kristin D. "The Development and Validation of a Scale to Measure Self-Compassion." Self and Identity: Psychology Press 2 (2003): 223-50. Web.
Neff, Kristin D., Kristin L. Kirkpatrick, and Stephanie S. Rude. "Self-Compassion and Adaptive Psychological Functioning." Journal of Research in Personality 41 (2007): 139-54. Web.
Neff, Kristin. "Self-Compassion: An Alternative Conceptualization of a Healthy Attitude Toward Oneself." Self and Identity: Psychology Press 2 (2003): 85-101. Web.
O'Reilly, G. A., L. Cook, D. Spruijt-Metz, and D. S. Black. "Mindfulness-Based Interventions for Obesity Related Eating Behaviors: A Literature Review." Obesity Reviews 15 (2014): 453-61. Web.
Prolux, Kathryn. "Experiences of Women With Bulimia Nervosa in a Mindfulness-Based Eating Disorder Treatment Group." Eating Disorders 16 (2007): 52-72. Web.
Terry, Meredith L., and Mark R. Leary. "Self-Compassion, Self-Regulation and Health." Self and Identity: Psychology Press (2011): 352-62. Web.


Wanden-Berghe, Rocío Guardiola, Javier Sanz-Valero, and Carmina Wanden-Berghe. "The Application of Mindfulness to Eating Disorders Treatment: A Systematic Review." Eating Disorders 19 (2011): 34-48. Web.
Webb, Jennifer B., and Mallory J. Forman. "Evaluating the Indirect Effect of Self-Compassion on Binge-Eating Severity Through Cognitive-Affective Self-Regulatory Pathways." Eating Behaviors 14 (2013): 224-28. Web.


Neuroscience of Social and Affective Touch
Martin Dimitrov
PSYC 492
Professor Jonathan Britt


Researchers have long understood that tactile sensation serves a discriminative function: it provides the brain with fine-grained information about an object's size, shape, texture, and weight. However, there seems to be at least one distinct function of touch that underlies aspects of social communication. In fact, it has been found that some of the affective and social properties of touch are encoded by a distinct neural substrate, the C-tactile (CT) afferent nerves of the peripheral nervous system. Affective touch is hedonic and rewarding, and its role may consist of strengthening social bonds and buffering stress.

Properties of the C-tactile Afferent Nerve

Experimental psychology has shown that touch plays an important role in affiliative behavior in both humans and other animals. While this seems obvious, it is less intuitive that the mammalian nervous system developed a distinct neural mechanism for it that starts in the skin. CT afferent nerves (low-threshold mechanoreceptive C fibres) were discovered in 1939, when they were observed to discharge as their tactile receptive fields were gently stroked (Zotterman, 1939). A unique property of these nerves is that they continued to spike after the stimulus stopped, firing an after-discharge. CT afferents were discovered in humans in 1990 (Nordin, 1990). In humans, they are thin and unmyelinated, with a slow conduction velocity of 1 m/s (versus 35-75 m/s for discriminative Aβ afferents), making them unsuitable for conveying fast and reliable information. Moreover, they are so sensitive that they fire to a stimulus weighing 0.22 grams, roughly the weight of a butterfly (Vallbo, Olausson, Wessberg & Norrsell, 1993). They are also found only on hairy skin. Part of our knowledge about them comes from two rare cases of people who lost Aβ afferent function in adulthood from an illness that selectively attacked myelinated nerves (Cole et al., 2006; Olausson et al., 2002, 2008).
These patients could detect soft brush strokes on their hairy skin but not on their glabrous skin. However, consistent with the low acuity of CTs, the patients reported the
sensation as vague, and they were unable to discriminate its physical properties. They also rated the sensation as pleasant, supporting the idea that these nerves encode the rewarding properties of social touch. Further support for this affective input hypothesis comes from microneurography and calcium-imaging studies with healthy volunteers (Löken et al., 2009; Vrontou et al., 2013). Researchers reported that the CT firing rate changed with different stroke speeds, and that firing was highly correlated with participants' subjective ratings of how pleasant the strokes felt. Six different speeds were tested, from 0.1 cm/s to 30 cm/s. The Aβ fibre firing rate showed a linear relationship to the speed of stroking. In contrast, the CT discharge frequency demonstrated an inverted-U-shaped response to increasing speeds. Therefore, researchers concluded that the nerve is a “middle-pass” information filter, tuned to a particular type of touch, such as that likely to occur during conspecific grooming or caressing (Löken, Wessberg, Morrison, McGlone, & Olausson, 2009). This idea is borne out by a study suggesting that CTs are also tuned to temperature: at a stroke speed of 3 cm/s, their response was highest to a 32°C probe, which is around the temperature of human skin (Ackerley et al., 2014). The authors propose that CTs evolved to tag the tactile information likely to arise in affiliative interactions between kin.

Pathway to the Brain and Projections

Unlike Aβ fibres, CT afferents take a different route to the brain when leaving the peripheral nervous system. Specifically, they ascend via the spinothalamic tract rather than the dorsal column (Andrew, 2010; Craig, 1995). These two pathways synapse in different thalamic nuclei and project to distinct cortical regions.
The posterior insula is a main cortical target of CTs, consistent with its role in processing visceral information (Björnsdotter et al., 2009; Krämer et al., 2007; Lovero, Simmons, Aron, & Paulus, 2009; Morrison, Björnsdotter, & Olausson, 2011; Morrison, Löken, et al., 2011; Olausson et al., 2002). This is consistent with a proposed pathway in rodents and primates. Somatosensory areas
on the parietal operculum are also implicated. Activity in the posterior insula is correlated with participants' pleasantness ratings of optimal types of stroking (Kress, Minati, Ferraro, & Critchley, 2011). Furthermore, it has been found that individuals with autism spectrum disorders find stroking aversive and show decreased insular activity, suggesting that the insula is involved in the hedonic aspect of touch. Other studies implicate the superior temporal gyrus, which also shows reduced activity during stroking in individuals with autism spectrum disorders (e.g., Cascio et al., 2012). However, because social communication is so complex and context dependent, it is likely that a brain-wide network serves in the processing of CT-mediated affective touch. Touch information from CTs is likely integrated with social cues from other modalities in the superior temporal sulcus, as well as with decision and reward networks involving the orbitofrontal cortex (Bennett et al., 2014; Gordon et al., 2011). For instance, in one study, people found stroking from the experimenter to be more pleasant than self-stroking (Ackerley et al., 2014). We have seen that individuals with severely diminished myelinated nerves (lacking Aβ function) lack discriminative touch. This raises the question: would an individual with a reduction of CT afferents differ in their capacity for appreciating affective touch? A small group of people with a rare mutation causing a severe reduction in unmyelinated afferent nerves shed light on this question. This group rated optimal stroking as less pleasant than controls did, and differed in their rating patterns across stroke speeds (Einarsdottir et al., 2004). However, because their ratings remained positive, the researchers proposed that a less effective compensatory mechanism had taken over the processing of affective information.
Moreover, their insula activity was not modulated by stroking speed, and structural brain imaging showed a significant reduction in white matter volume in a tract from the thalamus to the posterior insula (Morrison, Löken, et al., 2011). Although the posterior insula shows
specificity for affective touch, it is likely that both types of touch converge in other somatosensory areas. Overall, the authors argue that these near-CT-knockout individuals' ability to appreciate pleasant touch is likely due to a compensatory mechanism at work. However, given that these individuals rate stroking as less pleasant than controls, it may also be plausible that an existing mechanism, rather than a compensatory one, encodes the residual rewarding properties. Since CTs and Aβ fibres project to many of the same cortical areas, including SII (Morrison, in press), it may be that Aβ fibres also encode pleasant touch, albeit weakly, just as CTs poorly encode discriminative touch.

Functional Role

The authors of the review propose that social touch is both calming and serves as a basis for social bonding. First, they argue that caressing and stroking engender a safe environment in which the animals need not be alert, signalling them to relax. Social touch might be one way of suppressing sympathetic nervous system arousal (Morrison, 2013). This proposed role is in line with what we see in primates: chimpanzees, for instance, extend companionship (e.g., embrace, touch, groom) to those in distress after a fight (Fraser, Stahl & Aureli, 2008). It is important to note that social touch alone cannot be the only signal of safety in an animal's environment, since there would need to be another cue for the first animal to know it is appropriate to begin grooming others. A number of brain regions are involved in parasympathetic regulation, such as the central amygdala, periaqueductal gray, anterior insula, and anterior cingulate cortex (Porges, 2007). Social touch may engage the parasympathetic nervous system through this cortical modulation of the brainstem. The authors write that a recent study corroborates this hypothesis by showing that CT activity is reinforcing. In this study, mice were injected with a compound that
selectively depolarized CT afferents without affecting other neuron types (Vrontou et al., 2013). With this approach, the researchers were able to reverse a place preference that animals had developed for a chamber. In other words, if CTs were activated while an animal spent time in a non-preferred chamber, the animal was more likely to develop a preference for that chamber as a result of the manipulation. The stress-reduction role is not demonstrated by this study in the way the authors of the review suggest when they claim the stimulation is “reinforcing and/or anxiolytic.” This is because place preference does not ipso facto demonstrate that affective touch is pleasurable because it is calming. Therefore, it does not show that the rewarding or hedonic effects of affective touch are exerted through modulation of the sympathetic nervous system or the stress response system. It shows only that the stimulation of CTs is reinforcing, and that animals will return to contexts in which they experienced it. An experiment would at least have to correlate parasympathetic activation with CT activation to suggest that such a mechanism plays a role in affective touch. In support of their idea, the authors point out that individuals with anxious personalities tend to have relationships involving more grooming and touching. Another study they cite showed that holding hands reduces the anxiety posed by an impending threat. Finally, research with zebrafish has shown that stimulation of tactile sensors by a water current abolished hiding behavior after exposure to a fear-inducing pheromone; these effects were similar to those of swimming alongside conspecifics in the same tank. This is a result closer to what the CT afferent researchers would have to find in order to demonstrate an anxiolytic function of the nerve. It might be interesting to selectively knock out CT fibres with a toxin and see to what degree animals still prefer a chamber in which they spent time with conspecifics.
This would not speak to the stress modulation role of CTs, but it could indicate the degree to which CTs encode the reward value of social interaction.


Conclusion

In closing, the intuitive appeal of this hypothesis is strong. It seems obvious that affective touch and allogrooming, especially in mate or mother-offspring relationships, have a calming effect that serves to alleviate mild stress. Moreover, touch seems to have a privileged role in affiliative behavior, especially in pair bonding. In fact, in romantic partnerships, relationship satisfaction, familial affection, and trust all correlate positively with self-reports of how often partners groom each other (Nelson & Geher, 2007). A central part of the positive human experience is produced by affective touch. Thus, it is key that future research seeks to understand the brain networks that underlie social touch and, in particular, how contextual cues modulate its reward value.


References

Ackerley, R., Backlund Wasling, H., Liljencrantz, J., Olausson, H., Johnson, R. D., & Wessberg, J. (2014). Human C-tactile afferents are tuned to the temperature of a skin-stroking caress. Journal of Neuroscience, 34, 2879–2883.
Andrew, D. (2010). Quantitative characterization of low-threshold mechanoreceptor inputs to lamina I spinoparabrachial neurons in the rat. Journal of Physiology, 588, 117–124.
Bennett, R. H., Bolling, D. Z., Anderson, L. C., Pelphrey, K. A., & Kaiser, M. D. (2014). fNIRS detects temporal lobe response to affective touch. Social Cognitive and Affective Neuroscience, 9, 470–476.
Cascio, C. J., Moana-Filho, E. J., Guest, S., Nebel, M. B., Weisner, J., Baranek, G. T., & Essick, G. K. (2012). Perceptual and neural response to affective tactile texture stimulation in adults with autism spectrum disorders. Autism Research, 5, 231–244.
Cole, J. D., Bushnell, M. C., McGlone, F., Elam, M., Lamarre, Y., Vallbo, A. B., & Olausson, H. (2006). Unmyelinated tactile afferents underpin detection of low-force monofilaments. Muscle Nerve, 34, 105–107.
Craig, A. D. (1995). Distribution of brainstem projections from spinal lamina I neurons in the cat and the monkey. Journal of Comparative Neurology, 361, 225–248.
Einarsdottir, E., Carlsson, A., Minde, J., Toolanen, G., Svensson, O., Solders, G., … Holmberg, M. (2004). A mutation in the nerve growth factor beta gene (NGFB) causes loss of pain perception. Human Molecular Genetics, 13, 799–805.
Kress, I. U., Minati, L., Ferraro, S., & Critchley, H. D. (2011). Direct skin-to-skin versus indirect touch modulates neural responses to stroking versus tapping. Neuroreport, 14, 646–651.
Löken, L. S., Wessberg, J., Morrison, I., McGlone, F., & Olausson, H. (2009). Coding of pleasant touch by unmyelinated afferents in humans. Nature Neuroscience, 5, 547–548.


Lovero, K. L., Simmons, A. N., Aron, J. L., & Paulus, M. P. (2009). Anterior insular cortex anticipates impending stimulus significance. Neuroimage, 45, 976–983.
Morrison, I., Björnsdotter, M., & Olausson, H. (2011). Vicarious responses to social touch in posterior insular cortex are tuned to pleasant caressing speeds. Journal of Neuroscience, 31, 9554–9562.
Nelson, H., & Geher, G. (2007). Mutual grooming in human dyadic relationships: An ethological perspective. Current Psychology, 26, 121–140.
Nordin, M. (1990). Low-threshold mechanoreceptive and nociceptive units with unmyelinated (C) fibres in the human supraorbital nerve. Journal of Physiology, 426, 229–240.
Olausson, H., Lamarre, Y., Backlund, H., Morin, C., Wallin, B. G., Starck, G., et al. (2002). Unmyelinated tactile afferents signal touch and project to insular cortex. Nature Neuroscience, 5, 900–904.
Olausson, H., Wessberg, J., Morrison, I., McGlone, F., & Vallbo, Å. (2010). The neurophysiology of unmyelinated tactile afferents. Neuroscience and Biobehavioral Reviews, 34, 185–191.
Porges, S. W. (2007). The polyvagal perspective. Biological Psychology, 74, 116–143.
Vallbo, Å., Olausson, H., Wessberg, J., & Norrsell, U. (1993). A system of unmyelinated afferents for innocuous mechanoreception in the human skin. Brain Research, 628, 301–304.


Vrontou, S., Wong, A. M., Rau, K. K., Koerber, H. R., & Anderson, D. J. (2013). Genetic identification of C fibres that detect massage-like stroking of hairy skin in vivo. Nature, 493, 669–673.
Zotterman, Y. (1939). Touch, pain and tickling: An electrophysiological investigation on cutaneous sensory nerves. Journal of Physiology, 95, 1–28.


Is There a Trade-off Between Truth and Well-being? An Investigation of Causation in the Depressive Realism Hypothesis
PSYC 528
Stephania Donayre


Abstract
Depressive realism suggests that depressed people have a more realistic view of themselves and the world than mentally healthy people. According to this view, the nature of depression seems to play a role in diminishing positive illusions. While the literature by and large accepts diminished positive illusions as a real phenomenon, their etiological relationship to the depressive realism hypothesis remains indeterminate. That is, depression may lead to more unbiased updating, or neural systems conducive to more unbiased updating of beliefs may generate depression. Therefore, I propose an experimental study with four measurement points that sheds light on this question, and, based on the available literature, I predict that depressive mood might enhance one's ability for realistic insight.
Keywords: depression, depressive realism, cognitive updating, positive illusions, bias


Is There a Trade-off Between Truth and Well-being? An Investigation of Causation in the Depressive Realism Hypothesis
There are two main theories regarding the association between rationality and mental health: the "traditional" conception of mental health, according to which an accurate perception of the self and the world is the cornerstone of psychological adjustment (Erikson, 1963; Beck, 1979; Jahoda, 1958; Maslow, 1950); and Taylor and Brown's social psychological model of mental health (1988, 1994), which claims that positive illusions are adaptive, promote mental health, and enable people to feel hopeful in the face of great difficulties and overwhelming uncertainty. The present investigation has found important theoretical support for the latter hypothesis, and the main findings are discussed in this paper. According to our literature review, there are three important inter-related domains in which healthy people exhibit positive illusions while moderately depressed people do not: 1) illusory superiority, in which healthy individuals evaluate themselves in unrealistically positive terms; 2) the optimism bias, in which they hold views of the future that are rosier than base-rate data can justify; and 3) the illusion of control, in which they believe they have greater control or mastery over environmental circumstances than is actually the case. This integrated set of beliefs is thought to serve a wide variety of cognitive, emotional, and social functions. As such, it is expectedly absent in depressed individuals. This phenomenon is called "depressive realism" (Taylor and Brown, 1994), and suggests that depressed people actually have a more realistic view of themselves and the world than mentally healthy people. This paper reviews the evidence for this claim, its limitations, and its implications for future mental health research and management.


Literature Review
Positive Illusions
Positive illusions are unrealistically favourable attitudes that people have towards themselves, the world, and the future. Thus, they are a form of creative self-deception that, at least in the short term, helps maintain self-esteem and promote productivity, social interaction, subjective happiness, and physical health (Taylor and Armor, 1996; Taylor et al., 2003; McKay & Dennett, 2010). On one hand, research suggests that there may be modest genetic contributions to the ability to develop positive illusions (Owens et al., 2007); on the other hand, early environment seems to play a consequential role as well: people are more prone to develop these positive beliefs in nurturing environments than in harsh ones (idem). Moreover, cultural context has a considerable role in the development of these beliefs (Heine & Hamamura, 2007), as we will see when considering the limitations of the depressive realism theory. We will now explore how optimal mental health is associated with three broad kinds of unrealistic beliefs.
I. "Above-the-average effect" or Illusory Superiority
For illusory superiority to be established by social comparison, the word "average" needs to be defined and its operational assumptions acknowledged. In principle, it is logically plausible for nearly all of the population to be above the mean if the distribution of abilities is highly skewed. For instance, the mean number of hands per human being is slightly lower than two, since some individuals have fewer than two and (almost) none have more. Nevertheless, researchers assume that psychological capabilities are continuous or dimensional in nature, as opposed to discrete or categorical. Hence, experiments usually compare subjects to the median of a sample peer group since, by definition, it is impossible for a majority to exceed the median.
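The mean-versus-median point can be checked with a short numeric sketch; the population figures below are invented purely for illustration:

```python
# Illustration of the mean/median distinction discussed above.
# In a skewed distribution, almost everyone can exceed the mean,
# but at most half the population can exceed the median.

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# Hypothetical population: 99 people with two hands, one with one hand.
hands = [2] * 99 + [1]

m = mean(hands)                                  # 1.99, slightly below two
above_mean = sum(x > m for x in hands)           # 99 of 100 are "above average"
above_median = sum(x > median(hands) for x in hands)  # nobody exceeds the median

print(m, above_mean, above_median)
```

This is why comparing self-ratings to the sample median, rather than the mean, is what makes the "majority above average" finding a genuine statistical impossibility.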

This "statistical impossibility" is consistently observed in the above-the-average effect, in which people regard themselves more positively than they regard others, and less negatively than others regard them. From this observation, it has been argued that human beings, rather than being equally aware of their strengths and weaknesses, are normatively more aware of their strengths and not very aware of their weaknesses (Taylor & Brown, 1988). Thus, it is hypothesized that when people describe themselves, their positive attributes are often more descriptive of them than their negative attributes (Taylor, 1983). While the psychological explanations behind this phenomenon are still a matter of debate, the effect itself has been widely recognized across various abilities and traits, from driving ability (Svenson, 1981) to attractiveness (Barelds & Dijkstra, 2009), parenting (Wenger & Fowers, 2008), ethics (Tappin & McKay, 2016), and health (Brown, 1986). In short, people seem to be predisposed to believe they are much more competent and desirable than they actually are, and this self-serving way of perceiving ourselves might be an effective manner of protecting our self-esteem. The most paradigmatic ramification of illusory superiority is empirically reported in general intelligence estimation. Cross-cultural studies on perceived intelligence (Dunning, 2005; Dunning, Kerri, & Kruger, 2003; Lee, 2012) report a transcultural proclivity of people with a below-average IQ to overestimate their IQ (and of people with an above-average IQ to underestimate their IQ). This phenomenon has been coined the "Downing effect." In this same line of research, illusory superiority has been found in academic work comparing memory self-reports, such as Schmidt's (1999) research in older adults. Furthermore, this phenomenon has also been observed at larger scales.
For instance, in a survey at Stanford University, 87% of MBA students rated their academic performance as above the median (Zuckerman, 2001). This does not apply to students exclusively: in a similar survey of faculty at the University


of Nebraska, 68% rated themselves in the top 25% for teaching ability, and more than 90% rated themselves as above average (Cross, 1977). The concept of illusory superiority seems to be applicable to more social matters as well, such as popularity, friendships (Alicke et al., 2005), and relationship happiness (Buunk, 2001). In Zuckerman and Jost's study (2001), participants were given detailed questionnaires about their friendships and asked to evaluate their own popularity. Using social network analysis, the researchers were able to demonstrate that participants generally had inflated perceptions of their own popularity, especially in comparison to their own circle of friends. Notably, when subjects describe themselves in positive terms compared to other people, this also includes portraying themselves as less susceptible to bias than other people. This effect is called the "Bias Blind Spot" and has been demonstrated independently in multiple studies (for a review, see Scopelliti et al., 2015).
II. Optimism Bias
People are considered unrealistically optimistic if they predict that a personal future outcome will be more favourable than is suggested by a relevant, objective standard (Shepperd et al., 2015). Researchers thus distinguish two facets of this phenomenon. On one hand, healthy human beings tend to overestimate their likelihood of experiencing a broad variety of positive events, such as receiving a high starting salary after college graduation (Fernandez et al., 1996) or having a gifted child (Shepperd et al., 2013). On the other hand, unrealistically optimistic individuals underestimate their chances of succumbing to negative events, such as getting divorced (Radcliffe & Klein, 2002); smokers, for instance, believe that they are less likely than other smokers to contract lung cancer or disease (Ayanian & Cleary, 1999). This delusive nature of optimism is also evident in individuals' underestimation of the time required to finish a


variety of tasks, a misjudgement known as the planning fallacy (Calderon, 1993). Moreover, evidence suggests that most people believe that they should be unrealistically optimistic, a value judgement that I identify as "meta-optimism": people generally believe that it is better to be unrealistically optimistic than accurate or pessimistic in personal predictions (Armor, Massey, & Sackett, 2008). Unrealistic optimism arguably has many causes (e.g., the way humans process information through a "representativeness" heuristic: Tversky, 1977), though for the purpose of this paper the motivational origin is the most salient one. The desire to feel good may motivate people to be unrealistically optimistic in their personal predictions: positive outcome expectations tend to foster goal persistence, reduced anxiety, positive affect, and hope (Armor & Taylor, 1998). As a consequence, optimism often enhances productivity and persistence with tasks on which people might otherwise give up (Greenwald, 1980). In this way, newer evidence from the extensive positive psychology literature (DeSalvo et al., 2006; Bushwick, 2012) documents advantages of positive thinking that extend from greater happiness (Seligman, 2003; Gilbert, 2005) to longer life expectancy (Bopp et al., 2012).
III. Illusion of Control
The illusion of control is the tendency of individuals to overestimate their ability to control events they demonstrably do not influence; that is, to perceive a relationship between their behavior and an outcome that is actually uncorrelated with it. The phenomenon has been successfully replicated in many different contexts: laboratory experiments, observed behavior in games of chance, and self-reports of real-world behavior (for a comprehensive review, see Thompson, 2004). One simple form of this fallacy is found in casinos: when rolling dice in craps, it has been shown that people tend to throw


harder for high numbers and softer for low numbers. The illusion is thought to influence gambling behavior and belief in the paranormal. Moreover, the illusion of control is argued to be adaptive because it empowers people to feel hopeful when facing uncontrollable risks (Bonanno, Rennicke, & Dekel, 2005). The predominant paradigm in research on unrealistic perceived control has been Ellen Langer's (1975) "illusion of control." This theory proposes that assessments of control depend on two conditions: an intention to create the outcome, and a relationship between the action and the outcome. In games of chance, these two conditions frequently go together. According to the author, when people are driven by internal goals concerned with the exercise of control over their environment, they will seek to reassert control in conditions of chaos, uncertainty, or stress. Thus, one way of coping with a lack of real control is to falsely attribute to oneself control over a situation. Psychological theorists have consistently emphasized the importance of perceptions of control over life events, since humans have a strong desire to control their environment. Taylor and Brown (1988) have argued that positive illusions, including the illusion of control, are a beneficial psychological adaptation, as they motivate people to persist at tasks when they might otherwise give up. This position is best illustrated by Albert Bandura's claim that "optimistic self-appraisals of capability, that are not unduly disparate from what is possible, can be advantageous, whereas veridical judgments can be self-limiting" (Bandura, 1989, p. 1175). Thus, it is arguable that a sense of control (the illusion that one's personal choices have an impact on uncontrollable circumstances) has a definite and positive role in sustaining life.
Neural Systems Behind Positive Illusions. There has been notable interest in the neural underpinnings of the widespread phenomenon of positive illusions.
On one hand, the degree to which people estimate themselves as more appealing than the average person has been linked to


reduced activation in their orbitofrontal cortex and dorsal anterior cingulate cortex (Beer, 2014; Beer & Hughes, 2009; idem, 2012). On the other hand, superiority illusions have been claimed to arise from resting-state brain networks modulated by dopamine (Yamada et al., 2013; Qiu et al., 2010). The neural areas associated with positive illusions also play a role in "cognitive control," which is consistent with the documented association between frontal lobe dysfunction and poor insight. In the case of unrealistic optimism, new studies now claim that positive illusions emerge as a result of biased belief updating with distinctive neural correlates (Shah et al., 2016). On a behavioral level, these studies suggest that, for negative events, desirable information is incorporated into personal risk estimates to a greater degree than undesirable information (resulting in a more optimistic outlook). Functional neuroimaging further suggests that the rostral ACC (anterior cingulate cortex) plays a key role in modulating both emotional processing and autobiographical retrieval (Addis et al., 2007). Based on these data, it has been suggested that the rostral ACC has a crucial part to play in creating positive images of the future and, ultimately, in ensuring that an optimistic outlook is maintained.








Sharot, Korn, and Dolan (2011) examined the question of how people maintain unrealistic optimism despite frequently encountering information that challenges those biased beliefs. They found a marked asymmetry in belief updating: participants updated their beliefs more in response to information that was better than expected than to information that was below expectation. This selectivity was mediated by a relative failure to code for errors that should reduce optimism. Thus,


they found that optimism was related to diminished coding of undesirable information about the future in a region of the frontal cortex (right IFG) that has been identified as being sensitive to negative estimation errors. Participants with high scores on trait optimism showed weaker tracking of undesirable errors in this region than those with low scores. In contrast, tracking of desirable information in regions processing desirable estimation errors (MFC/SFG, left IFG, and cerebellum) did not differ between high and low optimists. In summary, these findings indicate that optimism is tied to a selective update failure and diminished neural coding of undesirable information regarding the future. These results did not reflect specific characteristics of the adverse events (for example, familiarity, negativity, arousal, past experience, or how rare or common the event was), since these variables were controlled in the aforementioned studies. Thus, unlike predictions from learning theory, in which positive and negative information are given equal weight (Pearce, 1980; Sutton, 1998), these researchers found a valence-dependent asymmetry in how estimation errors affected beliefs about one's personal future.

FIG. 1 Brain activity tracking estimation errors. (a,b) Regions in which BOLD signal tracked participants' estimation errors on a trial-by-trial basis in response to desirable information regarding future likelihoods included the left IFG (a) and bilateral MFC/SFG (b) (P < 0.05, cluster-level corrected). (c) BOLD signal tracking participants' estimation errors in response to undesirable information was found in the right IFG/precentral gyrus (P < 0.05, cluster-level corrected). (d) Parameter estimates of the parametric regressors in both the left IFG and bilateral MFC/SFG did not differ between individuals with high or low scores on trait optimism. In contrast, in the right IFG, a stronger correlation between BOLD activity and undesirable errors was found for individuals with low scores on trait optimism relative to those with high scores. Error bars represent s.e.m. *P < 0.05, two-tailed independent-sample t test.
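The valence-dependent asymmetry described in these studies can be caricatured with a toy updating rule. The learning rates below are illustrative assumptions, not values estimated from the cited experiments:

```python
# Toy model of valence-dependent belief updating (illustrative only).
# A belief about personal risk moves toward new evidence by a learning
# rate that depends on whether the news is desirable (risk lower than
# believed) or undesirable (risk higher than believed).

def update(belief, evidence, lr_good=0.7, lr_bad=0.2):
    """Shift belief toward evidence; desirable news is weighted more."""
    error = evidence - belief               # estimation error
    lr = lr_good if error < 0 else lr_bad   # lower risk = desirable news
    return belief + lr * error

belief = 0.40  # believed probability of an adverse event

good = update(belief, 0.20)  # desirable information: risk is lower  (≈ 0.26)
bad = update(belief, 0.60)   # undesirable information: risk is higher (≈ 0.44)

print(round(good, 2), round(bad, 2))
```

With these (made-up) rates, the belief shifts by 0.14 toward good news but only 0.04 toward bad news; an unbiased updater, as the depressive realism literature describes, would use equal rates for both directions.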


Limitations. While the notion that neurotypical individuals tend to perceive reality more positively and optimistically is widely accepted, there are still controversies about the extent to which people reliably demonstrate positive illusions outside experimental conditions, and about whether these illusions are always beneficial to the people who hold them (Colvin & Block, 1994; Kruger, Chan, & Roese, 2009; McKay & Dennett, 2009). The provocative idea that objectivity and happiness could be in opposition to each other is both a psychological and a philosophical debate that will continue for some time. For now, we will analyze the empirical evidence available. The main limitation of this first, rather robust compilation of research is that the large majority of the data originates from studies conducted solely on American participants. Thus, this might not be a true representation of human psychology, but rather a phenomenon that varies cross-culturally. For example, some studies indicate that East Asians tend to underestimate their own abilities in order to improve themselves and get along with others (Falk et al., 2009; DeAngelis, 2003). Moreover, to the best of my knowledge, none of these studies have drawn distinctions between people with legitimate or illegitimate high self-esteem. On this note, other studies have found that the absence of positive illusions mainly coexists with high self-esteem (Compton, 1992) and that determined individuals, focused on personal growth and learning, are less prone to positive illusions (Knee, 1998). While this last body of literature is, in comparison to the former, small and less recent, it is important to examine the possibility that while illusory superiority may be associated with undeservedly high self-esteem, people with legitimately high self-esteem need not exhibit it.
Unlike the above-the-average effect, optimism bias has been reported to transcend gender, race, nationality, and age (idem; Makridakis & Moleskis, 2015). Nevertheless, it is not yet clear

whether people are unrealistically optimistic at all times or for all events (Harris, Griffin, & Murray, 2008). For example, people often show less unrealistic comparative optimism when estimating their chances of experiencing negative events that occur frequently in the population (Chambers, Windschitl, & Suls, 2003). However, while it is conceivable that unrealistic optimism can vary with situational factors, it also seems resistant to interventions designed to reduce it (Weinstein & Klein, 1995). Moreover, other research has revealed that unrealistic absolute optimism can lead to disappointment, regret, and other problems when outcomes fall short of expectations (Carroll et al., 2006; Colvin and Block, 1994). Although excessive illusions can be detrimental, it can be argued that they are preferable to their opposite (Seligman, 1975; Nolen, 2014), since negative illusions can lead individuals to give up on trying because they believe that future events cannot be influenced by them in any constructive way. Finally, most past work has operationalized illusory control in terms of subjective ratings or behaviour, with limited consideration of the relationship between these definitions or of the broader construct of agency. However, Tobias-Webb (2016) recently published a robust experimental paper that overcomes these limitations. The results confirm an association between subjective and behavioural illusory control and locate the construct within the cognitive literature on agency: the subjective feeling of authorship over one's actions.
Depressive Realism
Realism is the ability to experience and perceive reality objectively. As the prior findings suggest, healthy, non-depressed people seem to have a natural predisposition to engage in certain cognitive errors. In response, depressive realism holds that some depressed individuals make more accurate judgements and realistic predictions than people without depression.


Predictably, a system that does not allow the creation of such perceptions may promote angst and undermine coping strategies, resulting in a downward spiral of the effect of stressful life events on mental health (Taylor et al., 1984; Wood et al., 1985). Following the logic of the available literature, I will synthesize and interpret the findings with regard to the three categories previously enumerated (illusion of superiority, optimism bias, and illusion of control) to characterize an optimal mental state in the context of depressive realism.
I. Illusion of Superiority
Moderately depressed individuals have been reported to display a less positive, but relatively unbiased, view of the self (Allan et al., 2007; Andrews & Thomson, 2009; Barnaby et al., 2009; Beer, 2014; Dunning, 1991; Harkness et al., 2010). This seems to go beyond self-reports: when a solid, consistent reference point is used in experimental designs, depressed individuals have still been shown to be more realistic (Dunn, 2006; Kapci and Cramer, 1998). Nevertheless, as expected, the more depressed the person is, the more likely they are to underrate themselves. Indeed, depressed persons are often characterized by low self-esteem (Beck, 1967; Alloy & Abramson, 1979). Thus, if one assumes that depressed individuals are not motivated to preserve or enhance self-esteem, then their accuracy in detecting contingencies regardless of outcome valence follows (idem). Consequently, the more severe the depressive emotional state, the more the research findings deviate from depressive realism toward the negativity hypothesis (Szu-Ting, Koutstaal, Poon, & Cleare, 2012; Wisco, 2009). In this way, the present findings regarding the lack of superiority bias in depressed individuals are compatible with the notion of attributional style observed in social psychological experiments (Ball, McGuffin, & Farmer, 2015). Further research would be needed to directly


assess the adequacy of a motivational account of the difference between depressed and non-depressed individuals in judging contingencies. For now, the evidence supports a level-of-depression account of the illusion of superiority, or better-than-average effect. Furthermore, this phenomenon has been expected to have an effect on social matters (Weightman, Air, & Baune, 2014). For instance, in perceiving the nature of interaction in personal relationships, depressed individuals often report feeling misunderstood, even (or especially) by their loved ones (Gordon, Tuskeviciute, & Chen, 2013). This perception, in turn, might not be the manifestation of a general negativity associated with depression (as Beck would speculate), but rather an accurate assessment of reality (Gordon et al., 2013). Indeed, people with depression do tend to be less understood by their partners than people without depression, given their partners' lack of "empathic accuracy." Depression, intriguingly, also seems to be closely related to conscious awareness of the symptoms of one's long-term condition. In a fascinating investigation of the relationship between anosognosia (the lack of awareness of one's mental or physical ailments) and the mood of people affected by Alzheimer's disease (Mograbi et al., 2014), anosognosia appears to be negatively associated with depression. In other words, the more awareness patients have of their disease, the more depressed they are (De Carolis, 2015; Huntley & Fisher, 2016; Vanheusden, 2009). On this note, although the nature of this association has not been explained, one can speculate that depression might enhance one's ability for realistic insight.
II. Absence of Optimism Bias
Korn et al. (2014) suggest that depression is related to the absence of optimistically biased belief updating concerning future life events. In their study, the predictive ability of healthy and


depressed individuals to estimate their personal probability of experiencing 70 adverse life events was measured. Clinically depressed participants updated their beliefs to a similar extent whether the new information about a life event was desirable or undesirable (Korn et al., 2014). In contrast, healthy individuals were less likely to update beliefs when information called for adjustments in a pessimistic direction. Thus, healthy controls exhibited an optimistic bias in updating; that is, they oriented their beliefs toward more desirable information. Notably, depressive symptom severity correlated with biased updating: more severely depressed individuals showed a more pessimistic updating pattern. Overall, this resulted in an absence of updating asymmetry across the sample of depressed individuals, consistent with the level-of-depression account of depressive realism (Birinci, 2010). On this last note, Kornbrot et al. (2013) investigated the effects of mild depression on time estimation and production by having participants assess and generate time intervals of particular durations. Whereas people with mild depression were accurate in estimating perceived and produced time, controls overestimated perceived time and underestimated produced time. These results suggest that individuals with mild depression perceive time more accurately than healthy subjects. The finding that mild depression may be related in some domains to an absence of a positive bias, rather than a presence of a negative bias, raises an intriguing possibility for future research (Strunk et al., 2009): namely, testing whether the absence of positively biased belief updating could predict the onset of a depressive episode among individuals at risk for depression.


III. Illusion of Control
Alloy and Abramson (1979) presented evidence on how subjects estimated their control over a simple light bulb whose switch-on ratio was actually predetermined by the researchers. While non-depressed subjects claimed to have (illusory) control over the light bulb, depressed subjects were more realistic and did not endorse this cognitive distortion. This paradigmatic study has attracted much interest since its publication, and there are now several successful replications of it in different variations (Tabachnik et al., 1983; Dobson & Franche, 1989; Lovejoy, 1991; Margo et al., 1993; Presson & Benassi, 2003; Walker et al., 2003; Watson et al., 2008; Yeh and Liu, 2007). In healthy individuals, the illusion of control is especially apparent under circumstances of adversity (Garrett, 2014; Taylor and Armor, 1996), which may enhance resilience to stressful life events. Likewise, the debilitating consequences of exposure to situations in which responses and outcomes are unrelated have been observed in a large number of species, including dogs (Miller & Seligman, 1975; Liu, Kleiman, Nestor, & Cheek, 2015). Of particular import to this illusion is the hopelessness theory, which argues that depressed individuals have generalized expectancies of independence between their behaviors and outcomes: the depressive is characterized as one who believes that they are ineffective and powerless to control outcomes in the world (Allan, Siegel, & Hannah, 2011). Thus, a natural deduction from the theory is that depressed subjects should underestimate the degree of contingency between their reactions and environmental outcomes (Abramson & Alloy, 1980; Alloy & Seligman, 1979). Nevertheless, the evidence available does not support this assumption, but focuses on valence rather than probability (for a review, see Sharot & Garrett, 2016). For instance, in Experiment 3 (Alloy and Abramson, 1979), the effects of outcome valence,


rather than probability, were evaluated on control ratings in depressive and non-depressive participants. In other words, an outcome was made either desirable or undesirable, in contrast to frequent or infrequent. Over the course of 40 trials, participants had the choice to either press a button or not do anything within a specific time frame, after which a green bulb would either light up or stay off. At the end of the trials, the participant rated, on a scale of 0 to 100, the degree of control that their responses exerted over the illumination of the light. Both the occurrence of the outcome and the measure of the contingency between the response and the outcome were constant, and the valence of light onset was manipulated. For the “win” condition, the participant gained $0.25 on each trial during which the light turned on. For the “lose” condition, the participant began the block of trials with $5.00 and lost $0.25 on each trial on which the light did not turn on. There was no contingency between responding and the monetary incentive; that is, these rewards occurred regardless of whether or not the participant responded. In the win condition, outcome valence influenced non-depressive ratings but not depressive ratings. In the lose condition, ratings were low for both mood groups. Thus, depressive realism was verified with an “outcome valence effect”, as well as with an outcome-density effect. While this specific result has been experimentally replicated a number of times (Lennox, Bedell, Abramson, Raps, & Foley, 1990; Vázquez, 1987; Cramer, 1999), it is not clear what is causally driving the accuracy of depressed subjects’ responses; that is, whether this is a mood effect, or a “pure” analytic judgement of contingency. It could be that depressive mood causes people to assess contingencies better, or that those who are realistic about contingencies are at a higher risk for depression. Neural Systems Behind Depressive Realism. Garrett et. al. 
(2014) used brain imaging in conjunction with a belief-update task administered to clinically depressed patients and healthy controls to characterize the brain activity that supports unbiased belief updating in clinically depressed individuals. They found that unbiased belief updating in depression is mediated by strong neural coding of estimation errors in response to good news (left inferior frontal gyrus and bilateral superior frontal gyrus) and to bad news (right inferior parietal lobule and right inferior frontal gyrus; see Fig. 4). The results suggest that depression, in contrast to good mental health, is related to a lack of discounting of bad news, resulting in unbiased updating of beliefs in response to good and bad news alike. Unbiased updating in MDD is therefore explained in this study by the adequate use and neural tracking of negative estimation errors. These findings are complementary to those of Sharot et al. (2011), who demonstrated that, in healthy participants, biased updating in response to positive and negative news is mediated by a relatively weak correlation between brain activity and negative estimation errors alongside intact coding of positive estimation errors. Moreover, in an earlier study, Sharot et al. (2007) argued that the positivity bias in the imagination of future life events, whereby participants imagined positive future events as closer in time and more vivid than negative events, is mediated by activity in the rostral anterior cingulate cortex and amygdala. Although participants in that study showed an optimistic bias in unconstrained imagination, the researchers also identified an optimistic learning bias when participants' beliefs were challenged by new information. Thus, Sharot's results provide a


powerful explanatory framework for how optimistic biases are maintained, and Garrett's research offers a compatible mechanistic account of how depressive symptoms support unbiased beliefs. In summary, these findings suggest that the human propensity toward optimism is facilitated by the brain's failure to code estimation errors when those errors call for pessimistic updates. This failure results in selective updating, which supports unrealistic optimism that is resistant to change (see Fig. 2). In this respect, underestimating one's susceptibility to negative events might serve an adaptive function by enhancing explorative behavior and reducing the stress and anxiety associated with negative expectations (Scheier, 1989; Taylor et al., 2000; Varki, 2009). This is consistent with the observation that mild depression is associated with unbiased expectations (Soderstrom, 2011) and severe depression with pessimistic expectations (Strunk, Lopez, & DeRubeis, 2006). Limitations. Depressive realism faces many challenges. First, notwithstanding a considerable body of research supporting the theory, there also exists evidence to the contrary (Dobson & Pusch, 1995; Dunning & Story, 1991; Allan et al., 2007; Conn, 2007). For instance, depressive realism has not been demonstrated when participants make decisions for other people, such as financial decisions (Garcia-Retamero et al., 2015). Birinci and Dirik (2010) group these contradictions into three main categories: the objectivity of the realistic evaluation, the validity of the term depression, and the generalizability of the results.


Stone et al. (2001) proposed the notion of general negativity to explain the depressive realism hypothesis: depressed people, on this account, always take a more negative attitude towards reality and thus merely appear to be more realistic; they are only more pessimistic than nondepressed people, not necessarily more realistic. While this assumption has been supported by several researchers (Fu, Koutstaal, Poon, & Cleare, 2005), others argue that the explanation is circular and incomplete (Dunn et al., 2006; Sharot & Garrett, 2016), and point out that the experimental designs of these studies did not include a consistent, solid realistic reference point, a recent improvement in most depressive realism research protocols (Yeh & Liu, 2007; Whiten et al., 2008). Msetfi et al. (2005, 2007) examined the nature of these disputes and observed that the interval between trials was a very important independent variable: as the interval increased, healthy subjects made more judgment errors, whereas the interval between trials did not affect the performance of depressed people. One interpretation is that depressed subjects did not consider the entire context of the experiment, making them appear more realistic or successful in their estimates (Adelson, 2005). Moreover, mediating factors have been suggested (Blanco, 2012). In that study, participants evaluated their control over a flashing light on a computer screen. While the results empirically confirm the depressive realism hypothesis (depressed participants do not overestimate their control over independent events), they can also be read in a different light: depressed individuals might simply be more passive or hesitant in exhibiting their reactions. On this interpretation, predictive accuracy in the task might not result from accurate analysis alone, but may be modulated by motivational factors. Further research is needed on this point.
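The "realistic reference point" in these judgment-of-contingency tasks is typically the objective ΔP statistic: the probability of the outcome given a response minus the probability of the outcome given no response. As a minimal illustrative sketch (the function and variable names are mine, not from any cited study), a zero-contingency block like Alloy and Abramson's can be scored as follows:

```python
def delta_p(trials):
    """Objective response-outcome contingency:
    P(outcome | response) - P(outcome | no response).
    `trials` is a list of (responded, outcome) booleans."""
    with_resp = [outcome for responded, outcome in trials if responded]
    without_resp = [outcome for responded, outcome in trials if not responded]
    return sum(with_resp) / len(with_resp) - sum(without_resp) / len(without_resp)

# A 40-trial block in which the light comes on 75% of the time regardless of
# responding: a high outcome density but zero objective contingency, as in
# the non-contingent problems described above.
block = ([(True, True)] * 15 + [(True, False)] * 5 +    # pressed: light on 75%
         [(False, True)] * 15 + [(False, False)] * 5)   # withheld: light on 75%
print(delta_p(block))  # 0.0 -- pressing the button exerts no control
```

A participant whose control rating tracks ΔP would report near-zero control here; an illusion of control corresponds to a rating well above that objective value, especially when outcome density is high.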


Finally, we found three meta-analyses (Ackermann & DeRubeis, 1991; Moore & Fresco, 2007a; Moore, 2012) in which approximately 200 studies were analyzed; more studies supported the depressive realism hypothesis than did not. However, many of these studies adopted different methodologies, utilized variant construals of depressive realism and unrealistic optimism, and in some experiments the sample sizes were very small. Additional research is needed to replicate the findings in more naturalistic settings, with higher power, and in more diverse (non-Western) populations. Even if it is justified to believe that depressive realism is a real phenomenon, the available data do not necessarily point to causation: depression may lead to more unbiased updating, or neural systems that support more unbiased updating of beliefs may generate depression. Therefore, I propose a hypothetical study that could shed some light on this relationship.

Materials and Method

Participants. 300 individuals aged 18-65 would participate in the study. From this population, 150 would be unmedicated depressed participants and 150 would be controls. Ideally, the sample would have a balanced ratio of males to females. Depressed individuals would be identified through the McGill Psychotherapy Process Research Group (MPPRG) or recruited by advertisement. Healthy controls would be recruited from the McGill Psychology Human Participant Pool and matched to depressed individuals for age, gender, and level of education. Exclusion criteria include antidepressant medication within 5 weeks of the study, an episode of substance abuse within 1 year prior to undertaking the study, and any other past or present psychiatric condition (excepting depression and anxiety in the depressed patient group only).


For the depressed participants, the MINI should confirm that they have experienced depressive episodes in the past and meet criteria for a major depressive episode at the time of the study. The MINI will also be employed to exclude individuals with any other significant psychiatric history besides anxiety. Before the study, all participants should complete the Beck Depression Inventory (Beck et al., 1961). Ideally, we would expect a majority of depressed participants to be mildly depressed, followed by moderately depressed participants, with a minority being severely depressed. All participants would provide informed consent and receive compensation for their time.

Procedure

• Time 1: participants are introduced to an adapted version of the Cognitive Abilities Test (CogAT)1 and are told that they have the opportunity to be additionally compensated according to their performance on the exam and the accuracy with which they predict their objective scores. They are informed that they have two opportunities during the study to guess their scores, and that they can change their answers without monetary penalty. All participants take the exam and researchers collect their respective scores.

• Time 2: directly following exam completion, participants are asked to predict their score, from 0% to 100%. Additionally, they are prompted to guess how well they performed relative to the other participants in the laboratory room.

• Time 3: depressed and elated mood states are induced in 100 controls and 100 depressed individuals, respectively. Depressed mood will be induced in healthy controls by having them

1 The Cognitive Abilities Test (CogAT) is a group-administered assessment intended to estimate reasoning and problem-solving abilities using verbal, quantitative, and nonverbal (spatial) symbols (Lohman & Hagen, 2001a).

watch melancholic film clips with self-referent depressive dialogue, such as “I have never done anything worthwhile in my life” and “I am so tired. I just want to go to sleep and never wake up”. In contrast, elated mood will be induced in depressed participants with humorous, cheerful film clips containing self-referent elated sentences, such as “I'm feeling pretty good right now” and “Every day, I am getting better and better”. The remaining 50 depressed and 50 control subjects will not undergo mood induction and will instead watch neutral film clips without self-referent sentences, such as a documentary about plants. All mood-induced participants (i.e., all but this last group) will be instructed to imagine being in the situation and to ‘imagine the feelings you would experience in the situation’. This set of instructions has been found to be the most effective in inducing mood states (Westermann, Stahl, & Hesse, 1996).

• Time 4: we will assess the effects of mood induction through three tasks. First, to measure the illusion of control, the “win” non-contingency task from Experiment 3 of Alloy and Abramson (1979), described in the previous section, will be used. Second, to measure optimism bias, participants will be given a second opportunity to change their predictions of their exam scores (T1). Finally, participants will be shown the results from a fictitious set of 50 participants in the room; these results will be similar to their own, with an average of 50%. To measure the illusion of superiority, participants will be asked again whether they would like to update their answers given the new information. We will collect these new assessments and compare them to the scores given at T2, with the objective of investigating the effect of the mood induction procedure on participants' answers and changes.
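For clarity, the allocation of the 300 participants across the Time 3 conditions can be tabulated as a quick sanity check (the group labels are my own shorthand, not from the protocol above):

```python
# Hypothetical cell counts for the proposed 300-participant design:
# diagnosis (depressed vs. control) crossed with film-clip condition.
cells = {
    ("depressed", "elation induction"): 100,
    ("control", "depression induction"): 100,
    ("depressed", "neutral clips"): 50,
    ("control", "neutral clips"): 50,
}

assert sum(cells.values()) == 300  # full sample

# Marginal totals per diagnostic group.
per_diagnosis = {}
for (diagnosis, _), n in cells.items():
    per_diagnosis[diagnosis] = per_diagnosis.get(diagnosis, 0) + n

print(per_diagnosis)  # {'depressed': 150, 'control': 150}
```

Note that each diagnostic group contributes 150 participants, matching the recruitment targets stated in the Participants section, with two thirds of each group undergoing opposite-valence mood induction and one third serving as a neutral-clip comparison.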


Results

Depressive realism is a descriptive (not an etiological) hypothesis. If the theory is assumed to be true, it could be deduced that a depressive mood state causes individuals to make more realistic inferences; if so, individuals should update their beliefs more accurately when they are depressed than when they are not. Alternatively, those who judge situations accurately may be more prone to depression than those who misjudge them, and the neural systems that support more unbiased updating of beliefs may promote depression. For instance, people who are realistic about their impact on environmental events would be at higher risk of depression. Given the evidence and theoretical framework provided in the previous section, I hypothesize that the current investigation will support the first interpretation of depressive realism. This research primarily aims to investigate whether the results reported by Alloy and Abramson (1979) hold when the mood states are transient. I hypothesize that induced-elation depressives will temporarily display the illusion of control normally observed in nondepressives, whereas induced-depression controls will exhibit the depressive realism ordinarily observed in depressives. As for the non-induction groups, I predict that non-depressed participants will express higher ratings of control than depressed participants. In summary, I predict that Alloy and Abramson's results will be replicated in this study, with mood established as an independent, causal variable. Second, in line with the available literature, I predict that temporarily elated depressives will exhibit optimism bias; that is, they will change their score predictions more favourably after the mood induction.
Similarly, I predict that this provisionally elated group will fail to accurately assess the set of scores provided by the researchers and will exhibit illusory superiority: that is, they will change their comparative predictions in a more self-serving fashion. In the


temporarily depressive healthy group, I expect the opposite results: they will change their score predictions more negatively after the mood induction and will not manifest illusory superiority. Finally, with no induction or with neutral induction, I predict that non-depressed participants will show higher ratings of optimism bias and illusory superiority than depressed participants.

Discussion and Future Directions

If the hypotheses of the current investigation are confirmed, moderately depressive moods could potentially be conceptualized as affecting individuals' awareness of what has been learnt, and not necessarily the process of learning itself. Authors like Bertels, Demoulin, and Franco (2013) seem to arrive implicitly at this conclusion. In their intriguing paper “Side effects of being blue: influence of sad mood on visual statistical learning”, participants in a sad mood seemed to have better conscious access to another type of knowledge, obtained via visual statistical learning. If this is correct, Beck's Cognitive Theory of Depression might be decisively flawed, or at the very least incomplete. Further research is needed to explore the potential implications of these findings in clinical and therapeutic contexts. Recent multidisciplinary research has confirmed that subjects with depressive symptoms are more accurate than controls in performing tasks involving time perception and estimates of personal, self-related circumstances (Ackermann & DeRubeis, 1991; Moore & Fresco, 2007a; Moore, 2012). However, in other prediction tasks, such as random future events (Siegel, 2007) and states of affairs concerning other individuals (Chuang, 2007), the evidence is unclear. In sum, people diagnosed with mild or moderate depression, among the most treatment-resistant forms of the illness (Al-Harbi, 2012; De Sousa et al., 2015), make more accurate judgements in a variety of domains, and the most striking evidence comes from studies based on people's real-life experiences.
Thus, depressive realism appears to apply primarily at moderate levels of depression. Conversely, severe


depression and non-depression do not exhibit such considerable calibration differences. As depression progresses, a negative bias may emerge (Kornbrot et al., 2013). In this way, the level-of-depression account of depressive realism is supported by the currently available evidence (Birinci, 2010). Positive illusions can have both benefits and drawbacks for the individual, and there is controversy over whether they are evolutionarily adaptive (McKay & Dennett, 2009; Veelen & Nowak, 2011). While the illusions may have a direct benefit by helping the individual cope with stress, or by promoting resilience when working towards success (Bandura, 1979), unrealistically positive expectations may also prevent people from taking sensible precautions. Indeed, recent research suggests that positive illusions may have both short-term gains and long-term costs (Bortolotti & Antrobus, 2015). Further research and evolutionary theorizing are needed to better understand the specific circumstances in which positive illusions are advantageous to healthy human beings. Within an evolutionary framework, mild depression has been hypothesized to be an adaptation for analyzing complex problems (Bergstrom, 2016; Trimmer et al., 2015). Depressed individuals are not only prone to intense, persistent rumination on their problems but also have difficulty thinking about anything else. Numerous studies have shown that this thinking style is often highly analytical and can be very productive (Barbic et al., 2014; Cola et al., 2010; Watkins & Teasdale, 2001). For instance, studies have found that expressive writing promotes quicker resolution of depression (Gortner, Rude, & Pennebaker, 2006; Graf, Gaudiano, & Geller, 2008), and they suggest that this is because depressed people gain insight into their problems (Hayes et al., 2005). Thus, depressive thinking styles can perhaps be seen as the opposite of a blithe and easy self-confidence.
This provides a new framework that is worth exploring for its potential


contributions to understanding the evolutionary vulnerability of mental disorders, a framework that incorporates both adaptive evolution and the importance of individual life experiences.


References

Addis DR, Wong AT, Schacter DL (2007). Remembering the past and imagining the future: common and distinct neural substrates during event construction and elaboration. Neuropsychologia, 45:1363–1377.

Ali, A., Ambler, G., Strydom, A., Rai, D., Cooper, C., McManus, S., Weich, S., Meltzer, H., Dein, S., & Hassiotis, A. (2012). The relationship between happiness and intelligent quotient: the contribution of socio-economic and clinical factors. Psychological Medicine, 43(06), 1303-1312.

Alicke, M. D., Dunning, D., & Krueger, J. I. (2005). The self in social judgment. New York: Psychology Press.

Allan, L. G., Siegel, S., and Hannah, S. (2007). The sad truth about depressive realism. Q. J. Exp. Psychol. 60, 482–495.

Andrews PW, Thomson JA Jr (2009). The bright side of being blue: depression as an adaptation for analyzing complex problems. Psychol Rev. Jul;116(3):620-54.

Alloy LB, Abramson LY (1979). Judgment of contingency in depressed and nondepressed students: sadder but wiser? J Exp Psychol Gen 108:441–485.

Ambady N, Gray HM (2002). On being sad and mistaken: mood effects on the accuracy of thin-slice judgments. J Pers Soc Psychol. Oct;83(4):947-61.

Barbic SP, Durisko Z, Andrews PW (2014). Measuring the Bright Side of Being Blue: A New Tool for Assessing Analytical Rumination in Depression. PLoS ONE 9(11): e112077.


Beer, J. S. (2014). Exaggerated Positivity in Self-Evaluation: A Social Neuroscience Approach to Reconciling the Role of Self-esteem Protection and Cognitive Bias. Social and Personality Psychology Compass, 8, 583–594.

Berridge, K. C., and Kringelbach, M. L. (2011). "Building a neuroscience of pleasure and well-being," in Psychology of Well-Being: Theory, Research and Practice.

Bhattacharjee, A., and Mogilner, C. (2014). Happiness from ordinary and extraordinary experiences. J. Consum. Res. 41, 1–17.

Berridge, K. C., and Kringelbach, M. L. (2013). Neuroscience of affect: Brain mechanisms of pleasure and displeasure. Curr. Opin. Neurobiol. 23, 294–303.

Benassi VA, Mahler HI (1985). Contingency judgments by depressed college students: sadder but not always wiser. J Pers Soc Psychol. Nov;49(5):1323-9.

Bennett D, Murawski C, Bode S (2015). Single-Trial Event-Related Potential Correlates of Belief Updating. eNeuro, 2(5): ENEURO.0076-15.2015.

Bergstrom CT, Meacham F (2016). Depression and anxiety: maladaptive byproducts of adaptive mechanisms. Evol Med Public Health. Aug 3;2016(1):214-8.

Birinci F, Dirik G (2010). Depressive realism: happiness or objectivity. Turk Psikiyatri Derg. Spring;21(1):60-7. Review.

Bloom, P. (2010). How Pleasure Works. New York, NY: W. W. Norton & Company.

Brickman, P., Coates, D., and Janoff-Bulman, R. (1978). Lottery winners and accident victims: is happiness relative? J. Pers. Soc. Psychol. 36, 917–927.


Braw, Y., Aviram, S., Bloch, Y., & Levkovitz, Y. (2011). The effect of age on frontal lobe related cognitive functions of unmedicated depressed patients. Journal of Affective Disorders, 129(1-3), 342-347.

Buunk, B. P. (2001). Perceived superiority of one's own relationship and perceived prevalence of happy and unhappy relationships. British Journal of Social Psychology, 40, 565–574.

Chuang SC (2007). Sadder but wiser or happier and smarter? A demonstration of judgment and decision making. J Psychol. Jan;141(1):63-76.

Crane, C., Barnhofer, T., Visser, C., Nightingale, H., & Williams, J. M. G. (2007). The effects of analytical and experiential rumination on autobiographical memory specificity in individuals with a history of major depression. Behaviour Research and Therapy, 45(12), 3077–3087.

Crisp, R. (2008). "Well-Being," in Stanford Encyclopedia of Philosophy, ed E. N. Zalta.

Cross, K. P. (1977). Not can, but will college teaching be improved? New Directions for Higher Education, 1977, 1–15.

Damasio, A. R. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. New York, NY: Avon Books.

Davies, W. (2015). The Happiness Industry: How Government and Big Business Sold Us Well-Being. New York, NY: Verso.

Dawes, R. (1994). House of Cards: Psychology and Psychotherapy Built on Myth. New York, NY: Free Press.


De Caro, M., and Macarthur, D. (2004). Naturalism in Question. Cambridge, MA: Harvard University Press.

De Caro, M., and Macarthur, D. (2010). Naturalism and Normativity. New York, NY: Columbia University Press.

Davidson, R. J. (2005). Emotion regulation, happiness, and the neuroplasticity of the brain. Adv. Mind Body Med. 21, 25–28.

Dunning D, Story AL (1991). Depression, realism, and the overconfidence effect: are the sadder wiser when predicting future actions and events? J Pers Soc Psychol. Oct;61(4):521-32.

Dworkin, R. (1993). Life's Dominion: An Argument about Abortion, Euthanasia, and Individual Freedom. New York, NY: Knopf.

Ehrenreich, B. (2009). Smile or Die: How Positive Thinking Fooled America and the World. London: Granta.

Falkenberg I, Kohn N, Schoepker R, Habel U (2012). Mood Induction in Depressive Patients: A Comparative Multidimensional Approach. PLoS ONE 7(1): e30016.

Ferraris, M. (2014). Manifesto of New Realism. New York, NY: Suny Press.

Fredrickson, B. L., Grewen, K. M., Coffey, K. A., Algoe, S. B., Firestine, A. M., Arevalo, J. M., et al. (2013). A functional genomic perspective on human well-being. Proc. Natl. Acad. Sci. U.S.A. 110, 13684–13689. doi: 10.1073/pnas.1305419110

Garrett N, Sharot T, Faulkner P, et al. (2014). Losing the rose tinted glasses: neural substrates of unbiased belief updating in depression. Front Hum Neurosci 8:639.


Gordon AM, Tuskeviciute R, Chen S (2013). A multimethod investigation of depressive symptoms, perceived understanding, and relationship quality: depressed and misunderstood? Pers Relatsh 20:635-654.

Gabriel, M. (2011). Transcendental Ontology. London: Bloomsbury Academic.

Haaga DA, Beck AT (1995). Perspectives on depressive realism: implications for cognitive theory of depression. Behav Res Ther. Jan;33(1):41-8. Review.

Haidt, J. (2006). The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom. New York, NY: Basic Books.

Harman, G. (2010). Toward Speculative Realism. New Alresford: John Hunt Publishing.

Haybron, D. (2008). The Pursuit of Unhappiness: The Elusive Psychology of Well-Being. New York, NY: Oxford University Press.

Haybron, D. (2011). "Happiness," in Stanford Encyclopedia of Philosophy, ed E. N. Zalta. Available online at: http://plato.stanford.edu/entries/happiness/

Hurka, T. (1993). Perfectionism. Oxford: Oxford University Press.

Hunt LT, Rutledge RB, Malalasekera WMN, Kennerley SW, Dolan RJ (2016). Approach-Induced Biases in Human Information Sampling. PLOS Biology 14(11): e2000638.

Korn CW, Sharot T, Walter H, et al. (2014). Depression is related to an absence of optimistically biased belief updating about future life events. Psychol Med 44:579–592.

Kapçi EG, Cramer D (1998). The accuracy of dysphoric and nondepressed groups' predictions of life events. J Psychol. Nov;132(6):659-70.


Kedia, G., Mussweiler, T., & Linden, D. E. J. (2014). Brain mechanisms of social comparison and their influence on the reward system. Neuroreport, 25(16), 1255–1265.

Kornbrot DE, Msetfi RM, Grimwood MJ (2013). Time perception and depressive realism: judgement type, psychophysical functions and bias. PLoS One 8:1-9.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.

Kramer, P. D. (2006). Listening to Prozac, 2nd Edn. New York, NY: Penguin Books.

Lewis, G. J., Kanai, R., Rees, G., and Bates, T. C. (2014). Neural correlates of the 'good life': Eudaimonic well-being is associated with insular cortex volume. Soc. Cogn. Affect. Neurosci. 9, 615–618. doi: 10.1093/scan/nst032

Linden, D. J. (2011). The Compass of Pleasure. New York, NY: Penguin Books.

Lutz, A., Slagter, H. A., Rawlings, N. B., Francis, A. D., Greischar, L. L., and Davidson, R. J. (2009). Mental training enhances attentional stability: neural and behavioral evidence. J. Neurosci. 29, 13418–13427. doi: 10.1523/JNEUROSCI.1614-09.2009

Mograbi DC, Morris RG (2014). On the relation among mood, apathy, and anosognosia in Alzheimer's disease. J Int Neuropsychol Soc 20:2-27.

Moore MT, Fresco DM (2007). Depressive realism and attributional style: implications for individuals at risk for depression. Behav Ther. Jun;38(2):144-54.


Msetfi RM, Murphy RA, Simpson J, Kornbrot DE (2005). Depressive realism and outcome density bias in contingency judgments: the effect of the context and intertrial interval. J Exp Psychol Gen. Feb;134(1):10-22.

Ohmae S (2012). The difference between depression and melancholia: two distinct conditions that were combined into a single category in DSM-III. Seishin Shinkeigaku Zasshi, 114(8):886-905. Review.

O'Sullivan, Owen P. (2015). The neural basis of always looking on the bright side. Dialogues in Philosophy, Mental and Neuro Sciences, 8(1):11-15.

Pearce, J. M., & Hall, G. (1980). A model for Pavlovian learning: variations in the effectiveness of conditioned but not of unconditioned stimuli. Psychol. Rev. 87, 532–552.

Molenberghs P, Trautwein F-M, Böckler A, Singer T, Kanske P (2016). Neural correlates of metacognitive ability and of feeling confident: a large-scale fMRI study. Social Cognitive and Affective Neuroscience, nsw093.

Remmers, C., & Michalak, J. (2016). Losing Your Gut Feelings: Intuition in Depression. Frontiers in Psychology, 7, 1291.

Ruehlman LS, West SG, Pasahow RJ (1985). Depression and evaluative schemata. J Pers. Mar;53(1):46-92. Review.

Scult MA, Paulli AR, Mazure ES, Moffitt TE, Hariri AR, Strauman TJ (2016). The association between cognitive function and subsequent depression: a systematic review and meta-analysis. Psychol Med. Sep 14:1-17.


Sharot T, Kanai R, Marston D, Korn CW, Rees G, Dolan RJ (2012). Selectively altering belief formation in the human brain. Proceedings of the National Academy of Sciences of the United States of America, 109(42):17058-17062.

Sharot, T., Riccardi, A. M., Raio, C. M., & Phelps, E. A. (2007). Neural mechanisms mediating optimism bias. Nature 450, 102–105.

Sharot T, Garrett N (2016). Forming Beliefs: Why Valence Matters. Trends Cogn Sci. Jan;20(1):25-33.

Sutton, R. S., & Barto, A. G. (1998). Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.

Taylor, S. E., and Armor, D. A. (1996). Positive Illusions and Coping with Adversity. Journal of Personality, 64, 873–898.

Taylor SE, Brown JD (1988). Illusion and well-being: a social psychological perspective on mental health. Psychol Bull 103:193–210.

Thompson, Suzanne C. (2004). "Illusions of control," in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory. Hove, UK: Psychology Press, pp. 115–125.

Schmidt, I. W., Berg, I. J., & Deelman, B. G. (1999). Illusory Superiority in Self-Reported Memory of Older Adults. Aging, Neuropsychology, and Cognition, 6(4), 288-301.


Soderstrom, N. C., Davalos, D. B., & Vazquez, S. M. (2011). Metacognition and depressive realism: Evidence for the level-of-depression account. Cognitive Neuropsychiatry, 16(5), 461-472.

Szu-Ting Fu T, Koutstaal W, Poon L, Cleare AJ (2012). Confidence judgment in depression and dysphoria: the depressive realism vs. negativity hypotheses. J Behav Ther Exp Psychiatry.

Vázquez C (1987). Judgment of contingency: cognitive biases in depressed and nondepressed subjects. J Pers Soc Psychol. Feb;52(2):419-31.

Weismann-Arcache, C., & Tordjman, S. (2012). Relationships between Depression and High Intellectual Potential. Depression Research and Treatment, 2012, 1-8.

Zuckerman, E., & Jost, J. (2001). What Makes You Think You're so Popular? Self-Evaluation Maintenance and the Subjective Side of the "Friendship Paradox". Social Psychology Quarterly, 64(3), 207-223.


Differences in Interpersonal Behavior among Acquisitive and Protective Self-Monitors

PSYC 380D

Xiaoyan (Rachel) Fang, Gentiana Sadikaj, Debbie S. Moskowitz


Abstract

This study examined associations between interpersonal behavior and acquisitive self-monitoring (the ability to vary behaviors out of concern for achieving social status) and protective self-monitoring (the ability to vary behaviors out of concern for avoiding social rejection). We hypothesized that high acquisitive self-monitors would (1) engage in more agentic and communal behaviors generally across various situations; (2) exhibit more agentic behaviors in response to the perception of less agency by others; and (3) exhibit more communal behaviors in response to the perception of increased communion by others. We expected that high protective self-monitors would (1) engage in less agentic behavior generally across situations and (2) exhibit less communal behavior in response to the perception of less communion by others. Using event-contingent recording (ECR), 266 university students reported their interpersonal behaviors and perceptions of interaction partners' behaviors in daily interactions over 20 days. Self-monitoring was assessed with the Revised 18-item Self-Monitoring Scale (Gangestad & Snyder, 1985). The results showed that high acquisitive self-monitors engaged in more agentic behavior generally across situations, and that they responded with more communal behavior when perceiving increased communion by others. High protective self-monitors behaved less agentically across various situations. This study showed that acquisitive and protective self-monitoring influence individuals' patterns of interpersonal behavior differently. The concern with achieving social status among high acquisitive self-monitors may encourage more agentic behavior and a more communal behavioral response to perceptions of increased communion in others. In contrast, the concern with avoiding social rejection may account for lower agentic behavior among high protective self-monitors.


Differences in Interpersonal Behavior among Acquisitive and Protective Self-Monitors

Interpersonal behavior differs between individuals and, within individuals, across interpersonal situations. While self-monitoring is defined as the ability to exercise control over our verbal and non-verbal self-presentation (Snyder, 1974), it is unclear how the two kinds of self-monitoring, acquisitive and protective, are related to patterns of interpersonal behavior, broadly defined along the communal and agentic dimensions. In the present study, two questions are explored with respect to the influence of acquisitive and protective self-monitoring on interpersonal behavior. First, are these two kinds of self-monitoring related to how people behave in general across various social situations? Second, are they related to how people behave in specific social situations? The present study employed an event-contingent recording method, which allowed us to measure interpersonal behavior as it unfolds in daily interpersonal interactions. To address the first question, we computed associations between acquisitive and protective self-monitoring, respectively, and mean levels of interpersonal behavior, both communal and agentic, across all interpersonal interactions. To answer the second question, we explored whether these two kinds of self-monitoring moderated interpersonal behavior in specific interpersonal situations characterized by an interaction partner's interpersonal behavior.

Defining Self-Monitoring

Self-monitoring was originally defined as the ability to engage in expressive control out of a concern for social appropriateness (Snyder, 1974). Over time, the concern of attaining social appropriateness by regulating and controlling one's self-presentation has shifted towards the concern of cultivating status within a perceived hierarchical social structure (Fuglestad & Snyder,


2010). High self-monitoring individuals are thought to be guided by social cues to regulate and control their behavior to attain social status. The behavior of low self-monitoring individuals, on the other hand, is guided more by their own affective states and attitudes and less by social cues. Snyder (1979) described high self-monitors as individuals who strive to build the image of “the right person in the right place at the right time”, whereas low self-monitors were described as individuals who show a substantial congruence between “who they are” and “what they do”. Therefore, high self-monitors are expected to vary their behaviors across interpersonal interactions, whereas low self-monitors tend to be more consistent in their behaviors across interpersonal interactions.

Self-monitoring was originally conceptualized as a one-dimensional construct (Snyder & Gangestad, 1986). Subsequently, self-monitoring was defined by two independent factors, labelled acquisitive self-monitoring and protective self-monitoring (Briggs & Cheek, 1988; Lennox, 1988; Wilmot, 2015). Acquisitive self-monitors monitor and adapt their behavior to achieve social status (Wilmot, 2015). Individuals higher in acquisitive self-monitoring proactively pursue their agenda. These individuals are thought to possess the social skills of greater adaptation and assertion across situations. Low acquisitive self-monitors, in contrast, are less likely to strategically adjust their behavior in response to social situations out of concern for attaining social status (DeYoung & Weisberg, in press). The second factor, protective self-monitoring, characterizes a person’s willingness to change their behavior to avoid social rejection and disapproval (Wilmot, 2015). Individuals higher in protective self-monitoring, who are thought to have low self-esteem, monitor and adjust their behaviors to suit others.
In contrast, low protective self-monitors are less likely to conform their behavior to suit others to avoid social rejection (DeYoung & Weisberg, in press).


Self-Monitoring and Interpersonal Behavior

Previous studies using the one-latent-factor model of self-monitoring have provided evidence suggesting that self-monitoring does influence an individual’s interpersonal behavior. Garland and Beard (1979) demonstrated that high self-monitoring women attained leadership status more often than low self-monitoring women. Turnley and Bolino (2001) showed that high self-monitors, compared to low self-monitors, were better at effectively using the impression management tactics of ingratiation (i.e., the use of flattery or favors in an attempt to be seen as likeable) and self-promotion (i.e., playing up one’s abilities or accomplishments to be seen as competent) to attain a favorable impression among their colleagues. Another study described the influence of self-monitoring as a mediator of conformity in a group situation, such that self-monitoring mediated the influence of sensitivity to social cues on conformity behavior (Rarick, Soldow, & Geizer, 1976). These studies suggest that individuals who are high self-monitors act like social chameleons by varying how dominant (leadership emergence), submissive (conformity behavior), and agreeable (impression management by ingratiation) their interpersonal behavior is. However, little empirical research has examined how acquisitive self-monitoring and protective self-monitoring might influence an individual’s patterning of interpersonal behavior overall and in specific situations.

Research on the Two Kinds of Self-Monitoring

High acquisitive self-monitors are thought to proactively pursue their own agendas while actively and competently engaging in interpersonal interactions. Wilmot (2015) examined the associations of the metatraits with acquisitive self-monitoring and protective self-monitoring.
According to this theoretical paper, acquisitive self-monitoring is equivalent to the metatrait Plasticity (i.e., the shared variance of Extraversion and Openness/Intellect). People high in


acquisitive self-monitoring are expected to achieve social status by being assertive and self-enhancing, and also “by getting along with others, being helpful to others, and occupying boundary spanning positions in social and work networks” (Fuglestad & Snyder, 2010). High acquisitive self-monitors, compared to low acquisitive self-monitors, are thought to be more assertive and warm. They are more likely to experience positive emotions and seek excitement. High acquisitive self-monitors are said to adopt a wider range of characteristic adaptations in interpersonal interactions. Low acquisitive self-monitors, on the other hand, are low in Plasticity and less likely to adjust their social behavior strategically in the pursuit of their agenda and social status (DeYoung & Weisberg, in press). This theorization is supported by findings examining the associations between acquisitive self-monitoring and basic personality traits (Avia et al., 1998; Wolf et al., 2009). These findings suggest that acquisitive self-monitoring is positively related to Extraversion and Openness/Intellect.

With the concern of avoiding social rejection in social interactions, high protective self-monitors are thought to monitor and attempt to adjust their public social behavior because of the absence of a stable sense of self to decide how to behave. High protective self-monitors have been described as emotionally unstable and experiencing higher levels of negative emotional states (DeYoung & Weisberg, in press). Wilmot (2015) also demonstrated the correlation between protective self-monitoring and the metatraits. It was shown that protective self-monitoring was negatively correlated with Plasticity, with an even stronger negative relation to Stability (i.e., the shared variance of Emotional Stability, Conscientiousness, and Agreeableness).
Protective self-monitoring has been found to be positively related to Neuroticism but negatively related to Conscientiousness (Avia et al., 1998; Wolf et al., 2009).


As status relates to the ability to secure resources and achieve goals, concerns about achieving social status would lead high acquisitive self-monitors to engage in more agentic behavior and communal behavior in general compared to low acquisitive self-monitors. The implication that acquisitive self-monitoring is equivalent to Plasticity and positively related to Extraversion and Openness/Intellect also encourages the expectation that in specific social situations where high acquisitive self-monitors perceive increased communal behavior from others, they would respond with more communal behavior compared to low acquisitive self-monitors. In addition, their proactive pursuit of social status suggests that when perceiving decreased agentic behavior from others, high acquisitive self-monitors would respond with increased agentic behavior compared to low acquisitive self-monitors. High protective self-monitors’ concern with avoiding social rejection and disapproval suggests that we would expect less agentic behavior in general among high protective self-monitors than among low protective self-monitors. The association of high protective self-monitoring with emotional instability supports the expectation that in specific social situations where they perceive increased communal behavior from others, high protective self-monitors would respond with more communal behavior compared to low protective self-monitors. High protective self-monitors’ main concern is social rejection and threat to social status. Therefore, high protective self-monitors were not expected to show different levels of agentic behavior in response to perceptions of agentic behavior relative to low protective self-monitors.

The event-contingent recording (ECR) method, an intensive repeated measurement procedure applied in naturally occurring settings, was used in the present study.
Participants reported on their own interpersonal behavior and their perceptions of interaction partners’ behavior over a 20-day period. Multilevel modeling, which allows the simultaneous investigation of within-person associations


such as the association between interpersonal behavior and perception of others’ interpersonal behavior for each individual, and of the influence of individual differences in self-monitoring on these within-person associations, was used in this study.

Method

Participants

Participants were recruited through in-class announcements, e-mail, and poster advertisements. All recruited participants were students attending the Smith School of Business at Queen’s University. A total of 317 participants were recruited over four years. Seventeen participants withdrew from the study and 34 individuals’ data were excluded for not meeting the cut-off criterion (i.e., they did not provide 12 or more days of ECR recordings); therefore, data from 266 (84%) participants were used. The current sample comprises 88 (33%) men and 177 (67%) women ranging in age between 17 and 25 years (M = 20.27 years, SD = 0.70). One participant did not indicate gender.

Procedure

Participants attended an initial meeting during which the study was explained and they were asked to complete a questionnaire battery, which included the Revised 18-item Self-Monitoring Scale (Gangestad & Snyder, 1985). They were then guided through a sample social interaction form that they would be completing via mobile application during the event-contingent recording (ECR) procedure. The ECR procedure required participants to complete multiple forms about their social interactions every day for twenty days. Participants completed an average of 48 ECR forms (SD = 25, ranging from 13 to 137) over the 20 days using this procedure, and were compensated $160 for participation in the study.
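As an illustration, the inclusion rule described above (at least 12 days with ECR recordings) can be sketched in Python; the function, participant IDs, and day counts below are hypothetical and are not the study's actual data or code.

```python
# Hedged sketch of the study's inclusion criterion: keep only participants
# who provided ECR recordings on at least 12 of the 20 study days.
# The IDs and day counts are invented for the example.
MIN_DAYS = 12

def screen_participants(days_with_records):
    """days_with_records: {participant_id: number of days with recordings}.
    Returns the set of participant IDs meeting the cut-off."""
    return {pid for pid, n in days_with_records.items() if n >= MIN_DAYS}

sample = {"p01": 20, "p02": 11, "p03": 15, "p04": 8}
print(sorted(screen_participants(sample)))  # ['p01', 'p03']
```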


Measures

Self-monitoring. The Revised 18-item Self-Monitoring Scale (SMS-R; Gangestad & Snyder, 1985) was used to measure participants’ acquisitive self-monitoring and protective self-monitoring (Briggs & Cheek, 1988; Lennox, 1988; Wilmot, 2015). Thirteen items measured acquisitive self-monitoring (e.g., “I can only argue for ideas which I already believe” or “I would probably make a good actor”), and five items assessed protective self-monitoring (e.g., “I may deceive people by being friendly when I really dislike them” or “I am not always the person I appear to be”). Participants indicated their responses using a True/False format. Average acquisitive self-monitoring and protective self-monitoring scores were calculated, with higher values indicating higher self-monitoring. The Cronbach coefficient alphas for the acquisitive and protective self-monitoring subscales in the current sample were .69 and .40, respectively. The correlation between the two subscales was .25.

Event-contingent recording. Using a standardized form on a mobile application, participants reported on social interactions soon after every interaction that occurred face-to-face or voice-to-voice (i.e., text-based social interactions such as e-mails or texting were not included in this study), lasted 5 minutes or longer, and involved mutual responding with at least one other person. The form included items measuring participants’ interpersonal behavior, affect, and perceptions of interaction partners’ behavior, as well as features of the situation such as location, relationship with the interaction partner, and mode of interaction.

Perceptions of interaction partners’ behavior. The Interpersonal Grid (Moskowitz & Zuroff, 2005) was administered to assess participants’ perceptions of interaction partners’ behavior. This is a reliable measure of perception that has been found to generalize across perceivers and to provide ratings that converge between perceiver and target (Moskowitz & Zuroff,


2005). The Interpersonal Grid is a 9 x 9 square grid consisting of a vertical axis representing agentic behavior and a horizontal axis representing communal behavior. The vertical (agentic) axis ranges from assured-dominant behavior to unassured-submissive behavior, and the horizontal (communal) axis ranges from cold-quarrelsome behavior to warm-agreeable behavior. The four corners of the Interpersonal Grid are anchored by critical (upper-left), engaging (upper-right), withdrawn (bottom-left), and deferring (bottom-right), respectively. Participants were asked to select a square on the grid to indicate their perception of an interaction partner’s behavior during the social interaction. Interaction partners’ perceived agentic and communal behavior was scored from 1 to 9, where higher scores indicate higher perceived agency and higher perceived communion.

Participants’ interpersonal behavior. The Social Behavior Inventory (Moskowitz, 1994) consists of four twelve-item scales that measure the four poles of interpersonal behavior (i.e., dominance, submissiveness, agreeableness, and quarrelsomeness). An example of dominant interpersonal behavior would be “during this interaction, I asked the other(s) to do something”. An example of submissive interpersonal behavior would be “during this interaction, I waited for the other person to talk or act first”. “During this interaction, I showed sympathy” would be considered an agreeable interpersonal behavior. “During this interaction, I did not respond to the other(s)’ questions or comments” would be an example of quarrelsome interpersonal behavior. To avoid the development of response sets, four different versions of the SBI were constructed and rotated across the 20 days of the study, with three items from each of the four scales on each form. First, the mean number of items corresponding to each behavior scale in each event was calculated.
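The per-event scoring pipeline, including the ipsatizing and difference-score steps elaborated in the remainder of this section, can be sketched as follows; the function and the endorsement counts are hypothetical illustrations, not the study's actual code.

```python
# Hedged sketch of per-event SBI scoring: each form offers 3 items per pole.
# Steps: (1) proportion of endorsed items per pole (range 0-1), (2) ipsatize
# by removing the event's mean across the four poles, (3) difference scores
# for agency (dominant minus submissive) and communion (agreeable minus
# quarrelsome). Counts passed in are invented for the example.
POLES = ["dominant", "submissive", "agreeable", "quarrelsome"]

def score_event(endorsed):
    """endorsed: {pole: number of items checked, 0-3} for one interaction."""
    raw = {p: endorsed.get(p, 0) / 3 for p in POLES}
    event_mean = sum(raw.values()) / len(POLES)
    ipsatized = {p: raw[p] - event_mean for p in POLES}
    agency = ipsatized["dominant"] - ipsatized["submissive"]
    communion = ipsatized["agreeable"] - ipsatized["quarrelsome"]
    return agency, communion

print(score_event({"dominant": 3}))  # (1.0, 0.0)
```

Note that the event mean cancels in each difference score, so ipsatizing matters for the pole scores themselves rather than for the agency and communion indices.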
For example, if a person endorsed one agreeable item in an event, the person’s agreeable behavior score for that event would be 1/3; these scores range between 0 and 1. Ipsatized scores were then


constructed by subtracting the mean score across all behavior scales within a given event from each behavior score for that event. Event-level agentic behavior was computed by subtracting the submissive behavior score from the dominant behavior score, whereas event-level communal behavior was computed by subtracting the quarrelsome behavior score from the agreeable behavior score.

Results

Descriptive Statistics

Table 1 provides descriptive statistics for the person-level variables and event-level variables.

Multilevel Modeling

The two primary goals of this study were to examine (1) the influence of self-monitoring on mean agentic and communal behavior, and (2) the moderating influence of self-monitoring on the relation between perceptions of an interaction partner’s agentic and communal behavior and the participant’s agentic and communal behavior. As observations produced by the ECR method are not independent (i.e., data for multiple events were nested within each participant), multilevel modeling was used. Event-level data (i.e., Level 1) were nested within participants (i.e., Level 2, or person-level). To examine the influence of self-monitoring on mean agentic and communal behavior, two models were constructed in which mean agentic and mean communal behavior were the dependent variables, respectively. Acquisitive and protective self-monitoring scores were entered concurrently as person-level predictors of agentic and communal behavior. The regression coefficient linking, for example, acquisitive self-monitoring with agentic behavior indicates the prediction of mean agentic behavior across all events by acquisitive self-monitoring. To examine the moderating influence of self-monitoring on the relation between the perception of an interaction partner’s interpersonal behavior and the participant’s interpersonal


behavior, two models were constructed in which interpersonal behavior, either communal or agentic behavior, was the dependent variable. In the model in which communal behavior was the dependent variable, the perception of an interaction partner’s communal behavior was entered as an event-level predictor, whereas acquisitive self-monitoring and protective self-monitoring were entered as person-level moderators of the within-person regression of the participant’s communal behavior on the perception of communal behavior. The model in which agentic behavior was the dependent variable was constructed similarly.

The event-level predictors were centered within each participant; that is, a participant’s mean perception of communal behavior was subtracted from the perception of an interaction partner’s communal behavior in each event. The same centering was performed on perceptions of agentic behavior. A within-person centered score represents the deviation of an event-level perception of an interaction partner’s communal (or agentic) behavior score from the person’s generalized perception of others’ communal (or agentic) behavior. The person-level predictors were centered on the sample mean, such that the sample’s mean acquisitive self-monitoring and protective self-monitoring scores were subtracted from an individual’s acquisitive self-monitoring and protective self-monitoring scores, respectively.

Mean Interpersonal Behavior and Self-Monitoring

The first model in the analysis examined the effect of acquisitive and protective self-monitoring on interpersonal behavior across all social events.

Acquisitive self-monitoring as a predictor of mean communal behavior. Inconsistent with expectations, acquisitive self-monitoring was not associated with mean communal behavior, b = .016, t(263) = .41, ns (see Table 2).
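The two centering operations used in these analyses can be sketched as follows; the function names, data structures, and values are hypothetical illustrations rather than the study's actual code.

```python
# Hedged sketch of the centering described in the analysis: event-level
# perceptions are centered within each person, and person-level
# self-monitoring scores are centered on the sample mean. Values invented.
def center_within_person(perceptions):
    """perceptions: {participant_id: [event-level perception scores]}.
    Returns each score as a deviation from that person's own mean."""
    centered = {}
    for pid, scores in perceptions.items():
        mean = sum(scores) / len(scores)
        centered[pid] = [s - mean for s in scores]
    return centered

def center_on_sample_mean(scores):
    """scores: {participant_id: self-monitoring score}.
    Returns each score as a deviation from the sample mean."""
    mean = sum(scores.values()) / len(scores)
    return {pid: s - mean for pid, s in scores.items()}

print(center_within_person({"p01": [4, 6, 8]})["p01"])  # [-2.0, 0.0, 2.0]
```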


Acquisitive self-monitoring as a predictor of mean agentic behavior. Consistent with expectations, acquisitive self-monitoring predicted mean agentic behavior across all events, b = .097, t(263) = 2.97, p < .01 (see Table 2). Participants higher on acquisitive self-monitoring reported behaving more agentically, b = .20, t(263) = 19.59, p < .001, than participants lower on acquisitive self-monitoring, b = .16, t(263) = 15.24, p < .001 (see Figure 1).

Protective self-monitoring as a predictor of mean communal behavior. Inconsistent with expectations, protective self-monitoring was not associated with mean communal behavior, b = -.045, t(263) = -1.30, ns (see Table 2).

Protective self-monitoring as a predictor of mean agentic behavior. Consistent with the hypothesis, protective self-monitoring predicted mean agentic behavior, b = -.059, t(263) = -2.02, p < .05 (see Table 2). Participants higher on protective self-monitoring reported behaving less agentically across all interactions, b = .16, t(263) = 15.78, p < .001, than did participants lower on protective self-monitoring, b = .19, t(263) = 18.96, p < .001 (see Figure 1).

Within-Person Covariation Between Interpersonal Behavior and Perception of Others’ Behavior

The second model in the analysis examined the moderating effect of acquisitive self-monitoring and protective self-monitoring (person-level variables) on the relation between agentic and communal interpersonal behavior (event-level variables) and perceptions of the interaction partner’s communal and agentic behavior (event-level variables).

The within-person association between communal behavior and perception of communal behavior. The event-level perception of a partner’s communal behavior was positively associated with communal behavior, b = .051, t(7958) = 22.42, p < .001. Perceptions of greater


communal behavior in the interaction partner were associated with increases in communal behavior.

Acquisitive self-monitoring as a moderator of the within-person association between communal behavior and perception of communal behavior. Consistent with the hypothesis that acquisitive self-monitoring would moderate the relation between perceived communal behavior and communal behavior, a significant interaction between acquisitive self-monitoring and perceived partner communal behavior was found, b = .026, t(7956) = 2.57, p < .05 (see Table 3 and Figure 2). Compared to low acquisitive self-monitors, high acquisitive self-monitors reported a greater increase in communal behavior when they perceived increased communal behavior in their interaction partner; slope for high acquisitive self-monitors, b = .057, t(7956) = 17.32, p < .0001; slope for low acquisitive self-monitors, b = .045, t(7956) = 14.05, p < .0001.

Protective self-monitoring as a moderator of the within-person association between communal behavior and perception of communal behavior. Inconsistent with the hypothesis that protective self-monitoring would moderate the relation between perceived communal behavior and communal behavior, no significant interaction between protective self-monitoring and perceived partner communal behavior was found, b = -.007, t(263) = -.73, ns (see Table 3).

The within-person association between agentic behavior and perception of agentic behavior. Event-level perceptions of an interaction partner’s agentic behavior were negatively associated with agentic behavior, b = -.0062, t(7958) = -2.84, p < .01. Perceptions of greater agentic behavior in the interaction partner were associated with decreases in agentic behavior.


Acquisitive self-monitoring and protective self-monitoring as moderators of the within-person association between agentic behavior and perception of agentic behavior. Inconsistent with the prediction that acquisitive self-monitoring would moderate the relation between perceived agentic behavior and agentic behavior, no significant interaction between acquisitive self-monitoring and perceived partner agentic behavior was found, b = .014, t(7956) = 1.33, ns (see Table 4). Similarly, protective self-monitoring did not moderate the association between perception of agentic behavior and agentic behavior, b = -.012, t(7956) = -1.24, ns (see Table 4).

Discussion

In this study, we have shown that the two kinds of self-monitoring, acquisitive and protective, are related to different patterns of interpersonal behavior. In general, individuals with a higher level of acquisitive self-monitoring exhibited more agentic behavior than individuals with a lower level of acquisitive self-monitoring. In contrast, individuals with a higher level of protective self-monitoring exhibited less agentic behavior, compared to individuals with a lower level of protective self-monitoring, across various situations. We further found that acquisitive self-monitoring moderated the association of communal behavior with the perception of the interaction partner’s communal behavior, such that high acquisitive self-monitors, compared to low acquisitive self-monitors, responded with more communal behavior to the perception of increased communal behavior in the interaction partner.

The demonstration of higher agentic behavior among higher acquisitive self-monitors reflects their concern with achieving social status, pursued by generally adopting behaviors that are more assertive and self-enhancing.
In a similar manner, the lower agentic behavior of higher protective self-monitors relative to lower protective self-monitors reflects the concern of avoiding social rejection among high protective self-monitors by generally


adopting behaviors that are more submissive and less dominant. The increased communal behavior in response to an interaction partner’s communal behavior may have significance for individuals higher on acquisitive self-monitoring, as responding with more communal behavior signals a warm and welcoming interaction circumstance that helps them in their pursuit of social status. These results align with the idea that high self-monitors, compared to low self-monitors, differ in the extent to which they engage in expressive control of their behavior in a social context. Furthermore, these findings converge with previous literature to demonstrate that high self-monitors attempt to construct patterns of behavior enabling them to achieve social status, but also possibly to avoid social rejection. Therefore, the results of the present study suggest that interpersonal behavior in our daily social interactions is influenced by an individual’s self-monitoring level (i.e., high vs. low), as well as by the type of self-monitoring (i.e., acquisitive vs. protective).

Consistent with previous research findings (Briggs & Cheek, 1988; Lennox, 1988; Wilmot, 2015), the results of the present study provided some evidence inconsistent with the one-latent-factor model of self-monitoring. Using a one-latent-factor model, important discrepant relations of the two kinds of self-monitoring (acquisitive vs. protective) would be obscured. For example, a previous study using an overall score on self-monitoring inferred that self-monitoring “is not well represented in the FFM”, where self-monitoring was found to have a moderate positive relation with Extraversion but negligible correlations with Emotional Stability, Agreeableness, and Conscientiousness (Barrick, Parks & Mount, 2005).
In contrast, by assessing acquisitive self-monitoring and protective self-monitoring distinctly, acquisitive self-monitoring was found to be positively associated with Extraversion and Openness/Intellect, whereas protective self-monitoring was found to be positively related to Neuroticism and negatively related to Conscientiousness (Avia et al., 1998; Wolf et al., 2009). Furthermore, acquisitive self-monitoring was found to be equivalent to the metatrait Plasticity, whereas protective self-monitoring was found to be negatively related to the metatrait Stability (Wilmot, 2015). The present study is among the first to examine the two kinds of self-monitoring in relation to patterns of interpersonal behavior along the communion and agency dimensions, furthering our understanding of the distinctions between acquisitive self-monitoring and protective self-monitoring and of their distinct influences on daily communal and agentic interpersonal behavior. Future research could re-examine past studies substituting a two-factor model of self-monitoring for the one-factor model that has typically been used.

Protective self-monitoring did not moderate the association between individuals’ communal behavior and perceptions of the interaction partner’s communal behavior. The absence of such an effect may be related to the low internal consistency of the items in the scale assessing protective self-monitoring (Cronbach coefficient alpha = .40). Among a total of eighteen items, only five were used to assess protective self-monitoring. The Revised 18-item Self-Monitoring Scale applied in the present study adopted a True/False response format instead of a Likert scale, which might further limit the variability of reported self-monitoring. Even though the moderating effect of protective self-monitoring on communal interpersonal behavior was not significant, it would be interesting to further examine the possibility that protective self-monitoring acts as a moderator of the influence of interaction partners’ behavior on an individual’s affect. As previous researchers (Briggs & Cheek, 1988; Lennox, 1988; Wilmot, 2015) characterized high protective self-monitors as emotionally unstable with concerns of avoiding social rejection in interactions, it would be expected that high protective self-monitors,


compared to low protective self-monitors, would respond with increased positive affect to increased communal behavior by interaction partners.

Future research could also examine whether setting moderates how people with different kinds of self-monitoring (acquisitive and protective) respond to others’ behavior. According to previous research, a work setting provides a strong, structured situation with well-defined roles and expectations to guide individuals’ behavior, whereas a nonwork setting provides a less constrained context and is therefore more ambiguous in guiding interpersonal behavior (Moskowitz, Ho & Turcotte-Tremblay, 2007). Findings from the present study reveal that individuals with higher acquisitive self-monitoring exhibit more agentic behavior than individuals with lower acquisitive self-monitoring generally across situations. It would be expected that setting would moderate the relation between acquisitive self-monitoring and agentic behavior, such that in a work setting, individuals with higher acquisitive self-monitoring, concerned with achieving social status, would exhibit even more agentic behavior than in a nonwork setting. In contrast, as findings from the present study show that individuals with higher protective self-monitoring exhibit less agentic behavior than individuals with lower protective self-monitoring generally across situations, it would be expected that setting would moderate the relation between protective self-monitoring and agentic behavior, such that in a work setting, individuals with higher protective self-monitoring, concerned with avoiding social rejection, would exhibit even less agentic behavior than in a nonwork setting.

Limitations

In the present study, we relied on participants’ self-reports of their interpersonal behavior and of their perceptions of interaction partners’ interpersonal behavior.
It might be argued that this is a limitation of the study, as participants’ self-reports might not accurately correspond to their interaction


partners’ reports of their interpersonal behavior. Future research to replicate these results might consider adding the other person in the interaction as a source of information. Another limitation of the study is that the temporal order of behaviors cannot be determined; the analyses rest on the untested assumption that the perception of the other preceded the participant’s behavior.

Conclusion

The results of the present study advance our understanding of how interpersonal behavior changes as a function of acquisitive self-monitoring and protective self-monitoring, based on assessments of interpersonal behavior and perceptions of others along the communion and agency dimensions in multiple interpersonal events. The concern for achieving social status among high acquisitive self-monitors is associated with more agentic behavior compared to low acquisitive self-monitors. The concern for avoiding social rejection reduces the extent to which high protective self-monitors exhibit agentic behavior compared to low protective self-monitors. Although people generally respond to communal behavior by others with communal behavior, the extent to which one’s communal behavior corresponds to the perception of the other’s communal behavior is moderated by acquisitive self-monitoring. The communal behaviors of self and others are most likely to correspond for individuals higher in acquisitive self-monitoring.


References

Avia, M. D., Sánchez-Bernardos, M. L., Sanz, J., Carrillo, J., & Rojo, N. (1998). Self-presentation strategies and the five-factor model. Journal of Research in Personality, 32(1), 108–114. doi:10.1006/jrpe.1997.2205

Barrick, M. R., Parks, L., & Mount, M. K. (2005). Self-monitoring as a moderator of the relationships between personality traits and performance. Personnel Psychology, 58(3), 745–767. doi:10.1111/j.1744-6570.2005.00716.x

Briggs, S. R., & Cheek, J. M. (1988). On the nature of self-monitoring: Problems with assessment, problems with validity. Journal of Personality and Social Psychology, 54(4), 663–678. doi:10.1037/0022-3514.54.4.663

DeYoung, C. G., & Weisberg, Y. J. (in press). Cybernetic approaches to personality and social behavior. In M. Snyder & K. Deaux (Eds.), Oxford Handbook of Personality and Social Psychology (2nd ed.). Oxford University Press.

Fuglestad, P. T., & Snyder, M. (2010). Status and the motivational foundations of self-monitoring. Social and Personality Psychology Compass, 4(11), 1031–1041. doi:10.1111/j.1751-9004.2010.00311.x

Rarick, D. L., Soldow, G. F., & Geizer, R. S. (1976). Self-monitoring as a mediator of conformity. Central States Speech Journal, 27(4), 267–271. doi:10.1080/10510977609367903


Garland, H., & Beard, J. F. (1979). Relationship Between Self-Monitoring and Leader Emergence Across Two Task Situations. Journal of Applied Psychology, 64(1), 72-76. doi:10.1037/h0078045 Gangestad, S., & Snyder, M. (1985). "To carve nature at its joints": On the existence of discrete classes in personality. Psychological Review, 92(3), 317-349. doi:10.1037/0033295x.92.3.317 Lennox, R. D. (1988). The problem with self-monitoring: A two-sided scale and a one-sided theory. Journal of Personality Assessment, 52(1), 58-73. doi:10.1207/s15327752jpa5201_5 Moskowitz, D. S. (1994). Cross-situational generality and the interpersonal circumplex. Journal of Personality and Social Psychology, 66(5), 921â&#x20AC;&#x201C;933. doi:10.1037/00223514.66.5.921 Moskowitz, D., Ho, M. R., & Turcotte-Tremblay, A. (2007). Contextual Influences on Interpersonal Complementarity. Personality and Social Psychology Bulletin, 33(8), 10511063. doi:10.1177/0146167207303024 Moskowitz, D. S., & Zuroff, D. C. (2005). Assessing interpersonal perceptions using the Interpersonal Grid. Psychological Assessment, 17(2), 218â&#x20AC;&#x201C;230. doi:10.1037/10403590.17.2.218 Snyder, M. (1974). Self-monitoring of expressive behavior. Journal of Personality and Social Psychology, 30(4), 526-537. doi:10.1037/h0037039


Snyder, M. (1979). Self-Monitoring Processes. Advances in Experimental Social Psychology Advances in Experimental Social Psychology Volume 12, 85-128. doi:10.1016/s00652601(08)60260-9 Snyder, M. & Gangestad, S. (1986). On the nature of self-monitoring: matters of assessment, matters of validity. Journal of Personality and Social Psychology, 51(1), 125-139. doi:10.1037//0022-3514.51.1.125 Turnley, W. H., & Bolino, M. C. (2001). Achieving desired images while avoiding undesired images: Exploring the role of self-monitoring in impression management. Journal of Applied Psychology, 86(2), 351-360. doi:10.1037//0021-9010.86.2.351 Wilmot, M. P. (2015). A contemporary taxometric analysis of the latent structure of selfmonitoring. Psychological assessment, 27(2), 353. doi:10.1037/pas0000030 Wolf, H., Spinath, F. M., Riemann, R., & Angleitner, A. (2009). Self-monitoring and personality: A behavioural-genetic study. Personality and Individual Differences, 47(1), 25â&#x20AC;&#x201C;29. doi:10.1016/j.paid.2009.01.040


Table 1
Descriptive Statistics for Person-Level and Event-Level Variables

Variable                               M       SD      Range
Acquisitive Self-monitoring            .59     .23     0 – 1.00
Protective Self-monitoring             .59     .26     0 – 1.00
Mean perceived communal behavior      6.97     .75     4.74 – 9.00
Mean perceived agentic behavior       6.24     .81     4.18 – 8.21
Mean communal behavior                 .37     .14     .049 – .73
Mean agentic behavior                  .18     .12     -.16 – .51
Events reported                      47.97   25.40     13 – 137

Table 2
Mean-level Interpersonal Behavior Predicted by Acquisitive Self-monitoring and Protective Self-monitoring

Interpersonal behavior                                      b       SE      df    t value    p
Acquisitive self-monitoring on communal behavior           .016    .039    263      .41     .68
Protective self-monitoring on communal behavior           -.045    .034    263    -1.30     .19
Acquisitive self-monitoring on agentic behavior            .097    .033    263     2.97**   .0033
Protective self-monitoring on agentic behavior            -.059    .029    263    -2.02*    .044

Note. Analyses were based on 7,956 observations from 266 participants. SE = standard error; df = degrees of freedom.
* p < .05. ** p < .01.


Table 3
Communal Behavior Predicted by Event-Level Perceived Partner’s Communal Behavior, Acquisitive Self-monitoring, Protective Self-monitoring, and their Interactions

Communal behavior                                                  b        SE      df     t value     p
Perceived partner’s communal behavior                             .051     .0023   7956    22.49***   .0001
Acquisitive Self-monitoring                                       .017     .041     263      .40      .69
Protective Self-monitoring                                       -.042     .036     263    -1.16      .25
Perceived partner’s communal behavior X Acquisitive
  Self-monitoring                                                 .026     .010    7956     2.57*     .010
Perceived partner’s communal behavior X Protective
  Self-monitoring                                                -.0068    .0094   7956     -.73      .47

Note. Analyses were based on 7,956 observations from 266 participants. SE = standard error; df = degrees of freedom.
* p < .05. *** p < .001.

Table 4
Agentic Behavior Predicted by Event-Level Perceived Partner’s Agentic Behavior, Acquisitive Self-monitoring, Protective Self-monitoring, and their Interactions

Agentic behavior                                                   b        SE      df     t value     p
Perceived partner’s agentic behavior                             -.0063    .0022   7956    -2.88**    .0040
Acquisitive Self-monitoring                                       .080     .034     263     2.37      .019
Protective Self-monitoring                                       -.046     .030     263    -1.55      .12
Perceived partner’s agentic behavior X Acquisitive
  Self-monitoring                                                          .010    7956
Perceived partner’s agentic behavior X Protective
  Self-monitoring                                                -.012     .0093   7956    -1.24

Note. Analyses were based on 12,761 observations from 266 participants. SE = standard error; df = degrees of freedom.
** p < .01.
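The event-level models summarized in Tables 3 and 4 are, in effect, multilevel regressions with events nested within participants and a cross-level interaction between event-level perception and person-level self-monitoring. The sketch below is not the authors' actual analysis code; it uses synthetic data and the statsmodels mixed-model formula interface, and all variable names and simulated coefficients are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic event-level data: 50 participants x 30 events each (illustrative only).
n_persons, n_events = 50, 30
person = np.repeat(np.arange(n_persons), n_events)
acq_sm = rng.uniform(0, 1, n_persons)[person]        # person-level acquisitive self-monitoring
perceived = rng.normal(0, 1, n_persons * n_events)   # event-level perceived partner communion

# Simulate communal behavior with a cross-level interaction: the
# perception-behavior slope grows with acquisitive self-monitoring.
behavior = (0.05 * perceived
            + 0.03 * acq_sm * perceived
            + rng.normal(0, 0.5, n_persons)[person]  # random intercept per person
            + rng.normal(0, 1, n_persons * n_events))

df = pd.DataFrame({"person": person, "behavior": behavior,
                   "perceived": perceived, "acq_sm": acq_sm})

# Random-intercept model with the cross-level interaction, analogous to Table 3.
model = smf.mixedlm("behavior ~ perceived * acq_sm", df, groups=df["person"])
result = model.fit()
print(result.params[["perceived", "perceived:acq_sm"]])
```

The `perceived:acq_sm` term plays the role of the "Perceived partner's communal behavior X Acquisitive Self-monitoring" row: a positive estimate means the perception-behavior correspondence is stronger for higher acquisitive self-monitors.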


Figure 1. Acquisitive self-monitoring and protective self-monitoring as predictors of mean agentic behavior

Figure 2. The relation between communal behavior and perception of other’s communal behavior as a function of acquisitive self-monitoring


The Effects of Pain Catastrophizing and the Experience of Pain on Vasovagal Reactions in Blood Donation Aliza Hirsch PSYC 395 Dr. Blaine Ditto


Abstract

Vasovagal reactions (VVR), which range from presyncopal symptoms such as faintness and weakness to full syncope, are a major barrier to recruiting and retaining blood donors. Although the connection between anxiety and an increased risk of a vasovagal reaction during blood donation is robust and well documented, little research has focused on the role of pain in VVR. The current study analyzes pain and pain catastrophizing as potential variables influencing the likelihood of vasovagal symptoms in blood donors. The study involved 504 donors who completed the Blood Donation Reactions Inventory (BDRI), a questionnaire assessing subjective vasovagal symptoms, along with various measures of anxiety, pain, and pain catastrophizing. In univariate analyses, significant pairwise associations were observed between BDRI scores, nurse-initiated chair reclining as a treatment for VVR (serving as the objective measure of VVR), anxiety, self-reported pain, and pain catastrophizing. However, after controlling for measures of pre-donation and in-chair anxiety, the association between pain and chair reclining became non-significant. These results suggest that the effects of pain and pain catastrophizing are not entirely independent of anxiety.

Keywords: vasovagal reaction, blood donation, anxiety, pain, pain catastrophizing


The Need for Blood Donation

The need for blood donation is an immense and growing concern due to the ageing population and low donor rates. Blood and its constituents are critical components of medical care in disease management, treatment, and surgery. Canadian Blood Services estimates that a single car crash victim could require blood transfusions from up to 50 donors (Canadian Blood Services, 2017). Blood transfusions are so prevalent that half of all Canadians will, in their lifetime, either need blood or know someone who will (Canadian Blood Services, 2017). This growing need is aggravated by extremely low turnout among eligible donors. Recent estimates suggest that 17.1 million Canadians are eligible to donate blood, yet a mere 4% of them do (Fan et al., 2012). Similarly, only 3% of eligible Quebec residents donate (Hema-Quebec, 2008). Blood collection agencies face further recruiting difficulties given increasingly strict donor criteria based on age, medical history, travel history, sexual activity, and more (Hema-Quebec, 2014).


Although these exclusion criteria are established to minimize the risk of unsafe blood transfusions, the increasingly strict standards lead to decreased donor rates. A recent study suggests that only 37.8% of the American population is eligible to give blood once all 31 exclusion criteria are considered (Riley, Schwei, & McCullough, 2007). Thus, in order to meet the staggering demand for blood, research should focus on developing strategies and understanding the variables that influence donor behaviour, so as to maximize the possibility of a positive donation experience. Given that blood donation is voluntary, ensuring a positive donor experience is critical to encouraging repeat donations. Contrary to this ideal scenario,


research has indicated that only approximately 50% of all first-time donors return for a second donation (Ownby, Kong, Watanabe, Tu, & Nass, 1999; Piliavin & Callero, 1991). Consequently, the blood donation system depends on a stream of new donors who must themselves eventually be replaced, since they do not go on to donate repeatedly (France et al., 2013). Evidently, establishing donor satisfaction is critical; however, some characteristics of the process make this difficult. In particular, the procedure may elicit anxiety, vasovagal symptoms, and pain, which can hinder a positive donor experience.

Pre-Donation Anxiety

Certain worries related to blood donation, such as the pain of the finger prick, venipuncture, blood loss, dizziness, nausea, and social embarrassment, can lead to higher levels of pre-donation anxiety, especially amongst novice donors (Ditto & France, 2006). Using the Blood Donation Reactions Inventory (BDRI), which asks raters to indicate potential negative reactions to blood donation including faintness, dizziness, weakness, blurred vision, and nausea, Ditto and France (2006) found that pre-donation anxiety was significantly related to three factors: blood donation related symptoms, pain of the finger prick, and venipuncture. The effects of anxiety on blood donation are further highlighted by the link between higher anxiety and a reduced desire to engage in future donations (France et al., 2013). Those with higher levels of pre-donation anxiety were less likely to indicate that they would donate again; when researchers followed up on donor activity in the following year, this effect was significant among women (Ditto & France, 2006). First-time donors tend to report higher levels of anxiety than more experienced donors (Chell, Waller, & Masser, 2016; France et al., 2013). As a result,


new donors might be less likely to donate again and therefore contribute to high rates of donor turnover. To counter blood donation anxieties, Sinclair et al. (2010) showed that donors who underwent a post-donation intervention aimed at reducing specific donor concerns reported lower levels of anxiety and were more likely to donate again in the upcoming year. As evidence that anxiety may reinforce the experience of pain during donation, France et al. (2013) found that increased anxiety led to greater reports of needle pain, which in turn correlated with increased donor dissatisfaction. Additionally, the literature has consistently demonstrated that elevated levels of anxiety are correlated with increased vasovagal reactions among donors (France et al., 2012; Labus, France, & Taylor, 2000; Meade, France, & Peterson, 1996). Though the relationship between pre-donation anxiety and the likelihood of vasovagal reactions has been firmly established, the details of this relationship remain largely unclear. The term "anxiety" is often vaguely defined in the context of blood donation, with little specificity about what is causing the anxiety (e.g. blood loss, pain, contagious illness). By identifying the specific mechanisms or processes underlying this anxiety, researchers can develop more effective techniques for its reduction. For example, if uneasiness about pain is most important, then therapeutic interventions can focus on reassuring donors that the pain will not be significant.

Vasovagal Symptoms

Pre-syncopal reactions (e.g. dizziness and lightheadedness) and syncopal reactions (fainting), resulting from reduced cerebral blood flow in normally healthy individuals, are crucial to consider when designing interventions to increase donor retention. The experience of a vasovagal reaction (VVR) has been significantly associated with decreased rates of donor return (Ditto &


France, 2006; Eder, Hillyer, Dy, Notari, & Benjamin, 2008; France, France, Roussos, & Ditto, 2004). Youth is the main risk factor associated with vasovagal reactions, so minimizing the likelihood of VVR is especially pertinent for retaining young, novice donors (Khan & Newman, 1999; Newman, 2014; Olatinju, Etzel, & Ciesielski, 2010; Schlump et al., 2008). Other risk factors for VVR include body size, number of previous donations, gender (i.e. women are more likely to experience a reaction, partly due to lower BMI), and length of phlebotomy (Newman, 2014). Mild reactions, including dizziness, weakness, and lightheadedness, are rather frequent during the blood donation procedure, and these symptoms significantly deter donors from donating again (France et al., 2004). Amongst 1052 donors, France et al. (2004) showed that those who scored higher on the BDRI were much less likely to give blood again in the following year: of those who scored below 10 on the BDRI (i.e. experienced fewer symptoms), 55% returned to donate in the following year, compared to a 35% return rate among those who scored above 10 (i.e. experienced more symptoms). Furthermore, techniques such as applied tension have been shown to reduce reported vasovagal symptoms during blood donation (Ditto et al., 2003; Holly, Baleigh, & Ditto, 2011; Holly, Torbit, & Ditto, 2011). Women who practiced this technique were more likely to report fewer vasovagal symptoms and reduced levels of anxiety. Further research has suggested that, for female donors, vasovagal symptoms can function as a mediator between pre-donation levels of anxiety and donor retention (Ditto & France, 2006).

Pain & Pain Catastrophizing

Measures of pain experienced during the blood donation process have usually been acquired through numerical rating scales or visual analogue scales given post-donation


(Ditto et al., 2003; France, Adler, France, & Ditto, 1994; France et al., 2013; Meade et al., 1996; Stowell, Trieu, Chuang, & Quarrington, 2009). Research suggests that these are reasonably reliable and valid means of obtaining pain ratings from adults (France et al., 1994; Hawker, Mian, Kendzerska, & French, 2011). Moreover, techniques have been developed to reduce the pain experienced during blood donation. For example, topical anesthesia has been shown to reduce pain ratings (Fisher et al., 1998; Shavit, Hadash, Knaani-Levinz, Shachor-Meyouhas, & Kassis, 2005; Stowell et al., 2009). In one study, some participants indicated that they would be more likely to donate blood if they were offered a topical anesthetic to reduce pain (Watanabe, Jay, Alicto, & Yamamoto, 2011). This was particularly true amongst novice donors. However, other research casts doubt on whether such treatments truly increase donor retention. For example, Stowell et al. (2009) found that ultrasound-enabled topical anesthesia did reduce the pain of phlebotomy relative to a placebo group; however, these donors reported that they were unlikely to ask for this treatment in future donations and did not feel that it would affect their willingness to donate. Thus, the literature on precisely how the pain experience impacts blood donation is rather sparse and mixed. It is possible that pain alters the likelihood of a vasovagal episode, thereby indirectly impacting the blood donation process. The experience of pain in blood donation has been closely tied to anxiety and vasovagal reactions in analyses of blood donor behavior (France et al., 2013; Meade et al., 1996). Lower ratings of needle pain were associated with lower anxiety levels and subjective syncope reaction scores, as well as higher rates of donor satisfaction (France et al., 2013).
Furthermore, pain ratings are generally higher amongst novice donors, which has been linked to issues with donor retention (Callero & Piliavin, 1983; Meade et al., 1996; Miller & Weikel, 1974). Gender has also been significantly linked to pain ratings during the blood donation


procedure. Amongst 722 participants, France et al. (1994) found that women with hypertensive parents reported lower levels of pain sensitivity during blood donation. Another study found that women gave higher pain ratings for the pre-donation finger prick and venipuncture than men (Ditto et al., 2003). Pain catastrophizing is defined as the tendency to adopt an amplified "mental set" during painful experiences (Sullivan et al., 2001). Research has indicated that catastrophizing is a crucial predictor of how an individual copes with and experiences pain (Granot & Ferber, 2005). In addition, a large portion of the pain catastrophizing literature has demonstrated the link between this "mental set" and a more intense and distressing pain experience (Keefe et al., 1987; Keefe, Brown, Wallston, & Caldwell, 1989; Sullivan et al., 2001), as well as a tendency for women to catastrophize more than men (Sullivan et al., 2001). Although the effects of fear and anxiety are significantly associated with blood donation, this research has been rather broad, and pain catastrophizing may be a specific mechanism that intensifies pain during blood donation. The link between pain and pain catastrophizing has been established in a wide range of studies, including those involving back pain, arthritis, surgery, and dental work (Sullivan et al., 2001). Research on blood donation has not yet examined this specific link, but given the previous literature, there is reason to believe that those who catastrophize during blood donation are more likely to provide higher pain ratings and potentially show an increased likelihood of experiencing VVR. Therefore, pain catastrophizing may act as a mediator that contributes to issues with donor retention.

The Current Study

This research project was part of a large-scale randomized controlled study of the effect of applied tension and respiration control techniques on blood donation and the


experience of vasovagal reactions. The present project examines the effects of anxiety, pain, and pain catastrophizing on VVR. Given the previous literature's agreement on the strong link between anxiety and VVR, the primary goal was to examine how pain and pain catastrophizing would affect the report of vasovagal symptoms. In concordance with previous research, it was hypothesized that higher pre-donation anxiety would lead to a greater likelihood of reporting vasovagal symptoms. Similarly, it was hypothesized that participants with higher levels of pain catastrophizing would rate the experience as more painful and be more likely to report vasovagal symptoms.

Methods

Participants

Participants were recruited from a larger pool of donors giving at mobile Hema-Quebec blood drives at universities and CEGEPs throughout the Montreal area. Given the location of these blood clinics, most of the participants were young students. The study took place at the site of donation. A total of 611 people participated, with an age range of 18–43 (M = 21.8 years, SD = 3.3 years). The average number of previous donations was 2.8. The sample included 321 women and 281 men. Participants were randomly assigned to one of four treatment conditions: 151 to applied muscle tension, 153 to respiration control, 153 to the combined treatment, and 154 to the control condition. Ninety-one volunteers were deferred following screening by Hema-Quebec and were thus unable to participate in the study.

Procedure

Participants were recruited following the completion of one of several Hema-Quebec prescreening tasks. After providing written consent, participants completed a questionnaire


providing basic information including age, weight, height, number of previous donations, and an anxiety rating. They were then randomly assigned to one of four treatments and watched a short film instructing them in their prescribed technique, which they were told to practice throughout the blood donation process. Each film had the same narrator and was provided in French and English. The first treatment was applied muscle tension: the video provided the viewer with muscle-tensing exercises, such as pointing the toes down and tensing the legs and arms, maintaining the tension for three seconds, and then relaxing for three seconds. In the respiration treatment, participants were instructed in a shallow and slow breathing technique intended to prevent hyperventilation. The third treatment combined applied muscle tension and respiration control, so this video included instructions on both techniques. The fourth condition comprised control participants, who were not required to watch an instructional video. Following the video, a research assistant reminded the participant to practice the corresponding technique throughout the donation. The research assistant then used a portable B-D monitor to take two measurements of the participants’ blood pressure and heart rate, and provided the participant with a portable capnometer. Participants then continued the typical donation procedure as instructed by Hema-Quebec. A research assistant observed the participant throughout the process and completed an observational form reporting any difficulties experienced by the participant, whether the participant practiced the assigned technique in accordance with the instructions, the presence of a vasovagal reaction, and whether the nurse initiated the chair reclining treatment.
The assistant also asked the participant for a verbal rating of relaxation on a scale of 1-100.


Following the donation, participants were held in a waiting area for several minutes to ensure that no vasovagal reaction would occur. The research assistant then helped the participant remove the capnometer before directing them to the designated post-donation area, where they could replenish with snacks and liquids and complete the post-donation questionnaire, which included the Blood Donation Reactions Inventory (BDRI) (France, Ditto, France, & Himowan, 2008), the Pain Catastrophizing Scale (Sullivan et al., 1995), and a rating, from 0-100, of their likelihood to donate again in the future. All participants, except those in the no-treatment control condition, also indicated whether they had practiced their given technique continuously or occasionally prior to, during, and after donation. Lastly, the researcher took two final measures of the donors’ blood pressure and heart rate.

Measures

Blood Pressure and Heart Rate. The research assistant took four readings of blood pressure and heart rate (Model A10, Becton Dickinson, Franklin Lakes, NJ) in total: two prior to donation and two post-donation.

End-Tidal Carbon Dioxide Measurements. The participant wore a portable capnometer (Microcap Plus, Oridion Capnography, Minneapolis, MN) throughout the procedure. The patient was informed that the machine is solely used to measure their end-tidal CO2 output. This model was chosen for its portability, which allowed it not to disturb the donation process.

Blood Donation Reactions Inventory (BDRI). The BDRI (France, Ditto, France, & Himowan, 2008) was used to measure participants’ vasovagal symptoms by having them indicate, on a six-point scale, their experience of symptoms including dizziness, faintness, and


nausea. An additional item was added asking participants to provide a state measure of pain in response to the needle.

Pain Catastrophizing Scale. This 13-item survey, rated on a five-point scale (Sullivan et al., 1995), was used to gain a general understanding of how participants react to pain. Subscores of rumination, magnification, and helplessness can be calculated, as well as an overall pain catastrophizing score. Example items include: "When I'm in pain, I feel I can't go on"; "I become afraid that the pain will get worse"; and "There's nothing I can do to reduce the intensity of the pain."

Results

Initial analyses were pairwise correlations between the following variables: pre-donation anxiety, in-chair rating of relaxation, needle pain, BDRI, and pain catastrophizing. Small but significant correlations between all variables were noted (Table 1). Follow-up regression equations examined the effects of pain and pain catastrophizing on vasovagal symptoms after controlling for anxiety ratings. Pain catastrophizers were significantly more likely to provide higher pain state ratings following donation. Pairwise comparisons also yielded a significant correlation between pre-donation anxiety and pain catastrophizing. The verbal rating of relaxation provided by the donor while donating blood was negatively correlated with pain catastrophizing. Perhaps not surprisingly, then, the association between pain catastrophizing and the need for treatment for a vasovagal reaction became non-significant when controlling for pre-donation anxiety and in-chair relaxation. However, the associations of pre-donation anxiety and in-chair relaxation with vasovagal reactions were maintained even when pain catastrophizing was entered first in the regression equation (B = .15, p < .001 and B = -.22, p < .001, respectively).


On the other hand, even after controlling for the effects of anxiety before and during donation, pain catastrophizing still had a significant effect on the report of vasovagal symptoms (B = .12, p = .003), though this result is harder to interpret given the subjective nature of the BDRI, which relies on self-reported ratings of weakness, dizziness, and so on. Similar results were obtained when using needle pain ratings instead of pain catastrophizing. Lastly, gender was added to the regression equations to determine whether it moderated the effects of pain or pain catastrophizing on vasovagal symptoms; however, no significant gender differences were found.

Discussion

The goal of this study was to examine the effects of pain and pain catastrophizing on vasovagal symptoms in blood donors. Significant research has shown the effects of anxiety on vasovagal symptoms, yet little work has sought to determine the causes of pre-donation anxiety and the effects of pain on VVR. It was hypothesized that higher ratings on a state pain measure and a pain catastrophizing scale (Sullivan et al., 1995) would significantly correlate with increased self-report of vasovagal symptoms on the BDRI following donation (France et al., 2008). Conforming to previous literature, it was expected that increased anxiety would be significantly correlated with the report of vasovagal symptoms. Given that the BDRI is a self-report measure, an objective measure of vasovagal symptoms during blood donation, nurse-initiated chair reclining, was included in the analyses. The analysis indicated that those who reported higher levels of pre-donation anxiety were more likely to indicate higher levels of pain catastrophizing. This finding agrees with previous literature showing that higher levels of anxiety correlate with increased reported pain in response to the needle (France et al., 2013). Furthermore, the verbal rating of relaxation, which


was provided by donors in the donation chair, was negatively associated with pain catastrophizing. That is, donors who felt more relaxed were less likely to report high levels of pain catastrophizing. Overall, these findings provide further support for the hypothesis that pain is a significant element to consider when designing interventions to reduce anxiety in blood donors. Naturally, pain catastrophizing was significantly correlated with pain state measures. Previous findings have also shown that those who tend to amplify their pain experiences are more likely to provide higher ratings of pain in response to a given stimulus (Keefe et al., 1989; Keefe et al., 1987; Sullivan et al., 1995; Sullivan et al., 2001). Thus, this study's findings support the connection between a more distressing pain experience and higher ratings of pain catastrophizing. These findings are unique, however, in their specific focus on the blood donation experience. When a more objective measure was used, pain and pain catastrophizing were not significantly correlated with vasovagal reactions after controlling for the effects of anxiety. The objective measure was the research assistants' report of whether the nurse initiated chair reclining, which has been supported as a valid indication of, and treatment for, VVR (Fisher et al., 2016; Ditto et al., 2003; Ditto & France, 2006). In contrast to the hypothesis, pre-donation anxiety was associated with the chair reclining treatment regardless of the effects of pain, as it was in previous blood donation research (Ditto et al., 2003). Thus, pre-donation anxiety appears to be the most important variable in predicting VVR. Interestingly, when using the BDRI, a self-report measure, there was a significant association between pain and pain catastrophizing and vasovagal symptoms, which was maintained even after controlling for pre-donation anxiety.
Prior research has supported the link between pain and vasovagal reactions (France et al., 2013). BDRI ratings have also been shown to be a significant indicator of blood donor return independent of the chair


reclining treatment. In other words, although the BDRI is a more subjective indicator, it correlates significantly with donor retention (France et al., 2004; France et al., 2013). Therefore, the relationship of pain and pain catastrophizing with the BDRI may still be relevant, especially considering the inventory's predictive power for future donor behavior. Based on these ambiguous results, it is evident that the effects of pain and pain catastrophizing on VVR emerge only with self-report measures. The results are mixed: to some degree, pain has an independent effect on VVR, but only when considering self-report measures rather than a more objective measure. Potentially, some specific aspects of VVR, as measured by the BDRI, can be triggered by feelings of pain and pain catastrophizing. For instance, the BDRI measures specific indicators such as dizziness, weakness, and fatigue, as opposed to simply assessing a more general reaction that would require immediate treatment. From a broader perspective, however, anxiety is more predictive of VVR than pain and pain catastrophizing considered independently. This research raises the question of whether pain catastrophizing and anxiety are simply two measures of the same underlying construct. Although not entirely conclusive, much of the literature supports the notion that anxiety and pain catastrophizing are theoretically distinct yet interrelated measures (Benore et al., 2015; Eccleston et al., 2005; Granot & Ferber, 2005). The finding that pain catastrophizing was significantly correlated with BDRI scores, even after controlling for anxiety, lends support to this view. However, the association between these two variables becomes non-significant when using the objective measure, which casts some doubt on the degree of distinctiveness between them. Perhaps pain catastrophizing is a better predictor of the subtler symptoms of VVR (e.g. weakness or dizziness) but not of more extreme reactions, such as one that requires the immediate action of a nurse.


It is reasonable to expect that anxiety is a better predictor of VVR than pain and pain catastrophizing. By definition, a vasovagal reaction is a physiological stress reaction indicated by a pattern of cardiovascular activity that leads to predictable symptoms such as increased heart rate and pupil dilation. This stress reaction requires some level of arousal and anxiety in order to occur; without this mediation, there is no response. Therefore, it is plausible that although pain and pain catastrophizing have a small but significant effect on vasovagal symptoms, anxiety serves as a more robust predictor, given the nature of the vasovagal response.

There are a few limitations to these findings which merit consideration. First, although the sample size was fairly large, the participants in this study were rather young (M = 21.8 years) and inexperienced, with an average of 2.8 previous donations. Thus, the demographic of the participants limits the generalizability of these findings, given that the donor population as a whole is usually older and well-versed in the donation procedure. However, this younger sample is better suited for studying VVR, since vasovagal reactions are more likely to occur among novice donors. Additionally, the measures used to assess participants present some limitations. The BDRI and the pain-state measure in response to the needle were assessed post-donation. Thus, symptoms of VVR and pain ratings were not reported during the blood donation, which may have skewed the results. For logistical reasons, it was not possible to obtain these measures during the donation, as doing so would interfere with the blood collection process. To minimize bias, a research assistant observed the donor at all times to note any signs of VVR, including the chair reclining treatment. The temporal sequence of events might also have affected the pain catastrophizing measure, since it was obtained following the blood donation procedure. Even though the pain catastrophizing scale is a general measure, undergoing a painful procedure might have altered the


participants' responses. Future researchers might want to include a pain catastrophizing scale both pre- and post-donation to achieve more accurate measurements. Lastly, the nature of the statistical analyses presents further limitations. Due to time restrictions, the possible mediation of the effects of pain catastrophizing by anxiety (and vice versa) was assessed using simple stepwise regression analyses. More thorough mediation analyses using the PROCESS technique and structural equation modeling (France et al., 2013) will be conducted in the future. Nevertheless, the current analyses provide a useful starting point for investigating the associations between these variables.

Conclusion

Altogether, evidence for the role of pain and pain catastrophization in vasovagal reactions is somewhat mixed. A comparison involving the self-reported BDRI did indicate a significant relationship between these variables. However, when the objective indicator of VVR was used and the effects of anxiety were controlled for, pain and pain catastrophization were not significantly associated with the experience of VVR. In general, anxiety was the better predictor of VVR. Therefore, although future interventions will likely benefit from reducing pain and targeting pain catastrophization, they should focus primarily on reducing anxiety-related symptoms. Future research should aim to identify the specific aspects of the donation procedure that elicit higher levels of anxiety. If it is not the painfulness of the procedure that donors are concerned with, researchers should consider other worries that might be eliciting anxiety, such as contracting illness through needle exposure, social embarrassment, and needle phobia.


Table 1
Pearson Correlation Values

                                Predonation   In-chair rating   Needle   Pain
                                anxiety       of relaxation     pain     catastrophizing
In-chair rating of relaxation     -0.30*
Needle pain                        0.08+          -0.22*
Pain catastrophizing               0.16*          -0.19*
BDRI                               0.24*          -0.26*         0.25*        0.24*
Chair reclining                    0.19*          -0.23*         0.13*        0.13*

Note. * = p < .05, + = p < .10 for all analyses
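For readers wishing to reproduce a Table-1-style summary, Pearson correlations with the note's significance flags can be computed as in the following sketch. The data here are simulated, not the study's measurements; Python with scipy is assumed:

```python
# Illustrative recomputation of Table-1-style Pearson correlations with
# the note's significance flags (* p < .05, + p < .10). The data below
# are simulated, not the study's actual measurements.
import numpy as np
from scipy import stats

def flag(r, p):
    """Format a correlation coefficient with a significance marker."""
    mark = "*" if p < .05 else "+" if p < .10 else ""
    return f"{r:.2f}{mark}"

rng = np.random.default_rng(1)
n = 200
anxiety = rng.normal(size=n)
relaxation = -0.3 * anxiety + rng.normal(size=n)   # built to correlate negatively

r, p = stats.pearsonr(anxiety, relaxation)
print("anxiety vs relaxation:", flag(r, p))
```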


References

Benore, E., D'Auria, A., Banez, G. A., Worley, S., & Tang, A. (2015). The influence of anxiety reduction on clinical response to pediatric chronic pain rehabilitation. The Clinical Journal of Pain, 31(5), 375-383.

Callero, P. L., & Piliavin, J. A. (1983). Developing a Commitment to Blood Donation: The Impact of One's First Experience. Journal of Applied Social Psychology, 13(1), 1-16.

Canadian Blood Services (2017). Who Does My Donation Help? Retrieved from https://blood.ca/en/blood/who-does-my-donation-help

Chell, K., Waller, D., & Masser, B. (2016). The Blood Donor Anxiety Scale: a six-item state anxiety measure based on the Spielberger State-Trait Anxiety Inventory. Transfusion, 56, 1645-1653.

Ditto, B., & France, C. R. (2006). Vasovagal symptoms mediate the relationship between predonation anxiety and subsequent blood donation in female volunteers. Transfusion, 46(6), 1006-1010.

Ditto, B., France, C. R., Lavoie, P., Roussos, M., & Adler, P. S. (2003). Reducing reactions to blood donation with applied muscle tension: a randomized controlled trial. Transfusion, 43(9), 1269-1275.

Ditto, B., Wilkins, J.-A., France, C. R., Lavoie, P., & Adler, P. S. (2003). On-Site Training in Applied Muscle Tension to Reduce Vasovagal Reactions to Blood Donation. Journal of Behavioral Medicine, 26(1), 53-65.


Eccleston, C., Jordan, A., McCracken, L. M., Sleed, M., Connell, H., & Clinch, J. (2005). The Bath Adolescent Pain Questionnaire (BAPQ): Development and preliminary psychometric evaluation of an instrument to assess the impact of chronic pain on adolescents. Pain, 118, 263-270.

Eder, A. F., Hillyer, C. D., Dy, B. A., Notari, E. P., & Benjamin, R. J. (2008). Adverse reactions to allogeneic whole blood donation by 16- and 17-year-olds. Journal of the American Medical Association, 299(19), 2279-2286.

Fan, W., Yi, Q. L., Xi, G., Goldman, M., Germain, M., & O'Brien, S. F. (2012). The impact of increasing the upper age limit of donation on the eligible blood donor population in Canada. Transfusion, 22(6), 395-403.

Fisher, R., Hung, O., Mezei, M., & Stewart, R. (1998). Topical anesthesia of intact skin: liposome-encapsulated tetracaine vs. EMLA. British Journal of Anesthesia, 81(6), 972-973.

Fisher, S. A., Allen, D., Dorée, C., Naylor, J., Di, A. E., & Roberts, D. J. (2016). Interventions to reduce vasovagal reactions in blood donors: a systematic review and meta-analysis. Transfusion, 26(1), 15-33.

France, C. R., Ditto, B., France, J. L., & Himawan, L. K. (2008). Psychometric Properties of the Blood Donation Reactions Inventory: a subjective measure of presyncopal reactions to blood donation. Transfusion, 48(9), 1820-1826.

France, C. R., France, J. L., Himawan, L. K., Stephens, K. Y., Frame-Brown, T. A., Venable, G. A., & Menitove, J. E. (2013). How afraid are you of having blood drawn from your arm? A simple fear question predicts vasovagal reactions without causing them among high school donors. Transfusion, 53(2), 315-321.


France, C. R., France, J. L., Kowalsky, J. M., Ellis, G. D., Copley, D. M., Geneser, A., Frame-Brown, T., Venable, G., Graham, D., Shipley, P., & Menitove, J. E. (2012). Assessment of donor fear enhances prediction of presyncopal symptoms among volunteer blood donors. Transfusion, 52(2), 375-380.

France, C. R., France, J. L., Roussos, M., & Ditto, B. (2004). Mild reactions to blood donation predict decreased likelihood of donor return. Transfusion and Apheresis Science, 30(1), 17-22.

France, C. R., France, J. L., Wissel, M. E., Ditto, B., Dickert, T., & Himawan, L. K. (2013). Donor anxiety, needle pain, and syncopal reactions combine to determine retention: a path analysis of two-year donor return data. Transfusion, 53(9), 1992-2000.

Granot, M., & Ferber, S. G. (2005). The roles of pain catastrophizing and anxiety in the prediction of postoperative pain intensity: a prospective study. The Clinical Journal of Pain, 21(5), 439-450.

Hawker, G. A., Mian, S., Kendzerska, T., & French, M. (2011). Measures of adult pain: Visual Analog Scale for Pain (VAS Pain), Numeric Rating Scale for Pain (NRS Pain), McGill Pain Questionnaire (MPQ), Short-Form McGill Pain Questionnaire (SF-MPQ), Chronic Pain Grade Scale (CPGS), Short Form-36 Bodily Pain Scale (SF-36 BPS), and Measure of Intermittent and Constant Osteoarthritis Pain (ICOAP). Arthritis Care & Research, 63, 240-252.

Hema-Quebec (2008). Fact Sheet About Blood Donation. Retrieved from https://www.hemaquebec.qc.ca/userfiles/file/media/anglais/dondesang/fiche-tech-sang-eng.pdf

Hema-Quebec (2014). Who Can Donate? Retrieved from https://www.hemaquebec.qc.ca/sang/donneur-sang/puis-je-donner/index.en.html


Holly, C. D., Balegh, S., & Ditto, B. (2011). Applied tension and blood donation symptoms: the importance of anxiety reduction. Health Psychology: Official Journal of the Division of Health Psychology, American Psychological Association, 30(3), 320-325.

Holly, C. D., Torbit, L., & Ditto, B. (2012). Applied tension and coping with blood donation: a randomized trial. Annals of Behavioral Medicine: A Publication of the Society of Behavioral Medicine, 43(2), 173-180.

Jakovina, B. S., Bicanic, G., Hrabac, P., Tripkovic, B., & Delimar, D. (2014). Preoperative autologous blood donation versus no blood donation in total knee arthroplasty: a prospective randomized trial. International Orthopedics, 38(2), 341-346.

Keefe, F. J., Brown, G. K., Wallston, K. A., & Caldwell, D. S. (1989). Coping with rheumatoid arthritis: catastrophizing as a maladaptive strategy. Pain, 37(1), 51-60.

Keefe, F. J., Caldwell, D. S., Queen, K. T., Gill, K. M., Martinez, S., Crisson, J. E., Ogden, W., & Nunley, J. (1987). Osteoarthritis knee pain: a behavioral analysis. Pain, 28(3), 309-321.

Khan, W., & Newman, B. H. (1999). Comparison of donor reaction rates in high school, college, and general blood drives. Transfusion, 39, 31S.

Labus, J. S., France, C. R., & Taylor, B. K. (2000). Vasovagal reactions in volunteer blood donors: Analyzing the predictive power of the medical fears survey. International Journal of Behavioral Medicine, 7(1), 62-72.

Meade, M. A., France, C. R., & Peterson, L. M. (1996). Predicting vasovagal reactions in volunteer blood donors. Journal of Psychosomatic Research, 40(5), 495-501.

Miller, T. R., & Wiekl, M. K. (1974). Blood donor eligibility, recruitment, and retention. Transfusion, 14(6), 616-622.


Newman, B. H. (2004). Blood donor complications after whole-blood donation. Current Opinion in Hematology, 11(5), 339-345.

Olatunji, B. O., Etzel, E. N., & Ciesielski, B. G. (2010). Vasovagal syncope and blood donor return: examination of the role of experience and affective expectancies. Behavior Modification, 34(2), 164-174.

Ownby, H. E., Kong, F., Watanabe, K., Tu, Y., & Nass, C. C. (1999). Analysis of donor return behavior. Retrovirus Epidemiology Donor Study. Transfusion, 39, 1128-1135.

Piliavin, J. A., & Callero, P. L. (1991). Giving Blood: The Development of an Altruistic Identity. Baltimore: Johns Hopkins University Press.

Riley, W., Schwei, M., & McCullough, J. (2007). The United States' potential blood donor pool: estimating the prevalence of donor-exclusion factors on the pool of potential donors. Transfusion, 47(7), 1180-1188.

Shavit, I., Hadash, A., Knaani-Levinz, H., Shachor-Meyouhas, Y., & Kassis, I. (2009). Lidocaine-based topical anesthetic with disinfectant (LidoDin) versus EMLA for venipuncture: a randomized controlled trial. The Clinical Journal of Pain, 25(8), 711-714.

Sinclair, K. S., Campbell, T. S., Carey, P. M., Langevin, E., Bowser, B., & France, C. R. (2010). An adapted post-donation motivational interview enhances blood donor retention. Transfusion, 50(8), 1778-1786.

Sullivan, M. J. L. (1995). The Pain Catastrophizing Scale: Development and Validation. Psychological Assessment, 7(4), 524-532.

Sullivan, M. J. L., Thorn, B., Haythornthwaite, J. A., Keefe, F., Martin, M., Bradley, L. A., & Lefebvre, J. C. (2001). Theoretical perspectives on the relation between catastrophizing and pain. The Clinical Journal of Pain, 17(1), 52-64.


Schlumpf, K. S., Glynn, S. A., Schreiber, G. B., Wright, D. J., Randolph Steele, W., Tu, Y., Hermansen, S., Higgins, M. J., Garratty, G., & Murphy, E. L. (2008). Factors influencing donor return. Transfusion, 48(2), 264-272.

Stowell, C. P., Trieu, M. Q., Chuang, H., Katz, N., & Quarrington, C. (2009). Ultrasound-enabled topical anesthesia for pain reduction of phlebotomy for whole blood donation. Transfusion, 49(1), 146-153.

Watanabe, K. M., Jay, J., Alicto, C., & Yamamoto, L. G. (2011). Improvement in likelihood to donate blood after being offered a topical anesthetic. Hawaii Medical Journal, 70(2), 28-29.


How Does a Carbohydrate-Based Breakfast Regimen Affect Aggression Levels in Prison Inmates? Samaa Kazerouni PSYC 512


Abstract

Canadian prisons have been experiencing increased incidents of aggressive behaviour between inmates. Research has found a relationship between increased serotonin (5-HT) levels and a reduction in aggressive behaviours. Additionally, previous studies have found that eating carbohydrate-based meals increases tryptophan (TRP) levels in the brain, promoting the synthesis of 5-HT. This study will combine these two findings to test the effects of eating daily carbohydrate-based meals on aggression levels in prison inmates. It will comprise a three-month intervention in which prison inmates receive an entirely carbohydrate-based breakfast each day. Various measures will be used to compare their aggression levels prior to, during, and after the intervention with those of inmates from another prison, which will serve as the control group. We hypothesize that this carbohydrate-based breakfast regimen will result in decreased levels of aggressive behaviour in prison inmates.

Keywords: tryptophan, serotonin, carbohydrate, aggression


How Does a Carbohydrate-Based Breakfast Regimen Affect Aggression Levels in Prison Inmates?

Aggression is a widely prevalent behaviour in our society. Between five and seven percent of the general population will meet criteria for Intermittent Explosive Disorder (IED) in their lives (Coccaro, Fanning, Phan & Lee, 2015). IED is a diagnosis of severe impulsive aggression, which is a subset of aggression and the focus of this study. Impulsive aggression is reactive, meaning that it occurs in response to a provocation, threat, or frustration. It contrasts with instrumental aggression, which is proactive and has the primary goal of obtaining a benefit or reward (Coccaro et al., 2015). The prevalence of impulsive aggression in society makes it a vital topic of research. As a result of Correctional Service Canada's (CSC) implementation of stricter limits on solitary confinement, there has been a rise in reported cases of aggression in Canadian prisons. Annual inmate-on-inmate assaults have risen from 301 cases in 2006-07 to 581 cases in 2014-15 (Tutton, 2016). It is imperative to be better informed about the mechanisms controlling aggression, as well as solutions to curb the rise of such behaviour in Canadian federal prisons.

The link between tryptophan, serotonin and aggression has been studied extensively. Tryptophan (TRP) is an essential amino acid found in high-protein foods such as meat, cheese and eggs (Nikulina & Popova, 1988). It cannot be synthesized by humans; therefore, our entire supply of TRP comes from the diet (Nikulina & Popova, 1988). TRP makes up very little of dietary protein (approx. 1%), whereas the other large neutral amino acids (LNAAs) together make up approx. 25% (Spring, Chiodo & Bowen, 1987). All LNAAs compete for access to the same carrier molecules to be transported across the blood-brain barrier (Steenbergen, Jongkees, Sellaro & Colzato, 2016). As a result, a diet rich in protein leads to smaller increases in TRP


plasma levels than in those of other LNAAs. Due to the greater prevalence of other LNAAs, TRP levels can actually decline after eating a high-protein meal (Steenbergen et al., 2016). However, the ingestion of carbohydrates has been found to increase the ratio of TRP to LNAA concentrations in blood plasma, giving TRP a competitive advantage in accessing the brain (Markus, 2007). This is a result of the carbohydrate-induced rise in glucose, which triggers insulin secretion, causing most LNAAs other than TRP to leave the bloodstream and be taken up by skeletal muscles (Markus, 2007). Even though there is little to no TRP in carbohydrates, eating a purely carbohydrate-based meal increases the influx of tryptophan into the brain, while even small amounts of protein can prevent the increase in the ratio between TRP and competing LNAAs (Richard et al., 2009).

The time of ingestion of carbohydrates may affect the ability of a meal to modify TRP availability. Fernstrom et al. (1979) compared three calorically equivalent high-carbohydrate meals eaten in one day and found an increase in the TRP to LNAA ratio only after the first meal (as cited in Richard et al., 2009). Ashley et al. (1982) confirmed these results by showing that evening meals comprised of either 20% protein or 500 kcal of carbohydrates had no significant effect on the TRP to LNAA ratio (as cited in Richard et al., 2009). Additionally, research has shown that peak plasma TRP levels are reached two hours after carbohydrate intake, and remain elevated for at least 7 to 12 hours (Steenbergen et al., 2016).

Tryptophan is the only source of serotonin (5-HT) in the body (Nikulina & Popova, 1988). After TRP crosses the blood-brain barrier, the tryptophan-hydroxylase enzyme converts it to 5-hydroxytryptophan (Richard et al., 2009). 5-hydroxytryptophan is then converted to 5-HT by the


decarboxylase enzyme (Richard et al., 2009). After ingesting foods rich in TRP, plasma levels increase, and the synthesis of 5-HT in the brain can be doubled. Figure 1 depicts the full process from TRP to 5-HT. There is well-founded evidence in the psychology literature supporting an inverse correlation between TRP and aggression; this is seen most clearly for impulsive aggression (aan het Rot, Moskowitz, Pinard & Young, 2006; Bernhardt, 1997; Coccaro et al., 2015; Kuepper et al., 2010; Moskowitz, Pinard, Zuroff, Annable & Young, 2001). For instance, Bjork, Dougherty, Moeller and Swann (2000) found that after TRP supplementation in aggressive men, higher plasma TRP levels (hence higher 5-HT levels) were associated with less aggressive responses to provocation. A study by Moskowitz et al. (2001) investigating the effect of TRP on social interaction found that increasing TRP levels decreased quarrelsome behaviour in healthy subjects in their everyday life. Psychosocial outcomes are corroborated by biochemical findings. Higher levels of serotonin in the hypothalamus and amygdala are associated with decreased aggression (Bernhardt, 1997). In fact, the serotonin deficiency hypothesis, based on one of the most frequently reported findings in biological psychiatry, states that aggressive and impulsive


personality traits in humans are associated with reduced levels of serotonin's metabolic product, 5-hydroxyindoleacetic acid, in the cerebrospinal fluid (de Boer & Koolhaas, 2005). The reduction of aggressive behaviours is one of the most common hypotheses of serotonin's role in pathological aggression (Duke, Bègue, Bell & Eisenlohr-Moul, 2013).

There is opposing evidence in the literature pertaining to the relationship between serotonin levels and aggression, with certain studies claiming that there is no relationship at all. A recent meta-analysis performed by Duke et al. (2013) reported only a small inverse correlation between the two, a stark shift from earlier literature. However, it is possible that the countervailing impact of different serotonergic effects masks the true serotonin-aggression relation (Duke et al., 2013). It is clear that the relationship between serotonin and aggression is more complex than previously believed. Studies considering serotonin levels as a function of satiation also report a decrease in aggressive behaviours (Bernhardt, 1997). In Nikulina and Popova's (1988) study on levels of aggression in minks, an increase in the activity of the 5-HT system in the hypothalamus and amygdala of a satiated mink resulted in reduced aggressive reactions to prey. It is important to note that instead of proteins, the minks were fed carbohydrates in this study (Nikulina & Popova, 1988).

Studies have been performed to examine the effects of nutritional supplements on aggression in prisoners. Schoenthaler et al. (1997) found that vitamin-mineral supplements resulted in reductions of aggressive incidents and rule-violation behaviours in incarcerated juveniles. A similar study by Gesch, Hammond, Hampson, Eves and Crowder (2002) found a 26% reduction in disciplinary offenses as a result of their intervention with supplementary vitamins, minerals and essential fatty acids in young adult prisoners.
Most recently, Zaalberg, Nijman, Bulten, Stroosma


and Staak (2009) found a reduction in reported incidents of aggression in prisoners who received daily nutritional supplementation. These diet-intervention studies in prisons have successfully reduced aggression levels; however, no studies have been performed with the goal of increasing serotonin levels through carbohydrate-based meals to reduce aggression in inmates. The goal of this study is to empirically test whether a purely carbohydrate-based meal will influence levels of aggressive behaviour in prison inmates. We hypothesize that implementing a carbohydrate-based meal at the beginning of the day will increase tryptophan levels, thus increasing serotonin levels in the brain and leading to a reduction in aggressive behaviour.

Methods

Participants

This study will include 600 male adult (aged 18 years or over) prison inmates from two Canadian federal prisons. The prisons will be equivalent in size (300 beds) and located in the same province. These prisons will be maximum-security institutions, matched for the number of cases of aggression per year. One prison population will be the experimental population, and the other will be the control population. Participants will be enrolled in the trial after having provided written informed consent, countersigned by a member of the prison staff. The study will have to conform to the normal operations of the institutions, where participants might leave for parole or cell relocation. Thus, the analysis will allow participation to vary from one to three months. Participants will receive a small financial compensation for their cooperation.

Measures

The Buss Perry Aggression Questionnaire (AQ). The AQ (Buss & Perry, 1992) is a 29-item questionnaire in which participants rate statements along a five-point scale from “extremely uncharacteristic of me” to “extremely characteristic of me”. It covers four


dimensions of aggression: physical aggression, verbal aggression, anger and hostility. The scores are normalized on a scale from zero to one, with one being the highest level of aggression.

Social Dysfunction and Aggression Scale (SDAS). The SDAS (Wistedt et al., 1990) consists of ten items covering both outward and inward aggression. Responses are ranked on a five-point scale from zero to four, with zero implying the behaviour is not present and four meaning that the behaviour is present to a severe degree.

Prison Reports. Information from mandated reports made by the ward staff of the prison about aggressive and rule-breaking behaviour will be included in our analysis.

Dietary Intake. The dietary intake of participants will be assessed through food diaries. Participants will indicate which of the breakfast choices they ate and how much. They will be asked to report on all items consumed, including beverages.

Procedure

First, all inmates in both the control and experimental groups will complete a demographic survey as well as the AQ, and all prison staff in both prisons will complete the SDAS to measure the prisoners' baseline levels of aggression. Participants will be told that CSC is implementing a trial breakfast menu in their prison. In the experimental group, the intervention will consist of breakfast in the prison being a strictly carbohydrate-based meal with no protein whatsoever. Participants in the intervention will fill out a daily food diary after they eat their breakfast. The control prison will receive a new breakfast menu with the same nutritional breakdown as typical prison meals to maintain the guise of a trial menu implemented by CSC. This intervention will continue for three months. Inmates and staff will complete the AQ and SDAS, respectively, at the end of the three-month period. Additionally, the total number of incidents reported by ward staff


from one month before the intervention starts (baseline) and during the period of the intervention will be examined.

Data Analysis

Three paired t-tests will be performed to find significant differences in inmates' aggression levels before and after the intervention. These tests will compare AQ scores and SDAS scores before and after the intervention, as well as the number of incident reports filed before, during and after the intervention. Differences between the experimental and control groups on the outcome measures will be tested by means of repeated-measures ANOVA. To understand how the various outcome measures are interrelated, a correlation matrix will be made. To make the incident data comparable between inmates who stay in the trial for the full three months and those who have to stop earlier, incident numbers will be converted into rates per 100 prison days. Negative binomial regression analyses will be performed on these incident rates due to the highly skewed distribution of such incident variables (Zaalberg et al., 2009).

Results

In support of our hypothesis, we expect to see a decrease in aggression levels in inmates in the experimental group following the intervention. There will be a reduction in self-reported levels of aggressiveness and hostility over time, as measured with the AQ. There will also be a reduction in aggression scores after the intervention, as rated by the prison staff through the SDAS (Figure 2). In the control group, no significant change is expected in either AQ or SDAS scores. We expect to see a decrease in the number of incidents reported by prison staff over time within the experimental group (Figure 3). The control group will not see any significant variation in the number of aggressive incidents over this time period.


Further subgroup analyses will be performed using these data. We will consider the change in aggression levels of those who initially had the most incident reports, as well as of inmates who were convicted of crimes involving high levels of aggression. We expect to see greater changes in aggression levels as a result of the intervention in these groups, due to the likelihood that their baseline levels were higher than average. Finally, we will perform a subgroup analysis dividing inmates into three age cohorts, to see whether age plays a role in the results. We expect the greatest change in aggression levels in the youngest cohort.

Discussion and Conclusions

We hypothesized that implementing a carbohydrate-based breakfast would increase tryptophan levels, leading to reduced aggression levels in prison inmates. This study would be the first to implement a dietary change, rather than a nutritional supplement, within the prison system. It is important to identify the direct link between biochemical processes and psychosocial outcomes. This study addresses that significant gap in the literature, and proposes a mechanism by which satiety through the ingestion of carbohydrates reduces aggressive behaviours. This study has real-life implications, because its success could lead to a carbohydrate-based breakfast regimen being introduced in prisons across the country. It would not require any significant shift


in prison food budgeting or management, seeing as carbohydrates are typically inexpensive and easily prepared, making this a sustainable solution to inmate aggression. This study is being conducted in maximum-security male prisons, whose inmates are likely to display aggressive behaviours more frequently than the average person. In fact, those in prison might exhibit abnormal 5-HT synthesis mechanisms, which could have contributed to their convictions in the first place. It is important to note that the findings of this study may therefore not generalize to other populations. Another limitation is that variation in participants' genes associated with serotonergic functions might contribute to interindividual variability in response to tryptophan. It is difficult to control for such variation, hence it should be considered during statistical analysis. Additionally, the large age range, as well as stress or insulin resistance, might affect the levels of serotonin synthesized in each individual's brain. Nevertheless, Richard et al. (2009) state that fluctuations in the TRP to LNAA ratio and changing TRP availability are the two factors most likely to affect the process of serotonin synthesis, suggesting that the other factors play a comparatively minor role. Due to the nature of the intervention proposed, this study is not double-blind. Hence, it is important to consider the potential for bias in prison staff observing aggression levels. However, the use of objective measures (incident reports) mitigates this source of bias. Supplementary investigations should include assessments of TRP and 5-HT levels in blood samples taken before and during the intervention. This would provide further evidence that a reduction in aggression is indeed attributable to increased TRP and 5-HT levels, and not to other confounding factors. An empirical link between carbohydrate ingestion, increased TRP and reduced aggression would support our results.
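The TRP-availability mechanism underlying the proposed intervention can be illustrated with a toy calculation; all concentrations below are made-up numbers chosen only to show the direction of the effect, assuming Python:

```python
# Toy illustration of the TRP-availability mechanism from the
# introduction: insulin released after a carbohydrate meal clears
# competing LNAAs from plasma, raising the TRP:sum(LNAA) ratio even
# though plasma TRP itself is unchanged. All concentrations are
# made-up numbers, not empirical values.

def trp_ratio(trp, competing_lnaas):
    """Plasma TRP relative to the summed competing LNAAs."""
    return trp / sum(competing_lnaas)

before = trp_ratio(trp=50.0, competing_lnaas=[100.0, 120.0, 80.0, 150.0, 90.0])
# After insulin secretion, competing LNAAs are taken up by skeletal muscle:
after = trp_ratio(trp=50.0, competing_lnaas=[60.0, 70.0, 50.0, 90.0, 55.0])

print(f"TRP:LNAA before carbohydrate meal: {before:.3f}")
print(f"TRP:LNAA after carbohydrate meal:  {after:.3f}")
```

The ratio rises even though the TRP value itself never changes, which is why even small amounts of protein (adding competing LNAAs back to plasma) can blunt the effect.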


To further study the effects of carbohydrate-based meals on aggressive behaviour, other groups could also be targeted, specifically populations with high base rates of aggressive incidents. This research could also target particular categories of offenders, such as perpetrators of domestic violence with severe aggression-management issues. Conducting similar studies in different populations could increase the generalizability of these findings.


References

Aan het Rot, M., Moskowitz, D., Pinard, G., & Young, S. (2006). Social behaviour and mood in everyday life: The effects of tryptophan in quarrelsome individuals. Journal of Psychiatry & Neuroscience, 31(4), 253-262.

Bernhardt, P. C. (1997). Influences of Serotonin and Testosterone in Aggression and Dominance. Current Directions in Psychological Science, 6(2), 44-48. doi:10.1111/1467-8721.ep11512620

Bjork, J., Dougherty, D. M., Moeller, G., & Swann, A. C. (2000). Differential Behavioral Effects of Plasma Tryptophan Depletion and Loading in Aggressive and Nonaggressive Men. Neuropsychopharmacology, 22(4), 357-369. doi:10.1016/s0893-133x(99)00136-0

Buss, A. H., & Perry, M. (1992). Aggression Questionnaire. PsycTESTS Dataset. doi:10.1037/t00691-000

Coccaro, E. F., Fanning, J. R., Phan, K. L., & Lee, R. (2015). Serotonin and impulsive aggression. CNS Spectrums, 20(3), 295-302. doi:10.1017/s1092852915000310

De Boer, S. F., & Koolhaas, J. M. (2005). 5-HT1A and 5-HT1B receptor agonists and aggression: A pharmacological challenge of the serotonin deficiency hypothesis. European Journal of Pharmacology, 526(1-3), 125-139. doi:10.1016/j.ejphar.2005.09.065

Duke, A. A., Bègue, L., Bell, R., & Eisenlohr-Moul, T. (2013). Revisiting the serotonin and aggression relation in humans: A meta-analysis. Psychological Bulletin, 139(5), 1148-1172. doi:10.1037/a0031544


Gesch, C. B., Hammond, S. M., Hampson, S. E., Eves, A., & Crowder, M. J. (2002). Influence of supplementary vitamins, minerals and essential fatty acids on the antisocial behaviour of young adult prisoners: Randomised, placebo-controlled trial. The British Journal of Psychiatry, 181(1), 22-28. doi:10.1192/bjp.181.1.22

Kuepper, Y., Alexander, N., Osinsky, R., Mueller, E., Schmitz, A., Netter, P., & Hennig, J. (2010). Aggression - Interactions of serotonin and testosterone in healthy men and women. Behavioural Brain Research, 206(1), 93-100. doi:10.1016/j.bbr.2009.09.006

Markus, C. R. (2007). Effects of carbohydrates on brain tryptophan availability and stress performance. Biological Psychology, 76(1-2), 83-90. doi:10.1016/j.biopsycho.2007.06.003

Moskowitz, D., Pinard, G., Zuroff, D. C., Annable, L., & Young, S. N. (2001). The Effect of Tryptophan on Social Interaction in Everyday Life: A Placebo-Controlled Study. Neuropsychopharmacology, 25(2), 277-289. doi:10.1016/s0893-133x(01)00219-6

Nikulina, E. M., & Popova, N. K. (1988). Predatory aggression in the mink (Mustela vison): Roles of serotonin and food satiation. Aggressive Behavior, 14(2), 77-84. doi:10.1002/1098-2337(1988)14:23.0.co;2-3

Richard, D. M., Dawes, M. A., Mathias, C. W., Acheson, A., Hill-Kapturczak, N., & Dougherty, D. M. (2009). L-Tryptophan: Basic Metabolic Functions, Behavioral Research and Therapeutic Indications. International Journal of Tryptophan Research, 2, 45-60. doi:10.4137/ijtr.s2129


Schoenthaler, S., Amos, S., Doraz, W., Kelly, M., Muedekinig, G., & Wakefield, J., Jr. (1997). The Effect of Randomized Vitamin-Mineral Supplementation on Violent and Nonviolent Antisocial Behavior Among Incarcerated Juveniles. Journal of Nutritional & Environmental Medicine, 7(4), 343-352. doi:10.1080/13590849762475

Spring, B., Chiodo, J., & Bowen, D. J. (1987). Carbohydrates, Tryptophan, and Behavior: A Methodological Review. Psychological Bulletin, 102(2), 234-256.

Steenbergen, L., Jongkees, B. J., Sellaro, R., & Colzato, L. S. (2016). Tryptophan supplementation modulates social behavior: A review. Neuroscience & Biobehavioral Reviews, 64, 346-358. doi:10.1016/j.neubiorev.2016.02.022

Tutton, M. (2016, December 12). Canadian prisons see sudden spike in violence. Retrieved from https://globalnews.ca/news/3122684/canadian-prisons-see-sudden-spike-in-violence/

Wistedt, B., Rasmussen, A., Pedersen, L., Malm, U., Träskman-Bendz, L., Wakelin, J., & Bech, P. (1990). The Development of an Observer-Scale for Measuring Social Dysfunction and Aggression. Pharmacopsychiatry, 23(06), 249-252. doi:10.1055/s-2007-1014514

Zaalberg, A., Nijman, H., Bulten, E., Stroosma, L., & Staak, C. V. (2009). Effects of nutritional supplements on aggression, rule-breaking, and psychopathology among young adult prisoners. Aggressive Behavior, 36(2), 117-126. doi:10.1002/ab.20335


Pregnancy-Related Chemosignals Produce Analgesia and Increase Corticosterone Levels in Male Mice Rachel Nejade PSYC 395


Abstract

In recent years, the importance of chemosignals for rodent behavior has become increasingly appreciated in fields such as psychology, neuroscience, and physiology. Previous studies have shown the effects of chemosignals from male mice on the physiological and behavioral responses of female mice, yet the reverse has rarely been researched. The present study examines how chemosignals from pregnant females affect pain sensitivity in exposed mice, as well as the physiological changes that accompany exposure to these chemosignals. Using a variety of behavioral and physiological techniques, we demonstrate that pregnancy-associated chemosignals can produce analgesia in male mice, and that this effect is mediated by stress. These findings lead to a better understanding of sex differences in the physiological and behavioural changes of mice exposed to chemosignals, and should further interest in sex differences in pain research.

Key words: analgesia, chemosignaling, Hargreaves assay, corticosterone.


Pregnancy-Related Chemosignals Produce Analgesia and Increase Corticosterone Levels in Male Mice

Recent literature has demonstrated a clear consensus regarding the influence of chemosignals from male mice on the physiology and behavior of female mice. A major finding demonstrated that male urine, due to its high level of protein, causes synchronized estrus induction as well as puberty acceleration and delay (Weidong et al., 1998). These pheromones have been shown to be androgen-dependent, as these effects were not replicated with castrated males (Whitten, 1959). Further research demonstrated that urinary volatiles influence the behavior and physiology of female mice regardless of the sex or age of the subjects (Weidong et al., 1998). Other studies have shown that urinary protein complexes are associated with the Bruce effect, whereby the scent of an unfamiliar male mouse causes a pregnancy block in recently mated females (Bruce, 1959).

Although the effect of male chemosignals on female mice has been thoroughly studied, the effect of female chemosignals on male mice has rarely been investigated. It has been shown, however, that female urinary volatiles differ in concentration during gestation and lactation (Jemiolo et al., 1987), and that those volatiles could potentially act as chemosignals. As such, the present study focuses on the effects of pregnancy-associated and lactation-associated chemosignals on pain sensitivity in exposed animals. We hypothesize that chemosignals released by pregnant females may cause stress in an exposed animal, thus leading to stress-induced analgesia. Indeed, stress-induced analgesia (the phenomenon by which stress activates different intrinsic pain inhibitory mechanisms; Butler & Finn, 2009) has been well documented (Akil et al., 1976; Bodnar et al., 1978; Butler & Finn, 2009; Lewis et al., 1981; Mayer et al., 1971; Terman et al., 1984). In order to determine the effects of chemosignals, we used a variety of behavioral and physiological techniques, such as Hargreaves’ assay to measure thermal nociception and fecal boli counts to measure stress levels. To measure physiological changes, we performed a corticosterone analysis using the ab108821 Corticosterone ELISA kit.

This study demonstrates that chemosignals from pregnant females significantly affect pain sensitivity in male, but not female, mice. Furthermore, we show that exposure to pregnant females increases corticosterone levels in male, but not female, mice. This finding is of great importance because there is very little research on the role of chemosignals in rodents outside of reproduction, and it describes a type of chemosignal not previously reported. Further studies will have to replicate the results found for lactation-associated chemosignals and bedding, and determine whether this analgesic response is accompanied by a physiological change in male corticosterone levels.

Methods

Subjects

Subjects were naïve CD-1 male and female outbred mice, at least 6 weeks of age. CD-1® (Crl:ICR) mice were purchased from Charles River Laboratories (Boucherville, QC or Durham, NC). In the Hargreaves’ assay, the sample consisted of n = 140 naïve males, n = 120 naïve females, n = 24 castrated males, and n = 24 sham males. In the fecal boli count study, the sample consisted of n = 64 males. Finally, in the corticosterone study, the sample consisted of n = 32 males and n = 16 females. Early pregnant mice were 5-8 days into gestation, and late


pregnant mice were 17-21 days into gestation. All mice were housed in standard polycarbonate cages in groups of 3-4 same-sex littermates in a temperature-controlled (20 ± 1 °C) environment (14:10 h light/dark cycle; lights on at 07:00 h); tap water and food (Harlan Teklad 8604) were available ad libitum. All procedures were approved by local animal care and use committees and were consistent with national guidelines.

Thermal Nociceptive Assays

A thermal nociceptive assay evaluates the ability of a rodent to detect a noxious thermal stimulus, which activates nociceptors. The existence of pain is assessed through behaviors such as withdrawal, licking, and vocalization (Barrot, 2012).

Hargreaves’ Assay

Each mouse was placed in an individual chamber of a 12-chamber apparatus with transparent plastic outer walls to allow experimental observation and metal inner walls that visually isolated the mice from one another. The worktable was equipped with a clear glass top to allow the noxious stimulus (a high-intensity beam (3020E) from a mobile projector lamp bulb located below the glass tabletop and aimed at the upper hind paw of the mouse) to reach the mouse. Mice habituated for 1-1.5 hours in the testing room prior to testing. A paw withdrawal latency cut-off of 40 s was imposed to prevent the possibility of tissue damage. Response latencies in this test are highly variable, and thus each mouse was tested up to 8 times on each hind paw (Hargreaves et al., 1988; Mogil et al., 1999).

This study measured paw withdrawal latency to noxious thermal stimuli when both male and female mice were exposed to: naïve male, naïve female, post-weaning, early pregnant, late pregnant, and lactating mice, as well as pregnant mice bedding. To ensure the phenomenon studied was associated with chemosignals and not an environmental third factor, castrated males were also


tested and exposed to pregnant mice. Each group had baseline measurements and post-exposure measurements.

Fecal Boli Count

Hall (1934) found that defecation and urination in rats are associated with activation of the sympathetic nervous system and can therefore serve as indicators of emotional stress. This technique was used in the present study as an indicator of stress. Mice were placed in a 12-chamber apparatus with transparent plastic outer walls and metal inner walls that isolated each mouse. Mice were allowed to habituate for 1-1.5 hours in the testing room prior to exposure. The animals were placed on an elevated mesh platform, which allowed fecal boli to pass through and reach the “collection area” delimited for each cubicle. This collection area was made of white papers, each numbered according to which mouse the fecal boli belonged to. Only naïve male mice were tested. Data were collected after the following periods: habituation (1 hour), baseline (30 minutes), post-exposure (0-30 min), and post-exposure (30-60 min). The collection area was changed and cleaned immediately after each period.

Enzyme-Linked Immunosorbent Assay (ELISA)

Ab108821 Corticosterone ELISA kit

Corticosterone is a main glucocorticoid involved in the regulation of energy, immune reactions, and stress responses in rodents. Ab108821 uses a quantitative “sandwich” enzyme immunoassay technique to measure corticosterone in less than 3 hours. A polyclonal antibody specific for corticosterone was pre-coated onto a 96-well microplate with removable strips. Corticosterone in standards and samples is sandwiched by the immobilized polyclonal antibody and a biotinylated polyclonal antibody specific for corticosterone, recognized by a streptavidin


peroxidase conjugate. All unbound material is then washed away and a peroxidase enzyme substrate is added. Finally, colour development is stopped and the intensity of the colour is measured using a microplate reader capable of measuring absorbance at 450 nm (Abcam, ab108821).

This assay was performed to quantify plasma corticosterone concentration as a stress biomarker in naïve males and females. The Corticosterone ELISA kit was used to analyse the amount of corticosterone in the subjects’ plasma after they had been exposed to olfactory stimuli (i.e., pregnancy-associated chemosignals). Each mouse was placed in a 12-chamber apparatus with transparent plastic outer walls and metal inner walls that isolated each mouse. Mice were allowed to habituate for 1-1.5 hours in the testing room prior to exposure. Then, the first eight mice, used as the baseline, were taken out of the testing room for blood collection by decapitation, to avoid confounding variables caused by placement in a euthanasia device. The blood samples were transferred into a test tube with one-tenth volume of 0.1 M sodium citrate as an anticoagulant and later centrifuged for 10 minutes at 3000 × relative centrifugal force (RCF) at 4 °C. The undiluted plasma samples collected after centrifugation were stored at -20 °C while awaiting analysis. The remaining eight mice were given another 30 minutes to habituate and were then presented with an olfactory stimulus for 30 minutes. Male mice were exposed to pregnant or naïve females, while naïve female mice were exposed only to naïve female mice, as an additional control. Once the exposure time had passed, these eight mice were taken out of the cubicles and their blood was collected and centrifuged in the same manner as for the baseline.
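For readers unfamiliar with the quantification step, the conversion from a well's 450 nm absorbance to a concentration can be sketched as follows. This is an illustrative example only: the standard values are invented, and kit software typically fits a four-parameter logistic curve rather than the simple linear interpolation shown here.

```python
# Hypothetical sketch: estimating a corticosterone concentration from a 450 nm
# absorbance reading by linear interpolation between the two nearest points on
# a standard curve. Standards below are invented for illustration.
def concentration_from_absorbance(a450, standards):
    """standards: (absorbance, ng_per_ml) pairs; the curve is assumed
    monotonic over the range of the standards."""
    pts = sorted(standards)  # order by absorbance
    for (a0, c0), (a1, c1) in zip(pts, pts[1:]):
        if a0 <= a450 <= a1:
            # Fractional position between the two bracketing standards
            frac = (a450 - a0) / (a1 - a0)
            return c0 + frac * (c1 - c0)
    raise ValueError("absorbance outside the standard curve")

# Invented standard curve: absorbance vs. concentration (ng/ml)
standards = [(0.10, 0.0), (0.35, 12.5), (0.80, 50.0), (1.60, 200.0)]
conc = concentration_from_absorbance(0.55, standards)  # ng/ml
```

Samples falling outside the standard range would normally be diluted and re-assayed rather than extrapolated.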


Results

Hargreaves’ assay

A paired t-test was performed to compare paw withdrawal latency between baseline and post-exposure data. The data were computed in the statistical program SYSTAT and graphed using GraphPad Prism. The analysis revealed a significant increase in withdrawal latency in male mice after exposure to late pregnant and lactating mice; t(30) = 4.109, p < .001. There was no significant effect in female mice. A summary of the Hargreaves’ assay results can be found in Figure 1 and Figure 2.

Fecal Boli Count

The results did not show any significant increase in fecal boli. Since the raw numbers are explicit on their own, no paired t-test or other statistical method was used to further analyze the data. A summary of the results can be found in Figure 3.

Corticosterone ELISA Assay

An unpaired t-test with Welch correction was carried out to analyze the corticosterone concentration results for each group. The sample size for each group was n = 8. After computing the data in SYSTAT and graphing them using GraphPad Prism, the analysis revealed a significant increase in corticosterone concentration in male mice exposed to pregnant mice; t(7.89) = 2.361, p = .0225. However, when exposed to naïve female mice, the effect was not significant for either naïve male or female mice. A summary of the results can be found in Figure 4 and Figure 5.
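The two test statistics used above can be computed in any statistics package; as a minimal sketch of what SYSTAT is doing under the hood, the following Python fragment implements both the paired t statistic and the Welch-corrected unpaired t statistic directly. All numbers below are invented for illustration and are not the study's data.

```python
# Minimal sketch of the two analyses reported above, with made-up values.
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic on per-subject differences; returns (t, df)."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n)), n - 1

def welch_t(x, y):
    """Unpaired t statistic with Welch correction for unequal variances;
    returns (t, df) with Welch-Satterthwaite degrees of freedom."""
    nx, ny = len(x), len(y)
    vx, vy = stdev(x) ** 2, stdev(y) ** 2
    se2 = vx / nx + vy / ny
    t = (mean(x) - mean(y)) / math.sqrt(se2)
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

# Hypothetical withdrawal latencies (s): baseline vs. post-exposure, paired
t, df = paired_t([9.8, 10.5, 8.9, 11.2], [13.1, 14.0, 12.2, 15.5])

# Hypothetical corticosterone concentrations (ng/ml), n = 8 per group
t2, df2 = welch_t([52.1, 61.3, 48.7, 55.0, 66.2, 59.4, 50.8, 63.9],
                  [41.2, 38.5, 44.0, 39.9, 42.7, 40.1, 37.8, 43.5])
```

Note how the Welch correction yields a fractional df (as in the reported t(7.89)), because the degrees of freedom are estimated from the two sample variances rather than fixed at n1 + n2 - 2.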


Sample sizes for each group are reported in the figure legends.


Discussion

The present study provides novel and exciting insights into the role of female chemosignaling. We show that chemosignals from pregnant females cause analgesia in male, but not female, mice. This effect appears to be testosterone-mediated, as females and castrated males do not show it. We also show that exposure to chemosignals from pregnant females causes an increase in corticosterone in male mice, which may lead to stress-induced analgesia. These findings add extensively to the scarce literature on the effect of female chemosignals on male mice, and corroborate the work of Jemiolo et al. (1987), suggesting that female urinary volatiles can act as chemosignals because they differ in concentration during gestation and lactation.

Pregnant chemosignals affect pain sensitivity in male mice

The results from the Hargreaves’ assay demonstrate that male mice have increased withdrawal latency after exposure to pregnant or lactating female mice, suggesting chemosignal-induced analgesia. This effect is not seen in female mice. Moreover, male mice exposed only to the bedding of late pregnant mice demonstrate the same effect, suggesting that this phenomenon is olfactory and associated with chemosignals. Since female mice and castrated male mice do not exhibit analgesia when exposed to pregnant or lactating mice, we conclude that this phenomenon is also mediated by testosterone.

A possible explanation is that the chemosignals from the pregnant mice stress the male mice. This resembles a typical flight (or acute stress) response that animals exhibit when faced with a potential threat. Pregnant and lactating females are known to be most protective of their offspring when the offspring are at their most vulnerable, and in the eyes of the female, the biggest threat to her progeny would come from an unfamiliar male.


However, it is important to note that Hargreaves’ assay can be quite variable. The experimenter can sometimes mistake a flinch caused by pain for a flinch caused by a sudden noise or movement around the mouse. Other factors, such as grooming or sleepiness of the mice, can also influence withdrawal latency. Despite collecting data up to eight times per paw, amounting to a total of 16 scores per mouse in each condition (i.e., baseline or post-exposure to pregnant mice), the environmental and personal factors surrounding the execution of the assay can impact the results (Callahan et al., 2008).

Pregnant chemosignals induce stress and lead to an increase in corticosterone levels in male mice

Given the large body of literature demonstrating stress-induced analgesia (Akil et al., 1976; Bodnar et al., 1978; Butler & Finn, 2009; Lewis et al., 1981; Mayer et al., 1971; Terman et al., 1984), we measured corticosterone levels in mice exposed to pregnant and naïve mice. The analysis showed a significant (P < 0.05) increase in corticosterone concentration in males exposed to pregnant mice. This effect was not produced in males or females exposed to naïve females. Since an increase in corticosterone concentration reflects an increased stress response, we can conclude that male mice exhibit higher levels of stress when exposed to pregnancy-associated chemosignals, which induces analgesia when male mice are exposed to noxious stimuli.

Although the fecal boli experiments did not produce the expected effects (Hall, 1934), other factors could explain why the number of fecal boli kept decreasing after exposure to stimuli. The physiology of any living being comes into play when looking at defecation: the mice had been inside the cubicles for over two hours without any food or water available.
The initial emotional stress of being placed in an unfamiliar environment is somewhat apparent in the results, but this also implies that the mice had already “emptied” their


stomachs, and therefore that, despite being emotionally stressed by the new stimuli introduced into their environment (i.e., pregnant mice), they simply could not perform this digestive function anymore.

Finally, it is important to note that the corticosterone assay also had limitations, the most important being the sample size (n) per condition. Although the results were significant (P < 0.05), the sample size could be increased, since the standard error of the mean remains quite high (SE = 0.9049). Due to this group variability, our sample might not be fully representative of the population of naïve male mice. Another limitation arose during blood collection: taking the mice out of their cubicles one by one and onto the collection table might have caused them emotional distress that could be wrongly attributed to pregnant chemosignals. This is a potential confounding variable, even though the corticosterone levels of naïve female mice did not increase significantly. Further studies should examine pregnancy-associated chemosignals in castrated males, to ensure that this stress response is indeed mediated by chemosignals.

Despite these limitations, the significance of the results cannot be dismissed, and they could be the grounds for additional findings in the scientific literature on sex differences in chemosignaling. More research is needed to complete the findings of the present study and to test the hypothesis that lactation-associated chemosignals are responsible for stress-induced analgesia in male mice, by looking at the increase or decrease in corticosterone concentration of male, female, and castrated male mice. 
If those results confirm this hypothesis, one could extend this study by also looking at the role of aggressiveness – potentially considered an instinctive reaction against male mice thought to threaten the females’


offspring (Svare, 1981) – as the cause of the increase in chemosignals in female mice during late pregnancy or lactation.

Conclusion

The present study is the first of its kind to show that chemosignals from pregnant females have an effect on exposed mice. It also furthers our understanding of how stress-induced analgesia can be triggered by chemosignals, and of the mechanisms at work. The role of chemosignals, and the sex differences associated with them, could better explain differences in behavior and in pain sensitivity altogether. Such theories could be tested at the human level by looking at the influence of olfactory volatiles from pregnant women on men’s analgesic responses. Hence, the results of this study should encourage more research investigating sex differences in chemosignaling and the effects chemosignals can have on pain.


References

Akil, H., Mayer, D. J., & Liebeskind, J. C. (1976). Antagonism of stimulation-produced analgesia by naloxone, a narcotic antagonist. Science, 191, 961-962.

Barrot, M. (2012). Tests and models of nociception and pain in rodents. Neuroscience, 211, 39-50. https://doi.org/10.1016/j.neuroscience.2011.12.041

Bodnar, R. J., Kelly, D. D., Spiaggia, A., Ehrenberg, C., & Glusman, M. (1978). Dose-dependent reductions by naloxone of analgesia induced by cold-water stress. Pharmacol. Biochem. Behav., 8, 667-672.

Bruce, H. M. (1959). An Exteroceptive Block to Pregnancy in the Mouse. Nature, 184, 105. doi:10.1038/184105a0

Butler, R. K., & Finn, D. P. (2009). Stress-induced analgesia. Progress in Neurobiology, 88(3), 184-202. https://doi.org/10.1016/j.pneurobio.2009.04.003

Callahan, B. L., Gil, A. S. C., Levesque, A., & Mogil, J. S. (2008). Modulation of Mechanical and Thermal Nociceptive Sensitivity in the Laboratory Mouse by Behavioral State. The Journal of Pain, 9(2), 174-184. https://doi.org/10.1016/j.jpain.2007.10.011

Hall, C. S. (1934). Emotional behavior in the rat. I. Defecation and urination as measures of individual differences in emotionality. Journal of Comparative Psychology, 18(3), 385-403. http://dx.doi.org/10.1037/h0071444

Hargreaves, K., Dubner, R., Brown, F., Flores, C., & Joris, J. (1988). A new and sensitive method for measuring thermal nociception in cutaneous hyperalgesia. Pain, 32(1), 77-88. doi:10.1016/0304-3959(88)90026-7


Jemiolo, B., Andreolini, F., Wiesler, D., & Novotny, M. (1987). Variations in the mouse (Mus musculus) urinary volatiles during different periods of pregnancy and lactation. J. Chem. Ecol., 13(9), 1941-1956. https://doi.org/10.1007/BF01014677

Lewis, J. W., Sherman, J. E., & Liebeskind, J. C. (1981). Opioid and non-opioid stress analgesia: assessment of tolerance and cross-tolerance with morphine. J. Neurosci., 1, 358-363.

Mayer, D. J., Wolfle, T. L., Akil, H., Carder, B., & Liebeskind, J. C. (1971). Analgesia from electrical stimulation in the brainstem of the rat. Science, 174, 1351-1354.

Svare, B. B. (1981). Maternal Aggression in Mammals. In D. J. Gubernick & P. H. Klopfer (Eds.), Parental Care in Mammals. Springer, Boston, MA.

Terman, G. W., Shavit, Y., Lewis, J. W., Cannon, J. T., & Liebeskind, J. C. (1984). Intrinsic mechanisms of pain inhibition: activation by stress. Science, 226, 1270-1277.

Whitten, W. K. (1959). Occurrence of anoestrus in mice caged in groups. J. Endocrinol., 18, 102-107. doi:10.1677/joe.0.0180102

Wiesler, D., & Novotny, M. V. (1999). Urinary Volatile Profiles of the Deermouse (Peromyscus maniculatus) Pertaining to Gender and Age. Journal of Chemical Ecology, 25(3), 417-431. doi:10.1023/a:1020937400480


Figures Section

Figure 1. Hargreaves’ test measuring withdrawal latency (s) of a) male (n = 96) and b) female (n = 96) mice following exposure to other mice (as labeled on graph). ***p<.001, *p<.05.

Figure 2. Hargreaves’ test measuring withdrawal latency (s) of d) naïve male mice following exposure to pregnant mice (n = 8 for pregnant and n = 12 for lactating), e) naïve mice following exposure to the bedding of pregnant mice (n = 24 males, n = 24 females), f) castrated (n = 24) and sham (n = 24) male mice following exposure to pregnant mice. *p<.05, **p<.01.


Figure 3. Number of Fecal Boli in male mice before and after being exposed to pregnant mice and lactating mice. Data was gathered after habituation (1 hour), after baseline (30 minutes), after exposure to stimuli (0-30 min) and again post-exposure (30-60 min).



Figure 4. (a) Corticosterone concentration (ng/ml) in male mice plasma before and after exposure to pregnant mice. The pre-exposure group (n = 8) was compared to the post-exposure group (n = 8). *p<.05. (b) Corticosterone concentration (ng/ml) in male mice plasma before and after exposure to naïve female mice. The pre-exposure group (n = 8) was compared to the post-exposure group (n = 8). No significance was found.


Figure 5. Corticosterone concentration (ng/ml) in naïve female mice plasma before and after exposure to naïve female mice. The pre-exposure group (n = 8) was compared to the post-exposure group (n = 8). No significance was found.


Statement of Contribution

The data presented in this study are not the result of work done solely by the author of this research report. Fellow students and other members of staff collected, gathered, and analysed the findings for the Hargreaves’ assay. The author was involved in the execution of the olfaction study (i.e., habituation and exposure to stimuli) assessing the impact of pregnant chemosignals on male mice’s corticosterone levels, and assisted in the collection and centrifugation of the plasma samples. Other members of staff were responsible for the analysis of those data, although the author helped wherever she could throughout the analysis. Moreover, the author collected and computed the data for the fecal boli count analysis. Particular acknowledgement is owed to the technicians responsible for housing and breeding the mice, since they maintained the mice’s stable environment and provided us with the healthy subjects necessary for this experiment. Thanks are also due to those who trained the author in the design, execution, and statistical analysis of experiments throughout her research project. Finally, one of the most important contributions to this research paper was made by Sarah Rosen, who was actively involved in editing this report and who also taught the author to execute most of her functions during this research project.


A Critical Reading of Hall’s Reclaiming Your Sexual Desire Stephanie Simpson PSYC 436 Professor Binik


A Critical Reading of Hall’s Reclaiming Your Sexual Desire

Reclaiming Your Sexual Desire, a self-help book by Dr. Kathryn Hall, sheds light on the waxing and waning nature of female sexual desire. Adopting Basson’s non-linear model of desire, this text provides multiple steps women can take in order to regain or unlock their erotic potential. The author discourages the medicalization of low sexual desire and instead conceptualizes it within a constructivist framework. Although Hall presents clinical proof of success, she fails to provide experiential evidence that could strengthen her advice. From an empirical perspective, this paper will therefore examine three major recommendations from Hall’s book: a deterrence from pharmacological treatments, practicing mindfulness, and cognitive behavioural therapy (CBT). This will establish whether Hall’s methods are worthwhile for women who experience low desire, and to what extent these recommendations are scientifically sound.

The central tenet of Hall’s book proposes that a reductionist strategy affords only a superficial and short-term solution to the issue of low desire. The principal assertion is that desire can only improve once the sociocultural context is addressed (Hall, 2004). Nonetheless, several randomized, double-blind, placebo-controlled trials using a testosterone patch (TP) contradict this claim. In 2005, Braunstein et al. and Buster et al. separately conducted two similar studies in which women who were surgically menopausal, concurrently taking estrogen, and showing symptoms of hypoactive sexual desire disorder (HSDD) were randomized to receive either a TP or a placebo for 24 weeks. Both groups reported that a 300 µg/d TP treatment significantly enhanced sexual desire compared to the control group, with minimal adverse events (Braunstein et al., 2005; Buster et al., 2005). 
To ensure the results apply to other women, two additional studies utilized the same protocol with a cohort of naturally menopausal women (Panay et al., 2010; Shifren et al., 2006). Panay et al.’s (2010) sample included women with and without


concurrent estrogen treatment, and again found that the TP significantly improved desire in both treatment groups. Together, these studies provide strong empirical evidence that hormonal treatments can meaningfully diminish HSDD symptoms in surgically or naturally menopausal women, with minimal risk, for at least a 24-week period. These investigations provide compelling support for the use of hormonal therapies since they make adequate group comparisons using a control condition, apply a standardized protocol to a randomized sample to ensure effects are not due to extraneous variables, and include a reliable measure of desire (the Profile of Female Sexual Function). However, these authors only monitored women for six months, and the samples included typically lacked diversity, with some consisting of up to 90% white women. It is fair to assume, then, that these reports offer little information on the long-term benefits or potential harms of using the TP for longer than 24 weeks, nor can one be sure that the results will generalize to women in minority groups. In conclusion, Hall’s core recommendation is partially reasonable. A reductionist perspective should not be completely discredited, since these experiments demonstrate that it can have a significant impact on female sexual desire. Yet it is still unclear whether these outcomes are merely ephemeral and whether they extend to women of all colours. After all, HSDD is rarely experienced transiently and does not discriminate by ethnicity.

While testosterone treatments may be effective for menopausal women, the results are not as cogent for premenopausal women with low sexual desire. Two randomized, placebo-controlled studies monitored the effects of a drug called flibanserin over a 24-week period on women clinically diagnosed with HSDD (DeRogatis et al., 2012; Katz et al., 2013). 
Both studies found significant, enhanced effects on desire in the treatment condition (receiving 100 mg of flibanserin) compared to the control group. Yet, in a report reviewing this drug treatment, Woloshin and


Schwartz (2016) assert that flibanserin is dangerous when combined with alcohol. Moreover, both studies found that significantly more women in the treatment condition experienced adverse events (e.g., somnolence and dizziness) compared to the control group (DeRogatis et al., 2012; Katz et al., 2013). Another caveat is that over half of the authors were currently or previously employed by Boehringer Ingelheim – the pharmaceutical company which originally owned the rights to this drug. This impugns the investigations’ objectivity, since the researchers may be biased to favour a drug in which their company was so heavily invested. In light of this additional experimental evidence, it appears that, to a certain degree, Hall’s primary assertion is less valid for menopausal women but more justifiable for premenopausal women. Numerous well-designed studies indicate that the significant benefits of treatment using the TP outweigh the relatively small risks. Nevertheless, premenopausal women need to approach pharmacological treatments like flibanserin with greater caution, since they can have deleterious side effects. Hall’s hesitancy towards pharmacological treatments of low sexual desire is warranted in this subset of the population.

Aside from her general avoidance of biologically-based approaches, Hall also offers specific advice to enhance desire. This includes holistic techniques like sensual meditation and focused breathing exercises (Hall, 2004). A study by Brotto and Basson (2014) randomized 115 women seeking treatment for low sexual desire into an immediate mindfulness-based treatment or a delayed treatment (i.e., a control group). Women learned breathing techniques and how to practice the “Body Scan” – tapping into the sensations of specific body parts, including the genitals. After six months, they found that, compared to controls, the treatment condition reported higher levels of sexual desire (Brotto & Basson, 2014). 
This study is credible as it used a randomized, controlled design to ensure that no systematic bias within groups produced the effect.


One limitation is that it is difficult to disentangle which component produced this outcome – the sensual meditation, the breathing techniques, or a combination of both. Secondly, 38% of the women recruited were diagnosed with both an arousal and a desire disorder (Brotto & Basson, 2014). Because the sample included multiple diagnoses, there is some uncertainty as to whether these results apply to women solely experiencing HSDD. Overall, though, this experiment provides empirical confirmation that Hall’s holistic recommendations are viable and effective in reversing symptoms of low desire.

Finally, Hall suggests psychotherapy or counselling as additional treatment options. Unlike the hormonal treatments, Hall argues, these therapeutic interventions will improve sexual desire in the long term because they address the underlying cause of the issue, whether it is related to a relationship imbalance, stress, or self-esteem (Hall, 2004). A seminal study by Ravart, Trudel, Marchand, Turgeon, and Aubin (1996) applied a couple-based CBT intervention in which the female partner was clinically diagnosed with HSDD. Many stages of the program also incorporated Hall’s “sexercise” recommendations, such as communication skills training, sensate focus exercises, and encouraging sexual fantasy. The researchers reported that after the three-month program, 68% of women in the treatment condition no longer met the criteria for HSDD, compared to those in a wait-list control group (Ravart et al., 1996). Three years later, the researchers extended the protocol with a one-year follow-up and found that 38% of women in the treatment condition were still symptom-free one year later (Trudel et al., 2001). These experiments demonstrate that a brief, multimodal CBT intervention can ameliorate the symptoms of HSDD over the long term because it confronts the psychosocial factors that may inhibit desire.
One drawback to both of these interventions is that the items measuring desire from the Sexual History Form were translated from English to French to accommodate their French-speaking
sample. Neither study reported the validity or reliability of the translated measure, and it is well known that these psychometric properties may not carry over following translation (Kaplan & Saccuzzo, 2013). As a result, their measure of desire may be inaccurate. In addition, both studies recruited only married or common-law women, since elements of the therapy were couple-based; it is therefore unclear whether these treatments could also benefit single or dating women diagnosed with HSDD. Above all, though, these experiments demonstrate that Hall’s suggestions for psychotherapeutic interventions are well founded in the rigorous realm of science and can remedy the symptoms of low desire in married women over the long term with no threat to personal safety.

While many courses of action have been presented, the scientific and clinical communities are far from establishing an infallible treatment for low desire in women. Research indicates that Hall’s advice for using clinically-based therapies, especially CBT, is effective in reducing HSDD symptoms in women for up to a year. Secondly, Brotto and Basson’s (2014) experiment shows that mindfulness exercises which integrate her recommendations for focused breathing and sensual meditation can enhance levels of sexual desire. Lastly, Hall warns against the use of oversimplistic pills or patches, which assume desire is physically founded and render short-lived cures (Hall, 2004). To a certain extent, Hall is valid in this assumption: a myriad of randomized, placebo-controlled studies fail to show any long-term enhancement of desire past a brief 24 weeks. Importantly, though, evidence also shows that hormonal therapy in menopausal women can affect desire as significantly as other psychosocial treatments. This implies that, in contrast to Hall’s view, hormones must play a significant role in the fluctuation of desire. In short, one can conclude that Hall’s three primary suggestions are, to a considerable degree, based on empirical fact.


References

Braunstein, G., Sundwall, D., Katz, M., Shifren, J., Buster, J., Simon, J., . . . Watts, N. (2006). Safety and efficacy of a testosterone patch for the treatment of hypoactive sexual desire disorder in surgically menopausal women: A randomized, placebo-controlled trial. Archives of Internal Medicine, 165(14), 660. doi:10.1016/s0022-5347(05)00380-0

Brotto, L. A., & Basson, R. (2014). Group mindfulness-based therapy significantly improves sexual desire in women. Behaviour Research and Therapy, 57, 43-54. doi:10.1016/j.brat.2014.04.001

Buster, J. E., Kingsberg, S. A., Aguirre, O., Brown, C., Breaux, J. G., Buch, A., . . . Casson, P. (2005). Testosterone patch for low sexual desire in surgically menopausal women: A randomized trial. The American College of Obstetricians & Gynecologists, 105(5), 944-952. doi:10.1097/01.aog.0000158103.27672.0d

DeRogatis, L. R., Komer, L., Katz, M., Moreau, M., Kimura, T., Garcia, M., . . . Pyke, R. (2012). Treatment of hypoactive sexual desire disorder in premenopausal women: Efficacy of flibanserin in the VIOLET study. The Journal of Sexual Medicine, 9(4), 1074-1085. doi:10.1111/j.1743-6109.2011.02626.x

Hall, K. (2004). Reclaiming your sexual self: How you can bring desire back into your life. Hoboken, NJ: John Wiley & Sons.

Kaplan, R., & Saccuzzo, D. (2013). Psychological testing: Principles, applications, and issues (8th ed.). California: Wadsworth Publishing.

Katz, M., DeRogatis, L. R., Ackerman, R., Hedges, P., Lesko, L., Garcia, M., & Sand, M. (2013). Efficacy of flibanserin in women with hypoactive sexual desire disorder: Results from the BEGONIA trial. The Journal of Sexual Medicine, 10(7), 1807-1815. doi:10.1111/jsm.12189

Panay, N., Al-Azzawi, F., Bouchard, C., Davis, S. R., Eden, J., Lodhi, I., . . . Sturdee, D. W. (2010). Testosterone treatment of HSDD in naturally menopausal women: The ADORE study. Climacteric, 13(2), 121-131. doi:10.3109/13697131003675922

Ravart, M., Trudel, G., Marchand, A., Turgeon, L., & Aubin, S. (1996). The efficacy of cognitive behavioural treatment for hypoactive sexual desire disorder: An outcome study. Canadian Journal of Human Sexuality, 5(4), 279-293.

Shifren, J. L., Davis, S. R., Moreau, M., Waldbaum, A., Bouchard, C., DeRogatis, L., . . . Kroll, R. (2006). Testosterone patch for the treatment of hypoactive sexual desire disorder in naturally menopausal women. Menopause: The Journal of the North American Menopause Society, 13(5), 770-779. doi:10.1097/01.gme.0000243567.32828.99

Trudel, G., Marchand, A., Ravart, M., Aubin, S., Turgeon, L., & Fortier, P. (2001). The effect of a cognitive-behavioral group treatment program on hypoactive sexual desire in women. Sexual and Relationship Therapy, 16(2), 145-164. doi:10.1080/14681990120040078

Woloshin, S., & Schwartz, L. M. (2016). US Food and Drug Administration approval of flibanserin. JAMA Internal Medicine, 176(4), 439-442. doi:10.1001/jamainternmed.2016.0073


Post-Traumatic Stress Disorder in Canadian Military: The Invisible Wounds and Persistent Neglect of Canadian Military Veterans
Julia Tesolin


According to Statistics Canada, post-traumatic stress disorder (PTSD) occurs after a person has witnessed or experienced “a traumatic event involving actual or threatened death, serious injury or violent personal assault, such as sexual assault” (2013). Post-traumatic stress disorder in military veterans is a growing social problem for the Canadian population, as “soldiers have a greater chance of developing post-traumatic stress disorder (PTSD) than of being fired upon, physically injured or killed in combat” (Westwood et al., 2010, p. 45). It is an important subject matter, since PTSD is one of the most common mental health disorders among military veterans, and these men and women deserve to be treated appropriately and effectively. Currently, the Canadian Forces are required to provide a “confidential mental health screening questionnaire and a 40-minute, semi-structured interview with a mental health professional” to their military members between 90 and 180 days after their return (Zamorski et al., 2014, p. 321). This research paper examines how Canada’s approach to dealing with military veterans suffering from PTSD has changed over time. Research by Sally Chivers and by Fikretoglu et al. focuses on the different ways that the Canadian government has responded to the issue of PTSD in its military. In comparison, Robert Stretch examined a unique group of military veterans with PTSD, Canadian Vietnam War veterans, looking at society’s reaction to their return home as well as the government’s failure to provide them any compensation. Copp and McAndrew focused on the role of the Canadian army in dealing with the first diagnoses of mental illness during World War II, while Zamorski et al.’s research detailed Canada’s current approach to treating PTSD in military veterans returning from Afghanistan.
The thesis of this paper, amidst all of the research on the topic, developed from the ideas of Chivers and Fikretoglu et al., and from how the Canadian government seems to be falling short
when it comes to appropriately addressing and treating this national social problem. As we examine the Canadian government’s role in dealing with PTSD over time, the two social science disciplines of psychology and sociology will be explored, along with the concepts of PTSD and cognitive behavioral treatment (CBT), soldier as a master status, and mental illness as a social problem. Since the mid-20th century, the Canadian government has provided limited assistance and aid to its mentally ill military members upon their arrival back home. Canada’s approach to dealing with PTSD does not seem to have changed over time, as a result of its reluctance to intervene in a problem for which it no longer feels responsible. On the surface, it appears to provide preliminary testing and health services, but this treatment does not seem to be pursued due to insufficient resources.

After years of combat, the battlefield becomes the soldier’s home, and getting used to this violent environment may eventually affect their mental health. When involved in active combat on the battlefield, soldiers are exposed to a multitude of stressors, including “witnessing atrocities including the torture of civilians, the handling of civilian adult and child casualties, and the retrieval and disposal of human remains” (Westwood et al., 2010, p. 45). According to Zamorski et al., there are also a number of pre-deployment risk factors that contribute to post-deployment mental health problems in the military, such as the soldier’s “baseline mental health and resilience, number of previous deployments, and total number of months deployed over a period of time” (Zamorski et al., 2014, p. 320). The combination of these factors contributes to each soldier’s chances of developing PTSD; it is therefore important for military members to be provided with the necessary preparatory information and services that would allow them to be fully aware of what a deployment involves.
In the 1940s, psychiatrists attributed the breakdowns that military men experienced to “an accumulation of strains, both physical and mental, of great intensity – bodily
danger, continuous physical exertion, loss of sleep, insufficiency and irregularity of meals, intermittent but perpetually recurrent bombardment and the sight of comrades and civilian refugees being killed around them” (Copp & McAndrew, 1990, p. 23). After undergoing threats in such dangerous circumstances, soldiers can experience physical and/or mental symptoms of the traumatic experience due to the accumulation of stress. During World War II, patients admitted to the “Emergency Medical Services” (EMS) of hospitals experienced a wide variety of symptoms, including “epilepsy, strong fear reactions, chronic headaches, enuresis, gastric illness, uncontrollable restlessness, exaggerated physical weakness, muscle tics, and obsession phobias” (Copp & McAndrew, 1990, p. 17). These symptoms have not changed much since the 1940s, because soldiers still experience similar traumatic events on the battlefield. What does appear to have changed is the way that veterans deal with the effects of trauma on their mental health. According to Westwood et al., “former [Canadian and American] military personnel with war-related trauma are more likely to use medical services and have hypertension, asthma, and chronic pain symptoms than veterans without exposure to traumatic stress” (2010, p. 45). Additionally, these men and women are at “higher risk than their peers for premature mortality from accidents, chronic substance abuse, and suicide” (Westwood et al., 2010, p. 45). Furthermore, although post-traumatic stress disorder is a personal psychological problem, it has lasting effects on the veteran’s surrounding environment: serious impacts have been noted on the veteran’s “marital relationship”, and “elevated rates of domestic violence and divorce are more likely with veterans with PTSD than in veterans without PTSD” (Westwood et al., 2010, p. 45).


There is a multitude of ways to treat PTSD. Nonetheless, the current “strongest support in the research literature […] is for treatment interventions that combine cognitive and behavioral methods, with emphasis on measured exposure-type techniques” (Westwood et al., 2010, p. 46). After examination of Copp and McAndrew’s research, the treatment offered to the Canadian military in the 1940s appears to be much the same as the one provided today. In January 1941 at Basingstoke, a committee of neuropsychiatrists met and agreed upon the effective treatment for patients with “psychopathic personalities” (Copp & McAndrew, 1990, p. 18). The treatment for “all anxiety and hysteria cases” consisted of a “careful evaluation of physical, psychological, and sociological components” (Copp & McAndrew, 1990, p. 19). The psychiatrist would collect a “detailed case history”, with “questioning about childhood environmental influences, parental attitudes and relationships, phobias, school and work record, disposition towards sports and physical dangers, sexual habits, adaptation to difficulties, mood changes” (Copp & McAndrew, 1990, p. 19). After collecting the patient’s history, a mental examination was conducted to survey “intellect as well as emotion”, followed by a discussion of “the factors causing the immediate mental conflict” (Copp & McAndrew, 1990, p. 19). According to Copp and McAndrew (1990), the senior neuropsychiatrist at Basingstoke “believed in repeated talks with patients so that ‘repressed fears and conflicts’ could be aired again and again” (p. 19). But when the “immediate problems” that led to the “neurosis” could not be dealt with, the psychiatrist proposed “hypnosis, sodium pentathol, and occasionally prolonged narcosis” (Copp & McAndrew, 1990, p. 19).
These extreme measures are not used today; psychologists now aim to resolve the mental conflict that the individual experiences by changing their faulty thinking, and to help them modify their outward behavior by exposing them to their fears. Nevertheless, the vocal exposure techniques involving repeated talks about the traumatic event are still used today. These techniques can be direct
(i.e., bringing the person back to the site of the traumatic event) or indirect (i.e., having a casual conversation about the details of the traumatic experience). Today, Canadian military organizations have created guidelines to help mentally ill veterans “regulate operational tempo, in part to keep its impact on mental health to sustainable levels” (Zamorski et al., 2014, p. 320). This process involves “exposure to traumatic stressors” based on the veteran’s “deployment length and leadership qualities” (Zamorski et al., 2014, p. 321). For instance, one of the cognitive-behavioral treatment programs for individual soldiers involves “multiple components such as direct therapeutic exposure to the traumatic memories, eye movement desensitization and reprocessing (EMDR) and vocational counseling” (Westwood et al., 2010, p. 46). In addition, a new approach emphasized by researchers is “proactive involvement with peer groups” (Westwood et al., 2010, p. 46). It has been found that “the group process more readily protects patients from being overwhelmed by the power of therapy-released emotions and also provides a guilt-reducing distortion-correcting, ‘fool proof’ peer group” (Westwood et al., 2010, p. 46).

Military veterans are some of our nation’s most prized citizens, as they risked their lives on a daily basis for their country. They are essentially viewed, in sociological terms, as possessing a “master status”. Military veterans make up a distinct class in our society and are referred to by their position in the Canadian army. For these reasons, it is not always easy for them to reach out for help and to show signs of weakness, because they are believed to be unbreakably strong men and women capable of anything. In a recent Canadian military survey, Fikretoglu et al. found that “instead of forming a homogenous group, those who seek treatment for PTSD fall into distinct subgroups” (2007, p. 856). For example, one group of “treatment seekers” was categorized by
“significant trauma exposure, significant PTSD interference, and comorbid major depressive disorder”, while another subgroup, of “non-treatment seekers”, had “similar levels of trauma exposure, lower levels of PTSD interference, high spirituality, and low social support” (Fikretoglu et al., 2007, p. 856). The evidence shows that the military veterans who do seek treatment have a constellation of disorders other than PTSD, which remains one of their main reasons for seeking treatment. Meanwhile, those who do not seek treatment are likely to have less social support (Fikretoglu et al., 2007, p. 856), which can be attributed to the idea that most family members and friends do not see the returning military veteran as in need of help, since veterans have always been portrayed as fearless individuals.

The treatment that wounded veterans receive “positions their role as fundamental to what it means to be Canadian” (Chivers, 2009, p. 325). According to Chivers, the wounded returning soldier symbolizes “the sacrifices his/her country made”, and thus there is “a strong popular investment in supporting him, lionizing him, and making him appear broken but whole” (2009, p. 325). In general, society does care about its wounded military veterans, as “the wounded war hero” is an enduring symbol of patriotism. However, any country (including Canada) needs to maintain a positive image so that other nations will perceive it in a good light (Chivers, 2009, p. 327). In Chivers’ words, “for Canadians, this is particularly important because of their desire to be seen internationally as a middle power that keeps the peace”. Although disability caused by war is seen as either heroism or destruction, it also represents a “fundamental logic of war and its relationship to national belonging” (2009, p. 325).
As such, the military veterans must prove to society that they still deserve their master status by showing that even though the battlefield was a disconcerting experience for them, they are still strong Canadian heroes. It is also important to emphasize how the status of a soldier changes between when they first enter combat and when
they come back home with injuries (mental and/or physical). “[W]hile these men were previously deemed fit for service and ideal to represent the Canadian nation on television and in combat”, society now considers them “unfit for combat” (2009, p. 327). As a result, military veterans are forced to continue to symbolize “national pride in appearing whole” while remaining “quite different from other disabled people” due to their master status (2009, p. 327).

The issue of mental illness is still a social problem for Canadian society, as the nation fails to provide sufficient treatment to its military veterans returning home with PTSD. In reality, “the effects of war trauma, if left untreated, do not simply dissipate” (Stretch, 1991, p. 240). It is important to look at the past and see how Canada dealt with its mentally ill military veterans, and Robert Stretch does just this by taking a closer look at a group of military veterans who were often ignored and unnoticed: the Canadian Vietnam War veterans. According to Stretch, “prolonged isolation from other Vietnam veterans, being ignored by Canadian society, being rejected as veterans by the Canadian government and the Royal Canadian Legion, feeling abandoned by the U.S. government, and lack of readjustment counseling services or Canadian mental health professionals familiar with the diagnosis or treatment of combat-related PTSD” have all contributed to the difficult readjustment of Canadian Vietnam veterans (Stretch, 1991, p. 252). Stretch notes that, based on these results, “treatment for disorders like PTSD needs to become more readily available in Canada” (1991, p. 252). It seems that Canada has neglected its mentally ill veterans in the past, and this can be attributed to the long-lasting stigma associated with mental illness, which is particularly hard to break for military veterans.
Today, although instances of PTSD are increasing with the rising number of “major conflicts” and military operations, the Canadian press does not give much attention to this important matter (Chivers, 2009, p. 336). For instance, while the “Soldier On” program designed to help military veterans with their recovery from war refers
to “medical and psychological needs”, the program is not designed for soldiers with mental disabilities (Chivers, 2009, p. 336). PTSD in Canadian military veterans is not a problem that the Canadian government can ignore, as it has become a social problem that affects not only the individual affected by the war but also those around them. While the Canadian Forces Health Services Group claims that PTSD could possibly be a “normal response to abnormal events”, new Canadian policies aim to send more soldiers to combat locations, so these “abnormal events are likely to increase and not decrease in frequency” (Chivers, 2009, p. 338). As previously stated by Fikretoglu et al., military veterans who have comorbid depressive disorders are more likely to seek treatment, while “the absence of major depressive disorder, even in the face of significant trauma exposure and PTSD interference” often makes it less likely for other military members to actively ask for help (2007, p. 856). A new type of intervention proposed by Fikretoglu et al. consists of tailoring the treatment to the characteristics and needs of each “subgroup of potential treatment seekers”, in addition to providing them with the necessary “information, availability and effectiveness” of the treatments (2007, p. 856).

With a distinct and clear focus on the growing national social problem of post-traumatic stress disorder in military veterans, this research paper explored the Canadian government’s approach to dealing with PTSD over time. The evidence shows that Canada does not appear to have changed its way of targeting this issue, and the military is falling short when it comes to providing the necessary treatment for its mentally ill veterans. This backward step can be attributed to the social stigma around mental illness and to the expectation that society holds of its military veterans as true Canadian heroes.
In reality, many military men and women are struggling with the many symptoms of PTSD on a daily basis, whether on the battlefield or back home. For this
reason, it is necessary for us as Canadians to take a stand and ensure that our bravest citizens are treated with the respect and dignity that each and every one of them fought hard to earn.


References

Chivers, S. (2009). Disabled veterans in the Americas: Canadians “Soldier On” after Afghanistan—Operation Enduring Freedom and the Canadian mission. Canadian Review of American Studies, 39(3), 321-342. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&AuthType=cookie,ip,url&db=aph&AN=45066476&site=ehost-live

Copp, T., & McAndrew, B. (1990). Battle exhaustion: Soldiers and psychiatrists in the Canadian Army, 1939-1945. Montreal, QC: McGill-Queen’s University Press. (pp. 11-26, 109-127, 149-161)

Fikretoglu, D., Brunet, A., Guay, S., & Pedlar, D. (2007). Mental health treatment seeking by military members with PTSD: Findings on rates, characteristics, and predictors from a nationally representative Canadian military sample. Canadian Journal of Psychiatry, 52(2), 103-110. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&AuthType=cookie,ip,url&db=aph&AN=24117585&site=ehost-live

Statistics Canada. (2011). Canadian Forces Mental Health Survey. Retrieved from http://www.statcan.gc.ca/daily-quotidien/140811/dq140811a-eng.htm

Stretch, R. H. (1991). Psychosocial readjustment of Canadian Vietnam veterans. Journal of Consulting and Clinical Psychology, 59(1), 188-189. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&AuthType=cookie,ip,url&db=pdh&AN=1991-15522-001&site=ehost-live

Westwood, M. J., McLean, H., Cave, D., Borgen, W., & Slakov, P. (2010). Coming home: A group-based approach for assisting military veterans in transition. Journal for Specialists in Group Work, 35(1), 44-68. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&AuthType=cookie,ip,url&db=sih&AN=49086091&site=ehost-live

Zamorski, M. G. (2014). Prevalence and correlates of mental health problems in Canadian Forces personnel who deployed in support of the mission in Afghanistan: Findings from postdeployment screenings, 2009-2012. Canadian Journal of Psychiatry, 59(6), 319-326. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&AuthType=cookie,ip,url&db=pbh&AN=96708248&site=ehost-live


Parsing Sex and Gender Differences in Empathy
Aurélie van Oost
PSYC 502, Dr. Rosemary Bagot


Broadly, empathy is “the ability to understand and share the internal states of others” (Christov-Moore et al., 2014, p. 2). Robust sex differences in empathy have been reported in both animal and human studies, ranging from differences in empathetic behaviours to differential activation of brain areas associated with empathy. Assumptions that these differences are innate and universal are consistently made in the literature. A distinction between sex and gender must be made to parse out effects that are innate and biological from those that are learned. Sex can be defined as the characterization of bodies into either “male” or “female” (usually), based on reproductive biology such as sex chromosomes or genitalia. Gender refers to cultural conceptions of what the behavioural “norms” are for members within the sex binary, including roles and behaviours. While the two terms are related, they are not equivalent. It is important to determine whether observed divergent effects are due to sex or gender, in order to understand how empathy – a critical social behaviour – can be shaped by arbitrary social influences.

Background

Historically, empathy has been a difficult psychological phenomenon to define. While there is strong agreement on the general definition of empathy, there remains contention on specific elements. One agreed-upon element is that empathy has both an affective and a cognitive realm, and that these interact – however, the degree to which these systems are independent from each other is not fully established (Cuff, Brown, Taylor, & Howatt, 2017). The affective element of empathy is often described as an immediate, preconscious reaction, whereby one feels the emotion of another person. The cognitive element is more conscious and controlled, pertaining to understanding another person’s emotions. Christov-Moore et al. (2014) elaborate
on the distinction between the affective and cognitive components of empathy, stating that these components are supported by “mimicry” and “mentalizing” neural networks, respectively. The “mimicry” system is characterized by the human mirror neuron system (Gallese, 2003), whereas the “mentalizing” system involves the temporal poles, temporoparietal junction (TPJ), and the medial prefrontal cortex (mPFC) (Schulte-Rüther, Markowitsch, Shah, Fink, & Piefke, 2008). Many researchers in the field maintain that these two systems interact to modulate empathy, and that dysregulation of this interaction may be the cause of atypical empathy engagement (Christov-Moore et al., 2014; Cuff et al., 2017).

Robust sex differences have been reported in activation patterns of the “mirroring” and “mentalizing” neural systems, supporting previously established differences in empathy behaviour. The first paper to do this was Schulte-Rüther et al. (2008), which demonstrated enhanced neural engagement of the “mirroring” system in females, concluding that women may be better at affective empathy. This was endorsed by Christov-Moore et al. (2014), who state that females appear to show more engagement in the affective areas during social cognition. Both studies support observed behavioural differences, wherein women show higher pain ratings than men in a pain empathy task (Shamay-Tsoory et al., 2013), while men seem to be stronger in cognitive empathy tasks (Russell, Tchanturia, Rahman, & Schmidt, 2007). Better performance by men in cognitive empathy tasks is also supported by observations of increased activation of the temporoparietal junction (Schulte-Rüther et al., 2014), an area linked to the “mentalizing” network.

Furthermore, the neuropeptide hormone oxytocin has been linked to empathy. Oxytocin administration increases empathy in a pain rating task for the “other” and not the “self” (Abu-
Akel, Palgi, Klein, Decety, & Shamay-Tsoory, 2014), as well as empathy for those in a cultural out-group (Shamay-Tsoory et al., 2013). Certain oxytocin-receptor polymorphisms have been linked to inter-personal differences in empathy (Rodrigues, Saslow, Garcia, John, & Keltner, 2009), as well as gender differences in affective empathy (Wu, Li, & Su, 2012) with higher ratings of emotional distress and concern in women. While these sex differences are compelling, there is evidence that they are at least moderated to some extent by socialized factors, specifically gender roles and culture. Ickes, Gesn, and Graham (2000) found support for the moderating variable hypothesis – that women were better at empathy tasks when they were aware that they were being tested on their inter-personal skill, a conventionally essential characteristic of their gender role. This effect is not found in men, as empathy is not considered a “masculine” trait (Klein & Hodges, 2001). These gender role beliefs are taught and enforced from an early age, making them a highly consolidated aspect of self. Cultural norms are also enforced early, and cultures differ in their social norms. Collectivist cultures are predominantly found in Asia and South America, with members typified as focusing on the wellbeing of the collective before that of the individual. Western cultures are mostly individualist, with members focusing on self-fulfilment. These cultural differences are related to empathy in that they determine how people will interact with the internal states of others. This proposal hypothesizes that previously observed divergent sex responses – men performing better on cognitive empathy tasks, as well as having increased activation of cognitive neural areas – will not exist across cultures, thus showing that these reported sex differences in empathy are moderated by socialized aspects of gender.


Experimental Strategy

To test this hypothesis, participants will need to be recruited internationally from a range of individualist and collectivist cultures. Individualist participants will be recruited from the US, Canada, and Australia, and collectivist participants from China, Japan, and Vietnam – all countries that have been linked to individualism or collectivism. These country choices are otherwise fairly arbitrary, as no global ranking data exist for the construct. Each participant will complete a questionnaire measuring their collectivism/individualism (Triandis, Bontempo, Villareal, Asai, & Lucca, 1988); participants who do not conform statistically to their country’s cultural orientation will be excluded. One hundred participants of both sexes will be recruited from each country. The inclusion criteria are that participants are cisgender, free of neurodivergence or psychiatric disorders, and have lived in the same country for at least 10 years, ensuring that they have been exposed to the same social norms for an extended period. For ease of recruitment, participants will be drawn from university programs, and samples will be gender balanced. Empathic accuracy is a person’s capacity to accurately perceive the internal state of another person, and is hence a measure of cognitive empathy (Ickes, 1993). Participants will be exposed to a standardized stimulus – an 8-minute video, coded by raters as emotionally salient, in which a female subject shares a painful memory – while in an fMRI scanner. The clip will be taken from a real psychotherapy session filmed with client consent, ensuring ecological validity (as opposed to using an actor), and the frame will include her upper body and face. The video subject will then review the clip and pause the video at time-points where she recalls having had a specific thought or feeling, of which a written summary is then provided. During the


fMRI measure, the video is paused at these time-points and the participant is prompted to provide a verbal account of what they think the video subject was thinking, similar to the protocol in Ickes (2001). This account is then transcribed and rated by five separate raters for similarity with the account provided by the video subject, giving each participant an empathic accuracy index (Ickes, 2001), the behavioural measure of cognitive empathy for this study. The fMRI analysis will focus on the previously identified a priori areas (temporal poles, TPJ, and mPFC). A multivariate multiple linear regression will be conducted, with sex (male or female) and culture (individualist or collectivist) as the independent variables, and the empathic accuracy index and fMRI neural activation patterns as the dependent variables. Since specific oxytocin receptor polymorphisms have been linked to altered empathic ability (Rodrigues et al., 2009), a saliva sample will be taken from each participant in order to control for these. A questionnaire about implicit gender role beliefs (Kray, Howland, Russell, & Jackman, 2017) will also be completed at the same time as the culture questionnaire, avoiding demand effects that could occur if it were completed after the fMRI scan. Implicit gender role beliefs will indicate whether participants have internalized conventional gender roles, an important variable to control for in case unexpected results are explained by a lack of this internalization. By testing whether collectivist cultures diverge from the neural and behavioural sex differences established in individualist cultures, we can examine whether the sex difference effects observed in individualist cultures are due to gender role socialization. If there is a significant difference between cultures in the neural activation profiles of these areas – i.e.,
no difference in activation between the sexes in one culture – then we can conclude that the observed sex difference in neural activation is moderated by gender. If we


observe that men have higher empathic accuracy indices in one culture and not the other, we can conclude that this effect is also moderated by gender. If the observed sex differences in the behavioural and neural measures do not differ between cultures, then we can conclude that these sex differences are truly innate and not moderated by gender. An alternative outcome would be a cultural difference in sex divergence for the behavioural measure but not the neural measure – this would also provide strong evidence that sex differences in empathy are actually socialized gender differences, this time informed by culture. Finally, a cultural difference in sex divergence for the neural measure but not the behavioural measure would suggest a joint impact of culture and gender norms that may counteract each other.

Conclusion

This study adopts the “promising” advancements in the field that Zaki and Ochsner (2012) describe in their review – engaging with naturalism and brain–behaviour links. It examines empathy as it most often occurs: participants view a whole person describing an emotional situation, rather than isolated social cues such as the eyes alone or an auditory stimulus. This parallels empathy in a real-world setting, such as supporting a friend. The study also unites neurological and behavioural measures, allowing us to build an understanding of the mechanism behind empathy. By determining how influential gender roles are in empathy, we can establish whether aspects of our socialization have a very real impact on our social functioning. This is controversial, as empathy is an inherently moral behaviour. Is our morality shaped by the norms

we are taught when young? If we discover that differences in empathy are taught, this would introduce the concept of “absolute” empathy as an achievable standard. If a moderating influence of gender


roles is found, this may motivate policy change within schools to reduce the gender role messages sent to children. The study also provides information about social behaviour outside the oft-studied Western context.
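As a purely numerical illustration of the analysis proposed above, the core quantity of interest is the sex-by-culture interaction term: under the gender-socialization hypothesis, a male advantage on cognitive empathy appears only in the individualist group. The following sketch is an assumption-laden toy example, not part of the proposed protocol: the data are simulated, the variable names are invented, and a single behavioural outcome stands in for the full multivariate model.

```python
# Toy sketch of the proposed sex-by-culture interaction test,
# using simulated data and plain NumPy ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 600  # 100 participants per country, six countries

male = rng.integers(0, 2, n)   # 1 = male, 0 = female
indiv = rng.integers(0, 2, n)  # 1 = individualist, 0 = collectivist

# Simulated empathic accuracy: a male advantage exists only in the
# individualist group, i.e. a pure interaction effect of size 0.2.
accuracy = 0.5 + 0.2 * (male * indiv) + rng.normal(0, 0.1, n)

# Design matrix: intercept, sex, culture, and the sex-by-culture
# interaction, which is the term of interest in this proposal.
X = np.column_stack([np.ones(n), male, indiv, male * indiv])
beta, *_ = np.linalg.lstsq(X, accuracy, rcond=None)
print("interaction estimate:", beta[3])  # close to the true 0.2
```

In the full design, the same interaction term would be tested jointly across the empathic accuracy index and the activation estimates for each a priori region; a significant interaction, rather than a main effect of sex, is what would support the gender-socialization account.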


References Abu-Akel, A., Palgi, S., Klein, E., Decety, J., & Shamay-Tsoory, S. (2015). Oxytocin increases empathy to pain when adopting the other- but not the self-perspective. Social Neuroscience, 10(1), 7–15. https://doi.org/10.1080/17470919.2014.948637 Christov-Moore, L., Simpson, E. A., Coudé, G., Grigaityte, K., Iacoboni, M., & Ferrari, P. F. (2014). Empathy: Gender effects in brain and behavior. Neuroscience and Biobehavioral Reviews, 46(Pt 4), 604–627. https://doi.org/10.1016/j.neubiorev.2014.09.001 Cuff, B. M. P., Brown, S. J., Taylor, L., & Howat, D. J. (2016). Empathy: A Review of the Concept. Emotion Review, 8(2), 144–153. https://doi.org/10.1177/1754073914558466 Gallese, V. (2003). The Roots of Empathy: The Shared Manifold Hypothesis and the Neural Basis of Intersubjectivity. Psychopathology, 36(4), 171–180. https://doi.org/10.1159/000072786 Ickes, W. (1993). Empathic Accuracy. Journal of Personality, 61(4), 587–610. https://doi.org/10.1111/j.1467-6494.1993.tb00783.x Ickes, W. (2001). Measuring empathic accuracy. In J. A. Hall & F.J. Bernieri (Eds.), The LEA series in personality and clinical psychology. Interpersonal sensitivity: Theory and measurement (pp. 219-241). Mahwah, NJ: Lawrence Erlbaum Associates. Ickes, W., Gesn, P. R., & Graham, T. (2000). Gender differences in empathic accuracy: Differential ability or differential motivation? Personal Relationships, 7(1), 95–109. https://doi.org/10.1111/j.1475-6811.2000.tb00006.x Klein, K. J. K., & Hodges, S. D. (2001). Gender Differences, Motivation, and Empathic Accuracy: When it Pays to Understand. Personality and Social Psychology Bulletin, 27(6), 720–730. https://doi.org/10.1177/0146167201276007


Kray, L. J., Howland, L., Russell, A. G., & Jackman, L. M. (2017). The effects of implicit gender role theories on gender system justification: Fixed beliefs strengthen masculinity to preserve the status quo. Journal of Personality and Social Psychology, 112(1), 98–115. http://dx.doi.org/10.1037/pspp0000124 Rodrigues, S. M., Saslow, L. R., Garcia, N., John, O. P., & Keltner, D. (2009). Oxytocin receptor genetic variation relates to empathy and stress reactivity in humans. Proceedings of the National Academy of Sciences of the United States of America, 106(50), 21437–21441. https://doi.org/10.1073/pnas.0909579106 Russell, D. T. A., Tchanturia, K., Rahman, Q., & Schmidt, U. (2007). Sex differences in theory of mind: A male advantage on Happé’s “cartoon” task. Cognition and Emotion, 21(7), 1554–1564. https://doi.org/10.1080/02699930601117096 Schulte-Rüther, M., Markowitsch, H. J., Shah, N. J., Fink, G. R., & Piefke, M. (2008). Gender differences in brain networks supporting empathy. NeuroImage, 42(1), 393–403. https://doi.org/10.1016/j.neuroimage.2008.04.180 Shamay-Tsoory, S. G., Abu-Akel, A., Palgi, S., Sulieman, R., Fischer-Shofty, M., Levkovitz, Y., & Decety, J. (2013). Giving peace a chance: Oxytocin increases empathy to pain in the context of the Israeli–Palestinian conflict. Psychoneuroendocrinology, 38(12), 3139–3144. https://doi.org/10.1016/j.psyneuen.2013.09.015 Triandis, H. C., Bontempo, R., Villareal, M. J., Asai, M., & Lucca, N. (1988). Individualism and collectivism: Cross-cultural perspectives on self-ingroup relationships. Journal of Personality and Social Psychology, 54(2), 323–338. http://dx.doi.org/10.1037/0022-3514.54.2.323


Wu, N., Li, Z., & Su, Y. (2012). The association between oxytocin receptor gene polymorphism (OXTR) and trait empathy. Journal of Affective Disorders, 138(3), 468–472. https://doi.org/10.1016/j.jad.2012.01.009 Zaki, J., & Ochsner, K. N. (2012). The neuroscience of empathy: Progress, pitfalls and promise. Nature Neuroscience, 15(5), 675–680. https://doi.org/10.1038/nn.3085

