Journal of the Bachelor of Arts and Science Volume 11, 2009
McGill University Montreal, Quebec, Canada www.mcgillbasic.com
Editors-in-Chief: Lucy Erickson, Kathleen Gollner
Editorial Board: Nicholas Dillon, Arielle Miles, Elena Ponte, Carolyn Poutiainen, Alyssa Salaciak, Hyun Song
Design and Layout: Emily Coffey, Anastassia Dokova, Johnson Fung, Darren Haber, Robert Rabbie, Jordana Remz
Ampersand is supported by the Bachelor of Arts and Science Integrative Council, the Arts Undergraduate Society, the Dean of Arts Fund, and the SSMU Campus Life Fund.
The moral rights of the authors have been asserted
Printed on 100% recycled paper
About the Contributors
Autism: A Critique of Simon Baron-Cohen’s Theory of the Extreme Male Brain
Lorna Sampson Riden 12
Nature of Nurture: On Irreconcilability
Stephan Hurtubise 21
An Inconvenient Bias: Journalistic Balance in Science News Coverage of Global Warming (A Call for Media-Climate-Change)
Kota Talla 28
In Reverence of Divine Architecture: An Essay on the Medieval World View of Geography and Cosmology
Kathleen Gollner 33
The History of the Potato in Europe: How a New World Food Conquered the Old World
Maya Kaczorowski 42
The Singularity and the Syllogism: Mathematics and Astronomy in Ulysses
Lindsay Waterman 52
Structuring Rationality: Deviation and Nature in Enlightened Thought
Nicholas Dillon 60
“Really Little Men,” Gone Wrong? Addressing the Consequences of Enlightened Thought on Social Hierarchies
Maia Woolner 65
The Destabilization of Gender: Why Critiques of Male Circumcision are Relevant to All Individuals
Ampersand is committed to the promotion of interdisciplinary thought, demonstrating how divided fields can be woven together in an insightful way. We take part in a tradition, one that goes back so far and so deep that its roots are lost in our collective memory.
A survey of history reveals the progressive impact of interdisciplinary thought. There was no demarcation of anatomy and artistry for da Vinci, or of physiology and philosophy for Galen. There were no subject guides for Kant, Hume, Locke, or Descartes. Bridges across paradigms were once common: Newton was at once an alchemist and a physicist, Freud an initiator and a perpetuator of psychology. Though the tradition today is largely obscured by artificial boundaries and specialized fields, Ampersand highlights domains where integrative discourse takes place. That is, discourse that dares to transcend the gaps of modern thought; discourse that reclaims the value, and the necessity, of integrative study.

For Ampersand, ideas originate in the tension between borders. Consider how unlikely couples—math and literature, empowerment and circumcision, bias and objectivity—provide insight. And how neuroscience and linguistics, news media and environmental studies open new frontiers alongside previously trodden ones: across medieval continents, through potato fields, on meteorite tails, and within Enlightened thought. We hope to challenge what you think you know—about order, about change, about the relationship of arts and science. For that is progress, in the grand tradition.
About the Contributors

Nicholas Dillon, U1 Cognitive Science and History & Philosophy of
Science, responds to most names including “hey you” and “get off my lawn.” He likes brains, words, ideas, a couple of other things, and, most of all, the sun. He balances his TV viewing between Antiques Roadshow and America’s Most Wanted, and hopes to become a professor. His essay was written for HIST 350: Science and the Enlightenment, taught by Prof. Nicholas Dew.
Yun Gao is a U3 student majoring in Biomedical Sciences, with a double minor in History and Sexual Diversity Studies. She likes cooking, eating, and talking about sex. Her paper on circumcision is her baby from a semester-long research project. This is an edited version of her original 50-page behemoth.
Kathleen Gollner came to McGill for a B.A. & Sc. in hopes of pursuing
science writing, but not before pursuing her love of dance, spending time with both the National Ballet and the Royal Winnipeg Ballet. Now a U2 student in Cognitive Science and History & Philosophy of Science, she wrote her paper on medieval perspectives of geography and cosmology for HIST 214: Introduction to European History.
Stephan Hurtubise is a U3 B.A. & Sc. student, pursuing a double major
in linguistics and psychology. He was recently accepted into the Graduate Program in Linguistics at McGill and will begin his master’s in the fall of 2009. He hopes to become a professor and to share his enthusiasm for the discipline with students who may not yet know what they want to do. “On Irreconcilability” was written for the course PSYC 532: Cognitive Science, taught by Professor Thomas R. Shultz.
Maya Kaczorowski, U2 Arts & Science Joint Honours Mathematics and
Economics, has always been interested in gastronomy and the evolution of cooking. This is her third food-related publication. She previously examined ice cream headaches in 2002 and false positive opiate tests due to poppy seed consumption in 2008. This paper was not written for a course.
Lorna Sampson is currently in a double major in Biomedical Sciences and
International Development. She is a U2 student looking to pursue a career in international health law. Her paper was written for ANTH 423: Mind, Brain, and Psychopathology. In it, she integrates the arts and sciences by engaging critically with Simon Baron-Cohen’s theory that the medical condition autism can be seen as a presentation of the extreme male brain.
Kota Talla, B.Sc. Microbiology & Immunology ‘09, is currently exploring the different sides of science. Born in Belgium, he is carrying out biochemical research with the intention of pursuing interests in the life sciences. Besides spreading the word about climate change, he advocates scientific literacy and outreach in the community. His piece on bias in science news was written for ENGL 378: Media Ethics.
Lindsay Waterman graduated in December with majors in English Literature and Molecular Biology. Having decided early that the road more traveled by wasn’t for him, he set his sights on medicine. He hopes the career will allow him to keep learning in both the humanities and sciences—and to perhaps buy a Steinway grand piano.
Maia Woolner graduates this May with an honours degree in European
History and a minor in Italian studies. She is planning to focus on her personal development as a creative writer and musician for the next few years before returning to graduate school. She has hopes of becoming a university professor and a historical fiction author!
Surely you have heard a joke that begins with a physicist, a mathematician, and a philosopher encountering some everyday situation; when asked a simple question about it, they give ridiculously different answers. These jokes are funny because they vividly bring to mind how our backgrounds influence our perception of the world. And it is this world and its inhabitants that we try to make sense of in the academic quest for knowledge. To overcome the tremendous complexity of the task at hand, the principle of the division of labour has been implemented to the fullest: the subject matter has been divided up meticulously into disciplines, sub-disciplines, sub-sub-disciplines, and so on, in which each researcher tackles a specific problem with a set of specific methods. At an institutional level, this divide-and-conquer strategy is reflected in the division of universities into faculties, departments, programs, research groups, and so forth. Every researcher works hard at her own little piece of the jigsaw puzzle of our world. New results are obtained, papers are published, books are written, new research projects are started, and grant applications are submitted.
However, amid the feverish productivity of academia, one part of the enterprise is often postponed to some distant future: the individual results have to be assembled, the pieces of the puzzle brought together. This task might be considered trivial, not worth the effort, and unlikely to produce any new knowledge. In fact, the opposite is the case: it is far from simple, requires much toil, and can lead to surprising new insights. After all, we all know the old adage that “the whole is greater than the sum of its parts.” What makes the synthesis of results from different disciplines difficult is that each research area has its particular language, methods, and commonly accepted background assumptions, and that in the drive forwards we often forget to look sideways to keep track of the other pieces of the jigsaw puzzle. The division of labour that is intended to increase productivity thus results in crippling compartmentalization, and in the jokes mentioned above. Moreover, just as in ordinary life, what is unfamiliar is often considered unpleasant and undesirable (see, for example, “math anxiety”). This situation is not hopeless. We can overcome it by taking the time to broaden our horizons, to look beyond the ends of our noses, to learn to think outside the box. This might sound trite, but that doesn’t mean it’s easy. Crossing the boundaries of disciplines means adopting new points of view, asking unfamiliar questions, and being willing to accept an unfamiliar set of
assumptions and methods. Being interdisciplinary doesn’t just mean establishing a new discipline at the intersection of existing ones; rather, it is an attitude that involves continuously searching for new doors and the courage to open them. When the unfamiliar is embraced, new connections are established, which, in the long run, is rewarding and fun. The greats of the past, like Aristotle, Descartes, and Einstein, had no qualms about transgressing the boundaries of disciplines. They skillfully moved between the arts and sciences, following the motto: “Arts without Sciences are empty, Sciences without Arts are blind.” This journal presents the work of McGill undergraduate students who have ventured to bridge the traditional gaps between disciplines. Their explorations range over such diverse topics as potatoes and circumcision, autism and climate change, and many others in between, and they bear witness to the immense richness of our world and to the authors’ audacity and skill in presenting it to us. So, plunge into the practice of being interdisciplinary: don’t start with the paper whose title sounds most familiar to you, but begin with the one that sounds most foreign, and let yourself experience your horizons expanding!
Dirk Schlimm
Assistant Professor
Department of Philosophy and School of Computer Science
McGill University
Neurological & Emotional Barriers 1
Autistic children are separated from the world by an inability to socialize and identify with people. Attempts to understand the neurological basis of this disorder include a theory which presupposes the existence of separate male and female brains, and attributes autism to an extreme-male case. Lorna Sampson explores and criticizes this theory with a careful analysis of its basis and its weaknesses.

Autism is widely regarded to be one of the most severe childhood psychiatric conditions (Rutter, 1978; Frith, 1989; Baron-Cohen, 1995). It is diagnosed on the basis of abnormal social development, atypical communicative development, restricted interests, and repetitive activity, along with limited imaginative ability (DSM IV, 1994). It is claimed that children with autism fail to become social, remain on the periphery of any social group, and become absorbed in obsessive interests and activities such as collecting unusual objects or facts (Baron-Cohen, Knickmeyer, & Belmonte, 2005; Baron-Cohen, 2003). To understand Simon Baron-Cohen’s theory of the Extreme Male Brain, attention must be paid to what he considers gender differences in the brain observed in ‘normal’ individuals; that is, individuals who do not have autism. The notion that men and women have physiologically different brains has recently become the subject of continuing scientific studies, media interest, and ‘pop psychology.’ Traditionally, the main mental domains in which sex differences have been studied are verbal and spatial abilities (Baron-Cohen, Knickmeyer, & Belmonte, 2005). Simon Baron-Cohen uses his clinical experience and academic research to suggest two neglected dimensions for understanding human sex differences: ‘empathizing’ and ‘systemizing’ (Baron-Cohen, Knickmeyer, & Belmonte, 2005). He defines the male brain psychometrically as those individuals in whom systemizing is stronger than empathizing, and the female brain as those with the opposing cognitive profile, or as those who tend to emphasize empathy (Baron-Cohen et al., 2006; Baron-Cohen, 2003). Baron-Cohen’s narrow definition of autism is an extreme overemphasis of the normal male brain profile in a given individual. This paper takes a critical position on the validity of Simon Baron-Cohen’s Extreme Male Brain theory of autism. It is divided into two parts: the first recapitulates Baron-Cohen’s theory, and the second evaluates the lack of specificity in his claims. In the analytical half of this paper, special attention is paid to noted characteristics, behaviours, and symptoms of autism, which represent the diagnostic criteria and classifications from the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). In addition, a supplemental list gathered empirically by a leading Canadian children’s autism therapist will be examined.

The Empathizing-Systemizing Theory of Sex Difference in Cognition

It has been widely accepted that, although men and women do not differ in general intelligence, differences in brain function between the sexes are illuminated through specific cognitive tasks. In order to understand the extreme version
of the male brain, the female and male brains must first be individually examined. As described by Baron-Cohen:

“Differences favouring males are seen on the mental rotation test, spatial navigation including map reading, targeting, and the embedded figures test […] Males are also more likely to play with mechanical toys as children, and as adults they score higher on engineering and physics problems. In contrast, females score higher on tests of emotion recognition, social sensitivity, and verbal fluency. They start to talk earlier than boys do and are more likely to play with dolls as children.” (Baron-Cohen, Knickmeyer, & Belmonte, 2005)

According to the empathizing-systemizing (E-S) theory of psychological sex difference, such dissimilarities reflect stronger systemizing in males and stronger empathizing in females (Baron-Cohen, 2003). “Empathizing is the drive to identify another person’s emotions and thoughts, and to respond to them with an appropriate emotion” (Baron-Cohen, 2003). This is not merely the process of ‘mind-reading’, a simple assessment of what another person is thinking or feeling. Empathizing necessarily involves an emotional reaction, what Baron-Cohen deems “[a]n emotion triggered by the other person’s emotion” (Baron-Cohen, 2003). It is understood that females, on average, will engage in spontaneous sympathizing more frequently than males. The skill
of empathizing, like musical ability, is found in the population along a Gaussian curve, with most of the population exhibiting a normal range of empathizing capability (Baron-Cohen, 2003). Individuals may vary significantly in their ability to empathize depending on their deviation from the centre of this curve. In contrast to the common female exhibition of empathy, males tend to systemize, focusing on “the drive to analyze, explore and construct a system” (Baron-Cohen, 2003). The systemizer intuitively assesses a system and figures out how it works by paying attention to the system’s innate rules or laws. This process is pursued in order to understand the current system or in an effort to invent a new one. Systemizers operate on an ‘if-then’ causational basis (Baron-Cohen, 2003). By monitoring the input, operation, and output, they can discover what makes the system function more or less efficiently and the range of outputs that it can produce (Baron-Cohen, 2003). Systemizing skills follow the same Gaussian curve that empathizing skills do (Baron-Cohen, 2003), with the majority of the population sitting at the centre of the curve, and some individuals at either extreme.

The Influence of Culture

Given the observable impact of culture on cognitive differences attributed to gender, societal factors must be addressed to ensure that the reader understands the
influence of socio-cultural factors on sex differences. Little boys are supposed to play with trucks and little girls are supposed to play with dolls, or so Western societal stereotypes suggest. Furthermore, one must consider the influence that the media, same-sex peers, and parents have on the decisions that children make early in their ontogenetic development. A clear example can be found in the different ways that parents speak to their sons, as opposed to the way parents speak to their daughters: this could very well contribute to the differences observed in the development of empathy (Baron-Cohen, 2003). However, Baron-Cohen and others within the scientific community argue that because some divergent gender characteristics are observed at birth, it is unlikely that culture is the only influential factor in creating the male-female systemizing-empathizing dichotomy (Karmiloff-Smith & Thomas, 2002).

Biology: The Truth about Androgen Levels

Socio-cultural determinism creates an incomplete account of why the female brain is found to be generally better at empathizing, and the male brain at systemizing (Baron-Cohen & Wheelwright, 2004). In addition to the psychological and cognitive sex differences, it has been determined that due to prenatal hormonal fluctuations, male and female brains develop differently on a neuronal basis, thus creating structurally different brains (Courchesne, Redcay, & Kennedy, 2004; Baron-Cohen, Knickmeyer, & Belmonte, 2005). During fetal development, the endocrine-driven release of testosterone shapes the brain in different ways. For example, exposure to androgens prenatally increases spatial performance in females in many species including humans (Resnick et al., 1986; Hines & Green, 1991), while castration has been shown to decrease the spatial ability of male rats (Williams, Barnett, & Meck, 1990). Consequently, neuroendocrinal evidence seems to be consistent with the notion of a male or female brain type, as a result of the varying levels of circulating androgens during critical periods of neural development. By studying the prenatal activating effect of sex hormones on the brain, Norman Geschwind proposed that fetal testosterone affects the growth rates of the two hemispheres of the brain (Baron-Cohen, 2003). The greater the testosterone level, the faster the development of the right hemisphere, and subsequently, the slower the development of the left hemisphere. The right hemisphere is known to be involved in spatial ability, which is “assisted by the ability to systemize” (Baron-Cohen, 2003), just as the left hemisphere is implicated in communication and language, which in turn helps in one’s ability to empathize. Although greatly criticized, there is some support for Geschwind’s hypothesis that males have more advanced right hemisphere skills and females have superior left hemisphere skills (Geschwind, 1987). Simon Baron-Cohen studied babies whose mothers had undergone amniocentesis (the extraction of amniotic fluid) during the first trimester of pregnancy so that prenatal testosterone levels could be studied, and then followed up with the children at twelve and twenty-four months of age (Baron-Cohen, 2003). He was able to identify those with lower testosterone during fetal development as having higher levels of eye contact and a
larger vocabulary and those with higher testosterone levels prenatally as having a less advanced vocabulary and reduced eye contact (Baron-Cohen, 2003)—precisely what Geschwind had predicted (Baron-Cohen, 2003).
Furthermore, it has been discovered that the cerebrum as a whole is approximately 9 percent larger in males, a difference driven by a larger fraction of white matter than grey (Baron-Cohen, Knickmeyer, & Belmonte, 2005). This increased brain size predicts both decreased interhemispheric connectivity and a smaller corpus callosum, resulting in an increase in local connectivity and a decrease in long-range connectivity (Ringo, Demeter, & Simard, 1986; Baron-Cohen, Knickmeyer, & Belmonte, 2005). This directly affects locally derived systemizing skills, as well as empathizing skills, both of which require the integration of information from multiple neural sources in the brain (Baron-Cohen, Knickmeyer, & Belmonte, 2005). Due to the positive relationship between local connectivity and systemizing skills, and between long-range connectivity (integration of multiple neural sources) and empathizing skills, the increased local connectivity in the male brain is directly reflected in its greater systemizing abilities (Baron-Cohen, Knickmeyer, & Belmonte, 2005).
The Extreme Male Brain (EMB) theory evolved by expanding the E-S theory of typical sex differences in the general population. Hans Asperger first put forward the Extreme Male Brain theory of autism in 1944; however, a translation of his German text did not reach the United Kingdom until 1991. Asperger originally stated that “[t]he autistic personality is an extreme variant of male intelligence” (Asperger, 1944). His theory was that people with high-functioning autism had a variant of brain type S, an extreme version of the systemizing male brain (Baron-Cohen, 2003). Simon Baron-Cohen took up the theory of the EMB by conjecturing that “understanding sex differences in the general population has implications for understanding the causes of autism spectrum conditions” (Baron-Cohen, Knickmeyer, & Belmonte, 2005). Baron-Cohen’s theory states that individuals on the autistic spectrum are characterized by impairments in empathizing coupled with normal or even advanced systemizing skills (Baron-Cohen, 2003).
The Extreme Male Brain Theory of Autism
Autism is predominantly a male condition (with a male to female sex ratio of 4:1) (Rutter, 1978), and 75 percent of those with autism also suffer from additional mental handicap. In the case of people with a form of autism called Asperger Syndrome (AS) whose IQs are in the normal range, the sex ratio is even more pronounced at 9:1 (male:female)
(Wing, 1981). Baron-Cohen argues, then, that the presence of autism and AS is strongly correlated with being male.

High-Functioning Autism and Asperger Syndrome

Before delving deeper into Baron-Cohen’s support for his theory linking the EMB and autism, an important distinction needs to be made. Since autism is clinically diagnosed along a continuum, individuals with a range of mild to severe symptoms are placed along this spectrum. It is necessary to further understand the categories as described by Baron-Cohen, as they have driven the framework within which the EMB theory exists. Baron-Cohen lists the three pillars along the autism spectrum as “classic autism, high-functioning autism, and Asperger Syndrome (AS)” (Baron-Cohen, 2003). Classic autism, as Baron-Cohen describes, is characterized by poor language ability, low IQ, poor social skills, limited imagination, and obsessive interest in unusual topics. Patients with classic autism can be considered to function “as if they lived in a bubble” (Baron-Cohen, 2003). In 1990, a shift in thought occurred when it was discovered that children with normal or even above-average IQ were increasingly being diagnosed with autism (Baron-Cohen, 2003). It was found that these “high-functioning” autistic children had late language development but remained advanced in ‘islets of ability’ in mathematics and other rule-based subjects. Finally, Baron-Cohen describes Asperger Syndrome patients as being “a small step away from high-functioning autism” (Baron-Cohen, 2003): not only do they share the tendency in autistic patients to have normal to high IQ and to speak ‘on time’, but they also have the same difficulties in social and communication skills that are seen in autistic patients (Baron-Cohen, Jolliffe, Mortimore, et al., 2006). This distinction becomes blurry in Baron-Cohen’s explanation of the Extreme Male Brain, as his theory seems applicable only to the higher-functioning end of the spectrum, omitting a wide array of patients.

EMB – Physiology and Neuroanatomy

In order to make Hans Asperger’s claim accessible today, Simon Baron-Cohen took his strict definition of the male and female brain and assessed the EMB theory empirically through cognitive tests and advanced neuroimaging. Baron-Cohen has shown through data from two questionnaires, the empathy quotient (EQ) and the systemizing quotient (SQ), that the extreme form of the S-brain exists in each of the two genders (Baron-Cohen, Knickmeyer, & Belmonte, 2005). By studying patients with AS, Baron-Cohen discovered reduced empathy (through the EQ) and intact or superior systemizing (through high SQ scores) (Baron-Cohen, Knickmeyer, & Belmonte, 2005). His research notes the difficulty in applying these tests to people with classic autism as a result of their reduced language and below-average intelligence. Therefore, he postulates characteristic behaviours such as “insistence on sameness, repetitive behaviour, obsessions with lawful systems and…superior attention to the detection of change” (Baron-Cohen, 2003) as evidence for hyper-systemizing. In further support of Baron-Cohen’s theory, looking at the autistic brain neuroanatomically reveals an exaggerated version of what may be occurring in a typical male brain: an imbalance between local and long-range neuronal activity (Baron-Cohen, Knickmeyer, & Belmonte, 2005). In autism, it has been found that the long-range connectivity monitored during an empathizing task is abnormally low (Welchew, Ashwin, Berkouk et al., 2005). Functional Magnetic Resonance Imaging (fMRI) morphometry has shown that children with autism tend to have larger than normal brains, reflecting an increase in white matter over grey matter (Courchesne, Redcay, & Kennedy, 2004). Similarly, research on amygdaloid, corpus callosal, and cerebral cortex development in children with autism shows an exaggeration of these structures as compared to those seen in boys without autism. These particular structures are thought to affect short-distance tracts more than long-distance tracts, consistent with the long-range cognitive and empathetic deficits seen in these patients (Baron-Cohen, Knickmeyer, & Belmonte, 2005).

Criticism of EMB

The discussion in the following segment of this paper will focus on an analysis of Simon Baron-Cohen’s Extreme Male Brain theory. Particular emphasis will be placed on the over-generalizability of Baron-Cohen’s theory, which
weakens the relevance of his argument as a whole. As a vehicle for analysis, the elements of Baron-Cohen’s theory will be tested against a representative list of characteristics and symptoms taken from the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders, with further scrutiny via a supplementary list compiled by a current Canadian senior children’s autism therapist. Annette Karmiloff-Smith has criticized Simon Baron-Cohen on the basis of selectivity within his definition of autism (Karmiloff-Smith & Thomas, 2002). She contends that Baron-Cohen’s account is based only on cognition, omitting key emotional and behavioural characteristics of autism. Karmiloff-Smith argues that “the process of development itself violates key assumptions of [Simon Baron-Cohen’s] static cognitive neuropsychology model and thus invalidates the direct inference from impairment to cognitive structure” (Karmiloff-Smith & Thomas, 2002). Furthermore, Baron-Cohen’s account of autism is grounded in an evolutionary psychology framework which denotes innate, static, and encapsulated modules of the mind, and fails to be applicable to the fluid symptomology
considered in this particular psychopathology. Moreover, autism, like other behaviourally defined developmental disorders, requires explanations in terms of numerous interacting genetic and environmental risk factors (Gottlieb & Halpern, 2002). Not only is this interaction complex at any given point in time, but it is also constantly changing throughout development. Simon Baron-Cohen’s theory of the Extreme Male Brain fails to address this multifaceted reality. As stated previously, Baron-Cohen’s notion of autism is restricted to the assessment of people with Asperger Syndrome and high-functioning autism. Providing evidence based solely on autistic patients with higher-functioning language skills and IQs, while simultaneously making generalizations about the entire spectrum of disorders, creates a very narrow and overly specific field of reference in which to gather evidence. Below are four empirically determined autistic characteristics that are not covered by Simon Baron-Cohen’s EMB theory and which illustrate the narrow, and therefore ultimately inadequate, scope of the EMB theory.

Transition along the Spectrum

The DSM-IV states that “the impairment in social interaction may change over time in Autistic Disorder and may vary depending on the developmental level of the individual” (DSM IV, 1994). Simon Baron-Cohen’s Extreme Male Brain theory fails to address this distinctive behavioural developmental capacity of which children with autism are capable. Baron-Cohen makes little,
if any, reference to the reclassification of the disorder throughout an individual’s ontogenetic development. It is widely accepted that a child has reached his or her developmental potential by age seven to nine, though this has yet to be fully verified (Sampson, 2008). Therefore, the greatest therapeutic and developmental work is undertaken before this age. Below is an anecdote concerning a young girl initially diagnosed with autism whose diagnosis was later revised from autism to Asperger Syndrome through a range of intensive, multi-faceted therapeutic approaches.
Kila (names have been changed) was diagnosed with low-functioning autism as a toddler. She barely spoke, and when she did, it was in grumbles and grunting utterances. Her social skills were non-existent and she would throw tantrums because no one could understand her. Kila’s parents sought professional Applied Behavioural Analysis-based treatments combining the expertise of psychologists, psychiatrists, speech therapists, and autism therapists, who worked on standard interventions such as social and play-based programs, assistance with communication, gross and fine motor skills, behavioural analysis and intervention, and speech therapy. By the age of nine, Kila had been re-diagnosed with Asperger Syndrome. She can communicate effectively, has moderate social skills, and exhibits the ability for imaginary play (Sampson, 2008). Baron-Cohen’s thesis of a rigidly deviant brain structure does not allow for developmental transformations such
as the case discussed above in Kila's example. This limited approach is rooted in the fact that Baron-Cohen views autism as existing within a strict and static definition with little room for the fluctuation seen in this disorder. However, further examples need to be examined to clearly understand the shortcomings of Baron-Cohen's argument.

Individual Symptomatology within the Spectrum

Simon Baron-Cohen clearly defines the three classifications of autism as he uses them to prove his theory: classic or low-functioning autism, high-functioning autism, and Asperger Syndrome. He makes a clear case for the EMB in high-functioning autism and AS, and asserts the view that due to the below-average language skills and IQ often exhibited in low-functioning autism, "it is less straightforward to test systemizing" (Baron-Cohen, Knickmeyer, & Belmonte, 2005). What Baron-Cohen neglects to address is the range of skills, symptoms, and behaviours that an individual can have along the spectrum itself. An example of these parallel behaviours is given in the case of Christopher.

Christopher is almost seven years old and has classic autism. He can engage in imaginary play, which is a symptom of a very high-functioning autistic child because of the necessity of having the capacity for a 'theory of mind.' Christopher has a dollhouse as part of his therapy, which was given in order to stimulate play. If Christopher had simply knocked the dolls over or thrown them on the floor upon seeing the dollhouse, this would not have indicated a capable imagination. Instead, Christopher moves the dolls into different rooms and puts them to bed. He will hand the dolls to his therapist with clear indications that she too should play, indicating extremely advanced knowledge of face recognition and even empathy. This is in contrast to Christopher's originally non-existent verbal skills. At the start of his therapy, he could not produce intelligibly spoken words. To date, he can make three-word sentences, varying his limited, though growing, vocabulary to denote demands and questions, which, for such a low-functioning symptom, is a notable breakthrough. Christopher's case is evidence of how the behaviours and skills of autistic children span a broad spectrum, and cannot be narrowly defined as done by Simon Baron-Cohen (Sampson, 2008).
High and Low Sensitivity

A further look at symptomatology reveals an interesting physical characteristic, typical of children with autism, that also shows great internal variation. The DSM IV explains this phenomenon as an "odd response to sensory stimuli, [for example], a high threshold for pain, oversensitivity to sounds or being touched, exaggerated reactions to light or odors" (DSM IV, 1994). This augmented response to sensory stimuli is also known as "high sensitivity", which affects some, but not all, autistic children. Baron-Cohen's extreme variant of an S-Brain type in people with autism cannot account for these frequently observed behaviours. Sensory stimuli have little to do with 'mindreading' or social behaviour, and yet atypical responses to them are exhibited in a number of autistic children. A frequently used therapy for autistic children is massage, or rubbing the skin with varying textures, as they often experience a tingling or numb sensation in their limbs and trunk region (Sampson, 2008). This, among other related issues, can affect their balance and spatial awareness—a skill in which S-type Brains should be proficient, according to Baron-Cohen. The existence of high and low sensitivity varies among children with autism, and this challenges Baron-Cohen's claim that autistic children have normal or even advanced spatial awareness. It must be asserted, then, that through his overly generalized definition of autism, Simon Baron-Cohen has made his theory irrelevant.
Brain-to-Body Disconnect

Finally, there is a fascinating brain-body disconnect that has been witnessed in children with autism. Although little research has been done on this particular feature of autism, it further reinforces the notion that the EMB theory does not cover the entire spectrum of the disability and omits the range of specificity required in the analysis of such a disorder. Below is an example of how this brain-body disconnect can be manifested.
Evan is seven years old and has not yet been successfully toilet-trained. Evan's brain does not understand the signals that his body is sending him; he does not notice the urgency that he experiences prior to having an 'accident'. His ability to understand his own physiological responses has not progressed past a toddler's level. In addition, Evan has trouble identifying, consciously or otherwise, the various parts and movements of the mouth required to form words. His speech therapist will gently stimulate his gums and mouth with a toothbrush to encourage feeling and allow him to connect the sounds that he is intellectually trying to make, but simply cannot, with the parts of the body required (Sampson, 2008). Clearly, this disconnect cannot be attributed to an extreme version of a male brain as described by Simon Baron-Cohen.

Conclusion

This analysis of Simon Baron-Cohen's Extreme Male Brain theory through the lens of empirically determined behavioural, emotional, and physiological characteristics of autism confirms Annette Karmiloff-Smith's assessment:
Baron-Cohen's selective definition of autism has put serious constraints on the relevance of his cognitive theory. Furthermore, Baron-Cohen's failure to provide an ontological narrative for autism weakens his claims considerably. It is interesting to note that, although Baron-Cohen insists that individuals can fall anywhere along the spectrum of brain types, his theory does not speak to the entire autistic spectrum. Given that specificity is what Simon Baron-Cohen lacks, perhaps it is in the details of this complex disorder that its true definition is found. Conceivably, the ingrained human desire to classify and isolate the physical world into a lawful system cannot be satisfied in the case of autism—this kind of systemizing is not a viable process for conceptualizing autism, and consequently, Baron-Cohen's Extreme Male Brain theory falls short of his intended mark.

REFERENCES

American Psychiatric Association. (1994). Diagnostic and Statistical Manual of Mental Disorders (4th ed.). Washington, DC: American Psychiatric Association.
Asperger, H. (1991). Autism and Asperger's Syndrome. Cambridge: Cambridge University Press.
Baron-Cohen, S. (1995). Mindblindness: An essay on autism and theory of mind. Cambridge, MA: MIT Press/Bradford Books.
Baron-Cohen, S., Knickmeyer, R. C., & Belmonte, M. K. (2005). Sex differences in the brain: Implications for explaining autism. Science 310: 819-23.
Baron-Cohen, S. (2003). The Essential Difference: The Truth About the Male and Female Brain. New York, NY: Basic Books.
Baron-Cohen, S., Jolliffe, T., Mortimore, C., & Robertson, M. (2006). Another advanced test of theory of mind: Evidence from very high functioning adults with autism or Asperger Syndrome. Journal of Child Psychology and Psychiatry 38: 813-22.
Courchesne, E., Redcay, E., & Kennedy, D. (2004). The autistic brain: Birth through childhood. Current Opinion in Neurology 17: 489-96.
Frith, U. (1989). Autism: Explaining the enigma. Oxford: Basil Blackwell.
Geschwind, N., & Galaburda, A. (1987). Cerebral Lateralization. Cambridge, MA: MIT Press.
Gottlieb, G., & Halpern, C. T. (2002). A relational view of causality in normal and abnormal development. Development and Psychopathology 14: 421-35.
Happe, F. (1994). Autism: An Introduction to Psychological Theory. New York, NY: Harvard UP.
Hines, M., & Green, R. (1991). Human hormonal and neural correlates of sex-typed behaviours. Review of Psychiatry 10: 536-55.
Karmiloff-Smith, A., & Thomas, M. (2002). Are developmental disorders like cases of adult brain damage? Implications from connectionist modelling. Behavioral and Brain Sciences 25: 727-88.
Leslie, A. M., & Thaiss, L. (1992). Domain specificity in conceptual development: Evidence from autism. Cognition 43: 315-24.
Leslie, A. M., & Frith, U. (1988). Autistic children's understanding of seeing, knowing, and believing. British Journal of Developmental Psychology 22: 225-51.
Resnick, S., Berenbaum, S., Gottesman, I., & Bouchard, Jr., T. (1986). Early hormonal influences on cognitive functioning in congenital adrenal hyperplasia. Developmental Psychology 22: 191-98.
Ringo, J., Doty, R., Demeter, S., & Simard, P. (1994). Time is of the essence: A conjecture that hemisphere specialization arises from interhemispheric conduction delay. Cerebral Cortex 4: 331-43.
Rutter, M. (1978). Diagnosis and definition. In Rutter, M. & Schopler, E. (Eds.), Autism: A reappraisal of concepts and treatment. New York, NY: Plenum Press.
Sampson, M. (November 28, 2008). Telephone interview.
Tager-Flusberg, H. (Ed.) (1999). Neurodevelopmental Disorders. New York: MIT Press.
Welchew, D., Ashwin, C., Berkouk, K., Salvador, R., Suckling, J., Baron-Cohen, S., & Bullmore, E. (2005). Functional disconnectivity of the medial temporal lobe in Asperger's Syndrome.
Biological Psychiatry 57: 991-98.
Williams, C., Barnett, A., & Meck, W. Organizational effects of early gonadal secretions on sexual differentiation in spatial memory. Behavioural Neuroscience 104: 84-97.
Wing, L. (1981). Asperger Syndrome: A clinical account. Psychological Medicine 11: 45.
Language & the Brain
Do we learn language? Or is language pre-wired, ingrained in our genes? Tracing the two theories, Stephane Hurtubise explains the debate and, ultimately, questions whether the answer is necessarily one theory over the other.

Attempts to describe the inner workings of the mind often fall into a "nurture versus nature" debate: either the faculties of the mind were nurtured into being—that is, shaped by environmental influences—or they are inherent, shaped by our genetic material. The case of language provides an interesting instance herein. For cognitive scientist Steven Pinker, knowledge from modern linguistics suggests that the rules of language are inherent. If these rules are inherent, then specialized brain structures were naturally selected over time, eventually endowing humans with the faculty of language. Another model—the connectionist model—describes the case for nurture. This model, supported by cognitive scientists like Jeffrey Elman, asserts that the parts of the mind which produce language are formed as we learn the rules of language. In "So How Does the Mind Work?" Pinker argues that "the connectionist models most popular among cognitive psychologists are ill-equipped to represent the logical structure of propositions" (Pinker, 2005), but is there really a choice to be made? Granted, these two possibilities seem at odds, but are they really mutually exclusive? These questions will be addressed by elaborating on the two theories, considering their strengths and their potential for reconciliation. Finally, a hybrid model in which symbol processing architectures are combined with connectionist architectures is presented (Pinker, 2005). This paper will conclude by proposing that the theories under consideration need not be disjoint, for they can be co-adopted in a beneficial way.
Nativism: The Case for Nature

For Pinker, the "complexity in the mind is not caused by learning; learning is caused by the complexity in the mind" (Pinker, 1994). He has conceded that the environment is crucial, but he stresses that a child's mind is only capable of attaining its complexity because genetics endows the brain with an innate faculty for it (Pinker, 1995). For one, the role of a heritable language faculty is supported by the shape of the human vocal tract. The modification of the vocal tract for the demands of speech actually compromises the acts of breathing, swallowing, and chewing (Pinker, 1995); thus language must have had an immense evolutionary benefit in order for it to be naturally selected despite being physiologically maladaptive (Pinker, 1995). Noam Chomsky's theory of Universal Grammar (UG) also supports the case for innateness. It supposes that there is a catalogue of language universals inherent in the mind which restricts the variation of human language in a non-arbitrary way (Pinker, 1995). For example, in morphology—word form—derivational suffixes (which change the
category of a word, i.e. in the case of “run” to “runner”) are always closer to the original stem-word than inflectional suffixes (which help a word “fit” into a sentence by adjusting for plurality etc.). Since this pattern is apparent across all languages, it implies that the order of rules for the creation of new words is shared amongst all human beings (Pinker, 1995). UG, then, is said to sanction the structures required for the acquisition and utilization of language (Pinker, 1995). In further support of innate brain structures that enable language, Pinker makes an important distinction between language and general intelligence:
the two are seen to be independent in both neurological and genetic disorders (Pinker, 2003). Strokes can leave adults with a catastrophic loss in language yet render other (nonverbal) aspects of intelligence untouched (Pinker, 1995). Conversely, there are syndromes, including Spina Bifida and William's Syndrome, which allow "excellent language abilities" to "coexist with severe retardation" (Pinker, 1995). But the innateness of language is best depicted in how children acquire language.

Pinker has noted how at eighteen months, "children's two-word combinations are highly similar across cultures" (Pinker, 1995). This lends credence to the possibility that there exists a "neurologically determined 'critical period' for successful language acquisition" (Pinker, 1995). According to studies cited by Pinker, parents do not express approval or disapproval based on the grammatical acceptability of their children's utterances; instead, they tend to express judgements based on an utterance's truth or falsity (Pinker, 1995). Thus, the only evidence for children acquiring language is in the form of the sentences spoken around them (positive evidence). There is virtually no negative evidence, that is, evidence that explicitly indicates to the child which expressions are proscribed; such input is statistically rare at best. Thus, when guessing what expressions are permissible in the language they are acquiring, children surpass the allowable sets of sentences for that language. For example, children recognize the unacceptability of adding an "s" to the end of a word during regular pluralisation within compounds ("rats-eater" is less grammatical than "rat-eater"), despite no provision of explicit evidence (Pinker, 1995). Pinker has reasoned that the language learning "machine" must already know something to begin with—something telling the child's brain how to recover from over-generation of possible expressions (Pinker, 1995). Additional support for this particular hypothesis comes from the fact that the use of "Motherese"—the way parents talk to their children using different tones of voice, lengths of utterance, levels of simplicity, and types of content—does not bring the child through the developmental stages of language acquisition any faster (Pinker, 1995). Taken together, these facts provide strong circumstantial evidence in
favour of innate mental faculties that allow for language. But how do these mental faculties compile meaningful expressions? Within the pages of The Language Instinct, Pinker suggests that each person's brain contains a lexicon matching words to their meanings—acquired by a universal, age-determined period of rote learning during childhood whereby the sounds are linked to the concepts they stand for—and a set of rules which dictate how one uses the elements of the lexicon to convey relationships among those concepts (Pinker, 1994). Pinker has dismissed the idea that we simply assemble sentences one piece at a time in a linear order based on the probabilities that certain words will appear adjacent to those already chosen (Pinker, 2003). Sentences with meaningless content, like "colourless green ideas sleep furiously," are grammatical despite containing improbable word sequences, which supports that dismissal. Thus, it seems likely that we pay less attention to the words themselves and more attention to word categories (Pinker, 1994). In effect, Pinker has supposed that words are grouped into phrases as mental symbols which can interact in systematic ways (Pinker, 1994). The structures then derived from these phrasal combinations reflect the semantic relationships between the words themselves and the concepts they represent.
Syntactic structure, according to Jackendoff and Pinker, evolved recently to make communication more efficient (Jackendoff & Pinker, 2005). “Syntax” refers to the specific ordering of words when they are grouped together in a meaningful way. Without it, the only viable alternative is concatenation among words, stringing one word after another in a chain without a rule-based structure (Jackendoff & Pinker, 2005). The use of syntax as a regulation of word order then became an evolved expression of recursive structures in cognition; like vision and the other sensory modalities, it became a hereditable phenomenon (Jackendoff & Pinker, 2005). Syntax, by its very nature, involves a set of generative phrase structure rules which define “distinguishable syntactic categories such as [Nouns] or [Verb Phrases]” (Jackendoff & Pinker, 2005) which are then filled with meaningful supplements in their respective categories (like, “cat” for noun, or “run home” for verb phrase). Thus, the syntax of natural language can be seen as the solution to a design problem: it expresses, in a linear string, the recursive and multidimensional aspects of semantics. In this way, syntax provides a “sophisticated accounting system for marking semantic relations so that they may be conveyed phonologically” (Jackendoff & Pinker, 2005), allowing for meaningful exchanges between speakers of the same language. This syntax, Pinker has argued, is a “beautifully designed code that our brains use to convey complex thoughts” (Pinker, 1994). The rules of grammar are, by this account, a clear refutation to the argument that there is nothing in the mind that is not first in the senses (Pinker, 1994).
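The generative character of phrase structure rules described above can be made concrete with a toy grammar. The rules and vocabulary below are a hypothetical sketch for illustration only (not Pinker's, Jackendoff's, or Chomsky's actual rule set): a handful of rewrite rules, with noun phrases recursively embeddable inside prepositional phrases, suffice to express recursive structure as a linear string of words.

```python
import random

# A toy phrase-structure grammar (hypothetical, for illustration only).
# Each syntactic category expands into sequences of categories or words;
# recursion (an NP containing a PP containing another NP) lets this
# finite rule set generate unboundedly many linear word strings.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "N", "PP"]],
    "VP":  [["V", "NP"], ["V"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["cat"], ["dog"], ["house"]],
    "V":   [["chased"], ["saw"]],
    "P":   [["near"], ["in"]],
}

def generate(category="S", depth=0):
    """Expand a category into a flat list of words (a linear string)."""
    if category not in GRAMMAR:          # a terminal word, not a category
        return [category]
    options = GRAMMAR[category]
    if depth > 3:
        # Cap recursion: prefer expansions without an embedded PP so
        # generation always terminates.
        options = [o for o in options if "PP" not in o] or options
    words = []
    for symbol in random.choice(options):
        words.extend(generate(symbol, depth + 1))
    return words

print(" ".join(generate()))  # prints one randomly generated sentence
```

Each run linearizes a different recursive structure, e.g. "the cat near a house chased the dog"; the point is not the toy vocabulary but that a small set of category-level rules, rather than word-to-word probabilities, determines which strings are well-formed.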
Connectionism: The Case for Nurture

The connectionist model of language processing, another attempt to account for the facts of language, put forth by Jeffrey Elman, employs a simple recurrent network which is built upon input from the environment (Elman, 2004). Elman begins by positing that words, rather than having a meaning, give clues to meaning (Elman, 2004). Words are not merely entries in a "passive data structure that resides in long-term memory" but kinds of sensory stimuli, actively
directing networks in the mind (Elman, 2004). With this conceptual foundation, Elman describes the formation of an artificial neural network composed of input units, output units, hidden units, and a context layer (Elman, 2004). In an artificial neural network, input units¹ act as analogs to our perceptual system (visual perception, audition, etc.) while others act as output units (motor movements, language, etc.). In addition to these, there are the units in between, called hidden units, which perform the computations connecting the input to the output. When a network "learns" to accommodate a new environment, the hidden units alter the strength of their connections. Basically, the hidden units are what provide the network with its "storage capacity," that is, its ability to adapt and retain information distributed among them. They allow the network to 'know' whatever it needs to. Recognizing a word, for example, involves a certain pattern of activation within these hidden units. Then, the context layer provides the network with a kind of memory by storing the previous unit states and accounting for their effects on present unit states, allowing for meaning to be derived from context. Language comprehension, as assumed by Elman for experimental purposes, relies on the anticipation of what will follow (so-called expectancy generation), so the network's task is to predict, given a string of words, the next grammatically possible word, without relying solely on linear relationships and requiring, instead, more abstract constituent relationships based on context (Elman, 2004).

¹ Units in artificial neural networks are analogous to neurons in biological neural networks.

Through this implementation, the network uses distributional information—that is, where a word is located in a sentence relative to the categories of the words apparent before and after—to determine the categories of words (Elman, 2004). Thus, unlike the nativist model, the categories of words are not innate but are constructed from the distributional information of the words acquired. These categories end up conforming not only to grammatical categories but to semantic kinds, like animate and inanimate nouns, with further subdivisions into animals, humans, and breakables (Elman, 2004).

It should be noted that a proponent of a more traditional, passive view of the lexicon could criticize these findings by pointing out that two instances (tokens) of the same type (say, the word "boy" occurring twice in the same sentence) ought to be identical but will be unavoidably different, since internal states reflect prior context, allowing no two token-induced states to be the same (Elman, 2004). In response, Elman would reply that the tokens need not be identical with one another, as a type like "boy" is not explicitly represented (Elman, 2004). In other words, not only are more general lexical categories represented in the hidden unit layer, but small perturbations between the use of some word and the next, as a function of the context layer, are represented separately. This might also account for the agreement between verbs and their arguments (a possibility brought up by Elman), and for grammatical roles like subject (what a sentence is about) and object (the complement of the verb) (Elman, 2004).

To summarize, as words are acquired they are categorized based on their location, but these categories are not limited to the word itself: they account for the context of its usage. Then, context and location suggest possible words to follow those encountered, and those predictions, Elman argues, enable comprehension. By virtue of the context-sensitive architecture of the network, the association of sense and meaning as context-dependent is accounted for (Elman, 2004). This model even accounts for the vocabulary burst during child language acquisition at around eighteen months of age, by positing that prior to this point there exists no category structure to draw upon (Elman, 2004). Elman argues for the success of these kinds of simple recurrent networks by citing how they can account for the valence properties of verbs (how many arguments they take) and the semantic restrictions they place on those arguments (Elman, 2004). Overall, this seems like a successful model, in that it sought to provide us with an alternative view of the mental lexicon and led to an account of category formation. It imparts a tremendous amount of credibility to the idea that connectionism can describe language.

One should not be misled, though: whereas we have seen that Pinker's symbolic conception of language includes innateness, it should be noted that connectionism does not, by default, preclude it. Actually, artificial neural networks do allow for the possibility of innateness, albeit in three distinct forms. First is the notion of representational innateness, which requires that there be a particular pattern of synaptic connectivity in the brain (Karmiloff-Smith & Plunkett, 1998). This would be the most direct method, but it amounts to suggesting that the genome somehow entirely predetermines the synapses between neurons (Elman, 1999). The problem here is that not only does this contradict the observed plasticity of the brain, but it also requires more information than is contained within the genome (Elman, 1999). Alternatively, there is architectural innateness, which involves either specifications of the number of layers of neurons and their packing density in the brain, or stipulations on the type of
network formed by the neurons (Elman, 1999). The third and final variety is chronotopic innateness, which involves timing, with respect either to cell division and temporal development or to the incremental presentation of data (Elman, 1999). Elman ultimately seems to favour timing as the prominent factor in language acquisition because it can explain how a network, either real or artificial, must pay attention to sentences of one level of complexity before being physically capable of paying attention to another. What is important to note is that, as stated above, innateness, in itself, is not ruled out by connectionist models (Elman, 1999). In a way, Elman concedes that there can be innate constraints, but of a very different nature than those "envisioned by the pre-wired linguistic knowledge hypothesis" (Elman, 1999). That is, while architectural and chronotopic innateness are still functions of the genome, the level of deterministic influence the genetic code has is, in Elman's view, less than significant. This should not be taken to mean that knowledge is built-in (Karmiloff-Smith & Plunkett, 1998). Instead, a distinction ought to be made between knowing something a priori and having the tools required to discover it (Karmiloff-Smith & Plunkett, 1998). Against the case for nature, Elman argues that there exists no case of selective impairment of language which can be traced back to a specific gene (Elman, 1999). Moreover, it has been pointed out that the shape of the vocal tract is also due to our bipedal nature, taking some of the steam out of the idea that it has been specifically selected for by natural selection (Karmiloff-Smith & Plunkett, 1998). Contrary to the accusations
against many evolutionary hypotheses of language, the role of adaptation to the development of language can be made both rigorous and empirically testable (Pinker, 2003). Pinker has stated that while there should be genes whose effects contribute to the development of human language, these genes need not affect only language, and that the nature of language is also most likely polygenic (Pinker, 2003). In fact, geneticists have found FOXP2, a gene that “plays a causal role in the development of the brain circuitry underlying language and speech” (Pinker, 2003). No other species has the human sequence and it seems likely that this version of the gene was selected for language (Pinker, 2003). While the innateness thesis promoted by Pinker and others comes across as, by Elman’s view, quite wrong, there are also problems with certain connectionist simulations. As an example, take the network offered up by Morris, Cottrell and Elman (2000). When it was designed to acquire grammatical relations (subject, object, etc.), it was trained in a completely unrealistic way. After each sentence input, a kind of marker was presented, segregating the sentences from one another (Morris, Cottrell, & Elman, 2000). But positive linguistic evidence is not available to the young language learner in this way. Motherese plays no significant role, and even in cases where cultures frown upon direct communication with children, language is still acquired. Also, since the network was designed to acquire grammatical relations, it appears to imply at least some semblance of architectural innateness. Since there is no principled flaw, the network’s success should not be dismissed, though modifications are necessary to prevent hidden assumptions
from sneaking in and undermining the results. This being said, both conceptions of language do have a long way to go before a complete understanding is achieved.

Nature & Nurture

So what of the apparent incompatibility? Fascinatingly enough, a response in the form of a hybrid language model exists, even if it is limited in its scope. In "Rules of Language", Pinker proposes that the theories of associationism, a connectionist-type model, and rule-and-representation, a nativism-type model, are both partly right. Associationism accounts for otherwise seemingly irregular associations between words, while the rule-and-representation model accounts for the rule-like processes in language (e.g. the addition of the past-tense suffix "-ed" to regular verbs) (Pinker, 1991). This model's combination of Pinker's necessity for inherent rules with Elman's conception of a network's capacity to predict patterns in irregularity comes across as very attractive to one seeking a way to avoid contradiction (Pinker, 1991). Optimistically enough, there may even be empirical evidence in favour of a theory of language with both a rule-based component and an associative memory system (Pinker, 1991). The memory of irregular verbs, owing its existence to some sort of connectionist network, seems to be affected by frequency. The more an irregular verb is encountered, the less likely it is that an error will be made (Pinker, 1991). Regular verbs, in opposition, are less difficult to comprehend when presented as novel to an experimental subject; new words can be inflected for tense in the regular way while evoking no sense of oddness (Pinker, 1991). As a matter of
fact, a double dissociation exists here, just as the hybrid theory would predict, since the regular and irregular inflection of verbs can be adversely affected independently of one another. This dissociation even extends to developmental and neurological deficits, as patients with aphasias² have an intact capacity to produce and understand irregular verbs despite damage to the execution of the rules of grammar (Pinker, 1991). Similar facts with respect to patients with specific language impairments and William's Syndrome provide plausible reasons to believe that a theory very much like this one might come close to the truth: an all-encompassing model for acquisition, comprehension, and production of language (Pinker, 1991).

² Loss of ability to produce or understand language due to brain lesions.

Throughout this paper, we have taken a look at some of the ideas underlying a symbolic approach to the understanding of language, and we have analyzed some attempts made for a connectionist explanation. We have also seen one instantiation of the possibility that models need not be restricted to one conception or another, hinting at the possibility that a choice between nurture and nature is not inevitable. Generalizing even more, it also seems plausible that these two models simply present us with two different levels of understanding, and that neither one is technically more or less correct than the other—at least, no more than by how much physics is more correct than chemistry. From a reductionist standpoint, perhaps it is best to realize that, while one model may eventually come out on top, we must accept the possibility that they simply provide us with different viewpoints, and that a
move made from one to the other may reflect a search for practicality and applicability. If the latter is true, then no irreconcilability exists, as there is no conflict to square. REFERENCES Elman, J. L. (1999). The emergence of language: A conspiracy theory. In B. MacWhinney (Ed.), The Emergence of Language. Hillsdale, NJ: Lawrence Erlbaum Associates. Elman, J. L. (2004). An alternative view of the mental lexicon. Trends in Cognitive Sciences, 8(7), 301-306. Jackendoff, R., & Pinker, S. (2005). The nature of the language faculty and its implications for evolution of language. Cognition, 97(2), 211-225. Karmiloff-Smith, A., Plunkett, K., Johnson, M. H., Elman, J. L., & Bates, E. A. (1998). What does it mean to claim that something is ‘Innate’? Mind & Language, 13(4), 588-597. Morris, W. C., Cottrell, G. W., & Elman, J. L. (2000). A connectionist simulation of the empirical acquisition of grammatical relations. In S. Wermter & R. Sun (Eds.), Hybrid Neural Symbolic Integration. Springer-Verlag. Pinker, S. (1991). Rules of language. Science, 253, 530-535. Pinker, S. (1994). The Language Instinct. New York: HarperCollins. Pinker, S. (1995). Language acquisition. In L. R. Gleitman & M. Liberman (Eds.), An Invitation to Cognitive Science (2nd ed.), Vol. 1: Language (pp. 135-182). Cambridge, MA: The MIT Press. Pinker, S. (2003). Language as an adaptation to the cognitive niche. In M. Christiansen & S. Kirby (Eds.), Language Evolution: States of the Art. New York: Oxford University Press. Pinker, S. (2005). So how does the mind work? Mind & Language, 20(1), 1-24.
News Media & Climate Change 21
In a world where action today will decide how many generations will outlive ours, the media’s portrayal of science is disconcertingly skewed. Kota Talla calls for a change in the media climate, as he tackles the failings and subsequent consequences of climate change informants. Mass media coverage of scientific issues exerts a powerful influence in shaping public discourse and perception of science, in turn guiding government policy on scientific issues. Global climate change represents one of the most pressing concerns of our time. Yet the issue of global warming has failed to reach the forefront of a public consciousness informed by a biased media climate. The traditional journalistic norm of balance, when applied to scientific reporting, sends mixed messages to the public, with evidence of variable quality supporting conflicting points of view. The perceived lack of consensus derived from this artificial balance perpetuates inaction in addressing climate change. As a result, damages due to climate change are exacerbated for society, as well as for future generations that will inherit the consequences of our inaction. These circumstances mandate a “media-climate-change” in current media practices with respect to bias in climate science news. Objectivity and Bias The principle of objectivity provides a foundation for the journalistic and scientific disciplines to reveal the truths of the world. By adopting a neutral and balanced position, objectivity deters any kind of partisanship on an issue, aiming to ensure full disclosure to the audience. In this media climate, editors of news media advocate stories containing arguments that pay equal attention to both sides: a traditional norm of reporting. However, journalists striving
to minimize bias often resort to cursory attempts to balance reporting in lieu of proper fact-checking and investigation. Accordingly, the practice of superficially reporting news with pseudo-objectivity disregards the consensus of scientifically accepted truths. Furthermore, some form of bias inevitably seeps into news journalism, ranging from intentional to inadvertent, and from individual to institutional, as discussed below (Sloan & Mackay, 2007).
Mainstream media construct news stories from a sampling of events, intended to be representative of the entire story. The editorial staff of a news outlet chooses which events are newsworthy, and determines whether certain stories warrant placement on the front page or burial in the middle sections. Information filters through the selection process based on criteria and values that comply with the organization’s policies. For example, story-selection bias emerges in news coverage of a seal plague, to the exclusion of a similar phenomenon involving beetles, which strikes audiences as less visually appealing (Anderson, 1991). Thus mainstream media set the agenda for what information is disseminated to the public. Public consumption of news media is further complicated by low scientific literacy. Since general science education often terminates at high school, the public relies on the media as its primary source of accessible scientific information (Nelkin, 1987). For example, the media deliver public warnings on consumer products, genetically modified organisms, and new pharmaceutical drugs on the market. However, the general populace remains ill-equipped to critically evaluate coverage of science news issues, let alone any bias in reporting. Bias in ‘certainty’ Accurate translation of scientific findings from technical jargon into accessible news reports can be a daunting task. Science news stories drafted by unspecialized reporters may be adulterated by bias or by invalid interpretations. Moreover, in broadcast coverage, news writers usually favour clarity and brevity due to time constraints. News must be made easily digestible for audiences, while deadlines add to the pressure of regular production cycles. The bias introduced here tends toward overreaching headlines that imply ‘certainty’ in research findings. Such a portrayal of science overlooks the complexity and uncertainty inherent in scientific developments for the sake of concise articles. For example, the 700-page Stern Review on the Economics of Climate Change was condensed into a single list of convenient bullet points by BBC News (BBC News, 2006). Straightforward warnings included in the list, like “Melting glaciers will increase flood risk”, belie the complexity of underlying assumptions
and probability of events. Today’s media climate, in which science news stories develop, does not lend itself to thorough explanations of scientific issues. Bias in coverage News media employ short bursts of coverage through the ‘sound bite’ to convey information, grabbing the audience’s attention through sensationalist tactics. For example, preliminary medical discoveries are often proclaimed to be ‘ground-breaking’, whereas the stark reality requires decades of further research and trials before regular patients may benefit. Indeed, many instances of curing well-known diseases have been reported based on progress in animal models, which differ from human biology. Likewise, global warming coverage frequently brings new grim predictions of the future climate. These news stories seem to ignore the complex nature of the earth-atmosphere system and present the audience with an incomplete picture of climate change. For example, retreating glaciers are often cited as evidence for global warming (Jowit, 2008), but other factors that also contribute to glacier regimes (e.g. local topography) are neglected (Dyurgerov, 2003). The accelerating rate of current glacier retreat (Dyurgerov, 2003) is actually part of a continuous trend dating from the nineteenth century, and glaciers may even be growing in some regions (Dyurgerov & Meier, 2000). In its simple sensationalism, global warming coverage tends to sacrifice accurate explanations of scientific phenomena for the sake of melodrama. Climate change news is further complicated by the nature of the problem itself. The parameters must be examined
on time scales that are sometimes difficult to comprehend. In contrast, short-term priorities dominate news coverage in the form of breaking news and overshadow long-term issues. To compete for audience viewership, climate change stories are labeled with catchy headlines and condensed into convenient sound bites that simplify the issue. This portrayal of climate science reinforces stereotypes of simplicity and inadequately treats the multifaceted nature of global warming.
Bias in “An Inconvenient Truth” Some of these biases in climate change coverage are exemplified in the documentary “An Inconvenient Truth,” presented by former U.S. Vice President Al Gore. The film exhibits an ideological bias by highlighting particular natural disasters in order to produce convincing propaganda rather than a scientifically rigorous piece. Alarmism creeps in at times, by appealing to the audience’s emotions and ‘gut feelings’ and by featuring environmental landscapes; the dramatic execution of the film includes shots of glacier calving, where rapidly melting ice falls into the sea. Furthermore, the film places an unfounded emphasis on specific catastrophic events like Hurricane Katrina, which are not necessarily attributable to global warming as the film suggests. Although climate change may increase the intensity of tropical cyclones (IPCC, 2007), singular events such as Hurricane Katrina cannot definitively be linked to climate change (Haines, 2006). Along the same lines, promotional material for the film exploits public confusion about the difference between correlations and cause-and-effect relationships: promotional materials display images of smoke emanating from a coal-powered plant in the shape of a tropical cyclone, giving the false impression that man-made pollutants fuel hurricane formation (An Inconvenient Truth, 2006).
The documentary often takes liberties in selecting scientific content to suit its call-to-action message. Its presentation of scientific material ignores inconvenient facts such as the natural climatic variability contributed by volcanic eruptions and variations in the Earth’s orbit (IPCC, 2007; Imbrie et al., 1993). The film’s writers neglect to point out that although anthropogenic contributions to the current climate can be measured, natural climate forcing agents may also play a significant role (IPCC, 2007). As a whole, the complexity of the climate system is reduced to a read-along narrative for the public. Balance as Bias Although the journalistic norm of balance applies to political or social news that involves subjective views, stories of a scientific nature are not amenable to this principle. Indeed, balanced reporting
can, at times, distort the interpretation of objective data obtained from repeated experiments. Because scientific knowledge is discovered empirically through experimentation, some scientific opinions carry greater credibility than others. Thus, the ideal of pure journalistic balance is detrimental to accurate reporting of science according to the weight of evidence. Balance in itself can be viewed as a bias in information disclosure when untenable and extreme positions receive a disproportionate amount of attention. Science news coverage often condones fringe views that rebut scientific claims without valid evidence. In accommodating these ideas, news outlets call on ‘experts’ or refer to studies from industry-lobbying groups, delusional skeptics, privately-funded scientists, and think-tanks (hiring these third parties even has the perverse effect of making dissenters a sought-after commodity). An example of such coverage appears in a New York Times interview of the climate skeptic Dr. Richard Lindzen (Stevens, 1996), who dismisses results from IPCC climate models as illegitimate, comparing them to an “ouija board.” He goes on to accuse scientists of maintaining a sense of crisis in order to receive research grants, and suggests that a “herd instinct” is responsible for the apparent scientific consensus. Meanwhile, other skeptic groups, like the American Petroleum Institute, have hired dissident scientists for privately-funded research to support their business interests. An internal memorandum of the American Petroleum Institute brought to light the recruitment and training of scientists to proclaim to the mainstream media and government that global warming risks were too uncertain to justify action (Cushman, 1998).
News media irresponsibly acknowledge unsound contrarian views, which then gain traction in the public’s mind. In trying to strike a balance, the news media polarize public perception of climate change by stressing contention over consensus. Framing contentious points of view in stylized debate obscures scientific issues rather than illuminating them, while instilling doubt in the public regarding the likelihood of consensus in climate science. The resulting mass confusion has permitted governments to delay responsible action on global warming. Former U.S. President George W. Bush called for a decade’s worth of additional research on the problem before committing to any serious measures, harping on numerous uncertainties regarding the cause and potential effects (Pianin, 2002). A Republican strategy memorandum explicitly stated that the global warming debate could be “won” by making the public’s belief in the lack of consensus or scientific certainty the primary issue (Burkeman, 2003). In this way, mainstream media perpetuate public misconceptions of science and allow the perceived uncertainty of scientific consensus to be exploited by politicians.
Bias in reporting Paradoxically, the business of mainstream media pits academic science against industry interests under the guise of impartiality. News media thus shirk the responsibility of discerning between claims supported by peer-reviewed evidence and those made by unreliable sources. Journalists’ faith in the testimony of secondary experts (Wilson, 2000) distances reporting from the primary sources available in peer-reviewed scientific journals, the gold standard in science. This literature underpins the publication of accurate scientific information, as measured by a journal’s reputation and impact factor. Yet news journalists usually bypass these reliable sources, which are vetted by multiple rounds of review and by replicability. Current reporting of climate change does not seem to appreciate the scope of the problem. Indeed, climate change reaches beyond the confines of scientific research, into areas as diverse as international affairs, agriculture, energy, health, human development, and economics (IPCC, 2007). These interrelationships demand more comprehensive coverage of climate change, including its diverse effects on the human world; but limited airtime and print space are not conducive to dealing with such matters in sufficient depth. Furthermore, news media often shun specialized reporters in these diverse fields due to higher costs and additional training (Gans, 2004). Unspecialized journalists are therefore left to report this news without the background necessary to handle the unique multidisciplinary challenges of climate change reporting.
Overcoming bias Blindly appropriating journalistic norms, like using balance as a template for science reporting, undermines public understanding of vital scientific evidence, which could stand to benefit from greater exposure. Mainstream media practices warp public perception of scientific issues, translating into futile climate policy. Adopting peer-reviewed publications and their authors as primary sources for news media may prevent the dilution of scientific content. News media must expand coverage of climate science and adapt stories to involve the academic community as a whole. An interdisciplinary approach to climate science issues in the media would call for active input from both scientists and other scholars. Perhaps scientists could advise news media with guidelines for addressing the scientific aspects of global warming coverage, similar to the IPCC report summary adapted for policy makers (IPCC, 2007). Such measures may help bridge the disconnect between the mainstream media and the scientific community, and facilitate dialogue to
overcome various biases. Over time, public understanding of climate change can be greatly enhanced, encouraging progressive climate policies. REFERENCES Anderson, A. (1991). Source strategies and the communication of environmental affairs. Media, Culture, and Society, 13, 459. An Inconvenient Truth. (2006). Dir. D. Guggenheim. Prod. L. David, L. Bender, & S. Z. Burns. Feat. Al Gore. Paramount Classics. DVD. BBC News. (October 30, 2006). At-a-glance: The Stern Review. BBC News. Burkeman, O. (March 4, 2003). Memo exposes Bush’s new green strategy. The Guardian. Cushman, J. H. (April 26, 1998). Industrial group plans to battle climate treaty. The New York Times. Dierkes, M., & von Grote, C. (2000). Between Understanding and Trust: The Public, Science and Technology. London: Routledge. Dyurgerov, M. B., & Meier, M. F. (2000). Twentieth century climate change: Evidence from small glaciers. PNAS, 97, 1406-1411. Dyurgerov, M. B. (2003). Mountain and subpolar glaciers show an increase in sensitivity to climate warming and intensification of the water cycle. Journal of Hydrology, 282, 164-176. Gans, H. J. (2004). Deciding What’s News. Evanston, IL: Northwestern University Press. Haines, A., Kovats, R. S., Campbell-Lendrum, D., & Corvalan, C. (2006). Climate change and human health: Impacts, vulnerability, and mitigation. The Lancet, 367, 2101-2109. Imbrie, J., et al. (1993). On the structure and origin of major glaciation cycles 2. The 100,000-year cycle. Paleoceanography, 8, 699-736. Jowit, J. (March 16, 2008). Melting glaciers start countdown to climate chaos. The Guardian. Nelkin, D. (1987). Selling Science: How the Press Covers Science and Technology. New York: W. H. Freeman. Pachauri, R. K., & Reisinger, A. (2007). Climate Change 2007: Synthesis Report. Summary for Policy Makers. IPCC. Pianin, E. (December 4, 2002). Group meets on global warming: Bush officials say uncertainties remain on cause, effects. Washington Post, A8. Sloan, D.
& Mackay, J. B. (2007). Media Bias: Finding It, Fixing It. Jefferson, NC: McFarland & Company. Stevens, W. K. (June 18, 1996). Scientist at Work: Richard S. Lindzen; a skeptic asks, is it getting hotter, or is it just the computer model? The New York Times.
Wilson, K. M. (2000). Drought, debate, and uncertainty: measuring reporters’ knowledge and ignorance about climate change. Public Understanding of Science 9: 1-13.
Nature & Religion 29
For Medieval writers, the universe was a gesture of God’s greatness. Their works explained the structure of the world and their stories explored its boundaries. These were not ‘Dark Ages’—the constancy of the Cosmos and the vastness of the Earth inspired then, as they do today. When St. Thomas Aquinas unified natural philosophy and Christian theology in the thirteenth century, he echoed medieval conceptions of harmony and disobedience such that descriptions of the physical world were equally reliant on natural and religious laws (Davies, 1996). The medieval world view held that harmony resided in the maintenance of one’s proper place within the universe, a concept adopted from Aristotle and expanded to define the physical borders and the scientific constraints imposed on Creation by God. All entities existed because God had created them, therefore they all had their place in the universe (Schildgen, 2002) and that place was governed by natural laws befitting their form. Deviance lay in the disobedience of the physical parameters. Medieval writers expressed the scientific and religious constraints of each physical boundary. Consequently, their views of geography and cosmology were articulated in accordance with the notion that every entity had its place in the universe and that harmony was achieved by maintaining that proper place. For both geography and cosmology, this expression was revealed through the definition of borders, the speculation of residency, and the maintenance of harmony. Geography: Normalcy within Boundaries The medieval view imposed geographical boundaries onto the world such that the physical parameters segregated the natural laws that were unique to each geographical region. Firstly, the
world was divided into three continents: Africa, Asia, and Europe. Between them lay the sea, which took its shape after the crucifix, reminiscent of the amendment of human sins (Mottman, 2002). In spite of their differences, all three continents bordered the cross because the residents of each had an equal share in God’s creation and in Christ’s sacrifice. However, each continent was defined as possessing distinct natural laws, so what was accepted as normal in one continent might not be accepted in the others. Secondly, Jerusalem, as the site of Jesus’ crucifixion, was depicted as the center of the world (Mottman, 2002). The further from Jerusalem, the more foreign the natural laws and, as such, the greater the deviance from Christ’s image. Considering physical parameters based on natural philosophy and Christian theology, the medieval world view found verification for the belief that every entity—no matter how deviant from European normalcy—had its proper place on Earth.
Within each set of boundaries there resided exotic and mysterious beings that diverged in appearance and practice from those apparent in Europe, a
stance that was not entirely fictional for people in the Middle Ages. This view was supported by the encyclopedias written by Isidore of Seville and Vincent of Beauvais (Schildgen, 2002). Both had recorded descriptions of exotic races beyond Europe who were mysterious and monstrous (Schildgen, 2002). For those who read these accounts, it seemed as though all those who existed beyond
European borders had strange physical appearances and took part in unusual practices. However, they were not regarded as deviations. As St. Augustine had contended, it was accepted that all creatures were God’s creations and, therefore, none should be condemned as a digression from normalcy (Schildgen, 2002). The only condition was that each creature had to maintain its abode within the parameters that God had granted it. As such, when medieval writers wished to transcend European normalcy and trespass into foreign natural laws, they had to set their plots within the corresponding geographical location in order to preserve the harmony of God’s creation. Geoffrey Chaucer approached incest in several of his tales; however, it was never fully manifested except when the setting shifted to the Mongol Empire (Lynch, 2002). Incest in Europe was unacceptable, but it had to have its proper place in the universe. The East was widely regarded as sexually deviant compared to Europe
(Lynch, 2002), so the portrayal of an incestuous relationship was accepted as long as it took place within the boundaries of Asia. Notably, in “The Squire’s Tale”:
Thilke wikke ensample of Canacee,
That loved her owene brother synfully
(The Squire’s Tale, ll. 78-79)
Chaucer moved his setting to Asia such that incestuous relationships would not disobey God’s laws for Europe. But Chaucer is careful to stress the sin committed by the European siblings: they partook in an act acceptable in Asia, but because their proper place was in Europe they had trespassed against God’s Creation. Dante Alighieri took a different approach in his On Monarchy. Instead of removing his plot from the constraints of European boundaries in order to explore a deviation from European normalcy, he narrated a pilgrimage as a symbolic journey in amendment for sin. As his pilgrim traveled from Egypt to Jerusalem, he crossed boundaries into the holy center of the world (Schildgen, 2002). Dante suggested that, provided the pilgrim approached
the transcendence of borders with the appropriate reverence for God and adequate penance for sins, then the progression into Jerusalem would be acceptable. Otherwise, the necessary harmony of proper place would not be maintained, because the pilgrim’s place did not accord with the holiness and purity of
that realm. The closer the pilgrim was to Jerusalem, the greater his reverence and penitence had to be. Accordingly, both Chaucer and Dante revered the borders between the norms of the different geographical regions as defined by God. Cosmology: Earthly Chaos and Divine Order The medieval picture of the cosmos was based on the views laid out by Aristotle, who depicted the universe through natural philosophy, and on the integration of theological aspects proposed by St. Aquinas. The combined perspective segregated the cosmos into spheres characterized by both scientific and religious properties. The universe was divided into two parts: a sublunary realm, which contained the area below the moon’s orbit, and a supralunary world, which encompassed the domain beyond the moon (Cartwright, 2005). These two realms housed two distinct proper places. The observation of natural phenomena revealed the inconstancy of the sublunary world, where all things were unpredictable and impermanent, in stark contrast with the supralunary world, where all things were constant and orderly (Cartwright, 2005). In effect, all entities associated with chaos were assigned their proper place within the sublunary realm, and all entities associated with constancy were assigned
a proper place within the supralunary realm. In addition, the supralunary world was subdivided into various spheres, each associated with a celestial body, namely the planets and the stars. The furthest of these spheres was classified as the primum mobile, which was responsible for the regular motion of all the other supralunary spheres (Cartwright, 2005). For Aquinas, these classifications provided insight into the residency of the heavens. In his Summa Contra Gentiles, St. Aquinas provided extensive explanations of the theological implications of the residency of the cosmos. He held that humans were the lowest of the intellectual beings created by God and that, consequently, the human abode was on Earth (Aquinas, 1923). The other intellectual beings had their proper places in each of the concentric spheres of the supralunary realm and held the responsibility of moving the celestial bodies, while God resided at the end of all things (Aquinas, 1923). So once again, the medieval view imposed physical parameters to segregate areas governed by distinct laws; in this case, the boundaries divided the levels of a hierarchy of divine order. Humans resided in the chaotic sublunary realm and God resided in the highest of the supralunary spheres. St. Aquinas also laid the foundation for medieval writers to explore the transcendence of cosmic boundaries between the proper places of earthly chaos and divine order.
In Paradise, Dante described a journey through the cosmos in ascension to Heaven. As he traveled through the heavenly spheres, he witnessed the characteristic qualities of each. For in each,
there rested a planet and a corresponding angel who moved it (Cartwright, 2005). These characteristic qualities define the proper place of each angel and its planet. As his journey proceeded, Dante approached the end of all things:
As the geometrician, who endeavours
To square the circle, and discovers not,
By taking thought, the principle he wants,
Even such was I at that new apparition;
I wished to see how the image to the circle
Conformed itself, and how it there finds place;
But my own wings were not enough for this,
Had it not been that then my mind there smote
A flash of lightning, wherein came its wish.
Here vigour failed the lofty fantasy:
But now was turning my desire and will,
Even as a wheel that equally is moved,
The Love which moves the sun and the other stars.
(Paradise, Canto XXXIII)
Akin to the challenge of transcending geographical boundaries, or of trying to square a circle, Dante had to compensate for the disobedience of his proper place. Since Man’s proper place was on Earth, he could not enter Heaven of his own accord and in his human form. This resolution is echoed in St. Aquinas’ words: “If God’s essence be seen at all, it must be that the intellect sees it in the divine essence itself” (Aquinas, 1923). Thereby, Dante repented for his sins and liberated his soul from material bounds in order to cross the final border into Heaven (Cartwright, 2005). As such, Dante conserved the proper places of earthly chaos and divine order by abiding by the natural and theological laws of each realm; in so doing he maintained harmony within God’s creation.
Concerning both geography and cosmology, medieval writers preserved the natural philosophy and religious ideas that were woven together to explain the world. Medieval writers like Dante and Chaucer followed the ideas promoted by St. Augustine, St. Aquinas, and the various encyclopedia writers. They all advocated the maintenance of proper place in accordance with the physical parameters imposed upon Creation by God. They revered these boundaries and ensured that their stories honoured the divine architecture of the universe. REFERENCES Aquinas, T. (1923). Summa Contra Gentiles, Book 3, Part I. London: Burns Oates and Washbourne Ltd. Cartwright, J. H., & Baker, B. (2005). Literature and Science: Social Impact and Interaction. Santa Barbara, CA: ABC-CLIO. Davies, N. (1996). Europe: A History. Oxford: Oxford University Press. Lynch, K. L. (Ed.). (2002). Chaucer’s Cultural Geography. New York, NY: Routledge. Mottman, A. S. (2002). Maps and Monsters in Medieval England. Pennsylvania: University of Pennsylvania Press. Schildgen, B. D. (2002). Dante in the Orient. Chicago, IL: University of Illinois Press.
Nutrition & History 33
French fries and potato chips; mashed, scalloped, baked, sliced, or fried; sweet or savory; red, purple, or white; as a snack, in a stew, on the side, or perhaps alone; the potato is everywhere. Surprisingly, it was nowhere near this common before the Spanish stumbled upon the Americas. It was a hard-won victory, but the potato has conquered the world and our hearts. The Origin of the Potato Today, the potato is a staple food in many European cultures. Few realize, however, that this nutritious and versatile food did not originate in Europe and, moreover, only recently became a part of the European diet. Along with the jicama, the tomatillo, and the plantain, the potato is endemic to South America, and was introduced to Europe during the Renaissance, when European elites recognized the potential of the potato as an inexpensive food for the masses. Yet the underclass was initially reluctant to embrace it. Ironically, the potato would attain, just a few centuries later, such prominence in the diets of indigent Europeans that the mid-nineteenth century potato blight was responsible for one of the most severe famines in history. The potato continues today to be an important food crop in Europe and around the world. For such a prominent crop, the potato had a slow start. The potato had been growing wild in the Andes up to 13,000 years ago, yet it was only around 7,000 years ago that these spuds began to be cultivated (Chapman, 2000). Due to the Andes’ fluctuating temperatures, poor soil conditions, and high elevations, grains grew poorly, if at all; potatoes, on the other hand, were well suited to this environment and could be cultivated on artificial fields built into the marshes around Lake Titicaca (McNeill, 1999). Advances in agriculture and selective breeding led to the production of less
bitter and healthier potatoes than the first wild potatoes, which were highly toxic (Wright, 2008). Soon, many native cultures embraced the potato, as can be seen in the pottery of pre-Inca cultures such as the Nazca and Chimu (Chapman, 2000), and today the potato is called Mama Jatha, or “mother of growth,” in the Andes (FAO, 2008). Indeed, the potential of the potato as insurance against crop failure was recognized early on by the natives: the Incas froze potatoes in the night air, then trampled them to expel their moisture, turning them into chuñu, a nutritious foodstuff that could be kept in underground frozen storehouses for decades (McNeill, 1999; Shindler, 1995). The Incas rightly realized that one of the greatest attributes of the potato, in addition to its wide tolerance of climates and soils, is its easy storage, the same reason for which the potato would soon be embraced by Europeans.

The Potato’s First Contact with Europe

It is believed that the first Europeans to come into contact with the potato were the Conquistadors, led to Peru in 1532 by Francisco Pizarro (Rayment, 2008). About forty years after Columbus’ discovery of the New World, the Conquistadors subjugated the Inca civilization. One of the most valuable discoveries they made was the starchy paste that served as the main nourishment for the Incan miners, which was, in fact, chuñu (Chapman,
2000). And while the Conquistadors had initially taken a dislike to the potato, they quickly realized its potential as an easily stored foodstuff on their ships. Grains familiar to Europeans did not grow in the Americas, so early sailors, including the Conquistadors, depended on maize and potatoes as the main sources of food on the long voyage home. Upon reaching Spain, the leftover tubers were most likely taken by sailors who had grown accustomed to potatoes and who then attempted to cultivate them in their own gardens (McNeill, 1999). As bluntly stated by Hawkes (1993; p1), “no actual account has yet been discovered (and very probably does not exist) of potatoes being brought to Europe.” The first written record of potatoes in Europe appears on November 28, 1567 in the public notary records of the Canary Islands, off the coast of Morocco. Using these records, the date of introduction of the potato to Europe has been estimated as sometime between 1560 and 1565 (Ríos, 2007; p1272; Hawkes, 1993; p5; FAO, 2008). Recent studies indicating similarity in both genetic markers and gross morphology between the Canary Island and Peruvian strains support the presumed journey taken by the potatoes from the Andes to the Canary Islands (UWM, 2005). The present-day potatoes of the Canary Islands are the ‘missing link’ between the European and Peruvian strains (Hawkes, 1993; p3). As potatoes continued to be cultivated in Europe, they adapted to the climate and slowly evolved into the potato that we now consume, Solanum tuberosum. The species is divided into two subspecies: andigena, Peruvian po-
tatoes still suited to short daylight conditions, and tuberosum, which evolved from andigena under Europe’s longer summer days (FAO, 2008). The genetic studies also gave insight into which strains, out of the hundreds growing in Peru at the time of the conquest, are ancestral to the common potato. The Canary Island potatoes appear to be a hybrid of two wild strains still growing in South America: an Andean strain, cultivated by the Incas in southern Peru, and a Chilean strain, perhaps favoured because the environment of the Chilean lowlands most closely resembles that of Western Europe (Williams, 2007; Ríos, 2007; p1278).
In addition to genetic evidence, shipping records also support the Canary Islands as the first European location to receive the potato, as manifests indicate that barrels of potatoes were exported from Gran Canaria and Tenerife to Antwerp in 1567 and Rouen in 1574 (Williams, 2007; Hawkes, 2001; p1). However, until the nineteenth century there was much confusion in Europe surrounding the origin of the potato. This was mostly due to the publication, in 1597, of the English herbalist John Gerard’s Herball, in which he calls the
vegetable “potatoes of Virginia,” second to the “common potato,” now known as the sweet potato (McNeill, 1999; FAO, 2008). A friend had told Gerard that the potato was introduced to England by Sir Francis Drake’s ship, which returned in 1580 from a worldwide voyage that included an exploration of Virginia. However, his source failed to mention that the ship had pillaged Spanish ships en route, presumably the true source of the potatoes aboard (McNeill, 1999). The name stuck, however, and well into the nineteenth century potatoes were referred to as “Virginia potatoes” before finally displacing the sweet potato as the potato (FAO, 2008).
The potato was introduced to Spain and England almost simultaneously, but it took hold much more quickly in Spain. Even though it was considered a food fit only for the underclass, hospital patients, inmates, or livestock (Rayment, 2008), the potato successfully spread to the Spanish mainland by 1573 (FAO, 2008). Seen as an exotic plant, the potato was popular in European botanical gardens, and many monasteries and papal ambassadors with extensive gardens cultivated it (CHIN, 2008; Chapman, 2000). From Spain, potatoes were circulated among botanists and other scientists so that, by 1600,
the potato had reached the gardens of Italy, Austria, Belgium, Holland, France, Switzerland, England, Germany, Portugal, and Ireland (Chapman, 2000). However, the potato was mainly cultivated as a curiosity and was not used as a source of nourishment.

Why Europe Needed a Potato: the Nutrition and Cultivation of the Potato

In choosing to feed the potato only to the underclass and livestock, Europeans displayed their suspicions concerning the tuber’s value. These suspicions were not entirely unfounded. The potato belongs to the poisonous nightshade family: only the tuber is edible, as its leaves, stems, and sprouts contain toxic glycoalkaloids (FAO, 2008). Small amounts of glycoalkaloids, such as solanine and chaconine, are also found in the tuber itself. If the potato is left in the sun for too long, glycoalkaloid levels rise, alongside the chlorophyll that turns the potato green (FAO, 2008); the resulting bitter taste is a natural defense against pests. Interestingly, glycoalkaloids can have some salutary effects in humans, such as inhibiting the growth of cancer cells in the liver and colon, and even potentiating a malaria vaccine (Friedman, 2004). Outside such controlled uses, however, the consumption of glycoalkaloids from green potatoes is a potential source of illness. Despite its toxic potential, the nutritional content of the potato is unparalleled among staple carbohydrates. The potato is low in fat, at only 0.1 g per 100 g (FAO, 2008), and contains many nutrients required for sustenance, with 45% of the recommended daily intake
of vitamin C, 14% vitamin B6, 14% folacin, 12% magnesium, 10% thiamin, 9% iron, 8% niacin, 6% pantothenic acid, and 6% phosphorus (Rayment, 2008). Without a doubt, the nutritional content of the potato is remarkable, providing every vital nutrient except calcium, vitamin A, and vitamin D (Chapman, 2000). Furthermore, a single acre of potatoes is sufficient to provide sustenance for up to ten people (Rayment, 2008). In fact, it was recognized in eighteenth century Ireland that “a single acre of potatoes and the milk of a single cow turned out to be enough to feed a whole family” (McNeill, 1999). The potato also provides many health benefits due to a protein profile matching human needs (as determined by the World Health Organization), dietary antioxidants that slow the aging process by neutralizing free radicals, and dietary fibre required for digestion (FAO, 2008). The potato is a wonder food not only due to its nutritional value, but also due to its harvest. A potato yields more food calories per acre and per unit of water (CHIN, 2008), as well as more protein and calcium, than any other major crop, as highlighted by the Food and Agriculture Organization (2008): “For every cubic metre of water applied in cultivation, the potato produces 5,600 calories (kcal) of dietary energy, compared to 3,860 in maize, 2,300 in wheat and just 2,000 in rice. For the same cubic metre, the potato yields 150 g of protein, double that of wheat and maize, and 540 mg of calcium, double that of wheat and four times that of rice.” Furthermore, up to 85 percent of the tuber is edible, compared to only 50 percent of cereal plants (FAO, 2008). The
main competitor of potatoes is bread, but potatoes are much cheaper, require far less preparation, and are just as nutritious. It should be noted, however, that even though potatoes contain many necessary nutrients, they should serve as a staple only within a balanced diet that includes other vegetables and whole grains. A diet consisting almost entirely of potatoes, even if it meets basic energy and nutrition requirements, is not recommended if suitable alternatives exist (FAO, 2008). The potato plant’s robustness makes it an easy food to cultivate. Potato plants can grow in moist or dry soil, basic or acidic soil, and at high or low elevation (Chapman, 2000). Additionally, the potato plant is perennial: if potatoes are not harvested, the tubers will sprout and provide nutrients for up to twenty new plants in the following season (Rayment, 2008; FAO, 2008). Providing edible tubers after just two months of cultivation, the potato plant matures in only three to four months, faster than any other staple food (CHIN, 2008).
Like any other crop, the potato plant requires care for a successful harvest. To prevent the build-up of pathogens in the soil that could lead to pests, farmers must rotate potatoes with other
crops (FAO, 2008). Potato fields must also be plowed yearly, because weeds spread more readily there than in grain fields (McNeill, 1999). This requirement was especially demanding for early adopters of the potato, as the extra work had to be done by hand before horse-drawn implements and, later, tractors became widespread in European farming. The potato is also vulnerable to many diseases and pests, such as late blight, bacterial wilt, and potato blackleg (FAO, 2008). However, these diseases and parasites are much less prevalent than those affecting grain crops, such as the ergot that infests rye (McNeill, 1999). Because of its hardiness across environments, the potato remained a reliable and favoured source of nourishment: other crops required the same labour, water, and acreage for a lower yield of food energy and nutrients.

The Potato’s Spread Throughout Europe: Convincing the Lower Classes

Due to its ties to the poisonous nightshade family, as well as suspicions that it caused leprosy (The Economist, 2008), the potato initially had trouble gaining popularity in Europe. Because the potato is never mentioned in the Bible, many believers would not consume it, following the Church’s doctrine that it was a creation of witches (CHIN, 2008; Chapman, 2000). Furthermore, the potato was the first food plant in Europe to be grown from tubers rather than seeds, and so was not widely accepted by landowners, who preferred it as a garden crop rather than a food crop. Centuries after its introduction to Spain, the acceptance of the potato in the rest of Europe was brought on by the elites, who, recogniz-
ing its potential in case of crop failure, craftily convinced the lower classes of its worth. One of the first aristocrats to accept the potato was Frederick the Great of Prussia, who was such a lover of the potato that he had a patch of potato flowers sewn into his shirt, right above his heart. In 1744, recognizing the benefits of the potato as a food for the masses, he ordered his subjects to grow potatoes (McNeill, 1999). However, they refused, believing it was a food suitable for livestock and not for human consumption. The peasants gave in only after the army was sent to enforce the order. His insight soon paid off: during the Seven Years’ War (1756-1763), fields
of grain crops were destroyed, but his populace survived, as the potatoes were buried deep underground (McNeill, 1999). As a prisoner of the Prussians during the Seven Years’ War, Antoine-Augustin Parmentier, a French pharmacist and chemist, recognized the nutritional benefits and ease of production of the potato (Rayment, 2008). Parmentier was an employee of Louis XV and realized he needed to convince only the King in order to convince all of France. After a long study of the nutritional aspects of
the potato, he published results in 1773 indicating that the potato was “edible” and even “nutritional” (McNeill, 1999). Parmentier soon had full royal support: Louis XVI wore a potato flower in his buttonhole and Marie Antoinette wore the blossom in her hair, both in an attempt to popularize the tuber with the upper class (Chapman, 2000). Even more unusual, with the King’s aid, Parmentier employed reverse psychology: he had fifty royal acres of potatoes planted just outside Paris and guarded only during the day (Rayment, 2008). At night, the local peasants stole the crops, thinking they were valuable, and began to cultivate them. The scheme worked, and by the beginning of the nineteenth century, France was already producing 21 million hectoliters of potatoes annually (McNeill, 1999). The adoption of the potato as a staple crop in the rest of Europe was much slower. Even though the British Royal Society recommended the potato’s cultivation as early as 1662, potatoes did not gain prominence in England until the outbreak of the American War of Independence, at which point the government encouraged potato cultivation to counteract food shortages (Chapman,
2000). The potato quickly gained favour, and, by the turn of the nineteenth century, recipes using potatoes as ingredients were regularly published in The Times of London. Although in Prussia, France, and England the potato’s popularity was due to its acceptance by the upper class, in Ireland the first adopters of the potato were the lower classes. The potato made its way from England to Ireland because of its plentiful and nutritious tubers. Because they were early adopters, it is possible that the Irish became too attached to the potato, starting families with “only an acre [of potatoes] and a cow” (McNeill, 1999). It is estimated that in the late eighteenth century, peasants in Ireland were obtaining 80 percent of the calories in their diet from potatoes, some consuming ten potatoes per day on average (CHIN, 2008). In merely two centuries, the potato went from being despised for supposedly causing leprosy and being poisonous, to sustaining an entire continent’s underclass.

The Potato’s Dependence and Demise: the Rise and Fall of a Population

With the cultivation of potatoes, farmers were able to produce much more food per acre of land, while also protecting themselves against famine and disease, as potatoes were nutritious enough to ward off scurvy, tuberculosis, and even measles (Chapman, 2000). This ease of production was one factor contributing to a population explosion in Europe. As the potato gained in popularity across Europe, the population boomed, further fuelled by the Industrial Revolution. The change was most noticeable in
England, where the diet of the working class changed from meat, bread, and cheese in the eighteenth century to mostly potatoes at the turn of the nineteenth century (Chapman, 2000). According to Nunn and Qian’s study (2008), after taking into account the effects of the Industrial Revolution, the increased production of the potato still explains up to two thirds of the population boom in Europe since the eighteenth century, and a quarter of the urbanization rate (p2). The statistics are just as staggering for England and Wales, whose populations almost doubled from 1801 to 1851 thanks to the nourishment of the potato (Chapman, 2000). As the potato became a staple food in Ireland, the population explosion was unparalleled, doubling from 1780 to 1841 to 8 million people, “without any significant expansion of industry or reform of agricultural techniques beyond the widespread cultivation of the potato” (Chapman, 2000). The Irish had become dependent on the potato, and on only a few closely related varieties, leaving their crops vulnerable to pests and diseases. Soon enough, the late potato blight, caused by Phytophthora infestans (Rayment, 2008), hit Irish crops.
The first sign of impending disaster came in 1844-45, when a mould disease, late blight, ravaged potato fields across continental Europe, from Belgium to Russia. But the worst came in Ireland, where potato[es] supplied 80 percent of calorie intake. Between 1845 and 1848, late blight destroyed three potato crops, leading to famine that caused the deaths of one million people. (FAO, 2008)

The now-infamous Great Famine caused the death of one million individuals, as well as the emigration of some two million others who could afford to leave (CHIN, 2008).

The Potato Today

Far from its origins in the Andes, the potato is today one of the most popular foods around the globe. Less than half of the world’s potatoes are consumed fresh; many are instead consumed as snack foods such as French fries and potato chips (FAO, 2008). Currently the fourth largest crop on the planet, after maize, wheat, and rice (The Economist, 2008), potatoes “are grown on an estimated 192,000 square kilometres, or 74,000 square miles, of farmland, from China’s Yunnan plateau and the subtropical lowlands of India, to Java’s equatorial highlands and the steppes of Ukraine” (FAO, 2008). Such an international food could surely be part of the solution to an international problem: world hunger, especially in developing nations. The developing world produced more potatoes than the developed world for the first time in 2005, as potato consumption in these countries doubled over forty years (FAO, 2008). The ease of production of potatoes and their low requirements in water,
acreage, and labour (The Economist, 2008) make it ideal for the developing world, as was recognized by the United Nations’ declaration of 2008 as the International Year of the Potato (FAO, 2008). Although potato consumption in developing countries is still less than a quarter of that in Europe, the potato can serve as a wonder food when accompanied by other vegetables and grains. Our dependence on the potato must not be absolute, as a modern-day repetition of the Great Famine should be avoided at all costs. Yet the varied history and adaptability of the potato, from the New World to European courts, attests to its inherent versatility. Perhaps our society should make better use of the potato: rather than predominantly a snack food, it could once again serve as a food for the masses, in the hope of solving modern hunger problems.

REFERENCES

Canadian Heritage Information Network (2008). The Potato Industry: Alive in O’Leary, PEI; Prince Edward Island Potato Museum. Retrieved December 27, 2008, from http://www.virtualmuseum.ca/pm.php?id=record_detail&fl=0&ex=00000121&rd=55557&hs=0
Chapman, J. (2000). The Impact of the Potato. History Magazine 1(2). Retrieved July 15, 2008, from http://www.history-magazine.com/potato.html
Food and Agriculture Organization (2008). International Year of the Potato 2008. Retrieved December 21, 2008, from http://www.potato2008.org/en/
Friedman, M. (2004). Analysis of biologically active compounds in potatoes (Solanum tuberosum), tomatoes (Lycopersicon esculentum) and jimson weed (Datura stramonium) seeds. Journal of Chromatography A 1054, 143-155.
Hawkes, J.G., & Francisco-Ortega, J. (1993). The early history of the potato in Europe. Euphytica 70, 1-7.
McNeill, W.H. (1999). How the Potato Changed the World’s History. Business Network 49(1). Retrieved July 16, 2008, from http://findarticles.com/p/articles/mi_m2267/is_1_66/ai_54668867
Nunn, N., & Qian, N. (2008, June). Columbus’s Contribution to World Population and Urbanization: A Natural Experiment Examining the Introduction of Potatoes. (Preliminary working paper). Retrieved December 27, 2008, from http://www.econ.brown.edu/fac/Nancy_Qian/Papers/Potatoes_draft2.pdf
Rayment, W.J. (2008). The Potato!. Retrieved July 15, 2008, from http://www.indepthinfo.com/potato/
Ríos, D., et al. (2007). What Is the Origin of the European Potato? Evidence from Canary Island Landraces. Crop Science 47(3), 1271-1280.
Shindler, M. (1995, October 27). Long Live the Potato!. Los Angeles Reader. Retrieved February 15, 2009, from http://www.elrocoto.com/lareader.html
University of Wisconsin-Madison. (2005, October 4). Finding Rewrites the Evolutionary History of the Origin of Potatoes. ScienceDaily. Retrieved July 16, 2008, from http://www.sciencedaily.com/releases/2005/10/051004085552.htm
Williams, S.C.P. (2007, May 15). The Secret History of the Potato. ScienceNOW Daily News. Retrieved from http://sciencenow.sciencemag.org/cgi/content/full/2007/515/2
The Economist. (2008, February 28). Wonder-food, History of the potato. 165(10). Retrieved July 16, 2008, from http://www.economist.com/books/displaystory.cfm?story_id=10759072
Wright, C.A. (2008). Short History of the Potato. Retrieved July 16, 2008, from http://www.cliffordawright.com/caw/food/entries/display.php/topic_id/6/id/102/
Literature & Mathematics 43
The classic novel Ulysses ends with a dot. And in that dot, there is ample room for interpretation. Lindsay Waterman expounds the disarmingly simple period in terms of geometry, astronomy, periodicity, and syllogism, such that, without having read the novel, we can appreciate the symbolism and the significance of that final point in the story. Ulysses follows its protagonist, Bloom, over the course of a single day in Dublin. Bloom is in many ways a plain man, and it is in many ways a plain day: Bloom eats breakfast, chats briefly with his wife Molly, goes to work at a newspaper, eats lunch, and visits a friend in the hospital. In the evening he follows his friend Stephen to a seedy neighborhood, and then helps him home. Yet the man and his day are revealed as anything but plain; through them James Joyce explores the limits of the English language, the complexity of the human mind, and the place of man in the universe. This essay deals mostly with the penultimate “episode” (or chapter) of the book, called “Ithaca.” Each episode in Ulysses is written in a different style, and in “Ithaca” the style employed is the catechism, consisting of a series of questions and answers, often used in scientific textbooks of Joyce’s day. This scientific discourse allows the author to present ideas developed in the preceding sixteen episodes in a new light—no longer that of impressions and instinct, but of reason. The final question of the novel, “Where?”, has as its answer a single, black dot. The following analysis attempts to unravel its mystery. In “Ithaca”, elements of Bloom and Stephen’s existence are in many ways circular. Circles are used to describe how characters think, how they move
through the world, and the world itself. This episode portrays the moment when Bloom, in returning to the house he left earlier that day, comes full circle. The scene is framed by many physical echoes of this circle: the numerous basins, a “concave spoon,” a roundabout like “a globe,” an abortive “full circle gyration,” and a “hoop” (Joyce, 1986; p554-568). In addition to such substantial circles, there are also temporal and psychological circles. The narrator, Bloom, and Stephen note the periodicity of specific things, such as menstruation, equinoxes, sunrise, tides, and sex. Periodicity can be represented using trigonometry, and it is by nature circular. Such periodicity is reflected in the recurrence of Bloom and Stephen’s experience. For instance, when Bloom lights a fire, Stephen recalls “others elsewhere in other times who, kneeling on one knee or on two, had kindled fires for him” (Joyce, 1986; p547). Similarly, Bloom remembers the many times that he has discussed gas lights “during nocturnal perambulations” (Joyce, 1986; p545). Such thoughts reveal how periodicity extends to social experience.
Circles are principally found in three distinct realms of experience: the psychological, the temporal, and the spatial. Within each of these realms, circles appear at vastly different diameters. In the spatial dimension, circles of human size are juxtaposed with circles 900 times the dimension of earth (Joyce, 1986; p573).
Temporally, the period of cars around a roundabout is juxtaposed with that of comets around stars (Joyce, 1986; p557, 575). Psychologically, Bloom’s consideration of objects “in rotation” (Joyce, 1986; p580) is contrasted with the long term recurrence of thoughts about gaslights. By being ubiquitous in the episode (the shape appears on all but sixteen of the chapter’s pages), by occurring in different spheres of existence, and by varying in size within given spheres, the circle invites comparison between disparate things. Bloom’s recurrent discussion of gaslights is somehow similar to the tides, because they are both periodic phenomena. The tides are similar to a concave spoon, because they are both describable using trigonometry. The orbit of a comet is similar to the circles vehicles make around a roundabout.
Thus, the episode highlights the circle as a common element, as a form that appears in many substances. For example, when Stephen and Bloom urinate in the garden, Joyce uses circularity to unite two disparate phenomena. Above them is the night sky, containing “hirsute comets and their vast elliptical egressive and reentrant orbits,” “the monthly recurrence known as the new moon,” and “the attendant phenomena of eclipses, solar and lunar” (Joyce, 1986; p575). Contemplating the sky, Stephen and Bloom obscure their penises by “manual circumposition” (Joyce, 1986; p574). In the circular positioning of their hands during urination, Stephen and Bloom mirror the celestial spheres above them. Urination is a lower human function, culturally significant of the base animal nature of the human rather than the heavenly or Godly. As with most activities involving human effluvia, it is in some sense shameful, taking place in secret. Yet the geometric similarity of stars and the characters’ hands suggests that what is traditionally regarded as base and shameful may be, in some transcendent sense, celestial. Bloom’s and Stephen’s hands are further developed as mirrors to celestial counterparts as they bid one another goodbye. The “(respectively) centrifugal and centripetal” (Joyce, 1986; p578) forces which guide their hands are the same forces that hold planets and comets in orbit, and that act on any object undergoing circular motion. Thus the hands that had been clasping their
penises in the act of urination are motivated by the forces that move celestial bodies. Being motivated by the same forces, the stars and the hands are shown to be, on a basic level, the same thing. The question of the marriage of the obscene and the celestial is made concrete in Bloom’s quest to “certify the presence or absence of posterior rectal orifice in the case of Hellenic female divinities” (Joyce, 1986; p600). His quest is motivated by a need to know if the body, with all of its effluvia and excretions, has any place in the celestial. Of all the moments of marred titillation since he left his house, Bloom rates his inspection of the deities among the
“imperfections in a perfect day” (Joyce, 1986; p600). This stands out from his other experiences because he feels the related question is spiritually relevant. The answer to Bloom’s question is presented graphically at the end of the episode in the form of the dot, given in response to the question “where?”, which ends the catechism. The dot embodies a matrix of connotations and intersections developed through the episode and the whole novel. Bloom falls asleep with his head next to his wife’s buttocks, having kissed their “hemispheres” with “osculation” (Joyce, 1986; p604). Given the position of Bloom’s head as he falls asleep, the dot could reasonably signify Molly’s anus, and in so signifying be a satisfactory answer to the question “where?” Yet, equally plausibly, the dot could resemble the planet earth, which carries Molly and Bloom “westward, forward and rereward respectively” by its rotation (Joyce, 1986; p606). The dot, then, signifies both the “heavenborn earth” (Joyce, 1986; p578) and Molly’s anus, suggesting, perhaps, the serene coexistence of the human body and the divine world. Yet the dot is evocative of all the other circles in the episode as well. It is convincing as a “dark sun”, an eclipsed sun or moon, as Bloom’s “mammary prominences”, a cup viewed end on, or as representing the motion of vehicles “passing slowly, quickly, evenly, round and round and round the rim of a round and round precipitous globe” (Joyce, 1986; p553-603). The dot does not declare itself as any one of these circles; it is open to interpretation as any of the spheres in the episode. The equality of the dot in representing each of the particular circles in the episode results in its
signifying them all. The dot suggests a singularity, or commonality, in the plurality of different objects and processes that exist in psychology, time, and space. Taken to an extreme, the dot symbolizes the continuity of the universe. It is human to regard things as discrete and separate: stars seem different from hands; the rotation of the planets around the sun seems different from that of vehicles around a traffic circle. Yet as a response to the question “where?” the dot is more than an assertion that the bodily human processes are unified with the celestial, and that Bloom is thus in contact with the celestial spheres while in contact with Molly’s anus. The dot, if taken as a suggestion of the singularity of things in the universe, asserts that Bloom spans different levels of existence in the same way as the circle. Bloom participates in the singularity that the dot signifies.
The mystery of the dot can be further understood in terms of the omphalos. The original omphalos stone is said to have been lost in the fourth century B.C. during the destruction of the Oracle at Delphi. It was a putative meteorite of “black stone” which “Zeus or Saturn threw on the earth” (Brezina, 1904). In shape the stone was a “conical mass”, and it was sometimes symbolized “by a circle with a dot in the middle” (Middleton, 1884). In “Ithaca”, the “sidereal origin of meteoric stones” and “meteoric showers” are considered, and a meteor streaks the sky (Joyce, 1986; p577). Given that both the dot and the omphalos stone are black and circular, the prominent mention of meteorites, and the occasional representation of the omphalos as a circle with a dot, it is perhaps not unreasonable to interpret the dot as the omphalos stone.
The “sidereal origin” of omphalos makes the stone a physical connection between the celestial and the terrestrial. Having fallen from the sky to earth, it shares its celestial identity with a terrestrial one, and in so doing spans both spheres. Thus, the interpretation of the dot as omphalos is an addition, rather than an alternative, to the interpretation of the dot as embodying the common element in disparate things. Since the omphalos stone has both a sidereal and sublunary nature, it too can unite the heavenly with the mundane.
Another salient aspect of the omphalos is its location at the point of the creation of the cosmos. Just as the omphalos is the center of cosmic creation, the woman is the center of human creation, and interpreting Molly as omphalos equates her power to bring forth life with the power of the gods to bring forth the universe. The result is again a likening of different levels of existence. In the same way that human bodily functions are aligned with the movement of planets through the dot, human creation is likened to divine creation.
In Hellenic culture, the omphalos was considered the center or “navel” of the world, and the place from which the cosmos began. If the dot is omphalos (and still a response to the question “where?”), then Bloom’s return home is a return to the center of his existence. Molly, the lodestar of Bloom’s reality, is the individual that he thinks about most. Rather than referring to her by her name, he thinks of her only as “she,” and in so doing reveals how tightly his consciousness is knit to her identity. Since the aspect of his home that defines it as the center of his existence is Molly, Bloom’s omphalos—and the elusive dot itself—is his wife.
Interpreting the dot as omphalos gives a result similar to seeing the dot as a symbol for all the circles in the episode. Seeing the two interpretations as separate is insufficient; the dot incorporates the omphalos in the same way that it incorporates all the other circles in the episode. Settling for one or the other would oversimplify the dot’s meaning. The meaning of the dot—that reality is unified—depends on its representing a plethora of things. Thus the circle of the omphalos is rightly absorbed into the many other circles represented by the dot.
If this hypothesis is correct, then the love Bloom has for Molly takes on new meaning. The omphalos was believed to mediate the connection between the mortal and divine, enabling communication between the two. It follows that Bloom’s connection to and love for his wife facilitates his connection to the heavenly or immortal. Through his love, Bloom interacts with the divine.
The shape of the circle itself is of importance in communicating the dot’s message of singularity in specious pluralism. Just as items as different as stars and spoons are equated by their shape, the individual points in a circle are equal in their common distance from a central point. For this reason, the equality of the shape of the circle supports the singularity suggested by the dot in a way that an octahedron or rectangle would not.
Yet the shape of the circle represents more than equality: the circle is also representative of unknowability. A number of times through Ulysses Bloom contemplates “the secular problem of the quadrature of the circle” (Joyce, 1986), and in “Ithaca” he muses that solving the problem—for which he would receive one million pounds—would allow him to acquire his dream home. However, in 1882 (four years before Bloom is described as still occupied with it), the problem of the quadrature of the circle was proven impossible. The task is to convert a circle of a given area into a square of equal area (Encyclopedia Britannica, 2007). For the conversion, the area of the circle, given by A = π·r² (where r is the radius), must be calculated exactly. Yet π cannot be calculated exactly, whether by equation or by compass and straightedge. It is a number whose decimal expansion is infinite and never repeats. Furthermore, it is not the root of any polynomial equation with rational coefficients. In this, π is unlike a merely infinite and non-repeating number (an “irrational number”) such as √2, which can be constructed as the diagonal of a unit square: the diagonal x satisfies x·x = 2. Calculating π requires more than such a simple polynomial equation; rather, it requires an equation of infinite length. Although such equations (called infinite series) can be truncated to give approximations of π, it is impossible to calculate π exactly, and thus impossible to solve the quadrature of the circle. In mathematical terminology, π is a transcendental number. It is so named because it can only be approximated; thus, in some sense, it is “above and independent of the universe” (Oxford Dictionary Online, 2009).
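The claim about truncated infinite series can be made concrete. Below is a minimal Python sketch, not part of the essay and using the Leibniz series (one series among many; Joyce names no particular method), showing that every finite truncation yields only an approximation of π, never the number itself:

```python
def leibniz_pi(terms: int) -> float:
    """Approximate pi by truncating the Leibniz series:
    pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    No finite truncation equals pi exactly, which is the point
    above: the equation would have to run to infinite length."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

# More terms give a better approximation, but never pi itself;
# a square "equal" to a circle of radius r can likewise only be
# approximated, since its side would have to measure r times the
# square root of pi.
approximation = leibniz_pi(100_000)
```

With 100,000 terms the result agrees with π to roughly five decimal places; the remainder of the series, however truncated, is never zero.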
In the context of the dot, the problem of the quadrature of the circle serves as a reminder that the circular form which so many processes and objects share is, at a basic level, unknowable. So while the dot represents the approximation of disparate objects through a common element, the basic nature of the approximation is impossible to tell because the common element, π, is fundamentally mysterious. The transcendence of π makes clear that the common element (the circle), by which the dot creates a singularity from a plurality, is somehow removed from the physical universe. Archaeological evidence suggests that the question of the quadrature is one of the oldest geometrical problems, having perplexed Indus mathematicians before 2000 B.C. (Pearce, 2002). The question, having been asked since the beginning of mathematics, continues to puzzle the nineteenth-century man, Bloom. Given that the problem was proven impossible in 1882, it is strange that Bloom was “occupied with the problem of the quadrature of the circle” (Joyce, 1986; p574) at least as late as 1886. His unwillingness to let the problem go even though it is known to be unknowable is analogous to the unwillingness of modern individuals to accept unanswerable questions about deity and meaning that have been asked since the beginnings of human consciousness. It is likely that the reason the quadrature keeps Bloom occupied despite its impossibility is that the problem reveals in an obvious way the practical limits of logic. Given Bloom’s dogmatic belief in the use and power of logical thought, the impossibility of the problem of the quadrature of the circle stands as a challenge to his worldview. Stephen often asserts himself as “proceeding syllogistically from the known to the unknown” (Joyce, 1986; p572) when confronted with a difficult problem. As if in response to this, Bloom later concludes, when contemplating the meaning of the night sky, that there are some problems for which there is “no known method from the known to the unknown” (Joyce, 1986; p575). An example of a problem which cannot be solved logically occurs when Bloom indulges in “meditations of involution” about the infinite divisibility of space. He begins by visualizing “the universe of human serum constellated with red and white bodies,” then realizes that they themselves are “universes of void space constellated with other bodies”. Eventually, he comes to the conclusion that “if the progress were carried far enough, nought nowhere was never reached” (Joyce, 1986; p586). His conclusion is nonsensical, and reveals that, as the reiteration of division is continued ad infinitum, rationality breaks down and ceases to be able to describe reality. Similarly, while working on the quadrature of the circle, Bloom “learned of the existence of a number computed to a relative degree of accuracy” to require “innumerable quires and reams of India paper” for the expression of its “units,
tens, hundreds, thousands, tens of thousands, hundreds of thousands, millions, tens of millions, hundreds of millions, billions” (Joyce, 1986; p574). The number with endless decimal places that Bloom is thinking about is π. In contemplating its infinitude, his logic fails as he arrives at “the nucleus of the nebula of every digit of every series containing succinctly the potentiality of being raised to the utmost kinetic elaboration of any power of any of its powers.” Both the division of space and the nature of π cause Bloom’s logic to break down because his attempted calculations necessarily involve infinity.
Leaving behind pure math, the most obvious purpose of the dot on the page is as a period to the syllogism formed by the first three sections of the book. In an abstract sense, the combined episodes of Ulysses comprise an argument that adheres to a syllogistic pattern of logic, by which a conclusion is inferred from two premises. The syllogism is visibly present within the seventeenth episode, being both featured by the vehicle of narration and referred to by Stephen as the method by which he passes “from the known to the unknown” (Joyce, 1986). It is present in the episode as a way of uncovering truth. Syllogisms are broken into three parts: the major premise, the minor premise, and the conclusion. Through a term common to both premises, called the “middle term”, the conclusion is drawn. Thus, if a syllogism takes the form “All α are β; all β are δ; thus all α are δ,” the major premise is “all α are β,” the minor, “all β are δ,” and the middle term is “β.” So, when Stephen chants The Legend of Harry Hughes to Bloom, the
section of the chant in which Harry goes “out to play ball” and in so doing breaks “the Jew’s windows all” is the first (major) part. Similarly, the section in which the “Jew’s daughter...cuts off his little head” is the second (minor) part (Joyce, 1986; p565-567). Finally, after chanting both of these parts, Stephen supplies an interpretation that could stand for the conclusion of the syllogism. But, in the case of this song, the common term is not clear. The reader is invited to think about the song as a syllogism, but the interpretation is vague: it could be “boys search for playing balls in the yards of Jews; Jews kill little boys; boys that search for playing balls in the yards of Jews are killed.” Yet this syllogism does not do justice to the complexity of the legend; it is unclear whether the Jew’s daughter should be generalized to all Jews, or whether the breaking of windows is important to the action of the story. The syllogism is an insufficient way of viewing the song, unless the definition of syllogism is broadened, or the complexity of the story is narrowed. The syllogism is also invoked as Bloom considers the possibility of an extraterrestrial race of beings, and “the possible social and moral redemption of said race by a redeemer.” Bloom concludes that “it could not be proved impossible” that some other planet in the solar system harbours sentient life. In response to the reiterated question of “the problem of possible redemption,” it is stated that “the minor was proved by the major” (Joyce, 1986; p574). The major and minor premises in the putative syllogism of Bloom’s thoughts on extraterrestrial life and redemption are again not perfectly clear. The syllogism could take the form “Sentient beings live on
another planet; all beings are capable of redemption; sentient beings on other planets are capable of redemption.” Yet the minor premise in this contextually plausible syllogism is not explicitly stated in the narration. Once more, although the reader is invited to think in terms of syllogisms, the syllogism is only helpful when either the situation is simplified or the definition of the syllogism is loosened. Such a loosening of syllogistic logic allows the disparate to be brought together in the dot. Were it necessary to adhere rigorously to syllogistic logic, then the syllogism “all stars are circles;
all roundabouts are stars; circles are roundabouts” would be false, because not all circles are roundabouts. But falseness, rather than preventing syllogistic analysis, should be seen as encouragement, because elsewhere the reader is led to think syllogistically even when the terms of the syllogism seem vague or illogical. Moreover, the importance of syllogistic thinking in relation to the dot is suggested by the dot’s nature as the final point of syllogisms. Syllogisms draw a conclusion by noting a common element in two different propositions. The dot illustrates a conclusion by embodying the middle term in a host of circular things presented in the episode. The conclusion states that through the middle term of the circle
the many circular objects and processes in the episode are, in fact, unified. The precise nature of their unification is unclear because the restrictions on syllogistic logic are relaxed, as is warranted by the examples of vague syllogisms given in the episode. The suggestion that logic should be relaxed in understanding the dot is held in the shape of the dot as well. As a circle, the dot can only be described in terms of π, a number that mathematicians can only ever approximate. When Bloom tries to express the infinity of π and the infinite divisibility of space, his logic breaks down. Because the geometric shape of the dot can only be fully understood at infinity, attempts to understand it using pure logic will inevitably fail.
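The invalid star-and-roundabout syllogism can be checked outside the text. In the following sketch (a hypothetical illustration; the toy sets are not drawn from Joyce), “All X are Y” is read extensionally as set inclusion: the two premises hold, the stated conclusion fails, and only the transitive conclusion follows:

```python
def all_are(xs: set, ys: set) -> bool:
    """Read 'All X are Y' extensionally: every member of X is in Y."""
    return xs <= ys

# Toy extensions, purely illustrative: the lone roundabout is
# counted among the stars, as the premises require.
stars = {"sun", "roundabout"}
circles = {"sun", "roundabout", "cup", "dot"}
roundabouts = {"roundabout"}

# Premises: all stars are circles; all roundabouts are stars.
premises_hold = all_are(stars, circles) and all_are(roundabouts, stars)

# The stated conclusion, "circles are roundabouts", does not follow:
stated_conclusion = all_are(circles, roundabouts)

# What the premises actually license, by transitivity of inclusion:
licensed_conclusion = all_are(roundabouts, circles)
```

Even with premises that hold, the reversed conclusion is false, which is exactly why the strict syllogistic form must be loosened for the dot to absorb its many circles.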
Yet logic should not be abandoned in the dot’s analysis; it requires a mixture of logic and approximation. On the one hand the dot is the quod erat demonstrandum of a syllogism, marking the point of proof where logic results in a perfect conclusion. On the other hand, the dot represents the limits of logic because infinity is required to understand its shape. In the dot, both the path to knowledge and the limits of knowledge coexist. The dot represents both the knowable and the unknowable. Like the omphalos, the dot is both terrestrial and divine. It represents what the human mind can comprehend, and the divine element in the cosmos which is necessarily incomprehensible. And, like the omphalos, it allows communication with the gods. The communication is effected through a demonstration of oneness in existence; oneness in the sense
that the disparate levels and spheres of reality—from cups, to stars, to tides, to recurrent thoughts—are unified; and in the sense that the comprehensible can be unified with the incomprehensible, the mortal with the divine.

REFERENCES

Brezina, Aristides. (1904). The Arrangement of Collections of Meteorites. Proceedings of the American Philosophical Society 43: 176, 213.

Geometry. (n.) In Encyclopædia Britannica Online. Retrieved December 20, 2007, from http://www.britannica.com/eb/article-217480

Joyce, James. (1986). Ulysses. New York: Random House.

Middleton, Henry. (1888). The Temple of Apollo at Delphi. The Journal of Hellenic Studies 9, 304-305.

Pearce, Ian G. (2002). “Early Indian culture - Indus civilisation”. Indian Mathematics: Redressing the balance. School of Mathematical and Computational Sciences, University of St Andrews.

Transcendent. (adj. n.) In Oxford English Dictionary Online. Retrieved December 20, 2007, from http://dictionary.oed.com
Anatomical & Moral Deviancy 51
During the Enlightenment, men depended less on divinity to explain the nature of the world and more on their capacity to reason. But they were challenged by the existence of those who deviated from the ‘norm’, so-called ‘monsters’. They had to rationalize how those beings came into form, but that rationalization created a hierarchy which besieged those who qualified as deviants. The essay by Nicholas Dillon tackles how enlightened thought rationalized the existence of ‘monsters’, while Maia Woolner traces how those explanations founded social distinctions.
Nothing is precise in nature. (Hankins, 1985; p127)
— Le rêve de D’Alembert, by Denis Diderot
The human sciences of the eighteenth century were very concerned with human diversity: how it arose and what it meant. It takes no great leap, then, to see that the human sciences relied on—and perhaps even existed because of—the wealth of contrast that the period found. Voyages to new worlds brought
formerly unseen varieties of man, animals, and plants into the salons, natural history collections, gardens, journals, and scientific societies of enlightened Europe. But though expeditions to far-off places led to new ideas about human diversity, such distant travels were not always necessary to survey the differences between humans. Less remote was an
examination of how humans could deviate from the known, familiar uniformity that was thought to be the essence of nature’s order. That is, instead of looking at the varieties of humans across geographic space and time, human nature could be elucidated, at least in part, through examinations of deviant forms—what Francis Bacon described as “strange and monstrous objects, in which nature deviates and turns from her ordinary course” (Daston, 1998; p38). Two such aberrant instruments of comparative reasoning were the ‘monstrous’ human and the ‘wild’ human.¹ It must be realized that neither physical nor moral deformity was new to people during this period; such ‘grotesque’ marvels had long been known in popular folklore. But naturalized and disenchanted, the deviations revealed quite a lot about human nature (Daston & Park, 1998). Monsters provided insight for the embryological debates of the period, and feral children, when examined systematically, gave an unprecedented view of what humans could be when stripped of civilization and sociability and left to develop in the natural world. And though during the period it never came to fruition, I will argue that these disruptions of the known order could not truly be reconciled without adjusting the notion of what it meant to be natural.

¹ Note: these designations of ‘unnatural deviations’ are to be interpreted solely in the light, unfortunate as it may have been, of Enlightenment ideals of what was natural.
From the supposed microscopic observation of homunculi² in the heads of spermatozoa, to frogs dressed in tight pants to test the fertility of their seminal fluid, the debates on animal generation during the eighteenth century embraced hypotheses and forms of experimentation that might seem fanciful or even humorous to modern readers. But, in truth, the contention between the two ideas of generation—preformation and epigenesis—was very serious and its scientific implications were significant. Grounded initially in Aristotelian ideas of causation (Hankins, 1985; p134), epigenesis—the view that an embryo developed its organs and, over time, acquired its final form through successive changes of an undifferentiated mass—went against the philosophy of preformation—which held that the fully formed adult already existed within the embryo as a homunculus and required, after birth, only time for growth. For good reason, epigenetic theory was eventually divested of, or at least removed from,

² Literally, a little man.
Aristotelian teleology; and, in a similar vein, preformationist theory moved away from the idea of actual miniature humans—instead concluding that the embryo need only contain the plan for its development (Hankins, 1985; p136). So, by the mid-to-late eighteenth century, both epigenesis and preformation were embedded soundly in more sophisticated scientific principles. Thus we encounter monstrosity and its interplay with theories of generation. As already prefigured, the fact that monsters were not new to the period did not make them any less of a challenge to the cultured rationality characteristic of enlightened thinkers. Incompatible with the ideal of a perfect, orderly nature, monsters occasioned not only aesthetic troubles—including those of one museologist who would not publish an image of a two-headed calf because it was, he said, “an unpleasant sight” (Hagner, 1999; p175-217)—but also a real classificatory dilemma for physiologists at the time. Each variant of monstrosity—from hydrocephalic³ infants to the dwarf Foma, a courtly spectacle of Peter the Great—was so singular: there was not even conformity in the non-conformity of the monstrous being. As such, there was no simple physiological unity to be arrived at; theories for one form of monstrosity did not usually befit another (Hagner, 1999; p188-189). In any case, any theory on the origin of monstrosity would also have to be tenable within a framework of generation. The monstrous births would have either deviated in their epigenetic development or would have been deviant as preformed germs. The latter of these reasons perhaps requires clarification. The defective germ was not, by definition, monstrous due to divine influence. To rid the theory of theological dispute about why God would or would not have created a monstrous form, many preformationists argued that the monstrous germ arose simply from “accidental aberration,” celestially unmediated (Bates, 2002; p217).

³ Hydrocephalus (literally, “water head”) is a condition, characterized by an enlarged head, resulting from the accumulation of excess cerebrospinal fluid in the ventricles of the brain. It is caused by blockages, congenital or otherwise, within the ventricular network of the brain.
As with any dichotomy, it was not as simple as one theory versus the other; hybrid theories doubtless arose. But, in principle, all theorizing needed to account for the three generally accepted categories of monstrosity: cases of misplaced organs, cases of defect, and cases of excess (Hagner, 1999; p190). In the 1740s, experiments by the Swiss naturalist Abraham Trembley on the spontaneous regeneration of fragmented polyps—which is to say, generation without a preformed germ—had posed a serious threat to preformationist theory (Hankins, 1985; p136). The aforementioned move away from the strictly homunculist view of preformation,
which took place in the 1760s, better situated the theory (Hankins, 1985; p145). But as science historian Michael Hagner has shown, in the second half of the eighteenth century, epigenetic theory—especially that of German physiologist Caspar Friedrich Wolff—used monstrosity, surgically examined, to an epistemic end. Specifically, examinations of monstrosity were performed in the hope of both resolving the problem of generation and also showing the superiority of epigenetic ideas. In disputing the preformationist notion of pre-existing structures that emerged and then were left to wear away with age—“I cannot stand such an awful nature,” he said—Wolff emphasized both the invisible dynamism and the regularity of generation in nature, through which, he felt, monstrous physical deviation arose (Hagner, 1999; p195-197). A force called vis essentialis, intrinsic to the matter of which living bodies were composed, was what guided the generative process. Yet, as science historian Thomas Hankins notes, Wolff still took a positivistic stance—emphasizing that “the organs of the body have not always existed”—but made no acknowledgment of the specific way in which organ formation had been brought about, stating only that “it has been brought about” (Hankins, 1985; p141). Monstrosity could thus be described, safely in its three categories, on the basis of imbalances of vis essentialis that led to defective generation. Simply stated, if development was disturbed, monsters arose.

Still, even with new theories, the problem of monstrosity was ultimately not solved in the eighteenth century, and neither was the generation debate in which monsters were invoked. Neither epigenetic nor preformation theory can be said to have been more correct, inasmuch as neither had an understanding of modern biology with its notions of natural selection, hereditary mechanisms, and genetic mutations and disorders. But monstrosity still gave comparative insight into the human sciences of the Enlightenment, especially human embryology. Though it is by no means a happy statement, Wolff, in discussing the opportunity to dissect newly deceased conjoined twins, spoke honestly when he said, “the death of this monster is much happier for anatomy and physiology than its survival” (Hagner, 1999; p194). The generation of a normal being had to be explicable in such a way that the deviant generation of a monstrous being could also be explained. Such an understanding of development, though not possible by the end of the century, would help provide a definitional basis for what the origin of human life was and, consequently, would help to explain how and why human life could deviate from physical regularity.
If it was man monstrous who gave insight into the physical development of humans, it was man untamed who gave insight into his species’ moral development. Variously called wild children, wolf children, Homines feri, or just savages, these feral deviants—described in Linnaeus’s Systema naturae simply as “four-footed, mute, hairy”—while not common, were known in the eighteenth century to exist or to have existed, as evidenced through a small number of documented cases (Linnaeus, 1997; p13). The cases, rare like the examples of monstrosity to which they were analogous, shared no specific mould or manifestation. All, though, were cases of children, found or captured in the wild, who had been left to live in isolation, fending for survival in their wholly natural environment. Old enough or lucky enough not to have perished when abandoned, they would still have been young enough to have developed free from the formative bias of society, culture, or, more broadly, any sociability at all. It was this ‘natural’ upbringing, without the words or thoughts, emotions or inclinations of others, that made feral children (who inevitably became feral adults) so potentially useful to the human sciences. If, as Rousseau claimed, man’s development was ruled by the troika of “nature, things, and other men”, then feral deviants were just men inchoate, left bare to be shaped by nature alone (Douthwaite, 2002; p95). Brought into a world of enlightened civilization, feral children revealed some of the essential qualities, manners, and customs of being human while also serving as test subjects of man’s supposed perfectibility—the notion of which was described by intellectual historian Arthur Lovejoy as a temporalized Chain of Being (Wokler, 1993; p124). From l’homme physique to l’homme moral (Wokler, 1993), the gradient of mankind reached eventual perfection. This gradient of perfectibility implied a kind of conjectural history whereby a series of successive events—like the Buffonian époques⁴ claimed to have shaped the earth (Hankins, 1985; p151-152)—allowed “savage man’s passage from nature to culture” (Wokler, 1993; p122). The passage, though, could also be reversed; without civilizing forces, moral degeneration could take hold, reverting man back—deviating him—to his natural state. Conversely, in Rousseau’s view, “everything is good, coming from God; everything degenerates in the hands of man” (Douthwaite, 2002; p96). This was the central idea behind Émile, ou De L’Éducation, Rousseau’s famous bildungsroman⁵: nature was the ideal state, and so a natural upbringing would produce the best citizen. “Fix your eyes on nature, follow the path traced by her,” says the novel’s narrator (Douthwaite, 2002; p97).

Was l’homme de la nature to be recreated and idealized or to be institutionalized and perfected? This is where the feral children fit in. There was no need for speculation as to what the ‘natural’ state of man was: it was revealed readily by the untamed deviations. John Locke’s tabula rasa epitomized, feral children were generally, as French revolutionary and Rousseauian educator Marie-Jeanne Roland said of two famous cases, “without language, signs, and likely without ideas” (Douthwaite, 1997; p183). Except for some instinct, which Locke had allowed for (Douthwaite, 2002; p97), they were blank slates—unkempt wild things, lacking ostensible sapience, who blurred the then extant philosophical barrier between man and animal. Ably challenging any and all preconceptions of human nature, including Rousseau’s idea of ‘natural nobility,’ feral children were instruments of comparative reasoning. For instance, as science historian Julia Douthwaite notes, they “undermined the notion that language is innate to humankind” (Douthwaite, 1997; p180). Such complex abstractions as speech or symbolic thought, it seemed, were not useful as fundamental markers of all humanity. In this respect, the case of Victor of Aveyron is particularly illuminating. The boy, after being found in 1799, eventually came under the care and study of Dr. Jean Itard. Itard worked constantly to achieve five objectives: “attach him to social life”, “awaken [his] nervous sensibility”, “extend the sphere of his ideas”, “lead him to the use of speech”, and “exercise frequently the most simple operations of [his] mind” (Itard, 1972; p102).

⁴ These époques, or epochs, were seven divisions—proposed by Georges-Louis Leclerc, Comte de Buffon—of the natural history of the earth. The principal causal feature within this history was the natural cooling of the earth over time.

⁵ A novel with themes centering upon the main character’s moral and psychological development. Émile, incidentally, was the first novel of this type.
These basic goals, which were simple in principle but difficult to achieve, relied on Victor’s deviation to probe sensualist theories of development (Douthwaite, 1997; p191). Step by step, sense by sense, Victor was taught, for example, to relate objects to words written on a blackboard. At first, he found categorization conceptually difficult. Was ‘book’ meant to signify only the book Itard had used in the lesson or all things with words on paper, or was there a third, vague region where ‘book’ was both an object and an idea? (Malson, 1972; p76) Of course Victor could not relate these specific thoughts to Itard, and, while he did eventually learn to write basic expressions, he never learned to speak. But though incapable of speech, Victor’s emotions were discernible. When punished, he “seemed more responsive to moral than to physical sanctions”, and protested Itard’s punishments when they were meted out unjustly (Malson, 1972; p76). He had somehow acquired a moral compass—thought to be the paragon of human progress—yet, like the orangutan, Victor possessed vocal organs but could not speak (Wokler, 1993; p131). If not culture, if not ideas, if not even speech, what was human nature made of? This question, as analyzed through feral children, informed educational theories, anthropological ideas, and, most of all, the basic understanding of psychological development in humans. In the preface to his first report on Victor, Itard wrote: Cast on this globe, without physical powers, and without innate ideas; […] In the savage horde the most vagabond, as in the most
civilized nations of Europe man is only what he is made by his external circumstances […] he enjoys, from the enviable prerogative of his species, a capacity of developing his understanding by the power of imitation, and the influence of society. (Malson, 1972; p80)
If anything, what feral children emphasized for the human sciences of the Enlightenment was the relative malleability of the human, the importance of sociability, and the lack of any real distinction between l’homme physique and l’homme moral. Like Victor’s book, human nature was not easily categorized.
And so we return to the epigraph of this discourse. Nature’s imprecision was ultimately problematic for the practitioners of the Enlightenment human sciences. Though deviation could be used comparatively, it was always being weighed against a precisely defined ‘natural’ state. In nature, as Diderot wrote, “everything changes, everything passes away” (Kors, 1997, p. 44). This, of course, was not the popular view and, except for a few proto-evolutionary materialists, there was little reason for
most to think that what was natural under one set of conditions would be any less natural under another. But Enlightenment notions of deviation were static epistemological claims, not objective measures. Truly, as Dr. Bordeu remarks in Le Rêve de D’Alembert, “everything that is can be neither contra natura nor outside of nature” (Kors, 1997, p. 43). There was, consequently, no teleological basis for normative forms. A century before Diderot, Guillaume Lamy, a Parisian doctor, wrote: “Has [man] ever been able, or will he ever be able, to find the conveniences that wings bring to the birds?” (Kors, 1997, p. 36) Evidently, normal humans could be perceived (statically and subjectively) as just natural deviants of idealized perfection. Under the rigidity of Linnaean classification, deviant forms were just the hierarchical baseline—Homo monstrosus and Homo ferus, the ghastly others. But Buffonian classification, emphasizing physical truth over abstraction, defined a species in terms of the “relation and material continuity” of individuals—a lineage of which monsters and feral children were necessarily a part (Sloan, 1995, p. 129). The distinction between normal and physically, psychologically, or otherwise aberrant individuals could, and indeed should, be empirically delineated—in degrees of physical deformity or developmental abnormality. But general moral judgments on the state of human nature were based only on aesthetic sensibilities and philosophical ideas of what nature should be. Therefore, in defining the deviant, all that needed to be established was probabilistic error (Bates, 2002, p. 219). Nature is about regularity of probability, not regularity of form. Without rigid uniformity or foreseeable perfection in nature,
variation was entirely natural and entirely conforming: everyone had been a “potential monster” and also a potential savage (Hagner, 1999, p. 214). So despite attempting to leave nothing, as Max Horkheimer and Theodor Adorno6 wrote, “out there”, the Enlightenment ideal of nature, strangely, prevented the integration of deviation into nature. From deviation man could be delivered from fear—classification of deviant forms would allow for this—but over deviation he could never be installed entirely as master (Hagner, 1999, p. 178).
To conclude, deviation was instrumental to the rational methodology of the Enlightenment human sciences. But whether monstrous or untamed, deviant humans, while elucidating their ‘normal’ counterparts, did not fit into the enlightened order of nature. Re-conceptualizing nature in more material, less abstract, and less idealized terms would be the only way to fully accommodate these deviant forms, and thereby maximally expand the scientific understanding of Homo sapiens in all its forms.

REFERENCES

Bates, D.W. (2002). Enlightenment Aberrations. Ithaca, NY: Cornell University Press.
Daston, L. (1998). What Can Be a Scientific Object? Reflections on Monsters and Meteors. Bulletin of the American Academy of Arts and Sciences 52(2).
6 Influential philosophers of the Frankfurt School who, in their book Dialektik der Aufklärung (Dialectic of Enlightenment), argued that the goal of enlightenment was gaining dominion over, and obviating fear of, the unknown.
Daston, L. & Park, K. (1998). The Enlightenment and the Anti-Marvelous. Wonders and the Order of Nature, 1150-1750, 329-363. New York: Zone Books.
Douthwaite, J. (1997). Homo ferus: Between Monster and Model. Eighteenth-Century Life 21(2), 176-202.
Douthwaite, J. (2002). The Wild Girl, Natural Man, and the Monster: Dangerous Experiments in the Age of Enlightenment. Chicago: University of Chicago Press.
Hagner, M. (1999). Enlightened Monsters. In Clark, W., Golinski, J., & Schaffer, S. (Eds.) The Sciences in Enlightened Europe, 175-217. Chicago: University of Chicago Press.
Hankins, T.L. (1985). Science and the Enlightenment. Cambridge: Cambridge University Press.
Kors, A.C. (1997). Monsters and the Problem of Naturalism in French Thought. Eighteenth-Century Life 21(2), 23-47.
Linnaeus, C. (1997). Systema Naturae (1735). In Chukwudi Eze, E. (Ed.) Race and the Enlightenment: A Reader. Oxford: Blackwell.
Malson, L. & Itard, J. (1972). Wolf Children and The Wild Boy of Aveyron. London: NLB.
Sloan, P. (1995). The Gaze of Natural History. In Fox, C., Porter, R., & Wokler, R. (Eds.) Inventing Human Science: Eighteenth-Century Domains, 112-51. Berkeley: University of California Press.
Smith, R. (1995). The Language of Human Nature. In Fox, C., Porter, R., & Wokler, R. (Eds.) Inventing Human Science: Eighteenth-Century Domains, 88-111. Berkeley: University of California Press.
Wokler, R. (1993). From l’homme physique to l’homme moral and back: towards a history of Enlightenment anthropology. History of the Human Sciences 6(1), 121-138.
Hence it manifestly appears, that the animal machine is made, not by parts, but all together; seeing it is impossible, that a circle of motions, some of which depend on others, be completed, without all their instruments being in their proper places…. Wherefore the animalcula, which by the help of microscopes we discover swimming in the semen masculinum, are really little men, which being received into the womb, are there cherished as in a nest, and grow in due time to a proper size for exclusion. Therefore Hippocrates said very justly: In the body, there is no beginning, but all the parts are equally the beginning and the end. -Richard Mead, British physician (1673-1754)

Some three hundred or so years before the birth of Christ, Aristotle defined ‘monstrosity’ as a ‘departure from type’ (Tuana, 1993). This definition would come to have serious consequences in the eighteenth century, when the development of the discipline of the human sciences was profoundly influenced by the study of ‘monsters’; the inquiry into the origin of physical birth defects radically called into question age-old attitudes towards human life and its place in nature. No more were these beings solely curiosities to be exhibited as examples of the playfulness of nature; they were an insult to
proper decorum in a century obsessed with scales, systems, and order. Coupled with progress in the field of embryology, the results of new discoveries created tension with the traditional tenets of life itself—the implications found resonance in moral and theological beliefs as well as in ideas concerning aesthetics, classification, race, and gender. During the eighteenth century, the conception of the universe as a “Great Chain of Being”—an idea originating with Plato that expounded the continuity, gradation, and hierarchy of all
life—gained widest acceptance (Lovejoy, 1957). As articulated by the German philosopher G.W. Leibniz: When we consider the infinite power and wisdom of the Maker, we have reason to think, that it is suitable to the magnificent harmony of the universe, and the great design and infinite goodness of the architect, that the species of creatures should also, by gentle degrees, ascend upwards from us towards his infinite perfection, as we see they gradually descend from us downwards. (Lovejoy, 1957)
The Great Chain of Being became a launch pad from which many other, more complicated questions arose. Not only was this chain meant to be a literal hierarchy of biological complexity, but it also implied a gradation in both degree of perfection and moral character. The higher one was on the chain, the more assured one could be of his rationality and closeness to God. Many Enlightenment scholars, in the belief that God had created man in physical perfection, conceived of those with physical deformities as ‘monsters’. This belief in man’s physical perfection raised important questions in light of those with birth defects—where did they belong on the scale of the universe? For eighteenth century scientists, the struggle was to answer how and why monsters existed in a world supposedly ordered, designed, and maintained by God and his divine laws. If all humans were second only to angels in proximity to the divine creator, Enlightenment scholars sought to distinguish between perfection and imperfection as well as to understand how these differences arose. Perhaps the most ‘earth shattering’ problem was that of God and his creative intent. These issues were further complicated by their intimate connection with the eighteenth century debate between two theories of generation: epigenesis and preformation. The fundamental question was: What is the nature of life in its original state—is it fully formed, or does it develop from unorganized matter? Mechanist philosophy, which dictates that the universe functions like a machine, in a rational and causal manner, was all the rage during the last decades of the seventeenth century. Therefore, due to the lack of a plausible mechanical explanation of the gradual development of life, preformation was the dominant embryological theory of the time (Tuana, 1993). It propounded the idea that all living beings were “divinely formed in the original creational act” (Hagner, 1999), and that either the sperm or the egg contained a fully formed homunculus1 which grew from invisibility to visibility—a theory that made monsters “really little men,” gone wrong.

1 Fully grown and developed human in miniature form.

The theological implications were grave, not only for monsters but for the classification of man in the world. If one
were to maintain that God was almighty and omnipotent, then monsters too must fall within the perfection of God’s creation. Scientists struggled to classify those with physical deformities—in the same way that they struggled to classify different races. They wondered: “Could these beings have claim to those attributes ascribed to normal human beings?” Preformationists would struggle to discover perfection, regularity, purpose, and beauty in deformity. For some, like Leibniz, it was enough to say that monsters “follow rules and are in conformity with the general will [of God] although we are unable to perceive such a conformity” (Hagner, 1999). Others believed that God’s plan could be destroyed by certain external influences. Regardless, the unfavorable words ‘chance’ and ‘accident’ began to resonate in the consciences of the theologically concerned. Those who could not accept that God would plan for such abnormalities posed far greater problems for the mechanist–preformationist camp; the assumption could be made that God was not all-powerful, nor was he an active agent in every single life. The epigenesists believed that both the egg and the sperm jointly contributed to the formation of the embryo through the arrangement of unorganized matter. This theory was able to offer far more creative answers to the origin of monsters—answers that revealed that there was far more to life than previously expected. C.F. Wolff, one of the greatest supporters of epigenesis, condemned preformationism because it painted a picture of life as nothing more than a decline. To believe that “there is never generation in nature, but only a lengthening or increase in parts”
(Tuana, 1993) was to deny life any of its creative attributes, and it contradicted a general movement in the latter half of the eighteenth century that regarded nature as full of active agents. Instead of being purely mechanical, nature was full of life forces such as the “formative drive” and the “vis essentialis”—two substances hypothesized to drive generation and, if something went awry, inadvertently create monsters. The epigenesist position heralded a plurality of possibilities, not a definitive answer. Regardless, this debate fundamentally changed the way life was perceived. In suggesting that generation was dynamic and unpredictable, epigenesists initiated the move to free nature from its theological bonds, turning life into a process and nature, rather than God, into the active agent. Deformation became a problem of temporal development in which the end product was variable. God was losing his grip, and nature was becoming increasingly haphazard and unpredictable. There were some, like the Swiss anatomist and physiologist Albrecht von Haller, who simply could not accept epigenesis because of its implications for human life; a human that could be the product of unorganized matter rid mankind of its divinely ordained moral nature. For others, like Wolff, uncovering the processes of nature did not deny God’s existence—it simply elucidated the fact that God had made nature more complicated than previously understood. But there are two sides to every coin. Epigenesis turned sour when it transcribed the generation process onto a hierarchical system like that of the Great Chain of Being. The German anatomists and physiologists J.F. Meckel and F. Tiedemann argued that monstrosities were beings whose “organs do not develop in the usual way,” and that “any deviation of the embryo from the human form is a fall back in animal formation, and thus any monstrosity is like an animal, not always in outer shape, but more or less in its inner one” (Hagner, 1999). This view would have profound effects for gender when various anatomists later noted that, for unknown reasons, the majority of acephalic embryos—those born without heads—were female. Tiedemann, in conformity with Aristotle’s belief that the ‘proper form’ of the human was male (Tuana, 1993), argued that full maturation of an embryo would always yield a son. Female children had suffered from “an inhibition of the embryo on a lower level of development.” In putting forward the idea that all embryos were female in their earliest form, he made the assumption that “obviously woman is much more similar to the fetus than a man, and thus the woman stands on the lower level of development” (Hagner, 1999). This evolutionary mechanism, in asserting that women were simply “less developed,” accounted for the contemporary belief that women were physically, mentally, and morally inferior because they did not possess the same capacities for such faculties as men. The inherent irony of the epigenesist position is that while it liberated nature from divine predetermination, it built up new biological arguments for the inferiority of women and the physically handicapped. Embryological studies functioned as the micro-level explanation for anatomical differences, and thus the study of monsters also led to practices in comparative anatomy. However, in the attempt to understand their physical deformities, scientists began to make dangerous assumptions about human
bodies as well. For instance, the preformationist supporter A. von Haller came to the conclusion that conjoined twins represented a new species of human being. The idea that physical differences constituted the basis for such important distinctions between humans came to full fruition, however, with the idea of race. What started out as a somewhat innocuous comparison between the greatest physical differences, like that of conjoined twins, became increasingly detrimental as the gradation of differences became progressively smaller. Skin color, facial characteristics, and body type became the new deformities, establishing the parameters necessary for the Eurocentric vision of the world. It is important to remember that the development of the human sciences was influenced by the social and political organization of the period and thus could not be as objective as the physical sciences, regardless of the attempt to use similar empirical methods. Understanding the nature of humans would make it possible to rationalize and reform society, but it could also provide justification and ‘proof’ for the status quo (Hankins, 1985). The anatomist S.T. Soemmering, for example, utilized the physiognomic method in his studies of the brain to make the claim that there was a bodily, moral, and intellectual gulf between Africans and Europeans (Hagner, 1999). Similarly, the German physiologist J.F. Blumenbach’s studies in comparative anatomy provided the impetus for his work “The Degeneration of the Species,” which made aesthetics a matter of science and the principal determinant for identifying the stock race from which all other races have degenerated.
Caucasian variety…In general, that kind of appearance which, according to our opinion of symmetry, we consider most handsome and becoming….I have taken the name of this variety from Mount Caucasus, both because its neighbourhood, and especially its Southern slope, produces the most beautiful race of men, I mean the Georgian; and because all physiological reasons converge to this, that in that region, if anywhere, it seems we ought with the greatest probability to place the autochthones of mankind. For in the first place, that stock race displays, as we have seen…the most beautiful form of the skull, from which, as from a mean and primeval type, the others diverge by most easy gradations…. (Blumenbach, 1997)

This view allowed the terms ‘beautiful’ and ‘ugly’ to be applied to entire peoples. Nor was this physical differentiation divorced from the moral; it was supposed that superior beauty meant superior morality (Bindman, 2002). Virtue inherently yielded the beautiful, and vice inherently begot the ugly. These ideas were continued with intensity in the nineteenth century, when Cesare Lombroso, an Italian criminologist, used the same types of comparative techniques to make criminal deviance a matter of anatomical evidence. What comes next? A “tendency towards criminality” gene? (Lombroso, 1972) The eighteenth century study of monsters paradoxically liberated man and then enchained him yet again. While providing the basis for much advancement in the fields of embryology and comparative anatomy, these ideas and techniques became the watershed for many of the modern ideas of discrimination, as well as the source of much hypocrisy. As the joke goes, “Once upon a time, the French were obsessed with the size of the skull as the arbiter of intelligence. They discovered that (ho! ho!) they happened to have larger skulls than the British and deemed their theory correct. Unfortunately, it was soon after noted that the Germans had been endowed with much larger heads. The French scientists then insisted that the theory had been disproved because the French are (quite naturally) the most intelligent of men.”

REFERENCES

Bindman, D. (2002). Ape to Apollo: Aesthetics and the Idea of Race in the Eighteenth Century. Ithaca, NY: Cornell University Press.
Blumenbach, J.F. (1997). Degeneration of the Species. In Race and the Enlightenment: A Reader. Cambridge, MA: Blackwell Publishers.
Hagner, M. (1999). Enlightened Monsters. In Clark, W., Golinski, J., & Schaffer, S. (Eds.) The Sciences in Enlightened Europe. Chicago: University of Chicago Press.
Hankins, T.L. (1985). Science and the Enlightenment. Cambridge: Cambridge University Press.
Lombroso, C. (1972). Criminal Man: According to the Classification of Cesare Lombroso. New Jersey: Patterson Smith.
Lovejoy, A. (1957). The Great Chain of Being: A Study in the History of an Idea. Cambridge, MA: Harvard University Press.
Sloan, P. (1995). The Gaze of Natural History. In Inventing Human Science: Eighteenth-Century Domains. Berkeley: University of California Press.
Tuana, N. (1993). The Less Noble Sex: Scientific, Religious, and Philosophical Conceptions of Women’s Nature. Bloomington: Indiana University Press.
Surgery and Sexuality
Yun Gao raises the question of bodily autonomy in genital operations. By tracing the social and medical aspects of male, female, and intersex surgeries at birth, she reveals how society’s gender ideals are forced upon non-consenting individuals. Her thorough—uncut—critique provokes insight into how ideals and expressions of gender are relevant to all individuals.
Simply a bit of foreskin snipped off the penis? Male circumcision is generally viewed in American society and by the American medical community as a benign, routine surgical operation without any adverse effects. This is reflected in reports from the United States’ National Center for Health Statistics. The most recent reports, dating from 1999, place the male circumcision rate in the United States at 65 percent (NCHS, 1999). However, one alarming aspect of the procedure is that it is a non-consensual operation that
has been acknowledged by numerous national medical associations as “not essential to the child’s current wellbeing” (British Medical Association, 2006) and “rarely clinically indicated” (American Academy of Paediatrics, 2005). Why are most Americans so silent with regard to male circumcision, despite its questionable ethics and benefits? In this article, I intend not only to critique the procedure of male circumcision as an invasion of an infant’s bodily autonomy, but also to examine the procedure from a gender-based perspective. I will demonstrate how cultural assumptions justify male circumcision; these assumptions are part of a greater set of regulations governing gender and gender expression. Ultimately, critiques of male circumcision are therefore very relevant for feminist, queer, and trans scholars and activists studying all forms of genital modification operations. The three forms of operations I will examine are male circumcision, female genital cutting, and operations on intersex babies. In all three cases, the question of consent (or lack thereof) is disregarded while priority is given to adherence to the relevant cultural and/or religious expectations.

Male Circumcision

To begin, circumcision is not merely a simple nick of the foreskin or prepuce; it is the wholesale removal of natural tissue. Babies born without foreskins are recorded by hospitals as having a “birth defect” (Kessler, 1997). Furthermore, numerous studies have shown that the foreskin is “an integral part of the male genitalia” as a “platform for nerves and nerve endings” (Gollaher, 2000). Despite this, support for routine male circumcision is widespread in the United States. Some believe the procedure yields health benefits to the child and even to the general public. For example, it has been cited as reducing the chances of contracting certain STIs, including HIV, as well as UTIs and penile cancer (Benatar & Benatar, 2006). However, these claims tend toward hyperbole, and the unfavourable (or at best ambivalent) positions of various national Western medical associations reflect their scepticism. The Canadian Paediatric Society stated outright in 2004 that it does not recommend male circumcision for newborns. In 2004 and 2003 respectively, the Royal Australasian College of Physicians and the British Medical Association could not find any clinically-related justification for the operation. The American Academy of Family Physicians, the American Academy of Pediatrics, and the American Urological Association are the most ambivalent in their treatment of circumcision. Even so, as of 1999, the AAP still would not recommend male circumcision as a routine operation, citing a lack of evidence of potential benefits.
Regardless of which national medical association’s official policy one chooses to peruse, in all cases there are no recommendations for circumcision to be performed routinely; there is no evidence that the benefits outweigh the costs. Perhaps the major cost is that of pain, although this is difficult to measure accurately. It has been acknowledged by almost all pro-circumcision and anti-circumcision advocates, as well as by all of the Western national medical organizations, that pain is always present in the procedure (BMA, 2006). Despite this fact, in North America, between 64 and 96 percent of circumcisions (depending on the hospital) are still performed without anaesthesia (Fox & Thomson, 2005).
A History of Male Circumcision

What is the purpose of conducting an operation that causes pain to the patient, is performed without his consent, and serves no medical purpose? Why can the operation not be delayed until a child can at least give his informed consent? To examine present justifications of male circumcision, past circumstances must be examined. Male circumcision traces its origins back to the Jewish faith. The Bible’s Genesis 17 describes Abraham’s religious duty to God: to undergo circumcision and to furthermore ensure that all boys of the Jewish faith would do the same on the eighth day after their birth. Genesis 17 is the first biblical text to mention ritual circumcision, now known as the brit milah (Glick, 2005). Today, there is still a general consensus that circumcision for religious purposes is acceptable. Male circumcision has maintained its place as an identifier of Judaism. For example, the doctor and Jewish scholar Leonard Glick has pointed out the paradox that often occurs in cases of Jewish-Gentile intermarriage, in which the Jewish parent often insists on having the boy circumcised, despite having already broken Jewish religious laws against intermarriage in the first place (Glick, 2005). In practice, male circumcision is no longer exclusive to boys of the Jewish faith. Sixty-five percent of American men are currently circumcised, despite the fact that only 2 percent of Americans are actually Jewish (Glick, 2005). Furthermore, every year, one million baby boys are newly circumcised in the United States (American Academy of Family Physicians, 2007). In most cases, Jewish babies are circumcised under the same conditions as Gentile newborns, completely nullifying the religious justification for circumcision, as Jewish religious law actually requires a ritual circumcision, the brit milah (Glick, 2005). Thus, attempts to frame male circumcision in a solely religiously-motivated context are difficult to consider logical, in light of the fact that male circumcision is no longer a method of enforcing Jewish separatism.
The changing arena of circumcision— from the context of Jewish religion to secularism and Christianity—can be attributed to its medicalization in the nineteenth century. Doctors in the Western world became convinced that circumcision could promote “superior chastity,” after observing apparently lower incidences of gonorrhoea and syphilis among Jews (Glick, 2005). An increase in the rate of circumcision in Britain occurred after the First World War, when fears of syphilis were at their peak (Darby, 2005). Also contributing
to the medicalization of circumcision was an increase in Jewish-American doctors, many of whom were the sons of immigrants, in the period from 1870 to 1940. No doubt some of them felt vindicated by the growing acceptance of circumcision among Gentiles, demonstrated by their encouragement of the procedure, if not as a Jewish ritual, then as a method to preserve good health (Glick, 2005). Curiously, the British circumcision rate, at its peak in the 1930s, was estimated to be 34 percent (Darby, 2005), a figure which did not come close to rivalling the United States, which at its peak in the 1970s was about 85 percent (Harrison, 2002). Perhaps this is due to a classist element of circumcision present in Britain that did not exist in the United States. The circumcised penis became an identifier for a member of the British upper class (Darby, 2005). Incidences of the procedure were significantly more prevalent among members of Britain’s professional classes than among manual labourers (Darby, 2005). That circumcision became a mark of the elite implies that it was used deliberately as a tool by the upper classes to further distinguish themselves from common labourers. Furthermore, those who were richer could afford to pay for their children’s circumcisions, and were also more likely to receive ‘education’ about the procedure in the first place simply by having greater physician access (Darby,
2005). In contrast to the British model, American society has been described as more egalitarian, with fewer class-related social tensions. The lack of an established elite class in the United States nullified circumcision as an exclusive indicator of familial wealth and morality. In fact, in all aspects, the relative social egalitarianism of the United States prevented sexual discourse from taking on the “class articulations” of Britain (Mort, 1987). Circumcision was a routine operation in American hospitals and not dependent on a family’s financial resources (Darby, 2005). Circumcision’s comparative popularity in the United States may have also been due to simple medical circumstance. In 1950, a prominent British doctor, Douglas Gairdner, gave a thorough and rather caustic review of
circumcision in the British Medical Journal, opening the door to a whirlwind of responses and opinions in the British medical community afterwards (Darby, 2005). Significantly, no other British doctor had criticized circumcision with as much vehemence as Gairdner had, and few American doctors had bothered to denounce male circumcision at all (Darby, 2005). Gairdner’s direct and scathing attack on circumcision was cited in 1979 in a British Medical Journal editorial as a major factor behind the procedure’s fall in Britain from its peak at thirty percent to the current rate of six percent (Darby, 2005). Additionally, the role of circumstance is further illustrated by the Canadian context. A significant minority of Canadian doctors are trained in Britain, while few are American-trained; perhaps the British medical tradition of criticism has been passed on to the Canadian medical community, as Canada has never had as high a circumcision rate as the U.S. (Darby, 2005). By the early 1970s, in Britain, the National Health Service no longer paid for the cost of circumcision (Goldman, 1997), no doubt contributing to the continued fall of the circumcision rate. In contrast, in the United States, only recently have there been suggestions that state Medicaid programs should no longer fund unnecessary circumcisions, and such discussions are often controversial. For example, in 2001, North Carolina terminated funding for male circumcision; the policy was reversed only one month later, amid a media storm and pressure from doctors (Craig, 2004). Consequently, in 2000, North Carolina Medicaid paid 1.8 million dollars to doctors to perform medically-unnecessary circumcisions (Craig, 2004).
As of 2003, only twelve states had eliminated Medicaid funding for unnecessary circumcisions (Craig, 2004). Clearly, it is not only cultural pressure that validates and vindicates male circumcision; medical practitioners with economics in mind have played a role as well.
Female Genital Cutting
In contrast to male circumcision, female circumcision is almost uniformly agreed in the Western world to be an abhorrent and backward practice, as demonstrated by a 1982 statement released by the World Health Organization declaring it “unethical” (Shell-Duncan, 2001). These operations have been performed on between thirty and seventy-four million women, in at least twenty African countries (Boulware-Miller, 1985). The age at which the procedure is performed varies by location. For example, in Egypt and Sudan, the most common age range is between five and nine years, although in other locations the practice is performed on girls at puberty (Gordon, 1991). The wide extent and regional variations of the procedure are further reflected in the World Health Organization’s division of female genital operations into four categories. The first type is analogous to male circumcision, in that it is the prepuce of the female that is operated on. The clitoris is either left intact or is partially removed (Cook et al., 2002). The second and third classifications describe more invasive variations of the operation. The second type involves the partial or total excision of the labia minora and clitoris, while the third type describes infibulation, a practice in which the vaginal opening is narrowed
or stitched shut. The last category of female genital operations covers all other unclassified variations, as mild as a pricking of the clitoris or labia, or as invasive as the scraping and cutting of the vaginal interior (Cook et al., 2002). According to some, operations falling in the first and last categories could very well be as benign as male circumcision (Anufuro et al., 2004), in that the recovery period is relatively brief and no physiological or pathological effects are necessarily suffered. No solid consensus has been reached on the naming of the practice. This article will predominantly use the term “female genital cutting”, thus avoiding an outright dismissal of the gravity and invasiveness of some variations of the practice, while also withholding the judgmental and accusatory tone to which the term “mutilation” lends itself. Who in particular is being judged in a condemnation of female genital cutting? The practice has often been characterized, especially by advocates of women’s rights, as a manifestation of patriarchal and oppressive forces in the African countries where the procedure is performed
(Hellsten, 2004). Western feminists, and the world at large, often describe the practice as ‘mutilation’ in an attempt to emphasize the violent context in which it is performed. The procedure’s condemnation by the World Health Organization, the International Federation of Gynaecology and Obstetrics, and the United Nations reflects this widely-held viewpoint (Cook et al., 2002). The Hosken Report, one of the most famous and eye-opening pieces of academic work on the subject, links female genital cutting to “the failure of [Sudanese] society to develop and govern itself”. It also states that the procedures are “incompatible with contemporary life” (Hosken, 1993).

Choice and Empowerment

The abhorrence for female genital cutting, however, is not echoed quite as clearly in Africa, where most cases of it occur (Hosken, 1993). Indeed, although numerous African countries’ ministries of health have released statements condemning the practice (Shell-Duncan, 2001), a significant number of women treat the procedure as a positive force. In the Kenyan town of Kikhome, for example, both boys and girls have their genitals cut as preteens, in a rite of initiation into adulthood that they themselves initiate. The cutting ceremony, witnessed first-hand by anthropologist Christine Walley, is laden with cultural significance, and the preteens must learn a number of ritually-required songs and dances (Walley, 1997). Other observers describe the bonds formed within groups of girls eagerly anticipating their ritual cutting (Boulware-Miller, 1985).
Numerous researchers have suggested that female genital cutting has been ‘reclaimed’ by some of the women undergoing the procedure. For example, without claiming to describe all genitally-altered women, Janice Boddy suggests that female genital cutting allows some women to distinguish themselves from men and to distance themselves sexually from them. By emphasizing women’s fertility instead of their sexuality (which would always be seen in the context of submission to men in marriage), female genital cutting functions as a re-emphasis of the strength and influence they hold as bearers of children (Walley, 1997). Western perceptions commonly hold that female genital cutting greatly reduces sexual satisfaction for women. However, a 1989 study conducted by Hanny Lightfoot-Klein found that nearly ninety percent of respondents experienced orgasm throughout their marriage. Further studies in Sudan and Egypt that do not focus solely on the attainment of orgasm (as an indicator of sexual satisfaction) have confirmed that circumcised women could indeed enjoy sex and be happy within their marriages (Anufuro et al., 2004). For most women, marriage is the “primary path to social and economic survival and advancement” (Boulware-Miller, 1985). As described earlier, female genital cutting, for better or for worse, renders women more desirable as
potential marriage partners, ultimately giving them more power than if they had remained unmarried. From a Western feminist perspective, framing women’s power and agency in the context of their “marriageability” might not be considered a sign of female empowerment, but it is inaccurate to paint these genitally-altered women as helpless victims or to accuse them of possessing a false consciousness. Their status as genitally-cut women may, for example, have opened space for social ascendancy in their local communities. Female empowerment cannot simply be evaluated in Western terms in other regions of the world, where completely different cultural contexts and living standards apply.
Obviously, I do not aim to suggest that all acts of female genital cutting should be embraced as demonstrations of female empowerment. However, I do wish to highlight the tendency for Westerners to portray the procedure in black-and-white terms, a particularly problematic practice for a number of reasons. Firstly, by grouping all female genital cutting under the same umbrella term, one ignores regional differences, erasing the nuances of variegated cultural backgrounds in favour of a uniform, culture-effacing “African” identity. Secondly, outright condemnation of genital cutting can have an alienating effect on the women who have had the operation or who will send their daughters to be
cut as well. Women and girls who feel empowered (however they define “empowerment”) may feel indignant at Western discourses that rob them of their agency and value. This discursive tension was seen at the 1980 International Women’s Conference in Copenhagen, which was marked by the near-walkout of a group of African feminists who felt alienated by the paternalistic dialogue adopted by their Western counterparts (Sarkis, 1980). Is it possible to come up with concrete statistics placing women on a “side”? For how many women is female genital cutting a coercive practice that they normally would not submit to, and for how many women is it a symbol and tool demonstrating their own power and agency? A conclusion is difficult to come to, as the human experience is rarely one of uniformity. Christine Walley’s interviews with three girls who had just undergone the procedure illustrate these contradictory tensions. When asked if they would want their daughters to be “circumcised”, one girl responded affirmatively, another said, “after some thought”, that she would not, and the third girl did not answer (Walley, 1997). They were well aware of the cultural significance of the cutting ritual, but two of the three girls still could not give an enthusiastic endorsement of it, despite their own non-violent experiences of being genitally cut. This apparent paradox confirms the need for discourse on the subject to remain culturally-sensitive and non-condemnatory. Patronizing assumptions that these girls have been abused and mutilated would only have an alienating effect on them, and may not be accurate in the first place. Although two of the three girls did not endorse female
genital cutting, they did acknowledge the ritual’s role in community bonding (Walley, 1997). That all forms of female genital cutting are condemned by international organizations, regardless of severity, is particularly interesting when one considers the almost universal acceptance of, or at least ambivalence toward, male circumcision by the medical community and the general public alike.
Accommodating Female Circumcision
One particular double standard in the treatment of male and female genital alterations can be seen when one examines the new practice of harm reduction and the medicalization of female genital cutting. In numerous African countries, officials are realizing that attempts to completely eradicate female genital cutting are not working. The new strategy of harm reduction acts to minimize the damage done to those women who do undergo the cutting. The strategy emphasizes the importance of cleanliness, the use of analgesics, and medical professionalism, while also encouraging less invasive procedures in general (Shell-Duncan, 2001). For example, in Somalia, harm reduction activists advocate the mildest form of the first type of cutting, a prick of the clitoris. This mildest form of female genital cutting fulfills the requisite cultural requirements, and has been cited by some (though not all) observers as having “minimal health risks” (Shell-Duncan, 2001). At the very least, it is certain that clinicalized female genital cutting is a healthier alternative to traditional methods that often involve practitioners who lack the sterilized equipment and medical knowledge necessary to prevent harmful side effects (Shell-Duncan, 2001). In the United States, one attempt has been made at harm reduction. In 1996, doctors at a hospital in Seattle noted the frequency with which Somali parents would request that physicians perform “circumcisions” on their newborn children, regardless of the child’s sex. Doctors were forced to explain that circumcision in the United States was legally permitted only on males (Davis, 2006). The “Seattle Compromise” was thus suggested by Somali mothers: a symbolic nick would be made on the prepuce of the newborn girls, done by a doctor in a controlled medical setting. Such an operation has been described as “less injurious to the health, welfare, and safety of girls than male circumcision is to the health, welfare, and safety of boys” (Davis, 2006). Significantly, all of the female genital cutting operations would be performed with the use of analgesics; the majority of male circumcisions, as described earlier, are not conducted with pain reduction in mind.

Male and Female Circumcision: A Double Standard

Thus, if female genital cutting were performed in its mildest form, as suggested by harm reduction advocates and proponents of the Seattle Compromise, it would be no more harmful than male circumcision. In some cases of female genital cutting, there is no doubt of gender oppression at play, with men working to suppress female sexuality. Yet in other cases, the cutting is done to bond members of a cultural
group together, analogous to male circumcision’s original role of marking one’s place in the Jewish community. Cutting done to pre-adolescent and adolescent girls often involves female bonding rituals, and can also be an initiation into adulthood. Furthermore, the existence of symbolic “circumcision through words” in some Kenyan communities, of symbolic infibulations in Somalia (Shell-Duncan & Hernlund, 2000), and of other ritualistic, non-invasive variations of female genital cutting in Israel and Indonesia (Davis, 2006) suggests that the magnitude of the procedure matters less than the fact that it takes place at all, in compliance with cultural norms and expectations. Although the Seattle Compromise still represents a medically-unnecessary operation performed on a patient without her consent, the light nick to the clitoris it proposes at least removes no tissue, would not impair sexual function, and does not bear the mark of irreversibility that male circumcision does.
The parallels between controlled female genital cutting and male circumcision are now hopefully evident. Both are usually done not for medical reasons but solely for cultural or religious ones. Indeed, controlled female genital cutting would fall under closer scrutiny than Jewish male circumcision procedures, which can still be carried out by traditional mohels in synagogues (Davis, 2006). At the very least, all efforts would be made to eliminate sensations of pain in female genital cutting operations; this option is usually not even extended to infants undergoing male circumcision. Despite the similarities between the two genital-
modification operations, and despite the increased health and safety regulations that a hypothetical controlled female genital cutting procedure would be beholden to, the Seattle Compromise was rejected almost unanimously. Some opponents of the proposal felt strongly enough about it to send hate mail and death threats to the hospital. The rejection of the Seattle Compromise, juxtaposed with the complete lack of regulation of male circumcision, can be attributed to cultural double standards. The difference between the female genital cutting proposed in the Seattle Compromise and male circumcision lies in the public’s respective perceptions of the practices. Female genital cutting has been associated only with violence, brutality, and savagery, due to culturally-reductionist descriptions of the procedure. Women who have undergone the act are portrayed only as victims. Because the procedure is foreign and unfamiliar in Western culture, it is immediately painted in a negative light. One such example is illustrated in Judith Lorber’s Paradoxes of Gender. Lorber firmly opposes female genital cutting and characterizes it as mutilation, yet significantly has little to criticize about the more common male circumcision. Troublingly, she excuses the practice by claiming that “it is for women’s and men’s [sexual] pleasure” (Lorber, 1994). This is particularly ironic in light of controversial scientific claims
that male circumcision can in fact reduce sexual pleasure for the man (Gollaher, 2000). I do not wish to downplay the suffering that numerous women have experienced due to female genital cutting. I aim instead to point out the cultural condescension with which the Western world treats the third world, at least on the topic of female genital cutting. Western cultural complacency has allowed the continuation of male circumcision while still condemning female genital cutting, even in its most minor form. In a similar vein, I have no intention of downplaying female genital cutting in order to magnify the seriousness of male circumcision. It is not productive to “compare” oppressions and elevate one at the expense of another. Instead, critiquing male circumcision and the double standards inherent in its continued cultural acceptance can be beneficial to activists trying to reduce the harm present in the more invasive forms of female genital cutting. By questioning male circumcision, one explores the power of cultural conditioning. When is it permissible to justify violating the bodily autonomy of a non-consenting patient for no purpose other than to fulfill cultural expectations? If Western society as a whole can acknowledge that both female genital cutting and male circumcision involve the same loss of bodily autonomy, then we can move beyond our ethnocentric, paternalistic, racially-Othering tendency to characterize practitioners of female genital cutting as savage and exotic while painting the West as a civilized champion of human rights. This ultimately bolsters the position of anti-genital-cutting activists by removing one level of hypocrisy that taints the present discourse and alienates the very women they are trying to reach. After this occurs, perhaps efforts to reduce the invasiveness of some female genital cutting procedures will be received more effectively.
Operations on Intersex Babies
Cheryl Chase, the founder of the Intersex Society of North America, gained media publicity in the 1980s and 1990s by pointing out the similarities between female genital cutting and operations on infants with ambiguous genitalia (Gollaher, 2000). In both cases, the operations are non-consensual violations of a child’s right to bodily autonomy, performed to satisfy dominant cultural expectations. In this article, however, I intend to discuss how male circumcision is just as relevant as female genital cutting not only to intersex activists, but to trans and queer activists as well.
The condition of intersexuality, which affects anywhere from one in 6,900 (Krahl & Kuhnle, 2002) to one in 1,500 (Dreger, 2004) individuals, is pronounced by doctors upon viewing the genitals of a newborn baby. Existing medical standards define the proper infant clitoris as anything smaller than 0.9 centimetres, while infant penises are deemed appropriate provided they are larger than 2.5 centimetres. Those whose genitals fall within the proper ranges are classed as either a ‘male’ or
‘female’ without any problem, and can thus be raised thereafter with the gender socialization that befits their genitals. However, should a newborn’s genital length fall between 0.9 and 2.5 centimetres, the child is ‘genitally ambiguous’ (Preves, 2002), and is often described as intersex. According to official guidelines set out by the American Academy of Pediatrics, an intersex child’s state of being constitutes a “social emergency” that requires treatment “as quickly as possible” (Savulescu & Spriggs, 2006). The prescribed treatment is surgical intervention. Penises that are “too short” for boys are surgically removed along with the testes, and the infant is raised as a girl. Likewise, a girl’s clitoris that is “too long” under medical standards has its size reduced, and a vagina that is too shallow is deepened (Fausto-Sterling, 1997). Of course, the infant had no way of consenting to this operation, a fact that is in itself problematic as a blatant invasion of bodily autonomy. However, if one takes into consideration the fact that this irrevers-
ible procedure causes physical harm, the motives governing its performance must be questioned. Physiological effects of the operation can include “scarring, chronic pain, [and] chronic irritation” (Chase, 1998). Furthermore, the removal of a micropenis or enlarged clitoris may very well decrease sexual pleasure, and surgical replacements (such as new or smaller-sized clitorises) are not guaranteed to retain the same functionality (Fausto-Sterling, 1997). Moreover, intersex infants who are genetically male and have reproductive capabilities will lose them due to the removal of their testes (Turner, 1999).

Accepting Non-consensual Procedures
Why does the majority of the American medical community—and indeed, the general public—condone the performance of these non-consensual procedures? In 2001, the Paediatric Surgeons Working Party released a statement justifying “corrective” surgery on intersex infants on the grounds that “normal looking genitalia” would “encourage stable gender identity and reduce stigma and psychological distress” (Savulescu & Spriggs, 2006). Work by Garry Warne cites as justification the “psychological benefit of the parents” and the prevention of psychological problems that could arise from “cruel discrimination” by non-intersex children (Warne, 2003). Justine Schober, a urologist, echoes Warne by pointing out that “patients and parents want surgery that looks cosmetically authentic and provides good function”, and that genital surgery would “provide the
patient with positive psychosocial and psychosexual adjustments throughout life” (Schober, 2004). Warne also states that intersex surgery would remove abnormal genitalia that would function as obstacles to “the development of healthy and satisfactory sexuality” (Warne, 2003), a claim that does not stand up to other evidence pointing to losses of sexual pleasure and function among individuals who have been genitally altered. The claim underlying these justifications, first proposed by John Money of Johns Hopkins Medical School, is that early genital surgery and subsequent sex assignment allow the development of a “normal”, unambiguous psychosexual identity; it has been widely accepted by most clinicians. Indeed, Money’s recommendations have found their way into medical textbooks (Krahl & Kuhnle, 2002). However, practice has proven his theory to be disastrously false. The most spectacular case, from a publicity standpoint, has been the experience of the individual known as both John and Joan. John was a male whose penis was burned off in a circumcision accident, and who was subsequently reared as the girl Joan, following recommendations by Money. Money’s preliminary reports documenting Joan’s socialization as a girl were promising, crystallizing beliefs in the medical community that early genital surgery was indeed justified. However, from the age of fourteen, Joan chose to live as John, a boy, and elected to undergo reconstructive surgery to recover what had been taken from him non-consensually. In 2004, he committed suicide (Savulescu & Spriggs, 2006). Although John’s particular case is different from those of intersex individuals, in that his operation was a response to a circumcision accident, it is not alone in its demonstration of the dissatisfaction numerous genitally-altered individuals have felt as adults. The Intersex Society of North America has documented many cases of adults who express only anger towards the doctors and parents responsible for their surgery. In 2002, the ISNA had 1,500 members (Preves, 2002). Cheryl Chase, founder of the ISNA, has also stated in a brief submitted in a court case that the ISNA has never received notice of any adult coming forward to say that he or she was grateful for early genital surgery (Chase, 1998). Thus, genital surgery on intersex infants is rarely medically indicated, and when it is performed, its non-consensual nature and irreversible effects often negatively impact the patient as an adult. The justifications given by medical professionals serve less to benefit the patient than to satisfy common culturally-ingrained ideas of gender, sex, and sexuality. Prevailing norms in the West stress that there are only two genders, and that the “genitals are the essential sign of gender” (Preves, 2000). The existence of intersex infants creates a “social emergency”: these children occupy a space in which they cannot be categorized in either of the “two” genders, and thus there is uncertainty as to how they should be socialized. While operating doctors believe that they are doing the intersex baby a service by eliminating the genital – and thus social – ambiguities pertaining to the child, their medicalization of the condition overrides the right of the child to determine the characteristics of his or her body.
Many other underlying assumptions guide early genital surgery. First, it is assumed that all children must have a stable, defined gender identity—based solely on genital appearance—as early in life as possible. As well, human genitals have expectations of solely heterosexual intercourse built into them: penises that are “too small” for heterosexual intercourse are removed and replaced by vaginas that are able to participate in the act. When doctors suggest that early genital surgery prevents future psychosexual complications from arising in a child, they assume that the sole purpose of genitals is standard penis-in-vagina intercourse. The medical community’s expectation of heterosexual intercourse demonstrates the ubiquity of compulsory heterosexuality in Western society. The term, coined by Adrienne Rich, describes a set of social conditions in which heterosexuality is deemed to be natural, to the exclusion of all other forms of sexuality. Compulsory heterosexuality not only prizes heterosexual intercourse as the sole “natural” sex act (as it results in reproduction), but is also manifest in attempts to restrict men and women to separate spheres or roles in life (Rich, 1980). Following the model of compulsory heterosexuality, boys with micropenises are transformed into girls by physicians, under the conviction that the penis must be large enough to fulfill its “natural” purpose as a penetrator of the vagina. Penis size is the guideline used to “make” males, and if the length of the penis is deemed unsatisfactory for heterosexual intercourse, then the infant is fashioned into a girl (Preves, 2002). Of course, the effects of the procedure on sexual pleasure are disregarded, prompting Anne Fausto-Sterling to
remark, “Penetration in the absence of pleasure takes precedence over pleasure in the absence of penetration” (Fausto-Sterling, 1997).
Compulsory Heterosexuality

The idea of compulsory heterosexuality ties into larger issues of cultural constructions of gender, gender roles, and expectations. The work of psychologist John Money, the original advocate of John’s transformation to Joan, exemplifies this intersection. In his writings, particularly the 1968 book Sex Errors of the Body, Money categorises bodies that do not fall within the female/male dichotomy as “errors,” and advocates gender expressions of simply “husband” or “wife.” His reductive and confining views on human sexuality are further evinced by his advocacy of surgeries to “fix” genitally-ambiguous children, John/Joan being a case in point. Both his writing and his advocacy reflect his belief that from birth, children should be oriented to their “biologically and culturally acceptable gender role[s]” (Meyerowitz, 2002). Neither Money nor other proponents of non-consensual surgery seem to have questioned why a child needs to be socialized from birth in a particular fashion. It is curious that a difference of 1.6 centimetres at birth is the determinant of how a child will be raised and subsequently perceived. According to
Stephanie Turner, an opponent of early genital surgery, it is generally accepted that a child only forms a concept of gender identity at eighteen months of age (Turner, 1999). Why is it so difficult to postpone operating on intersex children and to raise them in a gender-neutral environment, until they decide for themselves how, if at all, they wish to identify? Do ambiguous genitals constitute such a social emergency that doctors cannot wait a year and a half for a child to perform gender on his or her own? Ultimately, it seems more appropriate to allow a child to decide in an informed and independent manner whether or not genital surgery is really required to match his or her own perception of the body. If one raises children in a gender-neutral environment, the “dilemma” of coming up with a method of socialization to match the genitals is bypassed altogether. Sharon Preves’ interviews with numerous adult intersexuals who have chosen to perform neither the masculine nor the feminine gender, but who have opted to live outside the gender binary, demonstrate the possible reality of gender neutrality. These individuals have embraced their genital ambiguity and are found to be significantly well-adjusted. The first-hand experiences of these individuals, whose voices have not often been heard by the medical community, show none of the “gender identity and relationship difficulties leading to social and psychological problems”; this has been a
claim ominously made by doctors about individuals who do not fit within the confines of the gender binary (Savulescu & Spriggs, 2006). One interviewee speaks of the “freeing experience” of not agreeing with “the societal norm” of adopting a clear-cut gender, while another voices the same sense of liberation as she describes the love she feels for her intersex, ambiguous body (Preves, 2000). Also well-adjusted are self-identifying women who have large clitorises but did not undergo surgery to reduce them to “non-ambiguous” sizes. Anne Fausto-Sterling and Bo Laurent have documented seventy cases of children who grew up with ambiguous genitalia, and noted that most of them have “developed ways of coping with their anatomical difference” (Chase, 1998), once again a demonstration of many individuals’ ability to develop “correctly” without early genital surgery. In one study conducted by Sharon Preves, only five percent of intersex individuals had reached adulthood without childhood medical intervention (Preves, 2000). Few intersex individuals are able to grow up with their bodies intact to demonstrate why “corrective” surgery is not necessary. In the more likely event that surgery does occur, it often proves futile, and attempts at reversing its effects are frequently made in adulthood. A follow-up study conducted by John Money
and Howard Devore found that three out of twenty-three intersex individuals who had been raised as girls chose to switch genders and become men later in life (Chase, 1998). Ursula Kuhnle and Wolfgang Krahl point to three other studies in which individuals socialized as girls ended up adopting a male gender identity and presentation, apparently with few problems (Krahl & Kuhnle, 2002). Many of these men, incidentally, also did not demonstrate problems in forming fulfilling emotional and sexual relationships with others (Diamond, 1999). This was despite commonly-held assumptions that the penises of these individuals were inadequate for heterosexual intercourse, and that these individuals would have been inadequate men by extension. The sex “errors” described in John Money’s 1968 book have less to do with the health of intersex individuals, and more to do with Western society’s and the medical community’s cultural hang-ups about grey zones. Much of the “psychological damage” observed in some intersex individuals may have less to do with their genitals or day-to-day social interactions, and more to do with the medicalization of the condition. The sense of urgency projected by doctors to erase intersexuality conveyed to some genitally-altered individuals that their condition was shameful (Dreger, 2004). For some individuals who were forced to undergo
multiple hospital visits to complete the “corrective” genital surgery, the experience of being subjected to group medical examinations turned them into guinea pigs of the medical community. Such experiences not only robbed individuals of their privacy and autonomy, but contributed to their sense of alienation (Preves, 2000). This is particularly ironic considering the claim made by medical professionals that surgery to correct ambiguous genitals would in fact allow the child to fit in and avoid social ostracism (Savulescu & Spriggs, 2006).
Intersex outside the West
One final demonstration of the cultural constructs dominating decisions to operate on non-consenting intersex individuals can be seen in the treatment of intersex individuals in non-Western countries. The scholars Ursula Kuhnle and Wolfgang Krahl, based in Malaysia, point out the increasing prevalence in Southeast Asia of genitally-ambiguous children being raised as boys despite their “severely undervirilized” genitalia. These individuals were observed to have few problems integrating into society and being accepted as males (Krahl & Kuhnle, 2002). Thus, in some locations outside North America, the medical community places less emphasis on the appearance of genitalia in determining a child’s gender. Of course, the decision to avoid operating on genitally-ambiguous children and to raise them as males, despite the presence of a micropenis, may be due to the fact that “more prestige was associated with the male role” within the Chinese and Indian communities living in Malaysia (Krahl & Kuhnle, 2002). The decision to operate on intersex children is thus culturally conditioned. The harmonious social adjustment of the operation-free boys in Malaysia may well be mind-boggling to doctors in North America, who would otherwise predict a life of social ostracism, but such a contradiction in opinion demonstrates the power of cultural norms in determining medical policy.

Male Circumcision in Perspective

A critique of male circumcision can also be linked to critiques of early genital surgeries on intersex children. Early sex reassignment surgery for children is as non-consensual as male circumcision, and can even be damaging to the child’s health. As described earlier, just as male circumcision is a guideline for the attainment of genital perfection for American men, non-consensual sex reassignment surgery is also a culturally sanctioned method of upholding ideals of genital perfection. However, while male circumcision defines masculinity and reaffirms restrictive regulations governing gender and gender expression, the implications of early genital surgery are even more profound. This type of “corrective” surgery reaffirms the belief that one’s physical genitals
must have one, and only one, corresponding gender identity, which will eventually lead to one prescribed method of gender expression. By societal standards, individuals who are born with intersex conditions temporarily have an ambiguous (or even non-existent) gender stemming from their ambiguous genitalia. Those who choose to keep their ambiguous genitalia, even if they feel that they embody a particular gender, are still not seen as sufficiently gendered by society at large, which has strict rules about which genitals should match which gender. Thus, early genital surgery, like male circumcision, seeks to erase the experiences of “those ‘incoherent’ or ‘discontinuous’ gendered beings who […] fail to conform to the gendered norms of cultural intelligibility by which persons are defined” (Butler, 1999).

The majority of infant genital alterations are not clinically indicated. Worse, for the more invasive forms of female genital cutting, as well as for many instances of clitoral reduction or removal of a micropenis, the procedures result in negative physiological effects on the patients. This is particularly problematic once one realizes that in operations on infants or young children, no informed consent has been given. However, male circumcision and early genital surgery on intersex children do not face much opposition in the medical community or in the public mind at large, particularly in comparison to the almost universal condemnation of female genital cutting. This is of course due to simple cultural conditioning and double standards. It is for this particular reason that anti-male circumcision activists can find relevance in discourse adopted by anti-early genital surgery activists, and vice versa.

However one chooses to view gender, there is no denying that within the dominant discourse on gender in our Western heteronormative society, there are spoken and unspoken rules for how gender should be expressed. Even if one believes that one embodies a gender naturally, to remain culturally relevant and to exist socially within our Western binary-driven society, one must still follow certain societal rules governing how people should express their genders. If one questions the requirements of ideal masculinity when one challenges male circumcision, then by extension, one questions just what defines masculinity and femininity. This destabilization of gender begs the question: why do non-consensual operations take place on individuals to force them to fit into categories that in themselves consist of wholly constructed and ephemeral regulations? Thus, when a criticism of male circumcision explores the definition of masculinity, it also serves to destabilize other existing Western paradigms governing gender. These collective ideas are the same ones that restrict the freedoms of all individuals, since we are all controlled by the rules of gender, gender roles, and gender expression. &
REFERENCES

American Academy of Pediatrics. (2005). Circumcision Policy Statement. Retrieved April 10, 2008. http://aappolicy.aappublications.org/cgi/content/full/pediatrics%3b103/3/686.

American Academy of Family Physicians. (2007). Circumcision: Position Paper on Neonatal Circumcision. AAFP Clinical Recommendations. Retrieved April 10, 2008. http://www.aafp.org/online/en/home/clinical/clinicalrecs/circumcision.html

Anuforo, P., Oyedele, L., & Pacquiao, D. (2004). Comparative Study of Meanings, Beliefs, and Practices of Female Circumcision Among Three Nigerian Tribes in the United States and Nigeria. Journal of Transcultural Nursing 15(2).

Benatar, D. & Benatar, M. (2006). Between Prophylaxis and Child Abuse: The Ethics of Neonatal Male Circumcision. In Benatar, D. (Ed.) Cutting to the Core. Oxford: Rowman and Littlefield Publishers, Inc.

Boulware-Miller, K. (1985). Female Circumcision: Challenges to the Practice as a Human Rights Violation. Harvard Women’s Law Journal 8.

British Medical Association. (2006). The Law and Ethics of Male Circumcision - Guidance for Doctors. Retrieved April 10, 2008. http://www.bma.org.uk/ap.nsf/Content/malecircumcision2006.

Butler, J. (1999). Gender Trouble. New York: Routledge.

Canadian Paediatric Society. (2004). Circumcision: Information for Parents. Position Statements from the Fetus and Newborn Committee. Retrieved April 10, 2008. http://www.caringforkids.cps.ca/babies/Circumcision.htm.

Chase, C. (1998). Intersex Society of North America Amicus Brief on Intersex Genital Surgery. Colombia’s Highest Court Restricts Surgery on Intersex Children. Retrieved April 10, 2008. http://www.isna.org/node/97.

Cook, R.J., Dickens, B.M., & Fathalla, M.F. (2002). Female Genital Cutting: Ethical and Legal Dimensions. International Journal of Gynecology and Obstetrics 70.

Craig, A. (2004). North Carolina Medicaid and the Funding of Routine Non-Therapeutic Circumcisions. In Denniston, G., Hodges, F.M., & Milos, M.F. (Eds.) Flesh and Blood: Perspectives on the Problem of Circumcision in Contemporary Society. New York: Kluwer Academic/Plenum Publishers.

Darby, R. (2005). A Surgical Temptation. Chicago: The University of Chicago Press, Ltd.

Davis, D. (2006). Genital Alteration of Female Minors. In Benatar, D. (Ed.) Cutting to the Core. Oxford: Rowman and Littlefield Publishers, Inc.

Diamond, M. (1999). Pediatric Management of Ambiguous and Traumatized Genitalia. The Journal of Urology 162.

Dreger, A. (2004). ‘Ambiguous Sex’ – or Ambivalent Medicine? In Caplan, A., McCartney, J., & Sisti, D. (Eds.) Health, Disease, and Illness. Washington D.C.: Georgetown University Press.

Fausto-Sterling, A. (1997). How to Build a Man. In Rosario, V. (Ed.) Science and Homosexualities. New York: Routledge.

Fox, M. & Thomson, M. (2005). A Covenant with the Status Quo? Male Circumcision and the New BMA Guidance to Doctors. Journal of Medical Ethics 31.

Glick, L. (2005). Marked in Your Flesh. New York: Oxford University Press.

Goldman, R. (1997). Circumcision: The Hidden Trauma. Boston: Vanguard Publications.

Gollaher, D. (2000). Circumcision: A History of the World’s Most Controversial Surgery. New York: Basic Books.

Gordon, D. (1991). Female Circumcision and Genital Operations in Egypt and the Sudan: A Dilemma for Medical Anthropology. Medical Anthropology Quarterly 5(1).

Harrison, D. (2002). Rethinking Circumcision and Sexuality in the United States. Sexualities 5(3).

Hosken, F. (1993). The Hosken Report. Lexington: Women’s International Network News.

Kessler, S. (1997). Meanings of Gender Variability: Constructs of Sex and Gender. Chrysalis Special Issue on Intersexuality. Retrieved April 10, 2008. http://www.isna.org/books/chrysalis/kessler

Krahl, W. & Kuhnle, U. (2002). The Impact of Culture on Sex Assignment and Gender Development in Intersex Patients. Perspectives in Biology and Medicine 45(1).

Lorber, J. (1994). Paradoxes of Gender. New Haven: Yale University Press.

Meyerowitz, J. (2002). How Sex Changed. Cambridge: Harvard University Press.

Mort, F. (1987). Dangerous Sexualities: Medico-Moral Politics in England since 1830. London: Routledge and Kegan Paul.

National Center for Health Statistics. (2007). Trends in Circumcisions among Newborns. Retrieved April 10, 2008. http://www.cdc.gov/nchs/products/pubs/pubd/hestats/circumcisions/circumcisions.htm.

Preves, S. (2002). Sexing the Intersexed: An Analysis of Sociocultural Responses to Intersexuality. Signs 27(2).

Rich, A. (1980). Compulsory Heterosexuality and Lesbian Existence. Signs 5(4).

Royal Australasian College of Physicians. (2004). Policy Statement on Circumcision. Health Policy and Advocacy. Retrieved April 10, 2008. http://www.racp.edu.au/download.cfm?DownloadFile=A453CFA1-2A57-5487DF36DF59A1BAF527

Savulescu, J. & Spriggs, M. (2006). The Ethics of Surgically Assigning Sex for Intersex Children. In Benatar, D. (Ed.) Cutting to the Core. Oxford: Rowman and Littlefield Publishers, Inc.

Schober, J. (2004). Feminizing Genitoplasty. Journal of Pediatric Endocrinology and Metabolism 17.

Shell-Duncan, B. (2001). The Medicalization of Female “Circumcision”: Harm Reduction or Promotion of a Dangerous Practice? Social Science and Medicine 52.

Turner, S. (1999). Intersex Identities: Locating New Intersections of Sex and Gender. Gender and Society 13(4).

Walley, C. (1997). Searching for “Voices”: Feminism, Anthropology, and the Global Debate over Female Genital Operations. Cultural Anthropology 12(3).

Warne, G. (2003). Ethical Issues in Gender Assignment. Endocrinologist 13(3).