The Triple Helix Fall 2017




Phage Therapy: The Next Game-Changing Medical Treatment?





chapters asu, gwu, berkeley, cambridge, harker, harvard, jhu, cmu, nus, cornell, georgia tech, georgetown, osu, ucdavis, uchicago, melbourne, yale

staff
editors-in-chief Lucy Van Kleunen & Adrija Darsha
managing editors Preston Schwartz & Annabel Sen
blog editor-in-chief Daniel Yang
marketing officer Carly Kabelac
head of design Kaley Brauer
editors Brian Zhao, Elena Renken, Hugo Zoells, Aryana Javaheri, Joseph Chen, Sam Kortchmar, Mark Sikov, Owen Leary, Lillian Cruz, Aolin Zhang
writers Amita Sastry, Sean Joyce, Jose Flores, Jack Hegarty, Zachary Jordan, Rahul Jayaraman, Sara Shapiro, Mira Gordin, Molly Magid, Olivia Moscicki, Aranshi Kumar, Carly Kabelac, Hattie Xu, Andrew Thomson, William Lee, Navya Baranwal, Sumaiya Sayeed, Alex Song
layout and art Jennifer Osborne, Caitlin Takeda, Emily Reed



about The Triple Helix is an independent, student-run organization committed to exploring the intersection of science and society. We seek to examine the socioeconomic, moral, and environmental implications of scientific advances, and to highlight the surprising ways that science can affect our ideas of humanity. The Triple Helix has an international scope, with chapters all over the globe, ranging from Berkeley to Yale to Cambridge, Melbourne, and the National University of Singapore. You are currently holding a copy of one of the Brown chapter’s biannual magazines. Inside, you will find a collection of articles written, edited, and designed by students here at Brown.



CONTENTS

Phage Therapy ZACHARY JORDAN ‘20
The Criminalization of HIV/AIDS Patients ARANSHI KUMAR ‘17
Evolution of Monogamy in Humans JOSE FLORES ‘18
Debating Evolution & God OLIVIA MOSCICKI ‘18
Vegetative Value: The Overlooked Importance of the Urban Forest JACK HEGARTY ‘20
The Future of DNA Technologies ANDREW THOMSON ‘18
The Deformity Drug & Drug Regulations NAVYA BARANWAL ‘20
The Evolution of Irrationality MOLLY MAGID ‘19
Liberation Technology: How Revolutionary Is It? SARA SHAPIRO ‘20
More than Meets the Ear CARLY KABELAC ‘19
Data Science for Social Good RAHUL JAYARAMAN ‘19
A Brain that Glows: GFP as a Probe for Studying Neural Activity AMITA SASTRY ‘20
Biowarfare’s Line of Fire: Social Impact of Bioweapons in the Syrian Civil War SEAN JOYCE ‘19
Traditional and Bio-Medicine: Personhood and Prospects for Integration OLIVIA MOSCICKI ‘18
The Stigma of Pedophilia HATTIE XU ‘19
The Fickle State of Being Bored ALEX SONG ‘20
AI in Robots: Profile on Professor Mall ANDREW THOMSON ‘18
Imbalances in Blindness: Global Lapses in Medical Treatment SUMAIYA SAYEED ‘20
Gray Matter & Oxytocin: What Separates Men and Women MIRA GORDIN ‘20
Food Allergies, EpiPens, and Respondr WILLIAM LEE ‘19



Phage Therapy ZACHARY JORDAN ‘20

In 1940, constable Albert Alexander was rushed to the hospital near death and received the most cutting-edge treatment mankind had to offer. He made a miraculous recovery...or so it seemed. When the hospital ran out of its miracle treatment, he died shortly thereafter, abscesses covering his face, breath rasping through his throat. You might guess that Alexander had suffered a nasty gash in war, or had come into close contact with someone afflicted with a debilitating disease, but in fact he died of something a little less dramatic: a mere scratch to his face from a rose thorn while gardening [12]. While it may horrify you that less than a century ago we hadn’t yet developed the tools to treat simple infections, the case of the constable actually represents a major step forward in the history of humanity. His case proved the efficacy of penicillin, the first true antibiotic. Three hundred years earlier, sickness was



ARTWORK Jennifer Osborne ‘20

EDITOR Hugo Zoells ‘20

nearly a death sentence. The bubonic plague, for example, had a mortality rate of up to 90%, while its counterpart, the pneumonic plague, was lethal in almost all recorded cases [13]. Meanwhile, malaria ravaged equatorial regions, helping keep life expectancy at around forty years [14]. Science explained little; for hundreds of years, illness was attributed to religion, wealth, and ‘miasma’: foul, odorous air. Over the next two centuries, we made great strides in math, physics, chemistry, and biology, but disease remained a mystery. It regularly reduced the world’s population by up to twenty-five percent, and infant mortality rates in 1800 were estimated to be upwards of forty percent [9]. In the 1860s, germ theory finally made its way into the limelight. It postulated that sickness was actually caused by infinitesimally small organisms, around us at all times but invisible to the naked eye. Like many scientific theories now accepted as fact, it was initially met with scorn and disbelief, as religious explanations still dominated many facets of the scientific sphere. But as technological advances allowed for stronger microscopes, bacteria began popping up everywhere. By the beginning of the 20th century, germ theory was the prevailing scientific explanation for disease. And yet, in the 1910s, the Spanish influenza still managed to kill tens of millions of people as we sat idly by, hopelessly wondering what we could

possibly do to fight organisms so minute that it had taken us millennia just to develop the technology necessary to learn of their existence [15]. Luckily, we wouldn’t find ourselves helpless for long. In 1928, Alexander Fleming made a discovery that would fundamentally change the world, our quality of life, and the fate of every future generation. While examining bacterial cultures, he noticed that there seemed to be a ‘zone of death’ around spots of mold [12]. What Fleming had actually discovered in that moment was one of the best ways to fight these minuscule monsters: with their competitors. Microbes often secrete toxins (like penicillin, which the Penicillium mold produces) to reduce competition for vital resources in their immediate vicinity, and by artificially synthesizing these secretions, we were no longer trying to outsmart Mother Nature; we just had to pit her against herself. So how do antibiotics work? Why don’t these microbial toxins harm us as well? The answer lies in their specificity. Antibiotics generally target cellular components found only in prokaryotic cells [1]. For example, many bacteria have cell walls made of peptidoglycan, which is not present in human cells. Some antibiotics inhibit the assembly of this wall, opening the inside of the bacterium to the environment and ultimately causing cell death [1]. Other treatments, known as macrolides, target and disable bacterial ribosomes. These structures are responsible for

the synthesis of essential proteins, and their function is vital to bacterial survival [1]. Understanding these bacterial vulnerabilities bought us temporary immunity from ravaging disease, and humanity rejoiced in its triumph over nature: for the first time in human history, we were able to fight back against the nameless, invisible killer that had decimated our species since the beginning of time. And for a while, all was well. Our population doubled, and doubled again. Life expectancy skyrocketed [5], while infant mortality consistently declined [9]. Nowadays, you’re more likely to die of eating too many cheeseburgers than of infectious disease [11]. But now that we’re sixty years down the road, natural selection has started to fight back. Bacteria have spent the last half century familiarizing themselves with our antibiotics, and many infections that were once easily dealt with are becoming increasingly difficult to eliminate. Harvard researchers recently set out to show just how quickly bacteria can adapt to our treatments: in just 11 days, a wild strain of E. coli (responsible for Chipotle’s recent publicity issues) developed drug resistance sufficient to allow for growth in the presence of 1,000 times the normally lethal concentration of antibiotic [6]. Why are bacteria so good at getting around our latest scientific advancements? The answer lies in their large population sizes, short generation



times, and imprecise DNA copying mechanisms. Compounding errors in the genome replication of these bacteria allow them to mutate quickly, and large population sizes and short generation times mean that the population as a whole is able to take advantage of any beneficial trait that may arise and build on it [3]. The result: a population that’s very difficult to eradicate, especially since we’ve given it sixty years to familiarize itself with our weapons of choice. Naturally, this was mildly alarming to the scientific community. Medical researchers began combing through the record books, searching for overlooked alternatives to commonly used medicines. In doing so, they stumbled upon an area of research that began

at around the same time as antibiotic development but was abandoned in favor of penicillin’s promising results: phage therapy [10]. The mysterious “bacteriolytic agent,” as it was then known, was discovered by Frederick Twort in 1915 [1]. Interest in this rediscovered field surged in the 1990s, when scientists repeatedly found mysterious pieces of viral DNA in bacterial genomes. Some sequences were unlike any they had seen before in analyses of human viruses, indicating the possibility of bacteria-specific viruses [4]. These viruses are now known as bacteriophages (“bacteria eaters”), and their medical implications are immense. They take the idea of fighting bacteria with their competitors to the next level: they are the natural parasites of the bacteria that infect us, and they enjoy the same advantages against bacteria that bacteria do against humans. There are approximately ten times more phages on Earth than there are bacteria, and phages have shorter generation times and higher mutation rates than their hosts [4,10]. Instead of fighting bacteria ourselves, we could release their natural predators, all the while knowing that we would be perfectly safe. It’s a striking idea that has taken the medical research community by storm. Phages, classified by “cluster” (based on observable traits like infection mechanism and protein structure), have been discovered left and right. There are now nineteen recognized phage clusters, each with its own wide array of constituents [2]. As we continue to study these different denominations, we learn more about what makes phages so darn good at evolving more quickly than bacteria, in the hope that we can make drugs that emulate their success. Synthesizing pharmaceuticals is often complicated, but in this case the first impulse of many researchers was simple: grow bacteriophages in the lab and inject them straight into the bloodstream of the sick. However, scientists found this straightforward strategy more difficult to implement than anticipated. As it turns out, people don’t really like the idea of being injected with a virus, even if they’re told it won’t affect their own cells. Additionally, the factors that make viruses such effective bacteria killers also make them extremely adaptable [4]. One possibility is that, after all bacterial hosts are eliminated, a virus mutates to infect eukaryotic (read: human) hosts. While this is unlikely, since bacteriophages evolve to target and bind to particular proteins on host membranes, it is possible, and the effects of



such a mutation could be disastrous for the person treated. The publicity challenges and unpredictability of viruses have driven researchers to draw inspiration from viral particles instead, analyzing their mechanisms of action in an attempt to synthesize chemicals that mimic their function. One area of research that holds particular promise focuses on the set of proteins and genes in the viral genome that causes host cells to burst open, or ‘lyse.’ These agents are aptly known as ‘lysins,’ and scientists hope to identify their mechanisms and synthesize drugs that cause cell lysis only in bacteria [7]. The T7 bacteriophage is an excellent example: it infects and lyses only E. coli (perhaps there is hope for Chipotle after all) [8]. Phage research has accelerated considerably since the 1920s, but it has a ways to go before phage therapy can come close to displacing antibiotics as our

main form of treatment. Continued research is needed to bring true solutions to market, and the scientific community is rushing to oblige. In fact, Brown itself is involved in phage research. The two-semester, first-year Phage Hunters sequence allows students to find, identify, purify, and extract phages from soil samples. The genomes of a few phages are then sent to the Howard Hughes Medical Institute, where they are sequenced and returned to Brown for gene annotation. The completed annotation is then published by the class to GenBank to aid future phage research. Brown is involved in other ways as well, with some professors conducting their own research on the subject. So in the coming years, keep an eye out for phage therapy: it may become the preeminent medical treatment before you know it!

[1] Abedon ST, Kuhl SJ, Blasdel BG, Kutter EM. Phage treatment of human infections. Bacteriophage. 2011;1(2):66–85.

[2] Ackermann H-W. Phage Classification and Characterization. Methods in Molecular Biology: Bacteriophages. 2009:127–40.

[3] Bacteria: Bacterial Adaptation (Growth, Resistance, Response, and Cells) [Internet]. JRank [cited 2016 October 28].

[4] Cashin-Garbutt A. Bacteriophage therapy: an alternative to antibiotics? An interview with Professor Clokie [Internet]. News Medical; 2015 [cited 2016 October 28].

[5] Life and Death in the Enlightenment [Internet]. Princeton University Press [cited 2016 October 28].

[6] Pesheva E. A cinematic approach to drug resistance [Internet]. Harvard Gazette. Harvard University; 2016 [cited 2016 October 28].

[7] Potera C. Phage Renaissance: New Hope against Antibiotic Resistance. Environmental Health Perspectives. 2013 Feb;121(2).

[8] Rivas A. Bacteria-Killing Protein Could Help Fight Antibiotic-Resistant ‘Superbugs’ [Internet]. Medical Daily; 2013 [cited 2016 October 28].

[9] Roser M. Child Mortality [Internet]. Our World in Data [cited 2016 October 28].

[10] Schneider K. Bacteria-Killing Phages Could Be an Alternative to Antibiotics. Discover. 2014 Mar 4.

[11] The top 10 causes of death [Internet]. World Health Organization [cited 2016 October 28].

[12] American Chemical Society International Historic Chemical Landmarks. Discovery and Development of Penicillin [accessed 2016 October 28].

[13] Plague [Internet]. Deadly Diseases. PBS.

[14] Roser M. Life Expectancy [Internet]. Our World in Data; 2016.

[15] Billings M. The 1918 Influenza Pandemic [Internet]. Stanford University; 1997 June.



The Criminalization of HIV/AIDS Patients ARANSHI KUMAR ‘17


In addition to living with a permanent, infectious disease, individuals living with HIV/AIDS in the United States must constantly shoulder the stigma pinned to them by the virus. American society views these patients differently from those suffering from other chronic diseases: HIV-positive citizens are immediately and systematically marginalized. Compounding the negative public perception of HIV patients, state legislatures have enacted HIV-specific criminal exposure laws. By 2011, thirty-three states had passed a total of sixty-seven such laws in an effort to reduce HIV transmission [2]. While the specifics of criminalized behavior and punishments vary significantly, these regulations share an intent to protect uninfected people from acquiring the debilitating virus. Yet they simultaneously place a burden on infected individuals. In essence, the criminalization of actions by HIV-positive patients raises many troubling legal and ethical implications. These ramifications of the legal prosecution of HIV patients become clearer with an understanding of the history of HIV stigmatization. In the United States, the HIV epidemic began in 1981, when clusters of men in California and New York were diagnosed with specific, rare forms of cancer or pneumonia [8]. Although the condition was initially linked to gay men, cases among heterosexual men and women appeared in 1982 [8]. Because HIV was originally strongly associated with gay men and intravenous drug users, the virus was shrouded in shame. These groups already faced stigma, and an infectious disease amplified the sense of disgrace. In the midst of the epidemic, criminalization of HIV transmission and exposure began largely in the 1990s as a public health initiative. As people grew scared of acquiring the then-deadly virus, such laws were used as protective measures to stop the epidemic.

Typically, HIV criminal cases that lead to prosecution must entail reckless or intentional transmission of the virus, though legal guidelines vary between states [2]. From 2008 to 2015, there were a total of 218 cases of prosecution or arrest for HIV exposure [6]. Most of the legal disputes involve instances of possible sexual transmission, and these cases typically involve people who knew of their HIV status and did not disclose it before sexual encounters [5]. Interestingly, charges are not necessarily contingent upon actual transmission of the virus; undisclosed exposure alone can be enough to prosecute an individual. States such as Arkansas pursue these cases as Class A felonies, whereas states such as Montana approach them as misdemeanors [2]. Misdemeanors are minor crimes such as reckless driving, whereas felonies are major crimes such as murder; clearly, states employ a wide variety of punishments.

EDITOR Owen Leary ‘18



Yet sexual transmission and exposure are not the only forms of HIV criminalization. In addition to sexual exposure, there are other actions for which HIV-positive individuals can be criminally prosecuted [2]. In many states, an infected individual who spits at or bites someone could be charged and arrested [5]. As with sexual exposure, the punishment varies widely by state: Louisiana imposes a fine on HIV-positive citizens who engage in these behaviors, but Pennsylvania treats them as second-degree felonies [5]. Generally, these laws were enacted to protect prison guards and medical professionals from being spit on or bitten by HIV-positive patients. In 2010, the White House declared its intent to revise HIV criminalization laws in an effort to prevent the spread of false information and stigmatization [7]. Although these promises have not yet come to fruition, the declaration delivered a strong message to state legislatures and broadly increased HIV awareness. Ideally, new government regulations will take scientific facts into account. HIV can only be transmitted through semen, vaginal fluid, breast milk, or blood [1]. Saliva does not transmit HIV [1]. Therefore, by making spitting and biting serious criminal offenses for HIV-positive people, these laws leave the public highly misinformed about this infectious disease. They contribute to the stigma surrounding HIV/AIDS patients because the public begins to falsely believe that casual contact will spread the virus. There are

historical reasons for these outdated laws, and they are worth noting in order to understand HIV criminalization in the present. Most HIV criminalization laws were passed in the 1990s, before the advent of ART (antiretroviral therapy) and PrEP (pre-exposure prophylaxis) [2]. Consequently, they fail to account for the availability of successful new preventative measures. Research since the 1990s has shown that condoms effectively stop the transmission of HIV, and medication generally lowers patients’ viral loads to undetectable levels [2]. Following these procedures makes exposure very low-risk. Unfortunately, criminalization laws do not currently take into account whether an individual was taking medication or following preventative protocol. State legislatures’ failure to properly incorporate scientific facts represents negligence in our justice system. The United States justice system exists to protect citizens from harm and to punish those who infringe on the rights of others. These are the principles upon which HIV criminalization laws were enacted, and in some cases it becomes quite clear that such legislation is essential. Take the case of Philippe Padieu, a Texas man who transmitted the virus to approximately twenty women [3]. He tested positive for HIV in 2005 and yet continued to have unprotected sex, even after repeated warnings from the public health department [3]. The women pursued a criminal case against him and described him as a “serial killer” [3]. Padieu was charged with aggravated assault with a deadly weapon and sentenced to 45 years in prison [3]. Yet make no mistake: cases are rarely so black-and-white; Padieu’s case has little ambiguity compared to most. Nick Rhoades, an HIV-positive Iowa man, is an example of the system possibly condemning the wrong people [4]. Rhoades strictly followed his medication regimen and had a virtually undetectable viral load. He also said that he always used condoms. However, when a one-night stand found out about Rhoades’ HIV status, he immediately went to the hospital and initiated criminal charges. Although the man consistently tested negative for HIV, he suffered debilitating panic attacks from the experience, and Rhoades was charged with criminal transmission of HIV, a Class B felony in the state of Iowa [4]. Although initially sentenced to twenty-five years in prison, Rhoades spent only nine months in jail thanks to legal maneuvering by his lawyer [4]. Cases like Rhoades’ are more common than cases like Padieu’s [6]. They highlight the unwarranted nature of many HIV criminalization laws: not only do they disregard current scientific and preventative achievements, they sometimes harshly prosecute the wrong people. Condom usage, medication, and new scientific techniques can bring the risk of HIV transmission down to 0.05 percent for HIV-positive people [2]. So why do we prosecute people harshly if the risk they put others in is minimal?



Furthermore, HIV criminalization laws raise privacy issues. Nick Rhoades was prosecuted largely for a lack of transparency about his HIV status, even though he properly followed all preventative measures. Human beings have a right to privacy, one that courts have read into the United States Constitution, and health and illness status is usually regarded as part of that right. In this context, an important and provocative question must be asked: if Rhoades took every precaution to make sure the people around him were safe, does he not still have a right to his privacy? And yet, it is people like Philippe

Padieu who remind us that these laws must exist in some capacity. Unfortunately, a select group of people disregard public health precautions, and these individuals must certainly face the consequences of their actions. Additionally, while human beings have a right to privacy, they also have the right to make informed decisions and to avoid harm. Scientific efforts can lower the probability of HIV transmission to 0.05 percent; however, each individual may weigh that risk differently. It is therefore essential to consider whether someone’s right to know a partner’s HIV status outweighs that partner’s right to privacy.

[1] Legal Issues (accessed 12 February 2015).

[2] CDC. HIV-Specific Criminal Laws. policies/law/states/exposure.html (accessed 12 February 2015).

[3] Shana Druckerman (accessed 12 February 2015).


HIV criminalization laws are an essential part of our legal system, but we must make sure they prosecute the right people. To do so, legislatures must remove clauses that address actions that do not transmit HIV, such as biting and spitting. State laws must also take scientific progress into account, so that someone who follows proper preventative protocol is not prosecuted harshly, at least not as a felon. With proper revisions, HIV criminalization laws will punish those who actually present a danger to the community while simultaneously protecting citizens’ rights to privacy.

[4] Saundra Young. Imprisoned over HIV: One man’s story (accessed 2015).

[5] American Civil Liberties Union. State Criminal Statutes on HIV Transmission. 2008.

[6] The Center for HIV Law and Policy. Prosecutions and Arrests for HIV Exposure in the United States, 2008–2015. Prosecutions%20for%20HIV%20Exposure%20in%20the%20U.S.%2C%20 2008%20-2015%20%28revised%203.11.15%29_0.pdf (accessed 2015).

[7] The White House. FACT SHEET: Progress in Four Years of the National HIV/AIDS Strategy (accessed 2015).

[8] History of HIV & AIDS in the USA (accessed 22 April 2015).


Evolution of Monogamy in Humans JOSE FLORES ‘18

When it comes to relationships, perhaps no idea is more ubiquitous in popular culture than that of “the one,” the (singular) partner we’re meant to be with. As far removed as we tend to think we are from the natural world, finding a mate is not a uniquely human concern. Where we break with the rest of the animal kingdom, however, is in our propensity toward monogamy. While polygamous and polyamorous relationships exist throughout the world, monogamous relationships have long been viewed as the “norm” in the U.S. And as different as we are from other animals, we share myriad similarities with them. Is there a biological or neurochemical basis for

the ubiquity of monogamy? One of the most humbling aspects of biology is realizing the relatedness we share with every other living thing on the planet; it’s what allows us to infer what the neurochemistry of another species can tell us about ourselves. Given how rare and seemingly paradoxical social monogamy is in the natural world, it’s easy to claim that humans’ monogamous status can be fully explained by cultural norms, divorced from our own biology. It would be myopic, however, to ignore the roots of our own evolutionary tree.



ARTWORK Caitlin Takeda ‘20

Our closest living relatives on the tree of life, other primates, can shed light on the murky origins of monogamy. Social monogamy is rare in mammals: only 3% of all mammals are considered socially monogamous. In contrast, 29% of primate species form socially monogamous relationships [1]. A relatively recent development in primate evolutionary history, social monogamy first arose over 16 million years ago. By studying primate social behavior, we gain a richer understanding of the evolutionary footprints that led to social monogamy in humans. One of the leading theorists on the evolution of human intelligence, Dr. Robin Dunbar, suggested in a recent paper that it was the “intensity” of social relationships, the strong pair-bonding relationship that develops between mates, that led to increases in primate brain size [2]. Primatologists have proposed competing theories to account for the unique pair-bonding found in primates. The fundamental idea behind the “mate-guarding” hypothesis is that monogamy arose because breeding females were mutually intolerant of one another and spread out over large territorial ranges, leaving males to guard largely solitary females [1]. The density of females in a given area limits the number of potential mates with whom a male can reproduce, and it also limits the range of territory a given male can protect. Thus, under these conditions, monogamy may have been the most efficient breeding strategy males could adopt to optimize their probability of passing on genes and having offspring that live to a sexually mature age, the equivalent of “winning” the evolutionary game [1]. If winning the evolutionary game consists of passing on your genes as much as possible and ensuring those offspring survive, how could monogamy arise? The answer is linked to the expensive-tissue hypothesis and



EDITOR Hugo Zoells ‘20

the observable practice of infanticide in primates [3, 4]. One of the central concepts of evolution is the idea of trade-offs. Increases in brain size come at the cost of the longer gestation and lactation periods required to fully develop a larger brain [3]. Longer gestation and lactation make offspring more dependent on parental care, and they also leave infants vulnerable to infanticide for longer. The presence of a pair-bonded male deters infanticide through parental care and protection from other males, increasing the offspring’s odds of survival. Convincing evidence suggests that early modern humans may have practiced social monogamy and provided biparental care along similar lines [4]. Evolutionarily speaking, we have some understanding of how monogamy developed, but to understand the mechanisms behind it, we need the relevant neurochemistry. The textbook model organism for pair-bonding is the prairie vole. Like humans, voles display variation in their mating patterns: prairie voles are largely monogamous, while meadow and montane voles favor polygamous relationships. Remarkably, prairie voles form long-lasting pair-bonds and exhibit biparental care of offspring. Out of attachment to their partner, prairie voles will typically not pair-bond again if their first partner dies [5]. Two key neuropeptides, short chains of amino acids that act as neurotransmitters, are involved in pair-bonding in prairie voles and, more broadly, in mammals in general. Compared to non-monogamous voles, prairie voles have a higher density of oxytocin receptors in central parts of the mammalian brain’s “reward system.” Studies have shown that the

interaction of these key neuropeptides is heavily involved in the neural processing of social recognition among voles [5]. Oxytocin is essential for pair-bonding, as well as for bonding between mother and infant, while arginine vasopressin (AVP) has been associated with courtship and scent marking [5]. A hypothesis emerged framing pair-bonding as the result of the additive interaction of social recognition pathways and the reward pathway described above. More specifically, it is believed that the reinforcing properties of sex become tethered to the olfactory and social cues associated with the partner, and that this chemistry underlies voles’ monogamy [5]. Although this particular model ignores a host of other neurotransmitters involved in pair-bonding, as well as interactions at the molecular and organismal levels, it has striking implications for the mechanisms mediating pair-bonding in mammals, including humans. Working under this paradigm’s assumptions, researchers increased the number of arginine vasopressin receptors in male, non-monogamous meadow voles. After mating, the male voles expressing increased AVP receptors pair-bonded more frequently than controls [5]. Although

there is not yet data demonstrating a commonality between the pathways observed in voles and in humans, voles serve as an important model by demonstrating the prominence of neurotransmitters in inducing monogamy. When Dunbar wrote that it was the “intensity” of social relationships (as seen in pair-bonding) that ultimately led to greater brain sizes, he expanded the definition of pair-bonding. He describes how primates are unique in being able to “pair-bond” with any member of their species with whom they can form a significant social relationship, not just their mates [1, 6]. When we consider our definition of monogamy and the number of organisms that have been categorized as monogamous, very few if any species would fit under that definition: extra-pair coupling exists and has been documented in several species thought to be strictly monogamous [1]. Humans are not detached from the natural world, and strict monogamy is not present in our own species. Under the social-brain paradigm, animals living in groups with complex social structures require a brain capable of handling higher cognitive tasks such as facial recognition, memory consolidation, and language processing [2]. However, the expensive-tissue hypothesis states that the brain’s high energetic demand requires trade-offs with other energetically “expensive” tissues or changes in reproductive strategies [3]. As a result, brain size in primates can be explained in terms of evolutionary trade-offs between fecundity, the number of offspring, and larger brains. Because primates have longer and more metabolically expensive gestation periods, the presence of a second parent increases the offspring’s chances of survival. In humans, this might explain the prevalence of monogamy and even the interplay between intelligence and monogamy. However, the models and hypotheses we have considered have their limits when applied to humans. One thing is certain: early humans were subject to the same environmental pressures as other animals and often shared the same neural pathways for achieving similar ends. To ignore the possibility that monogamy in humans arose as a result of evolutionary pressures and physiological constraints is to deny our connection to the natural world and the relatedness that connects us to every other branch on the tree of life.

[1] Lukas D & Clutton-Brock TH. The Evolution of Social Monogamy in Mammals. Science 2013; 341 (6145), 526-530

[2] Dunbar RIM & Schultz S. Evolution in the Social Brain. Science 2007; 317 (5843), 1344-1347

[3] Kotrschal A, Rogell B, Bundsen A, Svensson B, Zajitschek S, Brännström I et al. Artificial Selection on Relative Brain Size in the Guppy Reveals Costs and Benefits of Evolving a Larger Brain. Current Biology 2013; 23 (2), 168-171

[7] Opie C, Atkinson QD, Dunbar RIM & Schultz S. Male infanticide leads to social monogamy in primates. PNAS 2013; 110 (33), 13328–13332

[8] Young LJ & Wang Z. The neurobiology of pair bonding. Nature Neuroscience 2004; 7, 1048-1054

[9] Dunbar RIM & Schultz S. The evolution of the social brain: anthropoid primates contrast with other vertebrates. Proc. R. Soc. B 2007; 274 (1624), 2429-2436



Debating Evolution & God

OLIVIA MOSCICKI ‘18

In the spring of 2014, Stephen C. Meyer published his book Darwin’s Doubt: The Explosive Origin of Animal Life and the Case for Intelligent Design. Meyer is a founder of the Intelligent Design Movement (IDM) and the director of its intellectual haven, the Discovery Institute’s Center for Science and Culture (CSC) [1]. In his book, Meyer calls into question the validity of Darwinian evolution by critically examining the Cambrian Explosion. Meyer claims that the wide variety of organisms that seem to have come about in this period do not possess ancestors within the fossil record. He uses this dilemma as a jumping-off point from which to discuss the apparent shortcomings of Darwinian evolutionary theory (which suggests that all species

have come to be through the natural selection of certain advantageous heritable traits) and to present his arguments in favor of intelligent design, the theory that these species were instead engineered by an “intelligent designer.” Despite its stand-alone success, Meyer’s book is part of a much larger, and very controversial, debate surrounding the validity of Darwinian evolution and intelligent design. Nearly ten years ago, this debate manifested itself in a legal battle over scientific education in American public schools. The highly publicized district court case Kitzmiller v. Dover posed defendant Dover Area School District (of Dover, Pennsylvania) against assorted Dover residents and Dover

High School parents. In December of 2004, the plaintiffs challenged the constitutional validity of the Dover Area School Board of Directors’ decision to require ninth-grade biology teachers to read a statement to their classes that presented intelligent design as an alternative to the supposedly unconfirmed Darwinian evolution and directed students to the IDM textbook Of Pandas and People. This policy framed both evolution and ID as scientific theories, each worth consideration in the classroom. However, the plaintiffs contended that such a policy specifically endorsed a Christian viewpoint, thereby violating the First Amendment, which prohibits government “establishment” of one religion over another [2]. So, throughout the fall of 2005, experts on both sides, including Brown University biology professor Ken Miller, set forth to argue for and against the notion that teaching intelligent design is in fact an endorsement of Christianity. Ultimately, the court found that the ID policy did indeed violate the First Amendment, and the notion that intelligent design was a scientific theory worth teaching was defeated [2].

EDITOR Sam Kortchmar ‘16

What is intelligent design?, a website curated by the Discovery Institute, states, “The theory of intelligent design holds that certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection”—i.e., that Darwinian evolutionary theory is insufficient to explain the complexity of living things. Intelligent design proponents argue that intelligent design is a scientific theory without any religious implications. They even go out of their way to distinguish ID from creationism, a theory of the world’s origin that follows the Christian biblical Genesis story. On the Discovery Institute website, designers say: “the charge that intelligent design is ‘Creationism’ is a rhetorical strategy on the part of Darwinists who wish to delegitimize design theory without actually addressing the merits of its case” [3]. However, when one looks closely at the supposedly scientific logic behind intelligent design, its “merits” do not hold up. The “science” is based upon the notion that certain systems and structures in the biological sphere are too complex to have been created by unconscious particles reacting under random conditions, as Darwinian evolution would imply. Thus, ID proponents theorize that there must be a “designer” who intentionally and carefully fabricated these “irreducibly complex” systems. IDM leader Michael Behe defines an “irreducibly complex” system as one in which the removal of any one component would render the system nonfunctional. He claims that these systems disprove evolution because “slight, successive modifications of a [functional] precursor system” are necessary for evolution to work. Intelligent design proponents often argue that the bacterial flagellum is an example of an “irreducibly complex” system. During his Kitzmiller testimony, however, in direct response to this argument, Ken Miller presented the Type III secretion system, a functional precursor to the flagellum that has been discussed in several scholarly peer-reviewed articles [2]. This aspect of Miller’s testimony was a key point in disproving the “science” of ID during the trial. In addition, the concept of irreducible complexity, even if shown to be present in the natural world, would only imply that evolution is an incomplete theory; it would not prove the existence of a designer. Ultimately, ID’s scientific points may be interesting critiques of evolutionary theory, but they do not provide comprehensive evidence for the alternative its supporters propose. Though intelligent design supporters claim that the theory is nonreligious, an exploration of the historical roots of the movement reveals it to be merely a new iteration of previous anti-evolution movements driven by religious fundamentalism. In the 1920s, this

took the form of outright creationism. Creationism supporters did not claim to be scientists, as ID supporters do now, but they opposed the teaching of evolution in schools nonetheless. In 1925, then-governor of Tennessee Austin Peay signed legislation making it illegal for public school teachers to “teach any theory that denies the story of the Divine Creation of man as taught in the Bible, and to teach instead that man has descended from a lower order of animals” [4]. Later that year, biology teacher John Scopes violated this law and was tried in the “Scopes Monkey Trial,” which became highly publicized and was framed to the public as an epic debate between science and religion. Ultimately, Scopes was found guilty, and evolution remained absent from public schools for 40 years. However, in 1968 the Supreme Court struck down an Arkansas law prohibiting the teaching of evolution in public schools, and evolution asserted its place in the public biology curriculum. After this defeat, creationists shifted their self-presentation in order to frame their theory as a scientific one. They began to argue for “balanced treatment,” asking that Biblical creationism be presented as a viable alternative to evolution in the science classroom. However, the First Amendment prevented any legal statutes requiring “balanced treatment” from taking hold. Seeing that they would need the appearance of scientific clout to succeed in the courtroom, creationists again reframed their ideas and called them “creation science.” This version of the Christian anti-evolution theories was thwarted in the legal cases McLean v. Arkansas of 1981 and Edwards v.



Aguillard of 1987. In both cases, statutes mandating the teaching of both evolution and creation science were eliminated on the basis that they violated the First Amendment and falsely endorsed Christian beliefs under the guise of science [2]. After these court decisions, creationist arguments were yet again reframed as scientific and called “intelligent design.” Despite what its architects claim, intelligent design clearly originates from this deeply religious genealogy. This connection is most notably found in the ID “textbook” identified in the Dover trial. Of Pandas and People is published by the Foundation for Thought and Ethics, which is registered as a “religious, Christian organization” by the IRS. In addition, the text was authored by two self-identified creationists [2]. The book was first drafted before the 1980s court cases defeated creation science, and a close look into the successive versions of the text reveals that shortly after the defeat, the 150 words throughout the manuscript referring to creationism were simply replaced with the phrase “intelligent design” [2]. It can only be concluded that the authors were aiming to cover up their clearly Christian intentions. In addition, the intelligent design argument as we know it today is reminiscent of the Reverend William Paley’s 19th-century argument for the existence of God, which pointed to the complexity of life as evidence for a designer, namely, the Christian God [2]. Its clear ties to historically Christian movements and its scientific inadequacies



reveal that intelligent design is not in the family of credible science, but simply represents a new generation in the genealogy of religious pseudoscience. So, it is established that, despite what its supporters say, intelligent design is intimately connected with Christian views and finds its roots in the religion. Thus, the mandated teaching of intelligent design in public schools would violate the First Amendment insofar as it endorses one religion and consequently constricts the freedom of others. However, one question that does not seem to have been directly challenged is whether Darwinian evolution promotes nonreligion, which could also violate the First Amendment. How do non-legal sources weigh in on this question? The supporters of creationism have historically argued that Darwinian evolution is irreconcilable with Christian beliefs and inherently atheistic. However, their arguments seem to stem from a superficial cultural tension between science and Christian values rather than theological dissonance. When I sat down with Ken Miller, key witness in Kitzmiller v. Dover, he argued that evolution debates are “rooted in cultural arguments” and “American anti-intellectualism.” In addition, he pointed out that “only about 15% of Americans are classic biblical literalists, meaning they really think earth is 6,000 years old, and they really think there was the starting couple Adam and Eve...3 times that many reject evolution...What I’m saying is there’s a deep

cultural aversion to what they regard as the culture of evolution” [5]. In other words, many anti-evolutionists oppose evolution not because it directly contradicts their religious beliefs, but because they perceive evolution to be part of an amoral scientific contingent. Miller described a very telling graphic found on the website of a prominent creationist, which depicted evolution as a brick in the foundation of a wall made of several Christian aversions such as homosexuality, euthanasia, and abortion [5]. Thus, it seems that a reluctance to simultaneously support evolution and Christian beliefs is grounded in cultural opposition to evolution’s perceived moral lack, rather than in a substantive theological argument that evolution is atheistic. Many others, in both the scientific and religious realms, find evolution and Christianity to be theologically reconcilable, if not mutually reinforcing. Miller himself, who is known in both intelligent design circles and the scientific community as the foremost defender of Darwinian evolution against ID, is a practicing Catholic. When I asked him if he ever had doubts about the reconciliation of his faith and his science, he said, “if two ideas are not in conflict they have no need of reconciliation” [5]. Indeed, in Kitzmiller v. Dover, Miller objected to the implication that students must “choose God on the side of intelligent design or choose atheism on the side of science” [2]. Rather, God may be chosen in conjunction with science. The Catholic Church itself supports evolution, with

an emphasis on the fact that the theory does not necessitate a meaningless view of life [4]. In addition, as of March 2015, 13,000 Christian clergymen had signed a letter saying that they “believe that the timeless truths of the Bible and the discoveries of modern science may comfortably coexist” [6]. In Darwin’s time, theologians even argued that religion and evolution are mutually reinforcing. In the late 19th century, the famous Scottish evangelist Henry Drummond argued that “an immanent God, which is the God of Evolution, is infinitely grander than the occasional wonder-worker, who is the God of an old theology” [4]. To Drummond, understanding the world as shaped through evolution can not only coexist with Christian belief but also augment God’s power. Finally, Darwin himself definitively proclaimed in a letter to

his contemporary John Fordyce: “Sir, it is absurd to think that a person may not be an evolutionist and a Christian” [5]. Thus, many scientific and religious authorities have confirmed the notion that evolution does not necessitate atheism and therefore does not violate the First Amendment. All in all, it seems that Darwinian evolution fits into the American political framework of religious freedom because it permits but does not require either religion or atheism, unlike intelligent design, which requires the existence of a deity and is rooted in Christian creationism. This dynamic is evident in the fact that more and more Americans seem to support evolution, despite the persistent prevalence of Christian beliefs in America. In fact, supporting evolution is becoming more and more

[1] Stephen C. Meyer [Internet]. 2016 [cited 19 March 2015]. Available from:
[2] Kitzmiller v. Dover. 2005.
[3] Explaining the Science of Intelligent Design [Internet]. Intelligentdesign.org. 2016 [cited 19 March 2015]. Available from:

of a political asset, and perhaps even a necessity. Ken Miller pointed out that the four Republican presidential candidates in 2008 who rejected evolution were not widely criticized for this stance. However, when then-potential 2016 candidate Scott Walker skirted an evolution question on the BBC last year, the American media erupted with condemning responses [5]. This tide of reconciliation and support implies that evolution and Christianity, and perhaps more broadly science and religion, need not be in so much conflict. If indeed evolution exists in harmony with American politics, which supposedly leave room for all religious beliefs, then perhaps Christianity, and other religions, can exist in harmony with evolution; it would seem that Darwin and God need not be enemies.

[4] Dixon T. Science and religion. New York: Oxford University Press; 2008.
[5] Miller K. Evolution and Intelligent Design. 2015.
[6] The Clergy Letter Project [Internet]. 2016 [cited 19 March 2015]. Available from:



Vegetative Value:

The Overlooked Importance of the Urban Forest JACK HEGARTY ‘20

I grew up in a house nestled snug in the woods. I was surrounded by trees, and I loved it. Trees are natural jungle gyms waiting to be climbed; they are welcome sources of shade on a blistering hot day. They are vibrant mosaics of color in autumn, and snow-coated sculptures in winter. Because of this, a love for trees always seemed entirely reasonable to me, yet I have found that this affection is not shared by all. Undoubtedly, part of my arboreal affinity stems from the fact that I find trees so familiar. When I moved to Providence for college, though, the new environment didn’t make me homesick for my familiar woods, but rather awakened in me an entirely new wonder for trees.

While the urban forest is not the ecosystem I’m accustomed to, I’ve realized that it is even more fascinating than the woods I’ve known all my life. In the popular imagination, the metropolis is replete with steel and concrete, devoid of nature. Amidst the human construction of cities, the natural architecture of trees seems incongruous. Despite this perceived incompatibility, the environmental and economic impacts of trees make the urban forest a necessary inclusion in the blueprint of cities. Cities pump toxic fumes into the atmosphere and drain pollution into the water table. Luckily, trees help mitigate

these caustic environmental effects. In fact, city trees are some of the most environmentally important on earth because of their proximity to the problem. Given the vast quantities of pavement in cities, runoff water—rain blocked from percolating into the ground—is inevitable. This runoff water picks up pollutants settled on sidewalks, roads, and parking lots. City stormwater systems then drain the contaminated water into nearby waterbodies. Not only does this man-made system of water control spread pollution, but it also results in damaging erosion and flooding [1]. Since trees absorb much of the rainfall before it is able to become runoff, they are able to proactively oppose this

PHOTO Kaley Brauer ‘17



EDITOR Aryana Javaheri ‘20

process. In Providence alone, urban trees prevent 31.5 million gallons of runoff annually [2]. Urban trees also actively absorb all of the pollutants (i.e., carbon monoxide, nitrogen dioxide, ozone, particulate matter, and sulfur dioxide) monitored under the Clean Air Act of 1970. Trees take in air through their stomata, the pores in their leaves, to obtain the carbon dioxide necessary for photosynthesis [3]. In the process, they also absorb and retain pollutants, acting as air filters for the outdoors. Simultaneously, trees reduce building energy consumption. The water that evaporates from their leaves directly chills the air, while the shade they provide keeps buildings cooler in the summer [4]. Together, these impacts reduce pollution, as less energy is needed to power air conditioners. Urban forests will be instrumental to the future health of our planet. However, many cities still disregard their tree populations. Afflictions from Dutch elm disease to chestnut blight to the emerald ash borer beetle have decimated the tree populations of many

American cities over the course of the past century [4,5]. In the wake of these devastating mass die-offs, little action is being taken to reinstate the foliage that was lost. Presumably, the reason cities have not replaced their trees is the perceived economic burden this would entail. However, planting trees is actually economically favorable. The trees of Houston, TX, for instance, are valued at $1.3 billion for reducing stormwater, $300 million for absorbing pollutants, $111.8 million for reducing air conditioning costs, and $13.9 million for reducing heating costs [6]. This is not an isolated phenomenon: here in Providence, every dollar spent on tree planting and maintenance returns $3.33 to the city each year “...in the form of energy savings, CO2 removal, air quality improvement, stormwater uptake, and aesthetic value” [2]. In total, this means Providence’s trees have an annual worth of just under three million dollars [2]. For a single city, these numbers are impressive. With an aggregate estimated value of 2.4 trillion dollars, the economic impact of all the urban forests in America is noteworthy, to say the least [7].
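As a quick sanity-check of how these figures combine, the arithmetic below uses only the numbers quoted above; the one assumption is mine, namely that “just under three million dollars” can be approximated as $3 million.

```python
# Back-of-the-envelope arithmetic on the tree-valuation figures quoted above.

# Houston, TX: per-category annual values reported in [6].
houston = {
    "stormwater reduction": 1_300_000_000,
    "pollutant absorption": 300_000_000,
    "air conditioning savings": 111_800_000,
    "heating savings": 13_900_000,
}
houston_total = sum(houston.values())
print(f"Houston total: ${houston_total:,}")  # Houston total: $1,725,700,000

# Providence, RI: each dollar spent returns $3.33 per year [2], and the
# annual worth is "just under three million dollars" (assumed here: $3M).
return_per_dollar = 3.33
annual_worth = 3_000_000
implied_annual_spend = annual_worth / return_per_dollar
print(f"Implied Providence spend: ${implied_annual_spend:,.0f}")
```

On these numbers, the Houston categories alone sum to over $1.7 billion a year, and the Providence ratio implies the city’s entire annual tree budget is on the order of $900,000, small next to the benefits either way.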

[1] Karvonen A. Politics of urban runoff. Cambridge, Mass.: MIT Press; 2011.
[2] Still D. [Internet]. 1st ed. Providence: State of Providence; 2008 [cited 18 October 2016]. Available from: sites/default/files/file/Parks_and_Recreation/Providence_Urban_Forest_as_of_2008.pdf
[3] Nowak D, Hirabayashi S, Bodine A, Greenfield E. [Internet]. 1st ed. 2014 [cited 17 October 2016]. Available from: nrs/pubs/jrnl/2014/nrs_2014_nowak_001.pdf
[4] Still D. [Internet]. 1st ed. Providence, RI: City of Providence; 2016 [cited 17 October 2016]. Available from: http://www.providenceri.com/efile/5424
[5] Anagnostakis S. Revitalization of the Majestic Chestnut: Chestnut Blight Disease [Internet]. 2016 [cited 29 October 2016]. Available from: Pages/ChestnutBlightDisease.aspx

In short, cities lack a valid economic motive for continually ignoring their trees. Given this reality, it is likely that city leaders are simply unaware of the value of trees and as a result do not prioritize their presence. Even though the significance of the urban forest is often overlooked, there is some evidence that the social value ascribed to trees is on the rise. In fact, Providence pioneered this movement. From the 1950s to the 1980s, landscape architect Mary Elizabeth Sharpe led large-scale tree planting efforts in the city in an attempt to restore the urban canopy to its former size. Her legacy continues today; there is a plan to increase Providence’s tree coverage to 30% by the year 2020, for example [8,9]. Other cities have followed suit, such as New York City, which accomplished its ambitious goal of adding one million trees to the city in November 2015 [10]. Evidently, cities have started to catch on to how important the urban forest really is. As both a lover of trees and a proponent of practical environmental programs, I can only hope this trend continues.

[6] Foster J, Lowe A, Winkelman S. The Value of Green Infrastructure for Urban Climate Adaptation [Internet]. 1st ed. 2011 [cited 20 October 2016]. Available from: Green_Infrastructure_FINAL.pdf
[7] Nowak D, Crane D, Dwyer J. Compensatory value of urban trees in the United States [Internet]. 1st ed. 2002 [cited 21 October 2016]. Available from:
[8] Rhode Island Heritage Hall of Fame: Mary Elizabeth Sharpe, Inducted 2001 [Internet]. 2016 [cited 19 October 2016]. Available from: cfm?iid=440
[9] Trees 2020 [Internet]. 2016 [cited 21 October 2016]. Available from:
[10] MillionTrees NYC [Internet]. 2016 [cited 19 October 2016]. Available from:




The Future of DNA Technologies ANDREW THOMSON ‘18

Gene-editing technologies give scientists the power to change the creator and controller of all life—the DNA code. These technologies may soon give humans the power to cure diseases like cancer, manipulate deadly viruses, or even alter human embryos.[1] Hundreds of scientists gathered recently at the International Summit on Human Gene Editing in Washington, D.C. in an attempt to determine how, and for what purpose, gene-editing technologies should be used. The driving force behind these discussions is a new gene-editing technology called CRISPR that uses the immune system of bacteria to track down, cut, and edit regions

of DNA.[1] CRISPR is revolutionizing the world of science and medicine because of its precision, efficiency, and accessibility—it can be used by anyone with basic laboratory equipment.[2] The summit released a press statement explaining that “Gene-editing technologies are already in broad use in biomedical research, and may have wide-ranging medical applications. But, the prospect of human genome editing raises many important scientific, ethical, and societal questions.”[1] The biggest concern is that CRISPR will be used to alter the human genome. Changing human beings at the genetic

level has never been done before, and many believe that interfering with heredity would change the definition of what it means to be human. This technology could also be used to create advanced genes that dominate the human gene pool, revive extinct viruses, or give parents the ability to determine the physical traits of their children.[3] The last year has seen a flurry of publications documenting CRISPR breakthroughs, including a paper from Harvard and MIT improving CRISPR accuracy and an experiment appearing in Nature showing CRISPR’s potential effectiveness in fighting HIV.[4] However, the most disputed was a paper

EDITOR Elena Renken ‘19



from Chinese researchers [5] who used CRISPR to alter human embryos. The test was ineffective, but it showed that anyone can use CRISPR in new, unregulated, and potentially harmful ways. In light of these concerns and the widespread availability of CRISPR technology, the summit called for a moratorium on the editing of human embryos, laid down guidelines for basic laboratory gene-editing research, and established an ongoing international forum to oversee genetic development.[1] Unfortunately, it is unclear whether the International Gene-Editing Summit will be enough to control the power of CRISPR.[6] According to Nicholson Price, a law professor at the University of New Hampshire School of Law, “The Washington Summit has no international authority, so there is still no legal restriction on what people are allowed to do. The Conference laid down a plan, but it may have a hard time enforcing it.”[7]

Why be concerned? “The prospect of rapid and efficient genome editing raises many ethical and societal concerns, concerns we may not have enough time to address,” said Feng Zhang, a leading MIT biomedical engineer, at the international summit. [3] CRISPR may one day be used to create ‘designer babies,’ and could change

physical traits like hair color, height, or even disease resistance in human embryos. For ethical and social reasons, these types of edits had not been attempted on human embryos until this past summer, when the Chinese team experimented on nonviable human embryos.[5] Bo Huang, a biophysicist at the University of California, San Francisco, was quoted in a Nature article called “CRISPR, the disruptor” saying, “People just don’t have the time to characterize some of the very basic parameters of the system. There is a mentality that as long as it works, we don’t have to understand how or why it works.” For a system so powerful, Huang said, “That seems very scary.”[3] The Chinese scientists who altered human embryos published their paper in a scientific journal called Protein & Cell. Their goal was to correct the gene defect that causes the blood disease beta-thalassemia, a condition that reduces the production of oxygen-carrying red blood cells in the body.[5] The team found that CRISPR successfully cut the target gene, but it also cut other genes, and the repair mechanisms did not incorporate the right DNA back into the cut areas. They concluded that “our work highlights the pressing need to further improve the fidelity and specificity of the CRISPR/Cas9 platform, a prerequisite for any clinical applications of CRISPR/Cas9-mediated editing.”[5]

Price said that “The Chinese used CRISPR on human embryos because there are no strict regulations; I have not seen any repercussions for those scientists. CRISPR will be used internationally—how we control this technology will be a huge question for society.”[7]

Why CRISPR is Revolutionary “CRISPR has the potential to open up doors in human gene therapy, in controlling pests like mosquitoes, in disrupting viral genes and pathogens, and in agriculture, altering crops and animals,” said Jennifer Doudna, the co-inventor of CRISPR and professor of cell biology at U.C. Berkeley, in a recent TED Talk.[2] Gene-editing technologies have been around since the 1970s, but what makes CRISPR so new and different?[8] Older technologies are more expensive, less precise, and harder to use than CRISPR. With older technologies, a scientist must produce a synthetic protein that matches up with the targeted DNA section. It is very difficult to ensure that this protein goes to the right place, and even harder to make sure it cuts the DNA strand correctly.[8] Older gene-editing technologies can cost as much as $5,000, while basic CRISPR systems can be purchased for $500 per target.[8] Nature quotes Stanley Qi, a systems biologist at Stanford



University, who said that “This power is so easily accessible by labs—you don’t need a very expensive piece of equipment and people don’t need to get many years of training to do this. We should think carefully about how we are going to use that power.”[3] Doudna said in a commencement speech at Brown University last year that “Older technologies are like having to rewire your computer each time you run a new piece of software. They are so inefficient and difficult to use that most scientists did not adopt them for use in their own laboratories or clinical applications. A technology like CRISPR has appeal because of its relative simplicity.”[8] She went on to explain that “high school students who come to my lab will learn how to successfully edit the human genome within a few weeks.”[8] Because CRISPR is easier to operate, a wider range of scientists now have the ability to pursue novel experiments, like the alteration of human beings.

Ethical Issues Addressed by an International Summit The rise of CRISPR has been called a ‘revolution’ because of this increase in accessibility. When only a handful of laboratories had the resources and knowledge to alter genomes, it was easier to regulate and control what kind of research these laboratories were doing.



Now, anyone can potentially start manipulating human DNA, and there are no enforced international laws governing gene-editing.[7] Some countries, like Germany, have strictly enforced limits on human embryo experimentation, while other countries, like China, Japan, and Ireland, have loose guidelines that are rarely enforced.[6] The International Summit on Human Genome Editing convened in an attempt to raise awareness and set a standard for these ethically charged gene-editing issues. The distinction between somatic and germline cells was a crucial differentiation at the conference. The scientists were more comfortable with gene-editing of somatic cells than of germline cells because somatic “genomes are not transmitted to the next generation.” Somatic cells are the cells of the body that reproduce by duplicating themselves, while germline cells are reproductive sex cells that pass genes on to the next generation.[1] When somatic DNA is altered, only the targeted cells are edited, so not every cell is affected, and the changes stay in one individual. This means that if scientists make an unwanted or harmful DNA edit, it will not become part of the greater human genome. Germline cells include eggs, sperm, and embryos. The summit statement explained that “Gene-editing might also be used to make alterations in gametes or embryos, which will be carried by all of the cells of a resulting child and will be passed on to subsequent generations as part of the human gene pool.” Embryonic cells grow and divide to create a person, so editing those cells early in development incorporates the DNA change into every cell.[1] The summit concluded that regulated somatic and germline laboratory research should proceed as long as the modified cells are not used to produce a pregnancy. They also approved clinical somatic applications like “editing genes for sickle-cell anemia in blood cells or for improving the ability of immune cells to target cancer.”[9] The panel concluded that “There is a need to understand the risks, such as inaccurate editing,” but that “because proposed clinical uses are intended to affect only the individual who receives them, they can be appropriately and rigorously evaluated within existing and evolving regulatory frameworks for gene therapy.”[1] The summit called for a moratorium on clinical germline applications, which range from “avoidance of severe inherited diseases to enhancement of human capabilities” to ‘designer babies.’ They concluded that “It would be irresponsible to proceed with any clinical use of germline editing” until a variety of conditions were met. These conditions include establishing adequate safety standards, proper regulatory oversight, a better understanding of the risks and benefits, and a broad societal consensus deeming germline editing to be acceptable. The panel did not

believe that any of these criteria had been met, but stated that “as scientific knowledge advances and societal views evolve, the clinical use of germline editing should be revisited on a regular basis.”[1] Finally, the summit established that the science academies hosting the conference would form an ongoing forum to discuss clinical uses of gene-editing, inform policymakers, and maintain research guidelines. The summit explained that “The forum should be inclusive among nations and engage a wide range of perspectives

and expertise—including from biomedical scientists, social scientists, ethicists, health care providers, patients and their families, people with disabilities, policymakers, regulators, research funders, faith leaders, public interest advocates, industry representatives, and members of the general public.”[1]

Conclusion

While CRISPR technology is still being refined, genetically engineered plants, animals, and humans are no longer

[1] Proceedings of the International Summit on Human Gene Editing; 2015 Dec 03; Washington, DC: The National Academies of Sciences, Engineering, and Medicine; [cited 2016 Nov 07]. Available from: http://www8.
[2] Doudna, J. How CRISPR lets us edit our DNA [TED talk web video]. London: TEDGlobal; 2015 Sep 05. Available from: https://www.ted.com/talks/jennifer_doudna_we_can_now_edit_our_dna_but_let_s_do_it_wisely?language=en
[3] Ledford H. CRISPR, the disruptor. Nat. 2015 Jun 03.
[4] Panpan H, Shuliang C, Shilei W, Xiao Y, Yu C, Meng J, Wenzhe H, Wei H, Jian H, Deyin G. Genome editing of CXCR4 by CRISPR/Cas9 confers cells resistant to HIV-1 infection. Nat. 2015 Oct 20; 15577.

science fiction; humans now have the ability to alter and control the living world around them.[9] Doudna says that “This puts in front of all of us a huge responsibility, to consider carefully both the unintended consequences as well as the intended impacts of a scientific breakthrough.”[8] The world of gene-editing is contentious and fraught with societal and ethical issues, but it is ultimately up to us to determine how we use these technologies.

[6] Doudna, J. Jennifer Doudna: Genome Engineering with CRISPR-Cas9: Birth of a Breakthrough Technology [web video]. UC Berkeley; 2016. Available from:
[7] Price, N. Recording from phone interview [primary source]. 2016 Feb 03.
[8] Doudna, J. CRISPR-Cas9 Commencement Speech at Brown University [video recording]. Brown University Archives; 2015 Sep 12. Adler-Rothman Lecture Series.
[9] Slaymaker IM, Gao L, Zetsche B, Scott DA, Yan WX, Zhang F. Rationally engineered Cas9 nucleases with improved specificity. Sci. 2015 Dec 01; 351(6268): 84-88.

[5] Liang P, Xu Y, Zhang X. CRISPR/Cas9-mediated gene editing in human tripronuclear zygotes. Protein Cell. 2015 Apr 18; 13238-015.





On Christmas Day in 1956, two parents in Stolberg, West Germany were horrified to see their baby girl born without ears. Her father worked for Grünenthal, a German pharmaceutical company.[1] Unbeknownst to the parents, an epidemic was silently raging as thousands of babies were born with impairments. In addition to absent ears, many infants were born with internal organ problems, digestive system defects, and miniature or absent limbs.[2] Almost all pediatric clinics in West Germany saw infants with phocomelia, the rare condition whose name means “seal limbs,” but were unaware that other clinics were seeing phocomelic babies as well, each believing it was observing isolated events.[1] The epidemic was seemingly invisible. More than five years after the first case in Stolberg, physicians in Germany and Australia finally found the connection between all these cases: thalidomide.



Produced by Grünenthal, thalidomide was ostensibly a miracle drug: a calming sedative for the fear-ridden post-World War II era, when sleeplessness pervaded many households.[1] The drug, however, snatched away the lives of many infants and changed the lives of thalidomide “survivors” forever. While some countries dispensed thalidomide by prescription, Germany and others sold it over the counter in easy-to-access pharmacies.[3] In fact, the only major country that did not embrace the drug was the United States. Dr. Frances Oldham Kelsey, recipient of the President’s Award for Distinguished Civilian Service for her regulatory work against thalidomide, was the physician and reviewer for the U.S. Food and Drug Administration who blocked thalidomide from the United States.[4] The drug passed the FDA’s criteria for pharmacological regulations, but Dr. Kelsey was concerned about its clinical aspects. Thanks to her keen eye, Dr. Kelsey saved the United States from thalidomide’s marring effects, showcasing the importance of detailed and rigorous drug regulations. While many pharmaceutical companies believe that the duration and costs of drug regulations impede economic growth and innovation, a detailed evaluation of a drug’s development, safety, and effectiveness is vital and ultimately increases the drug’s true potential. The case of thalidomide shows that faulty drug testing and a lack of sound regulations can harm patients and lead to the ultimate detriment of a pharmaceutical company. Thirsty for profits, Grünenthal focused more on expanding its line of antibiotics than on testing its drugs. The company faced serious reprimands when antibiotics such as Supracillin, proven toxic because it damaged the nerves between the inner ear and the brain and caused deafness,

EDITOR Hugo Zoells ‘20

were not subjected to animal testing.[1] Likewise, another of its antibiotics, Pulmo 550 (supposedly superior to penicillin) yielded drastic side effects.[1] The fact that two of Grünenthal’s previous drugs had made it to market without proper testing should have been a major warning sign for any of the company’s future releases. However, money-chasing leaders in the company evaded thorough drug testing by taking advantage of subpar drug regulations, leading to the dissemination of thalidomide as a completely “safe” drug. Currently, the standard protocol for drug testing in the United States encompasses five stages: pre-discovery, discovery and preclinical, the investigational new drug application, clinical trials, and the new drug application and approval.[5] Pre-clinical phases of drug testing investigate a drug’s mechanism and effects in vitro, through cell studies, and in vivo, through animal studies. With pre-clinical phases, researchers can see the basic effects of a drug before exposing humans to it in clinical trials.[6] In the case of thalidomide, inadequate animal testing led to false claims about the drug’s safety. When scientists could not reach a median lethal dose or find any side effects in mice, guinea pigs, rabbits, cats, or dogs, the company claimed that thalidomide was nontoxic.[1] The company conducted

animal trials without peer review and claimed that rats reached a hypnotic state, which was enough to allow the drug to go to market. Although scientists could not find doses high enough to kill the animals, they never tested pregnant ones.[7] Had there been a comprehensive line of testing, the teratogenic effects of thalidomide would have been found. After the drug’s effects on babies were exposed, testing on pregnant animals showed a parallel pattern of deformities in their offspring.[7] Despite the lack of testing, the company had proceeded to advertise the drug as completely safe for pregnant women. The thousands of deformities and deaths caused by the drug could have been prevented had the tests on pregnant animals been conducted before thalidomide reached the hands of consumers. When pre-clinical phases are over, pharmaceutical companies in the United States must file an investigational new drug application before conducting clinical trials. Clinical trials give insight into a drug’s safety, proper dosage and schedules, and effectiveness.[6] There is also surveillance after the drug is sent to market, in which patients and doctors give ongoing feedback.[5] With thalidomide, rather than conducting thorough and methodical testing, Grünenthal arbitrarily disseminated free samples of the drug at different hospitals. Introduced by a method of

[1] Stephens T, Brynner R. Dark Remedy: The Impact of Thalidomide and Its Revival as a Vital Medicine. Cambridge: Perseus Pub; 2001.
[2] Thalidomide Victims Association of Canada. The Canadian Tragedy [Internet]. Thalidomide Victims Association of Canada; [cited 2016 Nov 1].
[3] Thalidomide Society. About Thalidomide [Internet]. London: Thalidomide Society; [cited 2016 Nov 1]. Available from: http://www.
[4] Kelsey F. Autobiographical Reflections. Food and Drug Administration Oral Histories; 2014.

“Russian roulette,” the drug was irresponsibly and inadequately tested in humans.[1] Months after thalidomide’s release, Grünenthal started to hear complaints of dizziness, memory loss, constipation, and numbness in the hands and feet. Even after one patient taking thalidomide committed suicide because of unbearable nerve pain, the company characterized the excruciating symptoms as rare allergic reactions and did nothing.[1] To better protect people from the unknown effects of drugs, there needs to be stricter post-market surveillance. A more vigilant and systematic post-market surveillance protocol would allow investigators to recognize clear patterns between drugs and their effects. Approximately 10,000 babies worldwide were born with deformities due to thalidomide, and only 5,000 survived past childhood.[1] With ongoing research and scientific developments, drugs can find new purposes. In a turn of events, the once tragedy-causing drug is now returning to the pharmaceutical world as a possible anti-cancer agent, thanks to its ability to inhibit blood vessel growth. Going forward, cogent regulations must be both thorough and flexible: strong enough to protect patients, yet resilient enough to adapt to the rapidly changing pharmaceutical landscape.

[5] U.S. Food and Drug Administration. The Drug Development Process. FDA; 2015 Jun 24 [cited 2016 Nov 2]. Available from: http://www.fda.gov/ForPatients/Approvals/Drugs/default.htm
[6] Zielinski B. Biotechnology in Clinical Medicine. Cognella Academic Publishing.
[7] Fulkerson N. Testing on animals leads to important medical breakthroughs. Northern Star [Internet]. 2011 Apr 6 [cited 2016 Nov 3].



The Evolution of Irrationality



Aristotle defined man as a “rational animal,” meaning that human beings have the capacity to engage in rational thought and decision-making. Furthermore, he asserted that this “rational principle” is what separates humankind from all other animals [1]. The idea of humans as rational thinkers has persisted from the time of the Greeks to the present day and is especially invoked in the field of economics. Under expected utility theory, which the economist Herbert Simon examined in a classic 1955 paper, the rational approach to making decisions is to weigh the benefits and costs of each option and then choose whichever offers the best predicted outcome [2]. However, expected utility theory is not practical and does not describe how humans really make decisions [3]. Real-life decision-making is subject to influences like cognitive biases that cause humans to deviate from this rational framework.
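The rational framework described above can be made concrete with a small sketch. Expected utility theory scores each option by its probability-weighted outcomes and picks the highest score; the options and payoff numbers below are hypothetical, chosen only for illustration.

```python
from fractions import Fraction

def expected_utility(option):
    """Sum of probability * utility over an option's possible outcomes."""
    return sum(p * u for p, u in option)

# Each option is a list of (probability, utility) pairs (hypothetical values).
safe  = [(Fraction(1), 200)]                          # 200 for certain
risky = [(Fraction(1, 3), 600), (Fraction(2, 3), 0)]  # 1/3 chance of 600, else 0

# A perfectly "rational" agent is indifferent: both options score exactly 200.
print(expected_utility(safe), expected_utility(risky))
```

Real people, as the rest of the article shows, are rarely indifferent between two options like these, even though their expected utilities are identical.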

ARTWORK Emily Reed ‘20



EDITOR Joseph Chen ‘20

Cognitive biases are patterns of thought that distort perception and influence the decision-making process. One such bias, the framing effect, describes how the way an option is presented can influence a decision even when the underlying facts are identical. In one study, beef labeled “75% lean” was rated healthier than the same beef labeled “25% fat” [4], even though the two labels are quantitatively equivalent. These cognitive biases are pervasive in human decision-making and make completely “rational” decision-making impossible. Recent research seeks to understand the evolution of these biases. Non-human primates, our closest evolutionary relatives, serve as research subjects in experiments assessing whether other animals also demonstrate cognitive biases. In one study, for example, monkeys preferred to be shown one piece of food and then given two pieces, rather than being shown three pieces of food and then given two [5]. Humans experience the same bias, preferring choices framed as gains rather than losses [6]. Research seeks to determine whether cognitive bias in decision-making is unique to humans, which would suggest that these biases are socially determined. Comparative studies of framing and risk aversion in human and primate decision-making instead present evidence implying that cognitive biases have a biological origin. In a classic study of risky-choice framing conducted by Kahneman and Tversky, participants were asked to make

decisions in a hypothetical situation about a disease epidemic. All participants were given the same information: without either program, 600 people would die. However, the way the information was presented, how it was “framed,” differed. The scenario described two programs to combat the disease: one riskier program that might save more people, and an alternative that carries no risk but may save fewer. The outcomes were quantified either in terms of how many people could survive, a “gain” condition or survival frame, or in terms of how many people could die, a “loss” condition or mortality frame (e.g., 200 people will survive vs. 400 people will die). Participants given the survival frame more frequently chose the safer program, in which a fixed number of people are always saved. Participants given the mortality frame more often chose the riskier program, in which there is a slight chance that everyone will survive but a greater chance that everyone will die [6]. Beyond this study, ample research supports the conclusion that people are risk averse when a decision is framed as a gain and risk seeking when it is framed as a loss [7]. Research with capuchin monkeys reaches conclusions similar to those of the human studies. Of course, monkeys cannot be asked to evaluate a hypothetical scenario; instead, choices were set up in the context of food. In the “gain” condition, monkeys chose between a safe amount of food and a risky amount. The safe presenter always showed the monkey one piece of food and then added another. The risky presenter always

showed the monkey one piece of food and sometimes added two pieces. The monkeys preferred the safe presenter, who always gave the same amount of food. In the “loss” condition, the safe presenter always showed the monkey three pieces and took away one. The risky presenter, on the other hand, always showed the monkey three pieces and sometimes took away two. In this case, the monkeys preferred the risky presenter [8]. Even though the amount of food given across multiple trials remained constant for both presenters, the monkeys showed susceptibility to framing effects and the same loss aversion as their human counterparts. The evidence of cognitive bias in both humans and our close phylogenetic relatives suggests that, though not rational, these biases may have provided an evolutionary advantage to our early ancestors. At first, it is a strange proposition that an irrational thinking process would be selected for by evolution. But it is important to realize that we call these thinking processes irrational only relative to the economic definition of what is rational. In nature, creatures try to maximize their fitness, the ability to survive and reproduce, and maximizing fitness might not always match up with maximizing utility. Expected utility theory states that all decisions should be made according to consistent preferences. However, it may be important for an ape to pay attention to the context around it and make decisions based on inconsistent preferences [9]. For example, loss aversion may have been selected for in an environment with scarce food, since resource losses could result in starvation and thus matter more than resource gains.
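A rough simulation shows why the monkeys' preferences are puzzling from a purely expected-value standpoint. The payoffs below follow the study's setup as described above; the 50/50 chance standing in for the risky presenter's "sometimes" is an assumption, chosen only so the expected amounts come out equal.

```python
import random

random.seed(0)
N = 100_000  # simulated trials per presenter

def average(xs):
    return sum(xs) / len(xs)

# Gain frame: start from one piece shown.
safe_gain  = [1 + 1 for _ in range(N)]                      # always adds one piece
risky_gain = [1 + random.choice([0, 2]) for _ in range(N)]  # sometimes adds two

# Loss frame: start from three pieces shown.
safe_loss  = [3 - 1 for _ in range(N)]                      # always removes one piece
risky_loss = [3 - random.choice([0, 2]) for _ in range(N)]  # sometimes removes two

# Every presenter delivers about two pieces on average, yet the monkeys
# preferred the safe presenter in the gain frame and the risky one in the loss frame.
print(average(safe_gain), average(risky_gain), average(safe_loss), average(risky_loss))
```

Since all four averages converge on two pieces of food, a utility-maximizing chooser should be indifferent; the monkeys' systematic preferences are the signature of framing.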



This means it would be more biologically rational to analyze risk in terms of what can potentially be lost rather than in terms of what can be gained [10]. Studies with primates also present evidence suggesting evolutionary selection of bias. Notably, there are significant differences in the development of cognitive bias across primate species. For example, chimps live in a higher-risk environment than bonobos, experiencing more seasonal variation and depending more on risky hunting. Therefore,

chimps display riskier behavior across conditions, even those in which primates are generally risk averse. This suggests that cognitive biases in different species may have adapted to their ecological environments [9]. The results also suggest that decision-making strategies are context-dependent, which has implications for human decision-making as well. For example, a comparative study of perceptual biases in Namibian and British populations found that these biases differed between the two groups [13]. This corresponds to the idea of creatures evolving to suit the particular biological or social niche in which

[1] Aristotle. Nicomachean Ethics. Trans. W. D. Ross. Virtue Science, 2016. Web.
[2] Simon, H. A. (1955). A Behavioral Model of Rational Choice. The Quarterly Journal of Economics, 69(1), 99-118.
[3] Kahneman D, Tversky A. 1979. Prospect theory: an analysis of decision under risk. Econometrica 47, 263-291.
[4] Levin, I. P., & Gaeth, G. J. 1988. “How Consumers are Affected by the Framing of Attribute Information Before and After Consuming the Product.” Journal of Consumer Research 15: 374-378.
[5] Krupenye C, Rosati AG, Hare B. Bonobos and chimpanzees exhibit human-like framing effects. Biol Lett. 2015 Feb;11(2):20140527.
[6] Tversky, Amos, and Daniel Kahneman. 1986. “Rational Choice and the Framing of Decisions.” The Journal of Business 59(4): S251-78.
[7] Kühberger A. The Influence of Framing on Risky Decisions: A Meta-analysis. Organ Behav Hum Decis Process. 1998 Jul;75(1):23-55.
[8] Lakshminarayanan VR, Chen MK, Santos LR. 2011. The evolution of decision-making under risk: framing effects in monkey risk preferences. J. Exp. Soc. Psychol. 47:689-93.



they reside. Evidence suggests that humans evolved to use cognitive biases in decision-making. Indeed, the presence of these behaviors in multiple species indicates that such “irrational” behaviors may in fact be rational in a biological and evolutionary context. While Aristotle may have defined man by his rational cognitive abilities, evolution defined man by his irrationality. Therefore, only by exploring this irrationality can we discover what makes us truly human.

[9] Santos LR, Rosati AG. The evolutionary roots of human decision making. Annu Rev Psychol. 2015 Jan 3;66:321-47. doi: 10.1146/annurev-psych-010814-015310.
[10] Li YJ, Kenrick D, Griskevicius V, Neuberg S. (2011). “The Evolutionary Roots of Decision Biases: Erasing and Exacerbating Loss Aversion.” In NA - Advances in Consumer Research Volume 38, eds. Darren W. Dahl, Gita V. Johar, and Stijn M.J. van Osselaer. Duluth, MN: Association for Consumer Research.
[11] Cosmides, L., & Tooby, J. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58, 1-73.
[12] Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102, 684-704.
[13] Caparos S, Linnell KJ, Bremner AJ, de Fockert JW, Davidoff J. Do local and global perceptual biases tell us anything about local and global selective attention? Psychol Sci. 2013 Feb 1;24(2):206-12.

Liberation Technology

How Revolutionary Is It? SARA SHAPIRO ‘20

Liberation technology has the potential to expand freedom, whether political, social, or economic. The term broadly encompasses all information and communication technologies that have the power to increase a group of people’s freedoms, though this article focuses mainly on the Internet and its applications, such as social media websites, since these are the most revolutionary liberation technologies of the modern age [1]. Liberation technology has become a major focus of activists, both within America and abroad, as a fundamental means of promoting political freedom. However, a powerful and important dichotomy of opinion has emerged as to whether liberation technology is currently an effective and fair way to promote revolution. Despite the widespread belief that the Internet promotes democratic ideals, the ability of oppressive governments to monitor, limit, and deny access to the Internet is often not accounted for. The assumption that the Internet leads to more freedom is fundamentally contingent on those who are most oppressed having unrestricted access to it, which is rarely the case. To maximize the efficacy of the Internet as a tool for marginalized peoples to mobilize and spread dissent, this paradox must be resolved.

EDITOR Brian Zhao ‘19



LIBERATION TECHNOLOGY AS A TOOL FOR REVOLUTION UNDER OPPRESSION

There are a multitude of ways that liberation technology, specifically the Internet, can be used to counter oppression and governmental discrimination. The ability to communicate using the Internet as a forum has an immense and widespread impact: it enables those otherwise unable to mobilize to do so, allows for a free proliferation of dissenting beliefs, and creates national and global dialogues about the oppression of various groups of people [1]. The most prominent example of liberation technology as a revolutionizing force was the 2011 Egyptian revolution. Under President Hosni Mubarak, the Egyptian government was opaque, corruption was commonplace, and basic rights, such as those to speech, petition, and protest, were severely infringed upon [2]. While the revolution was not a completely Internet-oriented one, social media played a vital role in the mobilization of the opposition. Primarily blogs, Twitter, and Facebook, but also other means of social media and Internet communication, allowed opposition activists to inform Egyptians and people abroad about their mission and to spread the word about protest events. In addition to communication, the Internet allowed the Egyptians to learn from the triumphs and defeats of the Tunisian revolution, helping them structure their own dissent. To describe these events as creating a fair and stable government would be an extreme miscategorization, but it is impossible to deny that the Internet was an irreplaceable means of unifying, informing, and mobilizing the opposition to Mubarak, serving as a testament to liberation technology as a tool of revolution [2]. The Internet also acts to monitor and correct abuses, holding governments accountable. If a nation marginalizes a group of people, the whole nation and world can learn

about it via the Internet and other liberation technologies [1]. In 2003, Sun Zhigang was stopped by police in China and sent to a detention center when he was unable to produce his temporary living permit or any form of identification. He died three days later. While the police said his death was due to a heart attack, a subsequent autopsy indicated that the true cause was a brutal beating. Sun’s story was reported by the Nanfang Dushi Bao, a liberal Chinese newspaper, and proliferated throughout China via the Internet, pressuring the Chinese government to be held accountable and to close detention centers, which prevented further victims like Sun from being held without cause. Without the reach of the Internet, Sun’s story would never have extended throughout the entirety of China, especially its rural areas, where cases like Sun’s were most prevalent, and change would not have been enacted [1].

DISSENTING OPINIONS

Despite these examples, some disagree with the idea that liberation technology, in its current form, is actually an effective means of promoting revolution. Take Sun’s case. Could his death have provoked more change if the Internet were freer and more available to rural Chinese people, specifically migrant workers like Sun? After all, the Chinese government still has other means of oppressing its people that have not been addressed [1]. While the proliferation of access to the Internet has widely been thought to increase liberty, democracy, and governmental accountability, a main obstacle to liberation technology is that the government, at



least in developing nations, tends to be the main distributor of the Internet to its constituents. A study published earlier this year by Weidmann et al. determined that there is extreme political bias in the distribution of the Internet, meaning that the groups that are most politically oppressed receive access to the Internet at far lower rates than those belonging to powerful groups [3]. This makes sense considering that developing nations tend to be staunchly ethnically polarized, with one powerful ethnic group marginalizing another, weaker group. In the Weidmann

study, the Ethnic Power Relations list, developed by researchers at UCLA, was used to determine which groups were politically included and excluded. Politically included groups were defined as having national political power, whereas excluded groups were defined as having none, although some excluded groups may have had regional political power [3]. If the powerful ethnic group is the one that allocates Internet technology, it follows that those in power would benefit from the Internet while those who are oppressed would not. This is because the group in power is often composed of the people responsible

for technologically developing the nation, and it can control the technological access of marginalized peoples to prevent the spread of dissent [1,4,5]. To measure this, the researchers monitored digital communication, collecting Internet traffic and routing data for 16-day periods over eight years (2004-2012) to determine Internet usage in nations on the Ethnic Power Relations list, and then used geolocation techniques to find where the collected data came from within those nations. They discovered that members of ethnically and politically oppressed groups receive about 60% of the Internet access that groups in power receive [3]. In the context of revolutions utilizing liberation technology, the study presents a convincing argument. In the Egyptian revolution previously discussed, the government had actually begun a mission of modernizing Egypt, giving 80 million Egyptians access to the Internet [2]. Without such initial liberation, it is unlikely the revolution would have occurred in the same manner or at the rate it unfolded. Another consideration is that even if oppressed groups receive access to the Internet, their oppressors often monitor and control such activity, limiting

the extent to which liberation technology can be utilized. While there have been technological developments to circumvent such oppressive forces, marginalized groups do not always have access to these tools [1]. Thus, the fundamental question is this: if those who are most oppressed cannot fully or fairly access the tool considered the future of revolution, how revolutionary can that tool truly be? Such digital discrimination means that multiple reforms must be made to the infrastructure that upholds liberation technology in order to make it a truly effective and fair means of promoting freedom from oppression.

LOOKING TOWARDS THE FUTURE: MAKING LIBERATION TECHNOLOGY MORE LIBERATING

The idea that liberation technology is a useful tool of the oppressed and the argument that authoritarian governments retain control over the domains of such technology both have merit and are not necessarily mutually exclusive. What is abundantly clear, however, is that significant development efforts are needed to maximize the efficacy of liberation technologies and account for the previously described inequalities. The evidence indicates that we have utilized only a small portion of what the Internet can offer in terms of helping oppressed groups mobilize, proliferate dissent, and create a lasting positive impact.

So, what can be done? Currently, Western companies like Nokia-Siemens sell surveillance and censorship equipment to foreign dictatorships, equipment that allows oppressive governments to block and monitor the online activities of civilians [1]. Such sales should be banned by the governments of the nations in which these companies reside. At the same time, technologies to circumvent surveillance are also on the rise, and wealthy liberal governments should subsidize these technologies for peoples in nations under digitally oppressive regimes [1,5]. These governments should also fund

[1] Diamond, Larry. “Liberation Technology.” Journal of Democracy 21.3 (2010): 69-83. Johns Hopkins University Press. Web. <https://muse.>.
[2] Eltantawy, Nahed, and Julie B. Wiest. “Social Media in the Egyptian Revolution: Reconsidering Resource Mobilization Theory.” International Journal of Communication 5: 1207-1224. USC Annenberg. Web.
[3] Weidmann, N. B., S. Benitez-Baleato, P. Hunziker, E. Glatz, and X. Dimitropoulos. “Digital Discrimination: Political Bias in Internet Service Provision across Ethnic Groups.” Science 353.6304 (2016): 1151-1155. American Association for the Advancement of Science. Web.

improvements in encryption techniques to facilitate more open communication among oppressed peoples [1]. Finally, liberal governments can pledge support to grassroots revolutionaries under authoritarian regimes and help secure their rights when they are imprisoned or otherwise apprehended by their governments. With the enactment of these changes, liberation technology has the potential to live up to its revolutionary expectations.

[4] Unsworth, John M. “The Next Wave: Liberation Technology.” The Chronicle Review 50.21 (2004). William and Flora Hewlett Foundation. Web.
[5] Deibert, Ronald, and Rafal Rohozinski. Journal of Democracy 21.4 (2010): 43-57. Johns Hopkins University Press. Web. 02 Nov. 2016.



More Than Meets the Ear


It is worth noting that after 200,000 years of human existence, all human actions are still governed by the laws of natural selection. No matter how advanced we may think we are, our organismal drive to reproduce remains ever-present. Some parts of our lives reveal this goal quite clearly: eating, exercising, and working all play a direct role in our ability to survive, reproduce, and thrive. However, the fitness benefits of the leisure activities that now occupy such a large part of life in the 21st century are less obvious. Even with the beneficial biological impact of music largely unknown to the average listener, the typical American still spends around four hours a day listening to music, according to a study by Edison Research [1]. Music has become a regular companion that follows us through our lives. In fact, our passion drives an industry worth $130 billion globally. Naturally, one may wonder: what qualities of music make humans so obsessed with it? Music, like all forms of art, is an abstract stimulus: a non-physical event that is able to produce a specific functional reaction in the brain. Despite its intangibility,

music can arouse feelings of intense pleasure similar to those produced by concrete stimuli, such as food and money, via neurological pathways. The neurological pathway responsible for the euphoric response to abstract and tangible stimuli like music and food begins with stimulation of dopaminergic activity. Dopamine, the neurotransmitter released as a result of dopaminergic activity, mediates pleasure in the brain.

ARTWORK Emily Reed ‘20


EDITOR Joseph Chen ‘20

Research on the neurobiological effects of music on the human brain has demonstrated a clear link between music and feelings of pleasure. Neuroimaging of listeners' brains reveals increased activity during pleasurable music in the emotion and reward circuits of the ventral striatum, a region closely associated with decision-making and reward. Thoughts of reward increase the release of dopamine in the ventral striatum, and this effect provides biological evidence for why music increases pleasure in the listener. Intense pleasure manifests itself physically in the body as chills; the frequency and intensity of chills experienced by the listener serve as a marker of peak emotional response to music, as reported by a study published in Nature Neuroscience [2]. Although music's biological effect is the same across all humans, the function that music serves in society varies between individuals, cultures, and time periods. Many modern listeners offer insight into music's current role in society, specifically western culture. A study on “The Psychological Functions of Music Listening” breaks down the reasons for listening to music into three major categories in order of importance: social and cultural function, cognitive function, and physiological relatedness [3]. Social and cultural functions are the most important reason modern Americans listen to music: individuals who listen for this function seek arousal and mood regulation, using music as a way to convey emotions. The secondary influence is self-awareness, which falls under the category of 'cognitive function'. Individuals who listen to music for self-awareness seek a personal relationship with it, in which the music serves as a nurturing environment for reflection. Surprisingly, the least influential factor is 'physiological relatedness,' or social bonding. This implies that many western listeners do not use music to feel bonded to others over a shared identity. This ordering of listening factors reveals a prioritization of personal well-being and self-acknowledgment over social relationships [3]. The effect that music has on the brain, combined with these three driving factors for listening, lays the groundwork for how music can be utilized as therapy to improve the well-being of individuals suffering from depression. There is a well-established relationship between a diagnosis of depression and a lack of pleasure experienced by the affected individual. Music provides an opportunity for individuals facing depression to experience pleasure through physical, aesthetic, and relational means as a result of its biological influence. Physical and aesthetic pleasure are both positive results of creating music with others; in therapy, this occurs between the participant and a music therapist. During music therapy sessions, the therapist plays a neo-parental role by musically nurturing the patient, facilitating the patient's discovery of self and of self in relation to others, a process termed relational [4]. Music therapy is not being used only to target depression: research is underway on its effects in treating autism, anxiety, Parkinson's disease, cerebral palsy, and countless other conditions. The lack of side effects makes music therapy an appealing form of healing for many diseases. The broad influence of music on daily life, as well as its many therapeutic benefits, helps explain its extreme prevalence in society today. Clearly, music's role in reproduction and survival is far greater than it is often given credit for.

[1] How, and How Much, America Listens Have Been Measured for the First Time [Internet]. Billboard. [cited 2016 Nov 5]. [2] Salimpoor VN, Benovoy M, Larcher K, Dagher A, Zatorre RJ. Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nature Neuroscience. 2011;14(2):257-262. [3] The psychological functions of music listening [Internet]. Frontiers in Psychology. [cited 2016 Nov 5]. [4] Music therapy for depression: it seems to work, but how? The British Journal of Psychiatry [Internet]. [cited 2016 Nov 5].



Data Science for Social Good RAHUL JAYARAMAN ‘19

Current uses for data analysis are wide and varied: we use it to optimize traffic flow on surface streets and highways, generate ads for social media users based upon their preferences, and perform other tasks that help our interconnected society function on a day-to-day basis. However, there isn't enough emphasis on using data for the greater good, such as for solving social issues like homelessness and poverty. As the number of massive online datasets has increased, so have the possible uses of that data for social good. These large datasets, generated over years of experimenting and polling, carry two types of data: demographic and geospatial. Demographic data pertains to macroscopic, overarching aspects of a community, such as racial/ethnic makeup, average income, and population density [1]. Geospatial data, on the other hand, carries geographical information with it, such as latitude/longitude coordinates. The most comprehensive source of demographic data is the US Census, which, despite occurring only once every decade, serves as an accurate indicator of local, state, and national trends in salary, population, and much more [2]. One of the major sources of geospatial data is the geographic information system (GIS), which helps analyze geospatial relationships in a region [3]. A GIS, as described by National Geographic, “is a computer system for capturing, storing, checking, and displaying data related to positions

on Earth’s surface” [4]. For instance, current uses for GISs include disseminating and analyzing municipal data collected by city and county governments. Many of these datasets, because they were collected by the government, are available online in formats that are easily parsed and make this data easier to process and analyze [1]. But what good is this data if it cannot be used accordingly? Current endeavors to use these datasets have generated concrete results, but suggestions based upon these results have yet to be implemented. For instance, the United States Census Bureau has published yearly datasets that measure poverty based upon recommendations from a National Academy of Sciences report [1].
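To make the geospatial side concrete, here is a minimal sketch (hypothetical coordinates; the standard haversine formula, not any particular GIS package) of the kind of computation a GIS performs on latitude/longitude records:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km ~ Earth's mean radius

# Hypothetical example: Providence, RI to Boston, MA (roughly 66 km apart)
print(round(haversine_km(41.8240, -71.4128, 42.3601, -71.0589), 1))
```

Distances like these are the building blocks of the neighborhood-level analyses discussed below, such as measuring how far individuals live from services or from one another.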

EDITOR Aryana Javaheri ‘20

The data has been adjusted based upon a variety of characteristics (geographic, income, etc.) and can be used to measure the extent of income inequality in various neighborhoods. GIS data, on the other hand, enables researchers to draw conclusions about a small set of people and then extrapolate these to other populated areas. This method may carry inherent error, however, indicating a need for standardization of the data (specifically, using it only for neighborhoods with similar characteristics). For instance, Chan et al. used GISs to measure the extent of community integration among once-homeless people; similarly, even Census data can be used to evaluate geographic trends, as Fiedler et al. showed in their analysis of immigrants in Vancouver [5,6]. One pitfall of these small-scale analyses, however, is that they cannot be applied in a general fashion: the factors affecting one community's statistics may differ from those affecting another's. As time goes on, algorithms evolve to provide ever better estimates of a dependent variable given input data. Machine learning algorithms, which have become popular in recent years, can be used to “train” computers to draw near-correct conclusions for new data based upon real data. As a result, algorithms such as Naïve Bayes and K-means clustering can help integrate more variables into researchers' analyses of problems such as homelessness and economic disparity based upon Census data, and out-of-the-box machine learning packages (such as h2o.ai) can greatly simplify the data analysis portion of demographic research. In addition, other uses of GIS include mapping homeless populations and their spread, as well as identifying seasonal or yearly trends. Based on these results, municipal governments can decide whether to erect more homeless shelters or to focus on initiatives that provide subsidized or rent-free housing. Data can and must be used for social good. Data science may currently seem elitist and far removed from day-to-day problems, but that impression could not be further from the truth: there is a wide variety of uses for data science as a tool for enacting social change. After all, there are terabytes, if not petabytes, of public, open-source data just waiting to be analyzed, so why not go forth and actually analyze it?

[1] Public-Use Files - U.S. Census Bureau [Internet]. 2015 [cited 27 October 2016]. [2] US Department of Commerce. U.S. Neighborhood Income Inequality in the 2005–2009 Period. Washington, D.C.: United States Census Bureau; 2011. [3] Orlandini R. What is Geospatial Data? - Geospatial Data - CampusGuides at York University [Internet]. 2016 [cited 27 October 2016].

[5] Chan D, Helfrich C, Hursh N, Sally Rogers E, Gopal S. Measuring community integration using Geographic Information Systems (GIS) and participatory mapping for people who were once homeless. Health & Place. 2014 [cited 27 October 2016];27:92-101. [6] Fiedler R, Schuurman N, Hyndman J. Hidden homelessness: An indicator-based approach for examining the geographies of recent immigrants at-risk of homelessness in Greater Vancouver. Cities. 2006 [cited 27 October 2016];23(3):205-216. Available from: http://www.sfu.ca/gis/schuurman/cv/PDF/2006Cities.pdf

[4] GIS (geographic information system) [Internet]. National Geographic Society. 2016 [cited 27 October 2016].
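To illustrate the clustering idea mentioned above, the sketch below groups hypothetical census-tract-like records. It uses a tiny hand-rolled K-means rather than an out-of-the-box package like those named in the article, purely so the example is self-contained; the data values are invented for illustration.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means: points are (x, y) tuples; returns the cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each center to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers

# Hypothetical tract records: (median income in $1000s, density in 1000s/sq mi).
tracts = [(25, 12), (27, 11), (24, 13), (95, 2), (90, 3), (98, 2.5)]
print(sorted(kmeans(tracts, 2)))  # two centers: one low-income/dense, one high-income/sparse
```

In practice a researcher would feed in many more variables per tract, which is exactly where the packages mentioned above earn their keep.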



A Brain That Glows:

An Exploration into GFP as a Probe for Analyzing Neural Activity AMITA SASTRY ‘20

Flipping the page to reach this article. Using your eyes to read and interpret the words of this very sentence. Climbing a tree, checking out a library book, grabbing a slice of pizza for dinner. These are all relatively simple tasks, but without our brains, it would be impossible to do any of them. We spend all day trying to be productive, having an immeasurable number of thoughts at any given point in time, yet we seldom stop to think about how our brain can be the mastermind behind all of our abilities. The essential structure behind this amazing organ is the synapse: the junction between two nerve cells and the facilitator of communication between our brain cells, or neurons. To carry out such an expansive range of processes, the synapse must be pretty large, right? Consider the thickness of this page: it measures about 100,000 nanometers. Now consider the fact that synapses are thousands of times smaller than that, a mere 20-40 nanometers wide [1]! How can something so small orchestrate such complex tasks? In general, synaptic transmission begins when the negatively charged neuron becomes depolarized (undergoes a change in its voltage) to a value positive enough to signal the cell to release neurotransmitters at the synapse. Neurotransmitters are better known to us as chemicals such as serotonin, dopamine, and adrenaline. The reception of these chemicals can then result in any one of a variety of responses by the postsynaptic cell (the cell receiving the signal), translating to actions or behaviors in the body.

ARTWORK Jennifer Osborne ‘20

EDITOR Brian Zhao ‘19

Research on synapses seems to be an endless pursuit, as our knowledge of the synapse is constantly changing, growing, and improving with the refinement of the techniques we use to study them. Traditional electrophysiological methods of studying the synapse include voltage and current clamps, in which two electrodes are placed into the cell: one to measure the membrane voltage, and another to inject a current that adjusts the membrane potential to a desired value [2]. In essence, this method requires scientists to poke the neuron with electrodes to change and measure the voltage, a fairly invasive process. Since the inception of this method, others have been developed in the hope that they might offer a more comprehensive or meaningful perspective. Among these recent advances are synthetic Ca2+ indicators [3] and a promising newer development: fluorescent proteins obtained from bioluminescent species [4]. One such protein is green fluorescent protein, or GFP, a glowing molecule that has been increasingly useful in visualizing the structure and behavior of synapses. Its origin is not in the laboratory but in a colorful ocean reef, where GFP must be extracted from a bioluminescent creature such as the jellyfish Aequorea victoria [5]. The protein is then fused to a functional protein involved in controlling membrane potential [5], or in other physiological systems of interest within the neuron. Together, the protein conjugate is termed a “probe” and is introduced into an intact organism using specific targeting signals [5]. Ideally, as the neuron fires, the membrane potential changes from its original value, and the voltage-sensing fluorescent protein simultaneously undergoes a visible change in fluorescence, indicating that synaptic activity is occurring. Designated a Nobel Prize-winning “guiding star”, GFP has been applied in amazing ways, from studies of how cells are affected by Alzheimer's disease to how cells produce insulin in the pancreas [6]. In addition to providing insight into these other biochemical processes, it continues to be a cornerstone of research on neural activity. When compared to traditional methods of studying synapses, GFP outshines its competitors (quite literally!) in a number of ways. Cell stains and dyes illuminate particular structures in the neuron statically, but cannot show their mechanisms in action. Gathering detailed information about all of the events that occur during synaptic transmission could be a major breakthrough in both disease treatment and prevention, and this is perhaps the biggest reason for GFP's prevalence over other methods of studying cells. GFP offers other advantages too: it boasts increased specificity (i.e., it can be targeted to specific neurons or structures) [4] and is much less invasive than voltage and current clamps [7]. It is also useful for long-term research: GFP can be genetically modified to remain conjugated (united) to the biological protein, while synthetic Ca2+ indicators, for instance, are cleared from the cytoplasm (intracellular fluid) [3]. GFP has been responsible for many wonderful discoveries, but it can be much better. Fluorescent protein-based voltage sensors have been known to have small response magnitudes and slow kinetics [7]. In other words, GFP often fails to fluoresce noticeably enough to consistently gain information from it, and it responds rather slowly. This can hinder research that uses GFP to monitor live processes, as scientists may need a rapid and bright response from the protein to yield useful results. Thanks to ever-growing research on GFP, structural and molecular improvements have shown promising results in various trials. One such improvement is a probe known as ArcLight, which makes use of a GFP variant termed “pHluorin”. ArcLight has been shown to have a quicker response speed and an impressive fluorescence intensity change of approximately 35%, roughly five times larger than the signals from previously reported fluorescent protein voltage sensors [7]. Studies are also looking into changing the molecular composition of GFP so that it can do not only its own job but also that of the functional protein, without needing a separate, external conjugation process. This is a challenging endeavor because the GFP must be altered without disturbing the chromophore (the structure responsible for its glowing color), but it is well worth the effort: the approach would be even less invasive, allowing the GFP to be integrated into the host organism almost seamlessly [4]. And possibly



the most interesting new undertaking for researchers is obtaining GFP-like proteins from new species, such as Anthozoa (i.e. sea anemones, corals, etc.), in the hopes that they may contain new biochemical properties that can lead to even better understanding of neural function [5]. Owing to its superstar qualities as a probe and in other scientific contexts, GFP shows incredible promise for groundbreaking advances yet to be made in research and medicine. Its utility in monitoring neural activity already has a range of applications from allowing research on very specific cells and structures to understanding cell

processes that may lead to disease. Essentially, we are able to use GFP to comprehensively measure brain activity that is naturally occurring – already a major leap from traditional methods. But what if we could use it not just to observe but to actually make certain activity happen in the brain? A “bidirectional optical interface to brain function” means just that - and has been offered as a possibility for the future [4]. This application can be combined with future methods of restricting GFP to one exact circuit with a specific purpose (e.g. inhibition), effectively bringing us a giant step closer to understanding complex neuronal circuitry. How is this possible? The manipulation and

[1] Sukel, Kayt. The Dana Foundation [Internet]. New York: The Dana Foundation; 2011 [updated 2011 Mar 15; cited 2016 Oct 21]. [2] Guan B., Chen X., Zhang H. Two-electrode voltage clamp. Methods Mol Biol. 2013;998:79-89. [3] Pologruto T., Yasuda R., Svoboda K. Monitoring Neural Activity and [Ca2+] with Genetically Encoded Ca2+ Indicators. J Neurosci. 2004 Oct 24;24(43):9572-9579. [4] Baker, B.J., Mutoh, H., Dimitrov, D., Akemann, W., Perron, A., Iwamoto, Y., et al. Genetically encoded fluorescent sensors of membrane potential. Springer. 2008 Aug 05;36:53-67. [5] Miyawaki, A. Fluorescence imaging of physiological activity in complex systems using GFP-based probes. Current Opinion in Neurobiology. 2003 Oct;13(5):591-596. [6] The Nobel Prize in Chemistry 2008 [Internet]. Nobel Media AB; 2014 [cited 2016 Oct 21]. [7] Jin, L., Han, Z., Platisa, J., Wooltorton, J., Cohen, L., Pieribone, V. Single action potentials and subthreshold electrical events imaged in neurons with a novel fluorescent protein voltage probe. Neuron. 2012 Sep 06;75(5):779-785.


observation of specific circuits yield information about the physiological effects of specific activity, facilitating the creation of a comprehensive neural map [8]. This can provide invaluable insight into the neural basis of debilitating disorders such as Huntington's, Parkinson's, and Alzheimer's disease, autism, and epilepsy [9]. Ideally, this will result in the development of enhanced treatments and in earlier detection or even prevention of these disorders, of which we currently have limited understanding. One thing is for sure: GFP, a small fluorescent protein with endless possibilities, will continue to illuminate science for years to come.


[8] Kang, B., Baker, B. Pado, a fluorescent protein with proton channel activity can optically monitor membrane potential, intracellular pH, and map gap junctions. Sci Rep. 2016 April 04; 6; 23865. [9] Max Planck Florida Institute for Neuroscience. Disorders of Neural Circuit Function [Internet]. Max Planck Florida Institute for Neuroscience; 2016 [cited 2016 Oct 21]. Available from: https://www.maxplanckflorida. org/our-science/research-areas/disorders-of-neural-circuit-function/
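The ~35% fluorescence change reported for ArcLight above refers to the standard ΔF/F measure used in imaging: the change in fluorescence relative to a resting baseline. The sketch below, with made-up fluorescence values rather than real recording data, shows how such a trace might be quantified:

```python
def delta_f_over_f(trace, baseline_frames=3):
    """Convert a raw fluorescence trace into ΔF/F relative to a baseline.

    The baseline F0 is the mean of the first `baseline_frames` samples;
    each point is then reported as (F - F0) / F0.
    """
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]

# Hypothetical trace: resting fluorescence ~100 a.u., then a dimming response
# (ArcLight, like many GFP-based voltage sensors, dims on depolarization).
trace = [100.0, 101.0, 99.0, 80.0, 70.0, 85.0, 100.0]
dff = delta_f_over_f(trace)
print(min(dff))  # peak response magnitude (negative: a dimming)
```

A large, fast excursion in ΔF/F is exactly the "rapid and bright response" that researchers need from a probe, which is why ArcLight's fivefold improvement matters.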

Biowarfare’s Line of Fire:

Social Impact of Bioweapons Use in the Syrian Civil War SEAN JOYCE ‘19

In 2012, Syria was painted red; as the civil war between Bashar al-Assad's government forces and various rebel groups continued to escalate, the fires of war engulfed cities like Aleppo and Damascus, resulting in incalculable bloodshed among both civilians and soldiers. Removed from this conflict on the ground, President Obama drew a “red line” on the use of chemical weapons: should al-Assad's forces use them, the United States would intervene. A year later, this line was crossed, with government forces using sarin gas on opposing forces outside of Damascus [1]. No full-scale U.S. military engagement came, but Obama's firm rhetoric on chemical weapons prompts reflection: why do chemical and biological weapons (abbreviated CBW) possess such gravitas within the inhumane phenomenon of war? While the magnitude of individual pain they cause, as well as their potential to harm civilians, are the standard criteria on

which CBW are distinguished as particularly abhorrent, an often neglected aspect is their unrivaled potential for causing long-term social damage. Biological weapons in particular represent the greatest threat to the healthy functioning of society, and with specific regard to the Syrian Civil War, are all the more dangerous. Should bioweapons find use in the Syrian conflict, a post-war Syria will be plagued by mistrust of the government and of scientific research, hindering its society as it emerges from a devastating conflict. The history of modern biological warfare and terrorism is fortunately fairly limited, but coupled with similar examples of other weapons of mass destruction, it establishes enough precedent to consider their social impact. In its traditional definition, a biological weapon is any microbe, toxin produced by a microbe, or virus that is purposely used to cause disease among an opposing group; the most prominent examples include the causative agents of botulism, plague, anthrax, and hemorrhagic fevers. Few state governments have used them, with Japan's use of plague against China during WWII the most prominent example, although the United States and the Soviet Union both developed extensive bioweapons programs during the Cold War [2]. Strategically, biological weapons are difficult for states to use, due to their uncontrollable nature and their likelihood of harming unintended targets like allied troops or civilians. Bioterrorism, on the other hand, seems an increasingly likely threat, given the continued advancements in biotechnology; for a terrorist with no worry of collateral damage, the main drawback of biological weapons use is eliminated. In Syria, the threat follows this mold,

EDITOR Elena Renken ‘19



leaning mostly towards the terrorism side of the spectrum. Intelligence experts have concluded that Syria has the capability to produce biological weapons such as anthrax, ricin, or botulinum toxin, although it is doubtful any weaponized agents currently exist in large amounts [3]. They further agree that with government and rebel forces fighting in close quarters, use of bioweapons by these groups is unlikely. The rise of ISIS and other terrorist groups, on the other hand, poses a much more significant threat: should these non-state groups wrest control of bioweapons facilities, they would be much more inclined to produce and use them. Syria's extensive bio-pharmaceutical industry [4] is situated primarily around Aleppo, a site of major conflict and encroachment by terrorist groups [3]. Indeed, ISIS has already made forays into chemical weapons development, demonstrating its aspirations for more advanced CBW [5], while Al-Qaeda has previously attempted to develop bioweapons [3]. If these groups succeed in developing bioweapons, Syria is ill-equipped to respond: its health care personnel and infrastructure have been devastated by the war [6], and as the 2001 anthrax attacks show, the monetary cost of responding to even a relatively small bioweapons attack runs to hundreds of millions of U.S. dollars [7]. With its ongoing war, Syria simply does not have the materials, funds, or medical professionals to respond effectively to a bioweapons attack. It is this response, regardless of its effectiveness on a public health level, that would be one of the key vectors by which a bioweapons attack causes social harm. Bioweapons are principally agents of fear, given the public's limited understanding of disease and the weapons' invisibility [8]. As a government tries to react, it must combat this paranoia in addition to the spread of the disease itself. The difficulty of overcoming these often conflicting forces can be seen in the 2001 anthrax attacks. In this bioterrorism incident, letters containing Bacillus anthracis, the bacterium that causes anthrax, were sent through the U.S. Postal Service, killing several people and injuring nearly two dozen more [2]. As public health officials reacted to the attack, they initially provided very little information and downplayed the threat to postal workers who could still be at risk [9], likely in an attempt to reduce fear. The postal workers, however, began to question their safety and were exposed to media reports that contradicted what officials had said, leading to a sense of uncertainty among the workers [9]. This lack of accurate communication by officials ultimately led to a breakdown of trust between postal workers and the government agencies, which would further impact health measures. As health officials attempted to vaccinate postal workers at risk of anthrax, many workers refused the vaccine, citing their lack of trust following this misinformation, as well as fear of being experimented on and skepticism about vaccine side effects [10], suspicions likely due in part to a lack of scientific understanding. Some drew on historical medical perversions as well: the Tuskegee syphilis experiments, in which the U.S. government performed experiments on African Americans under the guise of free medical care, were mentioned by minority groups as contributing to their distrust of the response [10]. Furthermore, minority groups were also less likely to believe that the health response was fair across racial and socioeconomic lines [11].

A bio-attack’s degradation of trust is especially problematic in the case of Syria. The Syrian Civil War is divided along semi-sectarian lines, with certain religious and ethnic groups largely supporting a specific side [12]. With sectarian tensions already running high, a bioweapons attack would prompt a response that some groups would inevitably see as favoring others. Additionally, scientific literacy in Syria is relatively poor [13], which would produce fear of the disease and less understanding of treatment measures on a scale larger than that seen in the United States. Given Syria's war-ravaged health response system, effective communication with affected individuals would be even more challenging than in the U.S. The combination of these factors would likely result in a nearly complete loss of trust in the government, especially among minority groups. Trust between a government and its people is seen as critical to the peacebuilding process after a civil war [14]; attitudes arising from a bioweapons response would seriously undermine the likelihood of lasting peace in Syria. This would not only continue to destroy the lives of those involved, but would have global ramifications, as the war has been contributing to the spread of global terrorism [15] and has inflamed regional and global conflicts such as those between Turkey and the Kurds and between the U.S. and Russia [16]. Biological weapons shape not only the public's view of the government as a whole, but also its perception of scientific research. CBW represent a perversion of scientific research for destructive purposes, and their use shapes people's opinions accordingly. During the course of the Cold War, belief that science was beneficial to society began

to decrease in the American public [17]. This has been attributed to disasters in the U.S. and abroad, such as the nuclear accidents at Three Mile Island and Chernobyl and an anthrax leak in the Soviet Union, as well as to the heightening arms race between the U.S. and the Soviet Union and each nation's desire to use science for destructive purposes. Specific to bioweapons, the United States' past involvement in bioweapons research decades ago still affects public support for research today. A new infectious disease laboratory in Boston, for example, has been met with public opposition due to skepticism about the true nature of its research [18]. Infectious disease research is almost entirely dual-use, meaning it could serve peaceful or harmful purposes, so the public must be willing to trust researchers to pursue research only for good. Should a government pursue bioweapons research, as is potentially the case in Syria, the public will be much

less willing to back future research. At the very least, al-Assad has used chemical weapons, which will surely reduce public support for related dual-use research. This is especially unfortunate in Syria, given the pre-war growth of its relatively advanced bio-pharmaceutical industry [4]. CBW use would erode the public support necessary to pursue dual-use research and, coupled with the general destruction of the war, would hinder Syria's research output for years to come. The prevalence and necessity of dual-use research is a potential issue in itself. With the growth of biotechnology across the globe and developments in bioengineering, the potential to develop bioweapons is only going to proliferate [19]. While biological weapons might not be used during the current Syrian conflict, subsequent conflicts across the globe will only carry a higher risk. In Syria, these weapons will not

[1] Joint Intelligence Organisation. Syria: Reported Chemical Weapons Use. UK Cabinet Office; 2016. [2] Barras V, Greub G. History of biological warfare and bioterrorism. Clinical Microbiology and Infection. 2014;20(6):497-502. [3] Bellamy J. Syria’s Silent Weapons. Middle East Review of International Affairs. 2016 [cited 23 October 2016];18(2).

only inflict mass human casualties but will also hamper both peace efforts and the advancement of scientific research, resulting in international destabilization; the introduction of biological weapons into future conflicts would have a similar impact. Just as natural epidemics like the Ebola outbreak have produced global paranoia, bioweapons use would do so as well, contributing to fear of other parts of the world [20]. Syria has already set a dangerous precedent in chemical weapons use, and steps must be taken to ensure that biological weapons remain off the table. International controls will be necessary to ensure that countries do not pursue offensive bioweapons research and that safeguards are in place to prevent bioterrorism. While Assad may have crossed that red line, global society can work together to safeguard against future scientific transgressions.

[11] Eisenman D, Wold C, Setodji C, Hickey S, Lee B, Stein B et al. Will Public Health’s Response to Terrorism Be Fair? Racial/Ethnic Variations in Perceived Fairness During a Bioterrorist Event. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science. 2004;2(3):146-156. [12] Phillips C. Sectarianism and conflict in Syria. Third World Quarterly. 2015;36(2):357-376. [13] Dagher Z, Bou Jaoude S. Science education in Arab states: bright future or status quo?. Studies in Science Education. 2011;47(1):73-101.

[4] Kutaini D. Pharmaceutical Industry in Syria. Journal of Medicine and Life. 2015;3(3):348-350.

[14] Wong P. How can political trust be built after civil wars? Evidence from post-conflict Sierra Leone. Journal of Peace Research. 2016.

[5] Chivers C. ISIS Has Fired Chemical Mortar Shells, Evidence Indicates. The New York Times [Internet]. 2015 [cited 23 October 2016]. Available from: http://ISIS Has Fired Chemical Mortar Shells, Evidence Indicates

[15] The Dynamics of Syria’s Civil War M. The Dynamics of Syria’s Civil War. Perspective. 2016.

[6] Sharara S, Kanj S. War and Infectious Diseases: Challenges of the Syrian Civil War. PLoS Pathog. 2014;10(11):e1004438. [7] Schmitt K, Zacchia N. Total Decontamination Cost of the Anthrax Letter Attacks. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science. 2012;10(1):98-107. [8] Wessely S, Hyams K, Bartholomew R. Psychological implications of chemical and biological weapons. BMJ. 2001;323(7318):878-879. [9] Quinn S, Thomas T, McAllister C. Postal Workers’ Perspectives on Communication During the Anthrax Attack. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science. 2005;3(3):207-215.

[16] Carpenter T. Tangled Web: The Syrian Civil War and Its Implications. Mediterranean Quarterly. 2013;24(1):1-11. [17] Stern J. Dreaded Risks and the Control of Biological Weapons. International Security. 2003;27(3):89-123. [18] Bansak K. BIODEFENSE AND TRANSPARENCY. The Nonproliferation Review. 2011;18(2):349-368. [19] Biotechnology Research in an Age of Terrorism. 1st ed. Washington, D.C.: The National Academies Press; 2016. [20] Caduff C. On the Verge of Death: Visions of Biological Vulnerability. Annu Rev Anthropol. 2014;43(1):105-121.

[10] Quinn S, Thomas T, Kumar S. The Anthrax Vaccine and Research: Reactions from Postal Workers and Public Health Professionals. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science. 2008;6(4):321-333.



Traditional and Bio-medicine: Personhood and Prospects For Integration OLIVIA MOSCICKI ‘18 I was struck by the silence. As my fellow students and I sat barefoot in the traditional healer’s home outside the capital city of Madagascar, we looked at each other with confused glances. Why won’t he look at us? Why won’t he speak? Does he expect us to start with questions? I scribbled in my interview notes, “He seems shy. It’s quiet. I’m not sure who’s waiting for who.” As we later found out, he was asking ancestral spirits to pardon us for any fady we might unknowingly transgress.1 In Madagascar, fady is the word for taboo. Before entering, he had warned us to remove our shoes. He was correct to assume our ignorance—my outsider’s brain had not realized there might be communications going on that I could not see or hear. Though I knew he was a healer who used candles and mirrors, I had still entered the room with notions of the medicine I knew—doctors in white coats who tend not to talk to spirits.



1 (Group Interview at Lazaina, 6/13/2015)

EDITOR Mark Sikov

As we talked with the traditional healer about his work, I found it somewhat difficult to reconcile his practice with the medicine I knew. I spent this past summer interviewing both traditional and biomedical practitioners in Madagascar about the ways in which they heal. Going back and forth between healers’ homes and hospitals or clinics, I was struck by how different their approaches seemed. Some exploration into the matter suggests that many of the differences between biomedical and traditional medicine lie in diverging perceptions of personhood. Personhood can be considered the cultural understanding of what constitutes a person within a society and environment. What makes them whole? What makes them healthy? How are they understood to fit within their surroundings? My experience interviewing traditional healers has suggested that traditionally understood persons are viewed as whole—intimately tied to their spirituality, their collective community, and their local environment. In contrast, within allopathic medicine, patient and practitioner can be understood and approached in divisible parts—each can be narrowed to the physical, the individual, and the portable. Traditional and biomedical medicine have historically been in opposition to each other, divided by powerful actors and diverging theoretical frameworks. This tension was manifested in the colonial oppression of traditional practices. For example, during the colonization of Madagascar, French rule outlawed the practice of traditional medicine and attempted to replace longstanding healing practices with allopathic medicine. This repressive relationship has begun to change only recently. Beginning in 1996, the government of Madagascar

began to investigate the prospects of legal recognition for traditional medicine.2 Since then, the Ministry of Health has begun to register traditional practitioners and facilitate the formation of the Traditional Health Practitioner Association, an organized body in which traditional medicine practitioners can coordinate with each other and with biomedical practitioners.3 In the post-colonial period, efforts towards integration have begun to emerge in many regions across the Global South. This phenomenon culminated in the World Health Organization’s Traditional Medicine Strategy: 2002-2005 and its follow-up document, Traditional Medicine Strategy: 2014-2023. The WHO defines traditional medicine as “the sum total of knowledge, skills, and practices based on the theories, beliefs, and experiences indigenous to different cultures that are used to maintain health, as well as to prevent, diagnose, improve, or treat physical and mental illnesses.”4 In 2002, its strategy included plans to “integrate [traditional medicine] within national health care systems, where feasible, by developing and implementing national [traditional medicine] policies and programmes.”5 The document emphasizes the necessity of capturing the “potential contribution of [traditional medicine] to health, wellness and people-centered health care,” particularly in the Global South.

2 World Health Organization. (2001) Legal Status of Traditional Medicine and Complementary/Alternative Medicine: A Worldwide Review.
3 Rasoanaivo, Philippe (2006). Traditional Medicine Programmes in Madagascar.
4 World Health Organization (WHO). 2008. Traditional Medicine.
5 World Health Organization. (2014) WHO Traditional Medicine Strategy: 2014-2023.

However, the follow-up document tempers these earlier goals by acknowledging the daunting challenges the effort faces. Though the integration of traditional and biomedical care models may have great potential to ameliorate the paucity of healthcare for many across the globe and to restore the value of traditional knowledge systems in the eyes of those in power, do the two systems see things too differently to collaborate effectively?

Diverging Frameworks As we sat lined up against the wall in the spacious chapel of the holy grounds of Kingory Doany, a spiritual burial ground atop a massive hill outside Madagascar’s capital city of Antananarivo, the traditional healer told us that his practice is a process which takes care of “body, soul, and spirit.”6 Traditional medicine in Madagascar encompasses the complete individual—one who is not divisible into parts, but must be viewed and treated as a whole. Though in the past both biomedical and traditional practitioners viewed the person in this way, the introduction of the Cartesian divide between mind and body prompted the biomedical lens to depart from this vision. Though most contemporary scientists reject the rigid partitioning of mind and body, specialization still divides psychiatrists from other medical doctors, and treatment of disease located outside the brain is achieved through somatic means. This is in contrast to traditional practitioners, who treat seemingly somatic disease with mental and spiritual means as well. In the biomedical framework,

6 (Kingory, 6/22/15)



the person does not lose their function when they are divided, but their parts can be targeted and understood individually.7 For Malagasy traditional practitioners, patients must be treated with both the physical and the spiritual because they are both spiritual and physical beings. Malagasy spirituality most often features a combination of traditional Malagasy and Christian beliefs. While Christianity precludes the existence of any god but its own God, traditional Malagasy medical practitioners integrate this belief into their own traditional ancestral worship by situating their ancestors’ spiritual guidance in coexistence with God’s. For example, the healer we met with at Soavimasoandro requires all his patients to light a candle. He told us that its light casts away any lurking evil and links the patient to God. The patient must be spiritually connected and protected before the healing may begin.8 However, traditional healers’ treatments are not purely spiritual—almost all also use medicinal plants to treat the ill. In contrast, biomedical patients are to be studied objectively and scientifically—devoid of religion or belief. A doctor may call in a chaplain to guide a patient through emotional turmoil, but that is not necessarily considered part of medicine.9 Biomedical practitioners primarily target antigens and mutations. In the case of pathology, these lead to “malfunctioning cells,” a divisible unit of the physical body that is supposed to be isolated from anything spiritual.10 Moreover,


7 Dubos, 1968, p. 76
8 (Soavimasoandro, 6/22/2015)
9 Sulmasy, 1999, p. 1002.
10 Hahn and Kleinman, 1983, p. 313


pharmaceutical products, an essential tool of the biomedical practitioner, are developed through tests and trials that investigate their physical effects on patients. Though some include important mental side effects such as fatigue or depression, these aspects are still separate from the spiritual concerns of the traditional practitioner. In addition, Malagasy practitioners understand the patient and themselves to be indivisible from the collective community. For Malagasy healers, the coming-of-age moments of menstruation and circumcision delineate a transition from the individual child into the collective adult community, and it is only at this point that a person becomes eligible to practice as a traditional healer.11 Malagasy health is also intimately tied to the status of the ancestor, who remains integrated into the collective long after death. In the afterlife, the ancestor’s spiritual guidance and protection is essential to wellbeing. This dynamic is felt most intensely at Kingory Doany, where ubiquitous tombs house celebrated ancestors to whom one can pray when in need. Here, the collective is felt not only as the community, but as the nation and its history.12 The biomedically understood person may be divided from the group or society in which they exist and retain their personhood—and their functionality as a medical participant. Though this practice seems to be changing as treatment of the social determinants of health gains traction, for the most part the individualism of biomedicine isolates the patient from their social, economic, and political contexts, and the ways in

11 (Rabarijaona, 6/17/2015)
12 (Kingory, 6/22/2015)

which those contexts may cause illness. Within Marxist theory, the individualistic biomedical understanding of personhood is criticized for its tendency to “[divert] attention away from social sources of pathology.”13 Though traditional medical frameworks understand disease in the context of its social origin, biomedical understandings usually confine the patient and his or her pathology to the individual body. To be whole, traditional healers must be grounded in their environment and community. The predominantly plant-based remedies used within traditional medicine are dependent on local ecosystems, and healers must have an intimate knowledge of their environment.14 Local traditional knowledge of medicine is based on local resources—usually in the form of easily accessible plant materials. For example, a healer we spoke with in Andasibe only uses medicinal plants that he or his patients can get themselves from the local forested areas.15 In contrast, biomedically viewed persons are portable and interchangeable. Many doctors do not practice where they are from. At the end of medical school in the United States, students are matched with various hospitals and programs and scattered in many directions. Within Madagascar’s biomedical health system, doctors are hardly ever from the place of their position, especially when they are stationed in rural areas.16 This is primarily a result of a government program in which recent medical graduates are

13 Hahn and Kleinman, 1983, p. 314
14 Dubos, 1968, p. 143
15 (Andasibe, 7/2/2015)
16 (Befelatanana, 6/19/2015)

placed in randomized rural locations.17 Biomedically viewed patients are taken to present the same universal body, and medicines are held to be universally applicable, so patients may be treated “in any place at any time.”18 Though traditional and biomedical frameworks both provide care to the ill, distinct understandings of personhood—as either whole or divisible—lay the foundation for differentiated medical practice.

Prospects for Integration How do these divergent understandings bode for integration? Although the WHO and various governments are exploring ways in which traditional and biomedical medicine can work together, are they simply too disparate? It would seem that, although there are challenges, these two practices need not be isolated from each other. Furthermore, there are too many potential benefits to integration to let challenges prevent collaboration. As communities across the globe, though particularly across the Global South, navigate relationships between traditional and biomedical medicine, integration has emerged as a potential path of mutual benefit. Within an integrated health system, the best aspects of both traditional and biomedical medicine can be utilized, and each system may cut costs for the other. First, traditional medicine may provide a cheaper alternative for low-budget healthcare systems. While pharmaceutical resources are usually imported and thus come at great cost to

17 (CSB 2 Andasibe, 6/7/1/2015)
18 Sulmasy, 1999, p. 1003

the government and individuals, local plants used in traditional medicine are both easily accessible and affordable. In many countries, patients already turn to traditional medicine when they cannot afford or reach biomedical care. According to the World Health Organization, roughly 80% of patients in African countries depend on traditional medicine.19 In Madagascar, the most extreme price differentials show biomedical treatments costing up to 10 times more than widely used traditional treatments for the same condition.20 In addition, the use of traditional medical knowledge may cut costs for pharmaceutical research. For example, the Swiss pharmaceutical company Novartis has searched traditional Chinese medical resources for potentially effective chemical compounds. In fact, its treatment for malaria originates in a traditional treatment for fever found in the plant sweet wormwood. Paul Herring, the head of corporate research at Novartis, says, “there are so many compounds in nature, from the seas to the jungles, it’s very difficult to know where to start…China has thousands of years’ experience of using plants in Chinese traditional medicines. The idea was, why not use the Chinese experience as a kind of filter?”21 Integration of traditional and biomedical practices could also lead to more

19 Kaptchuck, Ted J., & Tilburt, Jon C (2008). Herbal medicine research and global health: an ethical analysis. Bulletin of the World Health Organization.
20 Quansah, Nat (2010). Integrated Health Care System: An Approach to Sustainable Development. SelectedWorks of Nat Quansah.
21 Zamiska, Nicholas (2006). “On the Trail of Ancient Cures.” Wall Street Journal.

effective public health initiatives. On the one hand, traditional medical practitioners lack access to many of the effective resources and knowledge which biomedical approaches can deliver. A study in Haiti revealed that “children of women who often or always sought care from traditional healers were 53% less likely to be fully vaccinated than were children whose mothers never used traditional healers.”22 Integrating biomedical technologies such as these into traditional practice could drastically increase access to lifesaving interventions like vaccination. In addition, biomedical training in methods to prevent the spread of infectious disease could greatly reduce the perceived danger of unsanitary traditional practices. In the “School and Community Health Project” in Kavrepalanchowk, Nepal, trained healers were shown to possess “significantly better knowledge of prevention of malnutrition, acute respiratory infection, diarrhea, and HIV/AIDS, and were better able to identify the symptoms of those illnesses.”23 However, the transfer of biomedical knowledge to traditional healers is not the only, nor the most important, way in which collaboration could provide more comprehensive public health. Indeed, this unidirectional collaboration risks once again underestimating the value of traditional medical knowledge and continuing an oppressive imposition of colonial practices. Importantly, beyond their medical expertise, traditional practitioners possess invaluable cultural knowledge that can aid in the distribution of care and reassert local agency’s role in local health. For example, the integration of Nepalese traditional healers into the United Nations Development Program’s HIV/AIDS prevention effort in Nepal’s Doti district increased the effectiveness of the program. Traditional healers “provided culturally acceptable HIV/AIDS education to the local people, distributed condoms, and played a role in reducing the HIV/AIDS-related stigma.”24 While public health initiatives run by outside agencies or purely biomedically driven frameworks are often divorced from local beliefs and practices, and struggle or cause harm as a result, traditional healers can use their knowledge to help embed and adjust these initiatives to their local context.

Projects such as these provide hope for the successful collaboration of traditional and biomedical medical systems. Beyond those at the WHO, many authorities from inside and outside the medical sphere say the two systems can—and should—work together. Dr. Michel Ratsimbason, director of Madagascar’s National Center of Applied Pharmaceutical Research, believes that traditional and biomedical frameworks are “different systems, ways of thinking, really.”25 However, he also believes these different systems can complement each other and work together effectively for the improvement of Malagasy health. Furthermore, many traditional healers within Madagascar are beginning to work in conjunction with the biomedical system. No longer forced to work illegally, many of the traditional healers my group and I interviewed in Madagascar this summer expressed excitement about having the opportunity to register with the government and work with biomedical doctors. Though it is no longer running due to lack of funds, the ‘Clinique de Manongarivo’—a health center piloted in Madagascar from 1993 to 1997—revealed that integration is functionally possible.26 At the clinic, traditional and biomedical practitioners worked together to heal patients—despite seemingly different understandings of their own and their patients’ personhood as either whole or divisible. Though there are many ways in which diverging views of personhood divide traditional and biomedical medical practice, it seems that they may still be able to work together to provide integrated and comprehensive care to all, or at least to more, people.

22 Unite For Site. (2015) Module 6: Integrative Medicine - Incorporating Traditional Healers into Public Health Delivery.
23 Ibid.
25 (CNARP, 6/25/2015)
26 Quansah, Nat (2010). Integrated Health Care System: An Approach to Sustainable Development. SelectedWorks of Nat Quansah.

[1] Dubos, René (1968). Man, Medicine, and Environment. New York: Mentor. [2] Hahn, Robert A., & Kleinman, Arthur (1983). Biomedical Practice and Anthropological Theory: Frameworks and Directions. Annual Review of Anthropology, 12, pp. 305-333. [3] Kaptchuck, Ted J., & Tilburt, Jon C (2008). Herbal medicine research and global health: an ethical analysis. Bulletin of the World Health Organization, 86, pp. 577-656. [4] Quansah, Nat (2010). Integrated Health Care System: An Approach to Sustainable Development. SelectedWorks of Nat Quansah. [5] Rasoanaivo, Philippe (2006). Traditional Medicine Programmes in Madagascar. IK Notes, 91. [6] Sulmasy, Daniel P. (1999). Is Medicine a Spiritual Practice? Academic Medicine, 74, pp. 1002-1005. [7] Swerdlow, Joel (2015). Nature’s Rx. National Geographic. [8] Unite For Site. (2015) Module 6: Integrative Medicine - Incorporating Traditional Healers into Public Health Delivery. [9] World Health Organization. (2014) WHO Traditional Medicine Strategy: 2014-2023. [10] World Health Organization. (2001) Legal Status of Traditional Medicine and Complementary/Alternative Medicine: A Worldwide Review. [11] World Health Organization. (2008) Traditional Medicine. [12] Zamiska, Nicholas (2006). “On the Trail of Ancient Cures.” Wall Street Journal. [13] Note: the citations above in parentheses include the date and location within Madagascar of my own referenced observations.

EDITOR Lillian Cruz ‘17


Pedophilia HATTIE XU ‘19

High-profile cases of pedophilia have commanded national and international attention. These stories are often shocking and deeply upsetting, with topics ranging from the decades-long sexual abuse of children by leaders of the Catholic Church [1] to the vast depths of child pornography rings [2]. By and large, we only hear about pedophiles when they are in the news for being caught committing a crime. There are, however, pedophiles who do not act on their urges and instead hide their compulsions [3]. It is difficult to estimate the size of this population precisely because they are not participating in illegal activities that would get them caught. These pedophiles are wary of approaching professionals for fear of being reported to the authorities once they reveal their condition [4]. Even when they do reach out, they are often denied help.

What is pedophilia? A person diagnosed with pedophilia must fulfill several criteria: they must be at least 16 years old and at least five years older than the children fantasized about, have “recurrent, intense sexually arousing fantasies, sexual urges, or behaviors” about prepubescent children for at least six months, and experience distress because of these urges [5].

These symptoms and feelings of distress show that pedophilia is not a condition that a person can easily suppress. In fact, many researchers classify pedophilia as a sexual orientation – an identity innate to the individual [6]. Almost all pedophiles are male, and many also exhibit co-occurring disorders, such as other types of paraphilia, personality disorders, or mood disorders [3]. This association with a plethora of other mental conditions suggests that pedophilia has neurological bases.

Researchers are unsure how the condition arises, but some have identified biological distinctions between pedophiles and non-pedophiles. For example, a group of scientists compared magnetic resonance images (MRI) of the two groups and found that the brains of pedophiles showed lower white matter volume in two fiber bundles [7]. Though this is not conclusive evidence of a physical cause for pedophilia, it does support the idea that the brains of pedophiles are somehow different. They are predisposed to have these urges; pedophilia is not a choice.



Stigma and Social Perception There are few identities more stigmatized in the Western world than that of a pedophile [8]. A group of researchers conducted two surveys, comparing perceptions of pedophiles to perceptions of alcohol abusers in the first study and to perceptions of sexual sadists and antisocial people in the second [9]. The surveys asked participants to indicate whether they agreed with statements such as, “When I think of a person (with a dominant sexual interest in children/who drinks large amounts of alcohol almost daily), I feel afraid,” or whether they “[w]ould accept these persons in my neighborhood.” The two studies were conducted with different groups, but both found that the participants judged pedophiles

more harshly than the other groups of people. In the first survey, 13.69 percent of participants agreed with the statement that it would be better for pedophiles to be dead, in contrast to 2.98 percent when referring to alcohol abusers. In the second survey, 28 percent of participants agreed that pedophiles were better off dead. This stigma against pedophilia is reflected in our culture as well. Vladimir Nabokov originally intended to publish Lolita, his novel about a man’s obsession with a 12-year-old girl, under a pseudonym because of the controversial nature of its contents [10]. Prior to publication, Nabokov also required those who read his manuscript to keep it a secret. There are now entire books

written about the best ways to teach Lolita in classrooms due to the sensitivity of the topic, which can elicit strong reactions from some students [11]. Sometimes, hatred of pedophilia can translate into real-world action. In 2000, residents of a town in the United Kingdom protested suspected pedophiles who were named in a now-defunct tabloid that had run a “naming and shaming pedophiles campaign” [12]. In some instances, the demonstrations escalated into full-blown riots, with protesters smashing windows and setting cars on fire. These severe and immediate actions against people not even confirmed to be pedophiles highlight the hatred that the label evokes.

Need for Intervention The intense stigma surrounding pedophilia can be harmful and counterproductive to efforts to reduce the sexual abuse of children. Instead of helping pedophiles find ways to cope rather than act on their urges, society often reacts only once a pedophile has already committed a crime. The social stigma creates an environment in which pedophiles may not feel safe approaching professionals for help, which may make them more likely to live in secrecy or resort to fulfilling their fantasies after a lapse in self-control. Before I continue, I must make it clear that I am not defending pedophiles who have committed crimes and sexually abused children. These pedophiles deserve legal punishment because their actions have harmed innocent lives. However, as a society, we should be



more open to talking about all psychiatric conditions, including pedophilia. Intervention programs are a worthy undertaking because they may help deter pedophiles from acting on their impulses. Though the proportion of pedophiles in the general population is unknown, estimates range from one to five percent [6]. That may seem like a small number, but it translates to 3,189,000 to 15,945,000 people living with pedophilia in the United States alone. One of the most notable pedophilia intervention programs is Prevention Project Dunkelfeld, which was piloted in Germany between 2005 and 2011 [13]. After a media campaign, 596 men expressed interest in participating, and the project eventually enrolled 319 pedophiles in a one-year treatment program that used “pharmacological [and] psychological [...] intervention strategies,” including hormonal therapy and cognitive-behavioral techniques. During post-project assessment, the men who participated reported feeling less lonely, better able to cope, more empathetic toward victims, and better able to sexually self-regulate. Prevention Project Dunkelfeld provides evidence that intervention programs have a positive effect on pedophiles and help them better control their impulses, suggesting that they are less likely to harm children. In a similar effort to reduce feelings of loneliness, Circles UK introduces convicted sex offenders to a circle of volunteers who provide social and emotional support to the offender [14]. The circle helps the offenders identify behaviors that may lead to recidivism – “a person’s relapse into criminal behavior” [15] – and develop social skills and

coping abilities. Researchers reviewed the literature on the effectiveness of Circles programs in various countries, and they found that sex offenders who participated in the program had much lower recidivism rates than those who did not [16].

Stop it Now! is another project that allows pedophiles to reach out to others [17]. The project offers a telephone helpline for people who are victims of sexual abuse as well as those who are at risk of abusing. Operators encourage those who call in to recognize behavior that could lead to abuse and refer them

to other services for help. In a study about the program’s effectiveness, callers were asked to answer a questionnaire evaluating how they felt after a call. Most people reported feeling more in control of their urges and less isolated [18].

Concluding Thoughts It may not be comfortable to talk about pedophilia, but we need to have conversations in order to effectively address child sexual abuse. The work cannot be left completely to those researching the condition; intervention programs depend on the help of the general population. Supporting pedophiles who are trying to improve themselves requires cooperation between mental health experts who teach cognitive-behavioral strategies and ordinary citizens who provide social circles.

It is easy for anger towards these crimes to manifest as the dismissal and shunning of pedophiles, but we must remember that there are many pedophiles who recognize the harm they would inflict if they acted on their urges. Instead, they restrain themselves, but are still too afraid of being ostracized to seek help. We must be empathetic, keeping in mind that pedophilia is a psychiatric condition, and strive to help even if we cannot fathom how pedophiles think the way they do.

[1] Rezendes M. Spotlight Church abuse report: Church allowed abuse by priest for years [Internet]. 2002 [cited 22 October 2016]. [2] Berger J. 71 Are Accused in a Child Pornography Case, Officials Say [Internet]. 2014 [cited 22 October 2016]. [3] Pessimism about pedophilia [Internet]. Harvard Health Publications. 2010 [cited 22 October 2016]. [4] Muller R. Non-Offending Pedophiles Suffer From Isolation [Internet]. Psychology Today. 2016 [cited 22 October 2016]. [5] The DSM Diagnostic Criteria for Pedophilia [Internet]. 2015 [cited 22 October 2016]. [6] Zarembo A. Many researchers taking a different view of pedophilia [Internet]. 2013 [cited 22 October 2016]. [7] Cantor J, Kabani N, Christensen B, Zipursky R, Barbaree H, Dickey R et al. Cerebral white matter deficiencies in pedophilic men. Journal of Psychiatric Research [Internet]. 2008 [cited 22 October 2016];42(3):167-183. [8] Feldman D, Crandall C. Dimensions of Mental Illness Stigma: What About Mental Illness Causes Social Rejection? [Internet]. Journal of Social and Clinical Psychology. 2007 [cited 28 October 2016];26(2):137-154. [9] Jahnke S, Imhoff R, Hoyer J. Stigmatization of People with Pedophilia: Two Comparative Surveys. Archives of Sexual Behavior [Internet]. 2014 [cited 22 October 2016];44(1):21-34.

Initial assessments of intervention programs have been promising, so we must ensure that this progress continues. Preventing abuse before it happens is the most effective way to combat it, and we owe it to children everywhere to make the effort.




The Fickle State of

Being Bored ALEX SONG ‘20

Many things we have to do are boring. We have all sat through a dull lecture, a long car ride, a soporific meeting, or some terribly dry required reading. What do all of these have in common? They induce boredom, a feeling so forgettable and so regular that we rarely give it much thought. But what is boredom exactly, and is there a difference between finding a class boring and being prone to boredom? For ages, scientists and the general public alike paid little attention to this phenomenon, dismissing it as simply an unavoidable trait of the human condition. Only recently have researchers delved deeper into the study of boredom in the hopes of better understanding how the brain functions and how to steer clear of boredom's negative consequences.

Long before people knew about neurons and synapses, the concept of boredom was clearly evident to philosophers and sociologists. Deemed "the root of all evil" by the Danish philosopher Kierkegaard [1], boredom is a natural force that generations of scientists have pondered and that seems to have plagued mankind for ages. The first attempt to observe and quantify boredom came in the early 20th century. With the emergence of factory labor, a surge of industrial psychologists studied boredom on various assembly lines; however, when interest (and funding) from the English government dwindled, industrial psychologists abandoned boredom for more "valuable" academic endeavors, and this line of research fell into obscurity until much later [2].

EDITOR Aolin Zhang '18

Scientists have lately become increasingly interested in studying boredom as a way to better understand its correlation with other psychological and physical issues. Experts have recently proposed that boredom can be categorized into two groups: trait and state boredom. Trait boredom is a chronic sense of boredom, often associated with depression, while state boredom is a lack of stimulation felt due to a singular event [3]. As with depression, there are many preconceived notions, both positive and negative, about what being bored might indicate, but many of those ideas are being revised in favor of more complete conclusions, while others have been abandoned entirely. The issue at hand is that each individual is, obviously, an individual, and deals with personal issues differently than others do. A few years ago it was a mainstream fad to discuss the benefits of being bored; reputable journals and institutions published articles and research praising the creativity- and innovation-inducing properties of boredom [4]. However, even if some people might, while they daydream, come up with the next revolutionary invention, others may fall down a darker path. In studies of binge-eating, boredom was found to be one of the most frequent triggers, and in a 2003 survey, US teens who said they were often bored were 50% more likely to later take up smoking, drinking, and drug use [5]. This is not to say that boredom was exclusively why those respondents drank, and those studies by no means fully explain the human complexities of binge-eating, but these examples do show that people may respond to boredom in various ways. Similarly, simply saying that low job or academic performance is a sign of trait boredom does not take into account whether that student or worker was simply unsatisfied with their school system or job.

The study of boredom is relatively new, and while there have been significant leaps in its research, there are admittedly many gaps to fill. More important, however, are the questions of how you would know if you are suffering from trait boredom and how you could minimize any possible negative effects. Research has shown that men are generally more bored than women [6], and that those in their late teens, when their frontal cortices are still maturing, are more often bored [4]; however, anyone can fall prey to state boredom due to situational factors. The best ways to combat state boredom are to develop new interests, meditate, and reflect on the potential usefulness of a seemingly dull task.

[1] Kierkegaard S. Either/Or. Copenhagen, Denmark: University Bookshop Reitzel; 1843. p. 286. [2] Markey A. Three Essays on Boredom [dissertation on the Internet]. Carnegie Mellon University: College of Humanities and Social Sciences; 2014 [cited 2016 October 23]. Available from: http://repository.cmu.edu/cgi/viewcontent.cgi?article=1424&context=dissertation [3] Weybright E, Caldwell L, Ram N, Smith E, Wegner L. Boredom Prone or Nothing to Do? Distinguishing Between State and Trait Leisure Boredom and its Association with Substance Use in South African Adolescents. Leisure Sciences. 2015 [cited 2016 October 23];37(4):311-331.

There is a plethora of ways to try to stave off boredom, which can be especially crucial for students or for certain jobs, but there are also times when noticing trait boredom could be a sign of health risks. Chronic boredom has real consequences, such as dropping out of school, job absenteeism, and depression [4], and while boredom will certainly not be the only sign nor the only factor, it is important to recognize these risks and speak up if you do have health concerns. One of the most important parts of being healthy is knowing yourself and advocating for yourself. Doctors and mothers alike are inclined to worry if, as a high-achieving student, school becomes a bore for you. So don't let them fuss over you; if you know they are worried about trait boredom, ask them to clarify: "Are you asking if I suffer from state or trait boredom?"

[4] Kubota T. The Science of Boredom. LiveScience [Internet]. 2016 Sep [cited 2016 October 23]. [5] Koerth-Baker M. Why Boredom is Anything But Boring. Nature [Internet]. 2016 Jan [cited 2016 October 23];529(7585). [6] Gosline A. Bored? Scientific American Mind [Internet]. 2007 Dec [cited 2016 October 23];18(6):20-27. Available from: http://www.nature.com/scientificamericanmind/journal/v18/n6/full/scientificamericanmind1207-20.html



AI in Robots

Professor Profile: Bertram Malle ANDREW THOMSON ‘18

You relax in your seat, reading the paper and chatting on the phone as your driverless car speeds down the highway. Suddenly, a cement truck pulls in front of you and your robotic driving system is faced with a decision: should it swerve out of the way of the truck to save you, or collide with the truck and sacrifice itself in an attempt to protect the cars behind you? Bertram Malle, a psychologist at Brown University, is searching for the answer to this question [1]. Professor Malle explores how morals can be programmed into robots. He calls this morality "moral competence," the ability to judge moral issues logically. Malle says his work is important because scientists worry that advancing robotic technologies will be unpredictable and dangerous, and that technology development may stop if no safety precautions are taken. "There is so much concern about the moral and social competence of these futuristic robots that it could very quickly stop the development of these technologies," he said. "The robots are advancing, but the guidelines for moral limitations have not caught up yet."

Malle believes that in a matter of years driverless cars will be driving us to work, personal aid robots will be caring for our elderly, and home companion bots will be taking out our trash. These robots will confront difficult moral decisions as they become more integrated into people's lives, Malle said, and he wants them to make the right choices. "The robot industry is putting robots on the market that have no limitations. Anything that someone commands a robot to do, the robot will do," he said. "You get mad at your brother and tell the robot to kick your brother down the stairs, the robot will do that. There needs to be a moral framework."

Though Malle is now immersed in the study of artificial intelligence, he did not begin his career in the technology world. He grew up in Austria, where he studied psychology and philosophy at the University of Graz. "The way I looked at the world as a teenager, I was always interested in the complexity of human interaction and human thought, observing how people socialized and interacted," he said. Malle traveled to America to earn a doctorate in psychology from Stanford in 1995, and then worked at the University of Oregon for fourteen years, teaching and leading research [2].

EDITOR Elena Renken ‘19



In 2008, Malle caught wind of a merger between the psychology and cognitive and linguistic sciences departments at Brown University. Malle said it was "a unique opportunity to test out my interdisciplinary work within a single department," so he moved east and began his work at Brown. His work on robots began only a few years ago. "I saw how robotic technologies are potentially so important for society, and it just kind of struck me: how people relate to each other and how people relate to robots may not be as different as we think." For this reason, he said, it is important to develop moral, physical, and social norms for robots before the technology advances too quickly. "If there isn't more understanding of moral competence in robotic and artificially intelligent beings, there is much concern that scientists of the future making complex frameworks will build creations they cannot control." In that case, he said, "scientists may be forced to stop developing robotic technologies entirely." Moral competence is not just one set of morals; it must have relevant limiting factors that will "allow robots to fit seamlessly into their communities," Malle said. He believes this is the only way robots will be integrated into people's everyday lives. "A robot in Providence should reflect morals and laws that exist in Providence, not those that exist in South Korea."

Creating a global moral landscape for a robot involves "many different elements," Malle said. For example, he is working on a robot that elicits moral vocabulary in a simple way. His goal is to create a robot that "could learn to study some texts more than others, which we would train to learn moral language based off of this learning mechanism." He said he hopes that this robot will eventually be able to "learn how people speak and interact with each other in various areas" and "adapt its own speech patterns to match different groups of people." This robot will limit its own actions with a "dynamic set of moral and social norms," a capability that no robots currently have.

To explore how humans interact with moral robots, Malle ran a hypothetical experiment involving a widely used test in which an individual, robot or human, is faced with a runaway train. The individual can either divert the train to a different track, killing one person standing on that track, or do nothing and allow the five people riding the train to die. "People prefer robots to take action in a moral situation more than humans," Malle said. This means that people want the robot to make the calculated and moral decision but prefer humans to remain innocent bystanders. But, Malle said, "This pattern I have described to you only works if you believe this is a mechanical-looking robot. When the robot in the experiment looks like a human, the robot is treated exactly like a human." He added, "That's pretty shocking. Just by making the robot look like a person, suddenly people will treat it differently in a stressful moral situation" [1]. As a result, "It can be quite dangerous to make robots look like humans and have expressions and emotions because none of that is really true. Humans have a biological tendency to anthropomorphize, and we have certain triggers and stimuli that cause us to respond strongly to faces—even on small stuffed animals," Malle said. "Robots resembling humans will be treated like moral beings, when in reality they just calculate moral decisions about the world," he said. "This miscommunication can lead to severe consequences." Malle urges those around him to question what morality means to them. To prepare for the robotic technologies of the future, he said, we must "look for morality inside of our communities, and within ourselves."

[1] Malle BF, Scheutz M, Arnold T, Voiklis J, Cusimano C. Sacrifice One For the Good of Many? People Apply Different Moral Norms to Human and Robot Agents. Brown Pub. 2015 Mar 01. Available from: http:// 2015%20HRI%20Robot%20moral%20dilemma.pdf [2] Malle BF. Brown University Profile [cited 2016 Nov 01].



Imbalance in Blindness

Global Lapses in Medical Treatment SUMAIYA SAYEED ‘20

EDITOR Joseph Chen ‘20

In 1748, French ophthalmologist Jacques Daviel developed a technique that would bring the world closer to curing cataracts. The operation itself was unprecedented. Before his time, hitting the eye with a sharp object in a procedure known as couching was the generally accepted method of removing the obscurity behind the lens; Daviel's approach of surgically cutting into the eye, however, was unheard of. Moreover, the operation allowed Daviel to observe the cataract and discover that the obscurity was not a liquid flowing through the lens of the eye (cataract translates to waterfall in Greek). Rather, the cataract was a solid entity, and today it is understood to be a hardening of the lens nucleus.

Daviel's breakthrough allowed for rapid growth in treating cataracts. Today, the routine procedure – a surgery that uses an emulsifier to break up the cataract and an intraocular lens to replace the old one – can restore clear vision. Furthermore, developments in ophthalmology have expanded beyond treating cataracts to include genetic editing, bionic implants, and bioprinting stem cells to target genetic diseases and large-scale organ or tissue failures. As a result, blindness around the world has decreased significantly, yet its distribution remains starkly unequal [1]. Of the 39 million blind people around the world, 90% live in developing countries, with only 10% in the developed world. Cataracts account for about 50% of blindness in the developing world; in the United States, only around 5% of blindness is due to cataracts [2]. While the ophthalmologic community reveres the technological advancements of eye treatment, some very commonplace procedures remain inaccessible in certain areas, perhaps undercutting those momentous advancements [3].

Many reasons contribute to this blatant disparity. In some areas, such as sub-Saharan Africa and India, the two regions with the highest cataract blindness, the number of ophthalmologists is low relative to the need, resulting in a lack of available treatment. In a study of blind street beggars in Sokoto, Nigeria, 82% of the causes of blindness were avoidable, including corneal opacities (preventable) and cataracts (treatable if detected earlier) [4]. In other cases, many people from poor backgrounds simply do not make use of the opportunities available to them. In one account, an optometrist in Nigeria who set up eye camps throughout impoverished areas found that a considerable portion of those who registered for a free eye checkup never came; this and other studies show that people in these regions often hold misconceptions about, or distrust of, doctors. Studies in Tanzania likewise show poor understanding of cataracts among locals [5]. By contrast, people in the United States are treated for their cataracts as soon as a slight disturbance in the eyes is noticed, preventing potential blindness. In regions with low access to health professionals and little patient education, people receive treatment late or not at all.

This relative helplessness of those who cannot see should evoke some response among humanitarians and charities, and to an extent, a handful have been active in providing care. Two examples are strikingly moving. Helena Ndume, an ophthalmologist and surgeon in Omaruru, Namibia, has performed cataract surgery on hundreds of patients, many of whom had not seen in years. Through cataract camps and clinics, she is able to treat many patients, whom she feels have become more empowered and more a part of society with their restored sight. In India, Asim Sil, another ophthalmologist, has helped transport patients from rural areas, including the Sundarbans region, to hospitals where they can receive cataract surgery. He has even followed up, arranging access to local clinics for postoperative checkups [6].

The truth about cataracts is that, left untreated, their effects can be incredibly debilitating, but with the proper resources they are relatively simple to cure. While the motivations of medical and health care personnel can be rooted in empathy and goodwill, a certain trend reappears: medicine's audience may only be those who "deserve" it. As the evidence suggests, deserving medical treatment depends on how hard people try to seek help and, unfortunately, on the geographic and economic conditions surrounding them. The study of cataract cases around the world leads one to wonder how closely the paths of technological advancement and medical practice truly run in parallel. If medical research is intended to serve a moral purpose, and if current research is completing its moral task of helping the world, the disparity that exists may indeed be due to an inherent trend toward inequality [7]. If it is not, must we rediscover medicine and allow scientists to relearn the moral criteria of their work? We can in fact look back to Jacques Daviel, the man who brought hope in times of despair and health in times of death. The motivation and pure intentions underlying his work can serve as an inspiration. Perhaps as a reminder to us all, the engraving on his tombstone reads "Post Tenebras Lux": "After Darkness, Light."

[1] Brian G, Taylor H. Cataract blindness - challenges for the 21st century. Bulletin of the World Health Organization. 2001;79(3):249–56. [2] Thylefors B, Négrel A-D, Pararajasegaram R, Dadzie KY. Global data on blindness. Bulletin of the World Health Organization. 1995;73(1):115–21. [3] Allen D. Cataract. BMJ Clinical Evidence. 2011 Feb 15. [4] Balarabe AH, Mahmoud AO, Ayanniyi AA. The Sokoto Blind Beggars: Causes of Blindness and Barriers to Rehabilitation Services. Middle East African Journal of Ophthalmology. 2014 Apr;21(2):147–52. [5] Bronsard A, Geneau R, Shirima S, Courtright P, Mwende J. Why are Children Brought Late for Cataract Surgery? Qualitative Findings from Tanzania. Ophthalmic Epidemiology. 2008;15(6):383–8. [6] Dobbs D. Why There's New Hope About Ending Blindness. National Geographic. 2016 Sep. [7] Hardin G. Lifeboat Ethics: The Case Against Helping the Poor. Psychology Today. 1974 Sep.



Grey Matter & Oxytocin: What Separates Men and Women MIRA GORDIN ‘20



ARTWORK Caitlin Takeda ‘20

EDITOR Brian Zhao ‘19

When we think about the biological distinction between men and women, a number of features come to mind: first and most clearly apparent are the sexually dimorphic reproductive organs. Then come secondary sex characteristics, such as body hair distribution or the Adam’s apple, which are governed by sex hormones, but do not have to do directly with reproduction. But are there more nuanced biological differences between men and women that operate on the level of molecular and cellular pathways? Investigating this crucial question could provide a basis for the many inexplicable distinctions between men and women. One such distinction is sex-dependent differences in the prevalence of various mental illnesses. A 2011 report published in the Journal of Abnormal Psychology seeks to identify these differences and to elucidate the underlying sex-based distinction that causes the disparity, a “potentially gender invariant latent structure of psychopathology” [1]. The study found that “women showed higher rates of mood and anxiety disorders, and men showed higher rates of antisocial personality and substance use disorders” [1]. A further analysis using a multidimensional model found that this disparity could be attributed to a tendency for women to internalize their experiences and for men to externalize them. Thus, observed differences in the occurrence of mental illness in men and women can be explained by distinct underlying methods of managing responses to an experience.

Can anatomical differences provide the cause for such differences in mentality? In a 2009 brief communication in the Journal of Neuroscience, researchers sought to determine whether observed differences in the proportion of gray matter in male and female brains were simply a consequence of men's larger average brain size [2]. In order to separate these two factors, the authors paired male and female subjects with similar total brain volume (TBV) and measured the distribution of gray matter. They found that women consistently had a greater volume of gray matter in several regions of the brain. However, they could only speculate as to the concrete biological mechanisms behind this tissue distribution, and therefore could not assign direct consequences to the disparities. Only tentative comments can be made about the effect gray matter has on psychological traits and activities. In order to draw more concrete conclusions, it is necessary to delve into the neural pathways that operate on a cellular level and consider the highly specific details of biochemical interactions. Hormones are signaling molecules that can generate a systemic biochemical response in the brain by activating a variety of cells. For instance, oxytocin is a peptide hormone highly relevant to the regulation of social interaction and behavior [3,4]. It functions to encourage proximity to others and inhibit defensive behavior [3]. In doing so, oxytocin modulates sexual behavior, stress, and anxiety, factors which are often linked to forms

of mental illness. Consequently, it must play a critical role in the biological basis of social psychology and the occurrence of mental disorders. In a 2016 article published in Cell, researchers from the Rockefeller University focused on a specific class of interneurons, often known as "local circuit neurons" because of their role in relaying information between proximal neural structures [5]. This class, known as oxytocin receptor interneurons (OxtrINs), found in a region of the brain called the medial prefrontal cortex (mPFC), is activated by oxytocin [4]. These neurons process signals related to social behavior. Using a technique called optogenetic activation, which allows precise and targeted regulation of neurons through bursts of light, the researchers isolated the effect of oxytocin in male and female mice: activation of OxtrINs increases sociability and regulates sociosexual behavior in females, but not males, and mitigates anxiety in males, but not females [6,4]. The critical result in this study was the finding that this activation is mediated by corticotropin-releasing-hormone-binding protein (CRHBP), a protein that acts against the stress hormone CRH and that is specifically expressed in OxtrINs [4]. In other words, these neurons counteract stress reactions through their production of CRHBP. Only in male mice did CRHBP prevent the activation of layer 2/3 pyramidal cells (pyramidal cells are abundant in neural areas "associated with advanced cognitive functions" and respond to excitatory and inhibitory signals from other neurons) [4,7]. This implies that the neural reaction to stress differs between males and females. Thus, the authors could conclude that OxtrINs modulate sexually dimorphic social and emotional behavior. What implications could such a result have? One is the potential to develop treatments for social and emotional disorders that are customized based on biological sex. The Cell paper

reveals a complex network of interrelated factors surrounding the levels of oxytocin and CRH, both of which need to be taken into account when developing pharmacological therapies that target the mPFC. As a result, drugs that raise or lower the levels of these hormones could have a highly specific effect on the social and emotional mechanisms regulated by OxtrINs when sex is taken into account [4]. Specifically, oxytocin is being examined as a potential therapy for conditions such as schizophrenia and social anxiety disorders [4]. Before implementing such therapies, it will be important to consider the different ways in which male and female patients could respond. Beyond the concrete applications of the discovery, such research compels us to contemplate the nuanced differences in the way men and women respond socially and emotionally to the surrounding world, and the way in which these responses may feed into more serious issues such as mental illness.

[1] Eaton NR, Keyes KM, Krueger RF, Balsis S, Skodol AE, Markon KE, et al. An Invariant Dimensional Liability Model of Gender Differences in Mental Disorder Prevalence: Evidence from a National Sample. J Abnorm Psychol. 2012;121(1):282–288. [2] Luders E, Gaser C, Narr KL, Toga AW. Why sex matters: brain size independent differences in gray matter distributions between men and women. J Neurosci. 2009;29(45):14265–14270. [3] Heinrichs M, Dawans B, Domes G. Oxytocin, vasopressin, and human social behavior. Front Neuroendocrinol. 2009 Oct;30(4):548–557.

[4] Li K, Nakajima M, Ibañez-Tallon I, Heintz N. A Cortical Circuit for Sexually Dimorphic Oxytocin-Dependent Anxiety Behaviors. Cell. 2016 Sep 22;167(1):60–72.e11. [5] Markram H, Toledo-Rodriguez M, Wang Y, Gupta A, Silberberg G, Wu C. Interneurons of the neocortical inhibitory system. Nat Rev Neurosci. 2004;5:793–807. [6] Deisseroth K. Optogenetics. Nat Methods. 2011;8:26–29. [7] Spruston N. Pyramidal neurons: dendritic structure and synaptic integration. Nat Rev Neurosci. 2008;9:206–221.

Food Allergies, EpiPens, & Respondr

a Brown-created health app


Food allergies are everywhere – if you don’t have one yourself, you likely know someone who does. Statistics agree with this: around 15 million Americans have food allergies. [1, 2, 3, 4, 5] Of these 15 million, researchers estimate that around 9 million are adults, [2,3,5] and that 6 million are children. [3,4,5,6,7,8] While people may occasionally joke about “peanut-free tables,” food allergies are no laughing matter. In fact, outside of hospitals, they are the primary cause of anaphylaxis – a potentially life-threatening allergic reaction caused by a rapid and uncontrolled release of inflammatory chemicals (primarily histamines) by one’s immune system. Nausea, vomiting, skin rashes, and constriction of airways are all common symptoms. Unless immediately treated, loss of consciousness and even death can occur.

Food allergies often lead to life-threatening anaphylaxis. Frighteningly, they also seem to be on the rise: in 2008, the CDC reported an 18% increase in food allergies among children between 1997 and 2007. [1] This was a statistically significant increase that surprised many in the medical community. The CDC conducted another study five years later, and the results from the 2013 report were even more shocking: food allergies among children increased by nearly 50% between 1997 and 2011. [9] For reasons still unknown, there had, within a little over a decade, been an enormous increase in food allergies among children. Considering the gravity and prevalence of food allergies, a rapid and effective treatment method is essential. And to date, there is only one medically proven way to treat an allergic reaction: the immediate administra-

tion of epinephrine within minutes of symptoms appearing. [10] Epinephrine is available by prescription in various self-injectable devices, such as EpiPen® and Adrenaclick®. [10] Failure to rapidly treat food-related anaphylaxis with an injection of epinephrine can have potentially fatal results. Each year, there are around 200,000 ER visits for food-related allergic reactions; about half of these become full-blown anaphylactic reactions. [11] While people of all ages are susceptible to food allergies, fatal food-related anaphylaxis occurs most frequently in children and young adults. [12,13,14] This is due to a variety of factors, but the most probable explanation is that people of this age group are simply less aware of their surroundings. Additionally, there is the well-documented phenomenon of some adults “outgrowing” certain food allergies. [23] In America,



EDITOR Lillian Cruz ‘17

38.7% of children with food allergies have experienced at least one severe reaction. [15] Additionally, among children with food allergies, 30.4% are allergic to multiple foods. [15] Overall, food-related allergies cause 30% of fatal anaphylactic shock in children. [19] One of the factors that makes anaphylactic shock so frightening is that it can happen anytime and anywhere, especially with children. When emergency epinephrine is administered in school, there is a 20-25% chance that the child's allergy was unknown at the time of the reaction. [16] Additionally, over 15% of school-age children with food allergies have experienced an allergic reaction while in school. [17,18] EpiPens are manufactured by the pharmaceutical corporation Mylan. They are currently the primary choice for epinephrine delivery systems on the market, but there is a massive accessibility issue: in 2015, the price of the device soared another 32 percent. This is forcing some people to make difficult decisions, but none as difficult as those being made by low-income parents of children with severe food allergies. According to the DRX, a unit of Connecture that tracks drug pricing, the recent EpiPen price increases are among the largest ever seen within the pharmaceutical industry. With insurance company discounts, EpiPen two-packs cost about $415 in the United States, significantly more than in any other nation. Meda, a Sweden-based company, sells two EpiPens for about $85. [20] This seemingly inexplicable price hike has been impacting countless Americans with food allergies, especially those in low-income brackets, like social worker Denise Ure of Seattle, Washington. Denise, who has a severe



peanut allergy, accidentally ingested a nut crumb in 2011. She went into anaphylactic shock and ended up requiring three EpiPens and hospitalization. She was horrified when she realized that an EpiPen two-pack would cost her hundreds of dollars: "I was terrified because there's this life-saving medicine that I needed, and I couldn't afford it." Today, she carries two EpiPens she purchased in Canada, where they sell for half the price. [20]

The big question, then, is: why did Mylan raise the price of EpiPens by 400%? The answer is disturbingly simple: because they could. In 2007, Mylan acquired the EpiPen autoinjector, which dispenses a precisely measured dose of epinephrine. The autoinjector mechanism is the EpiPen's main selling point: during anaphylaxis, someone with an EpiPen knows that they will receive a precisely calibrated dose of epinephrine, a drug that can itself be fatal in too large a dose or if administered improperly. When Mylan first acquired the EpiPen, it cost $57; now it can cost more than $500. [21] Mylan CEO Heather Bresch, who was ultimately responsible for the marketing and pricing of EpiPens after 2007, saw a 671% salary increase over the same period. [22]

At its core, the overpricing of EpiPens is an ethical issue. People with food allergies who cannot afford EpiPens, or whose health insurance doesn't cover the devices, are put at major risk. Without an EpiPen, someone undergoing anaphylactic shock, a condition treatable with a $1 dose of epinephrine, might very well die. [21] More at risk than anyone else, however, are children. Compared with children without food allergies, children with food allergies are 2-4 times more likely to exhibit other related health issues, such as asthma. [1] Additionally, those with both food allergies and asthma are at higher risk of fatal anaphylaxis. [1]

Is there a solution to this complex and multifaceted problem? At Brown HackHealth, a multi-day medical programming competition hosted by Brown University, a team consisting of me and three other Brown sophomores (Marko Fezjo, Jack Bernier, and David Branse) designed an Android and iPhone app called Respondr, which won second place. Respondr provides a quick-access button for anyone experiencing anaphylactic shock who, for whatever reason, does not have an EpiPen on hand. Pressing the button sends a push notification to nearby EpiPen carriers registered in the app's system, speeding emergency relief. Tapping the notification automatically pulls up a map with the fastest directions to the person's location. With enough registered users, the app could get someone suffering from anaphylaxis the EpiPen he or she needs significantly faster than emergency services could.

The app works on both iPhones and Android phones. When it is first opened, a quick registration page pops up. Users can indicate which devices they might need and register which devices they carry in the app's database (the app has notification features for both EpiPens and inhalers); they will receive notifications only for the devices they indicated. After registering, the app is as simple to use as possible. Everything was designed to

be as user-friendly as possible: there is minimal text, and icons are used wherever possible. The app is still in development. We are working on text integration to further speed up emergency response; with it, pressing the app's panic button could optionally text the status and location of the emergency to an emergency contact and/or 911. Additionally, we designed a machine-learning algorithm that takes data sets of past emergencies and uses clustering techniques to identify the areas most at risk. As Respondr collects data through use, the algorithm will get better over time at predicting which locations are

most vulnerable, and it will proportionally increase app activity in those places. In this way, the app is designed to be proactive, not just reactive. Thanks to Apple's platform integration, the app already supports the Apple Watch, so notifications can be sent straight to users' wrists. We are currently working on incorporating more emergency features, such as a registry of CPR-trained individuals and a database of defibrillator locations. We expect the group most interested in the app to be parents of children with food allergies: young children are the most susceptible to food-related anaphylaxis, as they are generally less aware of their surroundings and, proportionally, have more food allergies. To gauge interest, we polled a Facebook

group for parents whose children have allergies. Overwhelmingly, members seemed eager to download the app, and we received several personal messages of enthusiasm: 87% of the 97 members polled reported a desire to use the app. Serious food allergies affect a large percentage of the American population, and they appear to be on the rise. Without prompt treatment, anaphylactic shock can easily be fatal. And with the price of EpiPens over $600 without insurance, the devices are becoming increasingly inaccessible. No one should die for being unable to afford an enormously overpriced device that dispenses a dollar's worth of medication. Ultimately, our goal is to help people, and we believe this app can do so.

[1] Products - Data Briefs - Number 10 - October 2008 [Internet]. 2016 [cited 9 November 2016].
[2] Report of the NIH Expert Panel on Food Allergy Research [Internet]. 2016 [cited 9 November 2016].
[3] Population estimates, July 1, 2015, (V2015) [Internet]. Quickfacts.census.gov. 2015 [cited 9 November 2016].
[4] Gupta R, Springston E, Warrier M, Smith B, Kumar R, Pongracic J et al. The Prevalence, Severity, and Distribution of Childhood Food Allergy in the United States. Pediatrics. 2011;128(1):e9-e17.
[5] Liu A, Jaramillo R, Sicherer S, Wood R, Bock S, Burks A et al. National prevalence and risk factors for food allergy and relationship to asthma: Results from the National Health and Nutrition Examination Survey 2005-2006. Journal of Allergy and Clinical Immunology. 2010;126(4):798-806.e14.
[6] QuickStats: Percentage of Children Aged <18 Years With Reported Food, Skin, or Hay Fever/Respiratory Allergies - National Health Interview Survey, United States, 1998-2009 [Internet]. 2016 [cited 9 November 2016].
[7] Population estimates, July 1, 2015, (V2015) [Internet]. Quickfacts.census.gov. 2016 [cited 9 November 2016].
[8] Sampson H. Update on food allergy. Journal of Allergy and Clinical Immunology. 2004;113(5):805-819.
[9] Products - Data Briefs - Number 121 - May 2013 [Internet]. 2016 [cited 9 November 2016].
[10] Dykewicz M, Fineman S, Skoner D, Nicklas R, Lee R, Blessing-Moore J et al. Diagnosis and Management of Rhinitis: Complete Guidelines of the Joint Task Force on Practice Parameters in Allergy, Asthma and Immunology. Annals of Allergy, Asthma & Immunology. 1998;81(5):478-518.
[11] Clark S, Espinola J, Rudders S, Banerji A, Camargo C. Frequency of US emergency department visits for food-related acute allergic reactions. Journal of Allergy and Clinical Immunology. 2011;127(3):682-683.
[12] Bock S, Muñoz-Furlong A, Sampson H. Further fatalities caused by anaphylactic reactions to food, 2001-2006. Journal of Allergy and Clinical Immunology. 2007;119(4):1016-1018.
[13] Bock S, Muñoz-Furlong A, Sampson H. Fatalities due to anaphylactic reactions to foods. Journal of Allergy and Clinical Immunology. 2001;107(1):191-193.
[14] Sampson H, Mendelson L, Rosen J. Fatal and Near-Fatal Anaphylactic Reactions to Food in Children and Adolescents. New England Journal of Medicine. 1992;327(6):380-384.
[15] Allergy Facts [Internet]. ACAAI Public Website. 2016 [cited 9 November 2016].
[16] McIntyre C. Administration of Epinephrine for Life-Threatening Allergic Reactions in School Settings. Pediatrics. 2005;116(5):1134-1140.
[17] Nowak-Wegrzyn A, Conover-Walker M, Wood R. Food-Allergic Reactions in Schools and Preschools. Archives of Pediatrics & Adolescent Medicine. 2001;155(7):790.
[18] Sicherer S, Furlong T, DeSimone J, Sampson H. The US Peanut and Tree Nut Allergy Registry: Characteristics of reactions in schools and day care. The Journal of Pediatrics. 2001;138(4):560-565.
[19] Bershidsky L, Ponnuru R, McArdle M, Wilkinson F, Sharma M, Sharma M et al. Deaths Show Schools Need Power of the EpiPen: Margaret Carlson [Internet]. Bloomberg View. 2012 Jan 13 [cited 9 November 2016].
[20] How Marketing Turned the EpiPen Into a Billion-Dollar Business [Internet]. 2015 Sept 23 [cited 9 November 2016].
[21] Why Did Mylan Hike EpiPen Prices [Internet]. 2016 Aug 21 [cited 9 November 2016].
[22] CEO of Mylan Pharmaceuticals Sees Salary Increase [Internet]. Forbes.com. 2016 Aug 23 [cited 9 November 2016].
[23] Dhar M. Can You Outgrow Your Allergies? [Internet]. LiveScience.com. 2016 [cited 9 November 2016].
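The "at-risk area" clustering described in the article can be illustrated with a minimal sketch. This is a hypothetical example, not code from Respondr: the k-means routine, the function names, and the toy incident coordinates below are all invented for illustration.

```python
# Minimal sketch of hotspot detection: cluster past incident coordinates
# with a tiny k-means and rank clusters by incident count. All names and
# data here are invented for illustration; this is not the Respondr codebase.
import math

def kmeans(points, k, iters=50):
    """Cluster (lat, lon) points into k groups; return (centroids, labels)."""
    centroids = list(points[:k])  # deterministic init: first k points
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return centroids, labels

def riskiest_area(points, k=2):
    """Return the centroid of the cluster containing the most incidents."""
    centroids, labels = kmeans(points, k)
    counts = [labels.count(c) for c in range(k)]
    return centroids[counts.index(max(counts))]

# Toy incident log: three reports near Providence, RI, plus one outlier.
incidents = [(41.83, -71.40), (41.84, -71.41), (41.82, -71.39), (45.00, -70.00)]
hotspot = riskiest_area(incidents, k=2)  # centroid of the dense cluster
```

In a deployed system the incident points would come from logged emergency-button presses, and a density-based method such as DBSCAN might suit this problem better than k-means, since the number of hotspots is not known in advance.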


