



The Sound Sensation of Synesthesia





chapters: ASU, GWU, Berkeley, Cambridge, Harker, Harvard, JHU, CMU, NUS, Cornell, Georgia Tech, Georgetown, OSU, UC Davis, UChicago, Melbourne, Yale

staff
President: Michael Spector
Editor in Chief: Eric Sun
Finance: Connor Shinn
Outreach: Justine Palefsky, Rudy Chen
Science Policy: Lily Chan, Elena Suglia
E-Publishing: Rachel Occhiogrosso
Managing Editors: Eric Bai, Alexander Meehan
Marketing: Dana Schwartz, Cindy Abarca, Pu-Ning Chiang
Webmaster: Adam Scherlis
Editors: Conor Wuertz, Sarah Lewin, Sonny Kim, Kate Nussenbaum, Jennie Carr, Alexandra Peseri, Elizabeth Perkins, June Choo, Michael Yanagisawa, Shannon McCarthy, Timothy Lim
Layout and Art: Afra Fnu, Chen Ye, Haily Tran, Kaley Brauer, Matthew Lee, Mali’o Kodis, Oliver Lyman, Oyinkan Osambiro
Cindy Abarca, Evelyn Williams, Mara Freilich, Oliver Lyman, Sydney Blattman
Beatrice Steinert, Eric Bai, Insil Choi, Mary O’Connor

about The Triple Helix is an independent, student-run organization committed to exploring the intersection of science and society. We seek to examine the socioeconomic, moral, and environmental implications of scientific advances, and to highlight the surprising ways that science can affect our ideas of humanity. The Triple Helix has an international scope, with chapters all over the globe, ranging from Berkeley to Yale, to Cambridge, Melbourne, and the National University of Singapore. You are currently holding a copy of one of the Brown chapter’s biannual magazines. Inside, you will find a collection of articles written, edited & designed by students here at Brown.

We would like to thank the professors who volunteered their time and expertise as part of our Faculty Review Board, including: David Christensen, Philosophy; Cornelia Dean, Environmental Studies; Mark Dean, Economics; Gerwald Jogl, Biology; Dawn King, Environmental Studies; David Laidlaw, Computer Science; Heather Leslie, Environmental Studies & Biology; Louis Putterman, Economics; Andrea Simmons, CLPS; Tracy Steffes, Education & History; Daniel Weinreich, Biology; Gary Wessel, Biology; and Beth Anne Zielinski, Molecular Pharmacology. Logo design by Chen Ye ‘17. Cover art by Eric Bai ‘15 and Insil Choi RISD ‘14.



Dear reader,

This Spring 2014 issue of The Triple Helix is one of this chapter’s finest yet. It embodies the hard work that the writers, editors, and other staff of this publication have put into each article. As an organization we are in a very exciting time of growth and maturity, and as we evolve, our journal continues to reflect the depth of talent and passion that propels us forward.

The articles in this issue span an impressive range of topics and highlight the interdisciplinary nature of the world we live in. From light pollution to exploring science through dance to the legality of the insanity defense, our writers have dived boldly into current and controversial topics. At the heart of each article is our passion for exploring the intersections of science, society, and law, and a critical approach that reflects the deep thought behind our writing.

As we continue to grow, we are constantly pushing the boundaries of our organization. With our expansion into the digital space, we are now seeking to diversify the content we deliver and to interact more with you, our readers. Therefore, I ask that you let us know your thoughts. If you have any comments or if you wish to join our organization, please contact us or visit our website.

Finally, I must thank our writers, editors, and faculty review board members for their hard work, passion, and insight. This issue serves as a fitting tribute to the time and effort you have dedicated to it.

Eric Sun
Editor in Chief

FROM OUR HEADQUARTERS

One important mission of The Triple Helix is to get our readers thinking critically about the ethical impact of emergent technologies on public policy. There are a number of organizations that do this, but our efforts are unique because we are an independent, student-run publication. We offer an unbiased report on the topics that are most interesting to us, the young people of this generation. I imagine that years in the future, these articles will serve as a valuable record of the youth perspective on the great scientific debates of our time.

Even as The Triple Helix continues to grow and expand in new directions, the core of what we do remains the same: creating an open forum for discussion and education. We must embrace the path forward and expand our readership by broadening our reach. Let us harness the power of social media to promote our original works so that more people can participate in the conversation. We are doing well, but I know we can do even better.

It has been a privilege to serve with my dedicated team over the past eight months. Together, we have laid the foundations for success to build on in the coming months. To our readers and our Triple Helix members, I thank you for your continued support and participation. To our graduating alumni, I look forward to hearing of the accomplishments of our TTH family. Please keep in touch so that we may celebrate your successes with you! Finally, I want to thank each of you for a wonderful term.

Mirulda
CEO





p34 A healthy heart with no blockage transitioning into a blocked heart, indicating need for preventative care. Ink and digital coloring.

6 MOVING CELLS, MOVING PEOPLE Cross-Disciplinary Production of Knowledge




Are They Killing Us?

How Nature and Nurture Discredit the Death Penalty




THE SENSE SENSATION OF SYNESTHESIA
Terrell Jones, Carnegie Mellon
p16 Using patterns to convey various aspects of the human body. Watercolor.



Considering the Potential of Developing Environmental Remediation Technologies

The Human Causes









An Interdisciplinary Endeavor TIMOTHY LIM


Using Human Ingenuity to Solve Scientific Problems




EMAIL PRIVACY AND THE CONSTITUTION





Medical Education Reform JUNE CHOO

56 THE GOOGLE DRIVERLESS CAR “A Cool Thing That Matters”


To highlight the creative energy inherent in the realms of art and science, as well as the importance and power of conveying scientific concepts visually, we present Equilibrium: a showcase of student artwork inspired by concepts and systems from the scientific world. Keep an eye out for these one-page features as you journey through the magazine.


Crossing the Healthcare Chasm to Preventative Care

p2 A brain exploding in a burst of color to convey the multisensory nature of synesthesia. Paper collage with pastel and colored string.

“Layers” Mary O’Connor Pen and Watercolor

The purpose of this piece is to showcase the beauty and complexity of our structural anatomy. The layering of the paper is meant to mirror the layering of tissue types within the body: epithelium, muscle, and bone.



The Sense Fusion





ARTWORK & LAYOUT Insil Choi RISD ‘14, Eric Bai ‘15


The composer Franz Liszt saw colors when musical notes were played. He was even known for instructing orchestra members to make the sound “a little bluer, if you please,” or “not so rose.” What does it feel like to involuntarily see colors when you listen to music, associate tastes with words, or see numbers as having personalities? This is what people with synesthesia, a neurological condition present in roughly four percent of the population, experience. Synesthesia occurs when a stimulus in one sensory modality, such as vision, involuntarily generates a sensory experience in a distinct modality, such as hearing. There are many different types of synesthesia. The most common is grapheme-color synesthesia, in which one involuntarily associates a particular letter with a particular color; for instance, the letter “q” might involuntarily and consistently appear a vibrant lime green. Another type is lexical-gustatory synesthesia, in which a word such as “knock” might involuntarily elicit a taste, such as that of Granny Smith apples. Ordinal-linguistic personification occurs when a synesthete involuntarily associates a personality, such as grumpy, with a number, month, letter, or day. These multi-modal sensory experiences can be creatively inspiring, causing many synesthetes to be drawn to the arts.


What causes synesthetic experiences to occur? Many neuroscientists believe that most neonates experience synesthesia, and they argue that it continues into childhood and adulthood if the excessive neural connections between cortical areas in the brain that exist during infancy are not eliminated. This suggests that an underlying neural framework for synesthesia exists in babies, but is typically modified during normal development. Functional MRI studies indicate that the limbic system, which mediates emotional responses, as well as brain areas that process sensory input, are active during synesthetic experiences. By exploring neurological and cognitive studies of synesthesia, we can gain a better understanding of the roots of the condition, as well as some possible evolutionary advantages of synesthesia.

Neurological and cognitive studies have linked synesthesia to neural connectivity. The neonatal synesthesia hypothesis states that newborn babies have synesthetic experiences [3]. As a result, neonates “‘mix sights, sounds, feelings, and smells into a sensual bouillabaisse’ in which ‘sights have sounds, feelings have tastes,’ and smells can make a baby feel dizzy” [3]. This might be either because the neural pruning and inhibition that occur in adults have not yet taken place, or because the limbic system is more mature than the cortex in neonates [3]. Ramachandran and Hubbard support the former hypothesis, arguing that synesthesia results from increased neuronal connectivity between brain areas devoted to different sensory modalities [1]. They assert that this increased connectivity is a result of decreased neural pruning, or elimination of synaptic connections between neurons, a process that normally occurs during fetal and postnatal development [1]. Data seem to support the pruning hypothesis; imaging studies of adult synesthetic brains have indicated greater white matter and gray matter volume, as well as increased connectivity between cortical areas, compared to normal adult brains [1].

Besides the neurological studies, there have been efforts to identify a genetic basis for synesthesia. A whole-genome scan and linkage study performed by Asher et al. indicates that there is no single gene that can be attributed to synesthesia [2]. It is likely that the condition is inherited in a polygenic, heterogeneous fashion, in which many genes are involved and there are multiple ways of inheriting the synesthesia trait [2]. The study indicates a linkage between synesthesia and a genetic locus associated with autism-spectrum disorders. In many cases, synesthesia is reported as a symptom of autism. Brain imaging studies of individuals with autism indicate abnormally elevated levels of neural connectivity [2], as is the case with synesthetes. Candidate genes in this locus include TBR1, which codes for a transcription factor that regulates the expression of genes such as RELN, which is involved in the development of the cortex [2]. The study also revealed a possible link between synesthesia candidate genes and genes associated with dyslexia, including KIAA0319 and DCDC2, which are implicated in neuronal migration [2]. Involvement of these genes in synesthesia seems plausible, since neural connectivity is modified in synesthetes and changes in neural migration might be involved in this modification. Another noteworthy candidate gene is DPYSL3, which is implicated in axonal growth and guidance and in neural differentiation [2]. This gene is primarily expressed in the late-fetal and early-postnatal brain and spinal cord, but not in the adult brain [2], which reflects the time course of neural pruning and consequently appears to support the neonatal synesthesia hypothesis.





Many hypotheses exist for why synesthesia has been evolutionarily conserved [1]. The synesthesia genes may have been maintained because they provide an epiphenomenal advantage – an evolutionary advantage for an unrelated purpose [1]. It is also possible that synesthesia exists as one extreme of a normal distribution of communication between the senses in humans [1]. Evidence for this hypothesis lies in the fact that hallucinogenic drugs can cause synesthetic experiences [1]. Individuals who are not synesthetes associate high auditory frequencies with light colors and low frequencies with dark colors, and similar patterns have been observed with grapheme-color synesthesia [2]. This evidence seems to indicate that the underlying neural framework for synesthesia exists in the non-synesthetic human brain. Ramachandran and Hubbard have suggested a hypothesis for the evolutionary advantage of synesthesia: synesthesia genes might confer the evolutionary advantage of creativity and metaphor [4]. The connection of synesthesia to creativity and metaphor seems intuitive because multi-modal sensory information allows one to describe experiences in unique ways, such as describing a Monday as sky blue. This might explain the increased incidence of synesthesia among artists [1]. The composer and pianist Duke Ellington associated color with timbre [5]. The writer



Vladimir Nabokov had grapheme-color synesthesia [5]. In his autobiography, Nabokov describes the letters that generate the colors of the rainbow for him: “The word for rainbow, a primary, but decidedly muddy, rainbow, is in my private language the hardly pronounceable: kzspygv” [5]. Marina Diamandis, the Welsh singer-songwriter behind Marina and the Diamonds, associates colors with music and days of the week [5]. However, an issue with the creativity and metaphor hypothesis is that synesthesia involves connecting two unrelated objects, like the number 5 and aquamarine, whereas metaphors generally link two objects that have some similarities, such as anxiety being metaphorically described as a “bush shaking in the wind” [1]. Even though creativity and metaphor may not be considered evolutionary advantages of the condition, they are aspects that many synesthetes cite as advantages of their unique sensory experiences. Another hypothesis for the evolutionary advantage of synesthesia is that it may confer sensory processing and cognitive benefits. Synesthesia appears to be implicated in enhanced sensory processing. For instance, number-color synesthetes can more easily discriminate similar colors than can those without number-color synesthesia. It should be noted that this enhanced discrimination could either be the result of their synesthesia, or simply because they have more

experience with viewing colors than their non-synesthetic counterparts. However, synesthetes also experience increased communication between their senses compared to non-synesthetes, even when they are not having a synesthetic episode. This appears to indicate that the benefits of synesthesia can generalize to increased communication and processing between the senses. One of the cognitive advantages synesthesia confers is memory. Savants with synesthesia have been known to have superior memories as a result of their synesthetic experiences: Daniel Tammet, a grapheme-color synesthete, memorized pi to 22,514 digits using his synesthesia. This link to memory seems to be the most compelling hypothesis for the evolutionary advantage of synesthesia, whereby these multi-modal sensory experiences “may serve as cognitive and perceptual anchors to aid in the detection, processing, and retention of critical stimuli in the world” [1]; indeed, Tammet put his synesthetic color associations to use to remember the order of the digits of pi by remembering the ordering of the colors. However, there are drawbacks to synesthesia: synesthetic experiences can result in sensory overload and might become so overwhelming that they affect synesthetes’ everyday lives [2].


It seems that the neurological basis for synesthesia is the maintenance of enhanced neural connectivity past infancy. There also appears to be a genetic basis for synesthesia that involves many genes and allows the condition to be inherited in many ways. Although there are many hypotheses about the evolutionary advantage of synesthesia, the one that addresses the memory advantage conferred by synesthesia is most compelling. Despite the possible disadvantage of sensory overload, there can be many advantages to synesthesia, including creative inspiration as well as enhanced memory and sensory processing. Regardless of whether we feel synesthesia is beneficial, we should celebrate different ways of seeing and perceiving the world. At first, orchestra members would make fun of Franz Liszt’s requests to make the sound “a deep violet,” but soon they learned to accept that great musicians saw colors where there appeared to be only tones.

[1] Brang D, Ramachandran VS. Survival of the Synesthesia Gene: Why Do People Hear Colors and Taste Words? PLoS Biology [Internet]. Available from: journal.pbio.1001205.
[2] Asher J, et al. A Whole-Genome Scan and Fine-Mapping Linkage Study of Auditory-Visual Synesthesia Reveals Evidence of Linkage to Chromosomes 2q24, 5q33, 6p12, and 12p12. The American Journal of Human Genetics. 2009;84:285. Available from: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2668015/.
[3] Maurer D, Mondloch CJ. Neonatal Synesthesia: A Reevaluation. Available from: Maurer_NeonatalSynesthesia.pdf
[4] Ramachandran VS, Hubbard EM. Synaesthesia – A window into perception, thought and language. Journal of Consciousness Studies. 2001;8:3–34.
[5] Wikipedia. List of People with Synesthesia. Available from: http://




© 2014, The Triple Helix, Inc. All rights reserved.

art: Beatrice Steinert; layout: Chen Ye


A dancer moves across a surface, directed by mathematical principles.

She runs in straight lines across a floor made of multi-colored pentagons. When she gets to the edge of the floor, she reaches out her arms and appears back on the other side, as if the two edges are connected. Then, the lines she traces start to cut the pentagons and every color she walks across is recorded, in order, on the screen. The shapes rearrange and the viewer begins to see a pattern: when pentagons are re-formed from the cut pieces, they always do so in a particular fashion, according to their color assignment. This is Diana Davis’s entry into the “Dance Your PhD” contest, sponsored by Science and the American Association for the Advancement of Science, in which researchers in theoretical fields explain their research through dance [1].



Diana Davis is a graduate student, and her research on geodesic flow seems particularly well suited to kinetic representation. Davis explained that art is a way to make math accessible to a broad audience and, for her, an exercise in explaining the essentials of her thesis. The video, in which the dance is paired with captions, certainly highlights the value of physical motion in understanding some abstract concepts [2].

Interdisciplinary is a current buzzword in science. The “Dance Your PhD” exercise uses art as a tool for translation, which is certainly valuable for making complex concepts accessible to new minds. But some scientists have realized that cross-disciplinary collaborations can go beyond interpretation of knowledge created within one discipline into creative production of new knowledge. Academic disciplines are artificial categories for assigning knowledge, and much interdisciplinary research focuses on augmenting understanding of topics that fall between disciplines. For example, urban health involves disciplines as disparate as biology, chemistry, sociology, urban studies (itself interdisciplinary), and architecture. In 2007, Dr. Sally Aboelela and colleagues conducted a literature review and interviewed researchers in the healthcare field to come up with a definition of interdisciplinary research. The review concluded that interdisciplinary research can be defined as “any study or group of studies undertaken by scholars from two or more distinct scientific disciplines. The research is based upon a conceptual model that links or integrates theoretical frameworks from those disciplines” [3].

Communication across scientific disciplines can be challenging, both in language and logistics. The explosion of interdisciplinary research centers and initiatives is meant to ease the challenges of interdisciplinary research and provide incentives to form new frontiers of research through collaboration. Part of this is the chance for more informal interactions, breaking down the silos of knowledge in official and casual interactions. At the University of Minnesota Institute for Advanced Study, David Odde, a biomedical engineer, and Carl Flink, the director of the Black Label Movement dance company,

advance knowledge in dance and science simultaneously. In their collaboration, called the Moving Cell Project, dancers and scientists collaborate in rapid prototyping exercises where dancers execute new hypotheses in cell biology. In the Moving Cell Project’s collaborative spaces, groups of researchers list rules for a performance that correspond to a hypothesis in cell physiology. For example, they may say that when a mover is in one half of a human-sized cell created by a ring of people, they must move slowly, while in the other half they move quickly. Then, they set a rule for collisions, for example, a change in direction when there is a collision. After the simulation runs, the movers and scientists talk about the results, why the model produced the expected results or why it didn’t. The dancers are able to take instructions from scientists (the same instructions that would have been programmed into a computer model) and execute them immediately. Scientists are able to viscerally experience the result and build intuition about a process. More importantly, it is a collaborative environment where dancers use their physical intuition to analyze the results of the instructions and work with the cell biologists to gain deeper insight into the molecular processes.

The researchers studying cell physiology struggle to comprehend a world driven by fast collisions due to thermal forces rather than gravity [4]. Traditionally, research in cell physiology has progressed through laborious programming of computer models. A hypothesis is coded in a model, the model is interpreted, and the predictions are then iteratively tested. This process can take months. While science is constrained by the need to develop intuition about a system governed by thermal forces rather than gravity, dance can be constrained by the lack of physicality and violence in most normal human relationships. The dancers needed to learn to run into each other safely to execute the rules of motion in a cell, and found that they could execute dances that went beyond ‘slap-stick’ style physical interactions to a much deeper commentary on physicality.

John Bohannon and the Black Label Movement presenting Dance Your PhD at TEDxBrussels

The “Moving Cell Project” challenges the paradigm of interdisciplinary research as addressing topics that have fallen into the cracks between disciplines. Rather, with a statement that is almost too obvious, that moving bodies can tell us about bodies in motion, the project proposes a non-linear way of thinking about progress where different disciplines interact in the problem formation phase of research and generate creative ideas [5].

The process of condensing a hypothesis into a set of rules benefits all involved. A common outcome of expressing science through art or cross-disciplinary communication is an enhanced ability to articulate processes. David Odde, the cell biologist working on this initiative, said that communication is most effective when scientists prescribe their ideas as rules or dance steps [6]. But the dance steps and rules aren’t always familiar to the dancers; in fact, this collaboration has created a whole new vocabulary for Carl Flink’s dance company, which in turn created the piece “HIT.” In that piece, the highly athletic dancers in Black Label Movement seem to perform super-human feats, some of which are clearly derived from biology, such as collapsing microtubules and jumping salmon [7]. For the most part, though, the dance is about a human view of catastrophe, and the influences of the company’s work with biologists are only clear after talking with Flink [5]. The dancers explore physicality on the dance floor, running into each other and kicking each other safely, resulting in captivating and aesthetically pleasing dances. While the scientific enterprise starts with organizing principles and results in a performed simulation, the artistic motivation stems from exploration of dysfunctional relationships, such as one between two people caught in a sinking ship or a broken relationship. From the choreographed disorder, the audience may distill some general organizing principles. Using scientists and dancers in a creative process is a natural extension of communicating science to the general public through art. Though “interdisciplinary” is a common buzzword, the collaboration exemplified by the Moving Cell Project shows that researchers from diverse fields can augment creative processes. UAG

[1] John Bohannon. Dance Your Ph.D [Internet]. 2012 October 5 [cited 2013 February 20]. Available from: <
[3] Aboelela S, Larson E, Bakken S, Carrasquillo O, Formicola A, Glied S, Haas J, Gebbie K. Defining Interdisciplinary Research: Conclusions from a Critical Review of the Literature. Health Serv Res. 2007;42(1):329-346.
[4] Flink C, Odde D. Science + dance = bodystorming. Trends in Cell Biology 2012;
[5] Flink C. Professor of dance. Personal communication. 18 March 2013.
[6] Odde D. Professor of biomedical engineering. Personal communication. 16 March
[7] Robert Hammel. The Moving Cell Project [Internet]. [cited 2013 February 20]. Available from: <>.

By borrowing the format of Tarot Cards, I aim to merge the meanings of each card with scientific elements such as the Black Hole, the White Dwarf, or the notion of 11 dimensions.

SANG EUN DAWN LEE RISD ‘14
Below is a reinterpretation of the song “Fly Me to the Moon” that gives a literal rendering of the lyric, “Let me see what spring is like on Jupiter and Mars.”



ARE THEY KILLING US?



Some view the light emanating from a big city as a magical aura, the glow and buzz of a thriving metropolis. But anyone who has ever seen a pitch-black sky bespeckled with thousands of glimmering gems knows it by another name: light pollution. Pollution seems like a strong word for our everyday lighting habits. We tend to think of pollution as trash suffocating marine life, debris collecting on highway shoulders, car horns piercing a symphony of birds, chemicals spilling into a local river – light does not seem to fit in. It is the least tangible of the pollutions, impossible to touch or see. In recent human memory, light has been a fundamental part of life; to think that something as basic as light could constitute pollution is hard to believe. Before electricity, candles illuminated cities; now, neon signs and streetlights burn throughout the night. The mushy orange night sky is one that is natural to modern man, but not to nature. Never before has daylight shone all night long. Only in the past decades have scientists begun to measure the effects of light on the larger environment. Most studies agree that excessive artificial lighting poses a threat to animal habitats and health, but such harmful effects may also extend into human health. While these studies are not yet widely accepted, the possibility of artificial light harming us is particularly frightening – a popular, trusted technology with ulterior motives. Should we fear persistent illumination, or simply accept it as a cost of modernity?

art: Beatrice Steinert; layout: Chen Ye

A BRIEF HISTORY OF LIGHTING

Before the dawn of electric lighting, there were two sources of light: the sun and fire. Once the sun went down, people for the most part remained enveloped in darkness, prompting a desire for convenient lighting. The Industrial Revolution ushered gaslights into the public domain, albeit slowly; the simple lighting allowed theaters to hold evening performances and the Smithsonian to host evening lectures [1]. The true thirst for nighttime lighting would soon become apparent. In 1879, Thomas Edison demonstrated the first practical incandescent light bulb. While at first a novelty, the light bulb quickly became an imperative, a part of America’s vision of the future [1]. As cities competed to light their streets, the need for lighting became insatiable, and electric companies responded with bigger and brighter lights. Soon enough, every community had its own “White Way,” a brightly-lit stretch of town. When the Great Depression struck, citizens refused to turn off their lights for fear of crime, despite the financial savings that would result [2]. Lighting had become a fundamental part of urban life, even more valuable than money itself. The craze for artificial lighting continued to spread. Lights now flood empty parking lots, line driveways, and illuminate storefronts, among countless other uses. The end result: suburban towns feel like ghost towns without artificial lighting, and cities like Las Vegas and Hong Kong have become swirls of neon signs. When a street corner is hardly different by night than by day, when light spills beyond its intended target, when whole cities can be seen from space, we have to wonder: are all these lights necessary? And, more importantly, are they hurting us?

THE IMPACT OF LIGHT ON ANIMALS

The most obvious harm of lit-up cities is the loss of stars. Country dwellers lament that the city lights drown them out; the twinkles in a black sky become imperceptible in the fog of light. Scientists call this astronomical light pollution. In effect, the stars are a luxury that city inhabitants can no longer afford. However, the unintended side effects of artificially-lit night skies may be larger than we think. There is another form of light pollution – ecological – that scientists have identified. The damage that light causes makes intuitive sense. Anyone who has experienced a warm midsummer night in New England knows that the lights inevitably attract moths, beetles, mosquitoes, and the like. These insects then attract their predators; a patio lamp becomes a new environmental microcosm. Without patio lights, these micro-environments would never form; a small amount of artificial lighting, it seems, can create a slightly altered local ecology. It is not hard to imagine that the compounded effects of global nighttime lighting may amount to larger problems. Indeed,

© 2014, The Triple Helix, Inc. All rights reserved.


a review of the current research suggests that excessive light causes countless species of animals to fall victim to problems such as confusion and disorientation [3]. The studied effects of ecological light pollution may in fact be fatal. Some birds crash into brightly lit buildings, and others circle searchlights until they die of exhaustion, showing the disorientation that city lights may cause [4]. Lighting along beachfronts has also devastated the already endangered sea turtle. Some turtles cannot find a suitably dark beach on which to lay their eggs; many of the hatchlings, attracted to the light, never make it back to sea [6]. The sea turtles’ confusion and deaths are well-known consequences of light pollution. In another study, the túngara frog’s mating patterns were observed in dim and dark conditions. The female frogs became less selective about their mate choices in artificially-lit conditions, presumably to hasten the dangerous act of mating [7]. In short, our unnaturally-lit nights have a sizeable if subtle effect on the animals around us [3].

The effects of light pollution on a single species may even cascade into effects on a local ecology. Aquatic invertebrates, such as zooplankton, rise to the ocean surface in dark conditions to avoid predation [8]. Unnatural lighting – from oil rigs, fishing boats, etc. – thus throws off the natural flow of zooplankton, disrupting not just the single species but all that depend on it. For example, zooplankton feed on algae, so with excessive artificial light, algae populations can grow exponentially [9]. Light pollution may therefore affect not only single species, but a larger environment. This domino effect makes the full scope of light pollution harder to study.


THE HIDDEN HUMAN COST
Artificial lighting is a distinctly human invention created for our betterment, but recent research has begun to suggest a possible detrimental side effect of excessive light. After all, we surround ourselves with light: one study estimates that only 40% of Americans live where it gets sufficiently dark at night for the human eye to completely transition from cone to rod vision, and that 18.7% of the world’s land is polluted by astronomical standards [10]. These findings suggest that light pollution is both global and omnipresent. While the research is not yet conclusive, it points to a hidden human cost of light pollution: the light that is hurting our environments and animals may harm us humans, too.

This is not to say that light pollution is the sole cause of human illnesses. Rather, controlled laboratory studies have shown that exposure to light can cause health problems. For example, exposure to light at night is thought to disrupt circadian rhythms, which control the human sleep cycle. This disruption may then affect the balance of hormones in our bodies, accelerating tumor growth [11]. Artificial light inhibits the production of melatonin, a hormone that is known to regulate the sleep cycle. A recent study has also linked melatonin and the organ that produces it, the pineal gland, with cancer-suppressing properties; it posits that one large factor in the recent epidemic of cancer is excessive lighting [12]. Thus, light pollution disrupts the human sleep cycle, which in turn contributes to more serious health concerns [13].

Some studies have explicitly linked nighttime illumination, both intentional and occupational, with various cancers. One group reported a 35% increase in risk of colorectal cancer for nurses who worked night shifts

at least three times a month for 15 years or more [14]. In a similar study, there was a slight increase in risk of breast cancer for nurses with 20 or more years of night shift work [15]. Some researchers even suggest that the coinciding increases in cancer and light pollution are no accident, that these two phenomena are somehow intertwined [13,16]. Research continues to unearth new connections, linking light pollution with other health problems such as depression, insomnia, and cardiovascular disease. It is important to note, though, that the conclusions reached by such studies are not yet widely accepted. Nevertheless, new studies continue to point to the potential harm of light pollution. For example, a 2006 study showed that infant mice exposed to constant artificial light could not establish a regular sleep cycle [17]. The present dearth of research should serve not as reassurance of the safety of light pollution, but as an imperative to do more. As we continue to piece together the growing body of research, we see more clearly a looming danger that we have the power to prevent.

A SOLUTION?
The solution to light pollution involves balancing the hazardous effects of light against its necessary uses. This balance hinges on so many factors – politics, money, ecology, health, safety – that a solution seems unlikely. Yet the cost of light pollution in the U.S. alone is estimated at 7 billion dollars a year [18]. Environmental and health gains could thus be linked with financial savings – that is, more money – perhaps spurring sensible and immediate action. In looking for a solution to light pollution, we must realize that there is no single quick fix. Even so, small changes can go a long way in


diminishing the effects of light pollution. Such small adjustments will allow us to see the stars, help animals navigate, and may even help decrease the human cancer risk. One proposed solution is to use “smart” light sensors that decrease wasted light [19]. Office buildings waste energy and light when they are kept lit after hours. Setting the lights to turn off when the office closes is a first step towards eliminating the glow-stick effect skyscrapers have on the nightscape. In the same vein, sensors can be used in low-traffic areas so that streets illuminate only when in use.

Small changes in lighting design can also mitigate light pollution. Streetlight bulbs usually have a bulge hanging below the metal housing, allowing light to disperse in all directions, including upwards, sending wasted light into the night sky. If the bulge were eliminated, light could be directed to where it is needed. Many outdoor lights are also too powerful. For example, 500-watt “Rottweiler lights” line gardens and houses where 100 watts would suffice [19]. By aligning intensity of light with designated use, we can select appropriate lighting, eliminating the excess light that accompanies poor design.

The solution requires us to think about what we really need light for, and where it is used frivolously and carelessly. Excessive lighting is what spills over into the surrounding environment, encroaching on a diverse cross section of ecosystems, including ours. To this end, one dictum for proper lighting proclaims: “Use the ‘correct’ lighting level, not too little, not too much” [20]. Perhaps we have been erring on the side of too much.

REFERENCES
[1] Holden A. Lighting The Night: Technology, Urban Life and the Evolution of Street Lighting. Places Journal. 1992; 8(2): 56-63.
[2] Municipal Lighting Operators Point Out Folly of Street Lighting Cuts. The American City. 1932 Oct; 47(4): 110.
[3] Longcore T, Rich C. Ecological Light Pollution. Frontiers in Ecology and the Environment. 2004 May; 2(4): 191-198.
[4] Klinkenborg V. Our Vanishing Night. National Geographic. 2008 Nov [cited 2013 Feb 13]. Available from: http://ngm.nationalgeographic.com/2008/11/light-pollution/klinkenborg-text
[5] Derrickson KC. Variation in repertoire presentation in northern mockingbirds. Condor. 1988 Aug; 90: 592-606.
[6] Witherington BE, Martin RE. Understanding, Assessing, and Resolving Light-Pollution Problems on Sea Turtle Nesting Beaches. St. Petersburg, FL: Florida Marine Research Institute; 2000.
[7] Rand AS, Bridarolli ME, Dries L, Ryan MJ. Light levels influence female choice in túngara frogs: predation risk assessment? Copeia. 1997; 1997: 447-450.
[8] Gliwicz ZM. A lunar cycle in zooplankton. Ecology. 1986; 67: 883-897.
[9] Moore MV, Pierce SM, Walsh HM, et al. Urban light pollution alters the diel vertical migration of Daphnia. Verh Internat Verein Limnol. 2000; 27: 779-782.
[10] Cinzano P, Falchi F, Elvidge CD. The first world atlas of the artificial night sky brightness. Mon Not R Astron Soc. 2000; 328: 689-707.
[11] Chepesiuk R. Missing the Dark: Health Effects of Light Pollution. Environmental Health Perspectives. 2009 Jan; 117: A20-A27.
[12] Kerenyi NA, Pandula E, Feuer G. Why the incidence of cancer is increasing: the role of ‘light pollution’. Medical Hypotheses. 1990 Oct; 33(2): 75-78.
[13] Anisimov VN. Light pollution, reproductive function and cancer risk. Neuro Endocrinology Letters.
[14] Schernhammer ES, Laden F, Speizer FE, et al. Night-Shift Work and Risk of Colorectal Cancer in the Nurses’ Health Study. Journal of the National Cancer Institute. 2003 Jun; 95(11): 825-828.
[15] Schernhammer ES, Kroenke CH, Laden F, Hankinson SE. Night Work and Risk of Breast Cancer. Epidemiology. 2006 Jan; 17(1): 108-111.
[16] Stevens R, Blask DE, Brainard GC, et al. Meeting report: the role of environmental lighting and circadian disruption in cancer and other diseases. Environmental Health Perspectives. 2007 Sep; 115(9): 1357-1362.
[17] Ohta H, Mitchell AC, McMahon DG. Constant light disrupts the developing mouse biological clock. Pediatric Research. 2006; 60(3): 304-308.
[18] Gallaway T, Olsen RN, Mitchell DM. The economics of global light pollution. Ecological Economics. 2010; 69: 658-665.
[19] Winterman D. Light pollution: Is there a solution? BBC News Magazine. 2012 Jan 17. Available from:
[20] Crawford DL. Light pollution, an environmental problem for astronomy and for mankind. Memorie della Società Astronomica Italiana. 2000; 71: 11-40.
[21] City Lights of the United States 2012 [image on the Internet]. 2013 [cited 2013 May 30]. Available from: http://www.
[22] City lights public domain image picture [image on the Internet]. [cited 2013 May 30]. Available from: objects-public-domain-images-pictures/lamps-public-domain-images-pictures/city-lights.jpg.html






YOU’RE NO GOOD
How Nature and Nurture Discredits the Death Penalty
By Sydney Blattman

Art: Beatrice Steinert ’15; Layout: Chen Ye ’17

Case 1: Murder and Mitigating Factors
In 2006, Juan Quintero, an immigrant living illegally in Texas, shot and killed Houston police officer Rodney Johnson, who had just pulled him over for a speeding violation. After Quintero failed to present identification, Johnson searched the suspect, handcuffed him and brought him to the police car. When Johnson returned to the front seat of the car, Quintero pulled out a hidden gun and shot the officer four times in the head from the back seat. Johnson’s family brought the case to trial to plead for Quintero’s execution [1,2].

Defense attorney Matt Silverman accepted the challenge of defending Quintero’s right to live. Silverman determined that Quintero had suffered serious brain damage at age six when he fell off a roof. The defense obtained a brain scan as evidence of neurological defects. In court, Silverman and his colleagues presented the brain scan as well as testimonies of Quintero’s history of alcoholism, depression, and anxiety. The jury sentenced Quintero to life in prison without parole, marking a victory for the defense team that saved him from execution [2].

Courts approach a criminal trial like Quintero’s in two phases. In the first phase, known as the guilt phase, prosecutors must convince the jury that the defendant is guilty beyond a reasonable doubt. Then, if the defendant is convicted and the prosecutor wants to seek the death penalty, a second phase of the trial begins, in which the lawyers present evidence of aggravating and mitigating factors in order to adjust the severity of sentencing. Aggravating factors are circumstances that escalate the severity of the crime and tend to push the jury towards inflicting the death penalty. Mitigating factors encourage the jury to value the defendant’s life using relevant support, as defined by each state. The jury weighs the aggravating and mitigating factors and determines if execution is justified [3].


Definitions of mitigation vary greatly across states [4]. Sixteen of the thirty-three states that still use the death penalty [5] consider the defendant’s inability to “appreciate the criminality of his or her conduct” in mitigation [4]. Twenty-one states accept evidence that the defendant was “under the influence of extreme mental or emotional disturbance” when he committed the crime [4]. Mitigation specialists like Silverman present evidence within these guidelines to dissuade juries from executing even the worst criminals. They are experts at presenting a defendant’s childhood experiences, mental illnesses, and other relevant circumstances to a jury in order to reduce a sentence from the death penalty to life in prison.

Case 2: Armed Robbery Meets Genetics
In 1991, a court in Georgia sentenced Stephen Mobley to death for shooting the manager of a Domino’s Pizza. During Mobley’s appeals process, geneticist Han Brunner happened to release a relevant study identifying an abnormal form of the region of DNA that regulates the synthesis of monoamine oxidase A (MAOA). The abnormality causes MAOA deficiency. MAOA metabolizes, or breaks down, serotonin, dopamine, and noradrenaline, which are neurotransmitters implicated in regulating aggression, addiction, and involuntary motion, respectively [6]. When MAOA is not available to metabolize these neurotransmitters, regulation of these behaviors is limited. The participants in Brunner’s study all exhibited abnormal behavior and a lack of impulse inhibition. After testing the participants for the MAOA polymorphism, the alternative form of the gene, Brunner suggested a link between MAOA deficiency and criminal behavior [7,8].

In response to Brunner’s results, Mobley proposed that genetic testing could present a biological mitigating factor that would take him off death row. In an appeals court, Mobley successfully argued that he had previously received insufficient legal counsel because his lawyers did not advise him to undergo genetic testing. The court reduced his sentence to life in prison. However, another court later overturned the ruling, claiming the law was not ready to accept this genetic evidence. The state of Georgia executed Mobley in 2005 [7,8].

Biomechanisms in United States Law
A study published in 2012 tested US state trial judges’ responses to biomechanical evidence in criminal cases. Biomechanical evidence, such as the brain scan that revealed Quintero’s neurological disabilities and the genetic information requested by Mobley, can suggest physical explanations for a defendant’s behavior. In this study, researchers described a hypothetical case in which the defendant, John Donahue, attacked and severely injured a restaurant owner. The researchers asked the judges to determine an appropriate sentence, but a neurobiologist provided only some of the judges with a genetic and neurological explanation for the criminal behavior. The judges widely considered the biomechanism to be a mitigating factor when determining the length of the criminal’s sentence [9].

according to most neuroscientists, the mind is indistinguishable from the brain

The acceptance of a biomechanical explanation as an excuse for behavior relies on the assumption that the mind determines behavior separate from the body. Philosophers use the term dualism to describe this perception of a “self” that exists beyond the body [10]. However, according to most neuroscientists, the mind is indistinguishable from the physical structure of the brain, which is observable and tangible [11]. It follows that all mental activity is visible through identifiable processes in the brain, and a biomechanical explanation exists for every decision. As the study of judges shows, the law often accepts biomechanical evidence as an excuse for criminal behavior.

Biological Factors in Mitigation
Based on the outlines for mitigation provided by states, explanations for behavior are typically grounds for avoiding the death penalty. States that acknowledge a criminal’s inability to “appreciate the criminality of his or her conduct” as mitigation [4] accept that biomechanical explanations for behavior do compromise the defendant’s legal responsibility. If a criminal cannot metabolize

chemicals controlling his emotions, typical experiences can cause responses in the brain that mimic the average person’s neural response to extreme disturbances. This biological hyper-reaction can reduce a person’s perception of his own criminality because it may seem justified by his exaggerated brain response. Similarly, in states accepting that defendants “under the influence of extreme mental or emotional disturbance” do not deserve execution [4], explanations like the MAOA polymorphism also provide mitigation. Definitions of mitigation currently assume that the defendant is a “rational actor with free will” and that unusual circumstances beyond his control, such as mental illness or previous abuse, diminished his capacity to behave rationally [12]. This belief in “free will” suggests that an individual’s decision-making capabilities are not entirely controlled by environmental and biological factors. The “rational actor” then must refer to a “self” that exists beyond the understandable physical world, so the law is accepting a dualistic view of the mind. Recent discoveries by geneticists and neuroscientists, such as the MAOA polymorphism, are starting to provide solid evidence to support that the mind is constrained to physical explanation, the opposite of a dualistic approach. The law should not rely on dualism because science begs the contrary. When the law accepts that all behavior can be explained, it must also accept that every defendant in a capital trial can provide sufficient mitigating evidence to avoid execution. Under these circumstances, the death penalty will never be appropriate. However, the law still allows for the death penalty because the law has not adapted to encompass modern scientific theories.

when the law accepts that all behavior can be explained, it must also accept that every defendant in a capital trial can provide sufficient mitigating evidence to avoid execution

Extending this logic beyond capital cases may suggest that the law should not punish robbers, rapists, or basically any criminals: after all, biology controls all behavior. But the law must punish these offenders to deter them from committing crimes and to prevent them from future criminal activity. The death penalty is unique because its purpose is entirely to punish. In the 1976 case Gregg v. Georgia, the United States Supreme Court upheld the constitutionality of the death penalty. The court stated that the death penalty is necessary for deterrence and retribution [12]. Various studies have attempted to determine if the death penalty deters criminals, and the results fail to conclude that it does [13]. Retribution, which the Supreme Court identifies as the human instinct to demand justice, is then the only purpose of the death penalty [12]. The death penalty thus exists to allow society to condemn a criminal’s immoral choices, which implies that the defendant had the capacity to make the morally better choice. The assumption that some defendants have the ability to decide against immoral action relies on a dualistic perspective. Because all behavior can be explained by biomechanisms, criminals never have full capacity to rationalize their decisions at the time of the crime, so moral condemnation, or retribution, is an illogical basis to support the continued use of the death penalty.

Serotonin Regulation and Crime
The MAOA polymorphism is the most widely implicated gene in inducing violent behavior, but geneticists are beginning to connect other genes with criminal behavior. Serotonin is a neurotransmitter widely implicated in the regulation of fear, aggression, and anxiety. The serotonin transporter is a protein that regulates synaptic, or active, serotonin levels through the uptake of excess serotonin [6]. A polymorphism in the regulatory region of the gene encoding a “short” version of the transporter, rather than a normal “long” version, causes lower expression of the serotonin transporter when tested in cell lines [15].

A 2002 study from the National Institutes of Health examined the relationship between neuronal responses to emotional stimuli and the serotonin transporter polymorphism.


The researchers monitored neuronal activity while showing participants photos of facial expressions and asking them to mimic the expression. People with the allele, or the version of the gene, for the short transporter showed a significantly greater amygdala response when performing this task [16]. The amygdala is a brain structure widely accepted as a key player in regulating fear and aggression. The experiment demonstrated a link between transporter function and brain response [17]. A 2004 study out of Taiwan found a connection between the short allele and criminal behavior. Researchers found that the short allele was significantly more prevalent among Chinese men convicted of severe violent crimes compared to the control group [18]. This early evidence suggests that criminals with the short form of the allele are less capable of rational thinking in emotional situations, and evidence from this gene will likely soon join the MAOA polymorphism in the courts.

The Changing Genome
Until recently, psychologists debated whether nature or nurture governs personality [19]. Nature encompasses factors inherited by people at birth, while nurture constitutes the effects of life experiences on behavior. Recent advances in genetics and neuroscience have shown that it is impossible to separate nurture from nature. Instead, how an individual is nurtured actively changes his “nature,” or genome. Mriganka Sur writes in Science Magazine, “We now know that there is no such thing as a gene that acts in isolation and that every gene needs an environment” [20]. The field of epigenetics explores how environments change genomes. Epigenetic factors, or circumstances that physically alter genes, range from childhood trauma to daily dietary choices [21]. Upon conception, individuals inherit a genome that becomes a blueprint for development. The genome is comparable to instrumentalists in an orchestra. The conductor, who is analogous to the epigenome, tells the instrumentalists when to play and what to play so that they can create music. Progress in epigenetics has major implications for mitigation of capital crimes because it will allow for identification of physical changes to the brain caused by environmental factors in addition to genetic factors. Epigenetics and genetics together suggest that biomechanisms incorporate nature and nurture, the only two factors that determine behavior.

scientific evidence could mitigate every capital crime if neuroscientists fully understood the brain

An End in Sight
No criminal commits a crime without a biomechanism present. Science will force this reality on courts as researchers […] usable mitigation will be abundant. Essentially, scientific evidence could mitigate every capital crime if neuroscientists fully understood the brain, making the death penalty inappropriate in every case. Neuroscience is far from a complete understanding of the brain, but that understanding should not be necessary for the law to reject its current dualistic approach to mitigation. Close examination of the current approach reveals that it is unscientific and that it facilitates the continued arbitrary infliction of the death penalty in the United States.



References
[1] ABC Good Morning America. Cop Killing Sparks Immigration Debate [Internet]. 2006 [cited 2013 Apr 15]. Available from: http://abcnews.
[2] Toobin J. The Mitigator. The New Yorker [Internet]. 2011 May 9 [cited 2013 Apr 15]. Available from: http://www.newyorker.com/reporting/2011/05/09/110509fa_fact_toobin?currentPage=all
[3] Michigan State University and Death Penalty Information Center. Stages in a Death Penalty Case [Internet]. 2000 [cited 2013 Apr 15]. Available from: http://deathpenaltycurriculum.org/student/c/about/stages/stages.PDF
[4] Lenamon T. Terry Lenamon’s List of State Death Penalty Mitigation Statutes (Full Text) [Internet]. 2010 [cited 2013 Apr 15]. Available from: aspx?fid=d61d8c7b-896b-4c1a-bd87-f86425206b45
[5] Death Penalty Information Center. States With and Without the Death Penalty [Internet]. 2013 [cited 2013 Apr 15]. Available from: http://www.
[6] Bear MF, Connors BW, Paradiso MA. Neuroscience: Exploring the Brain. Baltimore and Philadelphia: Lippincott Williams & Wilkins; 2007.
[7] Eastman N, Campbell C. Neuroscience and legal determination of criminal responsibility [Internet]. Nat Rev Neurosci. 2006; 7: 311-318. Available from: nrn/journal/v7/n4/box/nrn1887_BX1.html
[8] Spiegel A. Would Judge Give Psychopath With Genetic Defect Lighter Sentence? NPR [Internet]. 2012 Aug 17 [cited 2013 Apr 15]. Available from:
[9] Aspinwall LG, Brown TR, Tabery J. The Double-Edged Sword: Does biomechanism increase or decrease judges’ sentencing of psychopaths? [Internet]. Science. 2012; 337(6096): 846-849. Available from: http://www.
[10] Stanford Encyclopedia of Philosophy. Dualism [Internet]. 2011 [cited 2013 Apr 15]. Available from:
[11] Kalat JW. Biological Psychology. 10th ed. Belmont, CA: Wadsworth CENGAGE Learning; 2008.
[12] Supreme Court of the United States. Gregg v. Georgia; 1976. Available from: historics/USSC_CR_0428_0153_ZO.html
[13] Death Penalty Information Center. Discussion of Recent Deterrence Studies [Internet]. 2013 [cited 2013 Apr 15]. Available from: http://www.deathpenaltyinfo.org/discussion-recent-deterrence-studies
[14] Simpson JR, Edersheim JG, Brendel RW, Price BH. Neuroimaging in Forensic Psychiatry: From the Clinic to the Courtroom [Internet]. West Sussex: John Wiley & Sons; 2012. Chapter 10, Neuroimaging, Diminished Capacity and Mitigation [cited 2013 Apr 15]. Available from: http://clbb.mgh.harvard.edu/wp-content/uploads/Neuroimaging-Diminished-Capacity-and-Litigation.pdf
[15] Sibille E, Lewis D. SERT-ainly Involved in Depression, But When? [Internet]. Am J Psychiatry. 2006; 163(1): 8-11. Available from: http://ajp.
[16] Hariri AR, Mattay VS, Tessitore A, Kolachana B, Fera F, Goldman D, et al. Serotonin Transporter Genetic Variation and the Response of the Human Amygdala [Internet]. Science. 2002; 297(5580): 400-403. Available from: http://www.
[17] Rosen J. The Brain on the Stand. The New York Times [Internet]. 2007 Mar 11 [cited 2013 Apr 15]. Available from: magazine/11Neurolaw.t.html?pagewanted=all
[18] Liao DL, Hong CJ, Shih HL, Tsai SJ. Possible association between serotonin transporter promoter region polymorphism and extremely violent crime in Chinese males [Internet]. Neuropsychobiology. 2004; 50(4): 284-287. Available from:
[19] Cohn J. The End of Nature v. Nurture? New Republic [Internet]. 2011 Nov 15 [cited 2013 Apr 15]. Available from: blog/jonathan-cohn/97484/the-end-nature-v-nurture
[20] Sur M. The Emerging Nature of Nurture [Internet]. Science. 2008; 322: 1636. Available from: content/322/5908/1636.1.full.pdf?sid=55d0aabd-ca03-4a17-81c6-9420ea20079c
[21] Mansuy I, Mohann S. Epigenetics and the Human Brain: Where Nature Meets Nurture [Internet]. 2011 [cited 2013 Apr 15]. Available from: news/cerebrum/detail.aspx?id=32670
[22] Florida Department of Corrections. Execution Chamber [Internet]. [cited 2013 May 6]. Available from: http://www.dc.state.
[23] National Institute of Neurological Disorders and Stroke. Brain Basics: Genes at Work in the Brain [Internet]. [Updated 2013 Mar 20; cited 2013 May 6]. Available from: http://www.ninds.nih.gov/disorders/brain_basics/genes_at_work.htm



Considering the Potential of Developing Environmental Remediation Technologies
By Elizabeth Perkins


In 2010, approximately 205.8 million gallons of oil were released into the Gulf of Mexico after the explosion of the Deepwater Horizon oil rig. Two decades earlier, the 1989 Exxon Valdez oil spill had been similarly devastating, due largely to its location in the Prince William Sound of Alaska. This rocky inlet provided a habitat for a diverse array of wildlife and made it difficult for traditional technologies such as the skimmer, designed to skim oil from the surface of water, to remediate the site efficiently [1]. For decades, most wastes, from disposed foods to industrial factory metals, have been sent to landfills.



Today, landfills tower all over the world and grow in size and number along with the global population. The biggest one in the US sits in L.A. County, California, and accumulated more than 12,000 tons of waste in 2007. These landfills, home to disposed industrial metals and petroleum hydrocarbons such as benzene (a known carcinogen), release gaseous emissions and leachate. Leachate is contaminated run-off from landfills that seeps into the ground and can contaminate groundwater and public water supplies if not adequately contained. The buildup of trash in these landfills is too large to be controlled without the use of contemporary technologies. Moreover, many of these landfills are old and thus not lined below or covered above to stop the movement of wastes, as is often done in landfills today.

“Human Vulnerability in a Toxic Environment” by Haily Tran; Environmental Remediation illustration by Mary O’Connor

Although emerging technologies are auspicious, more traditional methods of landfill management may become less capable of countering the growth in global waste and pollution. For example, despite the practiced and improving efforts of landfill management by many countries, people who live closer to landfills have higher incidences of cancer. As landfills increase in number, an even greater number of people may become affected [2,3,4]. While society is generally aware of these environmental concerns, it is difficult to determine to what extent society is currently both causing and countering these problems. Some of the most serious of our concerns about the environment may not manifest for a few more decades, but the consequences of inaction now will become harder to ignore in the approaching future. However, the burden of proof slows the advancement of technologies designed for this purpose, as many hesitate in the face of the possibly unknowable long-term effects of many modern environmental remediation technologies. The development of environmental remediation technologies and research endeavors, many of which indicate potential environmental benefits, is encumbered by a lack of money, resources, political attention, and expertise, as well as slow-moving policy procedures. This stagnancy is also propagated by disinterest, distractedness, and skepticism (e.g., the percentage of Americans who deny global warming has actually increased within the past few years) [5,6]. The U.S. Environmental Protection Agency has general criteria for approving the use of environmental remediation technologies.
These include, above all, efficiency, cost effectiveness, and the risk to the handlers and communities associated with their use [7]. However, in cases that require an immediate response, such as oil spills, modern technologies are much more likely to be approved for use due to the acute threat that such accidents pose. Proven technologies are more regularly used for ongoing environmental maintenance, such as waste management and soil decontamination.

One commonly practiced remediation method is composting, a timeless staple of waste management now practiced by many individuals and families. Composting is a biotechnology: a technology that involves biological processes. Recently, bioremediation technologies (biotechnologies designed for environmental remediation) have advanced due to an increased emphasis on the environmental sciences and a better understanding of biological variety and function. Industrial environmental technologies in particular require the collaboration of different disciplines, due to the complexity of the environment and its intimacy with society. Hence, increased research in fields such as genetics, biology, and engineering has supplemented the advancement of bioremediation technologies [5]. Within the past few decades, bioremediation has been of increasing interest to scientists and environmental efforts, due in part to its successful and important role in remediation after the BP and Exxon oil spills.

Bioremediation is grounded in the naturally occurring process of biodegradation, by which microorganisms (such as bacteria) break down or consume a substance. In many cases, bioremediation takes advantage of pre-existing microorganisms that have naturally evolved in the environment. In the type of composting done by many people, food wastes are covered in soil and dried leaves to be broken down by microorganisms inhabiting the soil. Individual composting reduces leachate and methane gas emissions from landfills by reducing the amount of organic waste sent to landfills [8]. But certain organic and inorganic wastes (such as metals, plastics, and laboratory chemicals) pose a greater challenge to biodegradation. These wastes may eventually biodegrade as well, but at an extremely slow rate; depending on the material, it may take hundreds or thousands of years.
More recently, however, bioremediation science has been successful in hastening this process under certain conditions [9]. There are numerous bioremediation technologies, many of them recently developed: from simple techniques such as composting and tree planting, to the engineering of genetically modified microbes designed to exploit certain metabolic functions. Bioremediation has typically involved the use of microorganisms (such as bacteria) in waste management or in the remediation of contaminated environments. Bioremediation efforts have generally been more successful at harnessing the metabolic abilities of indigenous microbes than those of genetically engineered microbes [5].

The reduction of waste by the organism occurs in varying and specific ways. In one process, cometabolism, the microbe does not benefit from metabolizing the waste: it produces enzymes as metabolites (substances involved in metabolism) or byproducts, which in turn catalyze the decomposition or rearrangement of pollutant molecules into less harmful or more easily managed compounds. In other cases, the waste provides metabolic energy to the active microbe. This is the case with composting, in which both micro- and macro-organisms (such as worms) obtain nutrients such as carbon and nitrogen from organic food wastes and use them for energy, to build proteins, and to fuel metabolic processes [10]. Depending on the type of microorganism and the environmental conditions, microorganisms are capable of breaking down many different types of waste materials. For instance, it was recently determined that certain types of bacteria and fungi can use mono-aromatic hydrocarbons (such as benzene) as a source of carbon. The extent to which human manipulation is required to make these outcomes possible varies with each particular circumstance [11,12].

Often, the initial recruitment of microorganisms to the site of contamination is not difficult. If the microorganisms pre-exist in a contaminated environment, they may be attracted to the nutrients provided by the wastes; for this reason, many of these microorganisms are first discovered at the site of contamination [13]. In such a case, the task that remains for humans is to encourage these microorganisms to thrive.
This is achieved by providing and maintaining specific environmental conditions that encourage population growth and maximize the metabolic processes of the organisms, thus maximizing the speed and efficiency of waste reduction [14,15]. In the Exxon oil spill, for example, workers added a dispersant to the oiled water to increase the surface area available for biodegradation, as well as a fertilizer to attract and nourish native microorganisms [1].

Bioremediation technologies have advanced far beyond the simple methods used in the Exxon oil spill. For example, research revealing the potential of microbial fuel cells is significant because this remediation method can both reduce waste and function as an energy source. Microbial fuel cells are devices that can generate electricity from the anaerobic degradation of organic wastes: as a microorganism generates CO2, protons, and electrons through metabolic activity, a microbial fuel cell converts that chemical energy into electrical energy [18]. In addition to treating contaminated sites, microorganisms can be used to detect pollution. For example, the fungus Telephora caryophella can accumulate arsenic in its membranes; by measuring the arsenic levels of this fungus, arsenic levels in the environment can be estimated [19]. Compared with more traditionally used methods, bioremediation is often less invasive and thus poses fewer risks to handlers, in addition to being efficient and promising [5].

One task associated with bioremediation is determining the appropriate environmental conditions for the desired outcome, which are specific to each microbe and to the amount and type of targeted waste. Many factors must be considered, such as pH, temperature, the availability of limiting nutrients, and the moisture of the environment in which the remediation is occurring. Another challenge is to ensure that any intervention on our part (such as the addition of a fertilizer) is minimally risky to the environment or society [15,16,17]. A further concern regarding bioremediation technologies is the release of metabolic byproducts that may be harmful to the environment or to people in high concentrations or over long periods of exposure. Scientists usually believe these byproducts to be minimally risky; the biggest concern is the release of methane by microorganisms. Moreover, research may allow us to increasingly identify and counter the risks associated with bioremediation while reaping its benefits. For example, anaerobic degradation of organic wastes produces large amounts of methane in landfills. Drilling wells into the soil exposes it to oxygen, destroying anaerobic microbes and replacing them with native aerobic microorganisms, which eliminates methane and reduces leachate [20,21].

The extent of the diversity of microorganisms in the soil is only starting to be appreciated. Scientists believe that one third of all living organisms inhabit the soil, although we have most likely identified only about one percent of


existing microorganisms [22]. Further exploration of this abundant variety of still-unidentified microorganisms, encouraged by the growing number of discoveries of microbes that can consume problem wastes, may continue to heighten the potential of bioremediation [19]. Most recently, previously unknown microbes have been discovered that can reduce problem pollutants, including radioactive waste and industrial metals [23,24,25].

Despite all of the potential for success implied by bioremediation research, and the slow loosening of regulations restricting the use of microorganisms, bioremediation still lacks qualified labor, funds, and educational emphasis. All of this requires money, but given that most of the engineering and labor involved in bioremediation is already done by nature, a great deal could be saved in the long run by increasing reliance on it. Moreover, those in control of government spending might be more willing to invest in bioremediation technologies if its market opportunities were considered. For example, in 2000 Elsevier estimated that 1.5 billion of the predicted 30 billion dollars to be spent in Europe on soil remediation the following year could be saved if 10% of the soil were treated through bioremediation. This exceeds the estimated 1 billion dollars spent by the US on soil treatment in 2011 [5].

In all likelihood, new environmental remediation technologies will be necessary to maintain both the health and safety of everyone and the well-established comforts of consumerism and waste in modern societies. These technologies will also need to be minimally risky to the environment and society. With this in mind, society should encourage political and economic support of technologies that are both efficient

[Diagram: Environmental Remediation. A microbe eats oil, digests it into water and harmless gases, and releases those into the soil or groundwater.]



and cost effective in the long run, even if that requires allocating more money for research now. Society should also support the development of high-potential environmental remediation technologies by promoting these areas of research in jobs, education, and political discussions.

Layout: Kaley Brauer ‘17
Associate Editor: Mali’o Kodis ‘14
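The Elsevier savings estimate quoted above can be sanity-checked with simple arithmetic. The sketch below uses only the figures given in the article (30 billion dollars, a 10% share, 1.5 billion saved); the implied conclusion that bioremediation would have to cost about half as much as conventional treatment is our own inference, not a number from the source.

```python
# Back-of-the-envelope check of the soil-treatment estimate cited in the
# article. Figures (30 billion, 10%, 1.5 billion) come from the text; the
# implied ~50% cost reduction is an inference, not a sourced number.
total_spend = 30e9        # predicted European soil-treatment spend, USD
bio_fraction = 0.10       # share of soil treated with bioremediation
claimed_savings = 1.5e9   # savings claimed in the estimate, USD

conventional_cost = bio_fraction * total_spend        # cost of that share today
implied_bio_cost = conventional_cost - claimed_savings
implied_discount = claimed_savings / conventional_cost

print(f"Treating 10% of soil conventionally: ${conventional_cost / 1e9:.1f}B")
print(f"Implied bioremediation cost: ${implied_bio_cost / 1e9:.1f}B "
      f"({implied_discount:.0%} cheaper)")
```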

References
[1] Yarris L. Lessons learned from the two worst oil spills in U.S. history [homepage on the Internet]. 2011 Aug 18 [cited 2013 May 10]. Available from: U.S. Department of Energy Web site: http://newscenter.
[2] Gunther S. Huge American landfills and the people who live nearby [homepage on the Internet]. 2011 Aug 12 [cited 2013 Apr 20].
[3] Helman C. America's biggest landfills. Forbes. 2010 Oct 10 [cited 2013 Apr 20]. Available from: -energy-biggest-landfills.html
[4] Oyster [Internet]. 2011 [cited 2013 May 10].
[5] Boopathy R. Factors limiting bioremediation technologies [Internet]. Bioresource Technology; 2000 [cited 2013 May 11]. Available from: http:// /mip7013/arquivos/5316_Biorremediacao-1.pdf
[6] Hanley C. Global warming: Why Americans are in denial [Internet]. 2011 Sep 24 [cited 2013 May 11]. Available from: http://www.huffingtonpost.com/2011/09/24/global-warming-why-americans-deny_n_979177.html
[7] Maccarone L. Senior Sanitary Engineer, RI Dept. of Environmental Management, Office of Waste Management [email correspondence]. 2013 Mar 20 [cited 2013 Apr 3].
[8] U.S. Environmental Protection Agency. Resource conservation: Composting basics [homepage on the Internet]. 2013 Jan 8 [cited 2013 May 7]. Available from: U.S. Environmental Protection Agency Web site.
[9] Burkart K. Boy discovers microbe that eats plastic [Internet]. Mother Nature Network. 2009 Jun 12 [cited 2013 Mar 16].
[10] University of Illinois Extension. The science of composting: Composting for the homeowner [Internet]. No date [cited 2013 May 7]. Available from: University of Illinois Web site: homecompost/science.cfm
[11] Adeniji A. Bioremediation of arsenic, chromium, lead and mercury [Internet]. U.S. Environmental Protection Agency. 2004 Aug [cited 2013 May 4]. Available from: download/studentpapers/bio_of_metals_paper.pdf
[12] Farhadian M, Vachelard C, Duchez D, Larroche C. In situ bioremediation of monoaromatic pollutants in groundwater [Internet]. Bioresource Technology. 5296-5298.
[13] Burd D. Plastic not fantastic [lab report]. 2008 Apr 20. 6 p. [cited 2013 Feb 17]. Available from: archives/2008/08BurdReport.pdf
[14] United States Environmental Protection Agency. Bioremediation of hazardous wastes: Research, development and field evaluations [Internet]. 1995 Sep [cited 2013 Mar 20]. Available from: http://www.ep tio/d ownload/rem ed/biosym.pdf



[15] United States Environmental Protection Agency. A citizen's guide to bioremediation [Internet]. 2012 Sep. 1-2 [cited 2013 Mar 20]. Available from: http:// /pdfs/suppmaterials/treatmenttech/bioremediation.pdf
[16] Regenesis. Bioaugmentation [Internet]. 2009. Available from: bioaugmentation/
[17] Vidali M. Bioremediation: An overview [Internet]. IUPAC. 2001;73(7):1163-1171. Available from: pac/pdf/2001/pdf/7307x1163.pdf
[18] Greenman J, Galvez A, Giusti L, Ieropoulos I. Electricity from landfill leachate using microbial fuel cells: Comparison with a biological aerated filter [Internet]. Elsevier. 2009;44:112-119 [cited 2014 May 9].
[19] Baker Environmental Inc. Treatability study work plan [Internet]. Department of the Navy. 1995 Jan 9 [cited 2013 May 8].
[20] Maurice C. Bioindication and bioremediation of landfill emissions [Internet]. No date [cited 2013 May 10]. Available from: Lulea University of Technology, Department of Environmental Engineering Web site.
[21] Global Earth Products. Aerobic landfill solutions [Internet]. No date [cited 2013 May 10].
[22] Dey U, Mondal NK, Das K, Shampa D. An approach to polymer degradation through microbes. Journal of Pharmacy [Internet]. 2012 May-Jun;2(3):1-3 [cited 2013 May 8].
[23] Robbins J. The hidden world of soil under our feet [Internet]. 2013 May 11 [cited 2013 May 12]. Available from: http://www.nytimes.com/2013/05/12/opinion/sunday/the-hidden-world-of-soil-underour-feet.html?pagewanted=1&ref=opinion
[24] Emery C. Genetic diversity of cleanup microbes [Internet]. Frontiers in Ecology and the Environment. 2009 Oct;7(8):403 [cited 2013 May 6].
[25] Gadd GM. Microbial influence on metal mobility and application for bioremediation [Internet]. Elsevier. 2004;122(2-4):109-119 [cited 2013 May 11]. Available from: http://www.scienc ticle/pii/S0 016706104000060
[26] Gulf oil spill [image on the Internet]. 2011 [cited 2013 May 12].
[27] Microbe in oil [image on the Internet]. 2011 [cited 2013 May 12]. Available from: biorem_img03.jpg
[28] Microbe explanation [image on the Internet]. 2011 [cited 2013 May 12]. Available from: biorem_img01.png



What if a paper cut were a death sentence? Reaching into a file during a business meeting, you nick your finger on the edge of a memo. Today, the small cut would mean nothing more than a bandage. At the very worst, an infection might set in; you would be prescribed antibiotics to combat the harmful bacteria that entered through the cut, and then you would move on with life. But what if there were no antibiotics? What if that cut provided an entryway for pernicious bacteria into your body, and there were no way to kill them? In such a world, a paper cut could become the equivalent of a death sentence.

A future where bacteria are completely resistant to every antibiotic in humanity’s arsenal may be a possibility. With bacteria becoming increasingly resistant to antibiotics and the development of new antibiotics proceeding at too slow a rate, humans face the very real possibility of inhabiting a world in which we are unable to fend off harmful bacteria [1]. The potential causes and consequences of antibiotic resistance are too varied and numerous to address in this single article. Instead, this article calls attention to the impending threat of antibiotic resistance, namely how it has come to be a problem, and to a few select avenues for ameliorating the issue, some of which look promising, others of which seem rather bleak.



PICTURES: Pigmented bacteria and mold species propagated on a 4’ by 6’ nutrient agar plate. Taken during the spring of 2013 by Eli Block.



The mechanism of antibiotic resistance is beautifully simple: antibiotics are weapons developed for use against competing microbes [2]. Bacteria produce antibiotics to inhibit the function of competing bacteria by hindering processes like DNA replication and protein production [3]. Bacterial populations counteract this warfare by developing resistance, more specifically by acquiring genes that produce adaptations to counteract antibiotic-imposed limitations [3]. An example of such an adaptation is an efflux mechanism that ejects harmful antibiotics from the bacteria before they can wreak havoc within the cell [4]. The elegance of this system lies in its mechanism for adaptation. Bacteria, unlike humans, are capable of horizontal gene transfer, in which genes are passed from one organism to another without reproduction [5]. Therefore, resistant bacteria can develop adaptations by acquiring genes not from their parents, but from their peers. These adaptations make bacteria extremely dangerous for humans. We rely on antibiotics to combat and destroy harmful bacteria within our body [6]. However, when bacteria have adaptations that allow them to survive in the presence of antibiotics, humans have limited recourse when fighting bacterial diseases [6].



This mechanism accounts for the emergence of antibiotic resistance in nature. However, human activity, particularly within the food industry, has helped hasten it.

Antibiotics are routinely fed to animals on concentrated animal feeding operations (CAFOs), regardless of whether the animal is actually sick. These antibiotics are meant to ward off potential diseases and increase the percentage of healthy animals produced, thereby increasing the CAFOs’ profits [7]. To determine the consequences of such antibiotic overuse, a recent study collected samples from a Chinese CAFO of ‘fresh’ manure, manure in the process of becoming fertilizer, and soil from farms fertilized with this manure [7]. The researchers compared all three with soil from a Chinese forest and manure from pigs that had never been fed antibiotics [7]. The study found that the samples from the Chinese CAFO had “149 unique resistance genes, present anywhere from 192 to 28,000 times more frequently than in the control samples” [7]. These results are alarming: use of these antibiotics is increasing the number of resistance genes at a dizzying rate. Humans are using antibiotics to increase short-term monetary gains at the risk of creating antibiotic-resistant bacteria. The pressing question becomes whether this increase in resistance genes on CAFOs can affect human health. James Tiedje, one of the researchers on the original study, asserts that yes, a negative impact on humanity is possible [7]. He contends that by increasing the concentration of antibiotic resistance genes within the environment, it is only a matter of time before they start affecting humans [7]. Tiedje’s assertion that this transfer is possible has recently been supported by a separate study. A paper published at the end of March reported that two Danish farmers on different





farms contracted methicillin-resistant Staphylococcus aureus (MRSA) infections [8]. MRSA is a strain of Staphylococcus aureus bacteria [9]. Other strains of Staphylococcus aureus live innocuously on people’s skin [9]. Staphylococcus aureus can be dangerous if it enters the body through cuts, but it is usually contained and fought off by the immune system or antibiotics [9]. However, the MRSA strain is resistant to most antibiotics, which makes treatment difficult and the bacteria particularly frightening [9]. In the study, MRSA bacteria were found both on the farmers and on their farm animals [8]. More importantly, in the majority of cases the bacteria found on the humans and the animals were genetically identical [8]. Therefore, the farmers and their animals shared the infection [8]. Furthermore, the pattern of genetic variance of the MRSA samples taken from the farmer and her animals strongly suggests that the MRSA started in the animals and spread to the farmer [8]. Although the strain of MRSA found on these farms has proven to be slightly different from the most common form, the Danish farm example provides strong evidence for the transfer of antibiotic-resistant bacteria from animals to humans [8]. Further, the animals on these farms were never fed antibiotics and, therefore, would not have been able to develop antibiotic-resistant strains on their own [8]. This suggests the MRSA must have traveled through other means to get to the farm, which calls attention to the difficulty of containing resistant bacteria and makes a solution to resistance even more important [8]. The growing threat of antibiotic use on CAFOs has incited government action. In mid-March, New York Congresswoman and microbiologist Louise Slaughter proposed legislation that would curb antibiotic use on CAFOs in an effort to slow resistance [10].
The legislation, entitled the Preservation of Antibiotics for Medical Treatment Act (PAMTA), outlines eight classes of antibiotics that should be banned from “non-therapeutic use” in animals, meaning any use besides the treatment of disease [10]. The prospect of government regulation of antibiotic use on CAFOs, in order to slow the development of resistance in animals and the transfer of that resistance to humans, is one avenue for alleviating the threat of antibiotic resistance. However, such laws have been proposed since as early as 1977 without much action following, which suggests that more expedient processes may need to supplement government help in solving this problem [10].



The pharmaceutical world has proposed a solution to resistance called antibiotic cycling, which would increase antibiotic shelf life by alternating which antibiotics are available to the public and, therefore, slowing the development of resistance [11]. Antibiotic cycling hinges on the idea of “drug rejuvenation”: nature may ‘deselect’ genes for adaptations against antibiotics when those antibiotics are not present in the environment, because it is no longer beneficial to carry that specific set of adaptations [11,12]. In essence, the high energy cost of maintaining adaptations that are no longer helpful forces the bacteria to deselect for them [11,12]. Thus, rejuvenation would effectively turn resistant bacteria back into nonresistant bacteria and restore the original antibiotic’s effectiveness [11]. Antibiotic cycling would incite drug rejuvenation by alternating the use of different antibiotics [11]. For example, in cycling two antibiotics, while the first is in use, bacteria accrue adaptations that make them resistant to it [11]. In the period when a second antibiotic is prescribed and the first is taken off the market, any resistance to the first antibiotic would dissipate as it is deselected for, allowing the first antibiotic another bout of effectiveness and hypothetically increasing the overall amount of time during which it remains effective, even if that time is interrupted by periods of disuse [11].

While this system would extend the amount of time antibiotics are effective, it provides little incentive for pharmaceutical companies because it has yet to be proven soundly effective and would potentially decrease initial profits [11]. The initial reduction in profits seems counterintuitive, because the drug would be effective for a longer time after development, but profitable periods would be interspersed with periods in which the drug brings in no profit; therefore, it would take longer to regain investment and accrue profit [11]. Even without the reduction in immediate profits that would follow implementation of antibiotic cycling, the antibiotics industry already struggles with the high cost of development and rapid antibiotic resistance. Burdened with a high cost of development for a product that will quickly become ineffective, many pharmaceutical companies are turning their backs on developing antibiotics and investing their capital elsewhere [1]. Consequently, society is being left with comparatively few antibiotics to fend off increasingly resilient bacteria [1]. The American government has recognized the increasing shift away from antibiotic development, and in 2012 passed a law offering incentives for antibiotic development, including extended patents for antibiotics combating certain diseases and an easier process of federal review [1].
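The cycling logic described above can be sketched as a toy simulation. This is illustrative only: the two-drug schedule and the `gain` and `decay` rates are invented for the sketch and are not drawn from the cited studies.

```python
# Toy model of antibiotic cycling. While a drug is in use, resistance to it
# rises (selection pressure); while it is off the market, resistance falls
# (the fitness cost of keeping unneeded genes). Rates are made up.
def cycle(periods, period_len, gain=0.10, decay=0.05):
    """Track the resistant fraction for two alternating antibiotics."""
    res = [0.0, 0.0]  # resistant fraction of the population, for drugs A and B
    history = []
    for p in range(periods):
        active = p % 2  # which drug is currently prescribed
        for _ in range(period_len):
            # selection toward resistance against the drug in use
            res[active] = min(1.0, res[active] + gain * (1 - res[active]))
            # deselection of resistance against the drug off the market
            res[1 - active] = max(0.0, res[1 - active] * (1 - decay))
        history.append(tuple(res))
    return history

for i, (a, b) in enumerate(cycle(4, 10)):
    print(f"after period {i + 1}: resistance A={a:.2f}, B={b:.2f}")
```

In this toy model, resistance to whichever drug is off the market decays each period, capturing the “drug rejuvenation” effect the cycling proposal relies on.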

Overall, the pharmaceutical industry’s response to bacterial resistance to antibiotics has been lackluster. At its best, it created a well-theorized cycling system that was stymied by a lack of economic incentive, a problem that has kept the entire industry from productively helping to combat antibiotic resistance.

Other methods of combating antibiotic resistance have come from basic and applied science laboratories. Some of these methods revolve around key nutrients bacteria need to survive, and around making those nutrients either accessible or inaccessible.

One company focusing on exploiting these key nutrients is PracticaChem LLC out of Notre Dame [13]. Headed by Marvin Miller, PracticaChem is testing the attachment of iron to antibiotics [13]. This method takes advantage of two properties of bacteria. First, most bacteria need iron to survive, yet it is relatively scarce [13]. Second, bacteria absorb iron from their surroundings [13]. This absorption occurs through multiple mechanisms, most prominently through chelation followed by active transport [14]. Chelation is the process whereby bacteria secrete small compounds called siderophores that leave the cell and seek out iron in the environment [14,15]. After the siderophores find and bind to iron, active transport, or transport mediated by proteins, brings both the iron and the siderophores into the cell, where the iron is released for cell use [14,15].

Miller’s team is exploiting this iron-uptake mechanism by creating analogues of siderophores [16]. These analogues, which use the functional group maleimide, enter the bacteria through the same active transport as the siderophores but carry antibiotics with them into the cell [16]. By creating an analogue that enters the cell through iron uptake systems, Miller’s team is essentially tricking the bacteria into absorbing the antibiotic.

While Miller’s research is significant in that it provides an entirely new mechanism to treat diseases, it becomes even more important in its specificity. Miller’s maleimide-based analogue was designed to target Mycobacterium tuberculosis [16]. When tested, this mechanism delivered antibiotics against Mycobacterium tuberculosis but not against other types of bacteria [16]. The specificity of this treatment method means it would only be prescribed to fight Mycobacterium tuberculosis, which would reduce the exposure of unrelated bacteria to the drug and, therefore, slow the development of resistance and increase the drug’s shelf life [16]. Thus, exploiting a bacterium’s iron uptake system allows for extreme drug specificity, which would limit prescription of that drug to a smaller number of people and, therefore, decrease the rate of resistance [16].

A second laboratory, out of Rice University, is taking the opposite approach: starving bacteria into giving up resistance. The logic behind this approach is simple. Retaining and copying genes costs a large amount of energy [17]. Thus, when essential elements or nutrients are scarce, bacteria will drop genes that are not immediately beneficial in order to conserve energy [17]. The research team at Rice, headed by Pedro Alvarez, tested and confirmed the hypothesis that in the absence of essential elements like oxygen, bacteria deselect for their antibiotic resistance when that resistance is not beneficial [17]. Alvarez asserts that this information can be applied to prevent the spread of resistant bacteria from CAFOs to society [17]. He proposes that wastes drained out of CAFOs should flow through a nutrient-devoid barrier, which would force bacteria into a state of distress and, therefore, force them to deselect for resistance genes [17]. The findings from these and other laboratories seem promising. However, the true test for these developments will be implementation in society and whether they can prove effective outside of the lab.



Since antibiotic resistance endangers everyone who takes antibiotics, it poses a pressing threat that has garnered wide-scale research and debate in search of a solution. Such a solution could be inserted at a number of stages in the development of resistance, from CAFO regulation, to the systematic use and disuse of antibiotics, to more specific antibiotics. However, the best path to solving antibiotic resistance will have to draw from all of these avenues, requiring a complete overhaul of antibiotic distribution and use.

ASSOCIATE EDITOR: OYINKAN OSAMBIRO ‘14
LAYOUT: KALEY BRAUER ‘17

References
[1] Wetzstein C. Antibiotic-resistant ‘superbugs’ alarm health care industry. The Washington Times [Internet]. 2013 [cited 2013 Apr 14]. Available from: mar/19/antibiotic-resistant-superbugs-alarm-health-care-i/?page=all
[2] Whittaker D. Bacterial warfare using antibiotics and communication. Beacon [Internet]. 2013 Apr 22 [cited 2013 May 9].
[3] Neu H. The crisis of antibiotic resistance. Science. 1992 Aug;257(5073):1064-1073.
[4] Silva J. Mechanisms of antibiotic resistance. Current Therapeutic Research. 1996;57:30-35.
[5] Amabile-Cuevas C, Chicurel M. Horizontal gene transfer. American Scientist. 1993 Jul-Aug;83(4):332-341.
[6] The antibiotic crisis: How did we get here and where do we go? Medical News Today [Internet]. 2011 [cited 2013 Apr 15]. Available from: php
[7] McKenna M. Antibiotic-resistant bacteria surround big swine farms, in China as well as the U.S. Wired Science Blog [Internet]. 2013 Feb 12 [cited 2013 Feb 18]. Available from: http://www.wired.com/wiredscience/2013/02/china-resistance-hogs/
[8] McKenna M. Gene sequencing pinpoints antibiotic resistance moving from livestock to humans. Wired Science Blog [Internet]. 2013 [cited 2013 Apr 14]. Available from: wiredscience/2013/03/ag-drugs-proof/
[9] What is MRSA? Why is MRSA a concern? How is MRSA treated? Medical News Today [Internet]. 2013 [cited 2013 Apr 14].
[10] Slaughter introduces Preservation of Antibiotics for Medical Treatment Act [Internet]. [cited 2013 Apr 14]. Available from: http:// &id=2873&Itemid=100072
[11] Lavin BS. Antibiotic cycling and marketing into the 21st century: A perspective from the pharmaceutical industry. Infection Control and Hospital Epidemiology. 2000 Jan;21(S1):S32-S35.
[12] Herrmann M, Laxminarayan R. Antibiotic effectiveness: New challenges in natural resource management. Annual Review of Resource Economics. 2010 [cited 2013 Apr 2]. Available from: http:// disease-management/arre21_laxminarayan_1on00q.pdf
[13] Fighting the microbial war against superbugs. Innovation Park. PracticaChem [Internet]. [cited 2013 Apr 1].
[14] Miller M, Zhu H, Xu Y, Wu C, Walz A, Vergne A, Roosenberg J, Moraski G, Minnick A, McKee-Dolence J, Hu J, Fennell K, Dolence E, Dong L, Franzblau S, Malouin F, Mollmann U. Utilization of microbial iron assimilation processes for the development of new antibiotics and inspiration for the design of new anticancer agents. Biometals [Internet]. 2009 Feb;22(1):61-75.
[15] Neilands JB. Siderophores: Structure and function of microbial iron transport compounds [Internet]. 1995 [cited 2013 Apr 14].
[16] Becker A. Trojan horse tuberculosis treatment. Chemistry World [Internet]. 2012 Aug 22 [cited 2013 Apr 1].
[17] Antibiotic resistance can be starved out of bacteria. Medical News Today [Internet]. 2013 [cited 2013 Apr 14].
[18] Antibiotic resistance cartoon [image on the Internet]. [cited 2013 Apr 15]. Available from: Antibiotic-resistant%20bacteria.html
[19] Staphylococcus aureus bacteria [image on the Internet]. [cited 2013 May 7].
[20] Pills [image on the Internet]. [cited 2013 May 7]. Available from: Pills_3.JPG



Last Few Months of Churchill:

Visual Health Assessment of North Atlantic Right Whales

During the summer of 2000, an adult male North Atlantic right whale named Churchill became entangled in an unknown type of fishing gear. Unable to open his mouth to feed due to the tightly wrapped fishing lines around his head and mouth, he had to live off his blubber layer for months. Moreover, the rope deeply embedded around his rostrum (the upper part of the head) caused a serious infection. Despite several attempts to disentangle the fishing lines, the rescue team failed to remove them. On September 16, 2001, the team stopped receiving satellite signals. Somewhere off the New Jersey coast, Churchill slipped beneath the waves for the last time.

Rake marks, defined as two or more parallel lines in front of each blowhole, are observed on whales in poor health. The presence of orange whale lice around the blowhole has been associated with injuries.

Churchill’s formerly rotund shape and shiny black skin gave way to white blotches and gray peeling skin, protruding bone around the spine and back, deep radiating cracks in the skin around his blowholes, and a heavy covering of orange whale lice, all clear signs of decaying health. Providing a graphic example of the physical changes that accompany health deterioration in right whales, Churchill became the “poster child” for physical decline in these whales, teaching researchers which visual cues to use to assess right whale health from the outside.



Distinct from scars, lesions may appear as white or grayish plaque-like patches with indistinct edges, and include circular, outline, and swath types. Research suggests that swath lesions may indicate a fatal condition.

In contrast to the concave backs of right whales in poor condition, healthy right whales have a flat back or “fat rolls” in the area behind the blowholes where they accumulate blubber.






“Crossing the Chasm” is a key marketing concept introduced by Geoffrey Moore to describe a product reaching profitability and popularity amongst consumers. When a product has “crossed the chasm,” it has moved from the visionaries and early adopters to the mainstream market [1]. Would this concept work for healthcare as well, especially in making preventative insurance plans cross the healthcare chasm? By “crossing the healthcare chasm,” I mean the widespread adoption of the preventative health paradigm to balance the current specialized treatment of diseases. Currently, a major obstacle to this vision is the unpopularity of preventative care relative to specialized care amongst Americans, due to social beliefs that push for treatment over prevention of disease. We can overcome this barrier by combining coverage for preventative and specialized care in insurance plans, reducing the likelihood of chronic conditions, and transforming the approach to receiving holistic medical care. Ultimately, these approaches will lead to lower healthcare expenditures and a more effective healthcare system that promotes health and wellness over treatment and symptom management.

The Healthcare Cost Inequality


The United States spends more on healthcare than any other country. In 2010, the US spent $2.6 trillion on healthcare, an average of $8,402 per person, more than any other country, yet its outcomes of care are worse than those of other countries [2, 3]. Furthermore, in the United States only 15.7% of healthcare funds are allocated to preventative care, while 61.3% goes to hospital care, physicians, and specialized care [4]. The current US healthcare system emphasizes specialized care over preventative care, even though preventative care is more cost-effective and resource-efficient. If insurance companies allocated more funds for preventative care in their insurance plans, spending on hospital care, specialized care, constant clinical visits, drug prescriptions, and administrative costs would decline [4]. Hence, health insurance companies should implement preventative care in insurance plans with prevention programs and built-in incentives. This makes health sense for the individual and economic sense for the insurance companies.



Crossing the Healthcare Chasm to Preventative Care
Cindy Abarca, Brown University

The Chasm


Preventative care has not crossed the healthcare chasm, nor been integrated into specialized care, for many reasons. Healthcare expenditure is focused on the treatment of medical conditions instead of the prevention of the factors that underlie them [4]. This focus neglects fundamental factors, such as exercise and nutrition, that contribute to high rates of chronic conditions like obesity, diabetes, and heart disease. Furthermore, prevailing social values and beliefs focus on curing diseases rather than promoting health, because it is deemed easier to treat a disease than to change a lifestyle [4]. Some worry that preventative care necessitates

personal involvement in a patient’s life to promote a healthier lifestyle, and also worry that an individual may be unwilling to make such changes. Yet studies have shown that people possess a strong drive to engage in efforts directed at achieving future rewards [5]. Due to these barriers, the prevention of disease and the promotion of health and well-being are relegated to second place after the treatment of disease by clinical diagnosis and medical intervention [4]. Hence, an approach to overcoming these obstacles requires improving the popular perception of preventative care through incentives to participants.


ARTWORK & LAYOUT: Insil Choi RISD ‘14 & Eric Bai ‘15 | SPRING 2014 || THE TRIPLE HELIX



Crossing the Chasm


To increase involvement in preventative care, insurance companies must integrate prevention services into health insurance plans alongside specialized care coverage. This integration can be based on the framework of the “Live Well, Be Well” Prevention Plan developed by US Preventive Medicine, a private company whose mission is to directly motivate consumers to lead healthier lifestyles through participation in wellness programs [6]. “Live Well, Be Well” is a health improvement program that identifies an individual’s top health risks and designs a customized prevention plan to reduce those risks [6]. It embodies the mission of preventative care by providing individuals with services that reduce their likelihood of having to seek unnecessary specialized care. Preventative care in health insurance plans would follow the same approach, initially providing individuals with an assessment to identify their potential medical risks. The insurance company would then provide the individual with a Health Coach, who would collaboratively design a prevention plan and select prevention services to diminish the probability of the risk factors found in the assessment. As in “Live Well, Be Well,” programs and resources would include online and in-person Health Coaches; reminders of screenings via text message, email, and/or phone call; free gym memberships for individuals at high risk of obesity; an online system to track individual progress; and educational programs, among others. I propose that a wellness program similar to “Live Well, Be Well” be incorporated into current health insurance plans to provide both specialized care, as necessary, and preventative care.

Generally, people find it difficult to change their behavior, especially if others are forcing them, but incentives will encourage participants to follow the prevention program. Such incentives would include reductions to deductibles, premiums, and copayments, the magnitude of which would be based on the participant’s level of involvement. To measure involvement, each time an individual participates in a program or preventative service, he or she will receive points, which are added to a score that determines their level and reduction in copayments [6, 7]. Based on the overall score, their annual or monthly premium will be reduced. Hence, the preventative care insurance plan will lower the health insurance premium of the insured the more he or she actively participates. Some may argue that the preventative insurance plan amounts to bribing and forcing individuals to change their lifestyles through incentives, but this is not accurate. The incentives are intended to motivate participants to partake in the prevention programs not only as a means of improving their health but also by teaching them to associate a healthier lifestyle with monetary savings. Incentives popularize the preventative insurance plan and reward individuals for their loyalty, similar to cash-back bonuses from Discover. As with cash-back credit cards, consumers will receive rewards in proportion to their involvement with the prevention programs, demonstrating that preventative services provide both instant savings and long-term health results. Lastly, an important aspect of incorporating preventative care into insurance plans is determining how insurance companies will allocate funds between preventative care and specialized care. It is essential for insurance companies to start with the most popular prevention programs and then add more programs based on the savings.
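The points-to-discount mechanism described above can be illustrated with a short toy model. All point thresholds and discount rates here are hypothetical, invented purely for illustration; no actual insurance plan's figures are implied:

```python
# Toy sketch of a points-based incentive scheme.
# Thresholds and discount rates below are hypothetical.

def involvement_level(points):
    """Map an accumulated participation score to an involvement level."""
    if points >= 100:
        return "high"
    if points >= 50:
        return "moderate"
    return "low"

# Hypothetical premium reduction for each level of involvement.
DISCOUNT = {"low": 0.00, "moderate": 0.05, "high": 0.10}

def adjusted_premium(base_premium, points):
    """Reduce the annual premium according to participation points."""
    return base_premium * (1 - DISCOUNT[involvement_level(points)])

# A participant who earned 120 points on a $4,000 base premium
# would pay a 10%-reduced premium of $3,600 in this toy scheme.
print(adjusted_premium(4000, 120))  # 3600.0
```

The point of the sketch is only that participation feeds a score, the score maps to a level, and the level determines the premium reduction; the actual schedule would be set by the insurer.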

Benefits

Shifting the health insurance paradigm to focus more on preventative care will provide many positive results. For example, disease prevention and health promotion can prevent or reduce health problems associated with acute and chronic conditions acquired through personal lifestyle choices. Currently, the most expensive services are the lengthy and costly treatments for chronic conditions, which account for $3 of every $4 spent on healthcare, nearly $7,900 for every American with a chronic disease [8]. A research study on the effectiveness of “Live Well, Be Well” suggested that it can reduce the risk of chronic conditions. The study followed 2,606 individuals who participated in the program for a year and revealed a significant reduction in health risk amongst the participants: 48.70% of the individuals initially considered high risk dropped to moderate risk, and 46.35% dropped from moderate risk to low risk [9]. The study also demonstrated a reduction in the proportion of participants with key health risk factors: 42.7% of participants saw their blood pressure decrease, 29.94% experienced a reduction in stress, 31.13% a reduction in fasting blood sugar, and 24.21% a reduction in alcohol consumption [9]. Hence, preventative care can help delay the onset of diseases and prevent disability or even premature death. Furthermore, introducing individualized prevention programs will reduce costs for the insured by an average of $360 annually [10]. Because an individual will receive health insurance tailored to the prevention of the risk factors revealed in the initial assessment, the insured will not incur the cost of unutilized services. Instead, the individual’s insurance plan will pay for the cost of the prevention programs and for specialized care as necessary, whether for a congenital disease or another medical condition determined by a physician. This is the moment to revolutionize the way health insurance is provided by making preventative care in insurance plans cross the healthcare chasm. We can accomplish this when the healthcare system allows flexibility and accessibility for an individual seeking both prevention and treatment.

References
[1] Schmarzo B. Crossing the Chasm with Big Data. In Focus [Internet]. 2012 Jun 19 [cited 2013 Mar 11].
[2] Henry J. Kaiser Family Foundation. Health care costs: a primer: key information on health care costs and their impact [Internet]. 2012 [cited 2013 Mar 11].
[3] OECD. OECD Health Data 2012: How does the United States compare [Internet]. 2012 [cited 2013 Mar 3]. Available from: unitedstates/BriefingNoteUSA2012.pdf
[4] Shi L, Singh D. Essentials of the US Health Care System. 6th ed. New York: Jones & Bartlett; 2012.
[5] Kivetz R, Simonson I. Earning the right to indulge: Effort as a determinant of customer preferences toward frequency program rewards. Journal of Marketing Research. 2002;39(2):155-170.
[6] Fleming K. How to implement a large-scale preventive health care program. The Bureau of National Affairs [Internet]. 2001 Jan 10 [cited 2013 Feb 18]. Available from: http://www.uspreventivemedicine.com/Files/PDFs/Press-Room/2010/How-to-Implement-Preventive-Health-Care.aspx
[7] U.S. Preventive Medicine. Program [Internet]. 2013 [cited 2013 Mar 11].
[8] Triple Solution for a Healthier America. The impact of chronic diseases on healthcare [Internet]. 2013 [cited 2013 Mar 11].
[9] Loeppke R, Edington D, Beg S. Impact of the Prevention Plan on employee health risk reduction. Population Health Management. 2010;13(5):275-284.
[10] U.S. Preventive Medicine. Lower health care costs [Internet]. 2011 [cited 2013 Mar 11].



Neuroeconomics: An Interdisciplinary Approach



Twenty intrepid academics, split evenly between economists and neuroscientists, convened at Winnetu Oceanside resort in the summer of 2003. Their aim: to give birth to the multidisciplinary approach of neuroeconomics [1]. Emerging from relative obscurity, neuroeconomics went from close to zero citations in 2000 to roughly 900 in 2010 according to Google Scholar, and has established itself with conferences, textbooks, and active societies at universities such as NYU, Caltech, and the University of Zurich. The National Institutes of Health and the National Science Foundation have contributed to the rise of neuroeconomics with funding of up to $20.1 million for programs with ‘neuroeconomics’ in their description [2]. While the study of behaviour has traditionally belonged to the realm of the social sciences, and the study of the physical workings of the brain to neurobiology, neuroeconomics builds on earlier work in behavioral neuroscience and attempts to bridge the social and physical sciences by focusing on how the brain computes information to form decisions [3]. To do so, it employs the techniques and technology already in use in neuroscience to map areas of the brain that show activity during the decision-making process. The latest and most popular is functional magnetic resonance imaging (fMRI), which tracks blood flow in the brain using changes in magnetic properties due to blood oxygenation [4].



A seminal study led by Alan G. Sanfey used fMRI to examine the neural activity of subjects playing an economic game [5]. The game, entitled “The Ultimatum Game,” involved two subjects, one of whom started with $10 and offered to split the money with the other at various ratios, e.g. $8 and $2, or $5 and $5, the crucial point being that if a subject rejects an offer, neither gets a prize. The standard economic theory of utility maximization states that, all else being equal, a person will always seek to maximize his or her utility. In essence, people will always try to gain something from a situation if they are able to. This theory predicts, then, that the second subject would have no qualms with splitting the money in a ratio of $9 to $1, because gaining an extra dollar beats gaining nothing. This ‘game’ was studied extensively before neuroeconomics came into being, but the use of neuroimaging brought a new dimension to it. The researchers confirmed earlier findings that subjects actually rejected offers that were too low

and only accepted ones that were closer to a 50:50 split, even though a rejection would leave both parties empty-handed. The fMRI images of the subjects showed that when an offer was deemed too low or “unfair,” there was increased activity in the anterior insula (Figure A), a region previously linked to the emotion of disgust. Conversely, when an offer was accepted, the dorsolateral prefrontal cortex (DLPFC), associated with social judgment and cooperation, showed much more activity than the anterior insula (Figure B) [6]. Such is the interdisciplinary nature of neuroeconomics that the findings by Sanfey and his team related directly to a 2003 behavioural psychology experiment by Sarah Brosnan and Frans de Waal entitled “Monkeys Reject Unequal Pay.” The experiment was economic in nature and sought to test whether monkeys would recognize and react to differing prizes for the same task. Two capuchin monkeys, placed in cages side by side where they could see each other’s actions, were tasked with giving a token in exchange for either a cucumber (which under normal circumstances they would happily eat) or a grape (which they prefer over cucumbers) [7]. In the first round, Monkey A gave the token and ate the cucumber he received. Monkey B did the same but received a grape instead, to the surprise of Monkey A. In the second round, when Monkey A was given a cucumber in exchange for the token, he threw the cucumber at the trainer. This was repeated with many monkeys, and each test had the same result: the monkeys felt that the game was unfair and therefore rejected a prize they would have accepted outside the experiment. Given that the monkeys rejected the secondary prize so convincingly, it is likely that their emotional feelings of injustice informed their economic decision to discard something they normally considered valuable. By extension, the neural processes were probably similar to those of the human brains shown in the fMRI scans in the Sanfey experiment.
Judging from their reaction of violently throwing back the cucumber, the researchers concluded that the perceived injustice of the game likely evoked similar emotions of disgust and outrage. Both these results point to an emotional cause underlying

the invalidation of the utility maximization principle at the individual level. Brosnan and de Waal postulate that, since capuchin monkeys are exceptionally tolerant and willing to share food, they may also share a mutual understanding of equity. This aspect of

“fairness” in their social structure may explain why the subjects reacted in such a manner, and suggests that a predisposition to an intimate group lifestyle may result in a heightened awareness of “fairness.” Ultimately, what the study explores is the possibility of a neural process for judging fairness. It is this particular pathway and the resultant behaviour that show that the value of the prize was judged on a relative scale. The reasons why economics has not been updated to take these findings into account can be traced back to the origins of classical economic theory. Without the advanced imaging technology that we have today, the founding theorists

Artwork on left: Haily Tran, “An Oversimplified Mechanism of Decision Making”


Left: activity of the anterior insula when faced with an ‘unfair’ offer Right: activity in the dorsolateral prefrontal cortex when faced with a ‘fair’ offer

working in the late 19th century had to bypass their inability to peer into the brain by assuming that its workings were unknowable and that emotions bore no consequence for decision making [8]. Samuelson formalized this assumption in the 20th century under the Theory of Revealed Preference, which postulates that a person’s behaviour can ‘reveal’ their original preference [9]. The Theory of Revealed Preference works under the assumption that humans are rational actors and therefore absolves economists from having to confront the “black box” of the brain. However, now that the technology is available and we are able to understand the brain, Caltech professor of cognitive economics Colin Camerer proposes that neuroeconomics may



influence economics in two ways: incrementally and radically [4]. It can incrementally update economic theories to better reflect the neurobiological reality of decision making. For example, neuroimaging can allow us to quantify how the consumption of drugs affects the pleasure gained from future consumption, an effect traditional economic theory ignores [10]. A radical change to economics would require rewriting the fundamentals of the discipline to account for findings in neuroscience, treating basic principles of neuroscience as part and parcel of economic theory. A key feature of such a new paradigm would be recognizing that decisions are not purely deliberative, as economics assumes, but are instead a combination of automatic processes and deliberate value calculations [11].

The traditionalists, however, have not gone down without a fight. Faruk Gul and Wolfgang Pesendorfer present a sobering critique of the fledgling field of neuroeconomics, entitled “The Case for Mindless Economics,” first articulated in a 2005 article and later incorporated into their 2008 textbook [12]. They argue that economists need not bother themselves with the processes by which people make decisions, but only with what the eventual outcome actually is. As David Levine of Washington University states: “Look, if you are trying to understand a pilot’s ability to land a crippled plane, it’s not the patterns of his neuron firing that’s important. It’s the experience and training that he’s had, and the result of the landing. Neuroeconomics hasn’t offered anything that can improve on those measures” [1].

Even without the unwelcome intrusion of neuroeconomics, the recent recession has cast a spotlight on economics as a discipline. At last year’s Society for Neuroscience meeting, Yale economist Robert J. Shiller highlighted the fact that many economic models did not anticipate the 2008 housing market crash. He argued that, without understanding and taking into account how humans behave in specific situations of uncertainty, economics cannot hope to prevent another crisis from occurring [13]. So while traditionalists may wish to retain their assumptions and models, recent events and advances in technology may conspire to make room for results derived from neuroeconomics to take hold and develop.

Associate Editor: Matthew Lee '15 | Layout: Afra Rahman '17 & Kaley Brauer '17

References
[1] Fischman J. The Marketplace in Your Brain. http://-Marketplace-in-Your-Brain/134524/ (accessed 31 March 2013)
[2] Glimcher P et al. Neuroeconomics: Decision Making and the Brain. 1st ed. San Diego: Academic Press; 2008.
[3] Fehr E, Rangel A. Neuroeconomic Foundations of Economic Choice—Recent Advances. Journal of Economic Perspectives. 2011;25(4):3-30.
[4] Camerer C et al. Neuroeconomics: How Neuroscience Can Inform Economics. Journal of Economic Literature. 2005;XLIII:9-64.
[5] Sanfey AG et al. The Neural Basis of Economic Decision-Making in the Ultimatum Game. Science. 2003;300(5626):1755-1758.
[6] Weissman DH et al. Cognitive control in social situations: A role for the dorsolateral prefrontal cortex. http://www.ncbi. (accessed 31 March 2013)
[7] Brosnan SF, de Waal FBM. Monkeys reject unequal pay. Nature. 2003;425:297-299.
[8] Jevons W. The Theory of Political Economy. 1st ed. London: Macmillan and Co.; 1871.
[9] Samuelson PA. Consumption Theory in Terms of Revealed Preference. Economica. 1948;15(60):243-253.
[10] Bernheim BD, Rangel A. Addiction and Cue-Conditioned Cognitive Processes. American Economic Review. 2004;94(5):1558-1590.
[11] Bargh JA. Automatic and Conscious Processing of Social Information. In: Wyer RS Jr, Srull TK, eds. Handbook of Social Cognition. Hillsdale, NJ: Erlbaum; 1984. p. 1-43.
[12] Gul F, Pesendorfer W. The Foundations of Positive and Normative Economics. 1st ed. Oxford: Oxford University Press; 2008.
[13] Shiller RJ. The Neuroeconomics Revolution. http://www.project-neuroeconomics-revolution (accessed 31 March 2013)








The Fourth Amendment of the United States Constitution guarantees citizens protection against unwarranted search and seizure, but the precise definition of such action has historically been unclear. This lack of clarity has become even more problematic since the advent of electronic communications. The precise legal status of emails has not yet been resolved, but the current statutes governing them are unconstitutional. Given people’s widespread and daily use of the technology, emails should be fully protected by the Fourth Amendment.



While this is a worrying problem on a purely academic level, there is also a real, substantive problem: the government is actually using its ability to seize emails. Most recently, the American Civil Liberties Union, under the Freedom of Information Act, examined the investigative policies of the Internal Revenue Service and found that its institutional policy was to read emails without obtaining a warrant [1]. Regardless of whether or not someone has committed tax fraud, they should be able to avail themselves of the Constitution, with all its various protections.

Not only are emails not currently covered by the Constitution; there are also federal agencies whose policy it is to abuse the unclear status of emails in order to ease their investigative burden. A belief in the United States legal system is a belief in the absolute rights of all those involved, no matter how guilty they may be, and thus it is imperative that this issue be resolved, so that the IRS and other governmental agencies cannot continue to infringe upon citizens’ civil liberties for their own convenience.



The single most important legal statute for this discussion of email privacy is the Fourth Amendment, which states: "The rights of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized" [2]. This is the foundation on which all subsequent discussion of privacy in the US is based.

But in order to assess if the Fourth Amendment applies to a situation, the Supreme Court's interpretation of the Amendment must be examined. That interpretation was brought to light in Katz v. United States, a 1967 case that defined a clear framework for the Fourth Amendment's applicability. In deciding the case, the Court established two conditions that must be met in order for a government action to breach Fourth Amendment rights, both of which are still used today [3].

The first condition is that the government's action must violate an individual's actual, subjective expectation of privacy. To violate the Constitution, the action must unearth something the person fully expected to remain a secret. There must be clear, demonstrable indications, on the part of the person violated, that they expected the evidence to remain unavailable to the public at large. Thus trash left on the curb, photos taken from public property, and almost any public action are all unprotected by the Fourth Amendment.

The second condition is that the expectation of privacy must be one that society in general would recognize, as determined by the courts. This ensures that, while the expectation of privacy must be subjective, it cannot be arbitrarily so. If someone were to delude themselves into thinking that they would have privacy in a public space, they are not protected under the Fourth Amendment, because society as a whole recognizes the lack of privacy such a space affords.

The Sixth Circuit Court of Appeals recently ruled in United States v. Warshak (2010) that emails are protected under the Fourth Amendment [4]. In analyzing the second condition, the Court noted that while service providers have intermediate access to emails, so do telephone companies and the post office, and yet both phone calls and letters are fully protected by the Fourth Amendment. It thus concluded that in order for email to remain an effective form of communication, the expectation of email privacy must be recognized by society. The Court's discussion of the first condition is more interesting. It ruled that the condition was met because specific language and content in the seized emails of the defendant, Warshak, suggested that he assumed his emails would be private. In making this judgment, the Court followed the standard procedure for assessing the first condition. However, for certain forms of personal communication, such as regular mail and telephone calls, it has been established that there is a society-wide expectation of privacy. This suggests that, while the Court certainly set a precedent for email privacy in cases where the content of the emails suggests the defendant expected privacy, the issue is far from conclusively resolved on a broader scale. Until Congress amends the SCA, or the Supreme Court rules that personal email is fully protected by the Constitution, emails will still be vulnerable to government search and seizure.

Given people's widespread and daily use of the technology, emails should be fully protected by the Fourth Amendment.

The other significant document in this discussion is "The Right to Privacy," an 1890 article in the Harvard Law Review by Samuel Warren and future Supreme Court Justice Louis Brandeis. To this day, the article is considered the seminal exposition on privacy rights. In it, Warren and Brandeis argue that privacy is the right to be left alone, and that, as technology advances, so too must the protections afforded by privacy law [5]. This is a clear argument from one of the great jurists in American history that Fourth Amendment protections should be extended to today's technological advances, especially email.





The Stored Communications Act (SCA) codifies the current laws outlining the protection of emails from the government, governing how third parties are to handle the data they possess. Two sections of this act potentially violate the Constitution. Under Section 2703, the government is required to obtain a warrant to review any email sent or received within 180 days of the time of the search. But for emails that are more than 180 days old, the government needs only a subpoena, which is easier to obtain. Furthermore, in neither instance is the service provider required to inform the consumer that his or her information has been shared [6]. This is more than a trivial distinction. Subpoenas require that the company or individual produce certain documents, which must be specified, but the issuer need not know exactly where these documents are kept [7]. The standard for a search warrant is higher. Not only must it be shown that a crime has been committed and that documents relevant to the crime exist, both of which are also required for subpoenas, but the exact location of those documents must also be specified [7]. This requirement is what allows search warrants, and the Fourth Amendment that underlies them, to protect the privacy of citizens and residents of the United States.

Section 2702 of the SCA, "Voluntary disclosure of customer communications or records," is also potentially unconstitutional. Under Section 2702, service providers can release communications to government authorities if they feel that those communications are part of some criminal activity [6].

The reasoning behind the creation of the SCA is sensible given when it was originally written. In 1986, email was accessed almost exclusively through clients, programs installed on computers that download email data and store it on the local hard drive. Given the limited space on their servers, service providers would then delete the downloaded emails, leaving the consumer with the only copy. Under such a system, 180 days was a reasonable time to wait before searching emails, because by then they would only be available if no one had downloaded them; it could be assumed that any emails remaining had been abandoned. While it may seem unjust for the government to search through abandoned emails, abandoning an email on a server is analogous to abandoning a physical object on the side of the road, which, as discussed above, is not protected by the Constitution. But today, many people access their emails through Internet browsers, never downloading their emails and using

the service provider's servers as permanent storage. Even those who don't rely on the provider's storage are not protected: to ensure customer satisfaction, service providers keep all emails online in case their users decide to access them through the web. Emails stored online for longer than 180 days can no longer be considered abandoned, but the government still has the right to search them. There have been several attempts to update these policies. As of April 2013, there is a bill on the floor of the Senate that would require warrants for access to all emails, regardless of age [8]. The bill is co-sponsored by Senator Patrick Leahy, D-VT, the author of the original 1986 ECPA. Though this would seem to bode well for the bill's success, Leahy has proposed amendments in the past to no avail. In 2011, he proposed a similar bill that did not pass, and in 2012 he added an amendment with the same content to a video privacy act that never made it out of committee [9]. Still, the country's history of legal rights and laws makes it clear that the current measures to protect email privacy are both insufficient and unconstitutional.



As it currently stands, the SCA is unconstitutional. As the Sixth Circuit Court has already outlined, the second condition for Fourth Amendment applicability is met by email. The first condition is also met, by the presence of passwords on email accounts. The use of a password, a digital lock and key, creates the expectation that only the user has access to that service; in essence, there is a reasonable expectation of privacy. Despite Justice Brandeis' presence on the bench, it took the Supreme Court 40 years to extend protection to telephone wiretaps; it should not take another 40 to protect emails. By examining analogous court cases, applying the Supreme Court's own standards, and considering the intellectual precedents of this issue, it is clear that emails should be afforded full protection from unwarranted search and seizure under the United States Constitution.

[1] Wessler N. New Documents Suggest IRS Reads Emails Without a Warrant [Internet]. 2013 Apr 10 [cited 2013 May 8]. Available from: http://www.
[2] U.S. Const. amend. IV.
[3] Katz v. United States, 389 U.S. 347 (1967).
[4] United States v. Warshak (6th Cir. 2010).
[5] Warren S, Brandeis L. The Right to Privacy. Harvard Law Review. 1890;4(5):193. Available from: Privacy_brand_warr2.html
[6] 18 USC Chapter 121: Stored Wire and Electronic Communications and Transactional Records Access [Internet]. [cited 2013 Apr 16]. Available from:
[7] Justice Department. Grand Jury Manual, Chapter 3, Part II [Internet]. [cited 2013 May 8]. Available from:
[8] Email Warrants Proposed in Senators' Bipartisan Reforms to ECPA Legislation [Internet]. [cited 2013 Apr 16]. Available from:
[9] Proposed legislation would reform digital privacy law. SC Magazine [Internet]. [cited 2013 Apr 16]. Available from: proposed-legislation-would-reform-digital-privacy-law/article/203386/





In his 2013 State of the Union address, President Obama praised the innovative curriculum of P-TECH, a new Brooklyn high school designed to condense and streamline high school and college education with early exposure to the working environment. This New York City educational experiment was conceived as a public, intellectual, and private collaboration, backed by the Department of Education, the City College of Technology, and IBM. Obama sees great potential in this form of skill-based secondary education, inspired by "countries like Germany, [which] focus on graduating their high school students with the equivalent of a technical degree…so that they're ready for a job" [1]. His support for innovative approaches sparks hope for secondary education reform; however, his goals for post-secondary education revolve only around increasing access to it, with no significant effort to improve the quality of the education offered at colleges and universities. That the government's sole focus for post-secondary education is increasing access to college reflects a lack of awareness of more fundamental issues of quality. While 70% of recent high school graduates pursue higher education [2], only 59% of graduates are employed in jobs that require a college degree [3]. For the 70% in higher education, it is unsettling to accept sociologists' recent finding that only limited learning occurs on college campuses. In light of this dissonance between the diminished learning at colleges and the labor market's demand for degrees, the focus should shift from increasing access to improving the quality of learning at these institutions of higher education. Following the same line of reasoning as P-TECH's conception, US higher education should also incorporate pre-professional tracks that increase the direct market value of a college education, at least for those comfortable with committing to a field early on.
One area of especial shortage in the US is primary care physicians. An accelerated form of medical education in a six-year continuum, as in many European countries, rather than the traditional four years of undergraduate study plus four years of professional school, may offer an appealing remedy for the diminishing quality of college education, rising tuition, and the dearth of primary care physicians. An accelerated six-year medical program focused on primary care would provide a guaranteed primary care workforce, as well as an improved link between basic science and clinical knowledge.



Both American students and society can hugely benefit from emulating European countries’ system of a combined undergraduate and professional education—medical education in particular.



Unlike in other countries, US medical schools require a second application for entry after four years of undergraduate education. Many other countries follow a five- to six-year medical track that combines undergraduate and professional components. For example, British high school students apply to attend medical school directly, after demonstrating course competencies in their A levels, a more advanced British equivalent of US high school AP tests [4]. The first three years are devoted to a survey of basic biomedical courses, with the latter three years devoted to learning clinical medicine [5]. The four years of American undergraduate education, on the other hand, are less structured for aspiring physicians. As long as American premed students complete the course requirements and pass the entrance examination (the MCAT), they are eligible to apply to medical school, an additional four years of education. The first two years of American medical school focus on textbook learning of medically relevant sciences, while the third and fourth years are spent on practical clinical work. There are clear structural advantages to both systems: the British one allows acceleration and continuity in medical education; the American system allows secondary selection of medical school candidates, and thus can accommodate students at diverse stages of their lives. America should consider making both systems available to offer a variety of learning options for aspiring physicians.

These structural differences influence the style of medical teaching and ultimately clinical outcomes. The lack of integration in the US between the preclinical basic sciences and clinical learning, commonly referred to as the "pre-clinical/clinical divide," presents significant problems [6]. American medical students are less equipped to make the conceptual links between specialized scientific knowledge and human disease processes. While medical students in other countries learn basic science and its clinical implications concurrently, US students must make these connections on their own. This has long been recognized as an ineffective way of learning that forces the student to use "surface" learning techniques that train the memory, not the mind. Knowledge gained in this way does not encourage critical thinking and is invariably forgotten, which contributes to the passive learning problem plaguing the intensive, examination-driven system of the US [7]. A six-year curriculum, on the other hand, provides more coherence in the transition from basic to clinical sciences. Although a six-year continuum in no way guarantees that its students seamlessly connect the basic and clinical sciences, the medical teaching institution does have longer, and thus greater, control over the presentation of the curriculum. In this way, schools can better bridge basic undergraduate learning with later clinical learning and encourage learning at a more profound level.

Admittedly, one benefit of the American system is its individualized, personal teaching, which places emphasis on greater practical experience. This is largely possible due to the selectivity of medical school entry, which creates a small student body. To borrow an automobile analogy [8], the American system is designed to produce mechanics whose job is to fix anything that needs repairing, while the European system educates engineers who understand how every part works. By merging the practical learning of the American system with the deep understanding of disease mechanisms of the British design, a new, combined medical education format may give a much fuller education.





There are some logistical challenges that complicate American adoption of this form of integrated medical education. Since the Tenth Amendment gives state and local governments authority over public education, educational standards vary across school districts. Keeping track of different systems of secondary schools is difficult because of their sheer number and diversity across the US. Disparities in the caliber of faculty, grading systems, and curricula make it hard to assess the true ability of a student. In this light, the medical school application serves as a practical second filter to assess a candidate's qualifications. Since there are fewer colleges than high schools, medical schools can better discern students' abilities at a more standardized level when admissions are conducted after college rather than after high school. Furthermore, students from poorer-performing districts or disadvantaged backgrounds have a second chance to reveal hidden talent during college. College functions as a clean platform to learn, placing everyone in an intellectual environment with equal access and opportunities. Thus, assessments at the college level give a better portrait of the candidates.

Furthermore, choosing a lifelong occupation at the age of 17 can be unsettling for many young Americans. Many European systems require students to choose a science or humanities track during high school, roughly around the age of 14. When applying to colleges, European students must choose their course judiciously, for once they are admitted to a course, it is nearly impossible to switch without starting over. In contrast, 80% of freshmen at US colleges are unable to commit to a major, and 50% end up changing majors [9]. Given this pervasive uncertainty among American college freshmen, committing to a profession at 17 may be more daunting here than in Europe, where a culture of early commitment to specialized learning already exists. However, the success of accelerated programs shows that six-year medical schools can work in the US. Currently, there are about thirty programs in the US that offer a continuous, or even accelerated, medical education [10]. These are selective programs that enroll only a handful of students every year. Students in these programs commit to medicine at the age of 17, and the retention rate is fairly high: in the Program in Liberal Medical Education at Brown University, over 80% finish their medical education through the program [11]. Though only 17 when they enter, these are students who ultimately follow through with their commitment to a career in medicine.

Given that over 70% of recent high school graduates engage in higher learning [2], but 41% of graduates are employed in jobs that don't require a college degree [3], the focus should shift from increasing access to improving the quality of learning at these higher-level institutions.



This strategy of a combined program also falls in line with current findings on higher education. Sociologists report that undergraduate educations are poor investments; even more revealing, legislators, business leaders, and educators are growing apprehensive about their value [12]. The former president of Harvard University, Derek Bok, laments that "colleges and universities, for all the benefits they bring, accomplish far less for their students than they should" [13]. It is an increasing, unspoken trend that many students forgo academic pursuits for collegiate culture, defined by social and extracurricular engagements. Matching classroom studies with clearly applicable occupational goals, decreasing the length of enrollment, and heightening the intensity of study would increase the perceived value of a college education, and perhaps academic engagement. From a different perspective, it would provide a separate mode of learning for those who learn best in a focused and practical setting. Some students may appreciate learning when there is tighter integration between theoretical knowledge and application. Efficient education benefits not only debt-burdened students, but

also society at large. According to the Organisation for Economic Co-operation and Development (OECD), the US has 2.4 physicians per 1,000 people, well below the OECD average of 3.1. The US has a leading cancer treatment system but, ironically, one of the worst records in preventative primary health care [14, 15]. The Association of American Medical Colleges projects a shortage of 45,000 primary care doctors by 2020 [16]. Yet of all medical students graduating in 2013, only about four percent will choose careers in primary care [17]. Though there are many reasons, one glaring cause of primary care's unpopularity is that primary care physicians tend to work longer hours and get paid less than specialists. With an average debt of $200,000, many graduates opt for high-paying specialties to pay off their loans. Financially, a focused route that saves two years of tuition may be of interest to many students. Just as P-TECH makes it easy for students to enter technical industries, the allure of a six-year combined program could be used to make a push towards primary care. Six-year medical programs can be designed as programs devoted to primary

care training. The government can aid in implementing these tracks in the states that most need them. Logistically, a direct route to becoming a primary care physician would save students the time and energy spent reapplying to medical school, as well as two years of tuition. Having younger physicians would also mean more time spent in the medical workforce. From the students' perspective, entering the workforce early helps in repaying accumulated school debt; for society, a renewed focus on primary care may translate to improved preventative care. Initiatives are already underway at various institutions for an accelerated three-year primary care degree to encourage careers in primary care [18]. After seeing the success of this initiative at Texas Tech, dozens of schools are beginning to plan similar programs for their states [19]. These accelerated programs suggest that, for primary care training, the traditional four-year curriculum can afford to shed a year of medical school; in that case, perhaps an even shorter combined track, a five-year program, should be offered for primary care to stimulate interest.





Though efficient and economical, the accelerated curriculum should by no means replace the existing system, for the simple reason that there is unequivocal value in a liberal arts education. The strength of the American M.D. lies in its versatility. US medical students are given the time and access during their undergraduate years to find interests outside of medicine. As such, there are many physicians who make a significant impact outside the hospital, especially in the fields of public health, the literary arts, medical activism, and research. For example, the current president of the World Bank, Jim Yong Kim, who is also a past president of Dartmouth, an advisor to the World Health Organization, and a founder of Partners in Health, studied political science at Brown before entering medicine. Such foundations allowed him to take a unique and influential path. Physician

writer Samuel Shem, in his novel The House of God, brings attention to the disenchantment and inhumanity of the intern years of medicine. It is a broad liberal education, and the American philosophy of personal redemption and transformation, that permits time for self-discovery before stepping onto the path of medical school [20]. For the truly intellectually driven, those four years can be crucial for discovering one's passion. But this does not mean we should not have options for those who find their calling in medicine early. It would be a significant improvement for American professional schools to offer different avenues for pursuing a career goal. The value of a liberal arts education cannot be contested; however, in cases where students see their undergraduate years not as enrichment but as another hurdle to medical school entry, having a direct pathway is both economical and ideal. As a champion of liberty, America should provide diverse means of achieving a goal, especially when the profession in question benefits the welfare of its citizens. To that end, the current system of medical education should branch out from institutional orthodoxy to new paradigms. Obama's appraisal of P-TECH's aims, to resolve ineffective education, disparity of opportunity, and the country's demand for tech-savvy employees, can be applied to post-secondary education as well. An accelerated professional track is a potential solution to diminishing academic standards at universities, soaring tuition, exploding student loans, and shortages of certain professions in the country.

[1] Obama B. Obama's 2013 State of the Union Address. The New York Times [Internet]. 2013 Feb 12 [cited 2013 Apr 14]; Politics. Available from: http://
[2] Arum R, Roksa J. Academically Adrift. University of Chicago Press; 2011.
[3] Simon CC. Major Decisions. The New York Times [Internet]. 2012 Nov 2 [cited 2013 Apr 14]; Education Life. Available from: http://
[4] Parsell GJ, Bligh J. The changing context of undergraduate medical education. Postgraduate Medical Journal. 1995 [cited 2013 Apr 14]. Available from: PMC2397995/pdf/postmedj00031-0015.pdf
[5] Medicine: Course outline. University of Oxford. [cited 2013 Apr 14]. Available from: courses/medicine/medicine_course.html
[6] Hannah SJ, Tang T. Reduced Undergraduate Medical Science Teaching is Detrimental for Basic Surgical Training. 2005 Jan 13 [cited 2013 May 16]. Available from:
[7] Parsell GJ, Bligh J. The changing context of undergraduate medical education. Postgraduate Medical Journal. 1995 [cited 2013 Apr 14]. Available from: PMC2397995/pdf/postmedj00031-0015.pdf
[8] Peter A. What is the difference between medical education in Europe and America? Available from: http://rheumatoid-arthritis-joint-conditions.
[9] Ronan GB. College freshmen face major dilemma. NBC News. 2005 Nov 29 [cited 2013 Apr 14]. Available from: id/10154383/ns/business-personal_finance/t/college-freshmen-face-major-dilemma/#.UWzJzrrld8E

[10] Direct BA/MD [Internet]. [cited 2013 May 16]. Available from: http://www.
[11] Alpert Medical School Class of 2016 Profile. [cited 2013 Apr 14]. Available from:
[12] What's the Price Tag for a College Education? CollegeData. [cited 2013 Apr 14]. Available from:
[13] Arum R, Roksa J. Academically Adrift. University of Chicago Press; 2011.
[14] Health Policies and Data: Health at a Glance 2011. OECD. 2011 Nov 23 [cited 2013 Apr 14]. Available from:
[15] Kane J. Health Costs: How the US Compares with Other Countries. PBS NewsHour. 2012 Oct 22 [cited 2013 Apr 14]. Available from: http://www.pbs. org/newshour/rundown/2012/10/health-costs-how-the-us-compareswith-other-countries.html
[16] Novak S. Luring Students into Family Medicine. The New York Times. 2012 Sep 9 [cited 2013 May 16]. Available from: http://www.nytimes. com/2012/09/10/us/10iht-educlede10.html?pagewanted=all&_r=0
[17] Lipson P. Doctor shortage isn't going away. Forbes. 2013 Mar 17 [cited 2013 May 16]. Available from:
[18] Krupa C. Med School on the Fast Track: A 3-Year Degree. American Medical News. 2012 May 7 [cited 2013 Apr 14]. Available from:
[19] Page L. New Three-Year Track Seeks to Boost Family Medicine, Reduce Student Debt. 2012 Oct [cited 2013 May 16]. Available from: https://www. html
[20] M.S. Schools and class in Europe and America. The Economist. 2010 Mar 29 [cited 2013 Apr 14]. Available from: democracyinamerica/2010/03/born_well

VIDEO GAMES: Using Human Ingenuity to Solve Scientific Problems
by Evelyn Kendall Williams


Pacman, Super Mario, Sims, Halo, Call of Duty, World of Warcraft: video games today are so numerous and diverse, it's hard not to find one to enjoy. Indicative of this, approximately 97% of individuals in the United States aged 12 to 17 play some form of video game. Of these, 31% play every day and 21% play three to five days a week [1]. The gaming industry is thus a major force in entertainment today, and it is only gaining momentum. In 2009, the global market value of all entertainment was 1.32 trillion dollars [2]. With a projected global market value of 112 billion dollars in 2015, the video game industry has a noticeable influence in the entertainment world and is its fastest-growing sector today [1].

In the past, video games were often cast aside as solely a form of entertainment, albeit one with the potential to influence the adolescents who comprise much of their audience. More recently, however, the purpose of video games has broadened. Video games are no longer used simply as a form of entertainment or as an educational tool, but as a means of harnessing public ingenuity to solve big problems. These games take advantage of their large numbers of players to divide larger scientific tasks among ordinary individuals. This

strategy, called crowdsourcing, enables scientists to use the population of video gamers to move closer to their goals and to fill in the gaps of computer-based approaches [3]. In this way, technology can be used to tap into the intellect of ordinary gamers for help in addressing scientific challenges [4]. Several recently created video games, known as biogames, use this technique to further research and to aid doctors in fighting disease.

SPRING 2014 || THE TRIPLE HELIX

Researchers Luis von Ahn and Laura Dabbish at Carnegie Mellon created one of the earliest examples of this technique with a video game called ESP, which took advantage of human cognition to label images on the Web that computers couldn't accurately identify [5]. Building on this foundation, David Baker, Seth Cooper, Firas Khatib, Zoran Popović, and colleagues used a video game to further the current understanding of protein structures and improve current methods of solving those structures. In this game, called Foldit, players try to fold proteins correctly into their most stable, lowest-energy shapes. Previously, researchers used a computer program called Rosetta to try to solve protein structures. However, they soon realized that human problem-solving skills could potentially yield more success, and decided to take advantage of this skill through a video game in which players can use tools based on the Rosetta algorithms. In the game, players are taken through a tutorial where they learn how to manipulate the protein to find the lowest-energy conformation. A set of more common terms is used in place of the potentially confusing scientific terminology, enabling a more diverse player community. Seeking the most stable



conformation, players can drag sidechains; shake, wiggle, or freeze the molecule; form hydrogen bonds; and insert bands, among other techniques. As the player determines the lowest-energy structures for increasingly complex proteins, the points accrued increase and the player moves farther in the game. Eventually, the player can design new proteins and improve predictions for currently unsolved protein structures, competing and comparing strategies with other players through the online community to achieve the highest score [4]. Players can also collaborate on algorithms, or "folding recipes," to solve protein structures computationally [3].

This game and its community of players have led to tangible scientific achievements in the five years since the game went live. Foldit players determined the structure of a protein called the Mason-Pfizer monkey virus retroviral protease, which had previously been solved neither computationally nor experimentally. Foldit players have also improved current automated algorithms for finding protein structures. Players found a recipe called "Blue Fuse" to be the most adept at determining protein structures, and used it most frequently in the game. The strategy was similar to, but more successful than, an algorithm called Fast Relax being developed by structural biology researchers in the Baker laboratory. The Foldit community was thus able to create an algorithm for solving protein structures superior to any previously created [3].
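The crowdsourced search behind Foldit's results can be thought of as many independent solvers exploring a scoring landscape, with the community keeping whichever candidate scores best. The sketch below illustrates only that general idea; the one-dimensional "energy" function, the player model, and all names here are hypothetical stand-ins, not Foldit's actual code or Rosetta's real scoring.

```python
import random

def energy(conformation):
    # Hypothetical stand-in for a protein scoring function:
    # lower is better, with the best "fold" at zero.
    return conformation * conformation

def player_attempt(rng):
    # Each simulated "player" starts from a random guess and makes
    # simple local tweaks, keeping only moves that lower the energy.
    best = rng.uniform(-10, 10)
    for _ in range(100):
        candidate = best + rng.uniform(-1, 1)
        if energy(candidate) < energy(best):
            best = candidate
    return best

def crowdsource(num_players=50, seed=42):
    # The "community" keeps whichever player's result scores best,
    # mirroring how the best-scoring Foldit solution wins out.
    rng = random.Random(seed)
    attempts = [player_attempt(rng) for _ in range(num_players)]
    return min(attempts, key=energy)

if __name__ == "__main__":
    best = crowdsource()
    print(f"best energy found by the crowd: {energy(best):.4f}")
```

Because every player explores independently and only the best-scoring attempt is kept, a large crowd tends to beat any single solver, which is the intuition behind pooling thousands of Foldit players.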

This framework appeals to a wide variety of players by incorporating a varied set of motivating factors, including the point system, player rankings, the social benefits accrued from the forum and chat features, and the scientific goal of learning and discovering new protein structures [4]. The game also appeals to players through its fun and easy interface. Recently, Foldit introduced a new feature: an in-game artificial intelligence assistant called Foldy 9000, making the game that much more fun and interactive. Foldy 9000 not only provides helpful suggestions and hints, but also interjects with funny comments and knows how to play chess. These factors, in addition to easy and free access to multiple downloadable versions of the game, have led to a large community of Foldit players [6]. At Andalas University in Indonesia, for example, 94% of students in a course requiring them to play Foldit reported that the game was fun, and 47% of those students reported playing the game outside of class. Of those who did not play outside of class, only one abstained due to a lack of interest in the game [7].

Another game, developed by researchers at UCLA, envisions applying techniques similar to those used in Foldit to improve medical diagnostics. Specifically, the Ozcan Research Group developed a video game with the potential to help diagnose patients with malaria by utilizing human capabilities for visual recognition and learning. In the game, players are presented with images of red blood cells and must determine whether the cells are healthy or infected. The final diagnosis for each red blood cell is determined by combining the results across a large number of players. These



results can then be used to determine if an individual has malaria by taking into account the number of infected and healthy blood cells [8]. Players are first taken through a training tutorial in which they learn to identify infected cells and healthy cells. In order to pass, players need to achieve greater than 99% accuracy. Once outside of the training mode, players can either use the syringe to kill the infected cells or the collect-all to collect the healthy cells in the blood smear. Approximately 20% of these red blood cell images are control cells and are already identified as either infected or healthy. These control cells determine the points accrued by the player and accuracy in diagnosing. The results across many players are fused together to determine the diagnosis for each red blood cell [9]. This game has the potential to increase the accessibility, speed, and accuracy of diagnosing patients with malaria. Light microscopy is currently the best method available for diagnosis of malaria and requires a pathologist to examine between 100 and 300 fields of view of a blood smear with a 100x objective lens in order to accurately identify if a patient is infected [9]. Staining the slides alone takes at least 45 minutes, while the time it takes to examine the slides varies depending on many factors such as the stage of malaria as well as the proficiency of the pathologist [10]. This game could decrease the time and tedium of such examination. In addition, in many tropical and sub-tropical areas of the world

where malaria is one of the major health concerns, resources are often not available to diagnose patients quickly and accurately. Malaria has caused approximately 40% of all hospitalizations in Africa and 20% of all childhood deaths in sub-Saharan Africa [9]. By providing a means to remotely diagnose patients with malaria, this game could help decrease these numbers where local resources are lacking. In addition, by maintaining a diagnostic accuracy within 1.25% of a professional pathologist’s, it could significantly reduce the number of misdiagnoses [8]. With a rate of 60% false positives in sub-Saharan Africa due to errors in diagnosis, improved accuracy could lead to more appropriate treatment for patients as well as better conservation of resources [9].

These two video games, and biogames in general, have revealed that the optimal approach to solving complex scientific problems is not always based solely on computers and improved technology. They have shown that with new, increasingly creative methods, researchers can begin to address previously insurmountable scientific challenges. In this case, an integrated approach utilizing both computer-based algorithms and human problem-solving techniques proved optimal. Video games were the ideal medium for this joint effort, allowing human input to modify and manipulate computer processes. This approach has paved the way for the discovery of new protein structures, the improvement of existing algorithms, and a transformation in medical diagnostics, while showing that however much computers and processing technology have improved in recent years, raw human intellect and problem-solving skills cannot be neglected.
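The “fusing” of many players’ calls described above can be sketched as a weighted majority vote, where each player’s weight comes from their accuracy on the known control cells. This is only an illustrative scheme, not the exact fusion algorithm of the UCLA game; the player and cell names are hypothetical.

```python
from collections import defaultdict

def player_weights(control_votes, truth):
    """Weight each player by their accuracy on control cells with known labels."""
    weights = {}
    for player, votes in control_votes.items():
        graded = sum(c in votes for c in truth)
        correct = sum(votes[c] == truth[c] for c in truth if c in votes)
        weights[player] = correct / graded if graded else 0.5
    return weights

def fuse(votes, weights):
    """Weighted majority vote: each player's 'infected' or 'healthy' call on an
    unknown cell counts in proportion to that player's control-cell accuracy."""
    tally = defaultdict(lambda: {"infected": 0.0, "healthy": 0.0})
    for player, cells in votes.items():
        for cell, label in cells.items():
            tally[cell][label] += weights.get(player, 0.5)
    return {cell: max(t, key=t.get) for cell, t in tally.items()}

# Illustrative data: two careful players outvote one careless one.
truth = {"ctrl1": "infected", "ctrl2": "healthy"}
control_votes = {
    "p1": {"ctrl1": "infected", "ctrl2": "healthy"},   # perfect on controls
    "p2": {"ctrl1": "infected", "ctrl2": "healthy"},   # perfect on controls
    "p3": {"ctrl1": "healthy",  "ctrl2": "infected"},  # wrong on controls
}
votes = {
    "p1": {"cellA": "infected"},
    "p2": {"cellA": "infected"},
    "p3": {"cellA": "healthy"},
}
w = player_weights(control_votes, truth)
print(fuse(votes, w))   # -> {'cellA': 'infected'}
```

The per-cell decisions can then be aggregated into a per-patient count of infected cells, mirroring how a pathologist tallies a blood smear.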

Associate Editor: SARAH LEWIN ’13. Artwork: MARY O’CONNOR ’16. Layout: KALEY BRAUER ’17 & AFRA RAHMAN ’17.

References
[1] Adachi P., and Willoughby T. More than just Fun and Games: The Longitudinal Relationships Between Strategic Video Games, Self-Reported Problem Solving Skills, and Academic Grades. Journal of Youth and Adolescence. 16 January 2013. doi: 10.1007/s10964-013-9913-9
[2] World Association of Newspapers and News Publishers. SFN Report: Global Entertainment and media market to exceed US $1.69 trillion in 2014 [Internet]. 2010 [cited 2013 Mar 11]. Available from: http://
[3] Good B., and Su A. Games with a Scientific Purpose. Genome Biology [Internet]. 28 December 2011 [cited 2013 Mar]; 12(12): 135. doi: 10.1186/gb-2011-12-12-135
[4] Cooper S., Khatib F., Treuille A., Barbero J., Lee J., Beenen M., Leaver-Fay A., Baker D., Popović Z., and Foldit players. Predicting protein structures with a multiplayer online game. Nature. 30 June 2010; 466(7307): 756-760. doi: 10.1038/nature09304
[5] Von Ahn L., and Dabbish L. Labeling Images with a Computer Game [Internet]. April 2004 [cited 2013 Mar 11]. Available from: http://
[6] Foldit: Solve Puzzles for Science [Internet]. [cited 2013 Apr 1]. Available from:
[7] Farley, Peter C. Using the Computer Game “FoldIt” to Entice Students to Explore External Representations of Protein Structure in a Biochemistry Course for Nonmajors. 4 February 2013; 41(1): 56-57. doi: 10.1002/bmb.20655
[8] Mavandadi S, Dimitrov S, Feng S, Yu F, Sikora U, Yaglidere O, Padmanabhan S, Nielsen K, Ozcan A. Distributed Medical Image Analysis and Diagnosis through Crowd-Sourced Games: A Malaria Case Study. PLoS ONE. 2012 May 11; 7(5): e37245. doi: 10.1371/journal.pone.0037245
[9] Distributed Medical Image Analysis and Diagnosis through Crowd-Sourced Games: A Malaria Case Study. PLoS ONE [Internet]. 11 May 2012 [cited 2013 Mar 11]; 7(5): e37245. doi: 10.1371/journal.pone.0037245
[10] Tangpukdee N, Duangdee C, Wilairatana P, Krudsood S. Malaria Diagnosis: A Brief Review. Korean J Parasitol. 2009 June; 47(2): 93-102. doi: 10.3347/kjp.2009.47.2.93



The Google Driverless Car: “A Cool Thing That Matters”



© 2014, The Triple Helix, Inc. All rights reserved.

Picture a car driven with superhuman vision, instantaneous reflexes, and up-to-date knowledge of roads across the United States. You’re probably picturing something from The Bourne Identity or The Terminator. In fact, Google is developing a car that is almost exactly that. It’s called the Google Car and is driven by a robot. With radar sensors for eyes and ears, range finders for awareness of surrounding cars, and Google Maps at its fingertips, the Google Car may come to redefine the American automobile. It could enhance the safety of roadways, provide a new driving experience, and even allow the handicapped to travel independently. The car has been thoroughly tested, and performed well on road tests of over 500,000 miles in length. If the last legal and technical hurdles can be overcome, the Google Car has the immense potential to revolutionize travel on the roadways of the world [1].

The most common model of a Google car takes the form of a robotically controlled Toyota Prius. The Toyota Prius is a fully hybrid electric mid-size hatchback, which uses a combination of gasoline and electric battery for power, thus reducing carbon emissions and improving mileage. The year that Google began work on the robot car, the Toyota Prius was ranked #1 most fuel-efficient mid-size car [2]. It has continued to perform well, and maintained its top rating in 2013 [3]. In addition to Toyota, Audi and Lexus have also embraced the “hands off” approach, as they have also partnered with Google engineers to make self-driving versions of the Audi TT and Lexus RX450h [4]. Google has outfitted these otherwise traditional vehicles with several radar sensors, cameras, and laser range-finders to observe traffic. Advanced software analyzes this data and makes decisions that control every aspect of the car, from steering and navigation to acceleration and braking [1].

Currently, a human “driver” must still be present in the vehicle, with the ability to enable a manual override at any time [1]. A blind person is capable of overriding the car in this way, assuming the fault condition is not detected automatically. If done manually, the driver has to use an emergency stop button, although systems may vary depending on the model of automobile [5]. Cars can also be overridden by external sources. A system similar to the type used by air traffic controllers could be adapted to monitor the safety of autonomous vehicles. Employees working in call centers would be able to anticipate issues that drivers may encounter in their future travels. In relation to autonomous cars, such issues may include impending traffic jams, dead-end roads, or changes in route due to construction zones. External monitoring is advantageous in that it introduces a level of perception that the average human driver could never have. In this way, potentially unsafe situations can be avoided, and the safety of the “hands off” driver, as well as the other drivers on the road, is greatly enhanced [6].

The question arises: how comfortable will drivers of autonomous vehicles be, knowing that someone else can control their cars? Cruise control is a comparable advancement in technology. Drivers initially resented their loss of control following the advent of cruise control, but were eventually placated by its efficiency and safety benefits. More recently, researchers have been studying acceptance of adaptive cruise control, which is a step beyond the generic one in that the system adjusts speed as a function of distance to the car in front of it. Studies have shown that fuel savings and safety considerations seem to be the major



motivators to accept adaptive cruise control [7]. “Adaptive cruise control can be seen as a transition to the self-driving car,” said Bertram Malle, a professor of psychology in the Department of Cognitive, Linguistic, and Psychological Sciences at Brown University. Another example is that of the seat belt, which was not universally accepted when first released. After the public realized the safety benefits of seatbelts, however, their use increased and was eventually made mandatory by law. Indeed, safety concerns can often prompt the general public to give up some sense of individual autonomy and, if extreme enough, lead to government intervention [6]. Perhaps the safety benefits of driverless cars will one day motivate the government to draft legislation that prohibits manual driving. Gary Marcus, a writer for The New Yorker, exclaimed, “Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car” [8].

The concept of a driverless car might seem like a novelty to many, but it has in fact existed for decades. A notable example is Mercedes-Benz’s robotic van, designed by Ernst Dickmanns and colleagues at the Bundeswehr University of Munich in Munich, Germany. In the early 1980s Dickmanns’ team equipped a 5-ton Mercedes-Benz van with cameras and sensors, similar to the inputs in today’s driverless cars. These and other technologies made it possible for steering, acceleration, and braking to be controlled via computer commands. Mercedes’ robot car, “VaMoRs,” drove entirely on its own in 1986 and reached speeds up to 96 km/h, or roughly 60 mph, the following year [9]. Currently, Google, Bosch LLC, Continental, the Volkswagen Group, Volvo, Audi, Lexus, and others have made notable progress in the realm of autonomous vehicles [10].

Google began working on its first model of an autonomous vehicle after Sebastian Thrun, founder of Google’s Street View, won the Pentagon’s 2005 DARPA challenge with his driverless car named Stanley [11]. The DARPA challenge is a federally sponsored competition for autonomous vehicles. Initially founded with the purpose of developing technology for the military, it has since expanded to include vehicles for commercial use. The team leaders of winning vehicles from DARPA challenges in several recent years, as well as Anthony Levandowski, the man who built the first driverless motorcycle, are all working together on Google’s team [12].

Getting the Green Light
Nevada was the first state to approve driverless cars on roadways in June 2011, which allowed Google to begin testing [4]. Florida was the next to follow suit, doing so in April of 2012 [13]. In September of 2012, Governor Jerry Brown of California signed legislation that created safety and performance regulations to test and operate driverless cars on state roads and highways [1]. State laws such as these have allowed Google to test its car thoroughly, and it has performed flawlessly on over 500,000 miles of road tests. The development team has driven cars on winding and traffic-heavy roads, such as the Golden Gate Bridge, as well as San Francisco’s infamously meandering Lombard Street [4]. The only Google Car accident recorded thus far was a minor fender-bender in 2011, which occurred while a human was manually operating the car [14].

In order for the Google Car to drive across the country, it will need the permission of all the states, or blanket permission from the federal government. Considering that the DARPA challenge is federally sponsored, the car will likely be given federal permission to be sold and driven across state borders, contingent on its road-test performance. In 2012, the National Highway Traffic Safety Administrator, David Strickland, said, “The development of automated vehicles is a worthy goal” [15]. He explained that the federal government is beginning to look into the safety regulations that would be needed for a country in which driverless cars dominated roadways [15].

Leave It to the Robots
In 2012, Google’s co-founder Sergey Brin stated, “Self-driving cars will be far safer than human-driven cars” [1]. Considering that human error is the cause of over ninety percent of automobile accidents, he may be correct. In addition to the Google Car’s finely tuned sense of direction and quick reflexes, it offers an escape from several common human distractions, such as text messaging, tiredness, and drunkenness [1]. Not only is the car safer, but it is also a


more accessible mode of transport for the handicapped, especially the blind. The car has already been tested with a blind driver inside. Regarding his experience, Steve Mahan, who is blind, exclaimed, “Where this would change my life is to give me the independence and the flexibility, to go to the places I both want to go and need to go, when I need to do those things” [16].

As mentioned above, when operated by its internal technology, the autonomous car has kept a flawless driving record. The spread of driverless cars will create a positive feedback loop in which a greater number of such cars will result in fewer accidents, validating their safety and subsequently increasing their popularity. This will lead to an increased number of Google driverless cars, other reliable driverless cars on roadways, and even fewer accidents. The overall result will produce a ripple effect, whereby all drivers, including those who manually operate their cars, will experience safer roadways.

For all its promise, Google’s robot car is not perfect. Some potential issues revolve around accidental shut-off, flat tires, cyber security, and liability. Google’s current focus is on improving sensors and hardware failure support. Sergey Brin drew a parallel between air travel and driving: the technology in both driverless cars and airplanes may experience system failure. Unlike airplanes, however, the robot car eliminates the possibility of human operational error [17]. Currently, though, air space is a more tractable domain for autonomy because it is highly controlled and regulated. The analogy between air and road travel is thus an abstract fit. Both modes of travel face the potential of exceptional errors, but these errors become magnified on public roads. Much of air travel is currently automated, with pilots and Air Traffic Control to assist with actions such as landing, and to take care of and prevent problems that might arise. In contrast, autonomy on public roads faces many computational problems regarding uncertainty in sensing and perception, predicting and reacting to what pedestrians and other cars might do, and other aforementioned issues. Therefore, autonomous cars are currently still far from achieving the one-in-ten-million chance of fatal harm that has been achieved by the top 39 airlines [18]. As aforementioned, if a system similar to the type used by air traffic controllers could be created for driverless cars, many road and driving errors could be foreseen and eliminated, improving overall safety.

Google’s engineering team is especially focused on preparing the car for rough conditions in which the cars may experience difficulties. For example, snow makes it difficult for the cars to “see” the lane markers and other cues needed to maintain safe positioning on the road. The team is working to equip the car for snowy terrain, snow-covered roadways, temporary construction signals, and other tricky situations, such as detours, that many drivers encounter [19].

Another major obstacle is that of temporary changes in routes due to construction and accidents. Such road changes may fail to be reflected in the car’s onboard “map,” causing the car to become lost [14]. As mentioned above, however, humans working to monitor the driverless cars from afar would be able to foresee these road changes and notify the cars and their respective drivers. In this case, the human driver can then switch to manual control. This issue is also being addressed through a collective database. Google self-driving cars communicate with each other, which makes it only a matter of time before a change gets mapped and cars no longer get lost. The more Google cars there are on the road, the quicker the issue will be fixed. Some may argue that this level of computer communication is sensitive to human hackers. “I think any system is vulnerable, at some level,” said Odest Jenkins, a computer science professor at Brown University whose research focuses on robot learning from demonstration, or Robot LfD. Such hacking has not been demonstrated yet, and if released onto the consumer market, driverless cars will be armed with software, and with humans working to prevent such devastating interference [6].

Tricky situations can also present themselves in the form of construction zones, accident zones, and situations involving human traffic directors [1]. Driverless cars have no issue interpreting traffic signals, speed limits, and proximity to other cars, but may have difficulty understanding a human directing traffic with hand signals. The cars may become



confused when a human’s hand signals conflict with a traffic light or stop sign. To fix this, Google’s engineers are exploring ways to teach the car how to interpret a person’s hand signals, and when to obey those rather than traffic lights. Indeed, several robotics programs across the country are working to hone a robot’s gesture and person recognition. One such program, unaffiliated with Google, exists at Brown University, where the Brown Robotics Group has partnered with iRobot Corporation. Researchers from both places are working on enhancing gesture and person recognition [5]. Various “robots,” in the form of cars as well as smaller-scale technologies, have been designed to recognize motions performed in different ways by many humans, but not with 100% accuracy. Hand signals may vary from person to person, traffic cop to traffic cop, making this an essential obstacle to be overcome [5].

Costly Cars
While the Google car and others like it may have mastered steep roadways, they also come with steep price tags. Approximately $150,000 in equipment is needed for each Google Car. That figure includes a $70,000 LIDAR (Light Detection And Ranging) system. LIDAR is the beam technology that equates to the car’s “eyes” on the road [19]. Prior to the development of LIDAR technology, the highest performing driverless cars were said to operate autonomously for approximately ninety-nine percent of the time. Considering the many hours that an average individual drives in a year, the remaining one percent is actually somewhat significant. LIDAR’s hefty price tag thus comes with great innovation, because it increases the autonomy of the vehicle from 99 up to 99.9 percent by providing high-definition, nearly three-dimensional information about the surrounding environment. The unit spins while various lasers are emitted to collect distance-sensing data [6]. Although very valuable, LIDAR systems make the car too expensive for most consumers, as well as for some of the companies trying to develop driverless cars. Several auto industry suppliers, as well as Chris Urmson, who has worked on the Google Car, have promised that reasonably priced LIDAR and other comparable technology systems are on their way [20]. LIDAR systems are currently being manufactured on the scale of hundreds and thousands, but if this is increased to hundreds of thousands and greater, the cost will decrease through economies of scale, and LIDAR will become cheaper [6]. Fortunately, the National Highway Traffic Safety Administration (NHTSA) values the potential safety benefits of driverless cars and has thus decided to invest in the accompanying technology. This will likely produce more cost-effective, and eventually affordable, systems, at least for the middle and upper socioeconomic classes [20].

Although Google’s car has received the most publicity, several other companies have been working to build driverless cars. Some of these companies have reduced input costs by choosing to outfit their cars with fewer lasers. Velodyne, the company contracting with Google and several other driverless car projects, has developed an HDL-64E system with 64 lasers, and an HDL-32E system with 32 lasers. The latter is less precise, but still far more perceptive than humans. It is less expensive than the HDL-64E, and has been used successfully by a number of companies [21]. Continental is one such company, which has outfitted a VW Passat with more affordable technology. However, Continental’s car has only been tested on 10,000 miles of road, as compared to Google’s nearly 500,000 miles [20].

The issue of liability is one of the critical and unresolved questions about autonomous cars. If insurance companies realize how much more prone to error human drivers are than robots, they are likely to increase premiums on manually driven vehicles. As a result, those without automated cars may have higher premiums than those who do own automated cars [14]. This could eventually create a problem of haves and have-nots, in which members of the lower economic class cannot afford the expensive autonomous cars and will thus be unduly burdened with higher insurance premiums [5]. Premium adjustments such as this are consistent with other safety features that have lowered premiums, such as Anti-lock Braking System (ABS) brakes and air bags [22].

Displacing Human Drivers
If the technical, liability, and legal issues are all resolved, robot cars may soon take away chauffeurs’ jobs. Autonomous vehicle



technology will eventually move beyond traditional SUVs and sedans to trucks, buses, and agricultural vehicles, thus reducing and possibly eliminating the need for drivers of public transportation, school buses, and other means of transportation. Professor Jenkins explained that the military is likely to be the first to take advantage of autonomous cars, followed by “smart public transportation,” and ultimately consumer vehicles. There will most likely be a gradual phasing out of the demand for humans employed as drivers in all sectors of automobile transportation [5]. While demand for human drivers will decrease after driverless cars permeate the world of public transportation, the need for manufacturers of autonomous vehicle inputs, as well as for people to monitor the cars and to invoke overrides, will increase. Driverless cars will likely create more jobs than they destroy [6].

The Road Ahead
The development of the Google Car is certainly in line with the company’s overall mission: “help solve really big problems using technology” by “doing cool things that matter.” Automobile safety is a significant issue in today’s world; in 2010, the National Highway Traffic Safety Administration recorded upwards of 5.4 million police-reported motor vehicle crashes in the United States [23]. As mentioned above, over ninety percent of crashes (4.86 million in 2010) are due to human error [1]. The Google Car has the potential to eliminate human error and drive us down a road with increased efficiency and security [12]. Google’s driverless car offers many potential safety benefits, has been thoroughly tested, and has performed well on lengthy road tests. Currently, the car can flawlessly pick up its driver and passengers, commute to work, and run errands, all via highways and neighborhood streets. Eventually, the technology will likely evolve to handle a greater variety of situations. Ultimately, the Google Car may serve as a fully automated and competent chauffeur [14]. Advancements to increase safety measures and hone person recognition abilities are under development. If realized, the driverless car is expected to permeate the market in a matter of decades, thus improving safety for all drivers.
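The adaptive-cruise-control idea discussed earlier, in which the system adjusts speed as a function of the distance to the car ahead, can be sketched with a constant-time-headway rule. This is a generic textbook-style sketch, not any manufacturer’s actual controller, and the gain and headway values are illustrative assumptions.

```python
def acc_target_speed(set_speed, gap, lead_speed, headway=1.5, k=0.4):
    """Constant-time-headway rule: track the driver's set speed, but never
    command more than what keeps `headway` seconds of gap to the lead car.

    set_speed and lead_speed are in m/s; gap is in m; returns m/s.
    """
    # Speed that would hold the desired time gap, nudged proportionally
    # (gain k) toward closing or opening the gap when it is off target.
    safe_speed = lead_speed + k * (gap - headway * lead_speed) / headway
    return max(0.0, min(set_speed, safe_speed))

# Open road (large gap): cruise at the driver's set speed.
print(acc_target_speed(30.0, gap=200.0, lead_speed=25.0))   # -> 30.0
# Closing on a slower car at exactly the desired gap: match its speed.
print(acc_target_speed(30.0, gap=30.0, lead_speed=20.0))    # -> 20.0
```

The appeal of this formulation is that the commanded speed degrades smoothly as the gap shrinks, which is the behavior studies credit for both the fuel savings and the safety benefits.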

References
[1] Chea, T. California governor signs driverless cars bill [Internet]. The Associated Press; 2012 Sep 26 [cited 2013 Feb 2]. Available from:
[2] United States Department of Energy. 2005 Compare Side by Side: Fuel Economy [Internet]. Environmental Protection Agency; 2005 [cited 2013 Mar 2]. Available from: http://
[3] United States Department of Energy. 2013 Most and Least Efficient Cars [Internet]. Environmental Protection Agency; 2013 Apr 11 [cited 2013 Mar 1]. Available from:
[4] Newman, G. A Future Filled with Driverless Cars [Internet]. Insurance Journal; 2013 Feb 11 [cited 2013 Mar 2]. Available from: http://www.
[5] Jenkins, O., & Littman, M. Interviewed by Peseri, Alexandra. 2013 Apr 11.
[6] Malle, B. Interviewed by Peseri, Alexandra. 2013 Apr 11.
[7] Use patterns among early adopters of adaptive cruise control.
[8] Marcus, G. Moral Machines [Internet]. The New Yorker; 2012 Nov 27 [cited 2013 Apr 15]. Available from: http://www.newyorker.com/online/blogs/newsdesk/2012/11/google-driverless-car-morality.html
[9] Wikipedia. Ernst Dickmanns [Internet]. Wikipedia; 2013 Mar 13 [cited 2013 Apr 11]. Available from:
[10] Mariacher, E. 3 driverless cars trends in 2012 [Internet]. Blogger; 2013 Jan 2 [cited 2013 Apr 13]. Available from: http://driverless-cars.blogspot.com/2013/01/3-driverless-cars-trends-in-2012.html#.UWoAenCrJ0o

[11] Haglage, A. Google, Audi, Toyota, and the Brave New World of Driverless Cars [Internet]. The Associated Press; 2013 Jan 16 [cited 2013 Feb 22]. Available from: http://
[12] Thrun, S. What We’re Driving At [Internet]. Google Official Blog; 2010 Oct 09 [cited 2012 Dec 19]. Available from: http://
[13] Valdes, A. Florida embraces self-driving cars, as engineers and lawmakers prepare for the new technology [Internet]. News Channel 5: WPTV; 2012 May 7 [cited 2013 Mar 9]. Available from: http://
[14] Blodget, H. Here Are Some Of The Problems Google Is Having With Its Self-Driving Cars [Internet]. Business Insider; 2013 Mar 3 [cited 2013 Mar 9]. Available from: http://www.businessinsider.com/google-self-driving-car-problems-2013-3#ixzz2N0lv9uEc
[15] DiCaro, M. As Gov’t Considers Regulations, Autonomous Car Boosters Show Off Plans [Internet]. Transportation Nation; 2012 Oct 23 [cited 2013 Mar 17]. Available from: http://
[16] Author Unlisted. Self-Driving Car Test: Steve Mahan [Internet]. Google Jobs; 2012 [cited 2013 Feb 10]. Available from: com/about/jobs/lifeatgoogle/self-driving-car-test-steve-mahan.html
[17] Tam, D. Google’s Sergey Brin: “You’ll ride in robot cars within 5 years” [Internet]. CNET; 2012 Sep 25 [cited 2013 Mar 1]. Available from:
[18] OAG Aviation. OAG Aviation & accident database, 20 years of data (1993-2012) [Internet]. OAG Aviation; 2013 [cited 2013 Apr 4]. Available from:
[19] Urmson, C. The self-driving car logs more miles on new wheels [Internet]. Google Official Blog; 2012 Aug 07 [cited 2013 Mar 2]. Available from: http://googleblog.blogspot.
[20] Priddle, A. & Woodyard, C. Google discloses costs of its driverless car tests [Internet]. USA Today; 2012 Jun 14 [cited 2013 Mar 14]. Available from: google-discloses-costs-of-its-driverless-car-tests/1#.UVo-03BWB0p
[21] San Francisco Gate. Velodyne’s LiDAR division doubles production capacity to meet demand [Internet]. Hearst Communications Inc.; 2013 Mar 11 [cited 2013 Apr 11]. Available from: Velodyne-s-LiDAR-division-doubles-production-4344316.php
[22] Sochon, P. Best Practice Workplace Driving Safety Programs in NSW [Internet]. 65th Road Safety Congress; The Royal Society for the Prevention of Accidents (RoSPA); 2000 Mar 8 [cited 2013 May 7]. Available from: RoadSafety/conferences/congress2000/proceedings/sochon.pdf


[23] National Highway Traffic Safety Administration. Crashes [Internet]. 2010 [cited 2013 Apr 14]. Available from:
[24] ‘Jurvetson Google driverless car.’ 2011 Mar 3 [Internet]. Available from: File:Jurvetson_Google_driverless_car_trimmed.jpg
[25] ‘Google’s Lexus RX 450h Self-Driving Car.’ 2012 Nov 15 [Internet]. Available from: File:Google%27s_Lexus_RX_450h_Self-Driving_Car.jpg
[26] Jurvetson, S. ‘Velodyne High-Definition LIDAR.’ 2009 Oct 24 [Internet]. Available from:
[27] Scoble, R. ‘Seeing Car Tech At Stanford. Here is Velodyne LIDAR.’ 2010 Oct 21 [Internet]. Available from: http://


















Spring 2014 issue of The Triple Helix at Brown University.