Page 1

Vol 7 Spring 2010 | UChicago

A Production of The Triple Helix

THE SCIENCE IN SOCIETY REVIEW
The International Journal of Science, Society and Law

Neuromarketing: Who Decides What You Buy?

Aquaculture: Its Growing Role as a Food Source
Fighting Disease: Are Global Funds Misallocated?
Water Politics: A New Age

ASU • Berkeley • Brown • Cambridge • CMU • Cornell • Dartmouth • Georgetown • Harvard • JHU • LSE • Northwestern • NUS • Oxford • Penn • UChicago • UCL • UNC Chapel Hill • University of Melbourne • UCSD • Yale

EXECUTIVE MANAGEMENT TEAM
Chief Executive Officer: Julia Piper
Executive Editor-in-Chief: Bharat Kilaru
Executive Editor-in-Chief, E-Publishing: Zain Pasha
Executive Production Editors: Chikaodili Okaneme, Yang Zhang
Chief Operating Officer, North America: Daniel Choi
Chief Operating Officer, Europe: Hannah Price
Chief Operating Officer, Asia: Kevin Pye Phyo Nay Yaung
Chief Operating Officer, Australia: Elizabeth Zuccala
Chief Technical Officer: Nandita Seshadri
Executive Director of Science Policy: Karen Hong
Executive Director of Business and Marketing: Anshul Parulkar
Executive Director of Internal Affairs: Jennifer Yang

INTERNATIONAL DIRECTORS
Annual Meeting Planning Committee: Jennifer Ong, Roza Anbari, Alexander Han, Steven He
Business and Marketing Division: Alexander Han
Member Resources Division: Nikhita Parandekar
Science Policy Division: Paul Shiu, Hann-Shuin Yew

GLOBAL LITERARY & PRODUCTION

Managing Production Editor: Darwin Chan
Senior Production Editors: Luna Chen, Jessica Lee, Jasmine Chuang
Production Editors: Becca Liu, Baher Guirguis, Annie Chiao, Nancy Li, Zhihan Ye, Reshmi Radhakrishnan

BOARD OF DIRECTORS
Kalil Abdullah, Manisha Bhattacharya, Joel Gabre, Kevin Hwang, Melissa Matarese, Erwin Wang

TRIPLE HELIX CHAPTERS
North America Chapters: Arizona State University, Brown University, Carnegie Mellon University, Cornell University, Georgetown University, Harvard University, Johns Hopkins University, Northwestern University, University of California, Berkeley, University of California, San Diego, University of Chicago, University of North Carolina, Chapel Hill, University of Pennsylvania, Yale University
Europe Chapters: Cambridge University
Asia Chapters: National University of Singapore

THE TRIPLE HELIX A global forum for science in society

The Triple Helix, Inc. is the world’s largest completely student-run organization dedicated to taking an interdisciplinary approach toward evaluating the true impact of historical and modern advances in science.

Work with tomorrow’s leaders: Our international operations unite talented undergraduates with a drive for excellence at over 25 top universities around the world.

Imagine your readership: Bring fresh perspectives and your own analysis to our academic journal, The Science in Society Review, which publishes International Features across all of our chapters.

Reach our global audience: The E-publishing division showcases the latest in scientific breakthroughs and policy developments through editorials and multimedia presentations.

Catalyze change and shape the future: Our new Science Policy Division will engage students, academic institutions, public leaders, and the community in discussion and debate about the most pressing and complex issues that face our world today.

Australia Chapters: University of Melbourne

All of the students involved in The Triple Helix understand that the fast pace of scientific innovation only further underscores the importance of examining the ethical, economic, social, and legal implications of new ideas and technologies — only then can we completely understand how they will change our everyday lives, and perhaps even the norms of our society. Come join us!

Senior Literary Editors: Dayan Li, Jennifer Ong, Jonathan Sung
E-Publishing Editors: Kevin Pye Phyo Nay Yaung, Mira Patel, Matthew Howard, Arjun Ghosh, Priya Malhotra




Fighting Disease: Issues with funding distribution


Testing Asylum Seekers: DNA testing and its influence on refugees


Whole Brain Emulation: The brain’s design and simulation software


Neuromarketing: Who Decides What You Buy?

Victoria Phan, UCSD

Local Articles

Fighting Disease: Are Global Funds Misallocated?

Chana Messinger


A New Age of Water Politics

Dan Plechaty


Public Health and Probiotics: Missed Opportunities

Elizabeth Gaston


Why Aren’t We Nematodes?

Elizabeth Harris


A World Without Men

Indra Wechsberg


Scheduling in Professional Team Sports

Jacob Parzen


Mediating the Personal Genomics Revolution

Laurel Mylonas-Orwig


The Growing Role of Aquaculture As a Food Source

Matt Doiron

International Features


Cover Article


Aquaculture: Farming in the Sea

Kathryn Blackley, Cornell


The Guarded Gate: DNA Testing for Refugees

Nipun Verma, Cornell


Through a Baby’s Eyes: Studies in Infant Cognition

Megan Altizer, Yale


How Brain Emulation Will Impact the Future of Our Society

Thomas S. McCabe, Yale


Music Facilitating Speech: Melodic Intonation Therapy for Patients with Speech Deficits

Maria Lisa Itzoe, Brown


Evolving Interaction in Robots

Andrew Sheng, CMU

Cover design courtesy of Vicky Phan, UCSD


Message from the President & Editor-in-Chief

Dear Reader,

STAFF AT UCHICAGO
President: Sean Mirski
Editor-in-Chief: Bharat Kilaru
Managing Editors: Dan Plechaty, Rohan Thadani
Directors of Science Policy: Jim Snyder, Michelle Schmitz, Daniel Choi

It is with sincere pride and pleasure that I present before you the latest issue of The Triple Helix. We have striven over the past few years to create a publication that, while run and largely staffed by undergraduates, maintains a level of intellectual discussion and debate that rivals that of professional journals and yet still finds itself accessible to the average, well-informed individual. Whether we have succeeded is entirely determined by you, the readers, but what we can say regardless is that it has been one hell of a ride. As members of the Editorial Board, we begin the process by reading through submitted topic proposals, and then watch as each article takes a shape of its own over the months of writing and countless rounds of revision. What is truly astonishing is that, despite the multiple catastrophes and falls along the way, each writer perseveres out of dedication to their article and its ideas, and we believe that this unswerving commitment is what makes articles within The Triple Helix stand out as not only knowledgeable, but also bold. Happy reading.

Cheers,

Sean Mirski and Bharat Kilaru
The President and Editor-in-Chief
The Triple Helix, University of Chicago

Director of Marketing: Lauren Blake
Editors: Laurel Mylonas-Orwig, Elizabeth Harris, Andrew Kam, Anna Zelivianskaia, Stephen Li, Karthik Vantakala, Neil Shah, Alex Turzillo
Writers: Chana Messinger, Laurel Mylonas-Orwig, Elizabeth Gaston, Matt Doiron, Dan Plechaty, Jacob Parzen, Indra Wechsberg, Elizabeth Harris
Faculty Review Board: Professor Daniel Bennett, Professor Nancy Cox, Professor Nathan Ellis, Professor Eugene Chang, Professor Reuben Keller, Professor Charles Wheelan, Professor Allen Sanderson, Professor Richard Hudson, Dr. William B. Dobyns

Message from the Managing Editors

The rapid pace of scientific discovery often means that we advance our knowledge before we understand its social and ethical implications. It also means that the public’s scientific literacy lags far behind current thought. The Triple Helix seeks to remedy both of these issues by surveying the scientific landscape and by analyzing the significance of the discovered trends. We have a variety of articles for you in this edition, which provide a glimpse into our writers’ varied interests. As they span a diverse set of topics, it may be easy to lose sight of the common goal behind all of them. Thus, we hope that this serves as a reminder that what we really strive for is to examine the implications of science on society and law, while maintaining the rigor and precision that our academic background demands of us. We hope that you enjoy the issue and learn something interesting along the way!

Sincerely,

Dan Plechaty and Rohan Thadani
Managing Editors
The Triple Helix, University of Chicago


© 2010, The Triple Helix, Inc. All rights reserved.


Message from the CEO

Dear Reader,

Once again, we are at a time of change. This year, in tandem with the American Association for the Advancement of Science (AAAS) conference in San Diego, The Triple Helix hosted its Leadership Summit and Membership Workshop to bring together students throughout the world and plan our future. Despite the students’ startling creativity and surprising expertise, the most striking discovery was their raw passion for what lies ahead. Hours and days go by just in discussion. With such enthusiasm behind every idea, it is difficult to envision anything but overflowing success.

Before you look through The Science in Society Review issue awaiting you, I hope to share with you my insight into the level of work behind every word. The articles in the following pages are derived from an outstanding level of editorial and literary commitment. Each piece represents not only the work of the writer, but also the work of one-on-one associate editors, a highly effective editorial board, astute international senior literary editors, an impressive faculty review board, and an imaginative production staff that reinvents the journal every issue. As you read the following pieces, we hope you will come to appreciate the truly professional level of work that goes into every paragraph.

And it is with that same dedication to improvement that every division of The Triple Helix creates progress every day. Over the last year, Julia Piper and TTH leadership redefined the limits of the organization yet again with our amazing progress in the Electronic Publishing, Internal Affairs, and Science Policy divisions. We have truly come a long way. However, our greatest accomplishment has been the new wave of global connectedness and communication. As we enter the next cycle, I hope to witness the next surge of interest and passion from every member as we strive to achieve the dreams we have always had for the organization.

We invite you as readers and supporters to come forward and develop new visions that will push us to the next level. The opportunity is upon us.

Sincerely,

Bharat Kilaru, Incoming CEO
The Triple Helix, Inc.

Letter from the Outgoing CEO

Even after a year as The Triple Helix’s CEO, I find myself struggling to successfully communicate the singularity of TTH’s management approach. I think the concept of a completely undergraduate-run international non-profit corporation is baffling to many because it depends completely on the effectiveness of 20- and 21-year-olds with little free time and even less experience. But it works. It works because TTH takes the inexperience that other organizations consider limiting and turns it into an advantage. It is through the annual refreshment of our international leadership that TTH stays engaged and innovative. With this in mind, I’d like to welcome our new Executive Management Team, individuals who, without bachelor’s degrees, are poised to lead an international team. In true TTH form, however, this inexperience allows them a fresh perspective, a fresh enthusiasm, and a fresh start to build a new team and a new future. Readers, stay tuned, as I think we will see great things to come.

Sincerely,

Julia Piper
Outgoing Chief Executive Officer
The Triple Helix, Inc.



Neuromarketing: Who Decides What You Buy?

Victoria Phan


People who have found themselves indulging in clothing trends, jiving to mainstream music, or frequenting the local Starbucks can see that companies spend billions a year researching how to perpetuate such conformity. What people may not know is that the advertising itself is becoming far more scientifically advanced. Neuromarketing is an emerging branch of neuroscience in which researchers use medical technology to determine consumer reactions to particular brands, slogans, and advertisements. By observing brain activity, researchers in lab-coats can predict whether you prefer Pepsi or Coke more accurately than you can. Critics have already begun to denounce the idea for its intrusiveness; however, though the field is already highly controversial, there is no doubt that its continuing development will ultimately have a profound impact on consumerism and the overall study of human behavior.

In America’s capitalist society, advertisements drive our everyday lives. While the idea of actual ‘mind control’ may seem far-fetched and unrealistic, the fact remains that the marketing industry has had a firm grasp over the American perception of everything from smoking to sex education. Our current concept of marketing, with its image-based ads, department store window displays, and catchy TV jingles, actually did not exist before the mid-1900s. Starting in the 1950s, fast food industries teamed up with processed food companies to shape the concept of what we now understand to be McDonald’s and Burger King ‘cuisine’ [1]. In the 1980s, the invention of cable TV, VCRs, and remote controls revolutionized the advertising world, as it allowed the media to become much more easily accessible to average families [2]. These developments soon allowed advertising executives to cater to the public’s general interests and subconscious desires.

Over time, the marketing industry has learned to exploit our responses to a wide variety of images and concepts. It is not difficult, however, to recognize and understand the methodology behind these marketing campaigns. The strategic placement of Victoria’s Secret models into Super Bowl halftime commercials has an obvious sexual appeal. Celebrities are paid to endorse particular products, since their personal testimonies make any company just seem better. Even the catchiness of a jingle makes us more likely to pause when we see a bag of Kit Kats or Goldfish crackers. But somehow, despite the almost laughably obvious marketing methods, we still respond positively to popular brands and catchy slogans—tools crafted purposely by marketing executives to catch our attention. This tendency to gravitate toward familiar symbols and phrases is the driving force behind the concept of neuromarketing. Scientists are focusing on these natural inclinations, using brain imaging techniques to gauge consumer reactions and expand upon more common, traditional methods, such as surveys and focus groups [3].

There are multiple types of brain-imaging technologies used in current neuromarketing studies: fMRI (functional magnetic resonance imaging), QEEG (quantitative electroencephalography), and MEG (magnetoencephalography). However, the fMRI method is currently the most popular amongst marketing companies, since it utilizes mainstream technology to produce clear images of real-time brain activity [4]. As an imaging technique, the process also translates results more easily into layman’s terms: rather than presenting data in strings of incomprehensible numbers, fMRI technology gives people the opportunity to actually visualize the activity patterns in their brains [5].

fMRI works by gauging amounts of hemoglobin, the oxygen-carrier on red blood cells, in certain parts of the body. For mental imaging, the machine “measures the amount of oxygenated blood throughout the brain and can pinpoint an area as small as one millimeter” [6]. The harder a specific area of the brain is working, the more oxygen it requires; so when the fMRI machine scans the brain, it picks up on the areas with concentrated amounts of hemoglobin and displays them as regions of high mental activity on the computer screen. These computer images are what researchers use to identify the parts of the brain being utilized.

For neuromarketing, scientists use fMRI to observe areas of the brain that respond to consumer-based stimuli, such as particular brands, price ranges, and even taste preferences [4]. The researchers have found that the regions in the brain corresponding to the prediction of gain and loss (the nucleus accumbens and the insula, respectively) are indicators of behavior and reaction to finances and economics [3]. In other words, we make our decisions based on cursory judgments of whether we will gain or lose money when purchasing a product.

Though fMRI technology was first used for marketing purposes in the late 1990s, the actual term “neuromarketing” was only just coined by Erasmus University’s Professor Ale


Smidts in 2002, and the general premise of the research was not widely recognized until the first neuromarketing conference in 2004. However, the potential results and subsequent discoveries about human responses to the media are causing this infant branch of science to rapidly gain popularity [4].

The infamous “Pepsi vs. Coca-Cola” experiment, in which scientists studied the motivation behind brand preferences, was what first put early neuromarketing in the spotlight. The researchers observed that although Pepsi and Coke are essentially identical, people often favor one over the other. They subsequently sought to investigate how cultural messages work to guide our perception of products as simple as everyday beverages [7]. The experiment was simple: there were two taste tests—one blind and one in which subjects knew which beverage was which—and the researchers observed the corresponding brain activity. When volunteers were unaware of which brand they were drinking, the fMRI showed activation in the ventromedial prefrontal cortex, a basic “reward center,” when they drank Pepsi. However, when the subjects knew which soda was which, the scans showed brain activity in the hippocampus, midbrain, and dorsolateral prefrontal cortex (which are centers for memory and emotion), in favor of Coke. So essentially, people actually liked the taste of Pepsi, but they were more inclined to believe that they preferred Coke, based on nostalgia and emotional connections. From these results, the researchers determined that “a preference for Coke is more influenced by the brand image than by the taste itself” [4].

The outcome of these studies is intriguing and even a bit entertaining; however, upon a second glance, it can also be alarming. The fact that a series of ads could actually cause your brain to believe something that contradicts what the rest of your body thinks is unnerving, to say the least.
Because of this, there is a growing amount of controversy surrounding the subject of neuromarketing. One of the more paranoid views on this subject is that people may eventually fall victim to an uncontrollable force compelling them to think or act a certain way. While it is still too early for anyone to make definitive legal restrictions on the technology, people are already anxious about its subliminal undermining of free will. Commercial Alert, an organization protesting the development of neuromarketing, has expressed concern over the use of medical technology for advertising purposes, claiming that brain scans “subjugate the mind and use it for commercial gain” [6]. The group has argued that any power-hungry neuroscientist could use these studies to manipulate the public’s desire for specific products, or that the research could be used in the realm of politics and propaganda, dragging us down a slippery slope toward totalitarianism and war [6].

On the other hand, more optimistic observers contend that the studies could in fact be beneficial for our society. For example, neuromarketing has the potential to be a great boon to public service industries by helping them understand how to improve anti-drug or anti-smoking campaigns [3]. By utilizing these new advancements in neuroscience, we could educate the public more effectively; we would know how to better present information to inattentive children, how to best impact teenagers having unprotected sex, and how to inform the public about conserving energy. The road toward understanding consumer responses opens paths to understanding human behavior in general, which could be invaluable to the development of our global community.

Despite the ongoing debate about the ethics of neuromarketing, the amount of research we have today is still minimal, and the results are leading researchers to believe that nobody currently has the power to fully alter our personal opinions and preferences. Most professionals are presently under the impression that this field is underdeveloped and that researchers are hyping it up using neuroscience, a current ‘hot topic,’ to elicit extra funding [3]. However, though there isn’t much evidence so far to prove that the imaging studies will have a drastic effect on consumers, researchers agree that even a slight edge in the competition to win the public’s attention would be worth the cost for many advertisers.

Like all new scientific advancements, neuromarketing is thus far merely a research tool. Marketing expert Martin Lindstrom views the area of study as “simply an instrument used to help us decode what we as consumers are already thinking about when we’re confronted with a product or a brand” [6]. In either case, the studies would reveal more intimate details about human thought-processing and decision-making on a broader scale.
So the question remains: Is neuromarketing a step forward in understanding the human mind, or is it an invasive marketing ploy geared toward demolishing privacy and personal opinion? As of right now, nobody seems to be sure. Though there is always the possibility that this technology could be exploited for immoral purposes, one could say that any scientific discovery has the same potential for misuse in the wrong hands. The best way to limit the media’s influence is to educate ourselves about the science and to be more deliberate with our decisions; a well-educated consumer is less likely to make rash judgments based on unfounded claims. Still, knowing that companies have people researching how our minds work probably won’t stop most of us from pining after all of the latest products —we will always have commercialism to thank for that.


1. Spring, J. Educating the consumer-citizen: a history of the marriage of schools, advertising, and media. Mahwah: Lawrence Erlbaum Associates, Inc.; 2003.
2. Fox, S. The mirror makers: a history of American advertising and its creators. 1997 edition. New York: Morrow; 1984.
3. Schnabel, J. Neuromarketers: the new influence-peddlers? The Dana Foundation. 25 Mar 2008 [cited 2009 Oct 26]. Available from: detail.aspx?id=11686.
4. Bridger D, Lewis D. Market researchers make increasing use of brain imaging. ACNR. 2005; 5(3): 36-7.
5. Bloom, P. Seduced by the flickering lights of the brain. Seed Magazine. 2006 Jun 27 [cited 2010 Jan 7]. Available from: by_the_flickering_lights_of_the_brain/
6. Lindstrom, M. Buyology: Truth and Lies about Why We Buy. New York: Doubleday; 2008.
7. McClure SM, Li J, Tomlin D, Cypert KS, Montague LM, Montague PR. Neural Correlates of Behavioral Preference for Culturally Familiar Drinks. Neuron. 2004; 44: 379-387.


Victoria Phan is an undergraduate at the University of California, San Diego.



Fighting Disease: Are Global Funds Misallocated?

Chana Messinger


Of the many global issues the world faces, one of the most prominent is allocation of the world’s resources to fight disease. Three of the eight Millennium Development Goals agreed to by 192 nations and over twenty-three international organizations relate to combating disease and promoting health. These goals, set forth in 2001, are the markers by which the United Nations evaluates progress on important global issues. Unfortunately, policy decisions are not always entirely based on the scientific and statistical evidence available. In fact, there are severe misallocations in the way that limited funds have been used to fight disease. Current policies on AIDS, malaria, diarrhea and other diseases are almost entirely at odds with the way that the money could save the most lives, focusing money and attention on the first, an expensive and as yet unsolved problem, and underfunding and marginalizing the others, which are curable and less costly.

In deciding how much funding to funnel towards a particular disease, one important factor should be fatality. Malaria kills over 1 million people every year, AIDS kills 2 million, and diarrhea causes the death of up to 6 million [1,2,3]. The numbers are even starker when specifically children are considered, as they should be, given that the fourth Millennium Development Goal relates to child mortality. In Nigeria and Ethiopia, 237,000 people died from AIDS [4]. Over twice that number of children under five died of pneumonia and diarrhea [5]. Researchers at the Johns Hopkins Bloomberg School of Public Health and the WHO estimate that 10.6 million children die before their fifth birthday worldwide. Diarrhea accounts for 17% of these deaths and malaria for 8%. In fact, diarrhea has been described as the leading cause of death for children. In contrast, AIDS caused the deaths of only 2.5% of these children [8]. It makes sense, then, that based solely on the relative preponderance and fatality of the diseases at hand, diseases such as malaria and diarrhea should receive at least as much fiscal attention as AIDS. This is not the case.

The actions of the United States, the most powerful and wealthy participant in this global summit, are quite telling. In 2008, United States aid, mostly in the form of direct bilateral donations to combat AIDS and HIV, constituted half of the world’s funds allocated to this particular problem [6]. Of the United States Agency for International Development’s (USAID) total Health budget of $4.15 billion, 24%, combined, is allocated to fighting infectious disease, child mortality and promoting maternal health. AIDS/HIV alone constitutes a 64% slice of the budget, which amounts to over 2.5 million dollars [7]. The President’s Emergency Plan for AIDS Relief, created in 2003, gave $15 billion to fight AIDS, and this amount was increased to $48 billion when it was renewed earlier this year. To fight malaria, which kills one person every 15 seconds, $1.2 billion was given in 2005 by USAID, to be spent over a period of five years [8]. An argument might be made for research, given that AIDS has no known cure, whereas the others do. However, only 12% of the US budget for AIDS is allocated specifically to research, undercutting this line of reasoning [9]. Money allocated to combat diarrhea-related illness and pneumonia was not even listed on the USAID site. Those diseases, which are leading causes of death in the developing world, are part of a larger initiative to promote maternal and child health and suppress infectious diseases.

Not only, however, is money not donated in proportion to how deadly a disease is, but also, the costs of prevention and treatment are not being addressed. Treatment of some diseases is, overall, more cost-effective than treatment of other diseases, and so would save more lives per dollar donated. Even if AIDS were responsible for as many deaths as it might appear to be from the amount of money the US apportions against it, the fact remains that AIDS is a much more expensive disease to treat than are the others. Yet all the aforementioned diseases – AIDS, diarrhea, pneumonia, malaria – are preventable: AIDS with safe sex practices and drugs for mothers, diarrhea with clean water, pneumonia with vaccines and malaria with drugs and the use of bed nets. Diarrhea requires a one-time investment into clean water and hygienic sewage for any given community, which might be expensive, but could easily recoup its own cost as these simple but effective measures reduced the prevalence of the disease. Vaccines, such as the one for pneumonia, must be distributed on a case-by-case basis, but once a disease is eliminated from an area, it often never returns, as is clear from the example of the United States. Bed nets are extremely inexpensive, and hugely reduce the rate of malaria if used correctly. But stopping the spread of HIV and AIDS requires continued education, voluntary implementation of safe sexual practices and an intensive drug regimen.

The treatments themselves put the disconnect between disease fatality and funding for treatment into sharper perspective. Oral Rehydration Salts, the most widely accepted treatment for acute diarrhea, cost 8 cents per person. Pneumonia antibiotics generally cost $1 a day, and only have to be taken for a few weeks [10]. Malarial drugs are more expensive, about $4 a day, but a new program has been implemented that combines pressure on drug companies and subsidies to make them cost approximately 5 cents [11]. By comparison, an HIV cocktail in the United States costs thousands of dollars a month. UNAIDS estimates that to treat and care for all Africans infected with HIV/AIDS in a given year would cost $1.5 billion [12]. Implementing prevention programs and antiretroviral therapy would cost billions more. From a strictly utilitarian perspective, money allocated to fight malaria, pneumonia, diarrhea and other preventable, curable diseases would help and save more people than money given to fight AIDS. As Nigerian President Olesegun Obasanjo noted, “It should be recognized that given the nexus of malaria and HIV/AIDS, it makes no practical sense to spend so much on one while leaving the other underfunded” [13].

[Figure reproduced from [28]]

There are four main reasons why AIDS is overly emphasized. The first is that it is treated as separate from other diseases. The 2004 annual World Health report from the World Health Organization (WHO) addressed AIDS and the need for a comprehensive strategy to stop and reverse the spread of this pandemic. It asked for expanded treatment, more community involvement and further integration of different sources of knowledge [14]. In order to achieve such a goal, the WHO called on the international community to respond quickly, with money and aid, so as to effectively fight the disease. The very next year, the annual health report focused on child mortality, noting that almost 11 million children under the age of five die each year [15]. An emphasis on child mortality would necessarily include a focus on AIDS, as this disease kills 270,000 children each year. However, the artificial division created by emphasizing them separately quickly gives rise to allotment of funding that equates one disease, AIDS, with the rest of the illnesses that affect children. AIDS is still extremely important, and needs funding, but these other diseases are being unfairly dismissed. The problem is that, as separate causes, any money donated to combat AIDS is not given to alleviate any other disease and vice versa.

Secondly, societal perspectives on the issues, which often inform political decision-making, seem to be playing a large part. AIDS is at the forefront of the national and global consciousness. Google Trends, for example, a fairly accurate measure of internet-user sentiment, puts searches for “AIDS” and “HIV” at 4 to 10 times more frequent than “malaria”, “pneumonia” or “diarrhea” [16]. Similarly, the New York Times has published almost 6,000 articles dealing with AIDS in the last 27 years, with articles on the subject of diarrhea numbering just 48 [17]. The reasons are varied. Tropical diseases have been a part of the human condition for hundreds of years, whereas the first known cases of AIDS were discovered in 1981. Another aspect of popular pressure is the fact that AIDS is still a problem in the US, whereas the other diseases mentioned are not; and furthermore, while the tropical diseases mostly affect children, AIDS is widespread across the age spectrum, and in fact mostly affects people of prime working and child-bearing age [18].

Thirdly, lobbyists fighting for more funding for AIDS appear to have been hugely successful. As Philip Lee, University of California at San Francisco professor of social medicine, says on the subject, “The system is a political process” [19]. There is not one AIDS lobby, but

© 2010, The Triple Helix, Inc. All rights reserved.

THE TRIPLE HELIX Spring 2010

rather multiple organizations that have formed powerful coalitions, such as National Organizations Responding to AIDS, which has over 170 member organizations [20]. They even have specific lobby days in Congress, May 24 through June 3 [21]. Just last year, in Massachusetts, over 500 people lobbied their state legislature for the yearly AIDS Lobby Day on behalf of Project AIDS Budget Legislative Effort (ABLE) [22]. The AIDS Action Council claims to have successfully helped in the reauthorization of the CARE Act and to have attained agreement in the House of Representatives to remove a ban on funding of syringe exchange programs in Washington, DC. Their mission involves “advocacy on a national level,” and they profess to have assisted in implementing important public health policies in the United States [23]. A centralized source of information on South African NGOs called NGO Pulse runs a class called the Advanced HIV and AIDS Lobbying and Advocacy Course [24]. This is but one example, but it is indicative of a broader trend. There is no malaria lobby, pneumonia lobby or diarrhea lobby; such lobbies simply do not exist. All such causes are in desperate need of funds, and charitable policies of any kind should be encouraged as much as possible. At the same time, there is also the matter of responsible giving. Good intentions are not enough. Political decisions, even if made in the name of doing good for people around the world, generally ought to be made on the basis of good evidence. When money is given with as much thought to the status of the cause as to the help that is needed, opinion is substituted for fact. Ezekiel Emanuel, a bioethicist, calls the ignored issues “mundane but deadly diseases,” emphasizing not only the danger of these illnesses but also the effect that social approval has on the attention and support they receive [25]. Philanthropists are free to distribute their monies as they

wish, but the federal government of the United States must be held to a higher standard. Obasanjo’s message, given in the year 2000 at a world summit on malaria, is still relevant. As he said, “Africans have consistently put it to the world that malaria is the number one health problem. When recognition of the HIV/AIDS virus came to the fore, Africans continued with their message that malaria was still killing more people. But we went unheeded” [26]. Popular opinion appears to be a major factor in the way money is allocated to combat disease, one that is perhaps stronger than consideration of how the money could be used to save the most lives. The future of change in this area is the molding of public opinion to make underfunded diseases as well known as those such as AIDS. People who feel that these other, ignored diseases need more attention and funding are likely to create organizations dedicated solely to one of these problems. This focus demonstrates the importance of each particular illness. Then, coalitions can form and eventually give rise to lobbies, which can affect political decisions. More importantly, the rise of organizations around one disease, for example malaria, should work to raise awareness and disseminate important information. In this way, it will become part of the national consciousness, and relevant evidence, such as that found in this article, will become common knowledge among both the public and politicians. These strategies have been used successfully by those concerned, rightly, about AIDS, and they can be appropriated to fight other diseases. When all of the causes are equally well known, then the relative importance and opportunity costs will be brought into question and funds may be allocated more fairly.
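The scale of the cost gap the article describes can be made concrete with a back-of-the-envelope calculation. The prices below come from the article’s cited figures ([10], [11]); the $2,000-per-month HIV cocktail price and the three-week antibiotic course are illustrative assumptions standing in for “thousands of dollars a month” and “a few weeks,” not figures from the article.

```python
# Rough "people treated per $1 million" comparison using the article's
# cited prices. The HIV monthly cost and the antibiotic-course length
# are hypothetical placeholders, not sourced figures.
BUDGET = 1_000_000  # dollars

cost_per_course = {
    "acute diarrhea (ORS)": 0.08,                      # 8 cents/person [10]
    "pneumonia (3 weeks of antibiotics)": 1.00 * 21,   # ~$1/day [10]
    "malaria (subsidized drugs)": 0.05,                # ~5 cents/course [11]
    "HIV (1 year of cocktail, assumed $2,000/mo)": 2000 * 12,
}

for disease, cost in sorted(cost_per_course.items(), key=lambda kv: kv[1]):
    print(f"{disease}: ~{BUDGET / cost:,.0f} people per $1M")
```

Even with a generous assumption for the HIV figure, the treatable populations differ by five orders of magnitude, which is the utilitarian point the article is making.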


References:
1. “NIAID Malaria Research Program.” National Institute of Allergy and Infectious Disease. October 30, 2009.
2. “Global HIV/AIDS estimates.” AVERT. January 2008. worldstats.htm
3. “Deaths from Diarrhea.” Wrong Diagnosis. January 2005.
4. “Global HIV/AIDS estimates.” AVERT. January 2008. worldstats.htm
5. Dugger, Cecilia. “As Donors Focus on AIDS, Child Illnesses Languish.” New York Times. October 29, 2009.
6. “Report on funding for AIDS by G8 countries and other major donors.” Kaiser Family Foundation & UNAIDS. July 13, 2009. KnowledgeCentre/Resources/FeatureStories/archive/2009/20090708_kaiser_G8.asp
7. “Funding.” USAID. November 20, 2009. health/pop/funding/index.html
8. “AIDS funding from national governments.” AVERT. November 19, 2009.
9. “U.S. Federal Funding for HIV/AIDS: The FY 2007 Budget Request.” February 2006.
10. “Pneumonia Treatments and Drugs.” Mayo Clinic. May 9, 2009.
11. McNeil, Donald. “Plan Tries to Lower Malaria Drug Cost.” New York Times. April 17, 2009.
12. Hernandez, Julia. “The High Cost of AIDS Drugs in Africa.” July 23, 2001.
13. “Africa-malaria-funding: One billion dollars a year needed on malaria: summit.” Agence France-Presse. April 25, 2000. AF000477.html
14. “Annual World Health Organization Report: 2004.” World Health Organization. January 2005.
15. “Annual World Health Organization Report: 2005.” World Health Organization. January 2006.
16. “Google Trends.” Google Trends. November 20, 2009. nds?q=AIDS%2C+HIV%2C+malaria%2C+pneumonia%2C+diarrhea
17. “Diseases, Conditions, and Health Topics.” New York Times. January 24, 2010. aids/index.html?s=oldest&
18. “AIDS & HIV Statistics for the USA by Race and Age.” AVERT. January 24, 2010.
19. Thompson, Dick. “The AIDS Political Machine.” Time Magazine. January 22, 1990.
20. “National Organizations Responding to AIDS.” NORA.
21. “National AIDS Lobby Days.”
22. Jacobs, Ethan. “As funding cuts take toll, AIDS lobby day brings huge crowd to State House.” AIDS Education Global Information System. February 5, 2009.
23. “About AIDS Action.” AIDS Action.
24. “RECABIP: Advanced HIV and AIDS Lobbying and Advocacy Course.” NGO Pulse. December 3, 2008.
25. “Google Trends.” Google Trends. November 20, 2009. nds?q=AIDS%2C+HIV%2C+malaria%2C+pneumonia%2C+diarrhea
26. Dugger, Cecilia. “As Donors Focus on AIDS, Child Illnesses Languish.” New York Times. October 29, 2009.
27. “Africa-malaria-funding: One billion dollars a year needed on malaria: summit.” Agence France-Presse. April 25, 2000. AF000477.html
28.


Chana Messinger is an undergraduate at the University of Chicago.



A New Age of Water Politics
Dan Plechaty


The annals of recorded history are replete with examples of the deification of rain. Whether through offerings at the Egyptian temple for the goddess Tefnut, or seasonal rituals among the Cherokee in North America, rain was conceived of as a primal and unpredictable force, necessary for sustaining life but also able to bring death. Although rain remains as unpredictable as ever, instead of supplicating the gods as in the past, we have endeavored to store rainfall and improve our agricultural methods. Our technology is certainly much improved, but with rapid population growth and increasing demand for industrial and agricultural products, water access issues are more important than ever. Water is no longer a religious issue: politics has taken over the role of deciding who gets how much water and at what cost. Although regional solutions differ when it comes to specific water uses, common to all is the need to focus on policies that will ensure long-term water availability through investments in infrastructure and appropriate pricing strategies. Governments will need to remove subsidies and begin taxing water, which will require the mediation of long-standing disputes over the ownership of this resource. Personal water usage, such as for drinking and cleaning, is highly visible but not very important in terms of total volume. In developed countries such as the United States, personal water use accounts for less than 1% of total water expenditures [1]. Instead, the main uses are agricultural and industrial, specifically for the production of thermoelectric energy. Thus it makes more sense to focus on maintaining water access levels for these industries. It would seem odd to worry about water running out, given its ubiquity, but it is not necessarily available in the location or form that we desire. The vast majority of water on earth is salt water, and desalination is currently cost-prohibitive on agricultural and industrial scales [2].
Furthermore, freshwater is not always located where we need it, such as near large urban centers or fields and factories. Transporting water is costly, and, as we will see, the unsustainable use of aquifers and groundwater will pose major developmental challenges in the decades to come. We will focus on regional industrial and agricultural practices and water access, with the caveat that they are connected to and affected by global markets. The Ogallala Aquifer in the central United States, for example, has experienced steady declines in water levels, reaching over 10% in most areas following post-war economic development in the 1950s [3]. Spanning the High Plains from South Dakota down to Texas, this aquifer provides drinking water for 82% of the area’s population, as well as 30% of national irrigation [3]. Large cities are a major drain on water supplies, and arid regions support agriculture only through large-scale irrigation. Water subsidies also encourage farmers to grow crops that are not suitable for these regions, and


reduce the incentive to invest in more efficient drip irrigation systems. While patchwork water policies have not helped the matter, subsidies and state-by-state legislation are not the only problems [4]. Even where free market practices prevail, water prices would only reflect the cost of extraction, storage and transportation. This means that we are using water without taking into account whether or not there will be any left in the years to come; in the parlance of economics, it is a market failure. One way to fix this is to enforce a tax to internalize these social costs, known as a Pigouvian tax after the English economist Arthur Pigou [5]. The same effect could also be achieved by limiting the quantity extracted, such as through a cap-and-trade system. While this would raise prices for farmers (and food prices for consumers), it would also set our agriculture on a more sustainable path and prevent large-scale water shortages in the future. However, higher water prices are regressive, as the poor spend a greater proportion of their income on food and water, justifying increased government assistance in this realm. Half a world away in India, water access problems are even more prominent in national politics. Compared to the United States, India has a much greater population density, an economy more dependent on agriculture (21% of GDP), and employs relatively more people in agriculture because its technology is less capital-intensive [6]. The upshot of this is that many more people are dependent on farming for their livelihood, and farming is much more dependent on rainfall. This is particularly dangerous in India, where monsoons provide more than half of the water in less than 15 days [7]. Collecting and storing this water for the rest of the year is an infrastructural challenge that the government could undertake to fix. Aquifer depletion is again a problem, as are local subsidies, but they are more understandable given the level of poverty amongst the farmers.
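The Pigouvian logic described above can be sketched with a toy linear market. Every number here is a hypothetical illustration, not an estimate for any real water market:

```python
# Toy market for irrigation water. Demand: P = 10 - 0.001 * Q (dollars
# per unit); private marginal cost of extraction: $2/unit; external cost
# of depletion borne by future users: $3/unit. A Pigouvian tax equal to
# the external cost moves the market from the private equilibrium to the
# social optimum.
def quantity_demanded(price, intercept=10.0, slope=0.001):
    """Invert the linear demand curve P = intercept - slope * Q."""
    return max((intercept - price) / slope, 0.0)

PRIVATE_MC = 2.0      # extraction, storage, transportation
EXTERNAL_COST = 3.0   # the unpriced social cost (the market failure)

q_untaxed = quantity_demanded(PRIVATE_MC)                 # tax-free market
q_taxed = quantity_demanded(PRIVATE_MC + EXTERNAL_COST)   # Pigouvian tax

print(f"without tax: {q_untaxed:.0f} units; with tax: {q_taxed:.0f} units")
```

A cap-and-trade scheme targets the same outcome from the other side: fix the extracted quantity at the social optimum and let the permit price settle where the tax would have been.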
Sustainable use of their aquifers will be necessary soon, though, as the Indian subcontinent begins to feel the effects of global warming. The general effect will be an increase in the intensity of the hydrological cycle [8]. This means that total rainfall will likely remain the same (though specific regions could see more or less rain), and monsoons will account for even more of the rainfall, perhaps inducing flash floods that would be damaging to crops. Extensive investment in infrastructure and water efficiency technologies, combined with a gradual fadeout of subsidies (and eventually taxes) on aquifer water, will be necessary in years to come. Otherwise, there will not be enough water during the majority of the growing seasons, while there will be dangerous amounts of it at other times. As for the United States, total rainfall is expected to decrease significantly, using the midrange predictions of a survey of climate models [9]. This will be combined with decreasing aquifers and river levels that are already well below historical


averages [3]. One can first expect more clashes between states over water use. It is unclear whether the western United States will even be able to continue its current level of population growth; the Midwest, however, could see a boom as its wetter climate and the freshwater of the Great Lakes become more valuable [9]. Regardless, the net effect is that Americans can expect to pay much more for both food and water. This will likely lead to political support for offsetting these price increases in some way, either through price supports or through some other means of hiding prices and market forces. Decreasing the overall costs paid by consumers, however, will only worsen our long-term water issues. If no action is taken until aquifers are depleted and the effects of global warming are beginning to be felt, it could be debilitating for both agriculture and industry.

The actual implementation of such policies, however, may not be so simple. Many water resources touch multiple political jurisdictions, meaning that there is often no clearly defined set of property rights. This has already led to conflicts among the Great Lakes states over access to Lake Michigan. How do we determine how much water Indiana citizens should get vis-à-vis Illinois, and if the water is a common resource, who exactly are we going to tax for its use? The problem becomes more pronounced when the political entities are antagonistic, such as the dispute between Israel and Syria over the Golan Heights and its large water reserves. It should be a goal of the international community to try to enforce a legal framework to address these competing claims in a peaceful manner, but we must first recognize the formidable challenges in international relations that this presents. Failing to do so may result in a century where wars for oil are supplanted by wars over water.

Population growth means that it is even more imperative to use land and water in the most efficient ways possible, as they are inputs into almost every economic process. One part of the solution is making sure that these decisions are not distorted by local water use rules that do not price water adequately, whether through subsidies or through failing to adjust for more sustainable use [4]. Higher prices will hopefully induce people to invest in more efficient water technologies or even explore ideas for desalination. This framework can be further extended by more clearly defining property rights to common water resources. The second part of the solution is government regulation: there is a responsibility to let market experience decide what technologies are best for farmers, and governments can invest in water infrastructure that will have spillover benefits, such as flood prevention. Water politics is not one-size-fits-all, however, and special care must be paid to regional climates, economies, water sources, and populations. By laying the groundwork and discussing seriously the perils of water shortage, we can move to conserve our water use today and secure this most precious of resources for the long term.

[Figure: Expected decadally averaged changes in the global distribution of precipitation per degree of warming (percentage change in precipitation per degree of warming, relative to 1900–1950 as the baseline period) in the dry season at each grid point, based upon a suite of 22 AOGCMs for a midrange future scenario (A1B, see ref. 5). Reproduced from [9].]

Dan Plechaty is an undergraduate at the University of Chicago.

References:
1. Hutson, Susan B. Estimated Use of Water in the United States in 2000. Mar. 2004. U.S. Geological Survey. 1 Dec. 2008 < circular1268.pdf>.
2. “Thirsty? How ‘bout a cool, refreshing cup of seawater?” 7 Nov. 2008. U.S. Geological Survey. 1 Dec. 2008 <>.
3. “Area-weighted water-level change.” High Plains Aquifer Water-Level Monitoring Study. 9 July 2007. U.S. Geological Survey. 1 Dec. 2008 < hpwlms/tablewlpre.html>.
4. Howe, Charles. Water Pricing: An Overview. Summer 1993. Universities Council on Water Resources, Issue 92.


5. Pigou, Arthur. Wealth and Welfare. New York: Macmillan Company, 1912.
6. “India: Priorities for Agriculture and Rural Development.” The World Bank.
7. “India’s Water Crisis: When the Rain Falls.” 10 Sept. 2009. The Economist.
8. Chattopadhyay, N., and M. Hulme. “Evaporation and potential evapotranspiration in India under conditions of recent and future climate change.” Agricultural and Forest Meteorology 87 (1997): 55-73.
9. Solomon, S., et al. “Irreversible climate change due to carbon dioxide emissions.” Proceedings of the National Academy of Sciences of the United States of America, vol. 106, no. 6. 10 Feb. 2009.



Public Health and Probiotics: Missed Opportunities
Elizabeth Gaston


Microorganisms have long held a negative reputation. Because microorganisms are often considered only as agents of disease, the general public views them with fear and squeamishness. Manufacturers and researchers focus on developing hand soaps, household cleaning products, and medications to kill bacteria in order to prevent or cure disease. However, for every human cell in a healthy body, there are at least ten microbes, many of which are actually necessary or beneficial to the maintenance of life [1]. Élie Metchnikoff investigated the benefits of bacteria on intestinal health in 1907, but due to the microbiology research community’s intense focus on antibiotics, little progress in the study of healthful microbes was made until recently [2]. In 2001, probiotics were defined by the World Health Organization as “live microorganisms which, when administered in adequate amounts confer a health benefit on the host”, and in 2007 the National Institutes of Health began the Human Microbiome Project, an international effort to study the naturally occurring microbes in human hosts [3]. Probiotics are a reemerging field of microbiology with the capacity for significant impacts on human health, yet the United States has not provided specific regulations to monitor the production and sale of probiotics, leading to an injurious lack of information available to consumers. Many Americans are aware of probiotics only as a component in heavily advertised yogurt products. The yogurt ads, aimed at middle-aged women, suggest that probiotics’ primary role is in increasing intestinal regularity. Yet, in large clinical trials probiotics have demonstrated positive health effects for a variety of gastrointestinal diseases. Acute gastroenteritis, an inflammation of the intestines caused by bacterial, viral or parasitic infection, causes 15% of deaths in children aged 0 to 5 worldwide.
However, probiotic treatment or prophylaxis of this disease reduces the duration or the risk of diarrhea in children and adults [4]. Probiotics may also help patients with irritable bowel syndrome (IBS). Several trials showed reduced stomach pain and overall better clinical outcomes in IBS after taking the probiotic Bifidobacterium infantis 35624 [5]. Some probiotics reduce the risk of antibiotic-associated diarrhea in children or infants, and other probiotics can reduce symptoms of lactose intolerance. Not all benefits of probiotics are gastroenterological. Several species of lactobacilli have been studied for their general immunomodulatory effects in vitro. In one double-blind study, children taking the probiotic L. rhamnosus GG missed significantly less school than the children taking a placebo and had 17% fewer respiratory infections [6]. A study on the elderly corroborated this finding, showing a decrease in the duration of respiratory infections [6]. Another species of


lactobacilli, L. plantarum, was shown to lower the cholesterol and triglyceride levels of rats [6]. While these studies show promising initial effects, they have yet to be corroborated with full-scale clinical trials. All these discoveries were made within the last decade: as probiotic research funding increases with the human microbiome project and new corporate interests, knowledge of probiotics’ benefits will likely increase. The popularity of probiotics has increased over the past few years not just among medical researchers, but also among consumers. In a trip to the grocery store, consumers can find fermented milk, frozen yogurts, candy bars, granola, cookies and juice all with advertised probiotics. In fact, yogurts are among the most well known sources for probiotics. In the vitamin section, probiotic cocktails and pure cultures can be found in both pill and capsule form. Probiotics may be a current trend in food marketing, but their wide range of benefits should be seriously considered by medical professionals and patients. The labeling and regulation of probiotic products in the United States is substandard, and because of this failing, doctors and consumers may have difficulty choosing appropriate products. In the United States, no probiotic products tested for specific health benefits have been marketed towards doctors or patients [7]. The FDA has demanded that products with any specific health claims to cure, alleviate or prevent disease must be tested as drugs [8]. The companies that currently make and sell probiotic bacteria market their products as food or dietary supplements which are not subject to the same regulations as drugs simply because they are “not intended” to be used as such as per the FDA regulations [8]. Although manufacturers may benefit in the long term from certifying their probiotic products as drugs, many obstacles exist. Drugs are subject to stricter manufacturing laws than food [8]. Once


yogurt or chocolate is designated a drug, it may not be sold in the refrigerator alongside its competing non-drug food products at the grocery store. Food also does not qualify for federal funding of trials to determine health benefits, which adds an additional hurdle to conducting research and bringing products to the market. In order for patients to fully benefit from probiotics, the FDA needs to regulate their production and sale, preferably through probiotic-specific directives. Probiotic products are inadequately labeled. In America, the strain of bacteria contained in a product does not have to be identified on the label of probiotic supplements [7]. For every disease affected by probiotics, only certain species and strains will provide a significantly beneficial effect. Some strains of bacteria even lose their effects when combined; Lactobacillus reuteri 55730 is immunostimulatory, while the highly related strain Lactobacillus reuteri 6475 is anti-inflammatory [9]. Both strains of Lactobacillus reuteri are probiotic, but they have different effects, and when the strains are combined they lose their inhibitory effects on pathogenic bacteria [10]. Without clear and correct labeling of the bacterial strains, consumers and doctors cannot know which probiotic could improve their specific health problems. These products could be further improved with the addition of labels explaining clear and specific potential health benefits for reducing respiratory disease, antibiotic-associated diarrhea or the symptoms of IBS. Eventually, with rigorous clinical trials, probiotics can be classified as drugs, but we need an intermediate solution: the development of regulations specific to probiotics. Information about the quantity of live bacteria and the storage conditions necessary to keep probiotic bacteria alive is also often omitted from product information [7]. Probiotics, like drugs, are dose-dependent.
Patients need to know the dose of a probiotic necessary for the optimal effect and how frequently they should consume it. The colonization of consumed probiotic bacteria in the human gut has been observed to be transient, so the bacteria must be replenished for continued effect [5]. Another problem arises if the quantity of bacteria present at manufacturing has changed by the time the product is consumed because it was not stored properly at some point during production and transport. Bacteria must be viable to provide probiotic effects through competition with other organisms and secretion of various

immunomodulatory and unknown factors. In the United States, there is currently no legal definition of the term “probiotic,” and it has been misused and placed on products with no probiotic effect. Yogurt is often made with the bacterial species Lactobacillus acidophilus and thus is called a probiotic. However, L. acidophilus has yet to conclusively demonstrate any health benefits in clinical trials [6]. By abusing the term probiotic, manufacturers undermine the entire category of products in the eyes of consumers. The development of a legal definition by the FDA would prevent this misinformation. Recently, in Canada, officials have been taking positive steps toward effectively regulating probiotics. In April 2009, the Canadian government issued a guidance document defining probiotics, specifying when they should be regulated as drugs, and establishing when companies can apply to have such products considered foodstuffs. Canadian officials have judged that in order to be considered a probiotic, a product’s claims must be substantiated by significant clinical data and labeled accordingly [11]. Several probiotics have already been approved and are being sold under the new guidelines [12]. The United States could benefit greatly from similar probiotic-specific regulations to reduce misinformation. Although no food and drug companies have stepped up to define their products as drugs so far, as scientists learn more about the bacteria through the Human Microbiome Project and other research efforts, making regulated drugs available to physicians will become cost-effective, especially for smaller supplement companies. BioGaia, a company that sells clearly labeled Lactobacillus reuteri supplements, doubled its profits in 2008 and has been growing steadily for most of the decade [13]. BioGaia’s remarkable profitability demonstrates the benefits that increased regulation and labeling restrictions can have for both companies and consumers.
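As a sketch of what probiotic-specific labeling rules like those the article argues for might check, consider the following. The field names and example products are hypothetical; only the strain designation Lactobacillus reuteri 55730 comes from the article.

```python
# Hypothetical check for the disclosures a probiotic-specific regulation
# might require: strain identity, viable count through expiry, storage
# conditions, and dose. Field names are illustrative, not from any real
# regulation.
REQUIRED_FIELDS = {"strain", "viable_cfu_at_expiry", "storage", "dose"}

def missing_fields(label: dict) -> set:
    """Return the required disclosures a product label omits."""
    return REQUIRED_FIELDS - label.keys()

typical_us_label = {"species": "Lactobacillus acidophilus"}  # species only
proposed_label = {
    "strain": "Lactobacillus reuteri 55730",   # strain named in the article
    "viable_cfu_at_expiry": 1e8,               # hypothetical count
    "storage": "refrigerate at 2-8 C",
    "dose": "one capsule daily",
}

print(sorted(missing_fields(typical_us_label)))  # all four disclosures missing
print(sorted(missing_fields(proposed_label)))    # []
```

The point of the sketch is that a species name alone, as on many current labels, carries none of the information a doctor or consumer would need to match a product to a clinical result.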
Although many of the mechanisms of probiotics’ actions are not yet known, and the benefits and roles of the bacteria that colonize the human gut are still being studied, the implications of probiotics are gradually being realized. The sooner reasonable regulation of probiotic products is available, the sooner the patient population can take advantage of the beneficial properties of these microorganisms.


1. Human Microbiome Project. NIH Roadmap for Medical Research, National Institutes of Health. Updated 23 October 2009. Retrieved 1 November 2009 from <>.
2. Metchnikoff, E. The Prolongation of Life: Optimistic Studies. G.P. Putnam’s Sons, New York and London (1908).
3. Report of a Joint FAO/WHO Expert Consultation on Evaluation of Health and Nutritional Properties of Probiotics in Food Including Powder Milk with Live Lactic Acid Bacteria. 4 October 2001.
4. Hsieh, M., & Versalovic, J. (2008). The human microbiome and probiotics: implications for pediatrics. Current Problems in Pediatric and Adolescent Health Care, 38(10), 309-327.
5. Preidis, G., & Versalovic, J. (2009). Targeting the human microbiome with antibiotics, probiotics, and prebiotics: gastroenterology enters the metagenomics era. Gastroenterology, 136(6), 2015-2031.
6. Reid, G. (2008). Probiotics and prebiotics – progress and challenges. International Dairy Journal. 6, 969-975.
7. Sanders, M.E. (2009). Guidelines for probiotics. Functional Food Reviews, 1(1), 3-12.
8. Food and Drug Administration. (2007). Dietary Supplement Current Good Manufacturing Practices (CGMPs) and Interim Final Rule (IFR) Facts. United States Department of Health and Human Services. Retrieved 21 November 2009 from: < GuidanceComplianceRegulatoryInformation/RegulationsLaws/ucm110858.htm>
9. Jones, S., & Versalovic, J. (2009). Probiotic Lactobacillus reuteri biofilms produce antimicrobial and anti-inflammatory factors. BMC Microbiology. 9(35).
10. Spinler, J., Taweechotipatr, M., Rognerud, C., et al. (2008). Human-derived probiotic Lactobacillus reuteri demonstrate antimicrobial activities targeting diverse enteric bacterial pathogens. Anaerobe. 14(3), 166-171.
11. Guidance Document: The Use of Probiotic Microorganisms in Food. Health Products and Food Branch, Health Canada. April 2009. Retrieved 1 November 2009 from <>
12. Falkenstein, C. (2 July 2009). DDS® Probiotics Approved by Health Canada – One of A Few. NPI Center. Retrieved on 29 January 2009 from <http://www.npicenter.com/anm/templates/newsATemp.aspx?articleid=24398&zoneid=8>
13. Starling, S. (13 Feb 2009). Recession-Proof BioGaia doubles 2008 profits. NutraIngredients. Retrieved on 21 November 2009 from <http://www.nutraingredients.com/Industry/Recession-proof-BioGaia-doubles-2008-profits>.
14.


Elizabeth Gaston is an undergraduate at the University of Chicago.



Why Aren’t We Nematodes?
Elizabeth Harris


Why are humans not nematodes? The question of what within an organism’s genome encodes the complexity of the organism has long puzzled biologists. It has already been shown that genome size, and even the number of protein-coding genes present in a given genome, give no indication of the complexity of the organism; however, recent analysis of the transcription of genomes could provide a possible answer. Recent microarray experiments indicate that the human genome is pervasively transcribed and that human cells contain numerous RNAs that do not code for proteins. Furthermore, by comparing human genome transcription to genome transcription in other organisms, it has been found that an increased ratio of noncoding to protein-coding DNA correlates with increasing organismal complexity [1]. Non-coding RNAs are profoundly changing the very perception of what a “gene” is, changing the nature of scientific research by creating a need for more collaborative projects, and possibly providing the key to what makes one species different from another. The two classic examples of non-coding RNAs that have significant function are tRNA (transfer RNA) and rRNA (ribosomal RNA). Both of these RNAs have long been known to play important roles in translation: tRNA recruits new amino acids to a growing protein, while rRNA forms the functional catalytic part of the ribosome through which proteins are assembled. More recent research has yielded a

number of new functional non-coding RNAs such as snRNA (small nuclear), antisense RNA (AS), snoRNA (small nucleolar), miRNA (micro), and Piwi-interacting RNA (piRNA) [2]. Research has shown that ncRNAs are involved in transcriptional activation, gene silencing, imprinting, dosage compensation, translational silencing, and many other functions [3]. The discovery of large and diverse classes of functional ncRNAs has changed geneticists’ views on gene expression and regulation and challenged the strict definition of the word “gene.” Furthermore, the discovery of the regulatory functions of specific non-coding RNAs adds a new complexity to the study and understanding of a number of human diseases. Non-coding RNA may eventually explain human diseases beyond the traditional explanation given by “gene-centric” genetics, which focuses solely on DNA transcripts that translate into specific functional proteins [1,4]. Most previous research has focused only on the function of genes that are transcribed into RNA and rests on the view that only the genes that encode mRNA – the RNA template used for translation – hold significance. After transcription from a DNA template, single-stranded RNA is spliced, so that only specific parts form the mRNA. Other DNA transcripts that are never translated into protein have traditionally been viewed as nothing more than genetic noise. However, the recent discovery of functions for non-coding RNAs has led to a paradigm shift in genetics, in which researchers have started to study DNA transcripts that

Reproduced from [13]


do not lead to the eventual formation of a functional protein, but nonetheless have important functions in the cell. The current definition of a gene among academic geneticists is much more liberal, permitting a gene to be any DNA sequence that produces a functional product: “a locatable region of genomic sequence, corresponding to a unit of inheritance, which is associated with regulatory regions, transcribed regions, and other functional sequence regions” [5].

Recent experiments reveal that many non-coding transcripts are cell-type specific and developmentally regulated, raising the possibility that non-coding transcripts play a role in determining and maintaining a given cell type [6]. Since cells are the building blocks of organisms, non-coding transcripts then play a role in forming a specific organism. Furthermore, a number of non-coding RNAs have been associated with human diseases. Due to their role in gene regulation, a number of non-coding RNAs are associated with various types of cancers and neurological diseases [4,7]. These findings pose a new challenge to gene therapy: the development of treatments based on genomics can no longer rely solely upon the discovery of genes—protein-coding transcripts—responsible for specific diseases. New discoveries concerning the role of non-coding transcripts in gene regulation and cell type determination may reveal causes for certain diseases, which would provide possibilities for more effective treatments. It is clear that understanding the function of non-coding RNAs should be an important part of the rising field of genomics and gene-based medicine.

However, the technology used to produce information on the scale of a whole genome, and specifically on non-coding RNAs, has led to some changes in the sociology of science and could lead to more profound changes in years to come. In general, the large-scale nature of the research necessary to produce information on the level of a genome has led to more collaborative research endeavors such as ENCODE, HapMap, and the famous Human Genome Project. These collaborative research consortiums reveal a departure from the traditional lab under a single principal investigator that could have significant effects on the nature of funding as well as on authorship and research privacy.

ENCODE, an acronym standing for “the ENCyclopedia Of DNA Elements,” is a public research consortium responsible for producing the recent microarray experiments which indicate that the human genome is pervasively transcribed [8]. The consortium was launched by the US National Human Genome Research Institute in September 2003, with the purpose of finding all functional elements in the human genome [9]. So far, only the pilot phase of the consortium has been completed. This pilot phase focused on the “application and improvement of existing technologies for the large-scale identification of coding sequences, transcription units and other functional elements” [8].

Looking at a few of the participants in the consortium, it becomes clear that the diversity of approaches to the question at hand is unified by the technology used: the research group headed by Bradley Bernstein of the Broad Institute of MIT and Harvard focused on histone modifications using chromatin immunoprecipitation followed by high-throughput sequencing; the group headed by Thomas Gingeras of Affymetrix, Inc. focused on identifying protein-coding and non-protein-coding RNA transcripts using microarrays, high-throughput sequencing, sequence paired-end tags, and sequenced cap analysis of gene expression tags; and the group headed by Michael Snyder of Yale University focused on identifying transcription factor binding sites using chromatin immunoprecipitation followed by high-throughput DNA sequencing [9]. Basically, all of the research groups used ChIP-based technologies and high-throughput sequencing. As a recent article points out, the “sociological issues” that surround “technology and its development” must also be investigated [10]. In the article entitled “Sharper tools and simpler methods,” the author asserts that “one of the most important consequences of genomics initiatives has been the introduction of high-throughput technologies to the discovery phase of research,” and that the “reliance on mechanical automation, instrumentation and systems for managing laboratory information has had a big impact on the workplace, making issues of operation, organization, and diversification of intellectual capital part of the competitive mix” [10].

The effects of these technologies on the organizational structure of scientific research, such as large collaborative research projects, could also have profound impacts on the funding of research; as the author of the article states, “it may be that sources of funding will have to be uncoupled from traditional grant cycles to ensure the necessary flexibility and capacity to respond to change” [10].

The shift in scientific research towards large collaborative efforts could also pose dilemmas in regards to authorship and research privacy. The criteria for participation in ENCODE describe a number of guidelines, including: “[that] each participant will fully disclose all publicly funded algorithms, software source code and experimental methods to the other members of the Consortium for purposes of scientific evaluation and is strongly encouraged to disseminate this information to the broad scientific community, [that] each participant agrees that s/he will not disclose confidential information obtained from other members of the Consortium, [that] each participant will take part in group activities, including attending periodic workshops to discuss the project’s progress and coordinating the publication of research results, [and that] each participant will share data according to the ENCODE Data Release Policy


providing for pre-publication release of any analysis results” [9]. These guidelines illustrate the way in which researchers work together in large projects and highlight the delicate handling of scientific results and ideas. However, the nature of the consortium also creates an environment that is less defined with regards to intellectual property.

The results of these research collaborations, whatever their long-term effect on the sociology of science, are intriguing and could perhaps hold the answer to the mystery of what in the genome makes one species differ from another. As previously mentioned, one of the key findings of the ENCODE pilot project, which used numerous microarray experiments on a targeted 1% of the human genome, is that the human genome is pervasively transcribed [11]. Parts of the genome, once considered unimportant relics of ancient DNA integration, were transcribed [11]. The transcripts were also found to overlap one another. In fact, recent analysis indicates that 90% of the human genome may be transcribed and that 98% of the transcriptional output in humans is ncRNA [7,8].

In light of the unusually high frequency of non-coding intergenic sequences (nucleotide sequences between genes) and intronic sequences (non-coding nucleotide sequences within genes), the discovery of pervasive transcription yields a surprising conclusion: organisms of high complexity contain a much higher proportion of non-coding intergenic and intronic sequences than organisms of less complexity; that is, the proportion of protein-coding sequences declines with the increasing complexity of an organism [12]. In higher organisms, such as humans and other mammals, protein-coding sequences occupy only a small portion of the genome [1]. In contrast, 80-95% of prokaryotic genomes are protein-coding sequences [1]. A comparison of the genomes of two eukaryotic organisms, humans and C. elegans nematodes, reveals that their protein-coding genes vary by less than 30% [1]. There is less than 30% variation between a nematode made up of only 10³ cells and a human composed of around 10¹⁴ cells! Surely something besides the protein-coding genes is responsible for the marked difference between these two species. Non-coding genomic sequences are generally the least conserved across different species, so these sequences serve as a likely source of the elements that make humans different from worms, cats different from yeast, and elephants different from horses.

But how can genetics explain the correlation between the frequency of non-coding sequences and organismal complexity? Of the few ncRNAs whose functions have been identified, all are involved in some aspect of gene regulation. This observation has led to the theory that non-coding RNAs most likely confer organismal complexity by increasing the amount and complexity of gene regulation. It has been proposed that prokaryotes are limited in their complexity by their reliance on a protein-based regulatory system. Contrarily, eukaryotic organisms have greater complexity due to their use of RNA as a “digital regulatory solution” [1]. Further evidence suggests that the differences between species and individuals may be fundamentally governed by a specific repertoire of regulatory non-coding RNAs [1].

The discovery of the pervasive transcription of the human genome and the constantly growing number of functional non-coding RNAs have instigated a paradigm shift in genetics: protein-coding mRNAs are no longer the sole focus of genetic and genomic research. As geneticists now consider non-coding RNAs crucial to the regulation of transcription, the term “gene” has become amorphous. While the study of non-coding RNAs could have profound effects on knowledge about human diseases and the discovery of possible treatments, it may also provide an answer to a set of unanswered questions in biology: what about an organism’s genome makes it that organism? What about the genomes of different organisms makes the organisms different? The high level of non-protein-coding sequence transcription unique to higher-level organisms has finally provided a convincing answer.

Elizabeth Harris is an undergraduate at the University of Chicago.

References: 1. Mattick, John S. and Makunin, Igor V. Non-coding RNA. (2006) Human Molecular Genetics 1, R17-R29. 2. Morison, I.M., Ramsay, J.P. and Spencer, H.G. A census of mammalian imprinting. (2005) Trends Genet. 21, 457-465. 3. Willingham, Aaron T. and Gingeras, Thomas R. TUF Love for “Junk” DNA. (2006) Cell 125. 4. Mattick, J.S. Challenging the dogma: the hidden layer of non-protein-coding RNAs in complex organisms. (2003) Bioessays 25, 930-939. 5. Pearson, Helen. Genetics: What is a gene? (2006) Nature 441, 398-401. 6. Cheng, J. et al. Transcriptional maps of 10 human chromosomes at 5-nucleotide resolution. (2005) Science 308, 1149-1154. 7. Pang, K.C. et al. RNAdb—a comprehensive mammalian non-coding RNA database.


(2005) Nucleic Acids Res. 33, D125-D130. 8. Collins, Francis S., Green, Eric D., Guttmacher, Alan E., Guyer, Mark S., on behalf of the US National Human Genome Research Institute. A vision for the future of genomics research: A blueprint for the genomic era. (2003) Nature 422, 1-13. 9. The World Wide Web. 10. Duyk, Geoffrey M. Sharper tools and simpler methods. (2002) Nature Genetics Supplements 32, 465-468. 11. The ENCODE Project Consortium. Identification and analysis of functional elements in 1% of the human genome by the ENCODE pilot project. (2007) Nature 447, 14. 12. Mattick, J.S. Non-coding RNAs: the architects of eukaryotic complexity. (2003) Eur. J. Hum. Genet. 13, 894-897. 13.



A World Without Men Indra Wechsberg


Is it possible to biologically suppose a world in which only one sex represents the human species? As it turns out, studies demonstrate that nature does not favour males as the victorious contenders in a gender-centric race for survival. But how biologically viable is a man-free world? It is commonly understood that our species is dependent on both sexes for reproduction and long-term survival; human life can only begin with the fusion of an egg and a sperm, both chromosome-containing gametes that contribute to the genetic makeup of future offspring. Yet the Y chromosome, the bastion of masculinity, has been deemed an evolutionary target of gradual deterioration, losing genes once shared with the X chromosome at a pace approximating five genes per million years [1]. As such, the Y chromosome is predicted to be completely bereft of functional genes within ten million years [1]. The demise of this chromosome would signify the loss of male gene expression (including genetic instructions for sperm production) and thus sexual reproduction. Nonetheless, this potentially apocalyptic scenario would not amount to the extinction of our species, at least not for women. In order to fully comprehend the significance of a man-free world, we will examine how the Y chromosome has taken a threatening turn, the possibility of alternative reproductive paths for women, and the prospects for sustaining development of the less-fit half of the population.

Why Man (and not Woman) is an Endangered Animal

Although the Y chromosome is undergoing degeneration because of its evolutionary disadvantage, it was once up to normative chromosomal standards.
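The ten-million-year projection follows directly from the quoted loss rate. A quick back-of-the-envelope check, where the ~50-gene count is an assumed figure standing in for the few dozen functional genes the Y is said to retain:

```python
# Sanity check of the extinction projection: remaining genes divided by
# the loss rate gives the time until the Y is empty of functional genes.
remaining_genes = 50       # assumed; "several dozen" functional Y genes
loss_rate_per_myr = 5      # genes lost per million years [1]
millions_of_years = remaining_genes / loss_rate_per_myr
print(millions_of_years)   # 10.0 — matching the ten-million-year estimate
```

The projection is of course a linear extrapolation; the article itself notes that the Y's history included temporary expansions.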
Scientists have been able to deduce that the X and Y chromosomes evolved from matching autosomes in an ancient ancestor from two pieces of evidence: the human Y chromosome has recognisable counterparts on the X chromosome with similar DNA sequences, and the tips of the X and Y chromosomes align during meiosis in males and exchange pieces as if the X and Y were a matching set [2, 3]. While the human sex chromosomes did originate as a matching pair like the other 22 sets of chromosomes in our cells, they genetically diverged beginning about 300 million years ago, sometime close to the emergence of mammals [3]. It was around this time that a reptile-like ancestor to mammals experienced a random mutation in a small part of the autosomal copy that would later transform into the male-bearing Y chromosome [3]. Derived from the duplication of the X chromosome-bound SOX gene, this mutation manifested itself in the acquisition of a new gene, SRY or sex-determining region Y [3]. The advent of the SRY gene does not allow us to cast the gender spectrum into oblivion any time prior to 300 million years ago, however. The presence of the SRY gene is simply how our species currently determines sex. While the previous mechanism for activating male development


is unknown, mammalian ancestors did have other means of doing so, such as through other genes or incubation temperature [2]. What occurred 300 million years ago to the autosome-now-turned-Y chromosome, in addition to the debut of the SRY gene, was the first of a series of four discrete DNA inversions. It is probable that a part of the DNA on the Y chromosome essentially flipped upside down, relative to the equivalent part of the X chromosome, during an attempt at meiosis in the reptile-like ancestor in question [3]. Since recombination requires that two similar sequences of DNA line up next to each other, an inversion is supposed to have suppressed future interaction between the formerly matching areas of the X and Y [3]. By comparing DNA sequences across species, biologists were able to roughly calculate when the formerly matching genes and their respective regions began to structurally deviate from each other. Monotremes (such as the platypus and echidna), among the earliest to branch off from fellow mammals, possess both the SRY gene and an adjacent non-recombining region, implying that the arrival of the SRY gene and the halting of nearby recombination coincided with the emergence of the mammalian lineage [3]. Each of these four episodes of inversion incrementally served as impediments to recombination, leading the genes in the affected regions to stop working and decay [3]. Even though the Y chromosome actually expanded temporarily at times (by stitching autosomal DNA into areas still able to recombine), failures of recombination led to a net shrinkage of genes, which statistically reflected degeneration [3, 6]. A number of the Y genes have no kin at all on the X chromosome and

Reproduced from [9]


Reproduced from [10]

there is an unusually high amount of “junk” DNA, sequences containing no instructions for making useful molecules [3]. To make matters worse, the Y chromosome harbors no more than several dozen genes, far fewer than the 2,000 to 3,000 on the X chromosome; the non-recombining region makes up 95 percent of the Y [3]. Women find themselves in better circumstances because they possess two X chromosomes. Meeting the recombination criterion of aligning two like DNA sequences, women avoid the genetic death trap, set up by inversion, of a diminishing vital chromosome. In actuality, the degenerative effects faced by the Y chromosome were far from over as of 30 million to 50 million years ago, the period of the last inversion [3]. Mutation is a genetic feature that must continuously be accounted for in DNA replication. Logically, the more cellular division that occurs, the more susceptible DNA is to mutation. This case holds in the massive daily output of sperm [2]. Genes and chromosomes in the human testes are very vulnerable to mutations, as DNA is copied many times before going into sperm, whereas egg cells go through only twenty-four divisions before they are released for fertilisation [2]. Among the three types of mutations (beneficial, synonymous/neutral, and deleterious), the Y chromosome is better off avoiding deleterious mutations in order to support male development. If the SRY gene, for example, undergoes a deleterious mutation, then the gene is switched off and fails to signal the development of testosterone, resulting in female development in an XY embryo [2]. This discrepancy between genotype and phenotype causes infertility. Between 1 and 2 percent of all men owe their infertility to some type of Y chromosome degeneration [2]. What is worse is that the disabling of the Y chromosome is not restricted to this afflicted minority.
If thousands, even millions, of sperm are produced every day, then fertile men’s Y chromosomes are also subject to infertility-inducing mutations [2]. The fewer the opportunities to pass down genes, the more the Y chromosome approaches an evolutionary stalemate and endangers males. The reason for this definitive dead end lies not only with the mutation rate but also with the fact that unfavourable mutations accumulate in the absence of recombination. For one, deleterious mutations in some Y-linked genes can be carried along, even to the point of fixation in a population, by physical linkage to strongly beneficial mutations in other Y-linked genes [1]. This accumulation to fixation is exemplified by Muller’s ratchet: the existing number of mutations cannot decrease because irreversible mutations run in one direction [4, 5]. Although this principle is generally applied to relatively small asexual populations, the ratchet is relevant and perhaps important to the human Y chromosome due to its largely non-recombining portion, which is subject to a mutational bias [5, 6]. It is the absence of recombination that prevents the Y chromosome from ever shedding deleterious mutations, unless the carrier of the mutated Y chromosome dies or simply fails to reproduce [4]. When mutation rates are lower in females than in males, fewer mutations accumulate on the X chromosomes, which in turn leads to reduced selection against deleterious alleles on the Y chromosome [5, 6]. In other words, unless a mutation is deleterious enough to compromise life and consequently remove itself from the gene pool, it will remain. On the other hand, given that X chromosomes are free to recombine and that mutation is more frequent in the Y chromosome-containing half of the population, men evolve into the endangered sex.
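Muller’s ratchet can be made concrete with a toy simulation. The sketch below is illustrative only: the population size, mutation probability, and selection coefficient are arbitrary choices, not values from the cited studies [4-6]. Because mutations here are irreversible and there is no recombination, the mutation count of the least-loaded chromosome class can only click upward, never back down.

```python
import random

def mullers_ratchet(pop_size=200, mu=0.3, s=0.02, generations=300, seed=1):
    """Toy Muller's ratchet: an asexual (non-recombining) population where
    each chromosome is summarized by its count of deleterious mutations.
    With no back-mutation and no recombination, the least-loaded class
    can be lost to drift but never regained."""
    rng = random.Random(seed)
    pop = [0] * pop_size            # everyone starts mutation-free
    least_loaded = []
    for _ in range(generations):
        # selection: offspring drawn with multiplicative fitness (1 - s)^k
        weights = [(1 - s) ** k for k in pop]
        pop = rng.choices(pop, weights=weights, k=pop_size)
        # mutation: each chromosome gains one new deleterious mutation
        # with probability mu; mutations are never removed
        pop = [k + (rng.random() < mu) for k in pop]
        least_loaded.append(min(pop))
    return least_loaded

history = mullers_ratchet()
# the ratchet only clicks forward: the minimum load never decreases
assert history == sorted(history)
```

The monotonicity of `min(pop)` is guaranteed by construction, which is exactly the one-way behaviour the ratchet describes; recombination would break it by letting two loaded chromosomes reassemble a less-loaded one.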

Asexual Reproduction: Why Men and Their Sperm Could Become Irrelevant

If men were to be on the brink of extinction, could women cease depending on the Y chromosome for reproduction? The answer is contingent upon developing an embryo without fertilisation by a male. In 2004, Japanese scientists proved that parthenogenesis was to some extent possible in mammals. Kaguya, the champion of bi-maternalism, was the result of a viable mouse embryo from two eggs [7]. The key to development involved tricking the embryo into thinking one of the maternal germ cells possessed genes imitating those found in sperm [7]. The sperm-like behaviour of the egg would be simulated by switching off maternal imprints, the critical point being that only one copy of an imprinted gene should be active (either the mother’s or the father’s copy) [7, 8]. Since the combination of two adult eggs produced an embryo with a poorly developed placenta that would die prematurely, a non-growing oocyte without the maternal imprint was required [7, 8]. To reduce premature deaths, the study focused on the appropriate expression of two particular genes:


insulin growth factor 2 (IGF2), which governs the growth and development of the fetus and is only turned on in sperm, and H19, which turns off the IGF2 gene in eggs [7, 8]. Utilising mutant mice with a deletion in the H19 gene, the Kono team produced 457 reconstructed hybrid eggs, each of which contained an immature IGF2-producing egg [7, 8]. From the in vitro-cultured embryos that developed, 317 blastocysts were implanted in 26 surrogate mothers, 24 of which became pregnant [7, 8]. Only ten pups survived gestation, recovered by autopsy, with two pups successfully restored and showing apparently normal neonate morphology [7]. Kaguya was the only mouse who grew to adulthood and showed normal reproductive performance [7]. Although the existence of Kaguya proved the development of parthenotes possible, the Kono team concluded that genomic imprinting remains very much a barrier, as paternal genetic contribution is responsible for growth regulation during development [7]. In addition to the impracticalities of in vitro development, the Kono study demonstrated the biological costliness of producing just one Kaguya. Furthermore, the study claims that there may be other unknown barriers to parthenogenetic development. Thus, unless female humans can overcome genomic imprinting by natural means, they continue to require male DNA for reproductive purposes outside of a laboratory. While genomic combination constitutes the basis for human reproduction, it is not necessarily true that there are

no natural alternatives. The fact remains that while the Y chromosome has grown gradually expendable, men are, reproductively speaking, not entirely so. In order to maintain sexual reproduction, the SRY and other corresponding male genes must be preserved. One of the alternatives Sykes proposes is the rise of fertile XX males [2]. Although he provides no explicit method for the creation of this alternative reproductive mechanism, he reminds us that if the few genes necessary for sperm production left the Y chromosome and became implanted in one of the two X chromosomes, the species would be saved [2]. The transfer of these male genes lies at the crux of sustaining sexual reproduction along with the male population, and there is a mammal other than Kaguya that may serve as a guide. In 1995, researchers discovered that the male mole vole carries neither an SRY gene nor a Y chromosome, owing to the evolution of other activated genes that substitute for the reproductive role [2]. As such, it remains within the realm of possibility to develop an entirely new sex-determining system that will assume the responsibilities of sexual reproduction. If, over 300 million years ago, our ancestors depended on a reproductive mechanism different from the XY chromosome system, then there is time for men to evolve a transfer of genes, especially with a time frame of 10 million years. The degeneration of the Y chromosome is therefore neither a definite sign of extinction nor the end of reproduction.

Indra Wechsberg is an undergraduate at the University of Chicago.

Reproduced from [11]

References: 1. Hughes JF, Skaletsky H, Pyntikova T, Minx PJ, Graves T, Rozen S, et al. Conservation of Y-linked genes during human evolution revealed by comparative sequencing in chimpanzee. Nature. 2005; 437: 101-04. 2. Sykes B. Adam’s Curse: A Future without Men. New York: W.W. Norton & Co; 2004. 3. Jegalian K, Lahn BT. Why the Y Is So Weird. Sci. Am. Feb 2001: 56-61. 4. Maynard Smith J. The Evolution of Sex. Cambridge: Cambridge University Press; 1978. 5. Charlesworth B, Charlesworth D. The Degeneration of Y chromosomes. Phil. Trans. R. Soc. Lond. B. 2000; 355: 1563-157.


6. Engelstädter J. Muller’s Ratchet and the Degeneration of Y Chromosomes: A Simulation Study. Genetics. 2008; 180: 957-967. 7. Kono T, Obata Y, Wu Q, Niwa K, Ono Y, Yamamoto Y, et al. Birth of parthenogenetic mice that can develop to adulthood. Nature. 2004; 428: 860-63. 8. Trivedi BP. The End of Males? Mouse Made to Reproduce Without Sperm. National Geographic News; 2004 April 21 [cited 2009 Nov 30]. Available from: http:// 9. 10. &blobname=ch20f4.jpg 11.



Scheduling in Professional Team Sports Jacob Parzen


Every year from 1980 until 2004, Henry and Holly Stephenson ventured to their whiteboard to create a 2,430-game schedule for the following Major League Baseball (MLB) season. And every year they procured the six-figure contract. However, after a decade of failed attempts to dethrone the Stephensons and land the job with his computer programming techniques, Michael Trick, the co-founder of the Sports Scheduling Group and a professor at the Tepper School of Business at Carnegie Mellon University, finally got computer to beat man [1]. Nevertheless, computer programming in sports scheduling remains a relatively new field, and even Trick admits that the competition between programming and the human pattern recognition that the Stephensons employ is not necessarily one-sided [2]. Above all, sports programming is a developing field.

So where is sports programming actually used? The aforementioned MLB season is a complex case. Thirty teams play 162 games each over a six-month span with few breaks. Teams are partitioned into divisions and play divisional foes a disproportionate number of times. Three divisions form a league, of which there are two. Teams from different leagues only play interleague games during a short, specific part of the season. In short, MLB schedule specifications abound. A more fundamental example (though programmers would contest that no “simple” example exists!) is the Atlantic Coast Conference (ACC) seasonal basketball competition. The ACC consists of nine teams, each of which plays every other team twice, once at its home venue and once away.

These two types of competition, along with every other pertaining to team sports, are united by the concept of constraint. There is no “right” answer in sports scheduling [2]. Dozens of factors, termed constraints, are taken into account when choosing between two schedules and, more often than not, the tiebreaking factors among equally feasible schedules are not easily accounted for in programs.
That is, the programmers don’t know in advance which criteria will be most coveted. There have been years when MLB executives, taking into account complaints about travel from preceding years, have chosen schedules that minimize the total distance traveled by any given team over schedules that have other advantages [2]. On the other hand, they may home in on television scheduling and choose accordingly when the economy is struggling. Distance traveled is one of the constraints that the traveling tournament problem (TTP), discussed below, places the most emphasis on. Katy Feeney, MLB’s senior vice president for scheduling, noted that despite awkward travel, the Sports Scheduling Group’s 2005 schedule was chosen because it had very few semi-repeaters, which are periods when a given team plays a series (most often 3 games, though 2 and 4 are used infrequently) with one team, plays a series with another team, and then returns to play the first team again [1]. This demonstrates that, while important,


distance traveled will not always be the tiebreaking factor. Above all, the ultimate selection of schedules is a subjective process. Practically speaking, no schedule can be the best in everything; there are simply too many “decision makers” that schedulers must account for [2]. Fans, owners, players, television partners, and other parties all affect the schedules that schedulers present to league executives. Inevitably, every party will disapprove of the final schedule in some manner. The realistic goal of every scheduler, then, is to limit the unhappiness, not to eliminate it.

For computer programming purposes, scheduling problems in many sporting leagues are often said to be either temporally constrained or temporally relaxed [3]. In the temporally constrained case, the number of time slots for games is minimized so every team must play once in every round (“round” may refer to day, week, etc., depending on context). The number of required time slots is above the minimum in the temporally relaxed case, so some teams may have periods of time without a game. The MLB schedule is temporally relaxed because teams have days off when other teams are still playing. The most fundamental temporally constrained problems are the single and double round robin tournaments. In the single round robin case, the league consists of a fixed number of teams and every team must play every other team once. The goal of the scheduler, then, is to determine which teams play against each other in each round, and, for every pairing, which team plays at its home venue. Double round robin tournaments, in which every team must play every other team twice, are typically partitioned into two half-series, where each pairing occurs once in each half with alternating home-away rights. The second half is rarely independent of the first.
For example, in a mirrored schedule, the second half repeats the first with home-away rights reversed (that is, the first round of the second half is equivalent to the first round of the first half, but at the alternate venues, and so on). Finally, in leagues with odd numbers of teams, one team will have a bye in each round by necessity. The Atlantic Coast Conference (ACC) basketball competition is an example of a double round robin tournament; it uses a mirrored schedule to ensure maximum separation between the two meetings of any two teams [4]. Byes can be used to break undesirable strings of consecutive home or away games, though overreliance on home-bye-home-home sequences, for example, is also a sign of a weak schedule [4].

From the 1970s to 2003, much of the literature on schedule programming focused on single and double round robin tournaments [3]. This may help explain why parties such as the Sports Scheduling Group didn’t challenge the Stephensons until recently, for the MLB season is more complicated than a round robin schedule. Lately, however, other methods have found increased attention in the programming community.
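The round robin constructions described above can be sketched in a few lines of code. The following is an illustrative sketch, not any scheduler's actual implementation: the classic "circle method" for a single round robin, plus a mirrored double round robin built from it. The team names and the home-away alternation rule are arbitrary choices for demonstration.

```python
# Single round robin via the "circle method": fix one team, rotate the rest.
def single_round_robin(teams):
    """Return a list of rounds; each round is a list of (home, away) pairs."""
    teams = list(teams)
    if len(teams) % 2 == 1:
        teams.append(None)  # None gives its opponent a bye that round
    n = len(teams)
    rounds = []
    for r in range(n - 1):
        pairs = []
        for i in range(n // 2):
            a, b = teams[i], teams[n - 1 - i]
            if a is not None and b is not None:
                # Alternate home-away rights by round to balance venues
                pairs.append((a, b) if r % 2 == 0 else (b, a))
        rounds.append(pairs)
        # Rotate every team except the first
        teams = [teams[0]] + [teams[-1]] + teams[1:-1]
    return rounds

def mirrored_double_round_robin(teams):
    """Repeat each round of the first half with venues swapped."""
    first_half = single_round_robin(teams)
    second_half = [[(away, home) for (home, away) in rnd] for rnd in first_half]
    return first_half + second_half
```

Because the second half simply replays the first in order, the gap between the two meetings of any pair is as large as possible, which is exactly the separation property the ACC schedule exploits.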

THE TRIPLE HELIX Spring 2010 19 5/8/2010 7:58:56 PM

The TTP, typically a temporally relaxed problem, revolves around team travel and “flow,” the pattern of home and away games in a schedule [5]. With the TTP, programmers seek to minimize the total distance traveled by any given team, for travel adds to wear and tear on the players. Another active goal is to avoid long homestands and road trips, which are strings of consecutive home and away games, respectively. The TTP has been used extensively to generate feasible schedules for the MLB season. Programmers often approach the TTP by dividing the MLB season into sections of round robins among subsets of teams [6]. The benefits of this process are twofold. Algorithmically, programmers are able to use the results from the round robin tournaments to optimize the sections

and arrive at a full schedule [6]. Furthermore, such schedules have a more straightforward structure and, in the eyes of league executives, are less prone to bias [6]. This is because each team will face every other team in each round robin section, so no team has to play strong teams at a disproportionate frequency. Schedulers such as the Stephensons, who map everything out on a large board, don’t have the tools to afford these advantages [7].

To say that computer programming has swept past human scheduling, however, would be premature. Trick’s group went on to design the 2007 schedule as well, but that schedule drew criticism from some parties for failing to account for unforeseen constraints [8]. MLB starts its season in early April, when many Midwest and Northeast cities still have less than ideal weather, yet a number of Midwest and Northeast teams opened the season with homestands, and no one likes to go to a ballgame in the snow. The moral of the story is that new constraints are constantly entering the picture for human schedulers and computer programmers alike, competing with existing constraints and necessitating improvements in the methodologies employed.

There has been a noticeable increase in sports scheduling articles in recent years [3]. The programs this literature describes are adept at solving the challenging problems that sports scheduling brings, and they can perform the brute-force searches against which teams like the Stephensons cannot compete. The methods employed have clearly reached high levels of sophistication, and with each passing year programming teams will seek to better the schedules of previous years. Notably, unforeseen constraints continue to enter the picture and leave room for improvement. Still, even as the debate between programmers and manual schedulers rages on, it is difficult to overlook the progress computers have made in generating feasible schedules, and difficult not to speculate on how they will improve in the coming years.

Jacob Parzen is an undergraduate at the University of Chicago.

References:
1. Guzzo, Maria. Striking It Big. Pittsburgh Business Times. 12 November 2004.
2. Trick, Michael. Michael Trick Video on Sports Scheduling. Online video clips.
3. Kendall, Graham, et al. Scheduling in sports: An annotated bibliography (2009). Operations Research, 37, 1-19.
4. Nemhauser, George L., and Trick, Michael A. Scheduling a Major College Basketball Conference (1997). Operations Research, 46, 1-8.
5. Easton, Kelly, et al. The Traveling Tournament Problem: Description and Benchmarks (2001). Lecture Notes in Computer Science, 2239, 580-584.
6. Trick, Michael A. Integer and Constraint Programming Approaches for Round Robin Tournament Scheduling (2003). Lecture Notes in Computer Science, 2740, 63-77.
7. The Sports Scheduling Group (Spring 2006). Engineering Enterprise.
8. Ruddick, Chris. Who is the genius who came up with the MLB schedule? (9 April 2007). Monsters and Critics.
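As a coda to the TTP discussion above, the distance objective can be made concrete. This is a minimal sketch with made-up venue coordinates and straight-line distances, purely for illustration; a real scheduler would use actual mileage and add flow constraints on homestand and road-trip lengths.

```python
from math import dist

# Hypothetical venue coordinates, purely illustrative
VENUES = {"CHI": (0, 0), "NYC": (7, 1), "HOU": (2, -9), "LA": (-12, -3)}

def team_travel(home, game_venues):
    """Distance one team travels: it leaves home, visits each venue of its
    schedule in order (home games are just its own venue), and returns home."""
    path = [home] + game_venues + [home]
    return sum(dist(VENUES[a], VENUES[b]) for a, b in zip(path, path[1:]))

def total_travel(schedules):
    """The TTP objective: summed travel over all teams.
    `schedules` maps each team to the ordered list of venues it plays at."""
    return sum(team_travel(team, venues) for team, venues in schedules.items())
```

Note that consecutive away games at the same venue add no distance, which is exactly why schedulers bundle series into road trips rather than shuttling teams home between them.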



Mediating the Personal Genomics Revolution Laurel Mylonas-Orwig


When Linda Avey and Anne Wojcicki hatched the idea for the company 23andMe in early 2006, their concept was straightforward: create an accessible, affordable way for customers to peek into their DNA from the comfort of their own laptops [1,2]. Signing up, as the company website explains, is easy—simply order a kit, fill the sterile tube with spit, return it to the lab, and log in two to four weeks later to start “exploring your genome” [1]. This business model soon proved successful, and a batch of other personal genomics services quickly appeared. In addition to 23andMe, companies like Knome, Navigenics, Psynomics, and deCODE Genetics currently offer a wide variety of at-home DNA tests, priced anywhere from a few hundred to a few thousand dollars [3]. Now, anyone with curiosity and some extra cash can learn about variations in their genome linked to traits ranging from Alzheimer’s disease to innate sprinting ability. What would have seemed impossible only a few years ago is now a rapidly expanding branch of personalized science.

As the personal genomics field has grown, however, regulatory frameworks have not kept pace. At present, there are no independent standards governing personal genomics tests; instead, each company is left to police itself, with questionable success. The problem this raises is threefold. First, the types of tests offered are inconsistent: some, like the 23andMe test, look for a variety of interesting but non-serious characteristics, while others target a single, serious disease [4]. Second, there are no independent criteria for evaluating the accuracy of a test; instead, each company sets its own. Third, there is no standard for how the information is disseminated: some companies send results straight to the consumer, while others insist on sending results to a customer’s doctor, or on involving a genetic counselor. These differences make it difficult for the average consumer to judge


how useful—and more importantly, how accurate—these tests are. As personal genomics continues to revolutionize the way in which we interact with our DNA, it is imperative that we develop regulations to ensure the validity of these tests, as well as guidelines to help consumers determine their proper use.

Searching for SNPs

Despite the myriad uncertainties tied to at-home DNA tests, the method of decoding portions of a genome has a solid scientific grounding and varies little from company to company [3]. The process begins with a tube of the customer’s saliva, which contains cheek cells shed from the soft lining of the mouth. Once the saliva reaches the lab, researchers extract DNA from these cells and feed it into machines that use gene chips—microchips containing DNA strands with specific sequences of genetic code—to decipher it. When run against a person’s spit sample, the gene chip DNA joins with complementary sequences, reading pre-selected sections of the DNA [3]. What scientists are looking for are single nucleotide polymorphisms (SNPs)—small variations in the genetic code that make each person’s DNA unique. Because roughly 99.5 percent of human DNA is identical from person to person, these single-letter variations indicate differences that can be correlated with certain distinct traits [3]. For example, babies with the CC or CG form of one SNP are more likely to gain a 6-point increase in IQ when breastfed, because these variations mediate a response to a fatty acid found only in breast milk. Babies with the GG form, on the other hand, respond differently and do not receive the IQ benefit [5]. When personal genomics companies sequence a customer’s DNA, they look for any of the approximately 10 million known SNP variations, and then attempt to correlate those with various diseases,


traits, and risk factors [3]. Thus, decoding only a minuscule eight- to sixteen-thousandths of one percent of a full genome is enough to sequence about half a million SNPs and reveal much of what makes a person unique [3]. The technology used to search for SNPs has proven itself reliable and reproducible [3,4]. However, generating this small mountain of raw data is the easy, well-understood part of the process. The second piece—interpreting the SNPs—is far from straightforward. Although a single SNP can indicate whether someone has a working copy of a gene, there is no way to be sure what many SNP variants mean with regard to someone’s risk of disease [3,6]. This uncertainty stems from the complex genetic interactions that produce most diseases. In fact, according to Dr. George Church, a geneticist at Harvard Medical School, only about 1,300 genes have strong, recognizable links to medical conditions, and these relatively clear-cut cases are the exception [3].

Interpreting One’s Genome: Mountains or Molehills?

When Amy Harmon, a science writer for the New York Times, was offered a chance to be one of the first 23andMe testers, she had reservations. Harmon feared disturbing revelations from her DNA about her own life span, or perhaps rogue genes that she had unknowingly passed on to her daughter [5]. After taking the test, however, Harmon found that much of what she learned was more amusing than worrisome. She wrote, “[Although] I tragically lack the predisposition to eat fatty foods and not gain weight…people who, like me, are GG at the SNP known to geneticists as rs3751812 are 6.3 pounds lighter, on average, than the AA’s. Thanks, rs3751812!” [5]. Another 23andMe test-taker, Lindsay Richman, posted excitedly to the website’s discussion board after learning that she was immune to norovirus infections, which cause the common stomach flu. “RESISTANT!!!” her post title shouted.
She went on to say that she had not had the stomach flu in almost 20 years [3]. 23andMe’s decision to provide data about mainly nonsensitive traits is prudent, given that offering relative risk for characteristics like stomach flu resistance and average body weight is less likely to provoke a backlash from medical and legal professionals [7]. Indeed, while 23andMe’s services give customers unprecedented access to their genome, the information it provides has proved to be much less alarming to customers than some feared. Yet it is important to note that even if 23andMe wished to provide direct genetic testing for more sensitive traits—for example, cancer—the diagnostic and therapeutic usages of genetic testing are still in their infancy


[7]. Although it is possible to determine some relative risk factors through a combination of genetic analysis, personal health, and family history, these at-home DNA tests cannot provide a diagnosis, a fact 23andMe is quick to point out [5].

Despite such limitations, at least one other company is moving decisively in the diagnostic direction. In December 2007, Dr. John Kelsoe, a psychiatric geneticist at the University of California, San Diego, announced that he had discovered several gene mutations closely tied to bipolar disorder [8]. Three months later, his company, Psynomics, began marketing a mail-order DNA kit to test for these variations, ostensibly to confirm a diagnosis of bipolar disorder. When the Psynomics test hit the market, the reactions in the scientific and ethical communities were immediate and strong. Many, like Francis Collins, director of the National Human Genome Research Institute, accused Kelsoe of relying on flimsy research to support his claims. Collins labeled the test “misleading,” noting that on his list of accepted genes for common diseases, “there are no entries yet for bipolar disorder” [4]. Others, however, lauded certain aspects of the Psynomics test, most notably the decision to have the test results sent to a doctor of the customer’s choosing, instead of directly to the customer—a step intended to prevent self-diagnosis. Dr. Martin Schalling of the Karolinska Institutet in Sweden argues that this makes the Psynomics test “different than others that are truly at-home tests [because] the results go to the treating physician” [9]. Another distinction of the test is that it is designed for customers who fit a certain profile: namely, those who are white, of Northern European ancestry, and have a family history of bipolar disorder [9]. According to the Psynomics website, customers with these risk factors who test positive for one of the mutations are two to three times more likely to have the disease [10].
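A "two to three times more likely" figure only becomes meaningful against a baseline. As a back-of-the-envelope sketch (the ~1% baseline prevalence used here is an assumption for illustration, not a figure from the article), a small baseline risk stays small even when doubled or tripled:

```python
def absolute_risk(baseline, relative_risk):
    """Naive conversion of a relative risk to an absolute probability,
    reasonable as an approximation only when the baseline risk is small."""
    return baseline * relative_risk

BASELINE = 0.01  # assumed ~1% baseline prevalence (illustrative only)

for rr in (2.0, 3.0):
    print(f"relative risk {rr:g}x -> absolute risk {absolute_risk(BASELINE, rr):.0%}")
```

Under this assumption, even a threefold relative risk moves a customer from a 1% to a 3% absolute risk: information, but hardly a diagnosis, which is the substance of Insel's "very slight increase in risk" caution quoted below.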
While such a definitive statement may sound like a promising step towards DNA diagnostics, there are many scientists who feel that Kelsoe’s choice to market the test, given the current data, is a poor one. Dr. Tom Insel, the director of the National Institute of Mental Health, argues: “Based on everything we know, this science is not ready for prime time…What has been found is an association with a common [genetic] variant that increases your risk of illness. It confers a very slight increase in risk. But that is a long way from being able to use that single genetic association to make any practical clinical decision” [9]. Indeed, Kelsoe himself admits that the science behind the Psynomics test is not well substantiated. When it comes to


determining whether gene variants are ready for commercialization, he acknowledges, “We’ve cut kind of a low threshold, and that is replication in at least one independent study”—though he also notes that he does not require that study to be done by another research group [9]. This lax condition stands in contrast to 23andMe’s verification method, which, according to the corporate fact sheet, “employs a systematic vetting process” of independent studies [1]. However, Kelsoe defends his decision to set such a low standard, and to market his test at all, by noting the current difficulty of diagnosing bipolar disorder. On average, seven years elapse from the onset of symptoms before a diagnosis of bipolar disorder is made, and patients are misdiagnosed three times [4]. “The goal of this is to try and help doctors make an accurate diagnosis more quickly so the patient can be treated appropriately,” Kelsoe says. “Anything is going to help, even if it just helps a little bit” [8].

Mediating the Revolution

In April 2009, an article entitled “Genetic Risk Prediction—Are We There Yet?” appeared in the New England Journal of Medicine. The authors’ answer, in a nutshell, was no. The central question of the article, however, remains: why not begin testing for common genetic variants with established associations to disease, even if knowledge about these variants and associations is incomplete [6]? The NEJM article contended that it is irresponsible to provide risk estimates based on incomplete information. Ironically, Dr. Kelsoe uses nearly the same reasoning to argue for marketing the Psynomics test, asserting that incomplete information is better than nothing. This is clearly not a question that can be answered simply. Yet as the debate over this point continues, the personal genomics field grows larger. With an estimated 1,000 tests on the market today, the era of personal genomics has arrived [8].
Although the information we have about genetic risk indicators is incomplete, these tests are already on the market, and consumers are buying them. Given the ambiguous nature of the science behind the test results, appropriate guidelines are urgently needed to advise those considering this type of genetic testing on how to interpret the results and when to act on the findings [6]. Furthermore, regulatory bodies must consider how to react to these tests. When is it appropriate to mediate someone’s contact with his or her own genome? This is the question we must answer.

When developing guidelines with this question in mind, we must take into account that not all personal genomics tests are created equal. After all, learning about a genetic predisposition to bipolar disorder is much more serious than finding out about a genetic predisposition to the stomach flu. Moreover, the strength of the evidence underlying each trait tested is not equivalent. The gene connected to stomach flu predisposition, FUT2, is well understood: in nearly all studies, subjects who lacked a functional copy of the gene did not get sick when exposed to the virus—the research equivalent of a home run [3]. For GRK3—the gene Dr. Kelsoe claims is tied to bipolar disorder—the evidence is not nearly as strong. To date, several independent studies have failed to replicate the link that Dr. Kelsoe reported in his findings [4]. While this does not necessarily mean that GRK3 is not an underlying cause of bipolar disorder, it does cast doubt on the validity of the Psynomics test.

As Dr. Frances Flinter of the Human Genetics Commission (HGC) in Great Britain notes, “Some [personal genomics] tests can cause considerable surprise or concern to those taking them…Some, to say the least, are of doubtful value. We need a set of principles that can be adopted within existing legal frameworks in different countries” [11]. In an attempt to begin constructing such a framework, the HGC published a set of principles meant to promote high standards and consistency among commercial DNA tests at an international level [11]. The principles cover all aspects of personal genomics tests and are straightforward: purchasers need to be aware of what they may find out; tests for serious hereditary diseases should only be provided with counseling before and after; and tests should come with easy-to-understand information on how they work and what the results mean [11]. Although most of the points addressed seem like common sense, the publication of this set of principles is an important first step toward providing consumers with a basis for deciding how and when to use these tests.

In the meantime, personal genomics companies are left to regulate themselves, and customers must interpret results on their own. The upshot is that consumers face a serious challenge, especially when it comes to test results concerning serious and complex traits. Scientists like cancer geneticist Allan Balmain see such ambiguity as a chance for personal genomics companies to “exploit the naïveté in the general population” [3]. In any case, until guidelines and regulations are in place to help consumers navigate the blossoming field of personal genomics, the best prescription, as the Federal Trade Commission notes, may just be a healthy dose of skepticism [9].

Laurel Mylonas-Orwig is an undergraduate at the University of Chicago.

References:
1. 23andMe, Inc. Fact Sheet. 23andMe, Inc.
2. Hamilton A. The Retail DNA Test. Time Magazine. 2008 Oct 27.
3. Barry P. Seeking Genetic Fate. Science. 2009; 176: 16-21.
4. Couzin J. Gene Tests for Psychiatric Risk Polarize Researchers. Science. 2008; 319: 274-277.
5. Harmon A. My Genome, Myself: Seeking Clues in DNA. The New York Times. 2007 Nov 17.
6. Kraft P, Hunter DJ. Genetic Risk Prediction—Are We There Yet? New England Journal of Medicine. 2009 Apr 15.
7. Subhajyoti D. A gold rush for personal genomics? Biologist. 2008; 55: 230-232.
8. Wohlsen M. Bipolar Disorder At-Home Test Causes Stir. Huffington Post. 2008 Mar 22.
9. Doheny K, Chang L. At-Home Bipolar Disorder Test: Help or Hindrance? 2008 Jun 4.
10. Psynomics. Physicians—The Science.
11. HGC publishes consultation on direct-to-consumer genetic testing. Human Genetics Commission. 2009 Sep 8.



The Growing Role of Aquaculture as a Food Source Matt Doiron


Thousands of years ago, humans in the Middle East made one of the most critical advances in the development of civilization: agriculture. Ancient agricultural practices increased the available food, fueling the growth of populations and the development of advanced societies and states. In modern times, agriculture has become so productive that only a small fraction of the population of developed countries needs to work in the agricultural sector to generate sufficient food for the entire society. However, most agricultural practices have focused on the farming of terrestrial plants and animals; while there has always been some level of wild fishing for subsistence or commerce, only recently have industrial operations emerged to cultivate fish, shrimp, algae, and other sea life.

Background

Aquaculture describes the intensive production of seafood, analogous to the practices common for land animals raised for food; fish, for example, are raised in captive tanks or pens connected to a water purification system, a natural river, or the ocean. The fish are given food, antibiotics, and other inputs as they are raised until they have grown enough to be sold commercially. In 2004, aquaculture was responsible for 43% of global seafood production, or about 48 million tons (60 million tons if aquatic plants are counted) [1]. Of this total, nearly 31 million tons were produced in China. India, Vietnam, Thailand, and Indonesia were the other countries that each produced over 1 million tons of farmed seafood. The leading species harvested worldwide are carps, along with oysters and other types of shellfish. The National Oceanic and Atmospheric Administration places U.S. aquacultural output in 2005 at almost 790 million pounds (395 thousand tons), valued at about $1.1 billion in 2005 dollars. The primary product is catfish, while one of the fastest growing is salmon [2,3]. In addition, about 70% of annual U.S. seafood consumption is imported, with aquacultural products making up about half of this [2]. The high imports are partly because aquaculture has grown fairly slowly in the U.S., which ranks third in the world in seafood consumption but only tenth in aquaculture production [1,2]. Within the U.S., Mississippi is the leading producing state, followed by Arkansas and Florida [2].

Through aquaculture, global seafood volume can potentially be increased, which is important for the overall food supply and potentially for global nutrition as well, as seafood provides many health benefits in comparison to meat. Furthermore, increases in global wealth may lead to increased demand not only for food in general but for protein sources in particular, and an increased availability of fish can substitute for increased production of chicken, beef, pork, and other meats.

Economic Factors

The primary benefits of aquaculture are the increase in available calories and protein and its advantages over its primary substitutes: better nutrition than meat, and less pressure on the environment than wild fishing. Table 1 compares some important nutritional data on Atlantic salmon, catfish, chicken, pork, and steak. As shown in the table, some species contain more fat when they have been farmed than when caught in the wild. However,


when compared with other varieties of meat, farmed fish has comparable and in some cases lower quantities of saturated fat, cholesterol, and sodium. In addition to this nutritional information, many species of fish are recognized as providing omega-3 fatty acids, which have been found to reduce the risk of cardiovascular disease; for this reason, the American Heart Association recommends eating fatty fish twice per week and specifically cites the nutritional benefits of eating fish rather than other meat products [4]. Seafood can benefit more than just residents of developed countries who agonize over saturated fats and omega-3 fatty acids, however. Merely increasing the volume of protein available can grant health benefits to people around the world whose diet is restricted more by price than by nutrition. From a financial point of view, fish raised in aquaculture settings are generally less expensive than their wild counterparts, making seafood more accessible to poorer consumers around the world.

Beginning in the 1990s, the global wild fish catch began to level off; as a consequence, the catch per capita has been decreasing as population has increased [3]. Part of the reason for this leveling off is that the production possibilities of wild seafood stocks are nearly maxed out: estimates are that about three-quarters of the world’s fish stocks are either fully exploited or overexploited [1]. With the world population increasing and becoming richer, economic theory would predict that the demand for seafood, and fish in particular, should increase in the future. The more seafood that can be drawn from aquaculture, the less danger posed to dwindling fish populations.
In addition, while some argue that wild fish stocks are overexploited because of the tragedy of the commons (the tendency for open-access areas to be overused when no prices restrict access to the resources), there is clear ownership of a farmed fish stock, encouraging efficient long-term planning.

Environmental Factors

Unfortunately, the environmental effects of aquaculture are not as beneficial, with concerns regarding ecology and pollution being particularly grave. The leading concern is that many farmed fish are carnivorous, so up to 4 pounds of other fish must be harvested to produce 1 pound of consumed fish (this is the case for the popular salmon, for example) [6]. Clearly, this effect reduces the potential increase in fish stocks promised by aquacultural practices. It should be noted, however, that carnivorous fish are less selective in their diets than humans, so their food can be drawn from other farm-raised fish or from wild-caught fish facing little or no environmental pressure (and, of course, wild-caught carnivorous fish consume a similar mass of wild fish themselves). Additionally, farmed fish that are raised in caged enclosures in open bodies of water, rather than in inland tanks, can escape into the wild, where they may compete with local species - and even with local populations of the same species - for resources. For example, escaped Atlantic salmon are commonly caught in the wild in both the Atlantic and Pacific Oceans [7]. Furthermore, just as with the farming of land animals, there are concerns regarding pollution from animal waste and from the use of chemicals and antibiotics.

To some degree, the environmental costs depend on the appropriate trade-off. Some concerns about aquaculture arise from comparing the practice to the adoption of a more vegetarian diet; this allows a case against aquaculture to be based not only on the issues mentioned above but also on animal cruelty, for example. If a certain level of seafood consumption is taken as given, however, then the trade-off reduces to the environmental costs mentioned above versus the risks to aquatic ecosystems if commercial fishing remains primary. And if it is accepted that there will be a large and growing demand for meat for the foreseeable future, aquaculture may look still better on an environmental basis when set against the industrial farming of pigs, chickens, and cows. This leads the World Wildlife Fund, for example, to take a somewhat neutral view of the overall effect of aquaculture, despite recognizing the damage from factors such as chemical use, while insisting that ecological standards be developed and seafood labeled according to its origin [8]. Seafood Watch, a program of the Monterey Bay Aquarium, publishes guidelines for determining whether any fish - farmed or wild - is produced in a way that does not harm the environment, with many farmed varieties considered environmentally friendly (farmed salmon being a notable exception) [9].

Conclusion

Seafood is a healthy source of protein, and a large and growing population gaining increased prosperity would be expected to consume more of it; this might even be desirable given seafood’s health benefits compared to red meat. Today, a large and growing share of available seafood originates not from the wild but from farms that seek to replicate the success humanity has had with agriculture. Environmental problems exist, such as the need to procure many pounds of feed fish to produce carnivorous fish for consumption, the potential for escapes into the wild, and pollution. Yet these factors are not likely to be strong enough to prevent aquaculture from continuing to grow as a technique to help feed a hungry world.

Matt Doiron is an undergraduate at the University of Chicago.

References:
1. “The State of World Fisheries and Aquaculture, 2006.” Food and Agriculture Organization of the United Nations. 2007.
2. “Aquaculture.” National Oceanic and Atmospheric Administration. Accessed 27 October 2009.
3. Goldburg, Rebecca J., et al. “Marine Aquaculture in the United States.” Prepared for the Pew Oceans Commission. 2001.
4. “Fish and Omega-3 Fatty Acids.” American Heart Association. Accessed 21 November 2009.
5. “Nutrition Data.” Accessed 21 November 2009.
6. Cressey, Daniel. “Future Fish.” Nature. Vol. 458. 26 March 2009.
7. Naylor, Rosamond L., et al. “Effect of aquaculture on world fish supplies.” Nature. Vol. 405. 29 June 2000.
8. “Aquaculture.” World Wildlife Fund. Accessed 29 October 2009.
9. “Seafood Recommendations.” Monterey Bay Aquarium. Accessed 21 November 2009.
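The feed-conversion arithmetic behind the article's leading environmental concern can be sketched directly. Only the salmon ratio ("up to 4" pounds of feed fish per pound of product) comes from the article; the other ratios below are hypothetical placeholders for illustration, not sourced figures.

```python
# Pounds of wild feed fish consumed per pound of farmed product.
# Only the salmon figure ("up to 4") is from the article; the others
# are hypothetical placeholders for illustration.
FEED_RATIO = {"salmon": 4.0, "shrimp": 2.0, "catfish": 0.5, "carp": 0.0}

def feed_fish_required(output_lbs):
    """Total pounds of feed fish implied by a mix of farmed output."""
    return sum(FEED_RATIO[species] * lbs for species, lbs in output_lbs.items())

# 1,000 lb of farmed salmon alone implies up to 4,000 lb of feed fish,
# while the same weight of herbivorous carp implies none.
demand = feed_fish_required({"salmon": 1_000, "carp": 1_000})
```

Under these assumptions, shifting demand from carnivorous species toward herbivorous or filter-feeding ones shrinks the wild-catch footprint, which is consistent with carps and shellfish dominating global farmed production.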



Aquaculture: Farming in the Sea Kathryn Blackley


ponds are a less environmentally devastating use of land than slash-and-burn agriculture [5]. Moreover, it can be a source of both food and income in developing countries [5].

The Exhausted Seas

Aquatic resources were long considered inexhaustible, and fishing an industry limited not by demand but by technology. Technology allowed individual fishers to increase their catches, but during the 20th century it became so efficient that overfishing contributed to numerous fish population collapses. One of the most notorious was the collapse of Atlantic cod (Gadus morhua) off the coast of Newfoundland, which caused the Canadian government to order the complete cessation of cod fishing in 1992 [1]. After a fish population has collapsed, stopping fishing is not always enough for a return to earlier numbers, and fishers are understandably averse to regulations that restrict their enterprise [2]. Abundance records for many fish populations begin in the 1970s and 1980s, when overfishing or environmental factors may have already depleted populations, so the consequences of fishing are not completely known [2]. While fish populations worldwide were in crisis, the human population doubled from 3 billion in 1959 to 6 billion in 1999, and the fishing industry had to find alternatives to the wild catch [3].

Challenges of Aquaculture
As with most instances of human intervention in a natural process, aquaculture has negative side effects. Contained fish eat and produce waste in a more concentrated manner than fish in any natural ecosystem. Though the concentrated effluent caused by aquaculture can locally increase biodiversity, it can also cause eutrophication, a condition in which excess nutrients disrupt the balance of a biological community [5]. This is a challenge particularly for freshwater systems, where the volume of water is not as great [5]. When the systems are structured as ponds, the pond water must be replaced periodically [5].

Reproduced from [18]

Aquaculture can also disrupt the surrounding ecosystem. Birds that eat fish can change their hunting patterns in response to aquaculture [9]. Catfish in aquaculture ponds were consumed not only by their regular predators but also by herons and egrets that took advantage of their availability [9]. Fish occasionally escape: irregular stresses like storms can increase the number of individuals entering the surrounding ecosystem [10]. When the species is not native to the area, it can introduce pathogens and parasites that are detrimental to other organisms [11]. The very characteristics that make fish successful in aquaculture, including tolerance of environmental changes and quick reproduction, also make them more likely to become invasive [5]. Atlantic salmon (Salmo salar), which are farmed widely both in and out of their natural range, have been closely examined as an example of the consequences of escapement [10]. Escaped salmon can disturb the ecosystem even if they do not reproduce, as their greater size and aggressiveness can make native fish less productive [10]. When they reproduce with the native species, the resulting hybrids have less reproductive success, which can threaten the survival of a native species already in crisis [10]. If the escaped fish reproduce with each other, their offspring have a fitness advantage over the native species, which will decrease genetic diversity, making the population more susceptible to

Agriculture and animal domestication have thrived in various forms for millennia, and people expect that most of the meat, animal products, fruits, and vegetables consumed worldwide originated at some type of farm. Until recently, the only exception was the sea, where fishing boats still caught fish in the wild. However, the depletion of fish populations worldwide has finally brought some seafood into the domain of agriculture through the rapidly expanding industry of aquaculture.

An Alternative
Aquaculture, the practice of raising or "farming" fish or other aquatic organisms for consumption, was rare in the 1950s but has provided nearly all the growth in the world fish harvest since the 1980s [4,5]. In 2000, fish raised in aquaculture accounted for one quarter of all the fish consumed by humans, and by 2008 the estimate had increased to 50% [6,7]. Aquaculture can be practiced in a variety of ways: through artificial ponds for freshwater fish, cages for marine fish, and open marine environments for shellfish [5,6].

© 2010, The Triple Helix, Inc. All rights reserved.

environmental change or disease [10]. The environmental changes created by the physical containments themselves can also present challenges [12]. In Norway, a species of hydroid has begun to live on aquaculture nets [12]. Although it is not known why the nets have become a hospitable habitat, aquaculture farmers must clean them to allow quality water to reach their fish [12]. This type of contamination, generally called biofouling, significantly increases the costs of aquaculture in many different habitats [13]. While anti-biofouling measures have been studied in the shipping industry, little of that work has yet transferred to aquaculture [13].

Aquaculture as a Sustainable Practice
To continue in the future, aquaculture will need to find alternative ways to control eutrophication and to replace fish meal [6]. In more intensive systems, the fish protein in the fish meal fed to the farmed fish is more than twice the eventual output of protein [6]. Fish oil, which provides dietary lipids, has steadily decreased in world production for the last twenty years, causing its price to increase [7]. Alternative sources of lipids, including vegetable oil, terrestrial animal lipids, and fish by-products, are one way to decrease the use of fish meal [7]. Some combination of these could provide most of the lipid requirements for aquaculture, but before they could be used widely, it would be necessary to determine which sources are appropriate for different kinds of fish [7]. Although these alternative lipids are more affordable, using them can decrease the amount of fish oil in the fish, a concern because fish oil constitutes the main nutritional benefit of fish consumption [7,14]. Recirculating aquaculture is a potential solution for effluent disposal, as the fish are kept in tanks that allow for more control over both nutrients and waste [15].
Due to the capital required, this would not be practical for many of the species now in aquaculture, but those used in gourmet cooking could be raised in this manner because they can be sold at very high prices [15]. A system applying this concept

References:
1. Olsen EM, Heino M, Lilly GR, Morgan MJ, Brattey J, Ernande B, Dieckmann U. Maturation trends indicative of rapid evolution preceded the collapse of northern cod. Nature 2004 Apr 29; 428:932-935.
2. Hutchings JA, Reynolds JD. Marine Fish Population Collapses: Consequences for Recovery and Extinction Risk. Bioscience 2004 Apr; 54(4):297-309.
3. US Census Bureau [Online]. 2009 Sept 10 [cited 2009 Oct 12].
4. Ninan KN, editor. Conserving and Valuing Ecosystem Services and Biodiversity: Economic, Institutional and Social Challenges. Sterling (VA): Earthscan Publications Ltd.; 2009.
5. Diana JS. Aquaculture Production and Biodiversity Conservation. Bioscience 2009 Jan 8; 59(1):27-38.
6. Naylor RL, Goldburg RJ, Primavera JH, Kautsky N, Beveridge MCM, Clay J, Folke C, Lubchenco J, Mooney H, Troell M. Effect of aquaculture on world fish supplies. Nature 2000 June 29; 405:1017-1024.
7. Turchini GM, Torstensen BE, Ng W-K. Fish oil replacement in finfish nutrition. Reviews in Aquaculture 2009 Feb 10; 1(1):10-57.
8. HORIZON International. Integrated Aquaculture Provides a Viable Alternative to Slash-and-Burn Agriculture, Reducing Destruction of Tropical Forests [homepage on the Internet]. Ucayali Basin, Peru: HORIZON International; c2003-2006 [updated 2009 Nov 6; cited 2009 Nov 6]. Available from: cat1_sol3.htm
9. Price IM, Nickum JG. Colonial Waterbirds 1995; 18(1):33-45.
10. Naylor R, Hindar K, Fleming IA, Goldburg R, Williams S, Volpe J, Whoriskey F, Eagle J, Kelso D, Mangel M. Fugitive Salmon: Assessing the Risks of Escaped Fish from Net-Pen Aquaculture. Bioscience 2005 May; 55(5):427-437.


Reproduced from [19]


to pond freshwater aquaculture, which would use plants to facilitate the circulation, is currently in development [16]. Such a system would require plant species that can survive in fish-filled ponds and effectively clean them [16].

Future
With the US Census Bureau projecting the world population to increase by another 3 billion before 2050, and with wild fish populations still struggling to recover from decades of poor fishing practices, aquaculture is expected to further increase as a worldwide source of fish [3,5]. New aquaculture sites will need to be carefully placed, and the use of alternative lipids will have to be weighed against the nutritional costs to consumers [6]. With different types of aquaculture becoming popular, certifications and labels will need to be standardized by the industry [17]. As governments and industries strive to become more environmentally sound, management of aquaculture should be a priority.

Kathryn Blackley is a sophomore biological sciences major at Cornell University.

11. Cook EJ, Ashton G, Campbell M, Coutts A, Gollasch S, Hewitt C, Liu H, Minchin D, Ruiz G, Shucksmith R. Non-Native Aquaculture Species Releases: Implications for Aquatic Ecosystems. In: Holmer M, Black K, Duarte CM, Marbà N, Karakassis I, editors. Aquaculture in the Ecosystem. Springer; 2008 Mar 11. p. 155.
12. SINTEF. Biofouling on aquaculture constructions [homepage on the Internet]. Norway: SINTEF [updated 2009 July 9; cited 2009 Nov 6].
13. Willemsen PR. Biofouling in European Aquaculture: Is There An Easy Solution? CRAB Project.
14. Bell JG, McEvoy J, Tocher DR, McGhee F, Campbell PJ, Sargent JR. Replacement of Fish Oil with Rapeseed Oil in Diets of Atlantic Salmon (Salmo salar) Affects Tissue Lipid Compositions and Hepatocyte Fatty Acid Metabolism. Journal of Nutrition 2001; 131:1535-1545.
15. Losordo TM, Masser MP, Rakocy J. Recirculating Aquaculture Tank Production Systems: An Overview of Critical Considerations. Southern Regional Aquaculture Center. 1998 Sept; 451.
16. ScienceDaily [homepage on the Internet]. Floating Iris Plants May Help Clean Fishery Wastewater. ScienceDaily LLC; c1995-2009 [updated 2009 Feb 10; cited 2009 Nov 8].
17. Lee D. Understanding aquaculture certification. Rev Colomb Cienc Pecu 2009; 22(3):319-329.
18.
19.



The Guarded Gate: DNA Testing for Refugees
Nipun Verma

The Human Provenance Project
The United Kingdom Borders Agency started the pilot program, the Human Provenance Project, in September 2009 to pinpoint the nationalities of people seeking refugee status in the United Kingdom. Specifically, U.K. officials were concerned that Kenyans were trying to pass themselves off as Somalis, who have a greater chance of being granted refugee status due to the civil war in Somalia. The program would use DNA testing to compare nucleotide sequences of living individuals to sequences of historic populations to determine ethnic origin. It would also use isotopic analysis, which matches certain isotope ratios in hair and nails to the ratios found in the individual's place of birth or upbringing [1]. However, DNA analysis for African populations has limited resolution and is subject to considerable error. There is also no evidence that isotopic ratios present at birth or in early childhood are preserved in continuously growing tissues. More obviously, these tests ignore the fact that people move and nationalities can change, whereas DNA remains the same [2]. The scientific community and refugee support groups expressed outrage the moment the program was announced, and this reaction led to the temporary suspension of the program in October 2009 [3].

DNA Testing for Family Reunification
The emergence of the Human Provenance Project highlights the capacity for scientific technology, including DNA testing, to expand from its traditional fields and to influence refugee cases. When refugees flee their country of origin, they often leave family members behind. Many countries have established procedures that use DNA testing to determine biological relationships in family reunification cases, but other countries, notably the United States, have not yet come to a decision on this issue [4, 9]. Programs like the Human Provenance Project, which try to establish nationality, should be terminated because they are fundamentally flawed. But what about employing DNA testing to establish biological relationships? Comparing DNA sequences between individuals can establish biological relationships because blood relatives share similar sequences of DNA, which can be obtained from cell samples drawn from blood, saliva or hair [5]. The technological accuracy and validity of this genetic testing is unquestioned; DNA testing to establish paternity is regularly used, and the results are admissible in court. Nevertheless, is DNA testing for family reunification ethically justifiable?

Family reunification is vital for refugees. The absence of family members can exacerbate the trauma of migration and can impede assimilation into a new country [6]. Several international documents stress the importance of family reunification. The Universal Declaration of Human Rights of 1948 and the United Nations Covenant on Civil and Political Rights of 1966 both recognize that "the family is the natural and fundamental group unit of society and is entitled to protection by society and the State" [7]. In addition, the executive committee of the United Nations High Commissioner for Refugees has issued numerous recommendations urging refugee family reunification. However, the executive committee's recommendations are not binding upon governments and are fairly broad and non-descript. As a result, national governments have developed their own procedures to determine the legitimacy of family reunification in individual cases [7]. Although the humanitarian reasons for allowing family reunification are understood, financial support for refugees comes from domestic welfare programs, so governments have an incentive to limit the number of refugees admitted [8]. Ultimately, the issue of family reunification underscores the tension between national governments' responsibility to respect the human rights of refugees and their interest in curbing migration across their borders.

Fraud in Refugee Family Reunification
The problem of fraudulent applications is an important point of focus for governments that deal with refugees, as shown by the emergence of the Human Provenance Project. In the context of family reunification, fraud arises when refugees claiming to be family members have no actual hereditary link. In 2008, the U.S. Department of State suspended the humanitarian program Priority 3, which reunited African refugees with relatives living in the U.S. In February 2008, the U.S. government started a pilot DNA testing program to verify the genetic ties between relatives. The initial DNA testing included 500 individuals, primarily from Somalia and Ethiopia, but it was later expanded to over 3000 individuals from Ethiopia, Uganda, Ghana, Guinea, Gambia and Cote d'Ivoire. DNA testing showed that a large number of the applicants were not related to their putative family members, and thus they were ineligible for family reunification. Due to the high number of fraudulent applications, the reunification program was suspended in October 2008 and has not yet been reinstated [9]. In the past few months, reports have surfaced that the Obama administration is considering restarting the Priority 3 program with new procedures that include DNA testing for some refugee applicants [10]. If the program does include DNA testing, the U.S. will be far from alone. Many nations, including Denmark and Canada, have already instituted DNA testing as part of family reunification procedures. Other countries, like Germany and Switzerland, have established DNA testing for immigrants, and these procedures often spill over into the refugee context [4]. As such, DNA testing is well established in some countries and is increasing in popularity in others.

Problems with DNA Testing
Although DNA testing to establish biological relationships is scientifically accurate, it poses some important limitations and ethical ramifications. Most importantly, families are not always biologically related. For example, the traditional family conception ignores the case of adopted children. Also, there is no universal definition of family; it is a socially constructed term that differs from one culture to another. In many cultures, family incorporates both biological and close social relationships [12]. In fact, after U.S. DNA testing revealed fraud, many refugee advocates argued that the definition of family among Africans extends beyond blood relatives, especially in cases in which relatives are scattered by persecution or warfare [11]. The application of DNA testing can produce practical problems as well. There are concerns that DNA testing is more likely to be requested from individuals from poorer countries.
These individuals are less able to obtain documentary evidence from their governments, and the receiving governments are more likely to reject these documents as fraudulent. Furthermore, DNA testing can be expensive, and applicants may not be able to pay. Others may be constrained by religious beliefs that ban the surrendering of blood samples. Lastly, DNA testing raises serious concerns about refugees' right to privacy, because the personal data obtained could be disclosed to unauthorized parties [12].

A Compromise?
Nations ultimately have a responsibility to guard their own borders. DNA testing can be useful in family reunification cases as long as the method's limitations are recognized and its use is carefully regulated. First and foremost, DNA testing should be used as a last resort, and it should be possible to overturn results showing no biological relationship with sufficiently strong contrary evidence of a family tie. There must be strict and uniform national guidelines detailing when DNA testing should be used, in order to ensure its application is as non-discriminatory as possible. The cost of DNA testing for refugees should be borne by the receiving government, and there must be clearly defined measures for data protection. Family reunification at heart is a matter of humanitarianism and needs to recognize the rights of refugees. However, reuniting the families of displaced refugees also has positive social and economic consequences for the receiving country. The presence of family members eases assimilation into the national culture and integration into the workforce. Although an open door policy is not the solution, neither is the creation of more restrictive measures that unfairly prevent refugees from joining their families.

Reproduced from [13]

Nipun Verma is a senior in the College of Arts and Sciences and is studying biology.

References:
1. Genetics without borders. Nature 2009; 461(7265):697.
2. Travis J. Scientists decry isotope, DNA testing of 'nationality'. Science 2009 October 2; 326(5949):30-1.
3. Williams C. UK Border Agency suspends 'flawed' asylum DNA testing. The Register 2009 October 9 [cited 2009 November 28].
4. European Council on Refugees and Exiles. Survey of Provisions for Refugee Family Reunion in the European Union. London: ECRE; 1999.
5. UN High Commissioner for Refugees. UNHCR Note on DNA Testing to Establish Family Relationships in the Refugee Context. Geneva: UNHCR; 2008.
6. UN High Commissioner for Refugees. UNHCR Guidelines on Reunification of Refugee Families. Geneva: UNHCR; 1983. Available from: http://www.unhcr.org/3bd0378f4.html
7. Council of Europe. Thomas Hammarberg. Viewpoint: "Refugees must be able to reunite with their family members." [cited 2009 November 29]
8. U.S. Department of Health and Human Services: Office of Refugee Resettlement. Washington D.C.: US Department of Health and Human Services. 2009 June 1 [cited 2009 November 28].
9. U.S. Department of State. Fraud in the Refugee Family Reunification (Priority Three) Program. Washington D.C.: Bureau of Population, Refugees, and Migration. 2009 February 3 [cited 2009 November 28]. Available from: prm/rls/115891.htm
10. Lee M. US mulls DNA tests for some refugees. CBS News 2009 November 5 [cited 2009 November 28]. Available from: stories/2009/11/05/ap/preswho/main5540304.shtml
11. Jordan M. Refugee Program Halted As DNA Tests Show Fraud. The Wall Street Journal 2008 August 20 [cited 2009 November 28]. Available from: http://online.wsj.com/article/SB121919647430755373.html
12. Taitz J, Weekers J, Mosca D. DNA and immigration: the ethical ramifications. The Lancet 2002; 359(9308):794.
13.



Through a Baby’s Eyes: Studies in Infant Cognition
Megan Altizer


Babies. They inspire cooing and melt even the hardest of hearts. Although one would hardly expect to look behind those big round eyes and heads of peach fuzz for answers about cognition, in recent decades psychologists have looked to infants to unravel the mystery of basic human cognition and development. Initially, methodology appears to be a large roadblock in this research: without the ability to speak, how can infants aid in the advancement of human cognition? The solution psychologists developed to overcome this seeming difficulty in communication is perhaps one of the most ingenious innovations in developmental psychology, and it lies behind those big round eyes: a technique called "looking time". Babies look longer at objects that they find novel or surprising. Psychologists have harnessed this basic fact and created experiments that exploit it in order to understand developmental cognition. The following three profiles, merely a handful from the vast body of work in infant cognition, show exactly how this type of technique is used and what psychologists can learn from infants.

The first profile, one of the most well known infant cognition studies, was conducted by Karen Wynn of Yale University and reveals that the surprisingly complex abilities of infants extend even to mathematics. This looking-time study investigated the mathematical abilities of infants approximately five months of age. The experiment began with the placement of a single object, in this case a Mickey Mouse doll, on a stage. A screen was then raised to hide the first doll from view. A second doll was then added to the stage; though it was placed behind the screen and out of sight, it passed through the infant's view as it was placed onstage. Once the screen was dropped, the scene featured either a possible outcome, the two dolls that the infant had seen placed on stage, or an impossible outcome, a single doll. Results indicated that infants looked significantly longer at the impossible outcome, suggesting that this scene surprised them, or violated their expectations. To gain more convincing evidence, the infants were exposed to a second condition involving the reverse arithmetic situation. The experiment was repeated with a new introductory scene with two dolls. A screen was again raised, but this time infants saw a hand removing one object from behind the screen. The screen then dropped to reveal a single object (possible) or two objects (impossible). Again, infants looked significantly longer at the impossible outcome.

Reproduced from [4]

Wynn noted that it is possible that such results indicated an ability to "calculate the results of a continuous amount of physical substance" rather than concrete mathematical abilities [1]. In other words, it is possible that the babies understood that one plus one is some amount more than one, but not necessarily two. To test this hypothesis, Wynn conducted a third experiment. This final condition was similar to the first, except that the impossible result featured three dolls instead of one; that is, it tested the infants' understanding that one plus one equals two, not three. Yet again, infants looked longer at the impossible event, which featured three dolls where there should have been two. This increased looking time at the impossible scenario suggests that infants are computing in discrete mathematical terms; they do not simply conceptualize the idea that addition results in something more than one or subtraction results in something less than two or three. Wynn wrote that such results suggest an innate mathematical capacity in humans, one which "may provide foundations for the development of further arithmetical knowledge" [1].

Additional experiments have shown that infants exhibit a basic understanding of physical concepts as well. In order to understand the principles governing the physical world around them, infants develop categories in which to classify events. These categories, which include occlusion (the hiding of one object behind another), containment (in which one object is placed inside another), and covering (in which an object is covered by a rigid screen), are understood through the attribution of variables including height and transparency. Through various looking-time experiments, scientists have found that infants process these categories through a module: when watching an event occur, they build a mental model of the event in order to predict its outcome. This model is then analyzed through the principles the infant has previously learned about that category, and the relevant variables are included in the model as well. While the understanding of which variables are important generally develops with age, evidence suggests that two physical principles are innate. These principles include continuity, the idea that "objects exist continuously in time and space"
Through various looking time experiments, scientists have found that infants process these categories through a module – when watching an event occur, they make a model of this event in their mind in order to predict the outcome of the event. This model is then analyzed through the principles the infant has previously learned about that category. Variables, like occlusion, containment, and covering, are then included in the model as well. While the understanding of which variables are important generally develops with age, evidence suggests that two physical principles are innate. These principles include continuity, the idea that “objects exist continuously in time and space”

and solidity, the idea that "for two objects to each exist continuously, the two cannot exist at the same time in the same space" [2]. While these findings may at first seem abstract and rather useless, Renee Baillargeon, a distinguished professor at the University of Illinois Urbana-Champaign, found possible teaching value in these experiments: by providing key conditions to infants viewing physical events, scientists were able to teach infants about their physical world at a younger age.

Other experiments highlight infants' social knowledge. A now famous infant cognition study was conducted in 2003 by Valerie Kuhlmeier, formerly a postdoctoral researcher at Yale University and now of Queen's University, along with Karen Wynn and Paul Bloom, both of Yale University. It investigated the infant's ability to understand the goals of others. To understand the goals of others, humans must be able to posit others' internal states, including emotions and intentions, which often drive behavior. The experiment, a computer animation, involved a ball attempting to "climb" a hill. The ball was then helped or hindered by other shapes. In a second movie, the ball would move next to either the shape that had helped it or the shape that had hindered it. Through looking-time measurements, it was found that infants 12 months of age showed a preference for the video in which the ball moved next to the helper shape rather than the shape that had hindered it. Analysis of these results suggests that the infants attributed mental states and goals to the shapes, and therefore preferred the video that provided a more logical continuation of the first: the ball associated with its helper, not its hinderer. The psychologists conducting the study concluded that infants are able not only to "recognize a goal event, but also to later infer a new disposition in a new situation" [3].

Reproduced from [5]
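The looking-time logic running through all of these studies can be sketched as a toy comparison. The numbers below are invented for illustration and are not data from any of the experiments described:

```python
# Toy looking-time comparison: infants are predicted to look longer at an
# "impossible" outcome than at a "possible" one. All values are invented.
possible_secs = [8.2, 7.5, 9.0, 6.8, 7.9]        # hypothetical looking times (s)
impossible_secs = [12.1, 10.4, 13.0, 11.2, 12.6]

def mean(xs):
    return sum(xs) / len(xs)

surprise = mean(impossible_secs) - mean(possible_secs)
print(f"possible:   {mean(possible_secs):.2f} s")
print(f"impossible: {mean(impossible_secs):.2f} s")
print(f"difference: {surprise:.2f} s")  # longer looking is read as a violated expectation
```

In an actual study, the difference across many infants would be tested for statistical significance rather than simply compared, but the underlying inference is the same: reliably longer looking is taken as evidence that the outcome violated the infant's expectation.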
It is through studies like those described above that psychologists are better able to understand the development of the human mind and the tug of war between nature and nurture. One would hardly expect such a wealth of knowledge to stem from such adorable sources, but this research is a testament to the ingenuity and persistence of psychologists in the field. One can only wait with curiosity to see what infant cognition research can reveal about the human psyche in the future.


References:
1. Wynn K. Addition and Subtraction by Human Infants. Nature 1992 August 27; 358:749-50.
2. Baillargeon R. Infants' Physical World. Current Directions in Psychological Science 2004 June; 13(3):89-94.
3. Bloom P, Kuhlmeier V, Wynn K. Attribution of dispositional states by 12-month-olds. Psychological Science 2003 September; 14(5):402-8.
4.
5.


Megan Altizer is a sophomore in Silliman College at Yale University.



How Brain Emulation Will Impact the Future of Our Society
Thomas S. McCabe


Over the past fifty years, scientists have learned how to model increasingly complex phenomena using computer simulations. These simulations, from models of the weather, to algorithms for forecasting the stock market, to battle simulations in times of war, have had a large impact on our lives and on the structure of our society. However, there is one kind of computer simulation, namely full-scale, realistic simulation of the human brain, that may have not just large but profoundly transformative impacts on our entire civilization. Whole brain emulation (WBE) is a future technology that would create a new kind of intelligence, one based on computers rather than the cells and DNA that have been the foundations of life for the past few billion years. The basic idea behind WBE is that, if anything is simulated well enough, the behavior of the simulation as a whole will mimic the behavior of the thing being simulated. For instance, if one simulates a Space Shuttle accurately enough, the simulated Shuttle will be able to do anything the physical Shuttle can do, including blasting off, flying into orbit, re-entering the atmosphere, and landing [1]. Hence, if one constructs a detailed enough model of the human brain, down to the level of individual neurons, the model will be able to do everything a human can do, including learning, thinking, and creative reasoning [2,3]. There are three key technologies currently under development that will lay the foundations for a WBE project: high resolution scans of large areas of the brain, programs to translate the imaging data into a model of the brain, and computing power and memory for running the final simulation.
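Of these three requirements, raw computing power is the easiest to reason about quantitatively. As a rough sketch, assuming exponential growth of the kind the article invokes (the desktop figure and the doubling time below are assumptions for illustration; the roughly 10^15 FLOPS brain estimate is the article's):

```python
import math

# Back-of-the-envelope: years until a desktop machine matches brain-scale
# computing, given steady exponential growth. The ~100 GFLOPS desktop
# figure and the 1.5-year doubling time are illustrative assumptions.
brain_flops = 1e15        # lower bound of the article's 1-10 quadrillion FLOPS range
desktop_flops = 1e11      # assumed circa-2010 consumer machine
doubling_time_years = 1.5

doublings_needed = math.log2(brain_flops / desktop_flops)
years_needed = doublings_needed * doubling_time_years
print(f"doublings needed: {doublings_needed:.1f}")
print(f"years needed: {years_needed:.0f}")
```

Under these assumptions the answer comes out to roughly two decades, which is consistent with the "decade or two" timescale the article cites for ordinary computers reaching these speeds.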


Adequate imaging technology must satisfy two different criteria: the scanner must have a high enough resolution to construct models of individual neurons, and it must also be able to scan a large area, so that a full human brain can be scanned in a reasonable amount of time. We have technologies, such as scanning electron microscopy, that can do the former, and others, such as magnetic resonance imaging, that can do the latter, but we currently don't have anything that can accomplish both simultaneously. However, scientists in the field are making substantial progress with new scanning techniques, such as massively parallel electron microscopy, that can scan rapidly at the resolution required. Systems powerful enough to scan the entire human brain appear to be feasible within a decade or two [2].

The second component, data processing, is largely a problem of writing better image analysis software. Scientists today can look at sections of scanned brain tissue and identify neurons, synapses (connections between neurons), glial cells, and other structures, but this process is extremely slow, and therefore impractical for WBE [4]. However, if a computer could parse the images and build a 3-D model automatically, it would make building a model of the whole brain viable. Research in this area is ongoing, and the large library of image-processing techniques we currently have will probably work reasonably well, given enough effort and funding [5].

Reproduced from [24]

The third main component, computer processing power and storage, is very easy to forecast, because of the exponential trends collectively known as Moore's Law. Over the past century, numerous measures of computer power have improved

YALE at an exponentially accelerating rate, with doubling times intelligence and thinking ability than anything our physical ranging from a year to a decade [6]. These trends are expected bodies can do. We do not usually regard disabled people as by industry specialists to continue for at least another twenty being unable to work, contribute to society, or rule nations years, and probably longer [7]. and empires simply because their biological bodies are less Although such comparisons are necessarily inaccurate, functional. Similarly, WBEs will not be prevented from dothe general scientific consensus is that the human brain has a ing any of these things, just because they have no biological processing capacity of around one to ten quadrillion floating bodies at all. point operations per second (FLOPS) [8-11], which is about In addition, it’s important to ask the question: what new the same as that of our most powerful supercomputer, IBM’s abilities will brain emulations have, as compared to modernBlueGene/P [12]. This includes not only our logical and delib- day humans? One of the most important is the ability to easerative reasoning abilities, but all the computation involved in ily replicate themselves. Humans generally reproduce on the everything our brain does, from moving muscles, to creativity, order of twenty years. A WBE, on the other hand, is simply a to seeing and modeling the world around us. Because of the computer program- if a very complex one- and it can be copied exponential progress of computing technology, it will probably as quickly as any other computer program. It will probably be only another decade or two before ordinary computers, take decades before scientists and engineers have finished all ones available at your local store, can attain these speeds. the work of developing the scanning technology, creating fast Futurists, neurologists, and computer industry experts have enough computers, and then actually building the simulation. 
calculated that a research group with a modest budget should However, once this work is completed, the simulations will be able to match the brain’s computing power in ten to twenty be able to copy themselves at the speed of electronics. years [2, 13]. This implies, among other important consequences, that Overall, projects in the field of computational neurosci- there will be extremely fast growth in the number of WBEs ence, the study of the brain as an [16,17]. The overpopulation of information processor, are makhumans is already a major coning rapid progress, although we cern for society. With reproduction In addition, it’s important to aren’t ready to attempt WBE yet. times on the order of minutes to Researchers at IBM have recently days- the time it takes to copy a ask the question: what new simulated a mouse-scale brain computer program- WBEs could abilities will brain emulations on the neuronal level, using a well run up against the carrying BlueGene/L supercomputer to capacity of the Earth’s computhave, as compared to modernmodel each of the eight million ers before we even have time to day humans? One of the most neurons [14]. Another, more derealize that a problem exists, let tailed simulation modeled the alone formulate a solution. important is the ability to cortex of a small mammal, with What is the carrying capacity easily replicate themselves. 22 million neurons [15]. of the world’s computers? CurOnce WBE has reached a rently, it’s not very large; as of Humans generally reproduce high enough stage of develop2009, Folding@HOME, the world’s on the order of twenty years. 
ment, it’s important to note that largest distributed computing WBEs will, essentially, have all project, has a total capacity of A WBE, on the other hand, is the capabilities that humans do, around 8 quadrillion floating simply a computer program- if as their thought processes will point operations per second be indistinguishable from those (FLOPS), or about the same as a a very complex one- and it can of humans. It is still widely desingle human brain [18]. However, be copied as quickly as any bated among philosophers and if historical trends continue, our scientists whether WBEs will be computational capacities will just other computer program. conscious, whether they will keep growing exponentially, until experience pleasure and pain, they run up against the bounds and whether they will have souls set by the laws of physics [13]. or personal identities. However, it is generally agreed that, These bounds, it’s important to note, are extremely high. For given enough simulation detail, they will be capable of doing every bit of data crunched, at room temperature, Landauer’s anything that we can do, from composing symphonies, to principle requires that 2.9 * 10-21 joules of energy be expended programming computers, to proving Fermat’s Last Theorem. [19]. The Earth receives a steady energy flow of about 122 PW, It is true that, for the foreseeable future, WBEs will not or 122,000,000,000,000,000 joules per second, from the Sun be able to directly manipulate human or human-like bodies, [20]. Hence, the total amount of computing power that the unless we deliberately build bodies for such a purpose. How- Earth can support is around 1036 FLOPS, which corresponds ever, in the modern era, this is becoming, and will continue to to a carrying capacity of around 1020 WBEs, a number fifteen become, increasingly irrelevant. 
Most of the important parts billion times larger than the current population of seven bilof our civilization and our economy already rely more on our lion humans.
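The growth projection and the physical ceiling described above can be checked with back-of-envelope arithmetic. In the sketch below, the 2010 desktop figure, the doubling time, and the bit-operations-per-FLOP factor are illustrative assumptions made here, not numbers from the article; the Landauer constant, the solar flux, and the brain estimate come from the text.

```python
import math

# Back-of-envelope checks of the figures above. DESKTOP_2010_FLOPS,
# DOUBLING_YEARS, and BITS_PER_FLOP are assumptions for illustration only.
BRAIN_FLOPS = 1e16            # upper estimate of the brain's capacity [8-11]
DESKTOP_2010_FLOPS = 1e11     # assumed: high-end 2010 consumer hardware
DOUBLING_YEARS = 1.5          # assumed Moore's-Law doubling time

doublings = math.log2(BRAIN_FLOPS / DESKTOP_2010_FLOPS)
print(f"consumer parity in ~{doublings * DOUBLING_YEARS:.0f} years")  # ~25

LANDAUER_J_PER_BIT = 2.9e-21  # J per bit operation at room temperature [19]
SOLAR_INPUT_W = 1.22e17       # ~122 PW of sunlight reaching Earth [20]
BITS_PER_FLOP = 100           # assumed bit operations per floating-point op

max_flops = SOLAR_INPUT_W / LANDAUER_J_PER_BIT / BITS_PER_FLOP
print(f"physical ceiling ~{max_flops:.0e} FLOPS")       # ~4e+35, near 10^36
print(f"room for ~{max_flops / BRAIN_FLOPS:.0e} WBEs")  # ~4e+19, near 10^20
```

Under these assumed figures the arithmetic lands close to the article's estimates: a couple of decades to consumer parity, and a solar-powered ceiling on the order of 10^36 FLOPS, or 10^20 emulated brains.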

© 2010, The Triple Helix, Inc. All rights reserved.


In addition, WBEs will theoretically be able to think at much faster speeds than humans. A physical neuron runs at around 200 cycles per second, or 200 Hz, and there is no way to speed this up substantially. A WBE, however, could speed itself up simply by moving to a more powerful computer. A brain emulation is, after all, a computer program, and if you run a program on a more powerful computer, it will run faster [21]. By 2025, the world's top supercomputers will have processing power in the range of 10^18 FLOPS, or around a thousand times the power of the human brain.

While this explosion in artificial intelligence and computing power may offer many benefits for society, the uncontrolled replication of WBEs may in fact threaten our economy and way of life. Humans, like all other organisms, have survival instincts hardwired into their brains; it is possible that a simulation thereof may have the same characteristics. As noted earlier, WBEs will be capable of doing many of the tasks humans currently perform, and, driven by this instinct for survival, may enter into competition with us for those jobs as well as for our resources. Though they do not directly consume the same natural resources that we do, they may compete with us over technological and economic resources. Combined with the vast computational power and nearly unlimited numbers of WBEs, it may prove impossible for humans to control them. In a worst-case, albeit speculative, scenario, our governments may lose control of our computer networks, on which we rely to run our utilities and communication systems, store and disseminate information, and control traffic and transportation, among numerous other things. Even if we all wanted to, it's doubtful that we could shut down the world's computer networks. To quote Ray Kurzweil, a technology entrepreneur:

"If all computers stopped functioning, society would grind to a halt. First of all, electric power distribution would fail… There would be almost no functioning trucks, buses, railroads, subways, or airplanes. There would be no electronic communication… You wouldn't get your paycheck. You couldn't cash it if you did. You wouldn't be able to get your money out of your bank" [22].

Of course, these negative outcomes are by no means guaranteed. Indeed, many scientists think that it is best to speed up WBE research in order to harness the large benefits it would bring to our civilization. It must finally be noted that it is both impractical and undesirable to ban the development of WBE technology indefinitely. However, because of the potential for devastating consequences, it is extremely important for all of us to ensure that the process of developing WBEs is gradual and carefully planned. Unfortunately, this sort of careful planning is not how we are currently responding to the issue. There are currently no plans in place to deal with the economic impact of WBEs, or to ensure that WBEs will not display hostile tendencies. We can't afford to wait, as we don't have a clear idea of how long we have, or just how bad the consequences of delaying will be. We must have a plan of action in place now.

Thomas S. McCabe is a sophomore in Berkeley College at Yale University.

References:
1. Schweiger, Martin. Orbiter - A Free Space Flight Simulator. 29 Sept. 2006. Web. 8 Nov. 2009. <>.
2. Sandberg, Anders, and Nick Bostrom. Whole Brain Emulation: A Roadmap. Tech. no. 2008-3. Future of Humanity Institute, Oxford University, 29 Oct. 2008. Web. 8 Nov. 2009. <>.
3. Goertzel, Ben. "Human-level artificial general intelligence and the possibility of a technological singularity." Artificial Intelligence 171.18 (2007): 1161-1173. Print.
4. Fiala, John C. "Three-Dimensional Structure of Synapses in the Brain and on the Web." Synapse Web. Proc. of 2002 World Congress on Computational Intelligence, Honolulu, Hawaii. Laboratory of Synapse Structure and Function, University of Texas at Austin, 2002. Web. 8 Nov. 2009. <tools/sightings/2002_Intl_Joint_Conf_Neural_Networks_Fiala_Three-dimensional_structure.pdf>.
5. Kirbas, Cemil, and Francis Quek. "A review of vessel extraction techniques and algorithms." ACM Computing Surveys 36.2 (2006): 81-121. Print.
6. Kurzweil, Ray M. "The Law of Accelerating Returns." Kurzweil Technologies, 7 Mar. 2001. Web. 8 Nov. 2009. <art0134.html?printable=1>.
7. Geelan, Jeremy. "Moore's Law: 'We See No End in Sight,' Says Intel's Pat Gelsinger." Java Developer's Journal 1 May 2008. Sys-Con Media. Web. 8 Nov. 2009. <>.
8. Merkle, Ralph C. "Energy Limits to the Computational Power of the Human Brain." Foresight Update 6 (Aug. 1989). Institute for Molecular Manufacturing. Web. 8 Nov. 2009. <>.
9. Moravec, Hans. "When will computer hardware match the human brain?" Journal of Evolution and Technology 1 (1998). Institute for Ethics & Emerging Technologies, 1 Dec. 1997. Web. 8 Nov. 2009. <>.
10. Bostrom, Nick. "How long before superintelligence?" Int. Jour. of Future Studies 2 (1998). Future of Humanity Institute, Oxford University, 25 Oct. 1998. Web. 8 Nov. 2009. <>.
11. Dix, Alan. "The brain and the Web: A quick backup in case of accidents." Interfaces 65 (2005): 6-7. Lancaster University, 29 Aug. 2005. Web. 8 Nov. 2009. <>.
12. "June 2009 - TOP500 Supercomputing Sites." TOP500 Supercomputing Sites, June 2009. Web. 8 Nov. 2009. <>.
13. Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. New York: Viking Adult, 2005. Print.
14. Frye, James, Rajagopal Ananthanarayanan, and Dharmendra S. Modha. Towards Real-Time, Mouse-Scale Cortical Simulations. Tech. no. RJ10404 (A0702-001). IBM Research Division, 5 Feb. 2007. Web. 8 Nov. 2009. <rj10404.pdf>.
15. Djurfeldt, M., M. Lundqvist, C. Johansson, M. Rehn, O. Ekeberg, and A. Lansner. "Brain-scale simulation of the neocortex on the IBM Blue Gene/L supercomputer." IBM Journal of Research and Development 52.1/2 (2008): 31-41. Print.
16. Hanson, Robin. "Economics of the Singularity." IEEE Spectrum June 2008. Institute of Electrical and Electronics Engineers, June 2008. Web. 8 Nov. 2009. <>.
17. Hanson, Robin. "The Economics of Brain Emulations." Tomorrow's People: Challenges of Radical Life Extension and Enhancement. George Mason University. Web. 8 Nov. 2009. <>.
18. "Client statistics by OS." Folding@Home. Stanford University, 8 Nov. 2009. Web. 8 Nov. 2009. <>.
19. Landauer, R. "Irreversibility and Heat Generation in the Computing Process." IBM Journal of Research and Development 5 (1961): 183-191. International Business Machines. Web. 8 Nov. 2009. <ibmrd0503C.pdf>.
20. Smil, Vaclav. "Energy at the Crossroads." Proc. of Global Science Forum Conference on Scientific Challenges for Energy Research, Paris, France. Organisation for Economic Co-operation and Development, 17 May 2006. Web. 8 Nov. 2009. <>.
21. Yudkowsky, Eliezer S. "Recursive Self-Improvement and the World's Most Important Math Problem." Bay Area Future Salon. SAP Labs, Palo Alto, CA. 24 Feb. 2006. Lecture.
22. Kurzweil, Ray. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Viking, 1999. Print.
23. Rothblatt, Martine. "Legal Rights of Conscious Computers." Immortality Institute Conference. Atlanta, Georgia. 2002. Speech.
24.



Music Facilitating Speech: Melodic Intonation Therapy for Patients with Speech Deficits Maria Lisa Itzoe

Figure 1. Locations of the two primary language areas in the brain [6]. Broca’s area (anterior) is implicated in speech production; it is patients with damage to this region that have shown improved fluency after completing MIT.


Imagine knowing what you want to say but not being able to put your thoughts into words. Each conversation becomes a struggle, not only for yourself but also for those around you, who attempt to discern the broken fragments of speech that you manage to enunciate. This frustration is a daily experience for patients with Broca's aphasia, a speech production deficit usually resulting from brain damage to the anterior areas of the left hemisphere (particularly the inferior frontal gyrus (IFG), often referred to as "Broca's area;" see Figure 1). Although thoughts and verbal comprehension are relatively well-preserved, it has been suggested that Broca's patients struggle to select from competing alternatives within the lexical and semantic networks (i.e., on the levels of both words and meanings), which results in slow, monotonous speech [1]. For many years, limited knowledge of the source of the deficit hampered effective therapy for these patients. In recent decades, however, the application and incorporation of music into more traditional methods of speech therapy has become a key development in helping to improve Broca's aphasics' speech production, articulation, and rhythmic intonation.

Melodic Intonation Therapy: A Brief History
First proposed in 1973 by researchers Robert Sparks and Audrey Holland, Melodic Intonation Therapy (MIT) is a hierarchical treatment program founded upon the observation that patients


with production deficits in conversational or spontaneous language are able to achieve almost perfect fluency when asked to sing, rather than speak, the words of previously learned songs [2]. Unlike actual singing, MIT uses a limited range of notes, usually three or four whole notes, which is closer to the actual variation of melody within conversational speech. This type of melodic intonation varies its pattern based on the type of phrase being targeted; for example, a declarative sentence will have a different pattern of intonation than an interrogative sentence. Overall, the three elements of melodic line, rhythm, and points of stress help the patient to build up a musical representation of a phrase, perhaps allowing them to recruit undamaged neural areas for an alternative way of accessing, as well as producing, the words themselves [3]. Another key component of MIT is its emphasis on slow speed and precision throughout each of the exercises. By using a slow tempo for the intonation and then precisely isolating rhythm and stress, the therapy prevents the patients from becoming overwhelmed by the task, because it demands that they direct their attention to individual word fragments - the necessary building blocks for successful production of prosody (intonation) and speech. The particularly innovative aspect of MIT is its incorporation of physical movement into the patient's speech therapy. By having the patient slowly tap the beats of a sentence, MIT emphasizes the syllables that need stronger articulation while assigning

Figure 2. fMRI showing the chief language areas involved during lexical task of picture naming [7].






Phrase (with syllables indicated)       Specific tone assigned
Lunch-time                              A - B
Ham and Cheese                          A - B - A
I want a ham and cheese sandwich.       A - B - A - B - B -
Figure 3. An example of how MIT therapists might apply melody to a phrase; notice that as the level increases, the phrases grow longer and the tones become steadier, in order to transition the patient toward more normal speech prosody.
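The scheme in Figure 3 — each syllable of a phrase paired with one tone from a small two- or three-note range — can be modeled as simple data. The sketch below is purely illustrative of that pairing; it is not part of any clinical MIT protocol.

```python
# Illustrative data model of Figure 3: each phrase is a list of
# (syllable, tone) pairs drawn from a small note range, as in MIT.
phrases = [
    [("Lunch", "A"), ("time", "B")],
    [("Ham", "A"), ("and", "B"), ("Cheese", "A")],
]

def intone(phrase):
    """Render the melodic line the therapist would model for the patient."""
    return " ".join(f"{syllable}({tone})" for syllable, tone in phrase)

for p in phrases:
    print(intone(p))
# Lunch(A) time(B)
# Ham(A) and(B) Cheese(A)
```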

a musical note to each word in the phrase. This simultaneous use of the physical stressing of rhythm and the application of melody has been shown to aid the patient's ability to communicate with rhythmic fluency and articulation.

Keys to Success? Possible Reasons Why MIT Works
The treatment program consists of four basic levels, each progressively encouraging the patient's independent production while relying on the foundation of a distinct melody and rhythm. The guided sequence gradually grows to include longer phrases, decreasing the therapist's involvement and the patient's dependence on melodic intonation (see Figure 3). Before a patient graduates to the next level, s/he must correctly produce 90 percent of the phrases presented. Any struggle in articulation or any type of phonemic paraphasia results in "incorrect production," decreasing the patient's final score and increasing the length of time spent at that level. Because patients may exhibit weaknesses in different areas of production, which may cause them to spend more time on a specific section of MIT, the procedures within the various levels are meant to be modified to best aid the success of each individual. For example, a patient who struggles particularly with articulation may need more practice emphasizing accentuated beats or staccato-like tones at a slow tempo, so that s/he has the time needed to process the sound and form the production. This may cause him to spend more time focusing on the hand-tapping of beats in a melody (Level I) in order to recognize where stress should be placed before applying that rhythm and melody to a short phrase (Level II). However, if the patient struggles more with overall fluency once the phrase is articulated, s/he might benefit more from practiced melodic intoning of the whole phrase. Perhaps this repetition over time strengthens the neural representation of the phrase as the patient becomes more familiar with the rhythmic flow of the words.
The patient might then be able to apply that fluency in spoken context and facilitate his expression of the rhythmic association


between words in speech. In either situation, the leadership and guidance of the therapist or clinician, who sets an example which the patient then follows, is a critical aspect of Melodic Intonation Therapy. This is especially important at the initial levels.

Hierarchical Organization: A Step-by-Step Process for Treatment
The preliminary level of MIT requires the patient simply to practice discerning individual beats of a melody (within a small range of 3-4 notes). The therapist first hums the pattern to the patient while physically tapping the stressed beats, and the patient repeats the example with the therapist, who serves as reinforcement of the rhythm. It is perhaps this reinforcement and repetition that help strengthen the patient's connection between the melody (a primarily right-hemisphere function) and rhythm (in which the patient is expressing a deficit, possibly due to left-hemisphere damage). Level I seems to require bilateral activation by combining melody with rhythm, possibly working to exercise and strengthen undamaged left-hemispheric regions as they are required to parse the beats in the phrase. This preliminary foundation establishes the technique used in the subsequent stages of therapy. Level II consists of five separate steps that each retain the tapping of the rhythm, perhaps serving as a motor task that helps the patient recognize when certain beats of a phrase should be articulated. This continual physical stressing of the beats seems to be a crucial component of MIT, because hand-tapping may solidify motor memory while simultaneously creating a memory of rhythmic sound and words. Thus, as the patient taps the rhythm of the words with reinforcement from the clinician and sings the melody, s/he may be creating multimodal memories that will help him to later access the words during conversation.
During this segment of therapy, the phrase begins to be placed in a more conversational context, as the patient is required to repeat the intoned phrase in response to the therapist's question. When one of the short phrases has been taken through each of the above steps, the same process is followed for the other phrases in the set. For example, one set may contain phrases concerned with lunchtime, such as "time to eat" and "ham and cheese sandwich." This method helps to improve a patient's fluency and production of speech within a general domain.

Reproduced from [10]

Levels III and IV become increasingly difficult because of the greater time latency between the therapist's presentation of a question and the moment the patient is instructed to intone his response. Additionally, the patient is now challenged to choose the appropriate phrase with which to answer the question and to respond correctly while including the rhythmic and melodic patterns. Together, these two changes greatly decrease the aphasic's reliance on repetition by decreasing the role of the therapist, as the patient is required to be more independent in his speech production. The final level of Melodic Intonation Therapy, Level IV, introduces a technique known as Sprechgesang, in which the melodic pitches of previous intoning are transitioned to the pitches used in conversational speech. After the practice in repetition from the previous levels, the patient is expected, and generally able, to repeat the normally intoned phrase. At the completion of the Melodic Intonation Therapy process, the successful patient can fluidly produce the set of 12-20 phrases with which s/he worked at each level in response to questions asked by the therapist.

Reproduced from [11]

Implications for Music As Speech Therapy
Despite convincing neurological and clinical evidence, much debate still exists in the current literature over the true effectiveness of music therapy for speech production. All forms of therapy incorporating music depend on the patient's ability to understand rhythms and melodies and to access memories of familiar songs or hymns [4]. Where traditional speech therapies have previously failed, much success has so far been seen with Melodic Intonation Therapy. A possible explanation is that music activates numerous neural structures bilaterally, so melody and rhythm may actually reengage language processing areas or take advantage of the neural plasticity of other areas within the left hemisphere in non-fluent aphasics [5]. Perhaps MIT provides the key with which Broca's aphasics may unlock their own speech production.

Maria Lisa Itzoe is an undergraduate at Brown University.

Reproduced from [8] and [9]

References: 1. Yorkston KM, Beukelman DR. Communication efficiency of dysarthric speakers as measured by sentence intelligibility and speaking rate. J Speech Hear Disord. 1981;46:2296-2301. 2. Sparks RW, Holland AL. Method: melodic intonation therapy for aphasia. J Speech Hear Disord. 1976;41:287-297. 3. Belin P, Van Eeckhout P, Zilbovicius M, et al. Recovery from nonfluent aphasia after melodic intonation therapy: A PET study. Neurology. 1996;47(6):1504-1511. 4. Kim M, Tomaino C. Protocol evaluation for effective music therapy for persons with nonfluent aphasia. Topics in Stroke Rehabilitation. 2008;15(6):555-569.


5. Naeser MA, Helm-Estabrooks N. CT scan localization and response to melodic intonation therapy in nonfluent aphasia cases. Cortex. 1985;21(2):203-223. 6. 7. image2.jpg 8. 9. 10. 11.



Evolving Interaction in Robots Andrew Sheng


The web of life forms a symphony of interaction and communication. Consider all the interactions a creature will perform and undergo throughout its life. A simple glance around one's surroundings yields many examples of such activities – a human speaks a string of words, a bee performs a complicated dance, a dog urinates onto a hapless bystander's leg. All of these actions carry an intention to convey some sort of message to others – EXAM TERRIFYING, FOOD THERE, MY TERRITORY. Entire academic fields, such as sociology, dedicate themselves to analyzing these behaviors. The means by which such interactions may have arisen during the emergence and evolution of life remain one of the mysteries of the modern world. This paucity of knowledge arises from the problem that experiments regarding emergent behavior are extraordinarily difficult to conduct, due to the lack of simple sample organisms. Case studies with individual microorganisms would be difficult to analyze and control. On the other hand, higher organisms tend to have generational cycles on the magnitude of months, years, or decades – experiments with thousands of generations would require an extremely dedicated multigenerational research team. Furthermore, the distant past offers little helpful information

– social structures are not easily fossilized. Given the lack of a suitable sample population, one faces the question: Is the quest for the origins of social interaction a limited one restricted to the analysis of the behavior of contemporary lifeforms? Does the lack of a suitable medium render the simulation of communication evolution an impossibility? Unfortunately, the natural world does not seem to yield any optimal organisms. There exist few creatures quick to reproduce, simple to observe, easy to mutate, and, perhaps most importantly, free of preexisting behaviors hardcoded by genetics (1). But, on a brighter note, when one considers models from fields other than biology, there does indeed exist such an “organism” - a software algorithm! An algorithm is merely a set of easily replicable instructions; it also lacks the evolutionary baggage of billions of years of history. When placed into the body of a robot, the algorithm may play out the behavior coded by its instructions. Although the replication of digital information is trivial (much to the despair of anti-piracy groups), the simulation of the process of biological evolution is a much more difficult matter. Unlike a cell, which can often survive a malformed protein or damaged DNA, a computer program can be easily

Reproduced from [5]



CMU destroyed by the introduction of even a single error. As a result, works controlling them) were found to have developed the the mutable gene must not be the program itself, but instead ability to utilize their cameras and lights so that they could some outside information that is interpreted by the program. inform their companions of food and poison. Some populaOne implementation of this concept is a model called tions of robots evolved the tactic of flashing their lights when a “neural network.” In a neural network, a program makes near food (as an invitation), while others leaned toward the use of an amount of premade data as a blueprint (or gene) tactic of flashing lights when near poison (as a warning). In for the construction of a field other words, a group of robots of simulated neurons. Each had autonomously developed More than five hundred simulated neuron is governed the ability to communicate – a by two factors: connections to development made even more “generations” of collection other neurons and some rudicurious by the fact that earlier, and mutation later, the robots mentary calculative ability (such the robots had been largely unas determining whether the sum aware of the existence of their (more specifically, the neural of a set of numbers is above or fellows (1). Thus was born the networks controlling them) below some given threshold). The behavior of cooperation in order network is “run” by inputting to take advantage of the strength were found to have developed data into some neurons. These of numbers – a single robot workthe ability to utilize their neurons proceed to process ing alone cannot locate “food” the data and then output it to as fast as a group of machines cameras and lights so that they their neighbors, each “layer” working together as a team. could inform their companions of neurons making use of their However, the end result internal instructions to interpret is not that of a robotic utopia. of food and poison. the data. 
To gather output, data is extracted from several select neurons. Since the topology and properties of the network are entirely determined by its "genes," which exist independently of the host program, a neural network may be transmogrified via modifications to its blueprint. This mechanism allows the simulation of natural selection (and, by extension, evolution) via the random and/or selective mutation and breeding of neural networks (2). In sum, a robot serves as a simulation of a primitive organism. One such population of organisms resides in Switzerland, where flashing blue robots fight over glowing red floor tiles (1,3).

In an attempt to simulate the emergence of communication, researchers at the Ecole Polytechnique Federale de Lausanne constructed a field with two small zones – one marked as "food," the other as "poison" – identical unless observed from a close distance. A population of small robots equipped with cameras and lights formed the denizens of this field. Each robot was guided by two factors: internal neural networks and an overarching rule of "food good, poison bad" (1).

The research began with the robots randomly flashing their lights – none of them could "understand" the speech of any other, only the local presence of food and poison. They were then left to wander, eventually discovering food by sheer trial and error. The simulation would finish after a preset amount of time. Afterwards, the researchers ranked the robots by their success at collecting food and avoiding poison. The "genes" of the robots best at collecting food were then mixed together (in an approximation of mating) and randomly mutated. The new genomes were then placed back into robots in order to continue the simulation further.

More than five hundred "generations" of collection and mutation later, the robots (more specifically, the neural networks guiding them) had evolved effective foraging behaviors, including signaling conventions such as a "lights-mean-food" protocol.

One aspect of the playing field was that the food "zone" was too small to support the entire robotic population; the robots were required to push others away in order to acquire points toward their own placement in the next generation. As a result, the robots did not form a very harmonious society. Instead, some robots would send misleading messages in a "selfish" attempt to lure others away from the food. For example, in a population of robots using the "lights-mean-food" protocol, rogues would develop the tendency to cast signals over barren ground or poison in an attempt to increase their own chances of a free food zone. The mutation-based appearance of such rogues would often destroy cooperative groups of robots, as "survivors" would generally evolve to become less inclined to trust the signals of other robots (1).

A later study, repeated under similar circumstances with similar robots, found another deceptive behavior: withholding useful information from others. In many test cases, most robots would adopt the strategy of not using their lights while collecting food, preventing others from noticing anything special about that particular patch of land. However, the researchers found that in no case did the robots completely cease all use of their lights; even the most xenophobic machines would make some use of light, perhaps because the marginal reward for "take-but-don't-give" is lessened when all robots refuse to share useful information (3). Therefore, even though the robots took to selfishly telling each other lies of omission, the robotic society did not completely degenerate into a purely competitive environment.

The study is admittedly simplistic. After all, human societies do not quickly degenerate into anarchy upon the appearance of criminals and con artists – there exist mechanisms to punish humans who scream "that's food" while pointing at cyanide. Furthermore, it is possible that the experiment unfairly promoted the benefits of pure competition over cooperation; the robots might discover a more cooperative strategy in a different environment (4).

The Lausanne study reveals some interesting insights. Social behaviors commonly associated with living organisms are not restricted to life. Instead, it is likely that many behaviors are simply evolutionary responses to environmental pressures: a given behavior could have begun as a random yet beneficial trait, one that improved an organism's fitness enough to be passed on to succeeding generations. In a frontier world, cooperation would help one's companions gather resources, while in a civilized place, deception could allow one to gain at the expense of others (1). The fact that blocks of silicon and metal can spontaneously develop the ability to cooperate and cheat raises the very human question of how many of one's actions are consciously generated, and how many are due solely to evolutionary psychology.

The emergence of these robotic behaviors suggests that early societies (of microorganisms, that is, probably not humans) could have been highly dynamic, rapidly shifting between cooperative and competitive behaviors depending on which was more useful in a given situation, before settling at some sort of equilibrium (4). A resident of the primordial ooze would probably have been in a similar situation to the robots: it would have started with a simple predefined set of behaviors, and anything else would have developed later. The results of the study could also provide guidance to those seeking to develop complex systems composed of many independent actors – whether those actors are robots or something else.
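The select–mix–mutate cycle described above amounts to a simple genetic algorithm over neural-network weight vectors. The sketch below is purely illustrative, not the Lausanne group's code: the genome length, population size, mutation scale, and stand-in fitness function are all invented for the example.

```python
import random

# Illustrative parameters; the real experiments used their own settings.
GENOME_LEN = 24        # neural-network weights per robot (its "genes")
POP_SIZE = 20          # robots per generation
MUTATION_STDDEV = 0.2  # scale of random weight perturbation

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def mutate(genome):
    # Random mutation: perturb each weight slightly.
    return [w + random.gauss(0.0, MUTATION_STDDEV) for w in genome]

def crossover(a, b):
    # An approximation of mating: mix two parent genomes gene by gene.
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(fitness, generations):
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(generations):
        # Rank robots by success at collecting food and avoiding poison.
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:POP_SIZE // 4]  # only the best quarter breed
        population = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POP_SIZE)
        ]
    return max(population, key=fitness)

# Stand-in fitness function: in the actual experiments the score would
# come from running each genome's network on a robot in the arena; here
# we simply reward genomes whose weights are near a fixed target.
TARGET = [0.5] * GENOME_LEN

def toy_fitness(genome):
    return -sum((w - t) ** 2 for w, t in zip(genome, TARGET))

best = evolve(toy_fitness, generations=100)
```

In the real setup, the fitness call would be a full arena run returning the robot's food-minus-poison score, and the evolved weights would then be loaded back into physical robots for the next round.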
The sheer variety of behaviors the robots developed during their evolution indicates that seemingly simple, decentralized systems may give rise to very complex behavior. Whether this complexity is desirable probably depends on the situation at hand. The Lausanne study is just one case in which robotics is being put to use in an unconventional setting to advance our understanding of how interactive behavior develops. Although significant differences remain between organisms simulated in silicon and organisms in flesh, it is conceivable that such experiments could one day guide humanity to an understanding of the composition of that symphony of interaction and communication, the web of life.

Andrew Sheng is an undergraduate at CMU.

References
1. Floreano D, Mitri S, Magnenat S, Keller L. Evolutionary conditions for the emergence of communication in robots. Curr Biol. 2007;17:514-519.
2. Nolfi S, Parisi D. Evolution of artificial neural networks. In: Arbib MA, editor. Handbook of Brain Theory and Neural Networks. 2nd ed. Cambridge, MA: MIT Press; 2002. p. 418-421.
3. Mitri S, Floreano D, Keller L. The evolution of information suppression in communicating robots with conflicting interests. PNAS. 2009;106:15786-15790.
4. Surfdaddy Orca. Darwin's Robots [document on the Internet]. h+ Magazine; 2009 [cited 2009 November 5]. Available from: darwins-robots

THE TRIPLE HELIX Spring 2010

© 2010, The Triple Helix, Inc. All rights reserved.

ACKNOWLEDGMENTS
The Triple Helix at the University of Chicago would sincerely like to thank the following groups and individuals for their generous and continued support:
University of Chicago Annual Allocations
Student Government Finance Committee
Bill Michel, Assistant Vice President for Student Life and Associate Dean of the College
Arthur Lundberg, Student Activities Resource Coordinator
The Biological Sciences Division
The Physical Sciences Division
The Social Sciences Division
Professor Daniel Bennett
Professor Nancy Cox
Professor Nathan Ellis
Professor Eugene Chang
Professor Reuben Keller
Professor Charles Wheelan
Professor Allen Sanderson
Professor Richard Hudson
Dr. William B. Dobyns

If you are interested in contributing your support to The Triple Helix's mission, whether financially or otherwise, please feel free to visit our website. © 2010 The Triple Helix, Inc. All rights reserved. The Triple Helix at the University of Chicago is an independent chapter of The Triple Helix, Inc., an educational 501(c)3 non-profit corporation. The Triple Helix at the University of Chicago is published once per Autumn and Spring Quarter and is available free of charge. Its sponsors, advisors, and the University of Chicago are not responsible for its contents. The views expressed in this journal are solely those of the respective authors.


Business and Marketing
Interface with corporate and academic sponsors, negotiate advertising and cross-promotion deals, and help The Triple Helix expand its horizons across the world!

Leadership: Organize, motivate, and work with staff on four continents to build interchapter alliances and hold international conferences, events and symposia. More than a club.

Innovation: Have a great idea? Something different and groundbreaking? Tell us. With a talented team and a work ethic that values corporate efficiency, anyone can bring big ideas to life within the TTH meritocracy.

Literary and Production
Lend your voice and offer your analysis of today's most pressing issues in science, society, and law. Work with an international community of writers and be published internationally.

A Global Network: Interact with high-achieving students at top universities across the world – engage diverse points of view in intellectual discussion and debate.

Science Policy
Bring your creativity to our newest division, reaching out to students and the community at large with events, workshops, and initiatives that confront today's hardest and most fascinating questions about science in society.

For more information and to apply, visit our website. Come join us.


Science in Society Review - Spring 2010  
