DUJS 09W


Note from the Editorial Board

Among the scientific community, the consensus is clear: the actions of man have a real and pressing impact on the Earth's climate and the environment as a whole. Today much of the scientific discussion has shifted from disputing the reality of climate change to the investigation of solutions to its consequences. Curbing the progression of global climate change and protecting the planet has emerged as a defining challenge of our generation, and will most likely continue to shape science and policy decisions for years to come.

In this issue of the DUJS, we cover climate change and the environment from several angles. The articles range from analysis of the impact to theoretical solutions to environmental policy, both at Dartmouth and on a larger scale. Global climate is such a fundamental component of ecology that scientists have yet to realize many of its more subtle repercussions. For example, Sharat Raju '10 investigates the effects of pesticides in the pathology of Parkinson's disease and Yifei Wang '12 explores whether ocean acidification may be deafening whale populations. Then Shu Pang '12 investigates the connection between climate change and an increase in asthma prevalence.

Next we look to the scientific remedies, as Marietta Smith '12 discusses the contributions of Thayer researchers who have developed an efficient synthesis of cellulosic biofuels. In addition, Jingna Zhao '12 explores the use of nanotechnology for pollution control. The environmental coverage continues with discussion on environmental policy that could lead to the implementation of these scientific remedies. Sunny Zhang '10 speaks with Dartmouth professor and director of the Institute of Arctic Studies, Ross Virginia, while Laura Calvo '11 analyzes the effectiveness of Dartmouth's own sustainability efforts.

Additionally, we celebrate the 2008 Nobel Prize laureates in science. First, Peter Zhao '10 discusses the historic work that led to the discoveries of the human papillomavirus and HIV. Next, Hannah Payne '11 recounts the innovative ideas behind green fluorescent protein fusions and their applications. Then Hee-Sung Yang '12 explores the physics of the antagonism between matter and anti-matter.

Finally, the DUJS continues its coverage of science in the Dartmouth community, with Diana Lim's '11 article on the new DHMC simulation center, Victoria Yu's '12 discussion of science funding during the economic recession, and an update on recent scientific papers published by Dartmouth faculty.

We hope you enjoy this issue of the DUJS, and take the matter of environmental conservation to heart as a problem that shapes not only the lives of humans, but also the overall health of all life on Earth.

The Dartmouth Undergraduate Journal of Science aims to increase scientific awareness within the Dartmouth community by providing an interdisciplinary forum for sharing undergraduate research and enriching scientific knowledge.

EDITORIAL BOARD
President: Shreoshi Majumdar '10
Editor in Chief: Colby Chiang '10
Managing Editor: Sean Currey '11
Design Managing Editor: Peter Zhao '10
Asst. Managing Editor: Shu Pang '12
Asst. Managing Editor: Jay Dalton '12
Layout Editor: Alex Rivadeneira '10
Online Content Editor: Laura Calvo '11
Public Relations Officer: Edward Chien '09

DESIGN STAFF
Diana Lim '11, Jingna Zhao '11, Jocelyn Drexinger '12

STAFF WRITERS
Brian Almadi '11, Jonathan Anker '11, Elizabeth Asher '09, Alexandra Boye-Doe '10, Sarah Carden '10, Daniel Choi '11, Shelley Maithel '11, Hannah Payne '11, Sharat Raju '10, Marietta Smith '12, Hee-Sung Yang '12, Victoria Yu '12, Yifei Wang '12, Aviel Worrede-Mahdi '12, Muhammad Zain-ul-Abideen '12, Sunny Zhang '10

Faculty Advisors
Alex Barnett - Mathematics
Ursula Gibson - Engineering
Marcelo Gleiser - Physics/Astronomy
Gordon Gribble - Chemistry
Carey Heckman - Philosophy
Richard Kremer - History
Roger Sloboda - Biology
Leslie Sonder - Earth Sciences
Megan Steven - Psychology

Special Thanks
Dean of Faculty, Associate Dean of Sciences, Thayer School of Engineering, Provost's Office, Whitman Publications, Private Donations, The Hewlett Presidential Venture Fund, Women in Science Project

DUJS@Dartmouth.EDU
Dartmouth College
Hinman Box 6225
Hanover, NH 03755
(603) 646-9894
dujs.dartmouth.edu
Copyright © 2009 The Trustees of Dartmouth College

Winter 2009



In this Issue...

DUJS Science News
Aviel Worrede-Mahdi '12, Shu Pang '12, Victoria Yu '12 ... 4

Science in the Economic Recession
Victoria Yu '12 ... 6
As the shadow of the recession descends on the nation, how will science funding at Dartmouth and other institutions be affected?

DHMC: Simulation for Success
Diana Lim '11 ... 9

Antibiotic Resistance of Tuberculosis
Jay Dalton '12 ... 11
The unique characteristics of tuberculosis give it enormous potential for developing resistance to even the strongest antibiotics. It is one of the biggest health threats of this generation and the discovery of new treatment methods will be essential in the ongoing fight against this disease.

Nobel Prize Series: Medicine, Chemistry & Physics
Peter Zhao '10, Hannah Payne '11, Hee-Sung Yang '12 ... 14

Interview with Ross Virginia
Sunny Zhang '10 ... 20

Liquid Gold: Good to the Last Drop
Elizabeth Asher '09 ... 24

Visit us online at dujs.dartmouth.edu


Thayer School of Engineering & Biofuels
Marietta Smith '12 ... 26

Sustainability at Dartmouth
Laura Calvo '11 ... 28

Nanotechnology & Pollution
Jingna Zhao '12 ... 31

A Cacophony in the Deep Blue Sea
Yifei Wang '12 ... 34
Increasing acidity in oceans may be amplifying sounds through the water. Surprising as it may seem, the amplified noise may be damaging the hearing of whales.

Pesticides & Parkinson's Disease
Sharat Raju '10 ... 36

Global Climate Change & Asthma
Shu Pang '12 ... 38
While the underlying mechanisms behind asthma are complex, it appears that several consequences of climate change may be contributing to the disease. Altered vegetative growth, ground ozone levels, and even increases in wildfires and storms may be involved in asthma's prevalence.

Research
Acrylamide Formation in Potatoes
Katie Cheng '10, Boer Deng '10, Emma Nairn '10, Tyler Rosche '10, Stephanie Siegmund '10 ... 42


News

DUJS Science News

See dujs.dartmouth.edu for more information

Compiled by Aviel Worrede-Mahdi '12, Shu Pang '12 & Victoria Yu '12

Biology

Dartmouth researchers discover a protein methylation pathway in Chlamydomonas flagella

Cobalamin-independent methionine synthase (MetE), a flagellar protein in Chlamydomonas, has been identified as a key component of the assembly and/or disassembly of the cellular flagellum, as reported by Dartmouth researchers Roger Sloboda, Megan Ulland, and Mark Schneider in Molecular Biology of the Cell in October. Recent research shows that MetE also functions in the flagella as a catalyst of the conversion of homocysteine into methionine. Methionine, when converted into S-adenosyl methionine (SAM), is a component of the flagellar proteome. Because SAM is the methyl donor for methylating proteins, MetE is directly related to the production of these proteins. Low amounts of MetE were found in full-length flagella, increased amounts in regenerating flagella, and the highest amounts in resorbing flagella, suggesting a link between flagellar protein methylation and the cell cycle. For the first time, protein methylation at arginine residues has been shown to be concurrent with flagellar resorption, proving that it is important for more than just gene transcription. It "may be a necessary step in the disassembly of axonemal structures or required to promote the association of disassembled axonemal proteins," the team stated in their article.

The coinciding of protein methylation with flagellar resorption, a step that takes place before cellular division, is the gateway to further studies on the linkage between flagellar protein methylation and cell cycle progression. (1)

Link found between Drosophila CheB gene expression and Tay-Sachs disease

Recently, the protein CheB42a in Drosophila was found to be a regulator of progression into the late stages of male courtship. This finding has been the platform for studies involving other CheB genes and their connection to the function of the GM2-Activator Protein (GM2-AP) in humans. Claudio Pikielny of Dartmouth Medical School has concluded that CheB's function in pheromone response might include biochemical mechanisms important to lipid metabolism in human neurons. Mutated versions of CheB42a in Drosophila produced results suggesting that it functions in the gustatory perception of female cuticular hydrocarbon pheromones, the team reported in the Journal of Biological Chemistry in October. Because of this, genes in the CheB series were investigated to pinpoint the underlying mechanism. DNA database searches revealed sequence similarities between CheBs, CheBrs, and the human GM2-AP, a member of the myeloid differentiation protein-like (ML) superfamily. These sequence similarities suggest that all three bind similar chemical ligands. This, together with the fact that CheB is a lipid-binding

protein, hints that CheB function in the gustatory perception of the lipid-like pheromones of Drosophila is related to critical steps of lipid metabolism. Loss of human GM2-AP in Tay-Sachs disease results in neurodegeneration by inhibiting GM2 ganglioside degradation. Hence, it is suggested that CheB's function in pheromone response might involve biochemical mechanisms critical for lipid metabolism in human neurons. Research is continuing in Pikielny's lab, as more data about CheB function and the detection of lipid-like pheromones in Drosophila might elucidate new aspects of human lipid metabolism. (2)

Physiology & Medicine

The hazards of smoking taken to a new level

The dangers of firsthand and secondhand smoke are well known to the public. However, a recent study published in Pediatrics, conducted in part by DMS professor Susanne Tanski, has brought attention to another deleterious consequence of smoking: “thirdhand” smoke. Thirdhand smoke is defined as the residual toxins that remain in a room or on the body of a smoker long after initial exposure to tobacco smoke. These toxins include compounds such as hydrogen cyanide, toluene, carbon monoxide, arsenic, lead, butane, and even radioactive polonium-210, and many of them are carcinogenic. Thirdhand smoke is particularly harmful to children, who are more vulnerable to even small amounts of its contaminants. This is particularly true of infants whose propensity for physical contact with objects increases the number of instances in which they will ingest or inhale thirdhand smoke. The article further notes that “cognitive deficits,”



such as lower reading scores, have in previous studies been associated with exposure to toxins of thirdhand smoke. The study, designed as a survey of 1510 households, was conducted by the American Academy of Pediatrics. The goal was to observe the possible relationship between knowledge of thirdhand smoke’s effects and the household bans on smoking. Results suggested that those who are more informed about the vices of thirdhand smoke may be more inclined to ban smoking from their homes. However, the incorporation of thirdhand smoke in antitobacco campaigns is yet to be seen. (3)

New protein function reveals cell division mechanisms

A new function was discovered for the Drosophila protein NOD that sheds light on the intricate mechanics of cell division, Dartmouth College chemistry professor F. Jon Kull reported with his research team early this month in Cell. When chromosomes do not segregate correctly in cell division, complications can arise, as in the case of cancerous cells. The protein NOD is distantly related to motor proteins that drive cellular activities like transport and cell division. Though NOD itself lacks the capacity for movement along microtubules (MTs), it stimulates microtubule polymerization, highlighting its importance in chromosome segregation. Found in fruit flies, the NOD protein will help determine how related proteins in humans work. With their recent findings, the researchers were able to propose an in vivo model for NOD function. "This work describes a novel mode for kinesin function, in which NOD does not walk, but rather alternates between grabbing on to and letting go of the end of the growing filament, thereby tracking the end as it grows. The diversity of function of these proteins is remarkable," said Kull in the press release. The research is at the frontier of providing a better understanding of the mechanics of cell division, one of the key components of life. (4)

Computer Science

Algorithm developed to approximate polygonal curves

Dartmouth computer science professor Scot Drysdale and his research team published an algorithm for polygonal curve approximation, built on a graph of all possible arcs, in Computational Geometry last March. The algorithm's components include concepts of circular ray shooting, tolerance boundaries, and the Voronoi diagram. In the paper, Drysdale outlines the process leading up to the application of the algorithm. A bisector ray is intersected with an intersection of two wedges, regions of the centers of the disks, and a Voronoi region. The goal behind this is to see if the two intersections overlap. A gate is included at every vertex to avoid overshooting the bend and the uncertainty associated with whether the curve will stay close to designated points in regions with sharp corners. Biarcs, segments consisting of pairs of circular arcs, became important when the team realized that they could make calculating the second arc symmetric to calculating the first if they reversed the direction of the second arc and its tangent. Biarcs respecting the tangent directions at the original points are the means by which the algorithm interpolates between a newly chosen subsequence of input points. In the future, Drysdale and his colleagues hope to find algorithms with fewer restrictions. This would enable the study of biarcs or arcs that end at different points than they began. (5)
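To make the arc-fitting primitive concrete, here is a minimal sketch of the basic step that such curve-approximation methods build on: fitting a circular arc that leaves one sample point along a prescribed tangent direction and passes through the next point. This is not Drysdale's published algorithm (which adds Voronoi-based tolerance checks and biarc chaining); the function name and the two-point example are illustrative assumptions only.

```python
import math

def fit_arc(p0, t0, p1):
    """Circle through p0 and p1 whose tangent at p0 is t0.

    p0, p1: (x, y) points; t0: unit tangent direction at p0.
    Returns (center, radius), or None if p1 lies on the tangent line
    through p0, in which case a straight segment is the natural fit.
    The center lies on the normal to t0 at p0, equidistant from both points.
    """
    n0 = (-t0[1], t0[0])                    # left-hand normal to the tangent
    dx, dy = p0[0] - p1[0], p0[1] - p1[1]   # chord vector from p1 to p0
    denom = 2.0 * (dx * n0[0] + dy * n0[1])
    if abs(denom) < 1e-12:                  # degenerate: points are collinear with t0
        return None
    s = -(dx * dx + dy * dy) / denom        # signed distance from p0 to the center
    center = (p0[0] + s * n0[0], p0[1] + s * n0[1])
    return center, abs(s)

# Toy usage: start at the origin heading along +x and pass through (1, 1).
# A biarc scheme would chain two such arcs, choosing the join point so that
# the tangent direction is continuous where the arcs meet.
print(fit_arc((0.0, 0.0), (1.0, 0.0), (1.0, 1.0)))   # center (0, 1), radius 1
```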

References

1. M. J. Schneider, M. Ulland, R. D. Sloboda, Mol. Biol. Cell 19, 4319-4327 (2008).
2. E. Starostina, A. Xu, H. Lin, C. W. Pikielny, J. Biol. Chem. 284, 585-594 (2009).
3. J. P. Winickoff et al., Pediatrics 123, e74-e79 (2008).
4. J. C. Cochran et al., Cell 136, 110-122 (2009).
5. R. L. S. Drysdale, G. Rote, A. Sturm, Comp. Geom. 41, 31-47 (2008).

Thumbnail Images: A. thaliana Cobalamin-Independent Methionine Synthase X-ray structure courtesy of RCSB/Protein Data Bank. NOD-Microtubule-Chromosome illustration courtesy of Jared Cochran.


Science Policy

Science in the Recession

The Interface of Science and Economics VICTORIA YU ‘12

As the second quarter in the 2009 fiscal year comes to a close, the United States finds itself in a state of economic recession (1). Federal authorities, corporate heads, and the average Joe alike have felt the impact of the new seven-year, 48-year, even all-time lows across the economic and financial sectors (2). Somewhere among the many affected sit Dartmouth College and its scientific community. And while members of this community are well aware that they will soon encounter the extra constraints brought about by this recession, specifics about the looming limitations are still largely unknown.

An Overview: The Financial Crisis Thus Far

In a formal statement issued on December 11, 2008, the Business Cycle Dating Committee of the National Bureau of Economic Research (NBER) announced that the United States is currently in a recession that has its roots in December 2007 (1). It was not until mid-September 2008, however, that the financial crisis was brought to the attention of an alarmed public, when the mortgage crisis, trouble among major banks, and word of a $700 billion federal bailout made headline news. Months before the NBER's December announcement, the stock market also began a ten-day fall at the start of October 2008. The drop included the "Black Week" (Oct. 6 to Oct. 10), during which the Dow Jones Industrial Average alone fell 1,874 points (3). Most recently, floundering American automobile manufacturers GM and Chrysler have successfully sought the help of the federal government, which has allocated over half of its original $700 billion bailout to the automakers (4). And all the while, the value of the U.S. dollar has been falling.

In Context: The Dartmouth Endowment

Elsewhere in the Realm of Higher Education

As at many other colleges and universities, the recession has most directly impacted Dartmouth through its crippling effects on the endowment. Dartmouth's endowment – an accumulation of donations that are invested in the market and are thus vulnerable to a volatile economy – fell 6 percent ($220 million) in the first quarter of the 2009 fiscal year alone (5). Within the Dartmouth scientific community, the undergraduate departments are the most affected by the endowment's decline. With 36 percent of the College's budget based in the endowment, the undergraduate science departments are in the early but crucial stages of adjusting to the recession's impact (5). "Everything is so uncertain," said David Glueck, the chemistry department chair. "The College is rightfully planning ahead for the worst, because we're probably not at the bottom yet" (6). Glueck also explained that, for the time being, students may be minimally affected. "[The College has] consistently said that they won't touch financial aid. They've also asked [professors], for example, not to raise course fees," he said. This, however, leaves the science departments with no other streams of revenue for teaching classes, aside from small and heavily earmarked departmental endowments, Glueck said. Nevertheless, Glueck predicted that the science departments at the College will be the least impacted departments, as others with many visiting and adjunct faculty will be required to make more cuts. The Thayer School of Engineering and the Dartmouth Medical School (DMS) will also feel some financial limitations from the fallen endowment, though to a far lesser degree, as only 3.1 and 12.3 percent of the Dartmouth endowment are allocated to the respective professional schools (7).
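As a back-of-the-envelope check on the figures quoted above (the total below is implied by the article's own percentages rather than reported separately), a 6 percent quarterly loss equal to $220 million puts the endowment's starting value at roughly $3.7 billion:

```python
# Rough arithmetic implied by the figures cited in the article.
quarterly_loss = 220e6   # dollars lost in the first quarter of FY2009
loss_fraction = 0.06     # the reported 6 percent decline

implied_endowment = quarterly_loss / loss_fraction
print(f"Implied endowment before the drop: ${implied_endowment / 1e9:.2f} billion")
# -> Implied endowment before the drop: $3.67 billion
```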

“[The recession] is very much a problem for every institution of higher education, every medical school is facing the same pressures,” said Michael Wagner, chief financial officer at DMS (8). Indeed, other top universities have also reported significant hits to their endowments in the first quarter of FY2009 (Yale, 13.4 percent (9); Princeton, 11 percent (10); Harvard, 22 percent (11)). And letters from college presidents across the nation have announced similar initial plans: hiring freezes, postponing building projects, beginning long-term cutback goals. The extent of the damage and the emphasis on specific precautionary measures, however, will ultimately depend on which streams of revenue (tuition, endowment, etc.) various institutions rely upon most and the recession’s specific effects on these sources, noted Wagner (8). When it comes time to decide what cuts to make, however, science’s importance to the future should be taken into consideration, said Mary Pavone, director of Dartmouth’s Women in Science Project (WISP). “To do anything that would hurt the development of scientific talent [or] the development of a strong scientific workforce would be really detrimental to the US. … I think that the College … recognize[s] this, as do other universities,” she said (12).

The State of Research Funding

The bigger story in collegiate scientific communities – and the American scientific community at large – is the recession's impact on research funding. A substantial portion of the College's financial transactions lies in those related to sponsored research.



In 2007, the College received $173.989 million in sponsored research recovery, the second highest source of operating revenue behind tuition and fees (13). However, with the economic downturn, both federal and private sponsorship are expected to grow more focused and less generous. "No doubt the recession is going to cause problems [for external grants]. First, there is less tax revenue coming in, and second, the politicians are spending it somewhere else," Glueck commented on federal research sponsorship in the near future of the recession.

A look at the federal budget in recent years further reveals that this anticipated sluggishness in the growth of federal research funding is not a new trend. Funding from such federal bodies as the National Institutes of Health (NIH), for instance, has "flattened off" over the last few years, noted Wagner (9). In fact, in his budget proposal for FY2009, former President Bush allocated 0.5 percent less money to NIH than he did in FY2008. Moreover, statistics show that since NIH completed its five-year budget doubling in FY2005, funding growth for the NIH has been a meager 1 to 3 percent – growth that has been overpowered by high biomedical inflation rates of 3.5 percent during the last two fiscal years (14). And overall, general basic research funding has grown minimally since its sharp drop in 2006 (15).

This is only half of the story, however, as cuts from some agencies' budgets have supplied increased funding to other R&D efforts – a heightened specificity in funding allocation that is expected of shallower coffers. "We expect that [future growth in funding] could be focused growth on certain areas of healthcare research that the administration might be interested in," said Wagner (8).

The give and take of this "focused growth" is evident in former President Bush's FY2009 budget proposal. Aside from increasing funding for research into such diseases as HIV, the president has further increased funding to the Department of Defense (DoD), over half of which goes to research in universities. Also, there has been a proposed 19 percent increase in the Department of Energy's Office of Science and a 16 percent increase in the National Science

Image retrieved from http://en.wikipedia.org/wiki/File:Dartmouth_College_campus_2007-06-23_Dartmouth_Hall_02.JPG (Accessed 2 February 2009).

The financial crisis has affected many institutions, including Dartmouth. Research funding will likely be reexamined in the coming years.

Foundation's "Research and Related Activities" funds. These specific efforts come as a result of the president's American Competitiveness Initiative, a plan to bolster American scientific efforts and thereby help the U.S. remain a major player in the global arena. However, the plan also comes at the cost of cuts in overall funding to the Environmental Protection Agency (EPA), NIH, and Department of Agriculture (14).

Non-governmental sponsors, too, are feeling the pressures of the recession. The American Chemical Society's Petroleum Research Fund, for instance, has already released a statement to announce that funding for grants accepted this year may be delayed, said Glueck (6). Funding from private companies and foundations may also change, according to Wagner. "[N]ow going forward, companies are being hit by many pressures … and they're having to think about how to reallocate R&D dollars … Most of the money that [foundations] use to distribute for grants, it originated from a contribution from family and generates investment income. The extent that [the contribution] was invested in money in the stock market or investment vehicles that are experiencing [the consequences of the recession], they will have less to give," he explained (8).

Guarded Optimism

Though it is too early to tell exactly how the new budget will impact the Dartmouth scientific community (it's all "very fluid," said Glueck (6)), there is, at present, a sense of guarded optimism for the near future of science at Dartmouth, in the U.S., and globally. As Asia continues its explosive, competitive R&D expansion and President Obama shows promising support for scientific research, the stage is set for scientific advancement that can push through the recession. Back at the College, construction of the new Class of 1978 Life Sciences Complex has begun, and neurobiology will soon be upgraded to department status (16). Moreover, the College appears determined to continue its legacy of dedication to the undergraduate experience. "The Dean of the Faculty (Professor Carol Folt) and the faculty themselves are deeply committed to student research. In addition, this has been a top priority of the President's throughout his time at Dartmouth," said Margaret Funnell (17). At the Thayer School of Engineering, Dean of Graduate Studies Brian Pogue seemed relatively optimistic that Thayer can keep negative change to a minimum. He cited the


DoD, the military, and the Department of Homeland Security as major contributors to research funding – all departments in which the president has consistently increased funding. “[The big dotcom bust in the early 90s], certainly in my experience, had a bigger effect on the engineering world because it was funding for high tech things that was decreasing. … This financial crisis seems to be more localized around money that’s invested in stocks or housing or financial issues, so it seems to me that it’s affecting engineering schools less,” Pogue added (18). Wagner seemed to sum up the atmosphere within the Dartmouth scientific community: “I think people are always thinking about new ideas. I think they recognize the reality [of the recession]. … [The right approach] is to try to focus on what we’re great at, which is educating great students and doing great research and having a really important impact … nationally and, in some cases, internationally. That’s what we have to keep an eye on: the things we’re doing so well at, figure out where we want to invest money for new initiatives, and just keep executing” (8). References 1. Business Cycle Dating Committee of the national Bureau of Economic Research, Determination of the December 2007 Peak in Economic Activity (2008). Available at http://www.nber.org/cycles/dec2008.html (29 December 2008). 2. CNNMoney.com Staff, The Crisis: A Timeline (2008). Available at http://money.cnn.com/ galleries/2008/news/0809/gallery.week_that_ broke_wall_street/ ( 29 December 2008). 3. Yahoo!Finance, Dow Jones Industrial Average (2008). Available at http://finance. yahoo.com/echarts?s=^DJI#chart6:symbol=^dji; range=20080929,20081029;indicator=volume;c harttype=line;crosshair=on;ohlcvalues=0;logsca le=on;source=undefined (29 December 2008). 4. C. Isidore, Bush announces auto rescue (CNN, 2008). Available at http://money.cnn. com/2008/12/19/news/companies/auto_crisis/ index.htm (29 December 2008) 5. T. Lahlou, Endowment plunges $220 mil. in 3 months (The Dartmouth, 2008). Available at http://thedartmouth.com/2008/11/10/news/ endowment/ (29 December 2009). 6. D. Glueck, Personal interview, 9 December 2008. 7. The Dartmouth College Endowment (pamphlet, 2008). Available at http:// www.dartmouth.edu/~control/endow/ ppp/2007dcendowment.pdf (30 December 2008). 8. M. Wagner, Personal interview, 26 November 2008. 9. R. Levin, Budget Letter (Yale University, 8

2008). Available at http://opa.yale.edu/ president/message.aspx?id=84 (13 January 2009). 10. Staff, Tilghman letter on Princeton’s response to the economic downturn (letter from President Tilghman, 2009). Available at http://www.princeton.edu/main/news/archive/ S23/13/63G01/ (13 January 2009). 11. Endowment Declines 22 percent through October 31 (Harvard Magazine, 2008). Available at http://harvardmagazine.com/ breaking-news/endowment-declines-22through-october-31(13 January 2009). 12. M. Pavone, Personal Interview, 7 January 2009. 13. Appendix A, Dartmouth College Financial Highlights: Fiscal Years 1999-2007 (posted by the Board of Trustees at Dartmouth, 2008). Available at http://www.dartmouth.edu/ presidentsearch/position/leadership/appendix-a. html (30 December 2008). 14. J. F. Sargent et al., Federal Research and Development Funding: FY2009 (Congressional Research Service, Washington, DC, 2008). Available at http://www.ncseonline.org/nle/ crsreports/08-Sept/RL34448.pdf (30 December 2008) 15. Trends in Basic Research by Agency, FY 1975-2009 (American Association for the Advancement of Science, 2008). Available at http://www.aaas.org/spp/rd/trbas09p.pdf (30 December 2008). 16. Dartmouth College Office of Public Affairs, Dartmouth Board of Trustees, administration discuss College’s financial strategy (2008). Available at http://www.dartmouth.edu/~news/ releases/2008/11/08.html (30 December 2008). 17. M. Funnell, Personal interview, 14 January 2008. 18. B. Pogue, Personal interview, 1 December 2008.



Medicine

Simulation for Success

The Nature of Medical Training Diana Lim '11

In a society that regards the health of the patient with utmost importance, it is an odd fact that the most common mode of medical teaching for physicians is to learn through on-the-job training — creating an environment that puts the patient at risk. Until recently, on-the-job training was the only way to learn medicine. Physicians and nurses could only practice treatments when patients entered the hospital with their ailments, and experience with rare situations was only available to those who were in the right place at the right time. Now, with the development of simulation tools and simulation teaching centers, like the newly opened Patient Safety and Training Center at the Dartmouth Hitchcock Medical Center (DHMC), the nature of medical training is quickly changing.

Photograph by Diana Lim ’11

A neonatal intensive care simulation unit at the DHMC simulation center.

The Tools

The only simulation tool available for medical training in the past was the actor. An actor would be hired to play the part of the patient and answer questions based on a script. This mode of simulation is still used in medical schools today, but it clearly does not suffice for training on physical medical procedures. Additionally, without alternatives, a curious situation arises in which physicians begin to want patients with more problems in order to gain the training they need. To fulfill this need, artificial task trainers and mannequins were created. Task trainers grant medical professionals the ability to practice isolated procedures in a low-stress environment. These trainers, which are replicas of isolated portions of the body, allow for focused task practice without the distraction of other factors that may influence the procedure (1). Although helpful for textbook procedures, they are not realistic. Task trainers are separated portions of the body, and do not prepare students for real, complex medical situations involving the entire body. This insufficiency of task trainers led to the creation of high-tech mannequins. Simulation mannequins made by several companies such as Meti, Laerdal, and Gaumard Scientific were created with the aim of eliminating patient risk in medical training by providing a lifelike replacement for the patient (2, 3, 4). The high-tech mannequins created by these companies allow for more holistic training and can be programmed in multiple ways to show a variety of symptoms and reactions to procedures. One of the most recent mannequins created is the iStan, released by Meti in June 2008 (2). The iStan, jointly funded by the U.S. Army Research Development Engineering Command and the U.S. Army Medical Research and Material Command, was originally created as a more portable and versatile simu-

lator for use in combat scenarios (2). Unlike previous mannequins that were built from the outside in — by putting wired parts within a hard rubber shell — the iStan was built from the inside out — branching from an accurate skeletal frame (2). Many upgraded features make the iStan more realistic, such as its life-like bodily secretions and sweating, jugular vein distention, bilateral chest movement and flail chest, real breath, heart and bowel sounds, movable skeletal structure, and vocal ability (2). The outer cover for the iStan was based on a real human cast, making the structure more accurate both anatomically and visually, unlike the unrealistic previous models such as the HPS by Meti and Keri by Simulution (2, 5). The new mannequin can complain, cry, and even drool, giving feedback for every procedure that is done to it (2). The iStan represents a new generation of simulation mannequins that are coming closer to eliminating the need for patient risk in medical training. The focus Meti put on making the mannequin look as realistic as possible — from its portable, unplugged structure to the texture of its skin — creates a more human connection between the mannequin and students, allowing for a more complete simulation training experience. By consulting educators in the fields of medicine, nursing, disaster medicine, emergency response, and the military, Meti and similar companies have succeeded in making a new generation of medical simulation tools that allow for a higher level of preparedness and safety (2). They have opened a door to training without patient risk, and with the ability to test hypothetical high-risk situations before they become a reality.

The Facilities

As important as the simulator mannequins are, training for real medical situations would not be possible without a realistic setting to train in.


Image by Diana Lim ’11

An adult critical care simulation unit at the DHMC simulation center.

Simulation centers offer what mannequins cannot — an environment in which physicians, nurses, and staff can fine-tune the cooperation and speed needed to respond effectively to the reactions of the patients that the mannequins replace. The most recent simulation training center opened last November at DHMC. An 8,000-square-foot facility, the center itself is the size of a small hospital and offers a realistic and versatile environment for simulation learning, unlike other centers that only cater to specific specialties (6). The center, a result of three years of planning, has a Neonatal Intensive Care Unit, a Pediatric Intensive Care Unit, an adult critical care unit, a birthing room, and an operating room that caters to the learning needs of nurses, medical students, hospital volunteers, housekeepers, and physicians alike (6). The hospital's aim in creating the new center was to bring simulation teaching, which has been active at DHMC for ten years, to an on-site location that would provide a multidisciplinary and multimodal approach to medical training (6). The center's design and construction

brings together the best elements from several of the most prominent simulation centers, such as Riverside Methodist in Ohio and Mayo Clinic in Minnesota, as well as new elements unique to DHMC (6). Taking advice from these other centers, DHMC's medical board focused on providing a space with plenty of room for storage at an on-site location to accommodate busy physicians and clinicians who take courses at the center (6). The center offers courses in sedation and rescue, anesthesia crisis resource management, intubations, ACLS, PALS, and airway management, among others (7). The courses use mannequins, patient actors, and task trainers to refine performance of specific procedures, cooperation and speed in the OR, patient communication, safe transportation, and care of medical equipment (7).

The Future

With the formation of multidisciplinary simulation centers that make use of upgraded simulation tools, the

future of medical training is changing. The need for patient risk in on-the-job training is diminishing as more realistic ways to replicate hospital situations are developed. Simulation enables medical professionals to drastically increase the level of patient safety in hospitals. References 1. Simulation Development and Cognitive Science Lab (2008). Available at http://www. hmc.psu.edu/simulation/equipment/tasks/tasks. htm (18 December 2008). 2. METI: Medical Education Technologies, Inc. (2008). Available at www.meti.com (17 December 2008). 3. The Next Generation of Laerdal Simulation (2008) Available at http://www.laerdal.com/ SimMan3G/ (19 December 2008). 4. Gaumard Simulators for Health Care Education (2008). Available at http://www. gaumard.com/ (18 December 2008). 5. Simulution: Practice Made Perfect (2008). Available at http://www.simulution.com/ (19 December 2008). 6. A. Leland, Personal interview, 1 December 2008. 7. Dartmouth-Hitchcock Simulation Center (2008). Available at http://an.hitchcock.org/ dhmcsimulation/index.html (15 December 2008).



Medicine

New Tricks for an Old Foe

The Threat of Antibiotic-Resistant Tuberculosis Jay Dalton '12

Tuberculosis (TB) has affected human beings since Neolithic times (1). In ancient Greece it was known as phthisis, which means "wasting." During the 17th and 18th centuries in Europe it caused the "White Plague" and was known as consumption, accounting for 25 percent of all adult deaths during this period (2). These two names reflect the slow deteriorative progression of the disease in the host. It is the long time scale of the infection period that makes tuberculosis so dangerous. Tuberculosis has become a threat again in the modern era of antimicrobial warfare, because its unique characteristics give it enormous potential for developing resistance to even the strongest antibiotics. Tuberculosis combines one of the slowest division rates among bacteria with a hardy cell wall defense system (2). Both of these factors stretch treatment into a multiple month process, creating a massive window for human error in the form of incorrect or missed dosages (1). Similarly, this slow pace of infection increases the possibility of evolution-based antimicrobial resistance by giving tuberculosis bacteria time to mutate (2). For these reasons, multi-drug resistant tuberculosis is one of the biggest health threats of this generation and the discovery of new treatment methods will be essential in the ongoing fight against this disease.

Tuberculosis

Tuberculosis is normally a chronic, widely variable disease that is usually caused by inhalation of the airborne causative agent (3). The symptoms of TB include fever, cough, difficulty breathing, inflammatory infiltrations, formation of tubercles, caseation, pleural effusion, and fibrosis (3). TB is caused by a mycobacterium. Mycobacterium tuberculosis (MTB) is the most common bacterial agent responsible for TB; however,

M. bovis, M. microti, M. canetti, and M. africanum can also result in TB. MTB is an obligate aerobic bacterium; therefore, it needs oxygen to survive (4). Because of MTB's large metabolic oxygen requirement, it is typically found in oxygen-rich areas such as the respiratory system. Although it generally starts in the lungs, it may spread to other regions of the body via the lymphatic system or blood vessels (3).

The Cell Wall

The cell wall of MTB is one of the major determining factors of its virulence (2). The structure of the cell wall has three major components: mycolic acids, cord factors, and Wax-D. The mycolic acid molecules are of primary interest due to their deadly qualities. Mycolic acids are unique to Mycobacterium and Corynebacterium (2). They are alpha-branched saturated fatty acids with chain lengths as long as 80 carbons (5). Mycolic acids create a lipid shield, which protects against cationic proteins, lysozyme, and oxygen radicals of phagocytosis (2). Gram staining is a means of categorizing bacteria based on the makeup of their cell walls. The amount of the polymer peptidoglycan, which forms a mesh-like structure, determines whether or not the bacterium can hold onto the stain administered in gram staining. MTB cannot be classified as truly gram positive or negative, because its cell wall is impervious to gram staining due to the high content of lipids, especially mycolic acid. This characteristic places MTB in a category of bacteria known as acid-fast bacteria, whose acid-rich cell walls retain a red dye used for staining, despite attempts at de-colorization (1). The full genome of MTB was sequenced in 1998. One of the major discoveries was that a large amount of coding is dedicated to the genesis and lysis of lipids, compared to other bacteria. This explained why over 60 percent of the MTB cell wall is composed of lipids (2). This propensity for lipid production creates the unique mycobacterium cell wall, which is one of the most important characteristics of MTB as it is highly conducive to antibacterial resistance formation.

Pathology

The cell wall of MTB gives it many pathogenic properties. For instance, MTB is a facultative intracellular parasite, which means that it can reproduce inside or outside of host cells. As a result, MTB is able to survive within immune cells known as macrophages without being destroyed by phagocytosis (6). When infectious particles reach the alveolar sacs in the lungs, macrophages phagocytose (engulf) the bacteria and clump together into granulomas in order to contain the infection. Although this process keeps 95 percent of TB infections from becoming activated upon bacterial entry, MTB is able to remain dormant for many years, thanks to its cell wall, which provides resistance to lethal oxidation (1, 2). Once captive within immune cells, MTB is transferred to the lymph system and bloodstream (1). Through this process MTB is able to spread to other organs and multiply in oxygen-rich regions (1). Despite the possibility of extra-pulmonary TB, the majority of cases occur in the upper lungs following reactivation of dormant MTB (1). When MTB becomes active, its cell wall plays a major role in replication and resistance to immune responses. After the initial infection, MTB can reproduce within the macrophage until the cell bursts, which alerts macrophages from peripheral blood (2). However, the process continues because the newly produced MTB cannot be fully destroyed by the macrophages.



As MTB replicates further, T-cell lymphocytes are activated. These immune cells are activated by major histocompatibility complex molecules, which allow the T-cells to recognize MTB antigens (2, 7). At this point, cytokines are released, which activate the macrophages and allow them to destroy MTB (2). This activation is a cell-mediated immune response, as opposed to the original antibody-mediated immune response (2). At this phase in the disease, small rounded nodules known as tubercles form and create an environment in which MTB is unable to multiply. However, because of its cell wall, MTB can survive in the low pH and anoxic tubercles for long periods of time (2). These tubercles are surrounded by many inactivated macrophages in which MTB is able to replicate (2). Through this process, although the cell-mediated immune response is capable of destroying individual bacteria, it is also responsible for the growth of tubercles, which occurs as MTB replicates within and subsequently ruptures inactivated macrophages (2). In these ways, the cell wall of MTB allows it to evade or complicate each step of the immune process, creating a need for man-made antibiotics.

Antimicrobial Resistance

The waxy, hydrophobic cell wall of MTB gives it the ability to survive long exposure to substances such as acids, detergents, oxidative bursts, and antibiotics. In fact, the typical "short" treatment of MTB involves a four-antibiotic regimen for two months and then a two-antibiotic regimen for an additional four months. The antibiotics involved are isoniazid, rifampicin, pyrazinamide, and ethambutol. The cell wall of MTB is so resistant to normal antibiotic measures that the antibiotics listed above, especially isoniazid, are targeted at the synthesis of mycolic acids (8). The inhibition of the gene InhA has been found to induce the lysis of MTB cells (8). However, some MTB strains have mutated and begun to augment the mycolic acids of their cell walls with cyclopropyl groups (8). Although these groups have been shown to hamper

Image courtesy of the Centers for Disease Control and Prevention.

Colorized scanning electron micrograph at 15549x magnification, showing details of the cell wall configuration of tuberculosis bacteria. The cell wall is a key part of the pathogen.

persistent infection, they also protect MTB against immune responses (8). This is simply one example of the adaptive ability of MTB. The long period of treatment, combined with a laundry list of side effects, which can include hepatitis, optic neuritis, and seizures, compels many patients to stop taking medication after symptoms subside (1, 6). The standard therapy for active TB is a six-month program with two months dedicated to isoniazid, rifampin, and pyrazinamide and four months of isoniazid, Rifamate, and Rimactane (1, 6). Ethambutol or streptomycin is also added until the patient's drug sensitivity is known (6). This long period of treatment is a direct result of the slow reproductive time and resistant cell wall of MTB (1, 2). Both of these factors give MTB ample time to capitalize on patient or doctor error by producing mutations like the addition of cyclopropyl groups mentioned above. Two types of drug-resistant MTB strains are currently recognized. Multidrug-resistant tuberculosis (MDR TB) is resistant to at least two of the four first-line drugs listed above (2). Extensively drug-resistant tuberculosis (XDR TB) is defined as resistant to isoniazid and rifampin and also to a fluoroquinolone and at least one of three injectable second-line drugs (2). XDR TB has an estimated cure rate of only 30 percent in patients with an uncompromised im-

mune system, compared to a 95 percent cure rate for normal tuberculosis (2, 9). The four-drug regimen of tuberculosis treatment is a means of avoiding MDR TB and XDR TB. In 2008, the World Health Organization (WHO) indicated that MDR TB was at a record high of 489 cases, compared to 139 cases in 2006, and that XDR TB had been reported in 45 countries (2). These findings reflect a pressing need not only for greater adherence to prescribed drug treatments, but also for the discovery of new antibiotics or other means of combating this rapidly growing problem. The WHO has implemented a new system in response to its drug-resistant tuberculosis findings, which is known as directly observed treatment, short-course (DOTS) (10). This approach facilitates cooperation between doctors, health workers, and primary health care agencies in order to monitor tuberculosis patients and facilitate the complete eradication of infection (10).
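As a toy illustration of the resistance categories just described, the sketch below encodes the article's definitions as a small classifier. The drug lists and profile format are illustrative assumptions, not a clinical tool, and the MDR criterion follows the article's wording (resistance to at least two first-line drugs), which is looser than the WHO's formal definition of resistance to at least isoniazid and rifampin.

```python
FIRST_LINE = {"isoniazid", "rifampin", "pyrazinamide", "ethambutol"}
FLUOROQUINOLONES = {"moxifloxacin", "levofloxacin", "ofloxacin"}      # illustrative list
INJECTABLE_SECOND_LINE = {"amikacin", "kanamycin", "capreomycin"}     # illustrative list

def classify_resistance(resistant_to):
    """Label a strain according to the definitions given in the article.

    resistant_to: the set of drug names the strain resists.
    MDR here means resistance to at least two first-line drugs (the
    article's phrasing); XDR means resistance to isoniazid and rifampin
    plus a fluoroquinolone and at least one injectable second-line drug.
    """
    drugs = {d.lower() for d in resistant_to}
    is_xdr = ({"isoniazid", "rifampin"} <= drugs
              and drugs & FLUOROQUINOLONES
              and drugs & INJECTABLE_SECOND_LINE)
    if is_xdr:
        return "XDR TB"
    if len(drugs & FIRST_LINE) >= 2:
        return "MDR TB"
    return "not multidrug-resistant"

print(classify_resistance({"isoniazid", "rifampin"}))                              # MDR TB
print(classify_resistance({"isoniazid", "rifampin", "moxifloxacin", "amikacin"}))  # XDR TB
```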

Conclusion

As demonstrated, even in its non-resistant form, MTB provides a massive challenge for the immune system through its unique cellular properties. It is capable of infiltrating the body's own immune cells, of surviving for weeks outside of the body, and of resisting most standard antibiotics.



When coupled with human error, tuberculosis proves a deadly adversary. As demonstrated by the findings of the WHO, MDR TB is a global concern, which needs to be closely monitored in the future. Similarly, although still rare, XDR TB threatens the medical landscape of this generation and needs to be met with patient compliance and cooperative action on the part of doctors and health agencies. References 1. S. Sharma, Tuberculosis (2005). Available at http://www.emedicinehealth.com/tuberculosis/ article_em.htm (18 November 2008). 2. K. Todar, Mycobacterium tuberculosis and Tuberculosis (2008). Available at http://www. textbookofbacteriology.net/tuberculosis.html (18 November 2008). 3. Tuberculosis (2005). Available at http:// www2.merriam-webster.com/cgi-bin/mwmednlm ?book=Medical&va=tuberculosis (19 November 2008). 4. Aerobic (2005). Available at http://www2. merriam-webster.com/cgi-bin/mwmednlm (19 November 2008). 5. J. Lackie, The Dictionary of Cell and Molecular Biology (2008). Available at http:// cancerweb.ncl.ac.uk/cgi-bin/omd?mycolic+acid (19 November 2008). 6. S. Swierzewski, Tuberculosis (2007). Available at http://www.pulmonologychannel. com/tuberculosis/index.shtml (29 November 2008). 7. MeSH Descriptor Data (2008). Available at http://www.nlm.nih.gov/cgi/mesh/2008/MB_cgi? mode=&term=Major+Histocompatibility+Comple (19 November 2008). 8. W. Jacobs, Mycolic Acids of Mycobacterium tuberculosis: An Achilles Heel or a Neutralizing Weapon? (2001). Available at http://www. rockefeller.edu/lectures/jacobs011901.html (29 November 2008). 9. Extensively Drug Resistant Tuberculosis (2007). Available at http://www.medicinenet. com/extensively_drug-resistant_tuberculosis_ xdr_tb/article.htm (29 November 2008). 10. Tuberculosis: An Airborne Disease (2004). Available at http://findarticles.com/p/articles/mi_ m1309/is_/ai_54157859 (29 November 2008).



NOBEL PRIZE 2008

Nobel Prize in Physiology or Medicine

Human Papillomavirus and Human Immunodeficiency Virus PETER ZHAO ‘10

From influenza to smallpox to Ebola, viruses, some of the smallest and most intriguing infectious agents, have long plagued society. To scientists, understanding viral mechanisms of infection is critical to learning how to combat them. The 2008 Nobel Prize in Physiology or Medicine was awarded to recognize two enormously important discoveries that occurred about twenty years ago in the field of virology: cancer-causing strains of human papillomavirus (HPV), and the human immunodeficiency virus (HIV).

Harald zur Hausen, Human Papillomavirus, and Cervical Cancer

Harald zur Hausen of Germany was awarded half the prize for his discovery that certain HPV strains can cause cervical cancer. Zur Hausen specializes in oncovirology, the study of cancer-causing viruses, and his work has raised new questions about the nature of virus-host interactions. Zur Hausen was one of the first medical scientists to postulate a connection between viral infection and cancer. In 1967, he contributed to a groundbreaking study in which a team of scientists led by researcher Werner Henle found that a herpes-like virus could transform healthy lymphocytes into cancerous ones (1). Henle's team identified the presence of the Epstein-Barr virus (human herpesvirus 4) in a culture of cancer cells from a patient with Burkitt's lymphoma. After lethally irradiating the cells, the team mixed them with a solution of normal peripheral leukocytes, which under normal circumstances do not divide. Surprisingly, after two to four weeks of incubation, these leukocytes began proliferating. In addition, the team discovered viral antigens in the culture, indicating that they had been infected by a virus (1). These results showed that the virus could induce cancerous growth in normal cells. Moreover, the results suggested that other cancer-causing viruses might also exist.

Throughout the 1970s, zur Hausen continued searching for cancer-causing viruses. His first breakthrough came when he succeeded in isolating

Image courtesy of Holger Motzkau and available at http://en.wikipedia.org/wiki/ File:Harald_zur_Hausen-press_conference_Dec_06th,_2008-6.jpg (Accessed 18 January 2009).

Harald zur Hausen received half the Prize for his discovery of HPV’s role in cervical cancer.

HPV strain 6 from genital warts and showing that the virus was responsible for the warts. While many scientists believed that one type of HPV was responsible for all warts, zur Hausen was convinced that there were at least several, and that different types were responsible for non-genital warts and genital warts (2). He also suspected that some of these HPV types could be oncoviruses. In 1976, he boldly hypothesized that HPV infection was one of the primary causes of cervical cancer. The hypothesis was strongly contested by other scientists, who believed that a herpesvirus was the culprit. Zur

Hausen spent nearly ten more years building evidence for his hypotheses. Using molecular cloning methods, zur Hausen was finally able to isolate HPV 16 DNA from cervical cancer tumors in 1983 (3). He used similar methods to isolate HPV 18 a year later, proving that there were multiple types of HPV. Furthermore, zur Hausen discovered that DNA from the tumors would react to probes for HPV 16 and HPV18, proving that these viruses were involved in causing cancer (2). At the time, these two HPV types were responsible for over 70 percent of all cervical cancers. Zur Hausen’s work has led to the development of Gardasil (Merck) and Cervarix (GlaxoSmithKline), the first vaccines against a preventable cancer. The vaccines, which target HPV 16 and 18, prime the body’s immune cells to recognize and attack the virus, thereby preventing the initial infection from occurring. The vaccine has been shown to be one hundred percent effective at eliminating the pre-cancerous lesions associated with HPV 16 and 18.

Luc Montagnier, Françoise Barré-Sinoussi, and the Human Immunodeficiency Virus

Luc Montagnier and Françoise Barré-Sinoussi of the Pasteur Institute in France were each awarded a quarter of the prize for their discovery of the human immunodeficiency virus (HIV) in 1983. At the time, Barré-Sinoussi was working under Montagnier. Both scientists were experts on retroviruses, viruses that use an enzyme called reverse transcriptase to encode DNA from an RNA template. In 1981, the US Centers for Disease Control reported a series of strange opportunistic infections in gay men. Similar cases



began appearing in France in 1982. Over the next 18 months, the number of cases worldwide multiplied rapidly, and teams of scientists competed in a worldwide effort to identify the pathogen. Willy Rozenbaum, a clinician at the Hôpital Bichat in France, was convinced that the flurry of diseases was being caused by a new retrovirus (4). The crucial sample was a biopsy from a swollen lymph node taken from one of Rozenbaum’s patients. Montagnier and his team immediately began working on the lymph node, mincing it and dissecting the fragments into cells. He then cultured the T-lymphocytes of the dissected lymph node with human interleukin-2 and an antiserum to interferon to coax the virus out of an inhibited state (5). Fifteen days later, Barré-Sinoussi detected the first traces of reverse transcriptase, a hallmark of retrovirus activity. The team attempted to precipitate the virus using antibodies against two other known retroviruses, but observed no precipitation and concluded that this virus was indeed a new type of retrovirus (5). After obtaining several other specimens from patients afflicted with

the unknown disease, the team noticed cross-reactivity between the viral proteins of the specimens. The crossreactivity implied that the same virus was present in each of the specimens. By May 1983, Montagnier and Barré-Sinoussi had collected enough evidence to characterize the new retrovirus, naming it LAV for lymphadenopathy-associated virus. They had also collaborated with electron microscopist Charles Dauget to obtain the first electron microscope images of the virus (5). However, the team had no evidence that the LAV virus was the cause of the ongoing AIDS epidemic. That evidence would in fact come a year later from an American team led by biomedical researcher Robert Gallo. Gallo’s team, which had developed a method to grow T-lymphocytes in vitro in 1976, published a series of papers that proved Montagnier’s virus was the cause of AIDS (6). Because of the perceived impact of such a discovery, the competition between the French and American scientists was tough and sometimes bitter. Despite proving the pathogenicity of the virus and developing most of the meth-

ods that allowed Montagnier's team to conduct its experiments, Gallo was not included as a recipient of the Nobel Prize. However, the overall result of this fierce competition was that the HIV virus was isolated and identified less than 3 years after the US Centers for Disease Control's first reports of the disease. The first diagnostic tests were developed two years later, and the first antiretroviral drugs shortly thereafter. The result was that countless lives were saved due to researchers' efforts. References 1. W. Henle et al., Science, 157, 1064-1065, (1967). 2. Harald zur Hausen (2008). Available at http://www.gairdner.org/awards/awardees2/2008/2008awarde/haraldzurh (5 January 2009). 3. The Nobel Prize in Physiology or Medicine (2008). Available at http://nobelprize.org/nobel_prizes/medicine/laureates/2008/press.html (27 December 2008). 4. The discovery of the AIDS virus in 1983. Available at http://www.pasteur.fr/ip/easysite/go/03b-000027-00i/the-discovery-of-the-aidsvirus-in-1983 (3 January 2009). 5. L. Montagnier, Science, 298, 1727-1728, (2002). 6. D.A. Morgan, F.W. Ruscetti, R. Gallo, Science, 193, 1007-1008, (1976).

Image courtesy of the Public Health Image Library.

Scanning electron micrograph of HIV viruses (green) budding from a cultured T-lymphocyte.



NOBEL PRIZE 2008

Nobel Prize in Chemistry

Applications of the Green Fluorescent Protein HANNAH PAYNE ‘11

As visual creatures, humans believe what they see. We rely on our vision for macroscopic observations. Vast advances in microscopy now also enable visualization of cellular and sub-cellular structures. However, even the best microscopes cannot directly view molecular-level processes such as gene expression or protein interaction in vivo. In 1962, scientists found a solution in the most unlikely place – the green fluorescent protein (GFP) isolated from the bioluminescent jellyfish Aequorea victoria, native to the northern Pacific Ocean. The 2008 Nobel Prize in Chemistry was awarded in three equal parts for the discovery and development of GFP. Osamu Shimomura was recognized for discovering and characterizing GFP in 1962. Martin Chalfie first demonstrated that GFP could be expressed in other organisms without the aid of auxiliary proteins thirty years later. Finally, Roger Tsien developed a diverse palette of GFP-related fluorescent proteins with improved brightness, photo-stability, and other useful properties.

Image courtesy and permission of Richard Wheeler (Zephyris). Retrieved from http://en.wikipedia.org/wiki/File:GFP_structure.png (Accessed 16 January 2009).

Ribbon diagram of green fluorescent protein, based on the determined X-ray structure.

Discovery

Osamu Shimomura was initially interested in the protein aequorin, obtained from A. victoria, which emits blue light in response to calcium. However, the jellyfish appears bright green, not blue. The puzzle was solved in 1962 by Shimomura's discovery of GFP, which has a peak excitation wavelength (460 nm) closely matching the peak emission wavelength of aequorin (470 nm). Others concluded that GFP (the acceptor) absorbs the energy emitted by aequorin (the donor) in a process now known as Fluorescence Resonance Energy Transfer (FRET). Shimomura also identified the central chromophore, the specific chemical group responsible for GFP's fluorescent properties (1). At the time of Shimomura's discovery, experts believed that auxiliary enzymes were required to form this central chromophore. This would mean that if GFP were expressed in any organism other than A. victoria, it would likely be non-functional. Martin Chalfie settled the issue by obtaining the gene for GFP from Douglas Prasher, who had originally cloned it, and successfully expressing the fluorescent protein in E. coli in 1992. He then expressed GFP in the nematode C. elegans, driven by a promoter for β-tubulin that is strongly expressed in six touch receptor neurons (1). This demonstrated the usefulness of GFP as a genetic marker, without the need for additional genetic manipulation. It was later expressed in yeast and, more importantly, mammals (1). At this point, the mechanisms of GFP chromophore formation and fluorescence were still not understood, despite the obvious utility of the protein. Roger Tsien found that oxygen was all that was needed to activate GFP, and since aerobic cells constitute the vast majority of biological research, the protein could be broadly expressed in these cells without additional enzymes. In addition, by inducing point mutations in the GFP gene, Tsien improved brightness and photo-stability and created a colorful spectrum of variants that have proved invaluable in the simultaneous labeling of multiple proteins. Tsien also engineered fluorescent proteins such as tdTomato and mCherry, which fluoresce in the orange-red part of the spectrum, based on the protein DsRed from the coral Discosoma. With the help of other collaborators, he also solved the crystal structure of GFP (1).

Properties

The original green fluorescent protein (GFP) found in Aequorea consists of 238 amino acids, of which residues 65-66-67 form a fluorescent chromophore in the presence of oxygen. The tertiary structure comprises a cylindrical eleven-strand β-barrel threaded by an α-helix containing the fluorescent chromophore (1). GFP is maximally excited by UV light at 400 nm, with another smaller peak at 470 nm (blue light). It emits photons with a sharp peak at 505 nm (green light) (1). It is non-toxic when expressed at reasonable levels. Additionally, it can typically be fused to other proteins without changing either its fluorescence properties or the functional properties of the protein. These qualities make GFP extremely conducive to diverse biological applications.

Research Applications

Since its discovery, GFP has revolutionized biological research. Over 20,000 publications involving GFP have appeared since 1992, permeating essentially every area of biology (1). Along with the discoveries being recognized this year with the Nobel Prize, concurrent advances in imaging techniques and data analysis have combined to maximize the utility of this versatile tool. In the laboratory, scientists commonly fuse GFP to a protein of interest in order to track the trafficking of the protein within a cell (1). For example, the oscillating movement of proteins during cell division in bacteria was observed by fusing MinC to GFP (5). Expressing the GFP gene alone with a specific promoter is often used to label classes of cells in which that promoter is active. This is commonly used in neurobiology to label subsets of neurons, as first achieved by Chalfie in C. elegans. Currently, a modified form of GFP is being used to build a complete wiring diagram of the Drosophila brain (2). By concurrently expressing three or four fluorescent proteins, numerous individual cells can be viewed simultaneously, using a technique known as "Brainbow" (3). Another exciting application is specific expression of fluorescent proteins in tumor cells (4). Fluorescence technology is also used to monitor the distance between two proteins that have been tagged using GFP-family proteins. In the technique known as FRET, the emission spectrum of one fluorophore (the donor) matches the excitation spectrum of the other (the acceptor). When the two tagged molecules are in close proximity (less than about ten nanometers) and the donor is excited by incident light at its excitation wavelength, the emitted energy is transferred to the acceptor and is then emitted at the acceptor's characteristic emission wavelength. FRET can not only reveal whether two molecules are close together and therefore interacting in some way, but can also provide an estimate of the exact distance between them, based on the decay rate of the signal (1). Other techniques include monitoring the interaction of two proteins by tagging each protein with half of the GFP molecule, so that when they bind, GFP is completed and becomes functional. GFP has also been strategically fused with other molecules to construct pH or calcium sensors, enabling concentrations to be imaged at high spatial and temporal resolutions across populations of cells (1).
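The distance dependence behind FRET can be made concrete with the standard Förster relation, E = 1/(1 + (r/R0)^6), where R0 (the Förster radius) is the separation at which half the donor's energy is transferred. The short Python sketch below is an illustration added here, not part of the original article; the 5 nm Förster radius is an assumed, typical value for a GFP-family pair rather than a measured one.

# Illustrative sketch (added): Foerster relation for FRET efficiency.
# Assumption: R0 = 5 nm is a hypothetical, typical Foerster radius for a GFP-family pair.

def fret_efficiency(r_nm, r0_nm=5.0):
    """Fraction of donor energy transferred to the acceptor at separation r (nanometers)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

for r in (2.0, 5.0, 8.0, 10.0):
    print(f"separation {r:4.1f} nm -> transfer efficiency {fret_efficiency(r):.2f}")

# The steep 1/r^6 falloff is why FRET only reports on separations below roughly ten
# nanometers, as described above, and why a measured efficiency can be inverted to
# estimate the distance between the two tagged proteins.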

Conclusion

Shimomura's initial discovery of GFP, followed by its expression in other organisms by Chalfie, and finally the development of a chromatically and functionally diverse spectrum of related fluorescent proteins by Tsien, have dramatically impacted biological research. Aside from increasing the visual appeal of otherwise dull figures, GFP-family proteins have already shed light on numerous important findings. The future looks even brighter.

References
1. The Royal Swedish Academy of Sciences, Scientific Background on the Nobel Prize in Chemistry (2008).
2. A. Chiang, "Building a wiring diagram of the Drosophila brain" (2008). Lecture delivered at the Marine Biological Laboratory, 30 June 2008.
3. J. Livet et al., Nature 450, 56 (2007).
4. M. Yang et al., Proc. Natl. Acad. Sci. U.S.A. 100, 14259 (2003).
5. D. M. Raskin, P. A. J. de Boer, Proc. Natl. Acad. Sci. U.S.A. 96, 4971 (1999).



NOBEL PRIZE 2008

Nobel Prize in Physics

Origins and Mechanisms of Broken Symmetry HEE-SUNG YANG '12

Why can the Universe exist as it is now? Physicists have been striving to answer this question, and one proposed answer is "broken symmetry." In 2008, the Nobel Prize committee recognized the significant contributions of three Japanese-born scientists in this field. On October 7, 2008, the Royal Swedish Academy of Sciences announced that the Nobel Prize in Physics would be jointly awarded to Yoichiro Nambu, Makoto Kobayashi, and Toshihide Maskawa. Nambu, professor emeritus at the Enrico Fermi Institute of the University of Chicago, worked on the mathematical model of spontaneous broken symmetry, while Kobayashi and Maskawa investigated the origin of broken symmetry. Their work on spontaneous broken symmetry explains "why the universe is made up of matter and not anti-matter" (1). Their theory also hints at why the Universe has managed to survive for billions of years. The Big Bang created both matter and anti-matter. Had the Universe been perfectly symmetrical, the world we live in would not have formed: when matter and anti-matter collide, nothing is left but radiation energy. Broken symmetry is a central field in physics, as physicists believe it is this asymmetry that keeps the Universe in its current state. The world remains as it is because a tiny excess of matter, roughly one extra particle of matter for every ten billion anti-matter particles, survived (2). Yoichiro Nambu charted the course to broken symmetry by introducing spontaneous symmetry violation in 1960. While studying superconductivity, a state in which currents flow without any resistance, he recognized the spontaneous symmetry violations that underlie superconductivity. He then translated his computations into elementary particle physics, and his work soon became a milestone among theories of the Standard Model. The Standard Model is a theoretical model proposed in particle physics that explains how nature works. The Model describes three of the four fundamental forces of nature and what types of particles participate in these interactions. First of all, particles are divided into fermions (quarks and leptons, the matter constituents) and bosons (force carriers, or force-mediating particles). Fermions are then categorized into quarks, which carry color charge, and leptons, which do not. Because they carry color charge, quarks take part in interactions involving the strong nuclear force, while leptons interact only through the other forces.

Image courtesy of Holger Motzkau and available at http://en.wikipedia.org/wiki/File:Makoto_Kobayashi-press_conference_Dec_07th,_2008-2b.jpg (Accessed 28 January 2009).

Makoto Kobayashi received a quarter of the prize for discovering the origin of broken symmetry and predicting the existence of a third generation of quarks.

At the time, only three quarks were known: up, down, and strange. The other three quarks – charm, top, and bottom – whose existence Kobayashi and Maskawa's theory required, were confirmed experimentally in the years that followed. Leptons, on the other hand, include three neutrinos (the electron neutrino, muon neutrino, and tau neutrino) and three negatively charged particles (the electron, muon, and tau). The bosons include the gluons, which carry the strong nuclear force; the photon, W+, W- and Z0, which carry the electroweak interactions; and the graviton, which has not yet been observed, for gravity (2). At the time of his discovery, Nambu's assumption that the spontaneous symmetry violation seen in superconductivity could also be applied to elementary particle physics was considered bold (2). His insight into spontaneous broken symmetry can be explained with the following analogy. Imagine a perfectly symmetrical spinning top representing an unstable state: when it loses its balance, the symmetry is broken (3). Likewise, in terms of energy, the state of a fallen pencil is more stable than the perfectly symmetrical balanced state. Therefore, given that the vacuum is the lowest-energy state in the cosmos, it can be inferred that the Universe's quantum fields are not perfectly symmetrical: there was an imbalance between matter and anti-matter when the Universe was formed. Nambu's discovery affected the Standard Model in two ways. First, it allowed the model "[to unify] the smallest building blocks of all matter" (4). Second, the effects of the third fundamental force in nature, the strong nuclear force, could be incorporated (5). Kobayashi and Maskawa, for their part, made a different claim, one as bold as Nambu's. Their work builds on the double violation of symmetry observed by James Cronin and Val Fitch in the radioactive decay of kaon particles. This unusual broken symmetry demanded an explanation, as the phenomenon threatened the Standard Model (2). Kobayashi and Maskawa theorized that there must be more than three quarks – at this point three of the six quarks were yet to be discovered – in order for their analysis based on the Standard Model, and the model itself, to hold. Kobayashi and Maskawa also calculated the probability that a quark in a kaon will transform itself into an anti-quark and vice versa.



Their calculations implied that if a similar type of transformation were to happen to matter and anti-matter, further quark families had to exist beyond the first family of up and down quarks (2). It took three decades for their theory to be confirmed. One of the quarks in the second family, charm, was eventually discovered in 1974; the two quarks of the third family in the model, top and bottom, were discovered in 1994 and 1977, respectively (2). In 2001, the BaBar particle detector at Stanford and the KEK accelerator in Tsukuba, Japan confirmed that B-mesons, the cousins of kaons, also experience broken symmetry, if rarely, in the way Kobayashi and Maskawa predicted thirty years ago (2). While the breakthroughs by the three laureates improved the Standard Model, some questions remain unanswered. One of nature's fundamental forces, the gravitational force, has not yet been incorporated, and the existence of the Higgs boson has not yet been confirmed. However, Kobayashi believes "the issue of the standard model is almost over," and now scientists are awaiting "new physics" (6). Although physicists are still not content with the existing Standard Model, there is no doubt that their work put physics into a new dimension.
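Kobayashi and Maskawa's argument can be summarized with a standard parameter count: an N-generation quark mixing matrix contains N(N-1)/2 mixing angles and (N-1)(N-2)/2 irreducible complex phases, and it is such a phase that lets matter and anti-matter behave differently. The short Python sketch below is an illustration added here, not part of the original article; it works out the count and shows why at least three quark generations are needed for a CP-violating phase.

# Illustrative sketch (added): parameter count of an N-generation quark mixing matrix.
# N(N-1)/2 rotation angles and (N-1)(N-2)/2 physical complex phases survive rephasing;
# a non-zero phase is what permits the matter/anti-matter asymmetry discussed above.

def mixing_parameters(n_generations):
    angles = n_generations * (n_generations - 1) // 2
    phases = (n_generations - 1) * (n_generations - 2) // 2
    return angles, phases

for n in (2, 3):
    angles, phases = mixing_parameters(n)
    print(f"{n} generations: {angles} mixing angle(s), {phases} CP-violating phase(s)")

# Two generations leave no room for a phase, while three generations admit exactly one,
# which is why Kobayashi and Maskawa needed six quarks for broken symmetry to fit the model.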

References
1. C. Moskowitz, Will the Large Hadron Collider Destroy Earth? (2008). Available at http://www.livescience.com/mysteries/080909-llm-lhc-faq.html (23 Jan. 2009).
2. The Royal Swedish Academy of Sciences, The Nobel Prize in Physics 2008 (2008). Available at http://nobelprize.org/nobel_prizes/physics/laureates/2008/info.pdf (23 Jan. 2009).
3. A. Cho, 2008 Physics Nobel Prize Honors American and Japanese Particle Theorists (2008). Available at http://sciencenow.sciencemag.org/cgi/content/full/2008/1007/1 (24 Jan. 2009).
4. The Royal Swedish Academy of Sciences, Press Release (2008). Available at http://nobelprize.org/nobel_prizes/physics/laureates/2008/press.html (23 Jan. 2009).
5. D. Overbye, "Three Physicists Share Nobel Prize," The New York Times, 7 Oct. 2008.
6. A. Smith, Telephone Interview (2008). Available at http://nobelprize.org/nobel_prizes/physics/laureates/2008/kobayashi-telephone.html (23 Jan. 2009).



interview

In the Field With an Arctic Pioneer conducted by Sunny Zhang ‘10

Ross Virginia is highly involved at Dartmouth College as a professor of environmental studies, the director of the Institute of Arctic Studies, and the principal investigator of the Integrative Graduate Education and Research Traineeship (IGERT) program. Furthermore, his influence extends even to Antarctica, where a valley is named after him to honor his research in soil biology. DUJS staff writer Sunny Zhang spoke with Virginia on topics ranging from the early days of the Arctic Institute, to the new graduate program in polar research, and the very pressing and real effects of climate change.

DUJS: To begin with, can you shed some light on the background of the Arctic Institute?

Ross Virginia: The institute goes back to 1989. When the original director left, they asked me to fill the position. I'm a polar researcher who studies regions in Antarctica. The Arctic Institute has been involved on the undergraduate level by offering courses on polar subjects as well as holding student exhibits. It has also been involved in public outreach. An example is the Thin Ice exhibit at the Hood Museum last year. Now it is looking to bridge the gap in graduate education.

DUJS: I know that Dartmouth recently received close to $3 million of grant money from the NSF (National Science Foundation) for a new polar sciences and engineering graduate program, the Integrative Graduate Education and Research Traineeship (IGERT) program. How did Dartmouth get this grant and what is the program's purpose and goal?

RV: Dartmouth is a major graduate institution and we had a lot of graduate students scattered about doing polar things but no way to draw them together as a community. They were getting a very traditional science-based education, and what occurred to us is, this new generation of polar scientists and engineers, what should they know and what aren't they getting at a traditional graduate program? We realized they were missing exposure to the human dimensions of the work, the relevance of working with people, and particularly with the people that are experiencing this climate change in the north. A big part of this grant is to help students broaden the interdisciplinary part of their science research and then see how their research fits in to meet the needs of the people that live in the north, and also to help them to partner and work more closely with indigenous people of the north. It's taking science students and challenging them to ask different questions of their science after recognizing what the needs are of the people that are being affected, as opposed to just knowing what the science question is, standing alone. Our IGERT program partners with Greenland and the Cold Regions Research and Engineering Laboratory (CRREL).

Image courtesy of Ross Virginia.

Ross Virginia and helicopter atop the Taylor Glacier, at the end of the East Antarctic Ice Sheet.

DUJS: What resources are available to IGERT students?

RV: What the IGERT does is the NSF provides Dartmouth with a pile of money and IGERT students receive a two-year fellowship at a higher education institute. There's a national IGERT recruiting network and there's additional funds and a special curriculum developed for these students. Probably the most exciting part of that is that students are able to spend a good portion of a summer in Greenland, where they will be doing science in Greenland, and then will go to the capital of Greenland, Nuuk, and will interact with the Inuit Circumpolar Council and various other organizations to determine what is important to the people of Greenland in terms of climate change, their use of natural resources, and the like.

DUJS: Is the program starting up this coming year?

RV: The award was made this August. The first set of IGERT students will be admitted next fall. We're working to recruit students, getting the curriculum developed, and working with Greenland to establish programs.


Image courtesy of Ross Virginia.

Polar desert landscape of Virginia Valley, which is named after Ross Virginia. The valley is located in Olympus Range, Victoria Land, Antarctica.

In order to work with Greenland effectively on IGERT we have to develop and sustain this relationship. I went to Greenland in August. We're bringing Greenland researchers and students here. Dickey is also funding a fellowship and offering it to students that want a fellowship in Greenland. We're doing this back and forth to start a collaboration.

DUJS: What words would you use to summarize the IGERT program?

RV: The words that really describe what NSF is trying to do are "interdisciplinary" and "transformative." They're really trying to change the way faculty and graduate students work together.

DUJS: So instead of graduate students focusing on one narrow scientific topic, this program will allow them to integrate themselves into bigger issues, more relevant to human society.

RV: Right. It is definitely trying to integrate students from different departments and backgrounds and encourage them to work together. This IGERT program is NSF's way of trying to broadly affect graduate education, and the focus is on traineeship. They're investing in individual students and will follow these students throughout their careers to figure out whether these types of programs, extra attention, and support make for more science and a better future.

DUJS: Can you talk a little more about what research and programs the Arctic Institute engages in? How does the Institute work?

RV: The institute is part of the Dickey Center. The Dickey Center broadly helps enhance and connect any international activities on campus. The Arctic Institute operates in the same way. We have funds to help undergraduate students that want to do an internship or research in Alaska or Greenland, for example. We want to help students be engaged in issues in the north and related to that. The student group, Dartmouth Council on Climate Change, works closely with us. This group is very interested in climate change and climate change policy. They're a Dickey student group that has a budget and can bring in different speakers, and works like the World Affairs Council. For faculty, we provide support for them to attend international meetings, help bring speakers in, just in general trying to increase the amount of activity that is going on. Now that this IGERT has been awarded, the Dickey Center will be the home for that IGERT. We want to work on connecting the students, faculty, and the public. Dickey and the Arctic Institute got the IGERT here; now our job is to help the various departments get involved and meet their aspirations for IGERT.

DUJS: Shifting gears a little, how is your research related to climate change and the impact it has on the earth?


RV: I am an ecosystem ecologist by training. I focus on nutrient cycles, carbon and nitrogen cycles and how these nutrients cycle in soil systems, and the biodiversity of soils, and how the life in soils influences the rate of cycling. As the earth warms, that stimulates the metabolism of soil and increases the rate of biological activity. In that process the ancient carbon that accumulated when the system was colder is being metabolized by microorganisms and released into the atmosphere as carbon dioxide. This is one of the positive feedback cycles of climate change. As it gets warmer there's more biological activity, more melting of permafrost, more carbon dioxide released into the atmosphere, and more warming. In 1989, I went down to Antarctica to work on the dry valleys there and have been down there fourteen times since then.

DUJS: Have you noticed differences in the weather pattern in Antarctica or any other physical changes since you've started working down there?

RV: We've actually noticed in the areas I work in that those areas have been cooling. Part of climate change is that not all places get warmer – this particular part of Antarctica is getting cooler. Human activity is still driving this change. When the ozone hole opens over Antarctica, it's like a rip or tear in a blanket, and more energy can escape into space. Even though greenhouse gases are being pumped in, and you'd think Antarctica is getting warmer and warmer, the part of Antarctica really affected by the ozone hole seems to not be heating up yet. Most of the climate models suggest that the ozone is repairing itself. The warming is seen very strongly in the Arctic regions and in the peninsula of Antarctica where the ice shelves are breaking off and floating away. The actual core of Antarctica has been cooling over the last two decades. That's still climate change. We've been looking at how that cooling is influencing the function of ecosystems and the cascade of effects that the small change of cooling provides on the polar systems.

DUJS: So after you and your colleagues have gathered all this data and evidence that shows there's a shift in how the earth is acting due to human activity, how do you bring that to the attention of policy makers who will hopefully try to implement policies that make the public more aware of what's going on?

RV: That's one of the things that the Dickey Center is working very hard to do. We're hosting a workshop December 1-3 and we're bringing in people from Canada, Russia, Greenland, and the United States that include deputy foreign ministers, academics, and people involved in commerce and business who will engage in a three-day roundtable discussing the security implications of the melting ice in the north. As the ocean melts, that's going to open up all different kinds of opportunities. There will be more oil and gas developments offshore, there will be more commerce, but also more opportunity for international conflict. A lot of these boundaries and resources have not been fully settled upon.

The northern coasts of these Arctic nations are home to large indigenous populations, and there are a lot of environmental and economic concerns as well as development issues. It is essential that these indigenous peoples have control over their fate. There can be conflicts of interest among various nation states as well as involving indigenous peoples. There will also be a race for resources as ice starts to melt in the north. For example, consider the Northwest Passage, waters that you can now traverse with ships as the ice begins to melt: the U.S. sees that as an international passageway between two ocean bodies, but Canada claims it as internal waters. It's not settled. This conference is really trying to bring scientists, policy makers, and representatives of indigenous groups together to hopefully develop questions, an agenda, and priorities so when the next administration comes in they have an idea of what should be considered and worked on. It has been about ten years since the U.S. has revised its Arctic policy document, and things are changing so quickly up there that some changes in policy need to be made, and soon. The Dickey Center is trying to get this dialogue going and get the right people together. Academic institutions can get people to talk in ways that they can't talk in Washington, D.C.

DUJS: In what areas of climate change research do you think more work needs to be done? What is currently being researched that is essential to the overall understanding of what is happening to our environment?

RV: Looking north, it is really important to understand the behavior of sea ice in the Arctic Ocean. That will provide feedback on climate inland and will have major feedback effects on marine ecosystems. The polar bears have just been put on the threatened species list, and the polar bear is just one species but it is emblematic of the marine mammal food web. Another issue is the contamination of the Arctic food web. We think of the north as a very pristine place but it is actually a place where a lot of our pollutants end up. An important area of research is how climate change influences not only the animals involved but also the relationship between the animals and the people. On the terrestrial side, a big problem is that as the north gets warmer and drier, there are more and more fires. There are many reports from Alaska now of fires, and the black soot rises up and lands on the snow and increases the rate at which the snow melts. This whole feedback situation comes into effect when the soil gets warmer and drier, becoming biologically more active, and releasing more CO2 into the atmosphere. We can be trying to cut back carbon emissions down here but we're not gaining as much as we think we are because we're being bitten by the climate change effects that are already taking place. So knowing what these effects are and knowing how effective our reduction of emissions against the natural ecosystems will be is really important.

DUJS: Can this process of climate change be slowed down much or do we just have to deal with it as it comes?


RV: A lot of it is having to deal with it. Everyone is talking now about adaptation and mitigation. How do we accept what is going on? How do we adapt to the change that is on its way? How do we reduce the impact of emissions themselves? The adaptation process is where it's really important for science to engage with people who are forced to adapt to this change. What science is important to them? They want to be full partners in this science. They have a lot of knowledge of their environment. The knowledge comes from living in a place for hundreds of thousands of years. One of the challenges that IGERT has as well is to help students understand how this traditional knowledge and western science collectively contribute to figuring out what is happening.

DUJS: I've read articles about people who live on islands that are being flooded by the rising sea levels and have already been or will be forced to leave their homes. What is the cause of this rise?

RV: This is another big issue. Sea level rise is exceeding the rate of current models. People are trying to figure out why. There's some evidence that the ice sheets in Greenland are breaking off icebergs at a faster rate than people thought was possible. What appears to be happening is that we're getting more melt at the top of the ice sheet during the summers.


That water is literally burrowing its way down a mile of ice and then reaching what had previously been the frozen interface between the ice sheet and the ground. When that water gets up between those two, it acts like a skate on a skating rink. There's less friction, which causes the ice to surge more quickly towards the coast. Our current models don't adequately deal with this. There's a big race among the glaciologists to sort that out and understand better what is going on.

DUJS: Do you think policy makers will ever reach a consensus on what path to pursue in dealing with climate change?

RV: The politics side is really messy, but I think that the extent to which you think climate change is real and how big of a problem it is, that assessment then drives how much you're willing to do about it. There's still not complete consensus on how much to spend, which areas to focus on, and where to start. The Dickey Center, the Arctic Institute, and the IGERT program are really trying to enhance the information that is available to people, and people will ultimately have to decide what they want to do about it.



ENVIRONMENT & CLIMATE CHANGE

Liquid Gold

Good to the Last Drop ELIZABETH ASHER ‘09

"The economy runs on oil," said Paul Nadeau, Ph.D., an industry geologist, "and oil may be running out" (1). Understanding oil's importance, its efficiency, predictions about peak oil production, and the inherent risks in finding oil is necessary to secure the energy resources needed to meet growing world demand.

Head Honcho

Fossil fuels make up 85 percent of the world's energy budget, and oil remains the world's largest source of energy. Each day, 85 million barrels of oil are burned to meet 40 percent of the energy demand (2). The second and third largest sources of energy are natural gas (24 percent) and coal (23 percent) (2). The Middle East alone holds the majority of known oil reserves. By 1980, after decades of foreign production beginning in 1933, Saudi Arabia had wrested control of its oil industry, dwarfing the production of Western oil moguls like Exxon-Mobil and Shell (3). Today foreign oil companies like Exxon-Mobil receive a finder's fee to produce oil in host countries such as Saudi Arabia. The move left the formerly dominant oil giants providing services to oil-rich nations.

Efficiency

It takes money to make money, and energy to produce energy. Easily transported and efficient, oil remains the world's most prized natural resource, although its efficiency depends on its type. Crude oil produces 10 times the energy it takes to extract it, less valuable heavy oil has a 5:1 energy return, and non-conventional hydrocarbons such as heavy oil sands and oil shale yield 2:1 and 1:1 energy returns, respectively (1).

Image retrieved from http://en.wikipedia.org/wiki/File:Hubbert_world_2004.png (Accessed 22 January 2009).

Oil production has already peaked in most non-OPEC countries.

Biofuels, a possible oil alternative, boast an 8:1 return, three times greater than the ratio of non-conventional hydrocarbons (4). Geologists in the business, however, maintain that if biofuels are to compete on the same scale as oil, the majority of America's arable land must be devoted to biofuel crops, not food crops (3). Other alternative energy sources, like geothermal and wind, produce electricity, which is not as portable as fossil fuels, making them less attractive options.
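These energy-return ratios translate directly into how much of the extracted energy is actually left over for use: the net fraction is 1 - 1/EROI, where EROI is the energy return on investment. The Python lines below are an illustration added here, not drawn from the article; they simply apply that relation to the ratios quoted above.

# Illustrative sketch (added): net energy delivered for the energy-return ratios quoted above.
# net fraction = 1 - 1/EROI, i.e., the share of gross energy not consumed by extraction.

ENERGY_RETURNS = {"crude oil": 10, "biofuels": 8, "heavy oil": 5, "oil sands": 2, "oil shale": 1}

for source, eroi in ENERGY_RETURNS.items():
    net_fraction = 1 - 1 / eroi
    print(f"{source:>9}: {eroi:2d}:1 return -> {net_fraction:.0%} of gross energy is net gain")

# An EROI of 1:1 (oil shale above) yields 0% net energy: extraction consumes as much as it produces,
# which is why such resources are marginal no matter how large they are.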

Draining the Last Drop

As Marion King Hubbert pointed out as early as the 1950s, the question is not when we will run out of oil, but when the global demand for oil will exceed supply. He noted that demand would increase exponentially, initially fitting the production curve for oil but soon far exceeding it. The trends have followed his predictions: since the 1970s, production rates have dwarfed discovery rates. Hubbert predicted that oil production would peak in 2020 (5). Thereafter, accommodating growing demand would require tapping other energy sources. Estimates of peak production range, however, from Hubbert's lower bound of 10 years from now to an optimistic 100 years in the future (assuming unconventional hydrocarbon reserves will be discovered in Canada and Latin America) (5). David Deming, Ph.D., an industry geologist, cites the difference between resources and reserves – reserves being the oil that is known and extractable using current technology – as the reason behind these vastly different estimates. He asserts that the focus on currently known reserves is shortsighted. He argues that reserves increase due to innovations in technology; given current technology, only 20 percent of the oil found in the ground is extracted (pumping CO2 or water into the reservoir rocks can displace the oil found in its pores and recover as much as 75 percent of the oil) (6). A 1 percent increase in recovery rates would augment known reserves by 1,500 million barrels of oil.

Risky Business

Many discovered oil fields are abandoned because they are too small to be economically viable or too expensive to drill because the oil is too deep.



Reservoir rock porosity – a measure of the amount of pore space in a rock that stores oil (the oil is contained in the pores, not in the rock itself) – plays a large role in the economic worth of an oil field because porosity determines the oil production rate. An oil supply with 20 percent porosity produces a profitable well and a short payout. Geologists assign a risk to each contributing factor of a find – the source rock, the seal rock, the reservoir rock, the trap, and the timing – and assess the overall chance of success as the product of the probabilities associated with each factor. John Carmony, an independent contractor and wildcatter (someone who drills for oil far from producing wells), reports only a 10 percent success rate drilling in West Texas. "But that's the best day of your life when you find oil, and the rig starts producing," Carmony said (3). The high-paying success of Carmony and many other geologists outweighs all of their failures. In addition to calculated risk, however, uncertainty about how much oil is in a field plagues the industry. "You never really know how much is there until you drain the last drop," Carmony said (3).
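The multiplication described above compounds quickly, which is why even careful wildcatters see low success rates. The brief Python sketch below is a hypothetical illustration added here; the individual probabilities are assumed for the example and are not figures from Carmony's wells.

# Illustrative sketch (added, assumed numbers): chance of success as a product of factor probabilities.
from math import prod

# Hypothetical probabilities that each element of the petroleum system works as hoped.
factors = {"source rock": 0.7, "seal rock": 0.7, "reservoir rock": 0.6, "trap": 0.6, "timing": 0.55}

chance_of_success = prod(factors.values())
print(f"combined chance of success: {chance_of_success:.2f}")  # about 0.10, roughly 1 well in 10

# Each factor individually looks fairly safe, yet the product is near the 10 percent
# success rate quoted above; de-risking several factors at once (for example with better
# seismic imaging) is what moves a prospect from marginal to drillable.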

Technology

In the wake of an energy crisis, the search for the earth's remaining large, economic oil fields continues. Today, a combination of geochemistry, geophysical methods, and geomorphology reduces the risk of revisiting and reworking oil-producing regions. Early 20th-century decisions of where to drill exploited structural geology, focusing on the basic idea that buoyant oil migrates to structural highs, or anticlines, where it is trapped. According to geologist Paul Nadeau, Ph.D., the Golden Zone, a temperature zone from 60 to 120 degrees Celsius, is home to the majority of the world's economic oil resources (1). Moreover, Nadeau asserts that only structural and stratigraphic traps – oil traps created by rock layering and rock type – within this temperature range are viable. Depending on the regional geothermal gradient, which describes the increase in temperature as a function of depth (typically 20-30 °C per kilometer), this temperature range corresponds to different depths and is on average about 2 km thick (1). Below 60 degrees, the microbial process of biodegradation turns crude oil into less valuable tar. Above 120 degrees, Nadeau argues, excessively high pore pressures equal to lithostatic pressures create rock failure and open up migration pathways for oil to move upwards into the Golden Zone. Within the Golden Zone, however, quartz cementation in sandstones and fibrous illite formation in clays provide excellent rock seals, trapping hydrocarbons where they can safely mature. Although the majority of oil reserves have been found in the Golden Zone, data from the Gulf of Mexico suggest that Nadeau's model may not explain the relationship between pressures in wells at depth. Nadeau believes that the key to finding oil is the geothermal temperature gradient, but Patrick Ruddy, Ph.D., a 1972 Dartmouth alumnus, relies on three-dimensional seismic imaging. Ruddy boasts a 70 percent rate of success applying three-dimensional seismic technology in Hungary, a region where only two-dimensional seismic had previously been used. Seismic technology distinguishes rock type by density at a resolution of 10 m, using p-wave (the primary and fastest seismic waves) refractions to map unconformities, rock formations, and major faults. Exploding dynamite or vibroseis trucks create p-waves that travel down into the earth. Changes in density refract the p-waves, and receivers stationed at set distances from the wave's source record their travel times. The distance of each receiver from the source is accounted for to create a subsurface map where 'distance' represents time. While shooting two-dimensional seismic requires piecing together a puzzle in order to gain an idea of the geology, three-dimensional seismic uses receivers radiating from the source to give geologists a more complete understanding of structure (6). A third exciting development in offshore oil fields is the application of geomorphology. Deep-sea turbidity sands have led to offshore oil discoveries in current and ancient depositional environments off the coasts of Brazil and West Africa. Typically, turbidity sands create reservoirs and stratigraphic traps in structural lows (originally overlooked in favor of structural highs) where reservoir sands can accumulate (6). Geomorphologists often work with geophysicists and seismic data to make sense of these subsurface depositional environments.
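Because the Golden Zone is defined by temperature, converting it into a drilling depth window is a one-line calculation once the local geothermal gradient is known. The Python sketch below is an added illustration (the 10 °C surface temperature is an assumed value, not a figure from the article); it shows how the zone spans roughly 2-3 km of section for gradients in the 20-30 °C/km range, in line with the average thickness of about 2 km cited above.

# Illustrative sketch (added): converting the 60-120 degree C "Golden Zone" into a depth window.
# Assumption for the example: a mean surface temperature of 10 degrees C.

SURFACE_TEMP_C = 10.0
ZONE_TOP_C, ZONE_BASE_C = 60.0, 120.0

def golden_zone_depths(gradient_c_per_km):
    """Return (top, base) depths in km for a given geothermal gradient in degrees C per km."""
    top = (ZONE_TOP_C - SURFACE_TEMP_C) / gradient_c_per_km
    base = (ZONE_BASE_C - SURFACE_TEMP_C) / gradient_c_per_km
    return top, base

for gradient in (20.0, 25.0, 30.0):
    top, base = golden_zone_depths(gradient)
    print(f"{gradient:.0f} C/km: Golden Zone from {top:.1f} to {base:.1f} km ({base - top:.1f} km thick)")

# The thickness is simply 60 / gradient, giving 3.0, 2.4, and 2.0 km for these gradients;
# hotter basins put the zone shallower and thinner, cooler basins deeper and thicker.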

Remarks

Applying new technology in a region can decrease the inherent risk of petroleum geology, which most often includes drilling dry holes or uneconomic wells and the potential danger to expensive equipment and workers when working deep reservoirs. The industry, however, may require constant innovation for the rate of discovery to equal the rate of production. Equally important, the sprint to the finish may leave the United States in last place unless other energy resources are developed.

References
1. P. Nadeau, "The Golden Zone Distribution of Hydrocarbons in Sedimentary Basins: A Global View" (2008). Speech delivered at the Hanover Inn, Hanover, 23 Oct. 2008.
2. J. D. Edwards, AAPG Mem. 74, 21-34 (2001).
3. J. Carmony, "Failure and Success in the Oil Patch" (2008). Speech delivered at the Hanover Inn, Hanover, 30 Oct. 2008.
4. D. Blume, [Biofuel] David Blume Debunks Pimental (2007). Available at http://www.mail-archive.com/sustainableorgbiofuel@sustainablelists.org/msg71532.html (19 November 2008).
5. D. Deming, AAPG Mem. 74, 45-55 (2001).
6. P. Ruddy, "Geologic Exploration-Seismic Interpretation" (2008). Speech delivered at the Hanover Inn, Hanover, 6 Nov. 2008.



Environmental engineering

Thayer School of Engineering & Biofuels

The Future of Cellulosic Ethanol Technology MARIETTA SMITH '12

In 1908, Henry Ford labeled ethanol the fuel of the future. Ford designed his Model T to run on a combination of gasoline and alcohol, but cars soon traversed the nation on gasoline alone (1). A hundred years later, environmentally conscious scientists and policy-makers are turning back to ethanol and other biofuels to find solutions to America's costly dependence on oil. In fact, the United States Department of Energy's deadline for replacing thirty percent of current gasoline consumption with biofuels is 2030 (2). Meanwhile, professors at Dartmouth's Thayer School of Engineering are helping to define the future of energy for our nation, and their research reflects these contemporary concerns. Thayer School researchers, in conjunction with Lebanon's Mascoma Corporation, last November reported in the Proceedings of the National Academy of Sciences a method of metabolically engineering a thermophilic bacterium to produce ethanol at high yield (3).

Biofuels, Basically

Biofuels are the product of biomass, the term given to organic renewable material. Biomass includes crops, wood, manure, and some degradable garbage. It can be divided into two main groups: agriculture and waste. Whereas agricultural biomass may be used for either food or fuel, waste biomass is used only for the production of biofuels (4). Creating biofuels is a complex process. Before reaching the consumer as fuel, biomass must undergo a series of transformations, each with sub-steps. Once harvested, biomass is prepared as feedstock, converted to intermediate products, and converted once again to a final, energetic fuel. These energy products are then distributed accordingly (4). The integration of farming, engineering, technology, and economics determines the success of the process.

The main conversion techniques of biomass vary. The process of gasification occurs in a gasifier whose heat, steam, and controlled amount of oxygen decompose biomass into gaseous hydrogen, carbon monoxide, carbon dioxide, and other compounds. Pyrolysis, another technique, is gasification without the presence of oxygen (5). The method of starch and sugar fermentation relies on enzymes to decompose glucose-containing material in the presence of oxygen (6). Biomass containing lignin, cellulose, and/or hemicellulose may also undergo this starch and sugar fermentation after being pretreated to break it into its component sugars, in what is known as lignocellulosic biomass fermentation. In transesterification, an alcohol catalyst bonds to fatty acids found in greases, oils, and fat to reduce the viscosity, thereby producing a combustible form (4). Landfill gas collection captures naturally produced methane and carbon dioxide at waste disposal sites with a series of wells and vacuums (7). Multiple engineered techniques are used within each conversion method (4). Additionally, anaerobic digestion may be used for the conversion of biomass. During anaerobic digestion, bacteria in the absence of oxygen digest biomass and release gas (4). Psychrophilic, mesophilic, or thermophilic bacteria, which work at low, medium, and high temperatures, respectively, are used for this process. The work conducted by Mascoma Corporation cofounder and Thayer School of Engineering professor Lee Lynd et al. focuses on a particular technique using anaerobic digestion by thermophilic bacteria.

Image retrieved from http://en.wikipedia.org/wiki/File:Pg166_bioreactor.jpg (Accessed 21 January 2009).

Bioreactor used for cellulosic ethanol research.

More on Mascoma

Established in 2005 by professors Lee Lynd and Charles Wyman of Thayer School, Mascoma Corporation develops cellulosic techniques for the conversion of biomass to ethanol. Professor Lynd cites the creation of Mascoma as a fusion of "science and technology from the bottom-up" with "a top-down goal of worldly contribution" (8). Mascoma's solutions call for "a complete rethinking of the way in which we fuel our economy" (9). Currently, Mascoma has 115 employees, over half of whom have Ph.D.s. Receiving its funding mainly from state and national grants, Mascoma dedicates roughly 70 percent of its efforts toward research. Although corporate headquarters are in Boston, MA, the Research and Development Lab is located in Lebanon, NH, which allows for conjunctive work with Dartmouth (9). Mascoma strives to execute a "strategy of technology discovery, development, and deployment" (10). By creating a network with research institutions and innovative corporations, Mascoma hopes to establish a collaborative effort. Its work with Dartmouth on thermophilic bacteria echoes these goals.

The Study

Within the PNAS study, researchers from the Thayer School of Engineering, the Department of Biological Sciences, and Mascoma Corporation engineered Thermoanaerobacterium saccharolyticum to produce ethanol at a high yield. As a thermophilic saccharolytic anaerobe, T. saccharolyticum normally produces organic acids and ethanol; within this study, however, knockouts of the genes for acetate kinase (ack-), phosphate acetyltransferase (pta-), and L-lactate dehydrogenase (L-ldh-) led to a strain, ALK2, which produces ethanol as the only detectable organic product. These genes were selected because of their involvement in organic acid formation. Ethanol fermentation in ALK2 differs from that of other microbes with homoethanol fermentation because, while using pyruvate:ferredoxin oxidoreductase, the electrons are transferred along a new pathway, from ferredoxin to NAD(P). Furthermore, although previously developed mesophilic strains show a preference toward glucose consumption, ALK2 uses xylose and glucose simultaneously. When compared to the wild type, ALK2 showed slight differences that were easily accounted for based on the techniques used. At 37 g/liter, this engineered strain's maximum ethanol titer is the highest reported for a thermophilic anaerobe (11). Before developing ALK2, the researchers analyzed the fermentation products in xylose-grown cultures of T. saccharolyticum knockout mutants with the L-ldh-, ack- pta-, and ack- pta- L-ldh- (strain ALK1) genotypes. Knockout plasmids pSGD9 and pSGD8E were used to target ack- pta- and L-ldh-, respectively. All of these mutants yielded an increase in ethanol, with strain ALK1 yielding ethanol as its only product.

The L-ldh- mutant did not produce lactic acid, and the ack- pta- mutant produced less hydrogen and did not produce acetic acid (11). Cultivating ALK1 in continuous culture for approximately 3,000 hours with progressively higher feed xylose concentrations produced strain ALK2. As previously mentioned, this strain exhibited a greater capacity for xylose consumption and produced a mean ethanol yield of 0.47 g of ethanol per g of xylose. This yield did not decrease in continuous culture without antibiotic selection over hundreds of generations (11). Previously engineered methods for creating biofuels using anaerobic digestion had revolved around mesophilic bacteria. Although these methods increase the ethanol yield, they rely upon "costly cellulose enzymes" (3). The genetically engineered thermophilic bacteria can produce ethanol without the addition of enzymes, thereby reducing costs (10). These cost-effective efforts improve the chance of establishing a cellulosic biofuels industry. However, much work remains before the strain ALK2 can be incorporated industrially. Because these strains can withstand higher concentrations of ethanol before ceasing production, one objective is to reduce the difference between this maximum tolerated concentration and the maximum concentration of ethanol produced. Compared to the work done on other organisms, this goal is realistic (11).
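To put the 0.47 g/g figure in context, the theoretical maximum for xylose fermentation follows from the stoichiometry 3 C5H10O5 → 5 C2H5OH + 5 CO2, which caps the yield at roughly 0.51 g of ethanol per gram of xylose. The short Python sketch below is an added back-of-the-envelope check, not a calculation taken from the paper.

# Illustrative check (added): theoretical ethanol yield from xylose versus ALK2's reported 0.47 g/g.
# Stoichiometry: 3 xylose (C5H10O5) -> 5 ethanol (C2H5OH) + 5 CO2.

MW_XYLOSE = 150.13   # g/mol
MW_ETHANOL = 46.07   # g/mol

theoretical_yield = (5 * MW_ETHANOL) / (3 * MW_XYLOSE)   # ~0.511 g ethanol per g xylose
reported_yield = 0.47                                    # mean yield reported for strain ALK2

print(f"theoretical maximum: {theoretical_yield:.3f} g/g")
print(f"ALK2 reported yield: {reported_yield:.2f} g/g "
      f"({reported_yield / theoretical_yield:.0%} of theoretical)")

# The reported yield works out to roughly nine-tenths of the stoichiometric ceiling,
# which is why the strain is described as producing ethanol at high yield.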

The Defining Challenge of Our Time

Professor Lynd labeled the attainment of energy as "the defining challenge of our time" (8). Although biofuels provide an economically viable alternative to gasoline, the success of contemporary sustainability efforts relies on more than alternative fuels alone. Lynd asserts that "a sustainable world involves multiple complementary changes" (8). Thayer School has consistently supported the efforts of Lynd and others by choosing to emphasize energy and the environment. "[Thayer] is willing to let things start small," Lynd explains; "the institution understands it is not possible to always operate on a huge scale" (8).

A blend of technology and public policy is required to make the necessary systemic alterations for a sustainable world. Although the process will take time, work like that of Mascoma Corporation and Professor Lynd provides the base for these alterations. Multilateral integration will be necessary if we are one day to fulfill Ford's prophecy and live sustainably.

References
1. K. Addison, Ethanol Fuel: Journey to Forever (2008). Available at http://journeytoforever.org/ethanol.html (1 Nov 2008).
2. U.S. Department of Energy, Biofuels Initiative (2007). Available at http://www1.eere.energy.gov/biomass/biofuels_initiative.html (9 Nov 2008).
3. S. Knapp, Dartmouth Researchers Advance Cellulosic Ethanol Production (2008). Available at http://www.dartmouth.edu/~news/releases/2008/09/08.html (1 Nov 2008).
4. U.S. Environmental Protection Agency, Biomass Conversion: Emerging Technologies, Feedstocks, and Products (EPA Publication 600-R-07-144, 2007; http://www.epa.gov/sustainability/pdfs/Biomass%20Conversion.pdf).
5. U.S. Department of Energy, Biomass Gasification (2008). Available at http://www1.eere.energy.gov/hydrogenandfuelcells/production/biomass_gasification.html (4 Jan 2009).
6. Oregon Department of Energy, Biofuel Technologies (2006). Available at http://www.oregon.gov/energy/renew/biomass/biofuels.shmtl (4 Jan 2009).
7. U.S. Environmental Protection Agency, Landfill Methane Outreach Program (2009). Available at http://epa.gov/lmop/overview.htm (4 Jan 2009).
8. L. Lynd, personal interview, 31 October 2008.
9. L. Lynd, "Biofuel Production" (2008). Presentation delivered at DUJS Paper Party, 12 November 2008.
10. Mascoma Corporation (2008). Available at http://www.mascoma.com/index.html (27 Oct. 2008).
11. A. J. Shaw, K. Podkaminer, S. Desai, J. Bardsley, S. Rogers et al., Proc. Natl. Acad. Sci. U.S.A. 105, 13769-13774 (2008).



Environmental policy

Sustainability at Dartmouth

Environmentalism in Deserto LAURA CALVO '11

Increasing awareness of the effect of greenhouse gases on the world climate, the depletion of resources, and the declining state of our overall environment has made sustainability a major focus of many academic institutions. The Green Report Card, provided by an independent group, assesses the sustainability efforts of hundreds of campuses and grades them across nine categories: policy and practices of sustainability in administration, climate change and energy, food and recycling, green building, student involvement, transportation, endowment transparency, investment priorities, and shareholder engagement. Dartmouth College is a private institution with a population of 5,704 students on a 269-acre campus with high energy needs, funded by a $3,702 million endowment for the 2008 academic year (1). It is important to understand our energy report card and what is being done to reduce our carbon footprint and increase sustainability here at the College.

How Much Do We Consume & Where Does It Come From?

Two main systems provide the entire campus with the energy it needs: the steam distribution system and the electrical distribution system. In 1898, the College built a power plant that operates 24 hours a day, 365 days a year. Over 100 campus buildings are serviced by the power plant, which was originally constructed to mitigate the cost of heating buildings in the cold climate of New Hampshire.

Image courtesy of the Dartmouth Sustainability Initiative.

Dartmouth’s total energy consumption from 1998 to 2007.

The steam distribution system, working at 20 psig, consists of a central 8-foot by 8-foot concrete tunnel system that holds all the necessary components to provide the cabling for the computer network, telephones, fire alarms, and power outlets for the College's auxiliary buildings, in addition to carrying the steam for the heating and cooling systems for campus. The electrical distribution system, a 4160-volt AC system that runs from beneath the Green in the center of campus, provides power to the main campus buildings. National Grid is the supplier of about 55 percent of the College's yearly energy needs, routed from two 13.2/4.16 kV substations in Hanover (2). Every year, Dartmouth releases approximately 87,000 metric tons of carbon equivalents (MTCE) in greenhouse gases, based on a 2005 estimate. In 2007, total energy use was measured at 968.35 x 10^9 BTU, with #6 fuel oil contributing the most at 78 percent, non-#6 fuel at 11 percent, purchased electricity at 10 percent, and gasoline at 1 percent. Number 6 fuel oil is by far the most consumed fuel on campus and the main energy source for the College's power plant, which co-generates electricity and steam from the burning of oil. Total energy use per degree day, which accounts for year-to-year differences in heating and cooling needs caused by temperature variation, was measured at 144.94 x 10^6 BTU for 2007. There was a 4.23 percent increase in total energy consumption from 2006 to 2007, although energy use per degree day decreased by 0.12 percent, indicating the heightened energy need was a result of more temperature extremes throughout the year. Electricity consumption also increased, from 62,685 MWh in 2007 to 65,383 MWh in 2008, as did gasoline consumption, which increased from 82,802 gallons in 2006 to 101,611 gallons in 2007, a whopping 22.72 percent increase versus the 16.16 percent decrease in gasoline consumption from 2005 to 2006. Despite the College power plant's capacity to generate electricity locally, the consumption of purchased electricity has increased at a rate much faster than the consumption of generated electricity, with more than two-thirds of the total electricity consumption in 2008 coming from purchased electricity (3).
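The percentage figures above follow directly from the raw numbers, and the degree-day normalization is simply total energy divided by the number of degree days. The Python lines below are an added sanity check on that arithmetic; the implied degree-day count is an inference from the two reported totals, not a figure stated in the article.

# Added sanity check on the consumption figures quoted above.

gasoline_2006, gasoline_2007 = 82_802, 101_611             # gallons
pct_change = (gasoline_2007 - gasoline_2006) / gasoline_2006 * 100
print(f"gasoline change 2006->2007: {pct_change:.2f}%")    # ~22.72%, matching the text

total_energy_2007 = 968.35e9        # BTU
energy_per_degree_day = 144.94e6    # BTU per degree day
implied_degree_days = total_energy_2007 / energy_per_degree_day
print(f"implied degree days in 2007: {implied_degree_days:,.0f}")  # roughly 6,700 (inferred, not stated)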

Where Does All The Trash Go?

In the spring of 1988, an Environmental Studies 50 class led an effort to explore more sustainable methods of handling solid waste on campus. Since then, several projects have been established to reduce solid waste generation and to provide more productive ways to dispose of solid waste through reuse and recycling (2). The Dartmouth Recycles program, an effort to divert as much of the College's waste as possible from entering the local landfill, was established in July of 1988. A report entitled "Reduce, Recycle, and Educate: A Solid Waste Management Program for Dartmouth College," based on research from early 1988, stated that 52 percent of the College's waste could be recycled. The financial burden of increasing landfill fees was minimized by reducing waste through the Enviromug program and by diverting newspaper waste to be recycled into animal bedding, costing $45 per ton instead of the $60 per ton landfill cost. Since these efforts began, the diversion rate of waste from the landfill has been around 20-35 percent each year. In 1990, the rate was at 20 percent, and in 2004 the diversion rate was at 36 percent (see graph) (2). In April 2002, a composting program began at Fullington Farm, led by the College's Facilities Operations and Management (FO&M).

Image courtesy of Dartmouth Facilities Operations & Management.

Dartmouth’s recycling record from 1990 to 2003.

Through this program, every year from April to November, approximately 16,000 cubic yards of material are collected to yield compost that has provided high-quality fertilizer for campus construction projects or been sold to local landscape companies. Of this compost, approximately 2,400 cubic yards consists of sludge, 4,000 cubic yards of food waste, and 9,600 cubic yards of yard waste, paper waste, and sawdust. In the first spring of the program's establishment, approximately 200-300 pounds of vegetable scraps from Dartmouth's dining services were delivered to Fullington Farm daily, and 50,000 pounds of waste were composted in total (2). In 2004, Dartmouth College competed among seventeen top university recycling programs in the United States in an event called Recycle Mania. After 10 weeks of a campus-wide effort to increase recycling awareness and participation, the College ranked second in the competition, totaling 56.22 pounds of recycled material per student living on campus and falling behind only Miami University in Ohio, which totaled 58.28 lbs/student (see graph) (2).

What is Being Done to Increase Sustainability? On September 29, 2008, President James Wright announced a new set of initiatives to set a higher standard for sustainability at Dartmouth College. The main focus of this sustainability effort is to contribute to the global task of limiting our impact on climate change by reducing campus greenhouse gas emissions. President Wright’s plan sets milestones for reducing our emissions to sustainable levels through methods determined by the College’s Energy

Task Force. These recommendations would lead to a 20 percent reduction in emissions from 2005 levels by 2015, a 25 percent reduction by 2020, and a 30 percent reduction by 2030. In order to reach these goals, a $12.5 million investment, approved by the Dartmouth College Trustees, will be applied to energy-efficient upgrades throughout campus, such as super-insulation and the replacement of obsolete equipment and technology. Many of these improvements will be focused on the 20 percent of campus buildings that consume 80 percent of the total campus energy. (4) However, with the recent economic crisis in the last quarter of the fiscal year 2008, the College will suffer a $40 million budget cut over the next two years. Although the administration has affirmed that tenured and tenure-track faculty and financial aid will not be affected, all other areas of the college's activities are susceptible to significant changes, the specifics of which will be decided by February 2009. (5) The administration has also implemented other programs to reduce Dartmouth's ecological footprint. Sustainable Dining has reduced disposable items and increased composting and recycling efforts in campus dining facilities. Also, high-performance building design and construction are being applied to new building projects, so that the best sustainable technologies are used to increase the resource efficiency of these projects. (4)
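To make the reduction milestones described above concrete, the snippet below converts them into target emission levels. The 2005 baseline used here is a placeholder value for illustration; the article does not give Dartmouth's actual 2005 emissions figure.

```python
# Turn the percentage milestones into target emission levels.
# BASELINE_2005 is a hypothetical placeholder, not a real campus figure.

BASELINE_2005 = 100_000.0  # hypothetical metric tons CO2-equivalent in 2005

targets = {2015: 0.20, 2020: 0.25, 2030: 0.30}  # fractional reductions from 2005

for year, cut in targets.items():
    print(f"{year}: at most {BASELINE_2005 * (1 - cut):,.0f} tons ({cut:.0%} below 2005)")
```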


The College's Office of Sustainability is the main impetus behind sustainability initiatives on campus. Programs such as the Big Green Bus and Carry Your Trash Week raise campus awareness about environmental conservation. Sustainable Move In/Out, a program instituted in 2006, collects unwanted appliances and clothing from students and resells them the next term to raise money for sustainability causes, such as the Upper Valley United Way's WARM fund, while the Carpool Facebook Application and Zimride programs provide more environmentally friendly options for student transportation. Student-driven organizations such as Green Greeks, Ecovores, and the Environmental Conservation Organization (ECO) aim to increase sustainable practices throughout campus, from dorms to Greek organizations. September 2008 saw the opening of the Sustainable Living Center, a residence facility in North Hall housing eighteen students determined to live a more sustainable lifestyle and to promote sustainability education throughout campus. (4)

How Does Our Progress Compare to Other Academic Institutions? In the Sustainable Living Center's first term of operation, North Hall used 58 percent less electricity in Fall 2008 than its previous residents had used during the fall term (see graph). This is a significant decrease, driven mostly by changes in student habits, such as using a drying rack instead of a dryer and turning off lights when they are not needed. Furthermore, in the 2009 College Sustainability Report Card, Dartmouth College was ranked one of the top 15 institutions in the country, receiving an "A-" on the green report card. In addition to being an overall college sustainability leader, Dartmouth is recognized as an endowment sustainability leader, receiving straight "A" grades for endowment transparency, investment priorities, and shareholder engagement with regard to environmental sustainability policies. Although Dartmouth is one of five Ivy League institutions to earn top marks on this report, many other institutions fare better in other rankings. (1)


Image courtesy of Dartmouth Facilities Operations & Management.

Comparison of college recycling totals, measured in pounds per student, during the annual RecycleMania event.

The Future of Sustainability With the looming budget cuts that will drastically affect the college's spending practices, it seems that steps towards fulfilling President Wright's sustainability initiative may be delayed or scrapped for the near future. For now, new construction plans have been put on hold, and other planned projects will certainly be held back as well. New renovations and investments, such as the purchase of more efficient equipment or the revamping of building insulation, may be put on hold for now, yet grassroots efforts towards sustainability will surely continue. The Sustainable Living Center will be entering its second term of operation, setting an example for what the rest of the campus community can be doing on a daily basis to reduce our energy needs.

References
1. The College Sustainability Report Card, Dartmouth College (2008). Available at http://www.greenreportcard.org/report-card-2009/schools/dartmouth-college (18 Dec. 2008).
2. Facilities Operations and Management, Dartmouth College (2008). Available at http://www.dartmouth.edu/~fom/ (18 Dec. 2008).
3. Dartmouth College Energy Task Force, Office of Planning, Design, & Construction (2008). Available at http://www.dartmouth.edu/~opdc/energy/index.html (18 Dec. 2008).
4. Dartmouth Sustainability Initiative (2008). Available at http://www.dartmouth.edu/~sustain/ (18 Dec. 2008).
5. D. Klenotic, Welcome to Dartmouth Life: Alumni Council Gets into Thick of It in 197th Session (Dartmouth College Office of Alumni Relations) (2008). Available at http://alumni.dartmouth.edu/news.aspx?id=478 (18 Dec. 2008).



Environmental Engineering

Turning to Nanotechnology for Pollution Control: Applications of Nanoparticles
Jingna Zhao '12

During the last twenty years, scientists have been looking towards nanotechnology for the answer to problems in medicine, computer science, ecology, and even sports. In particular, new and better techniques for pollution control are emerging as nanoparticles push the limits and capabilities of technology. Nanoparticles, defined as particles 1-100 nanometers in length (one nanometer being one billionth of a meter), hold enormous potential for the future of science. Their small size opens up possibilities for targeting very specific points, such as diseased cells in a body, without affecting healthy cells. In addition, elemental properties can change rather dramatically at the nanometer range: some materials become better at conducting heat or reflecting light, some change color, some get stronger, and some change or develop magnetic properties (1). Certain plastics at the nanometer range have the strength of steel. Tennis racquet manufacturers already utilize nano-silicon dioxide crystals to improve equipment performance. The super-strength and other special properties emerge because microscale flaws between molecules are absent at the nanoscale (1). Nanoparticles without these flaws allow materials to reach the maximum strength of their chemical bonds. These special properties and the large surface area of nanoparticles prove valuable for engineering effective energy management and pollution control techniques. For example, if super-strength plastics could replace metal in cars, trucks, planes, and other heavy machinery, there would be enormous energy savings and a consequent reduction in pollution. Batteries are also being improved using nanoscale materials that allow them to deliver more power faster. Nano-materials that absorb enough light for conversion into electrical energy have also been used to recharge batteries.

Other environmentally friendly technologies include energy-efficient, non-thermal white LEDs and SolarStucco, a self-cleaning coating that decomposes organic pollutants using photocatalysts.

Nanotechnology and Pollution Control Pollution results from resource production and consumption, which in their current state are very wasteful. Most waste cannot be reintegrated into the environment effectively or cheaply. Thus, processes like petroleum and coal extraction, transportation, and consumption continue to result in photochemical smog, acid-mine drainage, oil slicks, acid rain, and fly ash. In his paper for the Foresight Institute, Stephen Gillett identifies the "Promethean Paradigm": an inefficient dependence on heat for energy, since burning fuel discards much of its free energy during the conversion of chemical energy into heat and then into mechanical energy. Biological systems, on the other hand, efficiently oxidize fuel through molecular-scale mechanisms without extracting the chemical energy through thermalization (1). Overcoming the Promethean Paradigm requires controlling reactions at the nanoscale. Thus, nanofabrication holds much potential for effective pollution control, but it currently faces many problems that prevent it from mass commercialization, particularly its high cost. The basic concept of pollution control on a molecular level is separating specific elements and molecules from a mixture of atoms and molecules (1). The current method for separating atoms is thermal partitioning, which uses heat to force phase changes. However, the preparation of reagents and the procedure itself are costly and inefficient.
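One standard way to see why routing chemical energy through heat is intrinsically wasteful, offered here as an illustrative aside rather than as part of Gillett's own argument, is to compare the Carnot bound on any heat engine with the free-energy limit available to a direct molecular-scale conversion.

```latex
% Any heat engine is bounded by the Carnot limit, while a direct
% (e.g., electrochemical or enzymatic) conversion is limited only by the
% reaction's free-energy-to-enthalpy ratio:
\[
  \eta_{\text{heat engine}} \;\le\; 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}},
  \qquad
  \eta_{\text{direct}} \;\le\; \frac{\Delta G}{\Delta H}.
\]
% For illustration, T_hot = 800 K and T_cold = 300 K give a ceiling of
% 1 - 300/800 = 0.625 before any real-world losses, whereas for many fuels
% the ratio Delta G / Delta H is close to 1.
```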

Current methods of energy extraction utilize combustion to create heat energy, most of which is wasted and results in unwanted byproducts that require purification and proper disposal. Theoretically, these high costs could be reduced with the nanostructuring of highly specific catalysts that would be much more efficient (2). Unfortunately, we have yet to find an optimal way of obtaining the particles in workable form. Current means are essentially "shake and bake" methods called wet-chemical synthesis, which allow very limited control over the final product and may still result in unwanted byproducts (1). Although there are still many obstacles to overcome, the world is starting to recognize the potential of nanotechnology. In 2007, the Brian Mercer Award for Innovation from the Royal Society was awarded to researchers at the University of Bath for their work in developing nano-porous fibers that trap and remove carbon dioxide along with other pollutants and recycle them back into the production process. These fibers can recycle many forms of gases depending on their composition and the way they are spun (3). The high surface area characteristic of nano-sized particles makes such technology particularly efficient for applications with space constraints. Early tests have shown that this process will require only a small percentage of the energy used by current technology. Along with the award, the United Kingdom also granted £185,000 ($280,000) for the further development and commercialization of this technology. The hope is to eventually utilize it for pollution control by removing benzene from petrol vapor.

Air Pollution Air pollution can be remediated using nanotechnology in several ways. One is through the use of nano-catalysts with increased surface area for gaseous reactions. Catalysts work by speeding up chemical reactions that transform harmful vapors from cars and industrial plants into harmless gases. Catalysts


Image courtesy of Lawrence Livermore National Laboratory (Scott Dougherty).

Artist’s rendering of methane molecules flowing through a carbon nanotube.

currently in use include a nanofiber catalyst made of manganese oxide that removes volatile organic compounds from industrial smokestacks (4). Other methods are still in development. Another approach uses nanostructured membranes that have pores small enough to separate methane or carbon dioxide from exhaust (5). John Zhu of the University of Queensland is researching carbon nanotubes (CNT) for trapping greenhouse gas emissions caused by coal mining and power generation. CNT can trap gases up to a hundred times faster than other methods, allowing integration into large-scale industrial plants and power stations. This new technology both processes and separates large volumes of gas effectively, unlike conventional membranes, which can do only one or the other well. For his work, Zhu received an $85,000 Foundation Research Excellence Award.

The substances filtered out still present a problem for disposal, as removing waste from the air only to return it to the ground leaves no net benefit. In 2006, Japanese researchers found a way to collect the soot filtered out of diesel fuel emissions and recycle it into manufacturing material for CNT (6). The diesel soot is used to synthesize the single-walled CNT filter through laser vaporization, so that, essentially, the filtered waste becomes the filter.

Water Pollution As with air pollution, harmful pollutants in water can be converted into harmless chemicals through chemical reactions. Trichloroethene, a dangerous pollutant commonly found in industrial wastewater, can be broken down using nanoparticle catalysts. Studies have shown that these "materials should be highly suitable as hydrodehalogenation

and reduction catalysts for the remediation of various organic and inorganic groundwater contaminants" (7). Nanotechnology eases the water cleansing process because inserting nanoparticles into underground water sources is cheaper and more efficient than pumping the water out for treatment (8). The deionization method of using nano-sized fibers as electrodes is not only cheaper but also more energy efficient (8). Traditional water filtering systems use semi-permeable membranes for electrodialysis or reverse osmosis. Decreasing the pore size of the membrane to the nanometer range would increase the selectivity of the molecules allowed to pass through. Membranes that can even filter out viruses are now available (9). Also widely used in separation, purification, and decontamination processes are ion exchange resins, which are organic polymer substrates

Dartmouth Undergraduate Journal of Science


with nano-sized pores on the surface where ions are trapped and exchanged for other ions (10). Ion exchange resins are mostly used for water softening and water purification. In water, ions of poisonous heavy metals are exchanged for sodium or potassium ions. However, ion exchange resins are easily damaged or contaminated by iron, organic matter, bacteria, and chlorine.
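The exchange chemistry described above can be written as a generic reaction; the following is a textbook example for a sodium-form sulfonate resin (R denotes the polymer backbone), not an equation taken from the cited sources.

```latex
% Generic cation-exchange reaction: a divalent heavy-metal ion M^{2+}
% displaces two sodium ions bound to the resin's sulfonate groups.
\[
  2\,\mathrm{R{-}SO_3^-\,Na^+} + \mathrm{M^{2+}}
  \;\rightleftharpoons\;
  (\mathrm{R{-}SO_3^-})_2\mathrm{M^{2+}} + 2\,\mathrm{Na^+}
\]
% The reaction is reversible, which is why resins can be regenerated with
% concentrated brine, and why fouling by iron or organics degrades capacity.
```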

Cleaning Up Oil Spills According to the U.S. Environmental Protection Agency (EPA), about 14,000 oil spills are reported each year (11). Dispersing agents, gelling agents, and biological agents are most commonly used for cleaning up oil spills. However, none of these methods can recover the oil lost. Recently developed nanowires made of potassium manganese oxide can clean up oil and other organic pollutants while making oil recovery possible (12). These nanowires form a mesh that absorbs up to twenty times its weight in hydrophobic liquids while rejecting water with its water-repelling coating. Since the potassium manganese oxide is very stable even at high temperatures, the oil can be boiled off the nanowires, and both the oil and the nanowires can then be reused (12). In 2005, Hurricane Katrina damaged or destroyed more than thirty oil platforms and nine refineries (13). The Interface Science Corporation successfully launched a new oil remediation and recovery application, which used the water-repelling nanowires to clean up the oil spilled by the damaged oil platforms and refineries (14).

Concerns In 2009, NanoImpactNet, the European Network on Health and Environmental Nanomaterials, will hold its first conference to study the impact of nanomaterials on health and the environment. The small size of nanoparticles warrants investigation of the consequences of inhaling and absorbing these particles and of their effects inside the body, as they are small enough to penetrate the skin and diffuse through cell membranes. The special properties of nanoparticles inside the body are unclear and unpredictable.

Many are also worried about the effects of nanoparticles on the environment. New branches of science such as eco-nanotoxicology have arisen to study the movement of nanomaterials through the biosphere. We do not yet know how much will be absorbed by the soil, air, or water, or how severely the widespread presence of nanoparticles in the environment will impact ecosystems. To address these concerns, NanoImpactNet aims to set up regulations and legislation to ensure that nanoparticles, with so much potential for cleaning up pollution, will not become a new form of pollution themselves.

Conclusion Nanotechnology's potential and promise have steadily been growing throughout the years. The world is quickly accepting and adapting to this new addition to the scientific toolbox. Although there are many obstacles to overcome in implementing this technology for common usage, science is constantly refining, developing, and making breakthroughs.

References
1. S. L. Gillett, Nanotechnology: Clean Energy and Resources for the Future (2002). Available at http://www.foresight.org/impact/whitepaper_illos_rev3.PDF (5 January 2009).
2. S. L. Gillett, Nanotechnology 7, 177-182 (1996).
3. A. McLaughlin, Pollution control technology wins Royal Society award (20 February 2007). Available at http://www.bath.ac.uk/news/2007/2/20/merceraward.html (29 November 2008).
4. Air Pollution and Nanotechnology. Available at http://www.understandingnano.com/air.html (18 December 2008).
5. J. Zhu, Dr John Zhu, School of Engineering (19 September 2007). Available at http://www.uq.edu.au/research/index.html?page=68941&pid=68941 (15 December 2008).
6. T. Uchida et al., Japanese Journal of Applied Physics 45, 8027-8029 (2006).
7. M. O. Nutt, J. B. Hughes, M. S. Wong, Environmental Science & Technology 39, 1346-1353 (2005).
8. Water Pollution and Nanotechnology. Available at http://www.understandingnano.com/water.html (17 December 2008).
9. F. Tepper, L. Kaledin, Virus and Protein Separation Using Nano Alumina Fiber Media. Available at http://www.argonide.com/Paper%20PREP%2007-final.pdf (5 January 2009).
10. D. Alchin, Ion Exchange Resins. Available at http://www.nzic.org.nz/ChemProcesses/water/13D.pdf (18 December 2008).
11. U.S. Environmental Protection Agency, Response to Oil Spills (18 September 2008). Available at http://www.epa.gov/emergencies/content/learning/response.htm (31 December 2008).
12. J. Yuan et al., Nature Nanotechnology 3, 332-336 (2008).
13. United States Department of Commerce, Hurricane Katrina Service Assessment Report (June 2006). Available at http://www.weather.gov/os/assessments/pdfs/Katrina.pdf (17 December 2008).
14. B. Lamba, Nanotechnology for recovery and reuse of spilled oil (9 September 2005). Available at http://www.physorg.com/news6358.html (10 December 2008).


Marine Biology

A Cacophony in the Deep Blue Sea: How Ocean Acidification May Be Deafening Whales
Yifei Wang '12

Rising atmospheric concentrations of carbon dioxide caused by increasing human activities have posed a threat to the balance of the natural carbon cycle. It is well known that the increasing atmospheric concentration of greenhouse gases endangers our living environment through global warming. Inevitably, as atmospheric CO2 rises, more is absorbed by the oceans, and seawater becomes progressively more acidic through the formation of carbonic acid, H2CO3. Burning of fossil fuels, deforestation, industrialization, cement production, and other land-use changes all expedite this process. Excessive uptake of anthropogenic carbon dioxide from the atmosphere induces an increase in the oceanic concentration of carbonic acid. This in turn brings about an accumulation of hydrogen ions, a decrease in the pH of the oceans, and a reduction in the number of carbonate ions (CO3^2-) available, a phenomenon known as ocean acidification (1). Evidence suggests that these changes will have significant consequences for marine taxa, particularly those that build skeletons, shells, and tests of biogenic calcium carbonate (2). Under normal conditions, calcite and aragonite are stable in surface waters since the carbonate ion is at supersaturating concentrations. However, as ocean pH falls, so does the concentration of this ion, and when carbonate becomes undersaturated, structures made of calcium carbonate are vulnerable to dissolution. Scientists have determined that the rate of current and projected increases in atmospheric CO2 is approximately 100 times faster than has occurred in at least 650,000 years. Evidence from the species of marine taxa tested to date indicates that the calcification rates of tropical reef-building corals will be reduced by 20-60 percent at double pre-industrial CO2 concentrations (2).
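The chemistry summarized above can be written out explicitly. The equilibria below are the standard seawater carbonate-system reactions, included for illustration rather than drawn from the cited studies.

```latex
% Dissolution of CO2 and the carbonate equilibria behind ocean acidification:
\[
  \mathrm{CO_2 + H_2O \;\rightleftharpoons\; H_2CO_3
  \;\rightleftharpoons\; H^+ + HCO_3^-
  \;\rightleftharpoons\; 2\,H^+ + CO_3^{2-}}
\]
% Added CO2 pushes these equilibria toward more H+ (lower pH); the extra H+
% also consumes carbonate, lowering the CO3^{2-} concentration that
% calcifiers need to build their CaCO3 skeletons and shells:
\[
  \mathrm{Ca^{2+} + CO_3^{2-} \;\rightleftharpoons\; CaCO_3\,(s)}
\]
```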

Image courtesy of Ben Halpern, National Center for Ecological Analysis and Synthesis.

Mollweide projection of changes in sea surface pH from the pre-industrial era to the 1990s.

Since marine calcifiers such as Acropora eurystoma, Porites lutea, Galaxea fascicularis, and Turbinaria reniformis, and calcifying macroalgae such as coralline algae and Halimeda, are all sensitive to changes in carbonate saturation state, it has become increasingly difficult for marine calcifying organisms to form biogenic calcium carbonate. Recent research suggests that ocean acidification is the primary inducing agent in the extinction of numerous reef species and in the decline of reef community diversity (1, 2). Even more destructive to the ecosystem are the heavy impacts on higher trophic-level organisms that rely on these calcifiers to survive. For example, crustose coralline algae (CCA) are a critical player in the ecology of coral-reef systems, as they provide the "cement" that helps stabilize reefs, make significant sediment contributions to these systems, and are important food sources for sea urchins, parrotfish, and several species of mollusks. Experiments exposing CCA to higher concentrations of CO2 indicate up to a 40 percent reduction in growth rates,

a 78 percent decrease in recruitment, a 92 percent reduction in total area covered by CCA, and a 52 percent increase in non-calcifying algae (2). This will clearly affect the living environment of the organisms whose survival relies on them. Excessive concentrations of greenhouse gases also induce other side effects, such as climate change and increases in water temperature, that indirectly expedite this destructive process. Among the numerous detrimental side effects of ocean acidification, the hardest to imagine may be the deafening of whales through changes in the ambient noisiness of the ocean. Lately, this seemingly bizarre theory has been backed up by scientific work. Two recently published articles offer convincing, although somewhat convoluted, explanations for this trickle-down effect. Peter Brewer and his team at the Monterey Bay Aquarium Research Institute in California published an article in Geophysical Research Letters suggesting that the ocean has become noisier as a result of increased acidity (3). In




Image courtesy of the National Oceanic and Atmospheric Administration.

Humpback whale communication may be affected by increasing ocean acidity.

the same journal, Alexander Pazur and Michael Winklhofer stated that geomagnetic field variations could further amplify sounds traveling through the ocean (4). But how could ocean acidification affect the hearing of whales? One possibility is increased sound transmission through the acidified water (3). Brewer and his colleagues proposed that increased concentrations of CO2 invoke an imbalance in the dissolved ions that absorb vibrations at acoustic frequencies, resulting in a significant reduction in ocean sound absorption at frequencies below 10 kHz, the range in which most whales communicate. This consequently amplifies the ambient low-frequency ocean noise level. In his paper, Brewer also pointed out another less evident but equally crucial factor: increased heat flux induced by growing atmospheric CO2 warms the ocean, further contributing to decreased sound absorption in the lower frequency range. CO2 is not the only greenhouse gas contributing to ocean noise levels. Another factor worth noticing is the deposition of sulfur and nitrogen from the combustion of fossil fuels. These atmospheric additions of strong acids change ocean alkalinity and pH. Solid statistics back up the theory: the ocean absorbs at least 12 percent less sound now than it did in pre-industrial times. More frightening still, calculations suggest that this number

might rise to 70 percent in 2050. The increasing noisiness of the ocean poses an impending threat to whales (3). Scary as it sounds, this potential ecological hazard cannot be attributed solely to human activities. More surprisingly, Pazur has also suggested that there are strong correlations between the geomagnetic field and climate parameters. According to Pazur, a reduction in magnetic field strength releases up to ten times more carbon dioxide from the surface of the ocean (4). Furthermore, rotational acceleration or deceleration due to waxing or waning ice sheets might trigger instabilities in the geodynamo and promote geomagnetic events of large magnitude, which further affect CO2 solubility and absorption rates (4). The magnetic field effect on gas solubility presents a physical link between the geomagnetic field and climate. Small-scale laboratory experiments indicate lower solubility of CO2 in seawater under reduced magnetic field intensity. The extra CO2 not dissolved due to reduced solubility would not only add to the greenhouse effect but also acidify the water and endanger the living environment of whales and other marine creatures. Although we may doubt the credibility of some of the explanations offered, the result is solid and observable. From this, we can see that the damage brought about by greenhouse gases is endless and reaches far beyond global warming. The possibilities for discovering and resolving these domino effects are endless.

References
1. J. M. Guinotte, V. J. Fabry, Ann. NY Acad. Sci. 1134, 320 (2008).
2. D. P. Manzello et al., Proc. Natl. Acad. Sci. 105, 10450 (2008).
3. K. C. Hester, E. T. Peltzer, W. J. Kirkwood, P. G. Brewer, Geophys. Res. Letters 35, L19601 (2008).
4. A. Pazur, M. Winklhofer, Geophys. Res. Letters 35, L16710 (2008).


Neurology

Pesticides On the Brain

The Environmental Factors Behind Parkinson's Disease
Sharat Raju '10

Parkinson's disease (PD) is characterized by the progressive degeneration of dopaminergic neurons within the substantia nigra, a region of the midbrain critical for motor planning and reward-seeking behavior. The substantia nigra interacts directly with the putamen and caudate nucleus via the release of dopamine. This release, in turn, activates a motor pathway within the basal ganglia involving regions of the globus pallidus, the subthalamic nucleus, the thalamus, and, eventually, motor regions of the cerebral cortex to facilitate normal movement. The many neuronal projections coursing through the basal ganglia can be classified into two primary categories: a direct pathway and an indirect pathway. Simply put, the direct pathway facilitates motor output while the indirect pathway inhibits movement. The dopaminergic projections from the substantia nigra excite the direct pathway and inhibit the indirect pathway, thus appropriately promoting normal movement. Loss of these dopaminergic neurons often results in a stronger indirect pathway and a weaker direct pathway, producing the tremors and rigidity characteristic of many Parkinson's patients. Despite its prevalence in the population, the underlying cause of PD is still under intensive study. Recent hypotheses have centered on the formation of "Lewy bodies" in the substantia nigra (3). In normal cells, modification of a protein with a ubiquitin "tag" earmarks the protein for degradation in the proteasome (3). Lewy bodies, which consist of the protein alpha-synuclein bound to ubiquitin, cannot be degraded in the proteasome and thus form dense aggregates (3). These protein aggregates are neurotoxic, resulting in the gradual loss of dopaminergic neurons in the substantia nigra (3). These days, however, medical research is taking a more multifaceted approach, exploring disease pathology from many

angles. Recent studies have examined a possible environmental cause for the disorder, namely extensive exposure to pesticides or farm areas. Pesticides act as inhibitors of acetylcholinesterase, the enzyme normally responsible for breaking down excess acetylcholine in the synapse. Inhibition of this essential enzyme results in elevated and often neurotoxic levels of acetylcholine, subsequently influencing several key areas of synaptic transmission.

A Neurochemical Model A study published last October by the Mississippi State Center for Environmental Health Science explores the possible effects of these pesticides on dopaminergic neurons in the substantia nigra (1). The researchers utilized a rat model involving both long-acting (chlorpyrifos) and short-acting (methyl parathion) pesticides, and measured the long-term effects of these pesticides when administered to young rats (1). Results were assessed through measurement of both immediate (22 days) and long-term (50 days) dopamine and dopamine metabolite levels (1). Metabolite levels provide important information about the functioning of monoamine oxidase (MAO), an enzyme critical to the breakdown of synaptic dopamine (1). The two metabolites analyzed were DOPAC (dihydroxyphenylacetic acid), an intermediate in the MAO metabolic pathway, and homovanillic acid (HVA), generally the final product of the pathway (1). Researchers also examined any alterations in the expression levels of Nurr1, LmxB, tyrosine hydroxylase, dopamine transporter genes, or nicotinic acetylcholine receptor subunits, all components essential to dopaminergic function (1). Immediately following 21 days of treatment, the study found no significant differences in dopamine or other metabolite levels from either methyl parathion (MPT) or

chlorpyrifos (CPS) (1). However, there was a substantial increase in DOPAC levels at P50 (postnatal day 50) upon exposure to CPS, indicative of a higher dopamine turnover rate (1). Results also demonstrated a significant decrease in the ratio of α6 to α7 subunits of the acetylcholine receptor immediately following MPT and CPS treatment (1). This decrease was not maintained at P50; however, MPT did produce a significant elevation of the α6 acetylcholine receptor subunit (1). Higher levels of the intermediate DOPAC following CPS treatment demonstrate an increased turnover and processing rate of dopamine within the synapse (1). Importantly, the increase presents at P50, well after pesticide treatment and after the return of acetylcholinesterase to basal levels (1). This phenomenon reveals subtle alterations in the metabolism and processing of synaptic nigrostriatal dopamine, extending beyond the initial acetylcholinesterase-inhibiting effects of the pesticides (1). CPS and MPT also significantly impacted the expression of postsynaptic acetylcholine receptors found on nigrostriatal dopaminergic neurons (1). Normally, binding of agonists to nicotinic acetylcholine receptors increases dopamine neuron firing rates and, consequently, enhances dopamine release (1). Alteration of receptor subunits may affect dopamine release, but it may also negatively impact the survival capacity of these dopaminergic neurons (1). Studies have shown that an abnormal increase in the α6 subunit (a result of MPT treatment) can damage the nigrostriatal dopamine system, contributing to the neural degeneration characteristic of Parkinson's disease (1).
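To illustrate how metabolite levels indicate turnover, the sketch below computes a commonly used turnover index, the ratio of metabolites to dopamine. The concentrations are invented for illustration; they are not data from the study discussed above.

```python
# Illustration of a dopamine turnover index from tissue concentrations.
# Values below are hypothetical, not data from the cited study.

def turnover_index(dopamine: float, dopac: float, hva: float) -> float:
    """(DOPAC + HVA) / dopamine; a higher value suggests faster turnover."""
    return (dopac + hva) / dopamine

control = turnover_index(dopamine=10.0, dopac=1.0, hva=1.5)  # hypothetical ng/mg tissue
treated = turnover_index(dopamine=10.0, dopac=1.6, hva=1.5)  # elevated DOPAC, hypothetical

print(f"control index: {control:.2f}, treated index: {treated:.2f}")
# A rise in DOPAC with unchanged dopamine raises the index, which is the
# pattern interpreted in the text as increased dopamine turnover at P50.
```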

A Mitochondrial Model Following reports of Parkinsonian symptoms resulting from exposure to the mitochondrial pro-toxin N-methyl-4-phenyl-1,2,3,6-tetrahydropyridine



(MPTP), a group of researchers at Emory University examined the impact of chronic exposure to the mitochondrial inhibitor and common pesticide rotenone (4). MPTP itself is harmless; however, degradation of MPTP produces the metabolite MPP+, which inhibits complex I of the electron transport chain, an essential element of cellular respiration and energy production (4). Unfortunately, MPP+ appears to specifically target the dopaminergic system due to its high affinity for the dopamine transporter, which allows the toxin to easily access the soma (4). Rotenone is similar to MPP+ in its selective inhibition of complex I of the electron transport chain (4). However, its hydrophobicity allows it to readily diffuse across any cellular membrane, eliminating any specificity for dopaminergic neurons (4). In the study, researchers treated 2-month-old rats with varying doses of rotenone, ranging from 1-12 mg/kg per day (4). High doses of rotenone produced the expected systemic and nonspecific toxicity due to its ability to effortlessly cross membranes. Surprisingly, however, lower doses of rotenone (2-3 mg/kg per day) produced highly specific lesions of the dopaminergic system (4). Of the 25 rats treated within this "optimal" dose range, 12 presented significant lesions in the dopaminergic system of the substantia nigra (4). Resulting pathologies included significant degeneration of dopaminergic neurons in the substantia nigra and striatum, depletion of tyrosine hydroxylase, a key enzyme in the formation of L-DOPA (the dopamine precursor), and the development of cytoplasmic aggregates structurally similar to Lewy bodies (4). Behaviorally, rats treated with rotenone presented many of the hypokinetic symptoms of human Parkinson's patients, and 7 of the treated rats even developed a phenotype suggestive of a resting tremor (4). Clearly, it appears that the nigrostriatal dopaminergic system has an underlying vulnerability to complex I inhibitors, even to the nonspecific rotenone (4). Though the electron transport chain

was inhibited, researchers hypothesize that ATP deficiency is not a sufficient explanation for the neurodegeneration, given the unusually low concentration of rotenone required to produce a Parkinsonian phenotype (4). Rather, the authors theorize that inhibition of complex I results in the "production of reactive oxygen species," presumably due to a downstream reduction in cytochrome oxidase's ability to convert free oxygen into water (4). These reactive oxygen species proceed to damage proteins and DNA, eventually triggering cellular apoptosis and neurodegeneration via the release of cytochrome c from the mitochondria (4).

Epidemiology Last March, a unique epidemiological study examining the relationship between pesticide exposure and Parkinson's disease was conducted by a team of researchers at Duke University, the Miami Institute for Human Genomics, and the University of Miami School of Medicine. Unlike previous studies, it compared the effect of pesticide exposure on patients with a familial history of Parkinson's versus the impact on patients without such a history (2). The authors utilized questionnaires and telephone interviews to establish a medical history and evaluate the level of pesticide exposure for each patient (2). The study examined only white families and excluded patients with multiple symptoms to eliminate potential confounds (2). The researchers found a highly significant association between pesticide exposure and Parkinson's disease in patients without a familial history of PD; however, little to no correlation was found in patients with a familial history (2). The authors theorize that negative-history patients may possess some genetic susceptibility that requires an environmental trigger, such as pesticide exposure, to activate the disorder (2). On the other hand, despite the genetic vulnerability of positive-history patients, the statistics show little pesticide influence because of the limited number of patients with a familial history available for the study; in other words, an association for positive-history patients cannot be ruled out (2). The study also found organochlorines and organophosphates to be particularly potent in triggering PD symptoms; however, living on a farm and drinking well water had little correlation with PD incidence (2).

Conclusion

Parkinson’s disease affects more than 50,000 Americans each year, resulting in muscle rigidity and even complete loss of physical movement. Though certain clinical features of the disorder are discernible, there is no current underlying explanation for the rapid destruction of nigrostriatal dopaminergic neurons. There are various methods of symptom relief including LDOPA (dopamine precursor) treatment and monoamine oxidase inhibitors, but no cure has been identified. Hopefully, new studies can help shed light on the disorder and lead the way to permanent relief for Parkinson’s patient. References 1. J. B. Eells, T. Brown. Neurotoxicology and Teratology 31 (2008). 2. D. B. Hancock et al. BMC Neurology 8 (2008) M. H. Polymeropoulus, et al. Science 276, 2045-2047 (1997). 3. R. Betarbet. Nature Neuroscience 3, 1301-1306 (2000). Molecule: Rotenone, a broad-spectrum pesticide and insecticide that acts by interfering with the electron transport chain and preventing the conversion of electric potential into usable chemical energy. Image retrieved from http://en.wikipedia.org/ wiki/File:Rotenone.png (Accessed 8 January 2009).



Medicine

Global Climate Change & Asthma: Pulmonary Consequences of Fossil Fuels
Shu Pang '12

A wave of research in the past few years shows a positive correlation between ongoing climate change and increases in the prevalence and severity of asthma and other related respiratory allergic diseases. Increasing temperatures disrupt normal pollen production, worsen ground-level ozone pollution, increase ambient air pollution, and alter climate patterns, resulting in storms and wildfires, all of which contribute to asthma. The links between climate and the above conditions are supported by numerous studies. The WHO estimates that each year, 300 million people worldwide suffer from asthma (1). In the United States alone, 20 million people have active asthma, including 6.2 million children under 18 (2). It is hoped that stronger measures will be taken to reduce greenhouse gases and other factors that lead to global warming.

Mechanism of Allergic Asthma Asthma is a chronic respiratory disease that inflames and narrows the airways, leading to symptoms including wheezing (a whistling sound with each breath), chest tightness, breathlessness, and coughing (3). There are two main categories of asthma, nonatopic and atopic, with the latter comprising the majority of asthma cases (4). Atopic asthma involves T helper 2 cells (TH2), which drive hypersensitivity to innocuous antigens (also known as allergens). It is therefore commonly referred to as allergic asthma (5). While genes like IL13, IL4RA and filaggrin have been found to be positively associated with asthma, there is much evidence that environmental factors play a substantial modifying role (6). The mechanism behind asthma is complex, involving airway inflammation, intermittent airflow obstruction 38

Image retrieved from http://en.wikipedia.org/wiki/File:Lungs.gif (Accessed 12 January 2009).

Lithographic plate from Gray’s Anatomy of the bronchi and bronchioles. During an asthma attack, the bronchioles constrict, and breathing can become extremely difficult. Severe attacks can be lifethreatening.

and bronchial hyperresponsiveness (7). This article focuses on the basic mechanism behind allergic asthma, but from what is known, the mechanism of nonatopic asthma is very similar (8). The initiation mechanism for allergic asthma requires that immunoglobulin E (IgE) antibodies react with allergens such as dust particles and pollen grains. This sensitizes respiratory mast cells, found in connective tissue, which release substances like histamine in response to inflammation of body tissues (9). Later inhalation of allergens leads to interaction of epitopes with cell-bound IgE and activation of secretory pathways that release histamine, leukotrienes, prostaglandins, platelet activating factor (PAF), and a range of cytokines and chemokines that all mediate the inflammatory response (10). There is an immediate "early-phase"

allergic response of one to twenty minutes with vasodilatation and bronchoconstriction, followed by a slower "late-phase" response of four to eight hours with cellular infiltrate, including TH2 lymphocytes, eosinophils, monocytes, and basophils, and a greater, sustained increase in bronchospasm, even if the allergen is no longer present (11).

Climate Change Alters Vegetation Vegetation, particularly pollen grains, has long been linked to allergic reactions, particularly asthma. Pollen proteins like Amb a 1 are well-recognized causes of TH2 immune responses. Recent studies have identified lipid components in pollen that might



modulate TH2 immune responses as well (12). In fact, Jan Gutermuth of the Division of Environmental Dermatology and Allergy at Technische Universität München in Munich showed that aqueous extracts of white birch pollen, instilled intranasally into mice, increased TH2 immune responses (13). How does climate change affect vegetation, and what role does this play in worsening asthma? Vegetation changes have proven to be very sensitive indicators of climate change. In transitional zones, vegetational responses can occur within a decade of climate change (14). For many herbaceous and woody plants, flowering and pollination are intimately linked to temperature. Flowering speeds up with global warming, due to both higher temperatures and higher carbon dioxide (CO2) levels. A study of 385 British plant species conducted in 2002 found that during the past decade, the average first flowering had advanced by 4.5 days (15). Furthermore, in Switzerland and Denmark, there has been a distinct rise in the annual quantity of hazel, birch, and grass pollen over the past 30 years (16, 17). Increasing pollen production and longer pollen seasons due to earlier blooming increase the burden of asthma and other allergic diseases. An increased global temperature means an increase in greenhouse gases, predominantly CO2, which increases the allergenicity of plants. One study showed that high CO2 levels increased photosynthesis and biomass in poison ivy, and that these CO2-enriched plants produced a greater percentage of unsaturated urushiol, one of the antigenic products that tend to worsen or induce asthma (18). Another study, of ragweed pollen, showed that increased pollen production implies an increase in airborne allergenic load. Four sites, namely urban, suburban, semirural, and rural, were studied, using the urban environment as a surrogate for climate change. The urban area was around 2°C warmer with a 30 percent higher CO2 level than the rural site, and, as expected, urban ragweed grew faster with greater above-ground biomass, flowered earlier, and produced more pollen than ragweed at the rural site (19). There was over a 7-fold increase in pollen production at the urban sites, indicating an increased airborne allergic burden in the urban model that

Image courtesy of the National Oceanic and Atmospheric Administration

Satellite image of Hurricane Ivan making landfall near Alabama in 2004. Climate change will likely increase the frequency and severity of storms, leading to increased levels of allergy-laden pollen.

represents a warmer future climate. The evidence points to the fact that climate change has had, and will continue to have, an impact on a variety of allergenic plants (20). Increased temperature stimulates earlier flowering and longer pollen seasons for some plants, and increased CO2 increases plant biomass and pollen production and may cause plant products to be more allergenic.

Climate Change Increases Ground Ozone Tropospheric (ground-level) ozone is formed by a heat-dependent

photochemical oxidation of volatile organic compounds (VOCs), nitrogen oxides (NOx), and atmospheric hydroxyl radicals. Even without these precursor molecules, higher temperatures increase ozone production (21, 22). In urban areas, anthropogenic non-methane VOCs from the combustion of fossil fuels, including vehicle exhaust and industrial emissions, are key contributors to ozone production. Acute ozone exposure is known to decrease pulmonary function, increase airway hyperresponsiveness (AHR), and induce airway inflammation (23, 24). Exposure of mice to both ozone and carbon particles for four hours had a synergistic effect, significantly


decreasing alveolar macrophage phagocytosis and increasing lung neutrophilia (a condition marked by an abnormally large number of neutrophils, a type of phagocytic white blood cell). A mechanism for the enhanced effect may be that the carbon particles act as carriers for the ozone, bringing it into areas of the lung not easily accessible to ozone in the gaseous phase. Alternatively, the ozone may change the composition of the carbon particles from an innocuous to a harmful form (25). Michelle Bell of Yale University modeled temperature-dependent ozone pollution up to the year 2050 for fifty US cities, assuming constant anthropogenic emissions, and found a 2.1 percent increase in asthma hospitalizations across all cities (26). The increase of urban development worldwide and the continued use of fossil fuels will lead to greater ozone exposure in the future, increasing the number of asthma cases.
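The photochemistry summarized in this section can be sketched with the core reactions of the NO2 photolytic cycle. These are textbook tropospheric reactions, included for illustration rather than taken from the cited studies.

```latex
% NO2 photolysis produces atomic oxygen, which combines with O2 to form ozone;
% peroxy radicals from VOC oxidation recycle NO back to NO2, so ozone
% accumulates on hot, sunny days.
\[
  \mathrm{NO_2 + h\nu \;\longrightarrow\; NO + O},
  \qquad
  \mathrm{O + O_2 + M \;\longrightarrow\; O_3 + M}
\]
\[
  \mathrm{RO_2 + NO \;\longrightarrow\; RO + NO_2}
  \quad \text{(peroxy radicals } \mathrm{RO_2} \text{ come from VOC oxidation)}
\]
```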

Climate Change Worsens Air Pollution (and Vice Versa) The 1970 Clean Air Act (CAA) in the United States identified six criteria air pollutants (CAPs), and the 1990 amendments to the CAA defined 188 hazardous air pollutants and set standards to protect human health and the environment. As the climate warms, pollutants increase, and as pollutants increase, the climate warms even more, thereby escalating the disease burden related to allergy and asthma worldwide. During high-temperature combustion, oxygen reacts with nitrogen to generate nitric oxide (NO) and, to a lesser extent, nitrogen dioxide (NO2) and other nitrogen oxides (27). Nearly half of all NOx emissions in the United States come from motor vehicles. Even though NOx emissions are relatively short-lived (persisting only hours to days), even short-term exposure is associated with chronic and acute changes in lung function, including bronchial neutrophilic infiltration, increased proinflammatory cytokine production, and enhanced response to inhaled allergens (28, 29). Continuing the status quo of NOx emissions will lead to continued

increases in ground-level ozone and increased allergen sensitivity. Sulfur dioxide (SO2) is a known respiratory irritant. Over 65 percent of SO2 emissions in the United States come from coal-burning electric utilities (30). Because SO2 is fifty times more soluble in water than CO2, it is likely to be absorbed in the upper airways in subjects at rest, while increasing ventilation results in deposition in deeper parts of the lung. Shortly after exposure, inhalation of SO2 causes significant bronchospastic effects with rapid onset of symptoms. Most individuals with asthma experience bronchospasm at levels of 0.5 ppm (28). Since the burning of coal is currently the second largest global fuel source of SO2 emissions and is predicted to be the first by 2010, coal burning will contribute substantial amounts of atmospheric sulfur oxides in the future, exacerbating asthma and other respiratory diseases and promoting additional climate change. Another form of air pollution is atmospheric particulate matter (PM), from both natural and human sources, the latter particularly in suburban and urban areas where diesel-burning vehicles are common (31). Many studies have shown that increased exposure to PM worsens asthma and is linked with decreased lung function in both children and adults (33, 34). The impact of air pollution on asthma and allergies was examined in a cross-sectional study analyzing long-term exposure to background air pollution in relation to respiratory and allergic health in schoolchildren (34). The study involved 6,672 children from ages nine to eleven who underwent a clinical examination including a skin prick test (SPT) to common allergens, a test of exercise-induced bronchial reactivity (EIB), and a skin examination for flexural dermatitis. The prevalence of asthma, allergic rhinitis (AR), and atopic dermatitis was assessed by a standardized health questionnaire completed by the parents. Using measurements from background monitoring stations, three-year-averaged concentrations of air pollutants (NO2, SO2, PM, and O3) were calculated at the 108 different schools. The results demonstrated that a moderate increase in long-term exposure to background air pollution was associated with a significant increase in respiratory

diseases in the children (34). Thus, pollution contributes to climate change, which in turn leads to more pollution and more asthma cases worldwide, propelling the continuation of a vicious cycle.

Climate Change Induces Wildfires and Storms In addition to altering pollen production and worsening air pollution, the changing climate will likely lead to increased drought, heat waves, and wildfires in some areas, and increased storms and extreme precipitation events in other areas (35). These changing regional patterns may further exacerbate allergic disease and asthma. Wildfires emit smoke that contains a concoction of carcinogenic and respiratory irritant substances, including CO, CO2, NOx, ozone, PM, and VOCs (36, 37). Epidemiologic studies have shown modest short-term increases in cardiorespiratory hospitalizations resulting from acute exposure to wildfire smoke (38, 39). The Intergovernmental Panel on Climate Change (IPCC) considers it very likely that there will be a global increase in heavy precipitation and tropical cyclone events (40). In places where climate change causes heavy precipitation during pollen season, asthma is expected to worsen through an increased airborne burden of respirable allergen-laden particles released from fragmented pollen grains (41). Therefore, climate change induces wildfires and storms, which further exacerbate asthma cases worldwide.

Conclusion The changing global climate is negatively affecting human health. In this review, the adverse effects of a warmer Earth are discussed with a focus on asthma and allergic respiratory diseases in both children and adults. Increased antigenic pollen grains, ground-level ozone pollution, and air pollution as well as dynamically changing climate patterns are all factors that contribute to asthma. The rate of climate change in the future will depend on how rapidly and successfully global



mitigation and adaptation strategies are implemented. Hopefully, a global effort will develop to work towards reducing the causes of anthropogenic climate change as part of a long-term commitment to protect public health.

References
1. Asthma (2008). Available at http://www.who.int/topics/asthma/en/ (10 November 2008).
2. J. E. Moorman et al., MMWR Surveill Summ 56, 1-14, 18-54 (2007).
3. Asthma (2008). Available at http://www.nhlbi.nih.gov/health/dci/Diseases/Asthma/Asthma_WhatIs.html (11 November 2008).
4. A. B. Kay et al., Immunology Today 20, 528-533 (1999).
5. J. M. Hopkin, Current Opinion in Immunology 9, 788-792 (1997).
6. W. Busse, L. Rosenwasser, J Allergy Clin Immunol 111, S799-804 (2003).
7. Asthma (2008). Available at http://emedicine.medscape.com/article/296301-overview (9 November 2008).
8. S. G. O. Johansson, J. Lundahl et al., Current Allergy and Asthma Reports 1, 89-90 (2001).
9. A. P. Kaplan, Proc Natl Acad Sci USA 1, 1267-1268 (2005).
10. D. M. Segal, J. D. Taurog, Metzger, Proc. Natl. Acad. Sci. USA 74, 2993-2997 (1977).
11. M. C. Liu et al., Am. Rev. Respir. Dis. 144, 51-58 (1991).
12. R. Valenta, V. Niederberger, J Allergy Clin Immunol 119, 826-830 (2007).
13. J. Gutermuth et al., J Allergy Clin Immunol 20, 293-299 (2007).
14. D. Peteet, Proc Natl Acad Sci 97, 1359-61 (2000).
15. A. H. Fitter, R. S. R. Fitter, Science 296, 1689-1691 (2002).
16. T. Frei, Grana 37, 172-179 (1998).
17. A. Rasmussen, Aerobiologica 18, 253-65 (2002).
18. J. E. Mohan et al., Proc Natl Acad Sci 103, 9086-9089 (2006).
19. L. H. Ziska et al., J Allergy Clin Immunol 111, 290-295 (2003).
20. P. J. Beggs, Clin Exp Allergy 34, 1507-1513 (2004).
21. J. Aw, M. J. Kleeman, J Geophys Res 108, 4365 (2003).
22. S. Sillman, P. J. Samson, J Geophys Res 100, 11497-11508 (1995).
23. J. Seltzer et al., J Appl Physiol 60, 1321-1326 (1986).
24. H. S. Koren et al., Am Rev Respir Dis 139, 407-415 (1989).
25. G. J. Jakab, R. Hemenway, J Toxicol Environ Health 41, 221-231 (1994).
26. M. L. Bell et al., Climatic Change 82, 61-76 (2007).
27. NOx: What is it? Where does it come from? (2008). Available at http://www.epa.gov/air/urbanair/nox/what.html (8 November 2008).
28. J. Q. Koenig, J Allergy Clin Immunol 104, 717-722 (1999).
29. C. Barck, J. Lundahl, G. Hallden, G. Bylin, Environ Res 97, 58-66 (2005).

30. SO2: What is it? Where does it come from? (2008). Available at http://epa.gov/air/urbanair/so2/what1.html (3 November 2008).
31. M. P. Fraser, Z. W. Yue, B. Buzcu, Atmos Environ 37, 2117-2123 (2003).
32. H. D. Kan et al., Biomed Environ Sci 18, 159-163 (2005).
33. R. J. Delfino et al., Environ Health Perspect 112, 932-941 (2004).
34. C. Penard-Morand et al., Clin Exp Allergy 35, 1279-1287 (2005).
35. D. Bates et al., Environ Health Perspect 79, 69-72 (1989).
36. L. Cheng et al., Atmos Environ 32, 673-681 (1998).
37. H. C. Phuleria et al., J Geophys Res Atmos 110, D07S20 (2005).
38. D. Moore et al., Can J Pub Health 97, 105-108 (2006).
39. J. A. Mott et al., Int J Hygiene Environ Health 208, 75-85 (2005).
40. M. L. Parry et al., Cambridge University Press, 23-78 (2007).
41. R. Newson et al., Thorax 52, 680-685 (1997).



Chemistry

Effects of Frying Oil on Acrylamide Formation in Potatoes
Katie Cheng '10, Boer Deng '10, Emma Nairn '10, Tyler Rosche '10 & Stephanie Siegmund '10

Abstract In 2002, Swedish scientists discovered acrylamide formation in some starchy foods prepared at high temperatures (1). This has since become a major concern in the food industry due to the potential negative health effects of acrylamide. Different cooking oils have varying fatty acid contents, which are likely to affect acrylamide formation. In order to investigate this difference, potatoes were fried in corn, soybean, and sunflower oil. A standard of acrylamide-spiked Pringles chips, for which the acrylamide content was known, was used. The acrylamide was removed from each sample using the Soxhlet extraction method, with pentane first and then methanol as solvents. The resulting extracts were evaporated, resuspended in methanol, and analyzed via GC-MS. It was found that the potatoes fried in sunflower oil had the highest level of acrylamide formation.

Introduction The compound 2-propenamide, commonly known as acrylamide, is an α,β-unsaturated conjugated molecule with the structure H2C=CH-CONH2 (Figure 1). Acrylamide's vinylic structure makes it a convenient tool in biochemical research to selectively modify thiol groups in compounds (2). However, this reactivity is likewise effective on biological molecules and therefore poses a danger upon human exposure. This study investigates exposure to acrylamide from a common food source, fried potatoes, and how the conditions under which the source is prepared affect the formation of acrylamide. Consistent evidence from recent studies has shown that acrylamide is formed in foods with a high content of the free amino acid asparagine and of reducing sugars. Taubert, Harlfinger, et al. discovered that when these conditions are met under high temperatures (120-230°C), acrylamide formation is possible by the Maillard browning reaction (3). The optimum conditions for such formation in food occur in potato chips, since potatoes contain very high concentrations of asparagine per mass (93.9 mg/100 g) (2). When exposed to heat, the α-NH2 group of asparagine reacts with the aldehyde of the D-glucose sugar, also present in high concentration in potatoes, undergoing a nucleophilic addition to form a Schiff base. The Schiff base then rearranges to the asparagine derivative N-glycoside, an intermediate which can undergo decarboxylation and loss of the asparagine α-NH2 to form acrylamide. This heat-induced reaction between an amino acid and a reducing sugar is the oxidative Maillard browning reaction (2). Other amino acids such as methionine, arginine, threonine, and valine can also form acrylamide through this reaction, but

Once formed, acrylamide from external sources can be easily absorbed by inhalation, ingestion, or through the skin; it reacts with proteins and is metabolized to glycidamide, an epoxide, which can then interact with DNA. The electrophilic double bond of acrylamide also allows it to react with other active hydrogen-containing functional groups in the body, especially the -SH and α-NH2 groups of free amino acids and the NH group of histidine (2). These reactions result in the formation of hemoglobin adducts and neurotoxins, as indicated by the work of Calleman, Stern, et al. (4). These adducts and toxins have been linked to serious health effects such as protein malfunction and loss of muscle control. The discovery of acrylamide in potatoes and other foods has sparked study of the various conditions under which acrylamide can form during cooking. Although extensive studies have examined the temperature dependence of acrylamide formation, less attention has been devoted to the effects of the type of cooking oil used for frying. This study looks at three common cooking oils (corn, sunflower, and soybean oil) and how their properties influence acrylamide formation via the Maillard reaction. Upon heating, variation in triglyceride and fatty acid composition among the three oils causes differing rates of degradation and contaminant formation, which may in turn influence acrylamide formation. Among the differences in post-frying oil composition are a far higher branched-chain and steryl ester content in corn oil than in either of the other two oils, and greater "flavor stability" in corn-oil-fried products (5). Likewise, triglyceride composition is altered upon heating in all three oils, with production of polar contaminants proportional to the amount of unsaturated fatty acids present (6). These characteristics also cause varied rates of temperature increase and thus may affect Maillard reaction conditions.

Materials and Methods
The first stage of the experiment involved verifying the current methods of acrylamide extraction and detection. For this verification, one container of Pringles® Original was obtained and ground to a fine powder using a mortar and pestle. This powder was divided into four samples of approximately 40 grams each and placed in cellulose thimbles for the Soxhlet extractors, labeled Samples 1-4. The mass of each sample, including powder and thimble, was determined and recorded (Table 1). Samples 1 and 2 were spiked with 630 µL of 0.01791 M acrylamide in methanol (8.0 × 10⁻⁴ g). Each sample was then placed in a Soxhlet extractor, and 200 mL of pentane was added to the extractor.
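As a quick consistency check on the spike described above, the mass of acrylamide delivered can be reproduced from the molarity and volume of the spiking solution. The short Python sketch below is not part of the original procedure; it simply assumes the standard molar mass of acrylamide (71.08 g/mol) and reproduces the reported 8.0 × 10⁻⁴ g figure.

# Sketch: verify the mass of acrylamide delivered by the spike
# (illustrative only; molar mass of acrylamide taken as 71.08 g/mol).
MOLAR_MASS_ACRYLAMIDE = 71.08  # g/mol

def spike_mass_g(concentration_M, volume_uL):
    """Return the mass (g) of solute in a spike of given molarity and volume (µL)."""
    moles = concentration_M * volume_uL * 1e-6  # convert µL to L
    return moles * MOLAR_MASS_ACRYLAMIDE

print(f"{spike_mass_g(0.01791, 630):.1e} g")  # prints 8.0e-04 g, matching the text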


Sample #   Sample Name             Initial Mass (g)
#1         Methanol (spiked)       40.0341
#2         Acetone                 39.4829
#3         Methanol control        40.9909
#4         Acetone control         41.1362
#5         NEW methanol (spiked)   39.6827
#6         Soybean Oil #1          17.4045
#7         Soybean Oil #2          21.1098
#8         Soybean Oil #3          34.1495
#9         Sunflower Oil #1        19.2136
#10        Sunflower Oil #2        16.4209
#11        Sunflower Oil #3        11.3594
#12        Corn Oil #1             25.1053
#13        Corn Oil #2             23.9237

Table 1. Samples used to verify current methods of acrylamide extraction.

The Soxhlet extractors were used to extract the acrylamide from the samples. Soxhlet extraction is known to be highly effective for extracting acrylamide (7, 8). The samples were then heated and refluxed in the Soxhlet extractor. Samples 1 and 3 were refluxed in pentane for two days, while the pentane in Samples 2 and 4 was replaced after 24 hours with 200 mL of acetone. Extra pentane was added to Samples 1 and 3 to maintain a sufficient volume for refluxing. After the fat in the samples had been sufficiently solvated, the thimbles containing the defatted sample were removed from the extractor, dried in an oven to evaporate the pentane, and the changes in mass were determined and recorded. The pentane in the Soxhlet extractor was then replaced with approximately 200 mL of methanol, to solvate the acrylamide, and the sample was heated and refluxed for seven days. Following the extraction of Samples 1-4, the cellulose thimbles were removed from the Soxhlet extractors and dried to evaporate residual methanol, and the new masses were determined and recorded. The methanol containing the acrylamide in solution was collected from the Soxhlet extractor. Each sample was evaporated on a rotary evaporator down to a small, measured volume of at most 3 mL. This volume was centrifuged to pellet any residual particles, and a 1 µL aliquot was run on a gas chromatograph/mass spectrometer (GC/MS) to determine compound identity and concentration (Table 2). GC/MS was chosen because it has worked successfully in similar experiments. A calibration curve was generated by injecting 1 µL samples of acrylamide in methanol into the GC/MS at calculated concentrations of 173 ppm, 108 ppm, 43.25 ppm, and 0.16 ppm (the curve originally also incorporated 161 ppm and 16 ppm samples, but these were later rejected due to inconsistency). These concentrations were calculated by dividing the number of moles of acrylamide added by the 100 mL volume of the methanol solvent. Once the calibration curve had been created from the standard solutions generated in the laboratory and acrylamide concentration values had been obtained from Samples 1-4, the test samples were prepared.
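The calibration curve described above converts a GC/MS response into an acrylamide concentration. The sketch below illustrates that step; the four standard concentrations are taken from the text, but the peak areas are hypothetical placeholders, since the measured detector responses are not reported here.

import numpy as np

# Sketch of the calibration step: fit a line through the standards, then use it
# to convert a measured peak area into a concentration. Peak areas are
# hypothetical placeholders, not measured values from this study.
conc_ppm  = np.array([0.16, 43.25, 108.0, 173.0])   # standard concentrations (ppm)
peak_area = np.array([2.1e3, 5.6e5, 1.4e6, 2.2e6])  # hypothetical GC/MS responses

slope, intercept = np.polyfit(peak_area, conc_ppm, 1)  # linear calibration fit

def to_ppm(area):
    """Convert a measured peak area to an acrylamide concentration (ppm)."""
    return slope * area + intercept

print(round(to_ppm(8.0e5), 1))  # concentration corresponding to an unknown's peak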

GC-MS Parameters
Injection Temperature      250 °C
Oven Start Temperature     80 °C
Oven Finish Temperature    230 °C
Degrees per minute         10 deg/min for 10 minutes, then 5 deg/min for 10 minutes
Time                       19.5 minutes

Table 2. GC/MS Device Parameters.
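As a sanity check on the oven program in Table 2, the two ramp segments can be summed to confirm that the program ends at the listed finish temperature; the short sketch below is illustrative and not part of the original method.

# Sketch: confirm the two-segment oven ramp reaches the listed finish temperature.
start_temp_c = 80.0                      # oven start temperature (°C)
segments = [(10.0, 10.0), (5.0, 10.0)]   # (ramp rate in °C/min, duration in min)

temp_c = start_temp_c
for rate, minutes in segments:
    temp_c += rate * minutes

print(temp_c)  # 230.0, matching the oven finish temperature in Table 2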

Two potatoes were obtained for each of the three sample oils, plus one potato for a control. The potatoes were sliced into small pieces approximately 1 cm³ in size and then fried for 15 minutes in approximately 700-1000 mL of oil preheated to 160 °C. Some temperature variation after the addition of the potatoes was noted (see Discussion). After frying was complete, the samples were dried to remove excess oil and ground into a coarse powder using a mortar and pestle. This powder was then divided into two or three samples for each oil type and placed into cellulose thimbles, and the mass of each sample was determined and recorded. The samples included Soybean Oil #1, #2 and #3, Sunflower Oil #1, #2 and #3, and Corn Oil #1 and #2. Finally, the procedure above for extraction and detection of acrylamide was repeated with several changes. First, a refrigeration system was used to maintain the Soxhlet condenser at 10 °C. Also, no acetone solvent was added to the samples after 24 hours, as acetone had been found to solvate only the spiked acrylamide and none of the acrylamide naturally present in the potato chips (see Discussion). Lastly, additional centrifuging was necessary to remove all residual particles from the samples before GC/MS injection.

Results and Discussion
Acrylamide formation in fried foods is emerging as a major concern in the food industry given its potential adverse health effects. Because acrylamide formation occurs predominantly during the cooking process, the effects of different cooking oil types were investigated.

Figure 1. GC/MS Calibration Curve.


Since corn, soybean, and sunflower oil vary in triglyceride and fatty acid composition, their rates of degradation and contaminant formation are likely to vary as well. Thus, there should be a difference in acrylamide formation depending on which type of oil is used in cooking. In this study, a standard sample with a known concentration of acrylamide was used to validate the experimental techniques. The standard chosen was Pringles Original Baked potato chips. Although the preparation method for these chips differs from the frying technique commonly used to make potato chips, they were chosen because they contained the highest acrylamide content of any such product on the market, at 1200 ppb (9). The standardization included two control samples as well as two samples spiked with additional acrylamide. One spiked sample and one control sample were extracted with acetone, while the other spiked and control samples were extracted with methanol, in order to compare the effectiveness of the two solvents. However, when the extraction was complete, the acetone had solvated no acrylamide from the un-spiked Pringles sample, and therefore only the samples extracted in methanol were used. Upon analysis of the samples after Soxhlet processing, the difference between the spiked and un-spiked samples was used to calculate the percentage of acrylamide extracted: the total mass of acrylamide recovered from the un-spiked control was subtracted from that recovered from the spiked sample, and the difference, divided by the mass of acrylamide added, gives an extraction efficiency of 94 percent. This indicated that the extraction technique was valid and that the results gathered from it could be trusted. The three trials of sunflower and soybean oil and the two trials of corn oil yielded different concentrations of acrylamide. The average for each oil is shown in Figure 2, indicating that sunflower oil produced the highest acrylamide concentration, followed by corn oil and lastly soybean oil. Figure 3 shows the corrected values with respect to the percentages extracted, based on the internal methyl acrylamide standards.
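The extraction-efficiency figure above, and the internal-standard correction described after the figures, are both simple arithmetic. The Python sketch below illustrates the two calculations; the spike mass is the 8.0 × 10⁻⁴ g value from the Materials and Methods, while all other numbers are hypothetical placeholders, since the individual recovered masses are not reported.

# Sketch of the two calculations described in the text; apart from the spike
# mass, the numbers are hypothetical placeholders, not measured values.

spike_added_g = 8.0e-4           # acrylamide added to the spiked standard (g)

# 1) Extraction efficiency from the spiked vs. un-spiked standards.
recovered_spiked_g   = 1.23e-3   # hypothetical total recovered from the spiked sample
recovered_unspiked_g = 4.8e-4    # hypothetical total recovered from the control
efficiency = (recovered_spiked_g - recovered_unspiked_g) / spike_added_g
print(f"extraction efficiency ~ {efficiency:.0%}")   # ~94%, as reported

# 2) Converting a vial concentration (ppm, i.e. mg/kg of solution) into ppm of
#    acrylamide in the fried potato, as described after the figures.
vial_ppm         = 12.0    # hypothetical concentration measured in the vial (mg/kg)
solvent_mL       = 2.0     # final volume after rotary evaporation (mL)
methanol_density = 0.792   # g/mL
sample_mass_g    = 20.0    # hypothetical mass of fried potato extracted

acrylamide_mg = vial_ppm * (solvent_mL * methanol_density) / 1000.0
ppm_in_potato = acrylamide_mg / (sample_mass_g / 1000.0)
print(f"{ppm_in_potato:.2f} ppm acrylamide in the potato sample")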

Figure 2. Average ppm of acrylamide found in the tested oils.

Figure 3. Relative values of acrylamide found based on % extracted.

These adjusted values were calculated by taking the parts per million present in each sample vial, multiplying by the volume of solvent and its density, and dividing by 1000 to give the mass of acrylamide in mg. To obtain the ppm (mg per kg) value, this mass is then divided by the total sample mass. The numbers calculated using these corrected values, however, cannot be taken individually as parts-per-million figures; rather, they should be considered only in terms of their relative amounts, which show that the acrylamide formation trend is sunflower, then corn, then soybean. This is because the corrected values depend on the internal standard, methyl acrylamide, whose extraction affinity relative to acrylamide is unknown. While acrylamide was shown to be extracted at 94 percent under the experimental method, methyl acrylamide extraction varied from 38 percent to 80 percent. Without correction, however, the measurements of the three oils can be taken as numerical parts-per-million values, calculated from the acrylamide extracted after concentrating each sample on a rotary evaporator from approximately 35 grams to 2 grams of acrylamide-in-methanol solution. Several sources of error may have influenced the results. Soxhlet extraction is a notably accurate method, and it is not expected that any acrylamide was lost in the solvation process. No acrylamide should have been lost in the pentane solvent, as acrylamide is insoluble in pentane; however, the pentane was not checked for the presence of acrylamide, which may have been an oversight. Variation in slicing causes each piece of potato to fry to a slightly different consistency in the oil. An attempt was made to account for these size discrepancies by taking a random sample of twelve pieces and calculating the average surface-area-to-volume ratio. Along with these incongruities in sample preparation, some oversights in frying must also be accounted for. Because of the different temperature drops and recovery times when the potatoes were added to the oil, the volumes of oil used in frying differed.


Only about 800 mL of oil were used to fry the soybean samples, while 1000 mL were used to fry the sunflower and corn samples. Because soybean oil takes much longer to recover its heat, a smaller volume was used in an effort to keep the oil temperature-versus-time profile consistent with the other two oils. However, this changed the oil-to-potato ratio, and better consistency in this ratio would have reduced the error. The key conclusion of this study, supported by both the adjusted and unadjusted values of acrylamide found after frying and extraction, is that a difference in acrylamide formation can be seen among the three oils. Sunflower oil generates the largest concentration of acrylamide, while soybean oil generates the least. Interestingly, sunflower oil has the lowest specific heat of the three oils tested, while soybean oil has the highest. However, acrylamide formation cannot legitimately be correlated with specific heat. The experiments showed drops in temperature followed by varied rates of recovery for each oil during frying, but the net temperature change for each oil was the same: while sunflower oil, with its low specific heat, dropped the furthest and the fastest in temperature, it also recovered the fastest to the 160 °C frying temperature. Although acrylamide formation may be temperature dependent, this temperature dependence is not altered by the specific heats of the oils. A more likely influence on acrylamide formation therefore lies in the inherent compositions of the oils themselves before and after heating. Because the oils differ in triglyceride and contaminant content, it is possible that certain molecules in the oils, whether from the original plant source or added or created during refinement, could inhibit acrylamide formation. In fact, recent work by Borch et al. has identified biomolecules in seaweed that can oxidize the reducing sugar before it has a chance to react with asparagine in the Maillard reaction (10). Further experiments are needed to determine whether such molecules are present in specific cooking oils, as this could have a significant impact on food preparation techniques.

Acknowledgements
The authors would like to thank Professors Gordon Gribble and Siobhan Milde for their guidance. The research was conducted as part of the Chem 63 course.

References
1. P. Rydberg, S. Eriksson, E. Tareke, et al., J. Agric. Food Chem. 51, 7012-7018 (2003).
2. M. Friedman, J. Agric. Food Chem. 51, 4504-4526 (2003).
3. D. Taubert, S. Harlfinger, L. Hekes, R. Berkels, E. Schomig, J. Agric. Food Chem. 52, 2735-2739 (2004).
4. C. J. Calleman, E. Bergmark, L. G. Costa, Environ. Health Perspect. 99, 221-223 (1993).
5. V. W. Tros, JAOCS, 997-1001 (1981).
6. G. R. Takeoka, G. H. Full, L. T. Dao, J. Agric. Food Chem. 45, 3244-3249 (1997).
7. J. R. Pedersen, J. O. Olsson, Analyst 128, 332-332 (2003).
8. M. D. Luque de Castro, L. E. Garcia-Ayuso, Anal. Chim. Acta 369, 1-10 (1998).
9. Survey Data on Acrylamide in Food: Individual Food Products. Available at http://ww.cfsan.fda.gov/~dms/acrydata.html (accessed August 2008).
10. J. Borch, C. Poulsen, D. L. Boll, A Method of Preventing Acrylamide Formation in Foodstuff. International Application Number PCT/IB2003/005278, 13 May 2004.



Article Submission

DUJS

What are we looking for?
The DUJS is open to all types of submissions. We examine each article to see what it potentially contributes to the Journal and our goals. Our aim is to attract an audience diverse in both its scientific background and interest. To this end, articles generally fall into one of the following categories:

Research

This type of article parallels those found in professional journals. An abstract is expected in addition to clearly defined sections of problem statement, experiment, data analysis and concluding remarks. The intended audience can be expected to have interest and general knowledge of that particular discipline.

Review

A review article is typically geared towards a more general audience, and explores an area of scientific study (e.g. methods of cloning sheep, a summary of options for the Grand Unified Theory). It does not require any sort of personal experimentation by the author. A good example could be a research paper written for class.

Features (Reflection/Letter/Essay or Editorial)

Such an article may resemble a popular science article or an editorial, examining the interplay between science and society. These articles are aimed at a general audience and should include explanations of concepts that a basic science background may not provide.

Guidelines:
1. The length of the article must be 3000 words or less.
2. If it is a review or a research paper, the article must be validated by a member of the faculty. This statement can be sent via email to the DUJS account.
3. Any co-authors of the paper must approve of the submission to the DUJS. It is your responsibility to contact the co-authors.
4. Any references and citations used must follow the Science Magazine format.
5. If you have chemical structures in your article, please take note of the American Chemical Society (ACS)'s specifications on the diagrams.

For more examples of these details and specifications, please see our website: http://dujs.dartmouth.edu For information on citing and references, please see: http://www.dartmouth.edu/~sources Specifically, please see Science Magazine’s website on references: http://www.sciencemag.org/feature/contribinfo/prep/res/refs.shtml




DUJS Submission Form

Statement from the student submitting the article:
Name: __________________

Year: ______

Faculty Advisor: _____________________
E-mail: __________________
Phone: __________________
Department the research was performed in: __________________
Title of the submitted article: ______________________________
Length of the article: ____________
Program which funded/supported the research (please check the appropriate line):
__ The Women in Science Program (WISP)

__ Presidential Scholar

__ Dartmouth Class (e.g. Chem 63) - please list class ______________________ __Thesis Research

__ Other (please specify): ______________________

Statement from the Faculty Advisor:
Student: ________________________
Article title: _________________________
I give permission for this article to be published in the Dartmouth Undergraduate Journal of Science:
Signature: _____________________________ Date: ______________________________
Note: The Dartmouth Undergraduate Journal of Science is copyrighted, and articles cannot be reproduced without the permission of the journal.
Please answer the following questions about the article in question. When you are finished, send this form to HB 6225 or blitz it to "DUJS."
1. Please comment on the quality of the research presented:

2. Please comment on the quality of the product:

3. Please check the most appropriate choice, based on your overall opinion of the submission:

__ I strongly endorse this article for publication

__ I endorse this article for publication

__ I neither endorse nor oppose the publication of this article

__ I oppose the publication of this article




Write

Edit

Submit

Design

Spring 2007, Vol. IX No. 2: Decoding the Language of Proteins | Microscopic Arms Race: A Battle Against Antibiotic Resistance



