DUJS 11S


Note from the Editorial Board

Dear Reader,

What does a “cell” mean to you? Some may think back to the days of Robert Hooke, who first coined the term “cell” for the basic unit of life. Over the past several hundred years, numerous advances have been made in characterizing the biological cell in both flora and fauna. Many have studied the cell’s ability to divide and replicate, to generate proteins, and to carry out the metabolism essential for life. The high-power electron microscopy techniques available today have uncovered even more mysteries about what exists inside these membrane-bound bubbles of chemical reactions.

Of course, the biological cell is not the only “cell” we have today. Others may consider the electrochemical cells that create electrical energy from chemical reactions. Such reactions, including those that occur inside a battery, create the voltage necessary to drive many electronic devices. Our generation has also seen the growth of solar cells, or “photovoltaic cells,” which convert energy from the sun directly into electricity and have become increasingly popular with the growing interest in alternative energy.

In this issue, Yoo Jung Kim ’14 elucidates the causes and treatments of autoimmune diseases, which are triggered by our own lymphocytes and other leukocytes. Hunter Kappel ’14 gives a historical overview of the “immortal” HeLa cell line, derived from the cells of Henrietta Lacks, that made possible many advances in research and a multitude of discoveries. Thomas Hauch ’13 describes the future of harnessing sunlight to drive an alternative form of photosynthesis to create energy. Amir Khan ’14 considers whether patenting a specific gene could benefit research or whether such a practice would be motivated chiefly by profit. Daniel Lee ’13 discusses synthetic cells and synthetic genomes, which are capable of self-replication. Jay Dalton ’12 explores the role of apoptosis in development.

Andrew Zureick ’13 conducts an interview with Marcelo Gleiser, a professor in Dartmouth’s Physics and Astronomy Department, who discusses the origins of chirality, life, and the cell, touching also on cosmological questions. Priya Rajgopal ’11 writes a review article on new research goals for multiple sclerosis.

The DUJS also features several submissions of original undergraduate research. Sean Currey ’11 details the results of the GreenCube II mission, which seeks to verify the presence of gravity waves. Marielle Battistoni ’11, Elin Beck ’12, Sara Remsen ’12, Frances Wang ’12, Katherine Fitzgerald ’11, and Suzanne Kelson ’12 report their findings from the Biology Foreign Studies Program (FSP) in Costa Rica in three papers: “Energy Optimization and Foraging Preference in Hummingbirds,” “Effects of Ocean Acidification on a Turtle Grass Meadow,” and “Effects of Epiphyte Cover on Seagrass Growth Rates in Two Tidal Zones.” In addition, Elizabeth Molthrop ’12 discusses biophilic design, which bridges architecture, design, and science.

We hope you find the exciting variety of science in this issue stimulating and enjoyable!

Sincerely,
The DUJS Editorial Board

spring 2011

The Dartmouth Undergraduate Journal of Science aims to increase scientific awareness within the Dartmouth community by providing an interdisciplinary forum for sharing undergraduate research and enriching scientific knowledge.

EDITORIAL BOARD
President: Shu Pang ’12
Editor-in-Chief: Andrew Zureick ’13
Managing Editors: Daniel Lee ’13, Kyle Heppenstall ’13, Aravind Viswanathan ’12
Assistant Managing Editors: Thomas Hauch ’13, Amir Khan ’14
Layout Editor: Shaun Akhtar ’12
Design Editor: Chen Huang ’12
Online Content Editor: Kristen Flint ’14
Public Relations Officer: Derek Racine ’14
Secretary: Clinton Grable ’14
Event Coordinator: Jaya Batra ’13

DESIGN STAFF
Yoo Jung Kim ’14, Derek Racine ’14, Sara Remsen ’12, Hazel Shapiro ’13

STAFF WRITERS
Prashasti Agrawal ’13, Shaun Akhtar ’12, Jay Dalton ’12, Clinton Grable ’14, Thomas Hauch ’13, Kyle Heppenstall ’13, Hunter Kappel ’14, Amir Khan ’14, John Kim ’13, Yoo Jung Kim ’14, Aaron Koenig ’14, Daniel Lee ’13, Michael Mantell ’13, Joyce Njoroge ’11, Archana Ramanujam ’14, Elisabeth Seyferth ’14, Kali Pruss ’14, Medha Raj ’13, Robin Wang ’14, Kevin Wang ’13, Danny Wong ’14, Viktor Zlatanic ’14, Andrew Zureick ’13

Faculty Advisors
Alex Barnett - Mathematics
William Lotko - Engineering
Marcelo Gleiser - Physics/Astronomy
Gordon Gribble - Chemistry
Carey Heckman - Philosophy
Richard Kremer - History
Roger Sloboda - Biology
Leslie Sonder - Earth Sciences
David Kotz - Computer Science

Special Thanks
Dean of Faculty; Associate Dean of Sciences; Thayer School of Engineering; Provost’s Office; R.C. Brayshaw & Company; Private Donations; The Hewlett Presidential Venture Fund; Women in Science Project

DUJS@Dartmouth.EDU
Dartmouth College
Hinman Box 6225
Hanover, NH 03755
(603) 646-9894
http://dujs.dartmouth.edu

Copyright © 2011 The Trustees of Dartmouth College



In this Issue...

DUJS Science News, compiled by Kyle Heppenstall ’13, Daniel Lee ’13, and Andrew Zureick ’13 (p. 4)
Autoimmune Diseases: A Rising Epidemic, by Yoo Jung Kim ’14 (p. 6)
Interview with Marcelo Gleiser, Dartmouth Professor of Physics and Astronomy, by Andrew Zureick ’13 (p. 9)
Henrietta Lacks and Her “Immortal” Cells, by Hunter Kappel ’14 (p. 12)
Artificial Photosynthesis: Looking to Nature for Alternative Energy, by Thomas Hauch ’13 (p. 14)
Gene Patents: For the Sake of Research, or For Profit?, by Amir Khan ’14 (p. 16)
Synthetic Cells, by Daniel Lee ’13 (p. 19)
The Role of Apoptosis in Disease and Development, by Jay Dalton ’12 (p. 21)
Promoting Remyelination and Preventing Demyelination: New Research Goals in Finding a Therapy for Multiple Sclerosis, by Priya Rajgopal ’11 (p. 24)
GreenCube II: Multiple Balloon Measurements of Gravity Waves in the Skies above New Hampshire, by Sean Currey ’11 (p. 29)
Energy Optimization and Foraging Preference in Hummingbirds, by Marielle Battistoni ’11, Elin Beck ’12, Sara Remsen ’12, and Frances Wang ’12 (p. 34)
Biophilic Design: A Review of Principle and Practice, by Elizabeth Molthrop ’12 (p. 37)
Effects of Ocean Acidification on a Turtle Grass Meadow, by Marielle Battistoni ’11, Katherine Fitzgerald ’11, and Suzanne Kelson ’12 (p. 40)
Effects of Epiphyte Cover on Seagrass Growth Rates in Two Tidal Zones, by Kelly Aho ’11 and Elin Beck ’12 (p. 43)

Visit us online at dujs.dartmouth.edu


DUJS Science News

See dujs.dartmouth.edu for more information

Compiled by Kyle Heppenstall ’13, Daniel Lee ’13, and Andrew Zureick ’13

GENETICS

First DUJS Ad Fontes Forum Focuses on GM Foods

This spring, four panelists met at Dartmouth to discuss genetically modified (GM) food in the DUJS Ad Fontes Forum. The forum included two panelists in favor of continued use of and research into GM crops: Nina Fedoroff, a biologist and former science and technology advisor to the Secretary of State, and Sharon Bomer, Executive Vice President of the Food and Agriculture Section at the Biotechnology Industry Organization. The other side of the argument was presented by Doug Gurian-Sherman, plant pathologist and senior scientist at the Food & Environment Program of the Union of Concerned Scientists, and Eric Holt-Jimenez, agroecology expert and Executive Director of Food First/Institute for Food and Development Policy.

Fedoroff and Bomer argued that biotechnology should be one of the many tools employed by modern agriculture to help solve the problems of high population growth and shrinking land area per capita. As part of their evidence, they provided specific examples of successful genetically engineered crops like soybeans, corn, papaya, and canola. On the other hand, Gurian-Sherman claimed that biotechnology should not be used to genetically engineer crops due to the high opportunity cost; he supported more traditional methods of genetically modifying crops, such as breeding, because they are more efficient. Holt-Jimenez expressed concerns about agricultural systems, not the crops themselves. He argued that polycultures (areas with a wide variety of crops) and improved water and soil quality are the best ways to enhance agriculture. After each panelist was given an opportunity to state his or her viewpoint, the panelists fielded questions from the audience about Bt corn, agricultural sustainability, and gene flow.

Image courtesy of Joseph Mehling ’69.

Dartmouth College President Jim Yong Kim gave opening remarks for the DUJS Ad Fontes Forum in the Moore Theater at the Hopkins Center for the Performing Arts.

CHEMISTRY

CDDO-Me: The First Drug on the Market from Dartmouth?

Dartmouth chemistry professor Gordon Gribble recently published an article in collaboration with Michael Sporn, a Dartmouth Medical School professor. Sporn and Gribble have been working since 1995 with a class of compounds called triterpenoids, which are found in virtually all plants. Synthetic oleanane triterpenoids (SOs) help to relieve oxidative stress by repressing the expression and formation of reactive nitrogen and oxygen species, which are often produced by metabolic action in the cell. Gribble started with two naturally occurring triterpenoids, oleanolic acid and ursolic acid, and “did every possible chemical reaction to them [he] could think of” alongside Tadashi Honda, he said. One small change made it 1,000 times more reactive in inhibiting the synthesis of the enzyme inducible nitric oxide synthase (iNOS), which makes nitric oxide. When overexpressed, “nitric oxide destroys cartilage, causes inflammation, Parkinson’s, Crohn’s…” explained Gribble, “and our compounds prevent that.” Gribble’s lab synthesized Bardoxolone-methyl (CDDO-Me) in an 11-step process. By making it a good Michael acceptor, it can “soak up” the reactive, toxic species. Its two conjugated enones make it 400,000 times more reactive than oleanolic acid. On the biological side, CDDO-Me, with its anti-inflammatory properties, is able to relieve symptoms of chronic kidney disease caused by diabetes, essentially reversing kidney damage, and symptoms of pancreatic cancer near the end of life. CDDO-Me is now in Phase III of clinical trials; Gribble and Sporn hope to see effective results, as this could potentially be “the first drug ever put on the market by Dartmouth.” He anticipates that the Food and Drug Administration will make the final decision sometime in late 2012.

Image courtesy of Gordon Gribble.

Gordon Gribble, Dartmouth professor of chemistry.

MEDICINE

Ulmer Discusses Vaccines

The Dickey Center for International Understanding hosted a global health conference including a presentation by Jeffrey Ulmer, the Global Head of External Research at Novartis Vaccines and Diagnostics. Ulmer discussed some challenges and achievements of current vaccine development. Liability, corporate financial risk, high development costs, and tight regulations are some of the obstacles facing the advancement of vaccine development, according to Ulmer. Another hurdle is an incomplete understanding of disease biology. Ulmer explained how vaccines have been successful against diseases connected to antigens with slow mutation rates. However, efforts to create effective vaccines have been less fruitful for other types of diseases, like those related to T-cells. Ulmer also spoke about recent developments in vaccinology, like using genomic sequencing to recognize protein vaccine candidates. He also discussed research into discovering new biological adjuvants to increase vaccine potency and provide stronger immunity against diseases with high mutation rates, like influenza. Lastly, Ulmer mentioned recent efforts to deliver vaccines as DNA carried on plasmids. For example, he discussed a group of scientists who replaced the structural genome of an alphavirus with antigen-producing DNA material in hopes of creating an efficient vaccine for the virus.

BIOTECHNOLOGY

Ivkov Discusses Treating Cancer with Magnetic Nanoparticles at Recent Jones Seminar

Robert Ivkov of Johns Hopkins University recently presented at one of Dartmouth’s weekly Jones Seminars on Science, Technology, and Society. He has researched the use of magnetic nanoparticles to direct heat in chemotherapy treatments. Applying heat to increase the efficacy of chemotherapy has been attempted in clinical trials to treat cervical, bladder, breast, brain, and early prostate cancer. This process, known as hyperthermic intraoperative intraperitoneal chemotherapy, has shown efficacy in treating gastrointestinal cancers. Currently, however, this method is too invasive for most types of cancer, as the heat applied is indiscriminate and has the potential to harm healthy tissue. Since every degree above 42.5 °C doubles the rate of cell death, locally heating cancerous tumors without heating the surrounding tissue is a critical requirement. Ivkov has begun developing a method for delivering a heating agent directly inside a tumor, with good distribution, by using magnetic nanoparticles attached to antibodies. These antibodies target receptors found specifically on the cancerous tumors. Ivkov is able to heat the regions populated with nanoparticles by applying an AC (alternating current) magnetic field. This approach has achieved several goals: the delivery of treatment was selective rather than indiscriminate, the procedure was noninvasive, and the AC device effectively controlled the dosage of heat. Continuing investigation and application of hyperthermia in conjunction with chemotherapy and radiation therapy will likely increase survival rates. With more extensive research, the use of magnetic particles may soon become a standard regimen in cancer treatment.
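To make the cited threshold concrete (an illustrative reading of the figure above, not a formula presented at the seminar): if each degree above 42.5 °C doubles the rate of cell death, the relative kill rate grows exponentially with temperature,

\[ \frac{r(T)}{r(42.5\,^{\circ}\mathrm{C})} \approx 2^{\,T - 42.5}, \]

so a tumor held at 45.5 °C would see roughly 2^3 = 8 times the baseline rate of cell death, while surrounding tissue kept below the threshold is largely spared. This is why confining the heat to nanoparticle-loaded regions matters so much.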

NEUROSCIENCE

Study Investigates Treatment and Recovery of Psychosis Patients

Robert Drake, Professor of Psychiatry and of Community and Family Medicine at Dartmouth Medical School, found that patients with early-phase primary psychosis and patients with substance-induced psychosis who were admitted to emergency rooms tended to show steady improvement despite minimal use of mental health and substance abuse services. His study, published recently in the American Journal of Psychiatry, compared psychosis treatment and recovery in 217 patients with early-phase primary psychosis complicated by substance use versus 134 patients with early-phase substance-induced psychosis. Results showed that patients with primary psychosis were more likely to be treated with antipsychotic and mood-stabilizing medications, to be hospitalized, and to visit psychiatrists as outpatients. In contrast, patients with substance-induced psychosis mainly received outpatient substance abuse treatments and addiction medications. Overall, patients made steady progress despite minimal use of these mental health and substance abuse services, with the substance-induced psychosis group using the services even less than the primary psychosis group did. The researchers suspect that the substance-induced psychosis patients were unlikely to look for treatment beyond their original ER visit because of their lack of insurance, awareness, or even need. Further research will explore more “adaptive treatment strategies” for substance-induced psychosis patients, as no studies exist on issues like the length of time for which antipsychotic medication should be prescribed after the initial psychotic episode.


MEDICINE

Autoimmune Diseases: A Rising Epidemic
Yoo Jung Kim ’14

Image retrieved from http://pharmrev.aspetjournals.org/content/59/4/289/F13.large.jpg (Accessed 8 May 2011).

German hematologist and immunologist Paul Ehrlich, the 1908 Nobel Laureate in Medicine. His theory against the possibility of an autoimmune response remained influential for over 50 years.

At the dawn of the twentieth century, Paul Ehrlich, an illustrious German hematologist, immunologist, and 1908 Nobel Laureate in Medicine, posited a biological theory of horror autotoxicus: the unwillingness of the organism to endanger itself by forming toxic autoantibodies. In other words, an organism’s immune system could not develop an autoimmune response. Ehrlich’s theory would remain a widely accepted canon in the then-fledgling field of immunology, despite ample evidence to the contrary published by his scientific contemporaries and later biologists. Their findings suggested that autoimmune reactions are responsible for a wide range of disorders, including paroxysmal cold hemoglobinuria, sympathetic ophthalmia, ocular inflammation due to lens antigens, some hemolytic anemias, and certain encephalitides (1). Ehrlich’s postulation would linger on for more than five decades, until it was finally debunked by a discovery made by a young scientist named Noel Rose, now the Director of the Johns Hopkins Autoimmune Disease Research Center. Working as a part-time research assistant in the 1950s, Rose injected thyroglobulin, a major protein constituent of the thyroid gland, back into the thyroids of the rabbits from which it was originally derived, and noticed that the rabbits soon developed lesions in their thyroid tissue—a sign deduced to be an autoimmune response similar to that of a known condition called Hashimoto’s Thyroiditis (3). Yet, despite Rose’s discovery, over a decade passed before autoimmunity became a commonly accepted precept; the damage was done. The time it took the scientific community to fully accept the reality of autoimmunity delayed the translation of these findings into medical knowledge, with grave implications for the diagnosis of autoimmune diseases today.

The Immune System and Autoimmunity

The immune system is a highly regulated biological mechanism that identifies antigens from foreign substances found in an organism’s body and reacts to these potential pathological threats by producing lymphocytes, a class of white blood cells, and antibodies that can destroy or neutralize germs, toxins, and other foreign agents (2). Typically, the immune system is able to distinguish foreign agents from the organism’s own healthy cells and tissues. Autoimmunity, on the other hand, describes a condition in which an organism fails to recognize its own cells and tissues as “self,” allowing the immune system to mount a response against its own components (2). A low degree of autoimmunity is an integral part of an effective immune system. For example, low-level autoimmunity has been demonstrated to be a possible factor in reducing the incidence of cancer through versatile CD8+ T cells, which kill target self-cells by releasing cytokines capable of increasing the susceptibility of target cells to cytotoxicity, or by secreting chemokines that attract other immune cells to the site of autoimmunity (8).

Autoimmune Diseases

Autoimmune diseases occur when there is an interruption of the usual control process, allowing the immune system to malfunction and attack healthy cells and tissues (9). A common example is Type 1 Diabetes, which affects nearly a million people in the United States; it is a condition in which the pancreas does not produce enough insulin to control blood sugar levels, due to the autoimmune destruction of the insulin-producing pancreatic β cells (10). Other common autoimmune disorders include rheumatoid arthritis, systemic lupus erythematosus (lupus), and vasculitis (9). Autoimmune disorders typically fall into two categories: systemic and localized. Systemic autoimmune diseases are associated with autoantibodies that are not tissue specific, and the spectrum of damage may affect a wide range of tissues, organs, and cells of the body. Localized autoimmune diseases are associated with organ-specific conditions that affect a single organ or tissue. However, the boundary between systemic and localized disorders often blurs over the course of the disease, as the effect and scope of localized autoimmune disorders frequently extend beyond the initially targeted areas (6). The onset of autoimmune disease is associated with a trigger, which can be pulled in numerous ways. In one possible example, a substance in the body that is normally confined to a specific area may be released into another area due to internal trauma; the translocation may stimulate the immune system to recognize a natural body component as foreign and trigger an autoimmune response. In another scenario, a normal component of the body may be altered by a virus, a drug, sunlight, or radiation; the altered substance may then appear foreign to the immune system. Very rarely, a foreign substance resembling a natural body component may enter the body, inducing the immune system to target both the similar body substance and the foreign substance (11). Just as the triggers for autoimmune disorders are wide and varied, so are their effects. The debilitating effects of various autoimmune disorders include the destruction of a specific type of cell or tissue, the stimulation of excessive growth, or interference with function. Organs and tissues affected by the more common autoimmune disorders include components of the endocrine system, such as the thyroid, pancreas, and adrenal glands; components of the blood, such as red blood cells; and the connective tissues, skin, muscles, and joints (9).

Treatments for Autoimmune Diseases

Since cures are currently unavailable for most autoimmune disorders, patients often face a lifetime of debilitating symptoms, loss of organ and tissue function, and high medical costs (5). For many autoimmune disorders, the goal of treatment is to reduce chronic symptoms and lower the level of immune system activity while maintaining the immune system’s ability to fight foreign contaminants. Treatments vary widely and depend on the specific disease and its symptoms. For example, those afflicted with Type 1 Diabetes must replenish their insulin levels, usually through injections; in such diseases, patients may need supplements to provide a hormone or vitamin that the body is lacking. If the autoimmune disorder directly or indirectly affects the blood or the circulatory system, as in autoimmune hemolytic anemia (AIHA), lupus, and antiphospholipid antibody syndrome, patients may require blood transfusions. In autoimmune disorders that impair mobility or affect the bones, joints, or muscles, such as multiple sclerosis (MS) and rheumatoid arthritis, patients often require assistance to maintain mobility or medication to suppress pain and reduce inflammation in affected areas (12). In many cases, medicine is prescribed to control or reduce the immune system’s response. Such medicine may include corticosteroids and immunosuppressant drugs, such as azathioprine, chlorambucil, cyclophosphamide, cyclosporine, mycophenolate, and methotrexate (11).

Autoimmune Disorders: A Women’s Disease?

Approximately one-third of the risk of developing an autoimmune disease can be attributed to heritable factors, especially sex. Women account for about 75% of the estimated 23.5 million people in America afflicted by autoimmune diseases, and autoimmune diseases constitute some of the leading causes of death and disability in women below 65 years of age (5, 13). While the relationship between sex and the prevalence of autoimmune disorders remains unclear, researchers have noted that women have higher levels of antibodies and mount larger inflammatory responses than men when their immune systems are triggered, possibly increasing the risk of autoimmunity (3, 13). Autoimmune diseases also tend to fluctuate with hormonal changes, such as those during pregnancy, the menstrual cycle, menopause, aging, and use of birth control pills (3). Autoimmune diseases vary along racial lines as well: two gene variants have been found that are related to an increased risk of lupus among African American women (5). Despite the prevalence of female patients, autoimmunity is rarely discussed as a women’s health issue (7).

Environmental Triggers

Besides genetic factors, pathological and environmental factors play a role in initiating or exacerbating certain autoimmune disorders. For example, the product of a human gene that confers susceptibility to Crohn’s disease recognizes components of certain bacteria, and viral infections have long been suspected as triggers of Type 1 Diabetes. Other research suggests that viral infections reduce the numbers of regulatory T cells that normally hold potentially destructive immune responses in check (5). Exposure to various synthetic chemicals and metals may also increase susceptibility to autoimmune disorders. Although metals generally inhibit immune cell proliferation and activation, mercury, gold, and silver, for example, can induce lymphocyte proliferation and subsequent autoimmunity. A broad range of synthetic chemicals, including hormone supplements, hormone blockers, pesticides, insecticides, fungicides, and food and herbal products, may elicit estrogenic or anti-estrogenic activity (5).

A Rising Epidemic and Challenges in Combating Autoimmune Disorders

In 2005, in a report to the US Congress entitled Progress in Autoimmune Disease Research, the National Institutes of Health (NIH) reported that more than eighty known autoimmune diseases, such as multiple sclerosis, Type 1 Diabetes, rheumatoid arthritis, Crohn’s disease, and myasthenia gravis, affect anywhere from 14.7 to 23.5 million people in America (4, 5). Yet even though a significant portion of Americans suffer from autoimmune disorders, diagnosis remains difficult: according to a 2001 survey by the Autoimmune Diseases Association, over 45 percent of patients with autoimmune diseases were labeled chronic complainers in the earliest stages of their illness. A patient’s early symptoms are likely to be vague and to fluctuate, so a typical patient usually undergoes a series of tests before a correct diagnosis is made, a process that can sometimes take years (7).

Conclusions

Considering the prevalence of autoimmune disorders in America, both the scientific community and the public must recognize the urgency of the autoimmune epidemic. Today’s practicing physicians must be aware of the rising epidemic of autoimmune disease and be able to diagnose its symptoms properly and accurately before irreversible damage occurs. In addition, since the patients afflicted with autoimmune disorders are predominantly women, these disorders should be discussed as an issue relevant to women’s health. Due to the vast number of autoimmune diseases, further research is unlikely to pinpoint a single cause for autoimmunity. Rather, research should focus on defining the varied common triggers, including environmental pollution and specific pathological agents.

References
1. A. Silverstein, Nat. Immunol. 2, 279-281 (2001).
2. Multiple Sclerosis Glossary (2010). Available at http://www.ucsfhealth.org/education/multiple_sclerosis_glossary/index.html (April 2011).
3. D. J. Nakazawa, The Autoimmune Epidemic (Touchstone, New York, ed. 1, 2008).
4. Progress in Autoimmune Disease Research (2005). Available at http://www.niaid.nih.gov/topics/autoimmune/Documents/adccfinal.pdf (April 2011).
5. Autoimmune Disease (2011). Available at http://www.nlm.nih.gov/medlineplus/autoimmunediseases.html (April 2011).
6. Autoimmune Disorders (2007). Available at http://www.labtestsonline.org/understanding/conditions/autoimmune.html (April 2011).
7. Autoimmune Disease in Women (2011). Available at http://www.aarda.org/women_and_autoimmunity.php (April 2011).
8. U. Walter, P. Santamaria, Curr. Opin. Immunol. 6, 624-631 (2005).
9. Questions & Answers (2011). Available at http://www.aarda.org/q_and_a.php (April 2011).
10. A. B. Notkins, A. Lernmark, J. Clin. Invest. 9, 1247-1252 (2001).
11. P. L. Cohen, Merck Manual of Medical Information (2007). Available at http://www.merckmanuals.com/home/print/sec16/ch186/ch186a.html (April 2011).
12. Autoimmune Disorders (2007). Available at http://health.nytimes.com/health/guides/disease/autoimmune-disorders/overview.html#Treatment (April 2011).
13. K. McCoy, Women and Autoimmune Disorders, Every Day Health (2009). Available at http://www.everydayhealth.com/autoimmune-disorders/understanding/women-and-autoimmune-diseases.aspx (April 2011).



Interview

Marcelo Gleiser

Dartmouth Professor of Physics and Astronomy
Andrew Zureick ’13

Image retrieved from http://www.dartmouth.edu/~mgleiser/graphics/gleiser.jpg (Accessed 8 May 2011).

Professor Marcelo Gleiser.

The DUJS talked to Marcelo Gleiser, Dartmouth professor of physics and astronomy, who has been a part of the Dartmouth College faculty since 1991. He currently teaches Physics 1, Understanding the Universe; Physics 16, Introductory Physics II (Honors); and Physics 92, Physics of the Early Universe.

What was your path to becoming a professor at Dartmouth?

I did my PhD in theoretical physics at the University of London, King’s College. In my area, you have to do some post-doctoral fellowships before you can apply for professorship. I did two post-doctoral fellowships, where you are paid to do research in a group. One was at Fermilab, a high-energy physics lab close to Chicago. The other was at the University of California at Santa Barbara [at the Institute for Theoretical Physics]. I came here as an assistant professor a long time ago [1991].

What are some of the big mysteries surrounding the origins of the cell, life, and chirality?

As a physicist, I’m very interested in fundamental questions about nature. In particular, the origins questions. I have spent much of my life thinking about the origin of the universe, and the origin of matter in the universe. A few years back I asked: how does matter organize into living matter? How do you go from atoms and molecules to living atoms and molecules? This transition from non-living to living is one of the most fascinating and completely open questions in science. Where does life come from, or how does the self-organization of biochemical reaction networks actually become a living thing? All living systems have proteins, and all of these proteins are made of amino acids. If you synthesize all of the amino acids, like alanine, in the laboratory, you will get a mixture of 50% that are “left-handed” and 50% that are “right-handed,” which says something about the spatial structure of those molecules. They come in two possible conformations. They can be either in the “left-handed” form or “right-handed” form, like your two hands. Hands are not superimposable on each other, so you cannot put a right hand on a left hand or shake a person’s left hand with your right hand. These molecules are like that too. It turns out that these two forms are non-superimposable mirror images of each other. So, you would expect life to have both left-handed and right-handed amino acids. But when you look at the proteins of living things, from bacteria to people, they are all left-handed. Why did life choose a specific kind of chirality—chiral, from the Greek for “hand”—to work? Nobody knows! I am always very interested in these kinds of asymmetries, because I think asymmetries are the key to understanding the origin of complexity in nature, or the complex structures we see in nature, from DNA structures to hurricanes. They have to do with some kind of asymmetry. The origin of life seems to be much related to this question of chirality. It is an open question. If you talk to people, some will say you need chirality to get life, while others believe you need life and then you can get chirality. I am part of the team that believes you need chirality to get life. If you look at the structures within a cell like the nucleic acids that make up DNA and RNA, they also have chirality. In their case, it has to do with the sugars that form the backbone of these molecules; they’re always right-handed. So, you have left-handed proteins and right-handed DNA. These two seem to be connected with a key and lock mechanism to make the biochemistry of life possible. To me, it is interesting, coming from physics, not from biochemistry, that there are actually some things we can say about what’s going on. These are the questions that are fascinating. However, since I am a cosmologist by training and interested in the origin of the universe, I always look back at the beginning of things. I think about traveling through time back four billion years in the history of the earth to when there was no life. What was here? Was it real mass? It was hot, and the oceans, if they formed, would have evaporated really quickly. In Darwin’s “warm little pond,” chemical reactions began to take place at higher rates without being so disturbed by the environment, but still being influenced by it. Eventually, this first living thing—and by living, I mean a self-supporting, self-organizing chemical reaction network capable of metabolism and duplication—came about. That is the most essential definition of life; nobody really agrees on what life even is. An operational definition of life is a “self-supporting chemical network capable of metabolism and duplication that is absorbing energy from the environment and putting energy out.” Of course, you can have living things that don’t multiply; there are some problems with these definitions. At what level of chemical complexity can something that is non-living become living? Can you draw some sort of transition there?

Image retrieved from http://en.wikipedia.org/wiki/File:L-alanine-3D-balls.png (Accessed 8 May 2011). Adjustments by Chen Huang ’12.

L- and D-Alanine are non-superimposable mirror images of each other.

I’ve read about the endosymbiotic theory as it pertains to the origin of eukaryotic cells—are there any other widely accepted theories?

You start with these chemical reactions, but they need to be protected from the environment in order to work well. The external environment can be too messy. That’s where the cell or “protocell” comes in—you will be creating some sort of veil that will isolate interesting catalytic, metabolic reactions from possible interference from the outside. You need not only the self-interactions on these networks, but also the protection. How does that happen? Why would it happen in the first place? There are many people who have been looking at that, and David Deamer of the University of California at Santa Cruz talks about little lipid drops that could serve as the environment where cells are born. Alexander Oparin was a Russian scientist from the 1920s and ’30s who dreamed up this whole way of thinking about how life could have appeared on earth. Initially, there are chemical reactions that are isolated in little bubbles, and these bubbles can collide with each other. If they split and have enough chemicals in them they can become protocells themselves. Whoever has the most efficient metabolic system would win. He starts thinking about these things and people begin modeling them. Along with the handedness of chemicals and the self-organization of these autocatalytic chemicals, which can make more of themselves, you need these protective environments. I have a paper with my PhD student, Sara Walker. She’s currently a NASA astrobiology fellow studying the origin of life. You can start with a reaction network of simple chemical reactions that self-organize into little bubbles. The chemicals like to be inside little bubbles, and the chemicals inside the bubble are chiral. So, you have little chiral networks of reactions inside little bubbles, and we call these things protocells. All we did was start with simple (perhaps, not so simple) non-linear reactions that people use to describe polymerization, and we obtain these very interesting asymmetric solutions.

Image retrieved from http://commons.wikimedia.org/wiki/File:Gravitationell-lins-4.jpg (Accessed 8 May 2011).

A Hubble Space Telescope image of gravitational lensing caused by the Abell 1689 cluster. Dark matter accounts for most of the mass contained within the cluster.

What exactly are dark energy and dark matter?

If you study the recipe of the universe, and what the universe is made up of today, the stuff we’re made up of—atoms, molecules, protons, and electrons—makes up only 4%! The rest, 96%, is other stuff. Of this other stuff, 23% is what we call dark matter, which is made of particles (little pieces of matter like protons) that do not interact with electric charges and do not interact with the forces that keep a nucleus together. All we know is that they have mass and they interact by gravity; we don’t know what they are at all. We don’t even know if they exist, but these models fit well with what people see in galaxies. They’re called dark because they don’t produce visible light, as opposed to stars. They produce invisible light like infrared radiation. How do you know dark matter is there? One way is to look at galaxies and see how galaxies rotate: to explain the speed at which the stars are rotating in a galaxy, especially the outer stars, you need the galaxy to have an all-enveloping layer made of dark matter particles. You can think of the galaxy as what you see, with this invisible cloak of surrounding dark matter particles that make up six to ten times more mass than the stars themselves. Then you have dark energy, the remaining 73%, which is even more mysterious than dark matter. We have only known of dark energy since 1998, so it is very recent. How did we know about it? People look at very far away galaxies, like five billion light years away from us, about halfway across the universe. What they find there is that they have stars that can explode like supernovae, stars that make a big bang each time they die. People are able to see these, even though they are very far away, because supernovae are very bright. They look for Type Ia supernovae, and they realize they are lodged in galaxies that are moving away from us much faster than we would expect. We knew, since 1929, that the universe was expanding, but we did not know it was expanding this fast. What could be making the universe expand so fast? There’s some sort of force pushing matter apart. It is the geometry of space that stretches. When you think of cosmology, you think of space as a rubber sheet, and this rubber sheet has been stretched out faster than expected. What is going on, why is that happening, and what could be causing it? We have a few candidates; the most plausible one is that this stuff that is pushing the universe apart comes from the fluctuations of energy of space itself. In physics, especially quantum physics, nothing stands still. Everything is always oscillating. There is some kind of residual energy there. Possibly, if you add up all this residual energy of stuff across the whole volume of the universe, you get this effect that may be pushing the universe apart. This thing is called dark energy. Ask me again in 10 years, and maybe we will have a better idea.

Image courtesy of Marcelo Gleiser.

Could you tell me about your book, A Tear at the Edge of Creation?

This is my third book published in English; it came out last year. It is several things: for one, it is a critique of our idea that we can find a theory that can explain all there is, the so-called “Theory of Everything.” People like Stephen Hawking and Brian Greene write about this theory a lot. There is this notion that science can come up with a single, all-encompassing explanation of why the world is the way it is. For physics in particular, this theory would explain why the particles of matter interact the way they do. What I’m saying is that there is no reason whatsoever to believe that such theories exist, or make sense. I go on to show that, culturally, this idea that everything comes from a single source can be traced back to monotheistic notions, such as “all is one because all comes from God.” I think that is just a prejudice that we have. When we look at nature, we don’t see any evidence that it is true. We see simplification; that’s what science is about, trying to simplify complicated things, but not to the level of finding a single explanation that is behind everything. The book is a critique of that. In fact, it shows that, instead of looking for this all-encompassing super-theory and super-symmetry, we should really be looking at asymmetry and broken symmetries as the driving engine behind all that is interesting in nature. The book has several parts. One is called the asymmetry of time, in which I talk about cosmology and why time goes forward. I talk about dark energy and dark matter, and the asymmetry of matter. There is something called antimatter in nature; we only see matter, and we do not see antimatter unless we make it in a lab. I also talk about the asymmetry of life, that is, the chirality and the origin of life on earth, and why chirality is so important. I talk about what this all means to us on this planet: the more we study life, the more we realize that complex life, like multicellular organized life, is probably very rare because it depends on a series of conditions which are difficult to satisfy in the universe. It’s not just that the planet has to have water, carbon, nitrogen, hydrogen, and oxygen. It requires much more than that to have long-living, complex multicellular organisms. That brings us back to the center of things. Obviously we are not at the center of the universe, but we are incredibly important in the big scheme of things because we are self-conscious, very sophisticated living things. Hence, I try to bring human life back to its place where we are important, and our role is to preserve life. The book ends with an ecological manifesto in which people become the guardians of life. We have a mission. The book also talks about aliens and the possibility of alien life. I find it very hard to believe. There may be other intelligences out there, but if there are, they are so far away that we will never know. We are here alone, and because of that, we have some things we have to take care of.


CELL BIOLOGY

Henrietta Lacks and Her “Immortal” Cells
Hunter Kappel ’14

In 1951, a scientist at the Johns Hopkins University Medical Center created the first immortal human cell line using a tissue sample taken from Henrietta Lacks, a young black woman with cervical cancer. Her cells have had a groundbreaking impact on modern medicine (1). Henrietta Lacks, born in 1920 in rural Virginia, was a poor, illiterate tobacco farmer. She was the great-great-granddaughter of slaves, and she died at the age of 31, leaving behind five children. No obituaries of her appeared in newspapers, and she was buried in an unmarked grave (2). To scientists, Henrietta Lacks became known as HeLa, from the first two letters of her first and last names. The cells taken from her cervix were the first “immortal” human cells to grow in culture (2). Lacks’ husband took her to Hopkins in 1951, as it was the only major hospital near their home that treated black patients (3). Doctors at Hopkins diagnosed her with cervical cancer, specifically “Epidermoid carcinoma of the cervix, Stage I.” Cancer originates from a single cell gone wrong and is categorized based on that cell type; Henrietta Lacks developed a carcinoma, a type of cervical cancer that grows from the epithelial cells that cover and protect the surface of the cervix. When Henrietta Lacks visited Johns Hopkins, doctors at the hospital were involved in a nationwide debate over what constitutes cervical cancer and how best to treat it (3). Cervical carcinomas are divided into two types: invasive and noninvasive. In 1951, most doctors believed that invasive carcinomas were fatal and that noninvasive carcinomas, or carcinomas in situ, were not. Doctors aggressively treated the invasive type and generally did not worry about the noninvasive type because they thought that it could not spread. Richard TeLinde, one of the top cervical cancer experts in the country, disagreed.

He believed carcinoma in situ was an early stage of invasive carcinoma (3). TeLinde reviewed all medical records from patients who had been diagnosed with invasive cervical cancer at Hopkins in the past decade to see how many initially had noninvasive carcinomas. TeLinde often used patients from the public wards for research, usually without their knowledge (3). He found that 62 percent of women with invasive cancer first had noninvasive carcinomas. TeLinde then tried to grow living samples from normal cervical tissue and from both types of cancerous tissue in order to compare all three. He contacted the head of tissue culture research at Hopkins, George Gey (3). Gey was “determined to grow the first immortal human cells: a continuously dividing line of cells all descended from one original sample, cells that would constantly replenish themselves and never die” (3). Thus, when TeLinde offered Gey a supply of cervical cancer tissue, Gey gladly attempted to grow living samples from it. TeLinde began collecting cervical cancer tissues from all women with cervical cancer who visited Hopkins, including Henrietta Lacks (3). Lawrence Wharton Jr., a surgeon at Hopkins, proceeded to treat Henrietta Lacks’ invasive carcinoma with radium. Before treating her tumor, however, he collected samples of both her cancerous and healthy cervical tissues (3). When Wharton finished operating on Lacks, he wrote in her chart, “The patient tolerated the procedure well and left the operating room in good condition.” He also wrote, “Tissue given to Dr. George Gey” (3). Gey successfully grew a culture of Lacks’ cancerous cells. According to journalist Rebecca Skloot, author of The Immortal Life of Henrietta Lacks, “Henrietta’s cells weren’t merely surviving, they were growing with mythological intensity” (3).

Following Lacks’ death in 1951, doctors began planning a massive operation to produce trillions of HeLa cells each week: a HeLa factory. One of the primary purposes of starting such a factory was to help stop polio (3). HeLa cells improved and standardized the field of tissue culture. Doctors froze HeLa cells and, for the first time, closely examined cell division; freezing was the first of several major improvements HeLa cells brought to the field of tissue culture. Besides freezing HeLa cells, doctors also cloned them—they were the first human cells to be cloned (3). The early cloning technology that started with HeLa cells led to many other advances that also required the ability to grow cells in culture, including isolating stem cells, cloning entire animals, and in vitro fertilization (3).

Image retrieved from http://commons.wikimedia.org/wiki/File:Hela_Cells_Image_3709-PH.jpg (Accessed 8 May 2011).

HeLa cells undergoing division.

HeLa cells also led to advances in human genetics. Scientists had long incorrectly believed that human cells contained forty-eight chromosomes, and they struggled to get an accurate count because chromosomes clumped together. In 1953, however, a geneticist in Texas mixed a liquid with a HeLa cell; the chromosomes inside the HeLa cell spread out, and he was able to see each of them clearly (3). As a result of the discovery in Texas, two other geneticists, from Spain and Sweden, established that the normal human cell has 46 chromosomes. Once scientists knew how many chromosomes a cell should contain, they could tell when a person had a surplus or deficit of chromosomes, which in turn made it possible for doctors to diagnose genetic diseases. Researchers began identifying and classifying chromosomal disorders, discovering that the cells of patients with Down’s syndrome, Klinefelter syndrome, or Turner syndrome all contained either too many or too few chromosomes (3). Scientists also exposed HeLa cells to radiation to better understand the effects of nuclear radiation on human cells, and they spun HeLa cells in centrifuges at forces 100 times that of gravity to examine what happens to cells under the conditions of spaceflight and deep-sea diving (3). HeLa is not the only cell line used in research today. Even though it is the most commonly used cell line, other common cell lines and their origins include 3T3 (mouse embryo), MCF7 (69-year-old woman), VERO (African green monkey), JURKAT (14-year-old boy), HEK-293 (human embryo), HT-29 (44-year-old woman), COS-7 (African green monkey), MDCK (Cocker Spaniel), and LNCaP (50-year-old man) (4). Scientists have used HeLa cells to make advances in virology, polio research, scientific standards, live cell transport, genetic medicine, cloning, the for-profit distribution of cells, space biology, genetic hybrids, ethics, salmonella, HPV, HIV, telomerase, tuberculosis, and nanotech (4). In 1952, researchers infected HeLa cells with many diseases, such as mumps and measles, which led to the creation of the modern field of virology. Researchers also discovered that the cells were susceptible to polio; they used the cells in testing Salk’s vaccine, the largest vaccine field trial to date. The cells were grown in bulk and used to test the glass used in beakers and slides.

Scientists also discovered a way to transport live cells and therefore mail them around the world. In 1953, geneticists discovered that when a stain called hematoxylin is mixed with a HeLa cell, the chromosomes contained by the cell become visible. In 1954, scientists developed a method for keeping single cells alive long enough to replicate them; HeLa cells thus allowed for many advances and developments in the cloning of human cells (4). In 1954, Microbiological Associates began commoditizing HeLa cells and mass-producing them. In 1960, HeLa cells were sent into space in a Soviet satellite prior to the flight of any astronauts. NASA later included HeLa cells in its first manned missions and discovered that cancer cells grow faster in space (4). In 1965, scientists fused HeLa cells with mouse cells and created the first cross-species hybrid, which allowed for advances in the field of gene mapping (4). HeLa cells also drove advances in medical ethics: it was after scientists injected patients with cancer cells to discover how cancer spreads that medical review boards and informed consent by patients were both institutionalized (4). In 1973, scientists used HeLa cells to better understand the invasiveness and infectiousness of salmonella and to study its behavior inside human cells (4). In 1984, more than thirty years after Lacks’ death, German virologist Harald zur Hausen helped uncover how Lacks’ cancer started and why her cells never died. He discovered a new strain of a sexually transmitted virus called Human Papilloma Virus 18 (HPV-18) and used HeLa cells to prove that HPV-18 causes cancer (3). A molecular biologist named Richard Axel used HeLa cells to determine what is required for HIV to infect a cell; he infected HeLa cells with HIV and discovered that HIV can infect cells other than blood cells. This was an important step toward understanding and potentially stopping HIV (3). In 1989, a scientist at Yale University explained the mechanics of HeLa’s immortality, using the cells to show that they contain telomerase, an enzyme that rebuilds a cell’s telomeres; the presence of this enzyme prevents HeLa cells from dying (3). In 1993, scientists exposed HeLa cells to M. tuberculosis to learn how the disease attacks human cells (4). In 2005, researchers used HeLa cells to test nanotechnology, injecting them with iron nanowires and silica-coated nanoparticles (4). Scientists have used HeLa cells to develop many vaccines. They have exposed the cells to radiation, cosmetics, drugs, household chemicals, viruses, and biological weapons. Without HeLa cells, we would not have tests for diseases such as polio, HIV, and tuberculosis, and we would not be able to test potential drugs for breast cancer and leukemia. Without HeLa cells, the human biological materials industry would be short millions of dollars. HeLa cells have become essential to the groundbreaking biological research that continues today.

References
1. S. Zielinski, Henrietta Lacks’ “Immortal” Cells (2010). Available at http://www.smithsonianmag.com/science-nature/Henrietta-Lacks-Immortal-Cells.html (20 March 2011).
2. D. Garner, A Woman’s Undying Gift to Science (2010). Available at http://www.nytimes.com/2010/02/03/books/03book.html?_r=1 (20 March 2011).
3. R. Skloot, The Immortal Life of Henrietta Lacks (Crown Publishers, New York, 2010).
4. E. Biba, Henrietta Everlasting: 1950s Cells Still Alive, Helping Science (2010). Available at http://www.wired.com/magazine/2010/01/st_henrietta/ (20 March 2011).



BOTANY

Artificial Photosynthesis

Looking to Nature for Alternative Energy
Thomas Hauch ’13

Image retrieved from http://www.nellis.af.mil/shared/media/photodb/photos/070731-F8831R-001.jpg (Accessed 9 May 2011).

Photovoltaic technology has yet to produce significant breakthroughs.

Like it or not, we are on the fast track to a global energy crisis. Whatever one believes about a changing climate or dying species, we simply cannot ignore the need for new, alternative sources of energy. Over the next few decades, shortages of fossil fuels will become common throughout the world (1). Within several hundred years, even with new discoveries, the supply of these fuels will run out entirely. That we will one day deplete the earth of coal, oil, and natural gas is hardly a revelation. Yet, with each passing year, policymakers remain oblivious to this truly inconvenient truth. In the 21st century, renewable energy must begin to assume a greater role in fulfilling our needs. Among the various possibilities that exist today, solar power remains the most viable option in the very long term. Although it is a relatively underdeveloped technology (it currently supplies only about 0.01% of the world’s energy demand), it is perhaps the most promising for future generations (2). Sunlight is the foundation of all known life, delivering more than 120,000 terawatts of constant power, far more than any conceivable future demand (3). Solar power produces zero carbon dioxide (CO2), zero pollutants, and zero noise. It supplies usable energy in its most pure and simple form.

At present, however, solar power is not a viable alternative to fossil fuels (2). Photovoltaic devices, which convert sunlight directly into electrical energy, have not evolved significantly since they were invented in the mid-20th century. They remain expensive and have found widespread use in only a few places, like outer space (4). There are, of course, other solar energy technologies available. Solar thermal systems, in which sunlight is concentrated in order to drive a heat engine, have actually begun to outpace sales of photovoltaics. But in terms of space and cost, these systems are no more likely to replace fossil fuels in the near future (2). Government incentives can and should encourage the use of solar power, but until it becomes commercially competitive on its own, consumers are not going to make a large-scale transition.

Solar power has other shortcomings as well. Sunlight is a time- and weather-dependent energy source (3): it works well, that is, until the sun goes down or clouds fill the sky. If it is to fulfill a substantial part of future needs, then we must find ways to store this energy effectively. We could use existing batteries or even design new ones, but this merely displaces the problem by adding unnecessary costs and complexities. A technology known as “artificial photosynthesis,” however, might be the solution. The idea was born from a branch of research called biomimicry, which seeks to emulate the systems that already exist in nature. The goal of artificial photosynthesis is to exploit the basic chemical pathways of photosynthesis in order to create hydrogen, methanol, and other “clean” fuels (5). In theory, artificial photosynthesis would not only entail zero emissions, but could actually “mop up” residual CO2 in the lower atmosphere while creating a storable form of energy (3). The evolution of photosynthesis is what made life for all animals possible.

As humbling as it might seem, we owe our very existence to just a few elegant processes that take place at the cellular level. In the so-called light reactions, sunlight is absorbed by chlorophyll a, a complex and highly specific pigment found in all photosynthetic organisms (6). The absorption of sunlight excites electrons in chlorophyll, and in turn, this "excitation energy" is used to oxidize water and generate the molecules ATP and NADPH. During the light-independent reactions, carbon dioxide is converted into carbohydrates using the energy stored in ATP and NADPH. In order to apply this process to our energy needs, we need only a few basic modifications. Instead of oxygen and glucose, an artificial system would probably release hydrogen gas, methanol, or some other type of clean fuel. As it stands, we already have the technology to capture sunlight, and we already have fuel cells to generate electricity. What remains to be seen, however, is an effective means of breaking down water using sunlight. The oxidation of water molecules, which takes place during the light reactions, is one of the essential processes of photosynthesis. It provides the electrons necessary to convert carbon dioxide into glucose, and it creates oxygen gas as a byproduct.
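The water-splitting scheme just described can be written out explicitly. These are standard electrochemistry half-reactions, not equations taken from the cited studies; an artificial system would drive the first at its catalyst and pair it with a proton-reduction step to yield hydrogen fuel:

$$2\,\mathrm{H_2O} \;\rightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \qquad \text{(water oxidation)}$$

$$4\,\mathrm{H^+} + 4\,e^- \;\rightarrow\; 2\,\mathrm{H_2} \qquad \text{(proton reduction)}$$

$$\text{Net: } 2\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2}$$

The four electrons consumed per O2 molecule in the first half-reaction are what make a fast catalyst so important.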

As in nature, this process requires a fast and efficient catalyst. Unlike plants, however, we face the additional burden of making sure this catalyst is cost-effective. Not surprisingly, current research in artificial photosynthesis is focused almost entirely on finding a suitable catalyst (7). Green plants perform the breakdown of water within a complex of proteins called Photosystem II, in which manganese-containing enzymes serve as catalysts for oxidation (6). Manganese-based complexes modeled on Photosystem II have shown some promise as catalysts for water oxidation, but they are not particularly robust and degrade rapidly in water (7). Moreover, although manganese is efficient enough for plants and small organisms, it probably would not put much of a dent in mankind's incredible appetite for energy. Just what kind of catalyst do we really need? To put the idea into numbers, a suitable catalyst must be able to keep up with solar flux at ground level, which is approximately 1,000 W/m² (3).
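A rough, illustrative calculation (ours, with round numbers; the cited studies report the actual turnover measurements) shows what that flux implies. Taking an average absorbed photon energy of about 2 eV ($\approx 3.2 \times 10^{-19}$ J), a flux of 1,000 W/m² corresponds to

$$\frac{1000\ \mathrm{W\,m^{-2}}}{3.2\times 10^{-19}\ \mathrm{J\,photon^{-1}}} \;\approx\; 3\times 10^{21}\ \mathrm{photons\ m^{-2}\,s^{-1}}.$$

Because water oxidation consumes four photoexcited electrons per O2 molecule, a catalyst layer keeping pace with the sun must turn over on the order of $10^{21}$ electrons, or roughly $8\times 10^{20}$ O2 molecules (about 1 mmol), per square meter per second.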

Previous studies have shown iridium oxide and platinum to be capable catalysts (7). Due to their costs (iridium is among the rarest elements in the earth's crust), however, neither one is particularly suitable for use on a large scale. Using an abundant transition metal, like cobalt or manganese, is imperative for creating a cost-efficient platform. Very often, however, these sacrifices require modifications throughout the rest of the device. To make up for their limitations, these catalysts must be anchored to some sort of additional, nanostructured support. In a study performed at Berkeley National Laboratory, researchers Feng Jiao and Heinz Frei studied the catalytic abilities of nano-sized crystals of cobalt oxide (Co3O4) embedded in a layer of silicon dioxide (7). Using micron-sized particles, the researchers had little luck with cobalt oxide; however cheap it might be compared to platinum, it was nowhere near fast enough for artificial photosynthesis. They then discovered that reducing the size of these particles can radically alter their electrochemical properties. Compared to larger, micron-sized clusters of cobalt oxide, the nanocrystals yielded 1,600 times more oxygen, and their turnover frequency was on par with solar flux at ground level (7). These drastic improvements, according to the researchers, are likely due to the comparatively large internal surface area of the nanocrystals, although the exact chemistry behind this phenomenon is still unclear. Meanwhile, the next big step for the researchers at Berkeley Lab is to integrate their particles into a more comprehensive system. There are, after all, countless researchers who are already looking beyond the issue of oxidation. These individuals are looking to design so-called "artificial leaves," which would integrate the capture of sunlight, the separation of charges, and the creation of hydrogen or methanol fuel into a single platform.

In a recent report published in 2010, researchers at Shanghai Jiaotong University claimed to have designed the world's first artificial leaf (8). Guided by the principles of biomimicry, they turned to nature (and the leaves of the native Chinese plant Anemone vitifolia) for inspiration. As they discovered, plants are remarkable not only for the microscopic structures that enable light conversion, but for their macroscopic design as well. Plant leaves are true masterpieces of engineering. Lens-like epidermal cells, on the upper surface of leaves, focus incoming sunlight (6). The leaf itself is composed of alternating layers of spongy tissue and highly ordered, column-shaped cells. The columnar cells propagate light deeper into the leaf, while the spongy tissue scatters the light so that more of it is absorbed. Not surprisingly, chloroplasts are concentrated in the spongy layers and almost entirely absent from the columnar cells. In order to mimic a plant leaf, the Shanghai Jiaotong researchers sought to implement a similar, hierarchical structure for efficient light harvesting and charge separation: one that could absorb incident photons, transfer this excitation energy to a donor-acceptor interface where photochemical charge separation takes place, and couple these charges with an appropriate catalyst for the production of hydrogen fuel (8). In their study, the researchers used nitrogen-doped titanium dioxide, a well-studied and widely used photocatalyst for hydrogen production, as the primary support for their structure. The researchers then compared the performance of their "artificial inorganic leaves" (AIL-TiO2) with control TiO2 nanoparticles, which lacked the additional structures. When placed in a 20% methanol solution, the hydrogen production of AIL-TiO2 was eight times greater than that of TiO2 nanoparticles synthesized without templates (8). Other templates showed similar performance levels. By embedding nanoparticles of platinum into the leaf surface, the researchers were able to increase the activity (and, unfortunately, the cost) of the artificial leaves by an additional factor of ten. According to the researchers, nitrogen doping could enlarge the absorption spectrum of the template.

The hierarchical structures modeled from the leaves could also endow the templates with superior light-capturing abilities. Regardless of the explanation, what matters is that the creation of artificial leaves is not just a fantasy. In fact, others have since followed suit. Some, like Daniel Nocera of MIT, claim to have made breakthroughs of their own. In March, Nocera and his team announced the creation of the first "practical" artificial leaf (9). Unlike the artificial inorganic leaves designed in Shanghai, Nocera's leaf requires only sunlight and water, not a particular electrolyte solution. The device is about the size and shape of a playing card. A gallon of water and steady sunlight, according to Nocera, are all it needs to power a house in the developing world for an entire day (9). We have yet to see it in action, but if Nocera's claims are any indication, the field of artificial photosynthesis should yield some exciting developments in the coming years. It is still too early to make predictions about these and other devices. Even if they succeed, these technologies will still require funding and support for large-scale installation. We can only hope that policymakers, as well as private investors, are willing to look past the present, beyond nuclear and "clean coal" and other makeshift solutions, and towards a brighter future.

References
1. P. B. Weisz, Physics Today 57, 47-52 (2004).
2. P. V. Kamat, J. Phys. Chem. C 111, 2834-2860 (2007).
3. P. Hunter, The promise of artificial photosynthesis (2004). Available at http://www.energybulletin.net/node/317 (14 March 2011).
4. Solar Energy Technologies Program: Silicon (2005). Available at http://www1.eere.energy.gov/solar/silicon.html (20 March 2011).
5. D. Gust et al., Accounts Chem. Res. 42, 1890-1898 (2009).
6. M. J. Farabee, Photosynthesis (18 May 2010). Available at http://www2.estrellamountain.edu/faculty/farabee/biobk/BioBookPS.html (2 April 2011).
7. F. Jiao, H. Frei, Angew. Chem. Int. Ed. 48, 1841-1844 (2009).
8. H. Zhou et al., Adv. Mater. 22, 951-956 (2010).
9. Debut of the first practical 'artificial leaf' (17 March 2011). Available at http://www.eurekalert.org/pub_releases/2011-03/acs-dot031811.php (19 March 2011).



GENETICS

Gene Patents

For the Sake of Research or For Profit? Amir Khan ’14

Every individual possesses some inherent risk of developing disease; however, scientists have recently uncovered genetic mutations that confer unusually high probabilities of disease onset. Research into these mutations has illuminated, among many other possibilities, gene therapy as a potential treatment. Unfortunately, as discoveries in "genomics" have continued to accumulate, the ownership and protection of these findings has become a dividing point in both the scientific and non-scientific communities. Such "gene patents," which are issued to researchers for their methods of isolating and testing for specific genes, have sparked considerable controversy.

The Controversy

The legality, as well as the effects, of gene patenting remains a national controversy. District Court Judge Robert Sweet, who invalidated seven gene patents filed by the company Myriad Genetics in early 2010, classifies genes as parts of nature, making them unpatentable. However, Myriad and proponents of patenting strongly disagree with Sweet. They view isolated genes as inventions of their labs, and the US Patent and Trademark Office seems to agree, seeing the purified, isolated form of genes as eligible for patenting. Myriad also believes that gene patents increase funding, on which labs often rely heavily. Doug Calhoun, a New Zealand patent lawyer, agrees with Myriad, believing that invalidating patents will stifle the growth and success of young private research companies. However, US Attorney General Eric Holder and the Department of Justice stand by Sweet's verdict. Furthermore, Mildred Cho, a bioethicist at Stanford, disagrees with Myriad's fears over funding, stating that large-scale government funding for research will remain unaffected and provide sufficient resources for labs to produce successful results.

Image retrieved from http://commons.wikimedia.org/wiki/File:Protein_BRCA1_PDB_1jm7.png (Accessed 8 May 2011).

Structure of the BRCA1 protein.

Other opponents of gene patenting assert that gene patents create a genetic monopoly that hurts the quality of treatment for patients. Francis Collins, director of the National Institutes of Health (NIH), believes that patents reduce our understanding of genetics. Professor Thomas Jack, a molecular biologist at Dartmouth, also believes that patents on genes slow scientific progress. Other opponents believe gene patents decrease the quality and quantity of genetic testing and treatment and inhibit research. With more than 20 percent of the human genome patented, gene patents prevent other scientists from studying these parts of the genome and improving testing and treatment for mutations. Thus, the courts should maintain the invalidation of gene patents; invalidation will allow genetic research to thrive in a state of direct competition that will not only offer diverse options for high-quality testing and treatment but also potentially cure deadly diseases affecting the world.

Breast cancer remains one of the more controversial diseases affected by gene patents. The importance of some genetic mutations, such as those in the genes BRCA1 (breast cancer 1) and BRCA2, lies in their ability to "predispose people to disease or influence their response to a drug" (1). Some BRCA1 and BRCA2 mutations, for example, lead to an 80 percent chance of developing breast cancer and a 50 percent chance of developing ovarian cancer. Such elevated probabilities emphasize the importance of identifying these mutations in individuals with a family history of breast and ovarian cancer. Thus, in 1995, Myriad Genetics began providing such testing after obtaining patents for the isolated BRCA1 and BRCA2 genes and methods of testing for them. These patents were originally protected for 20 years and generated Myriad about "$326 million in annual revenue" (2). However, prominent researchers, lawyers, and ethicists challenged the patents, and in early 2010, the courts invalidated seven of them, covering the isolated genes and diagnostic methods. New York Times journalists Schwartz and Pollack describe the court's justification: "Judge Sweet… ruled that the patents were 'improperly granted' because they involved a 'law of nature.'"



He said that many critics of gene patents considered the idea that isolating a gene made it patentable "a 'lawyer's trick' that circumvents the prohibition on the direct patenting of the DNA in our bodies but which, in practice, reaches the same result" (3). Sweet argued, "Genes, products of nature, fall outside the realm of things that can be patented" (3). He asserted that patents would harm the future of DNA research by discouraging further research and improvements in testing and treatment. Myriad appealed the ruling in June 2010, armed with its own strong opinions as well as those of other supporters of gene patenting. The argument for genetic patenting draws heavily on Diamond v. Chakrabarty, in which the Supreme Court ruled that a living, genetically engineered organism could be patented. Patenting revolves around the idea that "most patent systems protect only inventions, not discoveries," which raises the issue of whether to categorize isolated genes as genetic engineering (4). If isolated genes indeed qualify as man-made genetic engineering, they are classified as inventions, just as in Diamond. Myriad deems its work an invention under the precedent of Diamond, asserting that "the work of isolating the DNA from the body transforms it and makes it patentable" (3). David Kappos, director of the USPTO, agrees with Myriad, declaring that "the purified version of a naturally occurring compound where the purified version does not exist in nature in a pure form is indeed eligible for patent protection" (5). Calhoun concurs, stating that "isolating a gene upgrades it from a discovery to an invention" (6). In Diamond, scientists patented genetically engineered bacteria, and Calhoun finds such patenting equivalent to gene patenting. Calhoun instead blames the word "gene" itself for the controversy over gene patenting, calling it "an evocative word that conjures up an image of the essence of life" (6). Nevertheless, the defendants' arguments did not stand up in court; as previously stated, legal professionals such as Sweet found genes to be inherent parts of nature and their isolation an unpatentable biological discovery, not an invention.

Holder supported this perspective, stating in an amicus brief: "The chemical structure of native human genes is a product of nature, and it is no less a product of nature when that structure is 'isolated' from its natural environment than are cotton fibers that have been separated from cotton seeds or coal that has been extracted from the earth" (7). However, aside from the issue of whether scientists can legally patent genes, proponents of gene patenting also fear the consequences that eliminating gene patents will have on funding. They feel that Sweet's decision could "make it harder for young companies to raise money from investors" (3). Calhoun even questions whether the breast cancer test would exist without patenting, stating that "the company would not have been able to develop tests for breast cancer without the funding it received from investors who believed that Myriad would enjoy the patent exclusively" (6). Myriad and supporters of gene patenting thus believe the patents for their genetic "inventions" encourage research funding and inspire further research into genomics. While funding remains important to researchers, gene patenting is simply not necessary to secure it. According to Mildred Cho, "the majority of genetic diagnostic patents…have issued for discoveries that were funded by the US Government, including two million US dollars to the University of Utah for the identification of the BRCA1 sequence" (8). Government funding through universities and programs will remain unaffected by Sweet's decision and can fit the monetary requirements of many projects, as it did with BRCA1 sequencing. Such active government involvement in funding belies the fear of patent supporters that projects will suffer a dearth of funding because of patent invalidation. Rather, invalidation will improve the quality of genetic testing and advance our knowledge of the relationship between genetics and pathology. Gene patents place strong restrictions on the growth of research in the field of genomics.

Because the scientific community does not fully understand the human genome and the function of every gene, "opening the field of researchers and allowing unimpeded data sharing" will create a realm of collaborative research that can benefit humanity to the fullest extent possible (9). Gene patents do not contribute to this realm, doing "nothing to promote innovation in terms of new technologies or methods for determining disease risk" (9). Rather, gene patents prevent such a cooperative scientific world, for they "could impede the development of diagnostics and therapeutics by third parties because of the costs associated with using patented research" (1). Furthermore, with scientists constantly discovering new variations in different genes, gene patents counter the benefits of individualized medicine by impeding "the understanding of the clinical significance of variants" (8). Collins believes that gene patents do indeed inhibit scientific advancement and prevent cooperation among labs. Collins spurred an influx of genetic discoveries in his role as director of the Human Genome Project; however, he at no point filed for patents on the genes he helped discover. On the contrary, Collins sought to "discourage unwarranted gene patenting, insisting that all information about the human DNA sequences be placed immediately in the public domain" (10). Even with his vast contribution to genetics, Collins believes that these patents pose grave consequences for the future achievements and discoveries of science. Jack asserts that "genes should not be patentable as inventions, especially with the growth of genome sequencing" (11). Thus, as current "genome technologies move towards whole-genome analysis," these gene patents will only serve as obstacles on our path towards understanding the full function and effect of every gene variant (8). Because such an understanding remains distant, different companies can interpret testing results in a variety of ways and can even yield improper results due to lab errors. This premature state of genetic testing only emphasizes the need for the widespread availability and further improvement of testing. However, due to Myriad's monopoly over BRCA testing, patients face a grave "inability to obtain second-opinion testing," which remains crucial considering that "there are differences in how you study mutations, weigh them, and interpret the data" (9,12).


Image retrieved from http://www.nih.gov/about/director/images/Collins_formal_300.jpg (Accessed 8 May 2011).

Francis Collins, director of the National Institutes of Health since 2009, is an opponent of gene patents.

The prevention of "newer methods of more comprehensive BRCA testing," leaving only Myriad's expensive test, also causes patients to suffer financially and negates the potential improvement of testing (8). Despite the fact that "the technology for sequencing or genotyping DNA to look for specific mutations is already highly evolved," gene patents, according to Jack, foster an atmosphere in which "no one else is allowed to develop a new method of testing" (10,11). As Jack has stated, gene patents clearly prevent competition, and such competition is what motivates companies to constantly improve products, delivering "products to the marketplace faster, better, and cheaper" (12). Collins supports such competition, declaring, "It would be better for the public to have competition in the marketplace, in order to provide an incentive for higher quality and lower price" (10). Linda Avey, founder of the personalized genomics company 23andMe, also conveys the important role of competition: "My hope is that [Sweet's] ruling stands and companies will need to actually innovate and create new advances based on genetic findings, not dependent on sole access to them. Rather than relying on obscure patent language and legal strategies, companies will need to develop products that are competitively positioned" (12). Gene patents evidently cause companies to settle into a smug monopoly of gene ownership, thus limiting others from researching how to improve genetic treatments and testing and make them more accessible. These limitations only serve to hurt the diseased patients depending on the results of research. Therefore, research into disease, which directly affects the plight of patients worldwide, must remain free from such limiting patents in order to produce the best results.

Conclusion

As whole-genome sequencing comes within reach, the scientific community needs to cooperate to improve our knowledge of diseases and treatments. Unfortunately, gene patents cause scientists to race to secure genes for massive profit and thus prevent us from achieving that very possibility. The motivations behind gene patents hinder the progression of global health care, even as people suffering from diseases such as AIDS, breast cancer, and cystic fibrosis pile into hospitals. With such a dire need for research, researchers must free themselves from the desire to increase their incomes. Rather, scientists should concern themselves with increasing available treatments for patients and selflessly working for the betterment of our world. As Myriad Genetics has appealed Sweet's verdict, the legal system, as well as the research world, must maintain a strong front against the detriments of gene patenting. As a former Myriad consultant said, the company's "interest in making money had completely subsumed their willingness to be reasonable and collegial" (13). If the invalidation of gene patents does not stand, such an attitude may spread through the research community, and its consequences will damage the future productivity of research. A brief look at history shows the potential damage patents can inflict on the scientific world. If researcher George Gey had filed a patent on his discovery of the famous HeLa cells, which are products of nature, such a patent would have had unimaginable consequences for the entire world.

A vaccine for polio might have gone undeveloped, and the resulting gap in our understanding of cancer, AIDS, and radiation would have left us in a crippled world, void of so many potential benefits. But Gey did not file a patent. Instead of concerning himself with profits, Gey concerned himself solely with research, sharing his HeLa cells with researchers of all kinds for the benefit of our world. Gey admirably acted out of a passion for research, not a passion for profit. Thus, while companies like Myriad may still conduct their research for profit, the invalidation of gene patents will help spread the desperately needed attitude that Gey selflessly displayed: an attitude of research for the sake of research.

References
1. Genetics and Patenting (07 Jul 2010). Available at http://www.ornl.gov/sci/techresources/Human_Genome/elsi/patents.shtml (March 2011).
2. A. S. Kesselheim, M. M. Mello, Gene Patenting—Is the Pendulum Swinging Back? New Engl. J. Med. 362, 1855-1858 (2010).
3. J. Schwartz, A. Pollack, "Judge Invalidates Human Gene Patent." New York Times 30 March 2010: B1.
4. Competition and Patents. Available at http://www.wipo.int/patent-law/en/developments/competition.html (March 2011).
5. G. Quinn, Conflicting Positions on Gene Patents in Obama Administration (2 November 2010). Available at http://ipwatchdog.com/2010/11/02/conflicting-positions-on-gene-patents-in-obama-administration/id=13085 (March 2011).
6. D. Calhoun, New Sci. 206, 24-25 (2010).
7. U.S. Government Files Brief in ACLU and PUBPAT Gene Patenting Case (30 October 2010). Available at http://www.aclu.org/free-speech-womens-rights/us-government-files-brief-aclu-and-pubpat-gene-patenting-case
8. M. Cho, Trends Biotechnol. 28, 548-551 (2010).
9. C. Karambelas, Report finds gene patents prevent competition, don't promote advancement (14 April 2010). Available at http://news.medill.northwestern.edu/chicago/news.aspx?id=162971 (March 2011).
10. F. Collins, The Language of Life: DNA and the Revolution in Personalized Medicine (HarperCollins, New York, 2010).
11. T. P. Jack, Personal Interview, 2 November 2010.
12. B. Keim, End of Gene Patents Will Help Patients, Force Companies to Change (1 April 2010). Available at http://www.wired.com/2010/04/gene-testing-future/
13. J. Borger, Rush to patent genes stalls cure for disease (15 December 1999). Available at http://www.guardian.co.uk/science/1999/dec/15/medicalresearch.genetics (March 2011).



CELL BIOLOGY

Synthetic Cells Daniel Lee ’13

The methods of isolating and manipulating genes to understand their roles in the genome have improved tremendously since their inception in the 1970s. In recent years, the growing precision and speed of these methods have allowed geneticists to develop and refine such tools as DNA fingerprinting, disease-resistant crops, and tests for heritable diseases. In May 2010, scientists at the J. Craig Venter Institute (JCVI), a non-profit research organization, added to this array of scientific tools a method for synthesizing artificial, self-replicating genomes. The creation and transplantation of the first human-designed genome was an immense technical achievement that took JCVI researchers 15 years and $40 million to complete (1). The research is proof of principle that genomes can be digitally designed and then inserted into cells, thereby enabling a great deal of cellular customization. Geneticists have heralded this feat as the beginning of a new wave of interest in synthetic cell research (2). While both scientists and non-scientists have naturally raised bioethical concerns, government and commercial funding remain strong (1). JCVI's synthesis and transplantation of an artificial genome is a significant landmark for modern genetics.

The Experiment

The purpose of the experiment was to transplant an artificially created genome of the bacterium Mycoplasma mycoides into a similar bacterium, Mycoplasma capricolum (3). It was hypothesized that doing so would convert the M. capricolum cells into M. mycoides, as JCVI laboratories had done in the past with a naturally derived genome (4). The challenge in the experiment lay in sequencing and synthesizing an artificial genome that could accurately mimic its non-synthetic counterpart. To begin, researchers sequenced the genome of M. mycoides into a computer file (4).

Image courtesy of Tom Deerinck and Mark Ellisman of the National Center for Microscopy and Imaging Research at the University of California at San Diego.

A scanning electron micrograph of Mycoplasma mycoides JCVI-syn1.0: The J. Craig Venter Institute’s “synthetic cell.”

The researchers then edited the file, added new sequences, and sent it to Blue Heron, a bio-synthesis company. There, it was made into 1,078 pieces, each 1,080 base pairs long (4). Complementary base-pair sequences at the ends of these snippets allowed scientists to stitch them together. Since connecting all of these sequences at once would have been technically overwhelming, the process was divided into three stages, with each stage forming larger and larger pieces until the entire genome of M. mycoides was reconstructed (3). Upon completion, the sequence was introduced into M. capricolum populations (4). The result was a group of cells that expressed the characteristics of wild-type M. mycoides using the cellular machinery of M. capricolum (4).
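To make the staged strategy concrete, here is a toy sketch in Python of overlap-based, three-stage assembly. It is our illustration, not JCVI's pipeline: the mock genome size, overlap length, and batch sizes are invented for the example, and the real assembly was carried out biochemically, not in software.

```python
import random

# Toy model of staged, overlap-based genome assembly. Fragments share
# OVERLAP bases with their neighbors; joining trims the duplicated overlap.
# This is bookkeeping only -- the real assembly is done in the lab.

OVERLAP = 80  # assumed overlap length (bp) for this example

def join(fragments, overlap=OVERLAP):
    """Join an ordered list of overlapping fragments into one sequence."""
    assembled = fragments[0]
    for frag in fragments[1:]:
        # The end of what we have so far must match the start of the next piece.
        assert assembled[-overlap:] == frag[:overlap], "overlaps must match"
        assembled += frag[overlap:]
    return assembled

def stage(pieces, batch_size):
    """One assembly stage: join consecutive batches into larger pieces."""
    return [join(pieces[i:i + batch_size])
            for i in range(0, len(pieces), batch_size)]

# Cut a mock 50 kb "genome" into overlapping fragments.
random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(50_000))
step = 1_000  # unique bases contributed by each fragment
fragments = [genome[i:i + step + OVERLAP]
             for i in range(0, len(genome) - OVERLAP, step)]

# Three stages: many small pieces -> medium pieces -> two halves -> genome.
tier1 = stage(fragments, 5)
tier2 = stage(tier1, 5)
final = join(tier2)
assert final == genome
print(len(fragments), "->", len(tier1), "->", len(tier2), "-> 1")  # 50 -> 10 -> 2 -> 1
```

The point of the staging is that each join operates on only a handful of pieces at a time, which is what made a genome-scale synthesis tractable.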

Relevance to Science

The technical significance of this feat is substantial.

This process required not only precise synthesis, but also careful sequencing of the natural M. mycoides genome. Several times throughout the study, slight errors stalled the entire effort. In one instance, a single base-pair miscalculation took researchers three months to find and fix (5). Throughout the process of reconstructing the genome, JCVI developed new techniques and refined existing ones to meet the unprecedented demands of a synthesis of that size (3). The effectiveness of JCVI's refined methods has encouraged additional research into genome synthesis. With the ability to selectively put together a replicating genome, populations of cells can be tailored to perform advantageous functions, such as attacking water-borne pathogens (5). The biotech company Synthetic Genomics Inc., owned by J. Craig Venter, is already working on applying the techniques developed from this work for commercial use (5).


The company has a $600 million contract with Exxon Mobil to build a strain of algae that can capture carbon dioxide and turn it into clean-burning fuel (5). Three other biotech companies (Amyris Biotechnologies, LS9, and Joule Unlimited) are working on similar fuel-production projects funded by the Department of Defense (1). Synthetic cells are also being developed for other uses. JCVI is currently working on making its findings more accessible for other scientists to modify for their specific research needs. Dr. Daniel Gibson, who led the synthetic genome project, said that the JCVI research team is now trying to build a genome with only the minimal components needed to sustain a living cell (6). Doing so, Gibson claims, will give scientists a generalized set of the fundamental pieces needed to construct a working cell, providing a base for other labs to build on in the future (6).

The Future of Genome Synthesis

The ability to construct genomes initially concerned many bioethicists. Following the publication of JCVI's research, President Obama requested that the White House bioethics committee investigate JCVI's findings (2). It was feared that JCVI's efforts would give way to new research working to synthesize life from non-living components, a bioethical taboo (2). However, the committee ruled otherwise soon after the investigation was commissioned. David Baltimore, a geneticist at Caltech closely involved with the investigation, told the New York Times that while JCVI's research is a "technical tour de force… [the research] has not created life, only mimicked it" (2). The cellular machinery used to hold and express the synthetic genome was created by natural means. Additionally, the genome itself was built from the blueprint of an existing organism; it was not an original creation. The JCVI website states, "We do not consider [our research] to be 'creating life from scratch' but rather we are creating new life out of already existing life" (6). It is unlikely, in any case, that biologists will ever create a truly original life form.

Image courtesy of the J. Craig Venter Institute.

The JCVI Synthetic Biology Team, including Dr. Daniel Gibson (back row, second from right), leader of the synthetic genome project.

Not only would ethics committees quickly terminate such research efforts, but the technical and financial effort required to produce such an organism would also be impractical. Naturally occurring organelles and other cellular components are already being effectively harnessed, and the need for modified ones is not pressing. Moreover, even if such components were developed, coordinating them with one another would be an even more daunting and costly effort. JCVI's focus on the software of cells rather than the hardware thus represents the direction in which genetics research will likely continue in the near future. Not only does this approach avoid ethical controversies, but the techniques and knowledge needed to pursue it are already well understood. Given this understanding, and the large amount of government funding and commercial interest, the field of synthetic genetics will only continue to grow.

References
1. Synthetic Genome Brings New Life to Bacterium (20 May 2010). Available at http://www.sciencemag.org/content/328/5981/958.full (24 March 2011).
2. Synthetic Bacterial Genome Takes Over Cell (20 May 2010). Available at http://www.nytimes.com/2010/05/21/science/21cell.html (23 March 2011).
3. D. Gibson et al., Science 329, 52-56 (2010).
4. JCVI: First Self-Replicating, Synthetic Bacterial Cell Constructed (20 May 2010). Available at http://www.jcvi.org/cms/press/press-releases/full-text/article/first-self-replicating-synthetic-bacterial-cell-constructed-by-j-craig-venter-institute-researcher (23 March 2011).
5. Scientists Create First Synthetic Cell (21 May 2010). Available at http://online.wsj.com/article/SB10001424052748703559004575256470152341984.html (28 March 2011).
6. JCVI: Research / Projects / First Self-Replicating Synthetic Bacterial Cell / Frequently Asked Questions (25 May 2010). Available at http://www.jcvi.org/cms/research/projects/first-self-replicating-synthetic-bacterial-cell/faq/#q5 (22 March 2011).



CELL BIOLOGY

The Role of Apoptosis in Disease and Development Jay Dalton ’12

I recently saw the Rude Mechanicals' phenomenal adaptation of Hamlet, and for one reason or another, as the final act drew to a close, I was in a decidedly morbid state of mind. Perhaps the acting prowess demonstrated by the troupe caused my suspension of disbelief to be utterly complete, even in the prosaic setting of Brace Commons. Perhaps it was the sight of so many familiar Dartmouth faces twisted in the throes of Shakespearean tragedy. Or, perhaps, it was simply because nearly all of the important characters in the play die. Regardless of the reason, the contrast between Ophelia's tastefully understated offstage suicide and Hamlet's melodramatic onstage murder got me thinking about two vastly different cellular fates: apoptosis and necrosis. Necrosis, which is akin to Hamlet's rapier-induced demise, is simply premature cell death, and in multicellular life it can lead to the death of the organism if the damage is severe enough. Apoptosis, on the other hand, can be seen as the cell willfully shuffling off its mortal coil, along with its mortal condensed chromatin, apoptotic bodies, and other cellular material. The dramatic irony of apoptosis, at least from the perspective of a researcher, is that the programmed death of the individual cell is essential to life processes ranging from embryonic development to everyday organ function and bodily maintenance. First described in 1842 by Carl Vogt, apoptosis was not precisely defined as programmed cell death until anatomist Walther Flemming's work in 1885, and it was not revitalized as a subject of study until the middle of the 20th century (1). As it is understood today, apoptosis is initiated by two different means: either by targeting mitochondrial functionality, or by using adaptor proteins to directly transduce the apoptotic signal.

A multitude of environmental signals, including heat, radiation, nutrient deprivation, viral infection, and hypoxia, has been implicated in causing apoptosis (2,3).

Mechanism of Action

Mitochondria-focused apoptosis is further subdivided into two separate mechanisms (4). The first is mediated by the formation of channels such as the mitochondrial apoptosis-induced channel (MAC) (4). MACs are ion pores formed on the outer mitochondrial membrane, in response to apoptotic stimuli, by Bax and Bak, members of the Bcl-2 protein family, which contains both pro- and anti-apoptotic signal carriers (4). Once induced, MACs release cytochrome c into the cytosol, which initiates the so-called commitment step of the mitochondrial apoptotic cascade (5). Indeed, this pathway has been targeted therapeutically via Bax inhibitors, which knock down MAC channels and thereby prevent this form of apoptosis (5). When cytochrome c is released, it binds with both apoptotic protease activating factor-1 (Apaf-1) and ATP, and these in turn bind to pro-caspase-9 to create a protein complex known as an apoptosome (4). This structure cleaves the pro-caspase to its active form, caspase-9, which in turn activates the effector caspase-3 (4,5). This type of signaling cascade is typical of biological signal transduction, and especially of apoptosis. The other mitochondrial apoptosis pathway releases second mitochondria-derived activator of caspases (SMAC) into the cytosol via an increase in mitochondrial permeability. SMAC binds to and inhibits inhibitor of apoptosis proteins (IAPs) (6). IAPs normally function by suppressing caspases, which are cysteine proteases (6); when IAP inhibition is lifted, the caspases carry out the degradation of the cell (6). The direct signal transduction method is less well understood.

Two current theories implicate tumor necrosis factor (TNF) as the driving force behind one pathway and the Fas ligand as the instigator behind the other (7). TNF, itself a cytokine, has two receptors in the human body: TNF-R1 and TNF-R2 (7). Binding of TNF to these receptors leads to caspase activation. The other direct signal transduction method, which utilizes the Fas ligand, is closely related to the TNF pathway. Binding of the Fas ligand to the Fas receptor results in the formation of the death-inducing signaling complex (8). Like most apoptotic signaling complexes, it contains a number of caspases.
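As a reading aid, the ordering of the mitochondrial pathway described above can be captured in a few lines of Python. This is our schematic of the sequence of events as the text presents them, not a kinetic model of the biochemistry, and the "Bax inhibitor" intervention is the one mentioned in the text:

```python
# Schematic of the intrinsic (mitochondria-focused) apoptotic cascade as
# ordered in the text above. Event wording is ours; a reading aid only.

INTRINSIC_CASCADE = [
    "apoptotic stimulus (e.g., radiation, hypoxia)",
    "Bax/Bak form MAC pores in the outer mitochondrial membrane",
    "cytochrome c released into the cytosol (commitment step)",
    "cytochrome c + Apaf-1 + ATP bind pro-caspase-9, forming the apoptosome",
    "apoptosome cleaves pro-caspase-9 to active caspase-9",
    "caspase-9 activates effector caspase-3",
    "caspase-3 carries out degradation of the cell",
]

def propagate(cascade, blocked=()):
    """Walk the cascade in order, stopping at the first blocked step.

    `blocked` models an intervention such as the Bax inhibitors mentioned
    in the text, which knock down MAC formation.
    """
    for step in cascade:
        if any(term in step for term in blocked):
            print(f"BLOCKED at: {step} (downstream steps never fire)")
            return False
        print(f"-> {step}")
    return True

propagate(INTRINSIC_CASCADE)                    # normal signaling runs to completion
propagate(INTRINSIC_CASCADE, blocked=("Bax",))  # a Bax inhibitor halts the cascade early
```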

Apoptosis in Normal Development

Programmed cell death might be the body's best option compared to necrosis, but the link between apoptosis and the generation of life is not exactly intuitive. Apoptosis is one of the key mechanisms in the embryonic development of organs and structures in both humans and other animals, and it is as much a part of embryo development as cell proliferation and differentiation are (8). In fact, apoptosis can occur even in the follicular stage and is greatly enhanced by androgens and gonadotropin-releasing hormone. For instance, follicular atresia, the degeneration of ovarian follicles over the course of the menstrual cycle, is predominantly dependent on granulosa cell apoptosis (8).

Image retrieved from http://upload.wikimedia.org/wikipedia/commons/5/5f/Apoptosis_stained.jpg (Accessed 10 May 2011).

In this section of a mouse liver, a stained apoptotic cell is visible.


Currently, five cell-death ligand-receptor systems that regulate atresia have been identified in granulosa cells, all belonging to the TNF family (9). These receptors, upon contact with their respective ligands, form trimers that fit into the grooves formed between TNF monomers. In response to this binding, a conformational change occurs in the receptor, and in turn the inhibitory protein "silencer of death domains" is dissociated (9). When dissociation is complete, the adaptor protein "TNF receptor associated death domain protein" is able to bind to the death domain (9). A small proportion of follicles can escape initial apoptosis with the protection of growth factors and estrogens. However, programmed cell death occurs continually throughout embryogenesis. For instance, early brain development involves both the Jnk1 and Jnk2 protein kinases, which are implicated in apoptosis; mutant mice lacking these proteins die as embryos and were found to have severe apoptotic dysregulation. Additionally, normal fetal lung development is associated with a progressive increase in epithelial and interstitial apoptotic activity, and from birth onwards the number of cells undergoing apoptosis increases dramatically, in a spatially, temporally, and cell-type-specific manner (10). Conversely, pregnancies with an abnormally increased occurrence of apoptosis have the potential for impaired fetal membrane function. This is the case in conditions such as fetal growth restriction (FGR). Although apoptosis occurs in both normal and FGR-affected fetal membranes, apoptotic cells are present in greater quantities in the FGR-affected membrane and are concentrated in its chorionic trophoblast layer. The chorionic trophoblast layer, which separates the mother from the developing fetus, is vital for normal fetal development and growth, and increased apoptosis may impair this layer's functions and lead to premature rupture of the membrane (10). Beyond its role in the development of the fetal membrane, apoptosis is even involved in crafting such human characteristics as the spaces between our digits during development (1).

Apoptosis also seems to play a crucial role in thymocyte (T-cell) biology. T-cells expressing nonfunctional or auto-reactive T-cell receptors are eliminated during development by apoptosis. Dysregulation of apoptosis in the immune system thus results in autoimmunity, tumorigenesis, and immunodeficiency (4,5).

Role of Apoptosis in Disease: Inhibition and Excess

Image courtesy of CDC/ C. Goldsmith, P. Feorino, E. L. Palmer, W. R. McManus.

HIV-1 virions present on a human lymphocyte.

A wide range of diseases can result from loss of control over cell death (excess apoptosis). These include neurodegenerative diseases, hematologic diseases, and general tissue damage. For example, the progression of HIV is directly linked to rampant apoptosis. In a healthy individual, CD4+ lymphocytes are in equilibrium with cells produced by the bone marrow. In HIV-positive patients, however, this balance is lost due to the bone marrow's inability to regenerate CD4+ cells. When stimulated, CD4+ lymphocytes die at an accelerated rate through uncontrolled apoptosis induced by HIV. This uncontrolled cell death, by depleting CD4+ T-helper lymphocytes, is largely responsible for the progression of HIV infection into AIDS, leading to a compromised immune system and the symptoms typical of AIDS. In addition, cells may die as a direct consequence of viral infection; HIV-1 expression, for example, induces tubular cell G2/M arrest and apoptosis (1,3). Similarly, ascoviruses carry out infection and replication through the induction of apoptosis. Cell fragmentation occurs upon the viral instigation of apoptosis, and it is postulated that the virus utilizes the resulting apoptotic bodies to form vesicles (11). Viral particles can survive apoptosis intact, particularly in the later stages of infection. Ascoviruses can be exported in apoptotic bodies that pinch off, through the normal apoptotic procedure, from the surface of the dying cell (11). Virions packaged in these apoptotic bodies are then consumed by phagocytes, which prevents the initiation of a host immune response (11).

Interestingly, viruses are able to induce disease not only by increasing apoptosis, but also by inhibiting it. A significant component of an organism's ability to stave off extensive bodily infection is apoptosis itself; however, certain viruses can halt this process in order to proceed with their invasion. Ordinarily, viral infection induces apoptosis, either directly or through the host's immune response, and the purpose of the cell death is to reduce the production of new virus and further cell infection. However, many viruses have evolved inhibitory mechanisms that thwart the completion of this defensive cell destruction. Viruses utilize a range of mechanisms to inhibit apoptosis, including activation of protein kinase R (PKR), interaction with the tumor suppressor p53, and expression of viral proteins coupled to major histocompatibility complex (MHC) proteins on the surface of the infected cell (1,4). One example of such inhibitory mechanisms is found in herpes simplex virus (HSV), which inhibits apoptosis through the action of two genes, Us5 and Us3 (16). HSV establishes a latent infection with quiescent, persistent qualities, and it is this characteristic latent period that underlies the virus's inhibition of apoptotic mechanisms. Specifically, during latent infection of the cell, the virus expresses latency-associated transcript (LAT) RNA. This form of RNA has the capability to regulate the host cell genome and interfere with cell-death mechanisms. In the related herpesvirus HHV-8, LAT expression produces miRNAs that suppress production of thrombospondin-1 (TSP1), a protein involved in apoptosis and angiogenesis. Inhibitory peptides and fragments of TSP1 bind to Cluster of Differentiation 36 (CD36), an integral membrane protein, leading to lowered expression of the Fas ligand, which normally triggers Fas receptor-induced apoptosis.



Additionally, expression of LAT by HSV reduces the production of other proteins involved in the apoptotic mechanism, including caspase-8 and caspase-9. Through the inhibition of apoptotic mechanisms during LAT expression, host cells are maintained and the proliferation of the virus is ensured (12). Although these cells would normally die to prevent outbreaks, the virus has effectively disabled their apoptotic abilities and created a habitable environment for dormancy and further infection. In addition to enabling viral infections, apoptotic inhibition can result in a number of cancers, autoimmune diseases, and inflammatory diseases. The most logical result of a lack of programmed cell death is cancer, which is usually characterized by an overexpression of IAP family members (1,4).


As a result, malignant cells respond abnormally to apoptosis induction: cell-cycle-regulating genes, such as p53, ras, and c-myc, are mutated or inactivated in diseased cells, and other genes, such as bcl-2, show altered expression in tumors (15).

Conclusion

It is clear that apoptosis is not only a crucial biological process from pre-birth until death, but also a promising and daunting target for disease research. From the spaces between our fingers to the army of immune cells that protect us, the ability to cause cells to die is as essential as the processes that keep them alive. A better understanding of apoptosis could hold enormous promise in treating cancer, viral infections, and autoimmune diseases. More importantly, perhaps, the full picture of apoptosis could solve essential mysteries about how our physical beings are formed by elucidating the architecture and blueprints of embryonic development. All of these mysteries and possible advances are wrapped up in the cascade of signals holding each of our cells on the precipice of destruction at the hands of apoptosis.

References
1. J. Kerr, J. Pathol. Bacteriol. 90, 419-435 (1965).
2. S. Popov, Biochem. Biophys. Res. Commun. 293, 349-355 (2002).
3. B. Brune, Cell Death Differ. 10, 864-869 (2003).
4. R. Cotran, V. Kumar, T. Collins, Pathologic Basis of Disease (W. B. Saunders, Philadelphia, 1998).
5. S. Fesik, Y. Shi, Science 294, 1477-1478 (2001).
6. L. Dejean, S. Martinez-Caballero, K. Kinnally, Cell Death Differ. 13, 1387-1395 (2006).
7. G. Chen, D. Goeddel, Science 296, 1634-1635 (2002).
8. H. Wajant, Science 296, 1635-1636 (2002).
9. S. Santos et al., J. Cell Biol. 192, 571-580 (2011).
10. C. Haanen, I. Vermes, Eur. J. Obstet. Gyn. R. B. 64, 129-133 (1996).
11. M. Hussain, S. Asgari, Apoptosis 13, 1417-1426 (2008).
12. A. Gupta et al., Nature 442, 82-85 (2006).
13. A. Rolaki et al., Reprod. Biomed. Online 11, 93-103 (2005).
14. L. Old, Science 230, 630-632 (1985).
15. P. Murthi et al., Placenta 26, 329-338 (2005).
16. K. Jerome et al., J. Virol. 73, 8950-8957 (1999).



NEUROSCIENCE

Promoting Remyelination and Preventing Demyelination New Research Goals in Finding a Therapy for Multiple Sclerosis Priya Rajgopal ’11

Recent multiple sclerosis (MS) research has made it apparent that demyelination has consequences beyond its primary effects of inflammation and impaired conduction. It is now well understood that demyelination leads to significant, progressive axonal and neuronal degeneration. This finding explains why patients receiving immunosuppressive therapies still show disease progression: they have unremitting demyelination because sufficient remyelination does not occur, leaving axons exposed and constantly susceptible to damage (1). Consequently, there has been a push towards researching neuroprotective and myelin-repair strategies as new therapy requirements for MS patients (2). This review will focus on recent advances in promoting remyelination and preventing demyelination, with specific attention directed towards methods that utilize endogenous oligodendrocyte progenitor cells (OPCs) rather than the transplantation of exogenous ones.

Introduction

Demyelination is the destruction of myelin protein, which forms a sheath around neuronal axons. In the central nervous system (CNS), demyelination is caused by direct attack on the oligodendrocytes, which make and maintain the myelin sheath. Remyelination, on the other hand, is the process in which myelin sheaths are restored to demyelinated axons. Although remyelination produces thinner and shorter myelin sheaths, functional deficits are mostly restored. This process is the body's normal response to demyelination but is impaired in patients with multiple sclerosis. MS patients are therefore left with axons that are demyelinated and vulnerable to damage, resulting in neurological deficits. While the reason for remyelination failure in MS patients is still ambiguous, progress has been made in learning how remyelination occurs (1).

Remyelination involves the recruitment of mature oligodendrocytes, which are derived from adult CNS stem cells called OPCs, located in the subventricular zone (SVZ). OPCs are induced to proliferate and migrate by growth factors that are up-regulated during remyelination (3). It is thought that remyelination may be limited in MS patients because of compromised OPC differentiation and maturation processes or decreased OPC recruitment (2). This knowledge has led to much investigation focusing not only on how to prevent demyelination, but also on the proliferation, differentiation, and recruitment of OPCs to promote remyelination. An abundance of research has shown the effectiveness of various compounds on these processes in animal models. The goal is to eventually find a method of promoting remyelination or inhibiting demyelination that can be used to treat MS patients in order to retain axonal function and hinder the progression of the disease.

Common Animal Models for Demyelination Disorders

To fully understand the relevance of the studies discussed in this review, it is important to understand the animal models of demyelination that are often used. It has been difficult to identify a model that replicates the characteristics of MS, because every demyelinating model that has been found has intact remyelination processes. While the goal of MS research into remyelination techniques is to discover an intervention that will reactivate the dormant process, these models can demonstrate, at best, acceleration of an already ongoing active process or, in some cases, an abnormal increase in myelin or oligodendrocytes (2,4). Some examples of models that are frequently used are the cuprizone-induced demyelination model, the experimental autoimmune encephalomyelitis (EAE) model, and the lysolecithin (LPC)-induced focal demyelination model.

Image retrieved from http://upload.wikimedia.org/wikipedia/commons/thumb/a/a8/Neuron_with_oligodendrocyte_and_myelin_sheath.svg/2000px-Neuron_with_oligodendrocyte_and_myelin_sheath.svg.png (Accessed 10 May 2011).

Oligodendrocytes produce and maintain myelin sheaths around neuronal axons.

In the cuprizone-induced demyelination model, dietary cuprizone is fed to the animal, which results in the demyelination of specific CNS tracts in a dose-dependent manner (1). In the EAE model, a mouse model for MS, the disease is elicited by introducing myelin antigens and their adjuvants (5). In the lysolecithin model, lysolecithin is injected into the lumbar spinal cord, producing a focal region of primary demyelination (6). Unfortunately, all three of these models show extensive, if not complete, remyelination. While this must be taken into consideration when analyzing these studies, it should also be noted that even in MS, a disease characterized by failed or inadequate remyelination, there is evidence that in some patients complete remyelination occurs in a significant proportion of lesions (1).

How Can Remyelination be Enhanced?



One of the key approaches currently being tested in animal models is endogenous repair of myelin sheaths. This approach uses methods that promote the repair of myelin by precursor cell populations already present in the adult CNS, rather than by the transplantation of exogenous cells (1). A wide variety of endogenous repair methods are currently being investigated for their roles in remyelination. These mechanisms generally fall into three categories: those that directly promote OPC differentiation; those that are found to inhibit remyelination and can be targeted to enhance it; and those that are normally necessary for remyelination but could be deficient in patients with demyelinating disorders.

Directly promoting OPC differentiation

Mechanisms that directly promote OPC differentiation are currently by far the most researched methods of remyelination. This approach is attractive because, if OPC differentiation mechanisms can be understood, the hope is that they can be manipulated into therapies that promote remyelination. For example, chemokines and their receptors are being investigated for their role in MS. Of particular interest is the finding that the chemokine CXCL12 is known to mediate the migration, proliferation, and differentiation of neuronal precursor cells within the developing CNS (7). In a recent study by Patel et al., the expression of CXCL12 and its receptor CXCR4 was assessed within the demyelinating and remyelinating corpus callosum of a murine cuprizone-induced model. Two pieces of evidence were found supporting the hypothesis that CXCR4 activation is important in remyelination: antagonizing CXCR4 prevents remyelination within the corpus callosum after cuprizone exposure ends, and in vivo CXCR4 RNA silencing inhibits remyelination after demyelination. Patel et al. concluded from these findings that up-regulation of CXCL12 is fundamental for the differentiation of CXCR4-expressing OPCs into mature oligodendrocytes within the demyelinated model, and that if this signaling is blocked, remyelination fails. This study suggests that CXCL12 and CXCR4 could be potential targets to enhance remyelination in MS patients (7).

Another receptor that, when activated, was found to promote oligodendrocyte formation and maturation is the thyroid hormone beta receptor. Thyroid hormones have long been known to participate in oligodendrogenesis and myelination during mammalian development (8). In 2004, Fernandez et al. showed that if thyroid hormone is administered during the acute phase of disease in an EAE model, there is increased expression of the platelet-derived growth factor alpha receptor, which restores normal levels of myelin basic protein mRNA and protein and allows early remyelination. They also found that thyroid hormone exerts a neuroprotective effect with respect to axonal pathology (8). Unfortunately, the therapeutic potential of thyroid hormone has been challenged due to concerns over cardiac toxicity (9). However, Potter et al. found that these toxic effects are mediated only by the alpha receptor, and that a beta-selective thyroid hormone receptor ligand could therefore be utilized. Potter et al. showed that GC-1, a beta-selective ligand, can induce differentiation of OPCs with the same success as a ligand for both the alpha and beta receptor isoforms (such as the one used by Fernandez et al.). It was confirmed that the thyroid hormone receptor is up-regulated with oligodendrocyte differentiation. These findings suggest that selective control of thyroid hormones and their receptors could prove a successful strategy for promoting remyelination in MS patients, while still avoiding the concerns accompanying non-selective thyroid hormone stimulation (9). Other compounds that are not endogenous to the model, but that can be used to enhance the function of already present OPCs, have also been found. Bordet et al. used the fact that several growth factors have been shown to affect OPC survival and proliferation to search for a drug that could replicate the actions of these growth factors. The lab had previously identified a class of cholesterol-oxime compounds with neuroprotective properties. Of these compounds, olesoxime, which had previously been shown to accelerate axon regeneration and remyelination in the peripheral nervous system, was tested for its role in OPC differentiation.

prizone-induced demyelinated model, olesoxime was shown to increase the number of myelinated axons in the corpus callosum, increase myelin sheath thickness, increase the number of mature oligodendrocytes, and improve the clinical course demonstrated by Rotarod scores. In lysolecithin-induced demyelination models, olesoxime had the same positive effects, and was also shown to reduce the lesion load after demyelination. It was concluded that olesoxime dose-dependently accelerates OPC differentiation by promoting their maturation in vitro. This compound also shows some promise as a therapeutic drug for MS patients because it is orally bioavailable, crosses the blood-brain barrier easily, and has already been shown to be safe for humans. However, positive evidence of olesoxime’s efficacy in animal models is needed before it can be further developed as a treatment for MS (2). There are a plethora of other endogenous molecules, as well as exogenous ones that have been shown to aid in oligodendrocyte differentiation. For example, Sox17, a transcription factor known to be prominently expressed at OPC cycle exit and at the onset of differentiation, was investigated in demyelinating models. Strong evidence was shown supporting the fact that Sox17 is involved in promoting OPC differentiation that leads to OPC maturation and potentially remyelination. Up-regulation of Sox17 could again have therapeutic implications for MS (10). A final example of a molecule that has been recently found to have an effect on OPC differentiation is minocycline. In the past, minocycline has been shown to decrease the severity and progression of EAE in mice. It is now being demonstrated that minocycline also promotes remyelination via immature oligodendrocyte differentiation in various animal models by weakening microglial reactivity. Again, with further research, therapeutic treatments are a hopeful possibility (11).

Targeting natural inhibitors of remyelination

Although far less common, mechanisms have also been found in models that naturally inhibit remyelination and that could potentially be targeted to enhance it. Hyaluronan is a newly discovered example of such a molecule. Hyaluronan accumulates in demyelinated lesions in MS patients and in rodent models, and it has been shown to prevent remyelination by inhibiting OPC maturation (12). Further studies have shown that murine OPCs make hyaluronan themselves. However, they also express several hyaluronidases, enzymes that degrade hyaluronan during growth-factor-induced oligodendrogenesis in vitro, and hyaluronidase expression fluctuates as OPCs differentiate and mature into myelinating oligodendrocytes in normal animals. Based on these findings, it was hypothesized that specific temporal patterns of hyaluronidase expression and hyaluronan turnover may be key regulators of the generation, differentiation, and maturation of oligodendrocytes. To test this hypothesis, the effects of virally mediated over-expression of one specific hyaluronidase, PH20, were analyzed. PH20 increases the proliferation of OPCs but decreases their differentiation into mature oligodendrocytes. It was also shown that breakdown of hyaluronan by PH20 inhibits remyelination in lysolecithin-induced demyelinated mouse models. It is therefore possible that PH20 breaks down hyaluronan into products that inhibit oligodendrocyte maturation. Since PH20 expression is seen in MS lesions, it may represent a new therapeutic target for patients (13): blocking PH20 would presumably ensure that hyaluronan is not broken down into these harmful products, thereby permitting remyelination. Before this becomes a therapeutic possibility, however, further research is likely necessary to ensure that no other hyaluronidases naturally present in the brain could degrade the hyaluronan near the lesions; otherwise, targeting a specific hyaluronidase such as PH20 might prove ineffective, since other hyaluronidases would perform the same detrimental function.

Image retrieved from Robin J. M. Franklin, Charles ffrench-Constant, Nat. Rev. Neurosci. 9, 839-855 (2008).

Top: Demyelination can be followed by remyelination, preserving the axon, or, in the case of MS patients, no remyelination, leaving the axon at risk of degeneration. Bottom: Light microscopy images of adult rat cerebellar white matter. Remyelination typically produces myelin sheaths thinner and shorter than those produced during original development.

Deficiency of molecules normally necessary for remyelination

With regard to promoting remyelination, there has also been some focus on identifying cellular components necessary for normal remyelination that could be deficient in patients with demyelinating disorders. The role of iron has been the subject of much research in recent years (14,15,16). Some have concluded that iron is essential to myelin production, showing that reduced dietary iron is associated with hypomyelination (14). Other experiments have shown that iron levels may affect oligodendrocyte development at early developmental stages and that myelin composition is altered by limited iron; the changes in myelin induced in mice by iron deficiency could be reversed by a single injection of apotransferrin (16). Schulz et al. went on to show that astrocytes are the source of iron for oligodendrocytes, and that efflux of iron from these cells is necessary for remyelination in a demyelinated model. A major iron transporter in astrocytes, ferroportin (Fpn), was analyzed to determine whether astrocytes deliver iron to oligodendrocytes in situations where large amounts of iron are needed, such as during remyelination. Astrocyte-specific Fpn knockout mice were created and induced with localized demyelination using intraspinal LPC. The Fpn knockout mice showed significantly reduced remyelination compared with wild-type controls, suggesting that astrocytes do indeed provide iron to oligodendrocytes during remyelination (15). The clinical implications of this research must be analyzed further, as it remains inconclusive whether iron deficiency is a problem in MS patients. Even if many patients are found to be iron deficient, it is unlikely that they are completely devoid of iron transport from astrocytes to oligodendrocytes, since the function of iron is widespread and essential for normal brain function. Research must therefore be directed toward determining whether there is a dose-dependent relationship between iron efflux to oligodendrocytes and remyelination.

Even with the vast research presented on possible mechanisms of promoting remyelination, we do not conclusively understand how remyelination works in the human body. As a result, it is impossible to explain why remyelination fails in patients with MS. However, with the continuation of research similar to that presented above, and with the improvement of animal models of demyelination, a more comprehensive and conclusive explanation is in sight. Given the plethora of research in this specific area of multiple sclerosis, it is now widely agreed that promoting remyelination should be a requirement of future MS therapies.

The Opposite Approach: Can Demyelination Be Prevented?

While remyelinating axons is an important research endeavor for treating MS patients and slowing disease progression, methods of preventing the disease entirely are also highly desirable research goals. In the past, much emphasis was placed on suppressing the immune system in MS patients in order to prevent oligodendrocytes from being attacked. These efforts are unlikely to prove effective, as they do not target specific components of the MS immune response but rather cause widespread immunosuppression (17). In addition to researching therapeutic strategies that target these specific MS immune-response pathways, there has been a push toward discovering other means of inhibiting demyelination.

Intrathecal methotrexate (ITMTX) is a drug that has proven somewhat effective in progressive MS patients because of its anti-inflammatory properties (18). It has been hypothesized, however, that the benefits of ITMTX go beyond these properties, because patients who do not respond to other anti-inflammatory drugs do benefit from receiving ITMTX. To test this hypothesis, ITMTX was introduced into a non-inflammatory, cuprizone-induced demyelinated model, where it was found to inhibit demyelination and astrogliosis in the corpus callosum (19). While these findings by no means demonstrate that MS can be avoided or cured, they do demonstrate that demyelination could potentially be inhibited by non-immunosuppressive means, thereby avoiding systemic immunosuppressive side effects and potentially providing a more pathology-specific response (17).

Similarly, it has been shown that altering the expression of endogenous molecules and their receptors can inhibit the process of demyelination without the use of immunosuppressant drugs. For example, galanin, a neuropeptide with multiple regulatory roles in the nervous system, was investigated for its myelin-protective role in cuprizone-induced demyelinated models. Overexpression of galanin in galanin transgenic (Gal-Tg) mice significantly inhibited demyelination compared with wild-type mice. In addition, expression of galanin receptor 1 (GalR1) in Gal-Tg mice was highly activated after demyelination, and GalR2 was up-regulated later in the remyelination period. These data suggest potential pharmacological significance for molecules that can activate galanin receptors in MS patients (20). Another lab investigated the same phenomenon based on the fact that galanin expression is specifically up-regulated in microglia in MS lesions. Using the EAE model, it was found that over-expression of galanin in transgenic mice eliminated the disease entirely, and that loss-of-function mutations in galanin or its receptor increased the progression and severity of the disease (21). These experiments show that what was found in cuprizone-induced demyelinated models also holds true when antigens against myelin cause onset of the disease.

Finally, it is important to consider the research being done to improve therapies that target the immune system of MS patients. Efforts are being made to engineer therapies that are more specific to the pathology of the disease, rather than those that attack the entire system (17). Ideally, this will yield therapies that are not only more effective but also have fewer side effects. In rats induced with EAE, it was demonstrated that introduction of autoantibodies could improve the clinical effects of the disease. In the EAE model, myelin oligodendrocyte glycoprotein (MOG)-specific antibody was introduced to demyelinate axons. Experiments were then designed to introduce a novel antigen-specific therapy based on filamentous phage displaying the antigenic determinant of interest. Presentation of the phages to the EAE animals reduced anti-MOG antibodies in the brain and thereby prevented demyelination; inflammation in the CNS of these animals also decreased. These results show that delivery of MOG via filamentous phages can deplete MOG autoantibody levels, or could stimulate other immune mechanisms, to improve clinical indicators and effects of the demyelinating disease (5).

It is apparent that the reverse approach of preventing demyelination could conceivably prove successful as a therapeutic strategy for treating MS. Using various drugs, manipulating endogenous molecules, and targeting specific parts of the immune system are all methods currently being explored. The research on the immune system is particularly important in showing that targeting antibodies, as opposed to targeting cells of the immune system (as current therapies usually do), might be a more effective way to address the autoimmune characteristics of MS. Further research into these methods of inhibiting demyelination is anticipated to deliver effective therapies in the future.

Conclusions and Next Steps

The discovery that MS disease progression is due not only to demyelination but also to the axonal damage incurred through lack of remyelination has sparked a new direction in MS research. While these methods of promoting remyelination and preventing demyelination are important advances, it is important to remember that the models used are artificially induced to demyelinate. We must therefore consider whether demyelination in these models is similar enough to the demyelination seen in MS patients for these studies to be relevant. In terms of inhibiting demyelination, the models used may have properties that permit mechanisms of inhibition that would not work in MS patients. Especially in cuprizone- and lysolecithin-induced demyelinated models, the mechanisms of demyelination appear vastly different from the mechanisms that cause MS. To further validate the studies that have been done, it must be shown that the method of demyelination is less important than the simple fact that demyelination is occurring; this would make these methods of inhibiting demyelination more likely to be relevant in the clinical context. Alternatively, finding a model that does not remyelinate and achieving the same experimental results would be an effective way of proving clinical relevance.

In analyzing the research on promoting remyelination, it seems that remyelination may be a difficult therapy to implement, because newly produced myelin sheaths could again be susceptible to autoimmune attack, just as they were before. Since remyelination would slow disease progression but perhaps only temporarily renew lost neuronal function, methods of ensuring that autoimmune attack does not recur should also be researched. In shifting the focus of MS research from immunosuppressive methods to other aspects of the disease, such as promoting remyelination, the autoimmune facet of this disorder should not be forgotten. Continuing to find methods of suppressing the MS autoimmune response should be coupled with research on promoting remyelination or preventing demyelination. In this manner, combination therapies can be developed to tackle all aspects of the disease simultaneously.

References

1. R. Franklin, C. ffrench-Constant, Nat. Rev. Neurosci. 9, 839-855 (2008).
2. T. Bordet et al., Program No. 223.6, 2010 Neuroscience Meeting Planner, San Diego, CA, 2010. [Society for Neuroscience]
3. W. F. Blakemore, K. A. Irvine, J. Neurol. Sci. 265(1-2), 43-46 (2008).
4. A. Bieber et al., Glia 37, 241-249 (2002).
5. B. Solomon et al., Program No. 223.3, San Diego, CA, 2010. [Society for Neuroscience]
6. K. Kucharova, W. Stallcup, Program No. 258.4, San Diego, CA, 2010. [Society for Neuroscience]
7. J. Patel et al., Program No. 258.10, San Diego, CA, 2010. [Society for Neuroscience]

8. M. Fernandez et al., P. Natl. Acad. Sci. USA 101(46), 16363-16368 (2004).
9. E. Potter et al., Program No. 438.2, San Diego, CA, 2010. [Society for Neuroscience]
10. N. Moll et al., Program No. 258.1, San Diego, CA, 2010. [Society for Neuroscience]
11. A. Defaux et al., Program No. 258.2, San Diego, CA, 2010. [Society for Neuroscience]
12. J. Sloane et al., P. Natl. Acad. Sci. USA 107(25), 11555-11560 (2010).
13. M. Preston et al., Program No. 438.1, San Diego, CA, 2010. [Society for Neuroscience]
14. B. Todorich, J. M. Pasquini, C. I. Garcia, P. M. Paez, J. R. Connor, Glia 57(5), 467-478 (2009).
15. K. Schulz et al., Program No. 258.5, San Diego, CA, 2010. [Society for Neuroscience]
16. M. Badarocco, M. Siri, J. Pasquini, Biofactors 36(2), 98-102 (2010).
17. C. Raine et al., Multiple Sclerosis: A Comprehensive Text [Google Books version], 2008. Retrieved from http://books.google.com
18. V. Stevenson, A. Thompson, Drugs Today 34(3), 267-282 (1998).
19. A. Sadiq et al., Program No. 223.12, San Diego, CA, 2010. [Society for Neuroscience]
20. L. Zhang et al., Program No. 258.11, San Diego, CA, 2010. [Society for Neuroscience]
21. D. Wraith et al., P. Natl. Acad. Sci. USA 106(36), 15466-15471 (2009).



Physics

GreenCube II

Multiple Balloon Measurements of Gravity Waves in the Skies Above New Hampshire

Sean Currey '11

This paper details the experimental method and results of the GreenCube II mission, a student-driven research program at Dartmouth College. The objective of the mission was to use multiple-point, high-altitude sounding balloon measurements to characterize the gravity wave structure over New Hampshire’s Mt. Washington. Each payload collected GPS and temperature data. The results were compared to a numerical simulation to verify that the perturbations measured were a product of gravity wave action. Although the measurements appear accurate, the simulation is not yet accurate enough to verify the presence of gravity waves.

Introduction

The GreenCube project stems from Dartmouth physics professor Kristina Lynch’s interest in small, autonomous science payloads for multi-payload auroral sounding rockets, and from Professor Robyn Millan’s interest in small, CubeSat-like orbiters for future science missions. The goals of the GreenCube project are to maintain a scientifically interesting, student-driven, balloon-borne CubeSat program in the Dartmouth Physics Department; to incorporate new design features into small payloads for LCAS-class auroral sounding rocket proposals by Professor Lynch; and, on a longer timescale, to incorporate designs into future plans for small spacecraft for orbital science missions (1).

Unlike the two preceding GreenCube missions, which verified the feasibility of using small payloads to collect data from the atmosphere, GreenCube II was designed as a true science mission: its purpose was to measure atmospheric gravity waves, movements of air sustained by a gravitational restoring force. Gravity waves are invisible to the naked eye but play a major role in the transfer of energy from the lower to the middle and upper atmosphere. Understanding gravity waves may one day allow us to more accurately predict the weather or conditions in the upper atmosphere. However, because of their inherent invisibility, gravity waves are difficult to measure.

Terrain-generated gravity waves are divided into two categories: mountain waves and lee waves. Both types are formed when wind blowing over the Earth’s surface is obstructed by a terrain feature, such as Mt. Washington in New Hampshire. Air is forced up and over this feature, which in turn forces the air above it up, and so on; these waves are referred to as mountain waves. As the wind settles on the opposite side of the feature, it oscillates up and down; the waves generated here are called lee waves, because they are generated on the leeward side of the mountain (2).

The goal of GreenCube II was to use multiple-point measurements to analyze the structure of mountain waves. After examining occultations measured by GPS satellites, the team determined that Mt. Washington was a likely source of mountain waves; the team assumed that the atmospheric density changes indicated by these occultations were caused by gravity waves.

Objectives

The objective of the GreenCube II mission was to launch two sounding balloons spaced in such a manner that their flight paths could be compared to measure the wave structure above Mt. Washington. The balloons were to fly over Mt. Washington, burst between 80,000 and 90,000 ft, and then descend to a location at which they could be recovered. After launch, the data collected was used to determine the size of the wave structure over Mt. Washington and how the structure changed with time. The data collected was also compared with the Taylor-Goldstein equation to verify that the perturbations seen by the payloads were in fact generated by gravity waves.

Experiment

Equipment description

Image courtesy of Max Fagin.

Members of the GreenCube II team prepare for launch at Mt. Washington Airport in Whitefield, New Hampshire.

The two GreenCube payloads each contained a GPS receiver to record position and five thermistors to record local atmospheric temperatures. The payloads recorded GPS data every five seconds and temperature data every 10 seconds, and sent this information to the ground team via radio. The payloads were attached to high-altitude sounding balloons. In addition to the GreenCube payloads, each balloon also carried a commercial camcorder on its lower secondary payload (designed to carry an emergency locator transmitter). These cameras captured HD video of the Earth from balloon altitudes, including images of cloud formations bearing the signs of atmospheric gravity waves.

Fig. 1: One of the two GreenCube payloads flown.

Fig. 2: A plot of all position coordinates for both payloads, created using Google Earth. The blue trajectory belongs to the first balloon launched (Payload 1) and the red to the second balloon launched (Payload 2). Payload 2 was launched 90 seconds after Payload 1.

Flight description

Two adjacent GreenCubes were launched from Mt. Washington Airport. The balloons reached an altitude of approximately 90,000 feet before bursting; the payloads then descended via parachute and were retrieved using the real-time GPS track received through the ham radio system. The balloons flew over the Presidential Range and were recovered in Maine. The flight time was approximately two hours. GPS data was transmitted to the ground crew in real time over the course of the flight; position vectors and timestamps were recorded and transmitted every five seconds, as shown in Fig. 2. The balloons ascended in a roughly linear fashion. After bursting at the flight apogee, the balloons descended in a roughly exponential profile: as the atmosphere becomes denser at lower altitudes, the parachute created more drag, slowing the descent. Both balloons burst between 25 and 28 km in altitude, consistent with the preflight prediction of 80,000 to 90,000 feet. The velocity of each balloon was derived from the recorded position and time data by dividing the distance between GPS position coordinates by the time delay between them (usually 5 seconds). In this manner, both the speed of the payload and its heading were calculated, yielding the horizontal velocity profiles shown in Fig. 3. The team made the important assumption that the velocity of the payload at any given moment is the same as the wind speed, i.e., that the payload accelerates nearly instantaneously with the wind. The payload profiles look almost identical, indicating that they ascended through similar atmospheric features. As the balloons approach 5 km in altitude they enter the jet stream and accelerate very rapidly; after reaching about 15 km in altitude the balloons begin to slow down. The many small-scale fluctuations seen in these horizontal velocity profiles could be caused by gravity waves.
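To make this finite-difference step concrete, here is a minimal Python sketch of deriving horizontal speed from successive GPS fixes. The haversine ground-distance calculation and the variable names are our own choices for illustration, not the GreenCube team’s code.

```python
import numpy as np

R_EARTH = 6.371e6  # mean Earth radius [m]

def horizontal_speed(lat_deg, lon_deg, t_s):
    """Finite-difference horizontal speed from successive GPS fixes.

    The distance between consecutive fixes is computed with the haversine
    formula; dividing by the time delay between fixes (nominally 5 s)
    gives the speed, under the assumption that the payload moves with
    the wind.
    """
    lat, lon = np.deg2rad(lat_deg), np.deg2rad(lon_deg)
    dlat, dlon = np.diff(lat), np.diff(lon)
    a = (np.sin(dlat / 2) ** 2
         + np.cos(lat[:-1]) * np.cos(lat[1:]) * np.sin(dlon / 2) ** 2)
    dist = 2 * R_EARTH * np.arcsin(np.sqrt(a))  # ground distance [m]
    return dist / np.diff(t_s)                  # speed [m/s]
```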

GPS data

To start our gravity wave analysis, we transformed our velocity data into components. The compass heading of the balloon over the course of its ascent shows that the balloon oscillated in direction around a 120 degree heading. We therefore designed a new coordinate system that better reflected the direction of the balloon, in which the “along” velocity shows movement in the direction of the prevailing winds and the “across” velocity registers movement perpendicular to the prevailing winds. When the new coordinate system is applied, the oscillations are more pronounced and the heading changes more apparent. The fluctuations in horizontal velocity are strongest perpendicular to the balloon’s path.
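A sketch of such a rotation, assuming east/north velocity components and the 120 degree prevailing heading quoted above (the component and function names are ours):

```python
import numpy as np

def along_across(v_east, v_north, heading_deg=120.0):
    # Project east/north velocities onto a frame aligned with the prevailing
    # wind. Compass headings are measured clockwise from north, so the unit
    # vector along a heading h is (sin h, cos h) in (east, north) coordinates.
    h = np.deg2rad(heading_deg)
    v_along = v_east * np.sin(h) + v_north * np.cos(h)    # with the wind
    v_across = v_east * np.cos(h) - v_north * np.sin(h)   # perpendicular
    return v_along, v_across
```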

Temperature data

Also flown aboard the payloads were five thermistors that recorded temperature data; thermistor data was transmitted approximately every 10 seconds. Before analysis, the data from all five thermistors was averaged so that one temperature profile was derived from each payload. The data was also converted into potential temperature to follow convention. Potential temperature is defined as the temperature of a volume of gas adiabatically changed from its initial pressure to a standard reference pressure (3); the standard pressure used was 1000 millibars. Because the GPS data and temperature data were handled by two different systems, they were reported at different rates, and the time stamps sent with each radio transmission corresponded to the GPS data but not necessarily to the temperature data. The temperature data was therefore splined against the GPS data to determine when, and at what altitude, each temperature measurement was recorded.

Fig. 3: The graph above shows the horizontal velocity profiles of both payloads as they ascended to apogee. The overall velocity curves are permeated by small perturbations in horizontal velocity.

Fig. 4: (a) A quiver plot showing the “across” (perpendicular to prevailing winds) component of payload velocity plotted along payload trajectory. The red lines connect atmospheric features detected by payload 1 with corresponding features detected by payload 2. Note that the lines are not perfectly horizontal, suggesting that the two payloads encountered the same features at slightly different altitudes. (b) A clearer example of the concept.
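A minimal sketch of the processing step described above. The Poisson exponent κ = R/cp ≈ 0.286 is the standard dry-air value, and the pressure-from-altitude step is our own assumption (the paper does not state how pressure was obtained), here taken from a crude isothermal barometric approximation:

```python
import numpy as np
from scipy.interpolate import interp1d

KAPPA = 0.286  # R/cp for dry air; standard value, not stated in the paper

def pressure_mb(alt_m):
    # Isothermal barometric approximation (illustrative assumption):
    # P = 1000 mb * exp(-z / H), with scale height H = 7 km.
    return 1000.0 * np.exp(-alt_m / 7000.0)

def potential_temperature(T_k, P_mb, P0_mb=1000.0):
    # Poisson's equation: the temperature a parcel would have if moved
    # adiabatically from pressure P to the 1000 mb reference level.
    return T_k * (P0_mb / P_mb) ** KAPPA

def theta_profile(t_gps, alt_gps, t_temp, T_temp_k):
    # Spline the 10 s thermistor record onto the 5 s GPS time base so each
    # temperature sample gets an altitude, then convert to potential
    # temperature. Temperatures are in kelvin.
    T_on_gps = interp1d(t_temp, T_temp_k, kind="cubic",
                        fill_value="extrapolate")(t_gps)
    return alt_gps, potential_temperature(T_on_gps, pressure_mb(alt_gps))
```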

Results

Combining the position and velocity data on a quiver graph allows a glimpse into two slices of the atmospheric velocity vector field above Mt. Washington (Fig. 4a). At first glance, the velocity vector profiles look very similar: features appear identical across lines of equal altitude, so there is no obvious time or range dependence. However, a closer look shows very faint changes along lines of equal altitude. For example, along the 14 km altitude contour, a peak in the cross-component velocity can clearly be seen just above the contour on both payloads as they ascend through this area. However, when payload 1 descends, the peak occurs exactly on the contour, and payload 2’s peak occurs below this contour. The change in altitude of this atmospheric feature is indicative of a change in the feature with respect to either time or distance. We can use this phase change to measure the horizontal wavelength of the feature: the change in altitude of two vertically propagating wave structures can be used to calculate the horizontal wavelength, as shown in Fig. 4b. Because the horizontal wavelength is proportional to the change in altitude, the peak-to-peak change in altitude of one full vertical wavelength can be used to calculate the horizontal wavelength.
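The geometry of this argument can be written out explicitly (our notation, anticipating the slope and vertical wavelength reported in the next paragraph): if a wave feature is displaced in altitude by a dimensionless slope \(m_{\mathrm{red}}\) per unit horizontal distance, then

\[
\lambda_x \;=\; \frac{\lambda_z}{\lvert m_{\mathrm{red}} \rvert}
        \;=\; \frac{1.3\ \mathrm{km}}{17.36\ \mathrm{m/km}}
        \;\approx\; 75\ \mathrm{km},
\]

consistent with the 76.5 km value reported below once the rounding of the 1.3 km vertical wavelength is taken into account.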

In Fig. 4a, we plotted lines connecting similar features. Rather than connecting peaks, however, we chose to connect the nodes, which represent a change in the across velocity from southeast to northwest. The lines with the greatest slope are located between 13 and 16 km in altitude; these lines describe the atmospheric features with the greatest change. The phase change becomes zero at higher altitudes. The line around 14.5 km had the greatest slope, at -17.36 m per km of distance. The peak-to-peak vertical wavelength of payload 1’s ascent measured 1.3 km; therefore, the minimum horizontal wavelength we observed was 76.5 km. The balloons themselves collected reliable data only above 10 km and below 40 km, so the structure is so large that horizontal distance is a negligible factor in determining the across velocity.

Starting at 15 km altitude, the balloon makes repeated fan-like patterns in which it sharply changes direction and then gradually returns to its normal heading. The largest such fluctuation occurs at 15 km, corresponding with the Tropopause and therefore with the largest change in temperature. The Tropopause is the boundary region between the Troposphere and Stratosphere, and is characterized by a local minimum in temperature (4). The other prominent sharp accelerations occur at 17.8 and 20.5 km, the same regions that correspond to the second and third largest changes in temperature. This could be indicative of an atmospheric shear layer, an area of the atmosphere where the velocity of the wind is vastly different from the layers above and below it. Such a sudden change in the velocity of the gas would change the balloon’s heading and cause it to record a very different temperature. To conclude, the large changes in temperature and velocity occur together, indicating that the payload is passing through a discontinuous shear layer.

Fig. 5: Average potential temperature for each payload plotted against altitude.

Fig. 6: Shear layers. As the balloons ascend through various sections of the atmosphere, the horizontal velocity suddenly changes. The figures above show this phenomenon. The left figure shows the payload ascending through the Tropopause at 15 km; the path is shown at the vector bases, and the magnitude of the vector describes the velocity. The right figure shows a bird’s-eye view of this graph: the balloon hits the shear layer and its path is directed in the +x-direction.

Simulation

Although the perturbations in velocity and temperature seen by the balloons were indeed measurable, we cannot say with any certainty that they were caused by gravity waves. Therefore, we created a numerical simulation that predicts the perturbations in vertical velocity the balloon should have experienced while flying over Mt. Washington. The mountain was modeled as a Gaussian function with a height and width representative of the actual size of Mt. Washington. The incoming horizontal wind velocity was generated by smoothing the velocity data obtained by Payload 1 during its ascent. The simulation then solved the Taylor-Goldstein equation using inverse Fourier transforms (2).
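A minimal sketch of this Fourier approach, assuming the textbook constant-wind, constant-stratification (Boussinesq, steady) limit of the Taylor-Goldstein problem. This is not the GreenCube team’s code, which used the measured, smoothed wind profile; U, N, and the Gaussian terrain below are illustrative placeholders, not Mt. Washington parameters.

```python
import numpy as np

U = 10.0        # background wind [m/s] (illustrative)
N = 0.01        # buoyancy frequency [1/s] (illustrative)
L, nx = 400e3, 4096
x = np.linspace(-L / 2, L / 2, nx, endpoint=False)
h = 1000.0 * np.exp(-(x / 10e3) ** 2)         # Gaussian "mountain" [m]

k = 2 * np.pi * np.fft.fftfreq(nx, d=L / nx)  # horizontal wavenumbers
w_surf = 1j * k * U * np.fft.fft(h)           # lower boundary: w = U dh/dx

def w_perturbation(z):
    """Vertical-velocity perturbation w'(x) at altitude z [m]."""
    m2 = (N / U) ** 2 - k ** 2                # dispersion relation
    m = np.sqrt(np.abs(m2))
    # Propagating modes radiate upward; evanescent modes decay with height.
    phase = np.where(m2 > 0,
                     np.exp(1j * np.sign(k) * m * z),
                     np.exp(-m * z))
    return np.fft.ifft(w_surf * phase).real

w_5km = w_perturbation(5e3)  # perturbation profile at 5 km altitude
```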

The results are shown in Fig. 8. The colored lines in this figure represent the streamlines of the velocity field over the mountain. The trajectory of payload 2 was plotted over this field, along with the perturbations in horizontal velocity. Fig. 9 shows the expected amplitudes of the vertical velocity as a function of downstream distance, versus the actual amplitudes seen by payload 2. This figure shows that the model closely matches the amplitude of perturbations inside the region affected by mountain waves. However, there are also large perturbations seen by payload 2 outside this region. Does this indicate that the balloon’s perturbations were not caused by gravity waves? Not necessarily. This model only predicts perturbations in vertical velocity, which, as stated earlier, is difficult to measure with a sounding balloon due to buoyancy concerns; creating a model that predicts horizontal perturbations might yield better results. Additionally, this is only a two-dimensional model, with a very simple contour representing Mt. Washington. In reality, the Mt. Washington ridgeline is shaped like an integral sign, which might force air over the mountain in a manner rather different from the straight ridgeline used in this simulation. Lastly, the terrain around Mt. Washington is corrugated, which could account for the large perturbations seen far downstream of the mountain. A higher-fidelity simulation is needed to test these sources of error.

Fig. 7: A numerical simulation illustrating the formation of mountain waves. Incoming air from the left is forced over the terrain feature, sending velocity perturbations propagating downstream in distance and upward in altitude.

Fig. 8: Payload 2’s vertical velocity perturbations and flight path are plotted over the simulated mountain waves created by Mt. Washington.

Fig. 9: The expected vertical velocity perturbations (black) are plotted with the actual perturbations seen by Payload 2 (green). Because the green line is not fully contained in the black line, we know that there must be other sources of velocity perturbations besides mountain waves.

Conclusion

The GreenCube II mission was successful in collecting multiple-point measurements over the course of its two-hour flight. The GPS and temperature data from the payloads were received in real time and translated into accurate temperature and flight profiles. This data was used to estimate the structure of the gravity wave system over Mt. Washington. However, as the simulation shows, it is still difficult to tell whether the measured perturbations were caused by gravity waves or by another phenomenon, such as atmospheric shear layers. A higher-fidelity simulation is needed to verify this.

Nomenclature

θ = potential temperature [°C, K]
P = pressure [millibars]
P0 = reference pressure [millibars]
Cp = specific heat capacity [J/kg K]
R = gas constant [J/mol K]
T = temperature [°C, K]
λ = wavelength [m]
m_red = slope [dimensionless]
w1 = perturbation velocity (vertical)
x = position (x-direction) [m]
z = position (z-direction) [m]
ŵ = Fourier transform of the perturbation functions
k = wavenumber in x-direction
m = wavenumber in z-direction

References

1. K. Lynch, personal interview, 2009.
2. C. Nappo, An Introduction to Atmospheric Gravity Waves (Academic Press, San Diego, 2002).
3. M. Yau, R. Rogers, A Short Course in Cloud Physics (Butterworth-Heinemann, ed. 3, 1989).
4. H. Crutcher, “Temperature & humidity in the troposphere,” World Survey of Climatology, Vol. 4 (Elsevier Scientific Publishing Company, New York, 1969).



Ecology

Energy Optimization and Foraging Preference in Hummingbirds

Marielle Battistoni '11, Elin Beck '12, Sara Remsen '12, and Frances Wang '12

Optimal Foraging Theory predicts that organisms will maximize their foraging efficiency by balancing time spent feeding and time spent searching for new feeding sites. Foraging efficiency is particularly important for hummingbirds (Trochilidae) because of their high metabolism and energy requirements. We hypothesized that hummingbirds would prefer large flowers and clusters with more flowers to optimize their energy expenditure. Hummingbirds should also spend more time at flowers with a higher nectar concentration because of the greater energy reward. We observed hummingbird behavior in response to manipulations of reward size using artificial feeders in Monteverde, Costa Rica. Our results supported the hypothesis that hummingbirds prefer flowers that offer higher energy rewards.

Introduction

Foraging animals must constantly make decisions about which patches of food to visit and how long to stay in them. Optimal Foraging Theory predicts an energy trade-off: energy is required to locate and travel to a new patch, but as the amount of time spent within a patch increases and resources are consumed, the energy available decreases (1). According to the Marginal Value Theorem (formalized below), animals forage in a patch until the rate of energy gain is less than or equal to the potential energy gain from other patches within the habitat (2). Hummingbirds (Trochilidae), with a high wingbeat frequency and a heart rate that can reach 1300 beats per minute while hovering, have an extremely high metabolism for their size (3). Hummingbirds feed on nectar because its high sugar concentration provides a quick source of energy (4). Hummingbirds are therefore ideal organisms for studying Optimal Foraging Theory: they face a high cost if they do not choose a foraging strategy that maximizes their energy intake (4).

In this study, we investigated the factors that determine hummingbird patch choice and feeding duration. We tested the hypothesis that hummingbirds should not forage randomly but should forage to optimize potential energy intake. We predicted that hummingbird foraging preference would be based primarily on visual cues such as flower size and the clustering of flowers. Because larger flowers should contain more nectar, we predicted that hummingbirds would prefer large feeders to small feeders and large clusters of feeders to small clusters (5). Finally, we predicted that nectar quality would affect hummingbird foraging choice, such that hummingbirds would spend more time at feeders with concentrated nectar than at feeders with diluted nectar.
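Charnov’s result can be stated compactly (a standard formulation in our own notation, not reproduced from reference 2): a forager should leave a patch at the residence time \(t^*\) at which the instantaneous rate of energy gain falls to the average rate for the habitat as a whole,

\[
g'(t^*) \;=\; \frac{g(t^*)}{t^* + \tau},
\]

where \(g(t)\) is the cumulative energy gained after foraging in a patch for time \(t\) and \(\tau\) is the mean travel time between patches.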

Methods

We studied hummingbird foraging behavior at the Monteverde Biological Station, Costa Rica. We constructed hummingbird feeders using 6 mL and 50 mL plastic vials. Vials were covered in red tape to attract hummingbirds (6) and filled with a 30% sucrose solution (6). The mouths of the vials were also covered in red tape, and a hole punch was used to create a standard opening. The vials were spaced 30 cm apart on tree branches at a height of 1.5-2 m outside the Monteverde Biological Station.

Patch and flower size

To test whether hummingbirds prefer more or larger flowers in a patch, we arranged clusters of different sizes by hanging vials next to each other. We had six different cluster treatments: 1, 3, and 5 vials of both small (6 mL) and large (50 mL) volume. We randomized two, three, or four clusters of feeders at four different sites around the field station. We conducted twenty-minute observations of all of the sites on January 20 and 21, 2011, between the hours of 06:00 and 18:00, for a total of 24 effort-hours. After three trials, treatments were rotated around our study site to control for site effects. We recorded the number of visits to each cluster and the duration of each visit.

Concentration and volume

Image courtesy of Marielle Battistoni ‘11.

Hummingbirds appear to follow Optimal Foraging Theory.

We performed two additional experiments on 22 January 2011, for 12 effort-hours each (between the hours of 06:00 and 14:00), to examine why hummingbirds spend more time at larger flowers. To test whether the nectar concentration or the physical size of the feeder was more important to feeding duration, we set up 40% and 10% sucrose solutions in three large and three small individual vials. To test whether hummingbirds discern the volume of nectar in the feeder, we concealed a small vial inside a large vial to create a treatment with a large physical size and small nectar volume. Four of these compound vials were set up, in addition to four large and four small vials as controls. The sugar concentration and volume treatments were hung at four sites along the road by the field station’s garden. Within each experiment, blind treatments were distributed randomly. We observed the sites for twenty-minute trials, recording data in the same way as in our previous observational study.

Analysis

We performed Analysis of Covariance (ANCOVA) to evaluate the potential effect of site on the visit duration. For the patch and flower size experiments, we used Analysis of Variance (ANOVA) to compare mean number of visits between large and small feeder clusters to establish hummingbird visual foraging preference. We also used ANOVA to compare mean visit duration between large and small feeders to assess whether hummingbirds remain at feeders that offer higher energy rewards. To analyze whether hummingbird visit duration is driven by volume, we performed ANOVA to compare mean visit duration between large, compound, and small feeders. We used ANOVA to examine the effects of sucrose concentration on hummingbird foraging preference by comparing mean visit duration between high and low concentration feeders of both sizes. We then performed post hoc Tukey HSD tests to determine which comparisons were significant. Statistical tests were conducted with JMP 8.0.
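The same pipeline can be sketched outside JMP; the snippet below shows an equivalent one-way ANOVA and post hoc Tukey HSD in Python (the duration values are made-up placeholders, not our data):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder visit durations (s) for two treatments -- illustrative only.
dur_small = np.array([2.1, 3.0, 1.8, 2.6])
dur_large = np.array([4.0, 5.2, 3.7, 4.9])

# One-way ANOVA comparing mean visit duration between treatments
F, p = stats.f_oneway(dur_small, dur_large)
print(f"F = {F:.2f}, p = {p:.3f}")

# Post hoc Tukey HSD across all treatment groups
durations = np.concatenate([dur_small, dur_large])
groups = ["small"] * len(dur_small) + ["large"] * len(dur_large)
print(pairwise_tukeyhsd(durations, groups))
```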

Results

We found no evidence for a statistical interaction between site and hummingbird visitation to each treatment (ANCOVA, F(23,196) = 1.33, p = 0.19). Therefore, we pooled the data from all four sites in our analysis.

Fig. 1: Mean number of hummingbird visits to clusters of 1, 3, and 5 feeders of 40% sucrose concentration at Monteverde, Costa Rica. Error bars represent one standard error from the mean.

Fig. 2: Mean duration (s) of hummingbird visits to large and small feeders with 40% sucrose concentration, weighted by total number of visits to each treatment, at Monteverde, Costa Rica. Error bars represent one standard error from the mean.

Hummingbirds made significantly more visits to the larger clusters of feeders than to smaller ones (F(2,217) = 24.93, p < 0.0001; Fig. 1). Hummingbirds also spent significantly more time at large feeders than at small feeders (F(1,211) = 4.99, p = 0.027; Fig. 2). In the volume experiment, hummingbirds spent significantly more time at large compound feeders than at small feeders (F(2,9) = 5.18, p = 0.032; Fig. 3); time spent at large feeders alone was not significantly different from either compound or small feeders. Hummingbirds stayed significantly longer at high-concentration feeders than at low-concentration feeders (10%: mean = 2.42 s, SE = 0.56; 40%: mean = 4.72 s, SE = 0.56; F(2,4) = 8.52, p = 0.015). Hummingbirds spent significantly more time at feeders that were both large and contained high sugar concentrations (F(3,8) = 8.47, p = 0.0073; Fig. 4).

Fig. 3: Mean duration (s) of hummingbird visits to large volume feeders, large volume feeders containing a small volume solution, and small volume feeders, weighted by total number of visits to each treatment. Trials were conducted at Monteverde, Costa Rica. Error bars represent the standard error of the mean.

Fig. 4: Mean duration (s) of hummingbird visits to feeders containing 10% sucrose solution and 40% sucrose solution, weighted by total number of visits to each treatment. Dark bars represent small feeders and light bars represent large feeders. Trials were conducted at Monteverde, Costa Rica. Error bars represent one standard error from the mean.

Discussion

Because hummingbirds have a high metabolism, they must optimize their energy expenditure. Hummingbirds may increase their foraging efficiency by distinguishing between high- and low-energy flower rewards (7). In our study, hummingbirds consistently optimized their energy intake by spending the most time at large feeders, large clusters of feeders, and feeders with high sugar concentrations. Our data support our hypothesis that hummingbirds visit large clusters of flowers significantly more often than small clusters (Fig. 1). Hummingbirds may be minimizing their travel-associated energy costs by choosing to feed at higher-density patches. Furthermore, when there are more flowers in a cluster, it is less likely that all of the flowers will already have been depleted. Alternatively, hummingbirds may rely primarily on visual foraging strategies and simply have been able to find larger clusters of feeders more easily (8).

We also found that hummingbirds spend more time per visit at large feeders than at small feeders (Fig. 2). Hummingbirds may feed longer at large feeders to maximize their energy intake because larger flowers may contain a greater nectar reward. Fenster et al. found that ruby-throated hummingbirds prefer larger artificial flowers in a system where larger corolla size was correlated with both more and higher-quality nectar (5). Consistent with our predictions, we found that hummingbirds discriminated only between the sizes of feeders, not their volumes (Fig. 3). This result may be biased by the fact that no hummingbird in our experiment could deplete all the nectar in a single visit, because our vials contained a much greater volume of sugar water than is found naturally in flowers (9). However, Salguero-Faría and Ackerman found that the volume of nectar offered by a Puerto Rican orchid had no effect on hummingbird visitation and pollination success (10), which is consistent with our results and with the conclusion that hummingbird visit duration is not driven by volume.

We also found that hummingbirds feed longer at large high-concentration feeders than at small high- and low-concentration feeders and large low-concentration feeders (Fig. 4). By staying longer at high-concentration flowers, hummingbirds can maximize their energy gain while minimizing their effort and travel costs (11).

Hummingbirds’ clear preference for certain flower characteristics is likely to exert selective pressures on hummingbird-pollinated plants. These plants must balance the energy allocated to attracting hummingbirds with energy for metabolic processes, including plant growth. Plants may have developed a balance between quality and quantity of rewards that optimizes their energy investment in exchange for hummingbird pollination. Our results indicate that plants can best attract hummingbirds by investing in large flowers with high-concentration nectar. Despite hummingbird preference for the highest-concentration nectar available in laboratory tests, producing high-concentration nectar may be too energetically costly for plants. Galetto and Bernadello found that flowers produce only enough nectar to attract a hummingbird long enough to receive pollen (12,13). When birds are satiated after one feeding, they will not expend more energy to visit other flowers. If the flowers all have low-concentration nectar, hummingbirds must visit multiple flowers to satisfy their energetic needs, thereby increasing pollination and the plant’s reproductive success (12). Our results support the hypothesis that hummingbirds follow Optimal Foraging Theory and choose foraging sites to maximize their energy intake. Further studies could investigate how other aspects of flower morphology and nectar production affect patterns of hummingbird foraging preference.

Image courtesy of Marielle Battistoni '11.

Dartmouth students collected data on hummingbirds at the Monteverde Biological Station in Costa Rica.

Acknowledgements

We would like to thank the staff of the Monteverde Biological Station for their hospitality. We would also like to express our gratitude for the feedback and support of the course TAs, the students of the Foreign Study Program, and Ryan Calsbeek, who aided us in refining our experimental design and data analysis.

References

1. G. Pyke, Am. Zool. 18, 739-752 (1978).
2. E. Charnov, Theor. Pop. Biol. 9, 129-136 (1976).
3. R. Suarez, Experientia 48, 565-570 (1992).
4. M. Fogden, P. Fogden, Hummingbirds of Costa Rica (Zona Tropical, Miami, 2005).
5. C. Fenster et al., Am. J. Bot. 93, 1800-1807 (2006).
6. A. Ödeen, O. Håstad, J. Comp. Physiol. 196, 91-96 (2010).
7. L. Wolf, F. Hainsworth, F. Gill, Ecology 56, 117-128 (1975).
8. G. Stiles, Ecology 56, 285-301 (1975).
9. P. Feinsinger, Biotropica 15, 8-52 (1983).
10. J. Salguero-Faría, J. Ackerman, Biotropica 31, 303-311 (1999).
11. S. Tamm, C. Gass, Oecologia 70, 20-23 (1986).
12. A. Bolten, P. Feinsinger, Biotropica 10, 307-309 (1978).
13. L. Galetto, G. Bernadello, Ann. Bot. 94, 269-280 (2004).



Environmental Sciences

Biophilic Design

A Review of Principle and Practice

Elizabeth Molthrop '12

The word “green” elicits many definitions and responses. From nature itself to environmentally friendly consumer items and building methods, the word has been ubiquitously slapped onto a multitude of products and services currently on the market. The “green” movement in construction in particular has a multitude of implications. For some, green architecture is defined in black-and-white terms by LEED (Leadership in Energy and Environmental Design) standards; others seek aesthetic integration with the environment as a determining factor. A new movement related to green architecture, biophilic design, has recently gained much momentum within the building community. The leading experts in the biophilic design field hold that “we should bring as much of nature as we can into our everyday environments so as to experience it first-hand; second, we need to shape our built environment to incorporate those same geometrical qualities found in nature” (1). While the green movement has often focused on the means, biophilic design emphasizes the end results: establishing nature-based habitats in which humans can live and work. Rather than merely erecting buildings, architects who follow the tenets of biophilic design create spaces in which humans can truly fulfill their potential. Biophilic design incorporates elements derived from nature in order to maximize human functioning and health.

Historical Context

As humans evolved over the millennia, their relation to the environment likewise changed. People depend on their surroundings both for natural resources and for enabling the establishment of community. As creatures of the earth, humans respond to its natural features, which can also be incorporated into constructed design. The modern history of architecture is characterized by building movements and styles, often imposed by an elite few who deemed this “good” architecture. The rigid geometry of Modern Architecture, for example, holds few relationships to the outside world. Conversely, “great architects in the past were better able to discern those qualities, and to reproduce them in their buildings because they were more engaged with their immediate surroundings.” As a consequence, their buildings provided protection along with the benefits of natural elements.

The premise of biophilic design “aims not only to reduce the harm that stems from the built environment, but also to make the built environment more pleasing and enjoyable. It seeks both to avoid and minimize harmful impacts on the natural environment, as well as to provide and restore beneficial contacts between people and nature in the built environment.” Architecture in and of itself is not harmful, and its benefits of shelter and community cannot be overlooked. However, built environments can certainly cause stress. Biophilic design provides the answer to this predicament, preventing harm to both people and nature while facilitating a beneficial link between the two (1).

Too often, a distinction has been made between architecture and environment, cutting people off from a psychologically developed need to commune with nature. When architects overstep their role, using “images and surface effects” to “supplant everyday human desires and sensibilities in the name of artistic endeavor,” humans are left to live out their lives in a series of ill-fitting, over-exaggerated, and often idiosyncratic formal architectural schemes. Biophilic design does not advocate tree houses or cave-dwelling, but it does provide the nature-based features that prompt complex thinking in humans. Though not technically biophilic design, the nature-communing architecture of Frank Lloyd Wright’s Fallingwater arguably speaks to the human soul much more than a box-like “machine for living” by Le Corbusier. Biophilic design is not an architectural style, and it must avoid becoming one: designers can become caught up with the potential of new technology, pushing its limits but not in the service of a building’s users. Because of these risks, the “green” aspect of biophilic design must not overwhelm its overarching goal of creating an ideal environment for people (1).

Biophilic design affords humans a host of benefits. Particular landscapes can reduce stress and enhance well-being because we gravitate toward certain configurations and natural contents. These landscapes were the environments prehistoric people inhabited throughout their evolution, and the human brain adapted to respond to these types of spaces. In built environments, we have obstructed this connection developed over millennia. We are so accustomed to our built habitats that we do not notice their deleterious effects, and as a result, stress has become a chronic issue in modern society. Eliminating some of the distinction between built and natural allows biophilic design to impart the benefits of both types of environments (2). One crucial element of the natural landscape for human health is sunlight: we are evolutionarily programmed to respond positively to well-lit or sunny areas over dark or overcast settings, and such spaces can be expected to foster restoration, improve emotional well-being, and promote health (1). The distractions of modern life also induce stress, especially the artifacts (cell phones, laptops, and the like) to which we are so attached (1). The rates of technological progress have far exceeded the rates of psychological evolution, leaving us ill-equipped to cope with our lifestyle. Biophilia expert Yannick Joye states, “by including elements of ancestral habitats in the built environment, one can counter potential deleterious effects which stem from this dominance [of uniform/modernist environments], resulting in more positive effects and more relaxed physiological and psychological states” (2). Because biophilia attempts to integrate ancestral and current habitats, it can alleviate the stress caused by the brain’s constant attempts to function in a modern environment it has not yet evolved to handle (2).

Applications of Biophilic Design Because of its tremendous impact on human psychology, biophilic design plays a vital role in healthcare and healthcare delivery. The current healthcare system contains many flaws, especially in its physical spaces. Hospitals, clinics, and offices are high-stress environments for patients, visitors and families, and healthcare professionals alike. Integrating nature into healthcare facilities has numerous benefits for many groups. One well-known study by Ulrich et al. looked at patients after surgery. One group of patients had windows with a tree view; the others’ windows faced a brick wall (2). The patients with windows facing trees “had shorter hospital stays, received fewer negative comments from the nurses, required less moderate and strong analgesics, and had slightly fewer postoperative complications” (2). The underlying reasons for this discrepancy are biological. For our ancestors, “a capability for fast recovery from stress following demanding episodes was so critical for enhancing survival chances of early humans as to favor individuals with a partly genetic predisposition for restorative responding to many nature settings” (1). As a consequence, nature and nature-based design have been integrated into the physical design of many hospitals. Dartmouth-Hitchcock Medical Center (DHMC), for example, boasts an atrium design, flooding daylight through the entire facility. Natural elements also permeate the building, including wood, stone, and numerous live plants. Though DHMC was built to originally incorporate these qualities, other hospitals have been retrofitted with elements of biophilia. This follows the trend of the application of biophilia’s concepts to interior design in hospitals as administrators have witnessed patients’ positive responses to nature. Changes to pre-existing hospitals allow immediate improvement for staff, patients, and visitors. In addition, scientific studies have shown that including gardens in healthcare design has a restorative effect for both faculty and patients. Whether in a concentrated garden setting or dispersed throughout the building, natural forms provide an oasis from the stress inherent within the healthcare system (1). A few theories based on research have emerged to explain the effects of biophilic design on humans. One element likely contributing to biophilia’s influence on human psychology is an underlying geometry of fractals within nature (2). Characteristics of fractals include “roughness [recurring] on different sales of magnitude,” “self-similarity” on each level of magnification, and non-integer dimensionality (which is more easily witnessed than described) (2). Examples of fractal patterns include fern fronds, lighting bolts, and burning flames. The “nested scaling hierarchy” that are fractals can be found in many traditional architectural forms, confirming previous generations’ greater connection (conscious or not) to the natural environment (1). The Gothic style in particular is obviously fractal (1). In addition, functional Magnetic Resonance Imaging (fMRI) studies have begun to reveal a link between aesthetic response and the brain’s pleasure center (1). Logi38

Image courtesy of Figuura.

Biophilic design can incorporate natural elements into workspaces.

Along with the concept of fractals, two overarching theories have attempted to explain humans’ affinity for biophilia. The first is attention restoration theory, developed by the Kaplans, which “interprets restoration as the recovery of directed attention or the ability to focus. This capacity is deployed during tasks that require profound concentration, such as proofreading or studying. Natural settings have been found to be ideally suited to restore or rest directed attention.” The second theory applies more broadly than attention alone: restoration is instead stress reduction and “can occur even when directed attention is not fatigued.” This theory, the psychoevolutionary theory developed by Ulrich, is based on the experiences of early humans. The threats they encountered required immediate response and rapid recovery, which “typically occurred in natural unthreatening (savanna-like) settings.” The humans best equipped to do this were the most likely to survive and reproduce, passing on these nature-induced restoration capabilities to their offspring. This continuing psychological evolution produced the current makeup of the human brain (2).

While the idea of biophilia is an attractive one, as with any theory, it has certain limitations. Some people respond quite well to modern architecture even though they are biologically predisposed to respond to more natural forms. Biology is a key player in influencing human psychology, but culture must not be overlooked. In comparing cultures, however, people across the board respond similarly well to natural views, making it all the more likely that an affinity for biophilia has been solidified within the gene pool (1).

Because people throughout the world associate biophilia with positive feelings, architects relying on biophilic design have the advantage of universal appeal. They also retain a high degree of flexibility and freedom, as biophilic design is not defined by one aesthetic. Many existing buildings contain biophilic elements, but only a few have been built with the specific idea of biophilic design in mind. One such building is the Adam Joseph Lewis Center for Environmental Studies at Oberlin College.


The director of Oberlin’s Environmental Studies Program, David Orr, explained that the building’s goals were “to create not just a place for classes but rather a building that would help to redefine the relationship between humankind and the environment—one that would expand our sense of ecological possibilities.” Following the tenets of biophilic design led to mutual benefit for the environment and the building’s human inhabitants. The Lewis Center is sustainable in a broader sense than the word typically conveys: it minimizes energy use, harnesses solar power, utilizes both active and passive air systems, and monitors the weather to adapt to conditions. The Center’s “Living Machine” treats wastewater by combining traditional wastewater technology with the purification processes of wetland ecosystems, producing water that can be reused in the toilets and for irrigation. In their design, Orr and his team engineered an outstanding space for students to thrive while ensuring the surrounding environment could do the same.

Another example is the University of Guelph-Humber building in Ontario, Canada. It contains a centrally located biowall spanning the building vertically. The wall is covered in dense foliage and can be seen from almost every level inside. It also functions as a prototype air filtration system, purifying the air with the potential to fulfill the building’s fresh-air intake requirements (3).

DHMC likewise maintains an arts program for the benefit of its patients and caregivers. A large portion of the paintings, photographs, and other works contain nature or natural elements. Elisabeth Gordon, head of the program, says the pieces she seeks are soothing and reflective, and that they help reconnect the viewer to his or her humanity (4). Though she does not use the term “biophilic design,” the works within DHMC certainly exhibit the same qualities.


Conclusion

Biophilic design principles can be applied in a variety of contexts, allowing both people and the environment to flourish. Human psychology clearly benefits from contact with nature, and inviting nature into our buildings is the ideal way to ensure both the continuation of our modern lifestyle and the assuagement of our more primitive needs. Positive effects can especially be seen in the realm of healthcare: its typically stressful atmosphere holds tremendous room for improvement, and numerous studies document nature’s role in healing. In sum, the built environment need not interfere with the biological human need to commune with nature, nor with existing ecological systems. Ancient architects built for their cultures, which were almost always more in touch with the earth than present-day Western society. They mimicked nature’s forms, producing magnificent structures by which we are still awed; though biophilic design is a novel concept, these builders certainly employed some of its recommendations. Today, we can add another layer to this tradition and ensure maximal benefit for our planet and ourselves.

References

1. S. Kellert, J. Heerwagen, M. Mador, Biophilic Design: The Theory, Science, and Practice of Bringing Buildings to Life (Wiley, New Jersey, 2008).
2. Y. Joye, Rev. Gen. Psychol. 11, 305-328 (2007).
3. S. Pliska, Biophilia, Selling the Love of Nature (2005). Available at http://www.planterra.com/research/article_biophilia.php (17 August 2010).
4. E. Gordon, personal interview, 23 August 2010.



ecology

Effects of Ocean Acidification on a Turtle Grass Meadow Marielle Battistoni ‘11, Katherine Fitzgerald ‘11, and Suzanne Kelson ‘12

With increasing atmospheric CO2 concentrations, the oceans are expected to increase in acidity during the next century. Increasing ocean acidity has been shown to negatively affect many marine ecosystems, particularly calcifying organisms. We investigated acidification effects on the turtle grass communities of Little Cayman Island, British West Indies (BWI). We hypothesized that acidified seawater would decrease turtle grass growth, the presence of calcifying epiphytic algae, and the metabolism of snail grazers. We placed turtle grass and snails in tanks with acidified or natural seawater for four days. We found that turtle grass growth decreased, leaf senescence increased, and epiphytic algal cover was strongly reduced in acidified seawater. We also found that snail activity was negatively affected by acidic seawater. Our results suggest that continuing ocean acidification could be detrimental to the productivity and health of turtle grass meadows.

Introduction

Oceans act as important carbon sinks for anthropogenic emissions, reducing atmospheric concentrations of CO2 by up to one-third (1). As oceans become saturated with CO2, the pH of seawater is predicted to drop from 8.1 to 7.7-7.8 by the end of this century (2,3). This increased acidity may have dramatic effects on marine organisms, in part because it reduces the concentration of carbonate ions and the solubility of aragonite, a form of calcium carbonate used to create biological structures (1,3). Turtle grass (Thalassia testudinum), the most common marine plant in the Caribbean, supports an ecosystem with a variety of organisms, including epiphytic algae and snails, which will likely be impacted by ocean acidification (4). Hendriks et al. found that, in acidic environments, gastropod survival and growth rates decreased by 93 percent and 63 percent, respectively (5). Some invertebrates depress their metabolic rates to maintain an internal acid-base balance in response to increased acidity (13). While metabolic rates of other marine heterotrophs decrease with acidification, the metabolic response of gastropods has not yet been studied (5). Additionally, growth and recruitment of several species of unattached crustose coralline algae decrease at lowered pH levels (6,7). However, few studies have investigated the effects of acidification on epiphytic calcifying red algae, although the algae are important to the function of coastal seagrass meadows. The epiphytic algae that grow on turtle grass leaves provide protection from desiccation and herbivory (8).

Here we investigate the effects of ocean acidification on growth and metabolism of turtle grass and the presence of epiphytic crustose coralline algae (Hydrolithon farinosum). We also examined how the metabolic rates and grazing behavior of snails (Littorina sp.) were impacted by lowered pH. We hypothesized that lowering the pH of seawater would stress turtle grass, thereby reducing its growth and metabolic rate. We predicted that the percent cover of calcifying algae on turtle grass would decrease in a lower pH environment because calcium carbonate structures necessary for algal growth would degrade. We predicted that acidified treatments with snails added would have the least amount of algal cover, which would further reduce turtle grass growth rates. Finally, we hypothesized that snail metabolic rates and activity would decrease with increased acidity because snails would be under greater stress.
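Because pH is a negative base-10 logarithm of hydrogen ion concentration, the projected drop from 8.1 to 7.7 is larger than it may appear. A minimal Python illustration (ours, not part of the study):

    # [H+] = 10**(-pH); a drop of 0.4 pH units multiplies [H+] by 10**0.4.
    h_present = 10 ** -8.1          # ~7.9e-9 mol/L
    h_projected = 10 ** -7.7        # ~2.0e-8 mol/L
    print(h_projected / h_present)  # ~2.5, i.e. roughly 2.5 times more acidic

By the same arithmetic, the experimental tanks described in the Methods below (pH 7) hold ten times the hydrogen ion concentration of the control tanks (pH 8), a deliberately stronger treatment than the projected end-of-century change.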

Methods

We conducted an experiment simulating the effects of ocean acidification on turtle grass communities at Little Cayman Research Center, BWI, from March 6 to 9, 2011. We created an acidic environment by filling 12 clear, square tanks of 7.5 L capacity with 5 L of seawater and 100 mL of 5 percent white vinegar. Acid was added daily to maintain the tanks at pH 7, measured with pH strips. We created 12 control tanks filled with 5 L of seawater (pH 8). Tanks maintained a constant temperature of 26 ˚C, similar to the average afternoon temperature of 29.5 ˚C that we measured at the turtle grass collection site.

Turtle grass mats, including belowground biomass, were collected with a shovel from the seagrass meadow in front of the Little Cayman Research Center, Little Cayman, BWI. Because Corlett and Jones found that the coralline red algae Hydrolithon farinosum was the most abundant epiphyte on turtle grass growing near Grand Cayman, BWI, we assumed that this was the same epiphytic calcifying algae present in our experiment (9). Snails were collected from the seagrass meadow at South Hole Sound, Little Cayman Island, BWI. We weighed turtle grass samples and placed mats of similar biomass in the 12 tanks of acidified seawater and 12 tanks of natural seawater. We added 20 snails to six of the acidified tanks and six of the control tanks.

We analyzed the cover of calcifying algae on 10 haphazardly sampled turtle grass leaves from each tank daily. We measured the total length of each leaf and the length of the leaf that was white due to coverage by epiphytic algae. Because algal cover was not uniform on the white area of the leaf, we approximated percent cover on this portion to the nearest 25 percent. We also noted leaf senescence daily, as evidenced by browning of leaves. We marked growth on turtle grass blades by poking holes with a toothpick at the base of individual leaves at the beginning of the experiment, and we measured the growth of 10 haphazardly chosen blades in each tank at the end of four days by recording the distance (mm) from the base of the leaves to the growth scar.
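The nominal acid dose above can be back-calculated. Assuming the vinegar is 5 percent (w/v) acetic acid, about 50 g per liter (an assumption about the product, not a value we measured), a rough Python check:

    # 100 mL of ~5% (w/v) vinegar into 5 L of seawater (assumed values).
    acetic_acid_g = 0.100 * 50.0            # ~5 g acetic acid per dose
    conc_g_per_L = acetic_acid_g / 5.1      # ~0.98 g/L in the tank
    conc_mM = conc_g_per_L / 60.05 * 1000   # ~16 mmol/L (molar mass 60.05 g/mol)
    print(round(conc_mM, 1))

Each daily dose therefore supplied on the order of 16 mmol/L of acetic acid before buffering; seawater's carbonate buffering consumes this acid, consistent with the need for daily re-dosing to hold pH 7.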


Snail metabolic rate was estimated once daily by measuring changes in dissolved oxygen after snails were added. We removed all snails from the tanks and placed them together in 125 mL Nalgene® bottles filled completely with seawater. Dissolved oxygen (percent and mg/L) was measured before and 30 min after adding snails using a YSI Inc. Pro Optical Dissolved Oxygen™ meter. We also estimated the respiration rate of the turtle grass each afternoon by measuring the change in dissolved oxygen before and 30 min after the tanks were covered with airtight black plastic bags. Snail activity was estimated daily by recording the location of each snail in the tank as on the tank wall, on the leaves, or on the sandy substrate. We classified snails on the leaves and wall, where they must exert effort to maintain suction with the surface, as displaying active behavior.

We multiplied the proportion of leaf length covered in algae by the percent cover of those algae to estimate overall leaf algal cover. Algal cover data were arcsine-square-root transformed, and the 10 leaves observed per replicate were averaged across treatments. Plant respiration was weighted by initial plant mass. We used ANOVA to examine differences in growth between treatments and a repeated-measures ANOVA to quantify the effects of treatment and time on algal cover. We used Student’s t-tests to determine differences in senescence and plant oxygen consumption between acidified and control treatments, and to examine differences in snail activity and respiration between treatments.
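Our analyses were run in standard statistical software; the sketch below is a hypothetical Python equivalent (made-up numbers, using scipy) of two of the steps: the arcsine-square-root transform applied to cover proportions and a t-test of the kind used for senescence and activity.

    import numpy as np
    from scipy import stats

    # Hypothetical mean algal-cover proportions for six tanks per group,
    # where cover = (white length / total length) * estimated percent cover.
    control = np.array([0.62, 0.55, 0.70, 0.58, 0.66, 0.61])
    acidified = np.array([0.21, 0.15, 0.30, 0.18, 0.25, 0.22])

    def asin_sqrt(p):
        # Arcsine-square-root transform, commonly applied to proportion data.
        return np.arcsin(np.sqrt(p))

    # Two-sample t-test without assuming equal variances (Welch's test).
    t, p = stats.ttest_ind(asin_sqrt(control), asin_sqrt(acidified), equal_var=False)
    print(t, p)

A one-way ANOVA across the four growth treatments could be run the same way with stats.f_oneway; the repeated-measures model for cover over time requires a dedicated routine and is omitted here.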


Results

We found that turtle grass growth differed significantly among treatments, with control treatments experiencing twice the growth of acidified treatments (F3,20 = 22.9, p < 0.0001; Fig. 1). A Tukey-Kramer HSD test showed that both control treatments (those with and without snails) grew more than both acidified treatments. Respiration rates showed no difference between acidified and control treatments (t28.0 = -1.38, p = 0.18).

We found a significant treatment-by-time interaction on algal cover, with algal cover decreasing over time in acidified but not in control treatments (Wilks’ lambda = 0.174, F9,41.5 = 4.85, p = 0.0002; Fig. 2). Visual observations confirmed this finding, as algal cover was reduced in acidified treatments more quickly and to a greater degree than in control treatments (Fig. 3). We also found that leaf senescence was 28 percent higher in acidified treatments (t21.9 = -7.96, p < 0.0001; Fig. 4). Snails were significantly more active in control treatments (t21.5 = 7.66, p < 0.0001; Fig. 5). There was no significant difference in snail respiration rates between acidified and control treatments (t33.9 = 0.64, p = 0.52).

Discussion


We conclude that ocean acidification will negatively affect the health of turtle grass because we found a decrease in growth and an increase in leaf senescence of plants in acidified treatments. The acidic environment also appears to be detrimental to epiphytic calcifying algae, as algal cover on turtle grass leaves was reduced in the acidified tanks. The acid may have dissolved the algal calcium carbonate structures, as acid washes are known to be effective in reducing epiphyte loads on seagrasses (10). Additionally, the change in the chemical composition of the seawater may have prevented the algae from producing new structures and growth.

The reduction in epiphyte cover may have negatively affected the turtle grass through several mechanisms. The reduced cover of algae may have contributed to the increased leaf senescence in the acidified tanks because epiphytes can protect seagrass from desiccation and harmful ultraviolet radiation (8,11). Herbivores may also eat older leaves with greater cover of nutrient-rich epiphytic algae, which reduces consumption of younger, photosynthetic basal seagrass leaves (8). We found that both acid and control tanks with snails had less algal cover than those without snails (Fig. 3); Frankovich and Zieman also found that snails reduce epiphyte cover (12). However, snail grazing, even in natural seawater, may not reduce epiphyte cover enough to affect the turtle grass, as we found the growth of turtle grass was not lower in control and acid tanks with snails than in tanks without them (Fig. 1).

We conclude that increasing ocean acidity will be detrimental to snails because we found fewer active snails in the acidified tanks, and many died (Fig. 5). The decreased activity of the snails in acidic tanks may be due to a reduced ability to maintain acid-base regulation in their body tissue (13). Snail health may be further threatened by acidification as shells degrade due to the difficulty of creating calcified structures (1,3). While we did not see a change in plant or snail respiration rates between treatments, our methods for measuring respiration were limited, as we could not completely seal the tanks and prevent gas exchange. Notably, Bibby et al. found that rates of oxygen uptake decrease in snails under the stress of both lowered pH and predation (14).

Finally, the addition of acetic acid alone may not accurately simulate the effects of ocean acidification driven by increased CO2 concentrations. For instance, while a higher concentration of CO2 may decrease the pH of seawater, it may also increase photosynthetic rates of seagrass, causing higher reproductive outputs and biomass production (15,16). We suggest that further studies examine the long-term combined effects of increased CO2 concentrations and acidity in seawater.

Fig. 1: Turtle grass (T. testudinum) growth in control tanks of seawater (with and without snails) is significantly higher than in acidified tanks in an ocean acidification simulation experiment at Little Cayman Research Center, BWI (means ± 1 SE).

Fig. 3: Cover of epiphytic calcifying red algae (primarily H. farinosum) on turtle grass (T. testudinum) in tanks of acidified seawater (left) is less than in ambient seawater (right) after two days of the ocean acidification experiment conducted at Little Cayman Research Center, BWI.


Fig. 2: The effect of acidified and ambient seawater on epiphytic calcifying algal cover on turtle grass (T. testudinum) in tanks with and without snails over time (means ± 1 SE). Algal cover decreased in acidified treatments. Algal cover was analyzed visually in an ocean acidification simulation experiment conducted at Little Cayman Research Center, BWI.



Fig. 4: Leaf senescence of turtle grass (T. testudinum) increased in acidified seawater in an ocean acidification experiment conducted at Little Cayman Research Center, BWI (means ± 1 SE).

Fig. 5: Percent of active snails in tanks of acidified seawater and ambient seawater (means ± 1 SE) in an ocean acidification simulation experiment conducted at Little Cayman Island, BWI. Snails were placed in tanks of turtle grass for four days. Snails were determined to be active if they were attached to turtle grass leaves or the walls of the tank.


Acknowledgements

We would like to thank the Little Cayman Research Center staff for their hospitality and help in the laboratory. We would also like to thank Savannah Barry for designing methods to analyze turtle grass growth. Many thanks to Elin Beck and Kelly Aho for sharing the results of their research on turtle grass and epiphyte interactions, as well as to Brad Taylor.

References



1. S. Doney, V. Fabry, R. Feely, J. Kleypas, Annu. Rev. Marine Sci. 1, 169-192 (2009).
2. C. Le Quéré et al., Science 316, 1735-1737 (2007).
3. J. Orr et al., Nature 437, 681-686 (2005).
4. D. Littler, M. Littler, K. Bucher, J. Norris, Marine Plants of the Caribbean (Smithsonian Institution, Washington, D.C., 1989).
5. I. Hendriks, C. Duarte, M. Alvarez, Estuar. Coast. Shelf S. 86, 157-164 (2010).
6. P. Jokiel et al., Coral Reefs 27, 473-483 (2008).
7. I. Kuffner et al., Nat. Geosci. 1, 77-140 (2008).
8. J. Van Montfrans, R. Wetzel, R. Orth, Estuaries 7, 289-309 (1984).
9. H. Corlett, B. Jones, Sediment. Geol. 194, 245-262 (2007).
10. P. Dauby, M. Poulicek, Aquat. Bot. 52, 217-228 (1995).
11. R. Trocine, J. Rice, G. Wells, Plant Physiol. 68, 74-81 (1981).
12. T. Frankovich, J. Zieman, Estuaries 28, 41-52 (2005).
13. H. Portner, M. Langenbuch, A. Reipschlager, J. Oceanogr. 60, 705-718 (2004).
14. R. Bibby et al., Biol. Letters 3, 699-701 (2007).
15. R. Zimmerman, D. Kohrs, D. Steller, R. Alberte, Plant Physiol. 115, 599-607 (1997).
16. S. Palacios, R. Zimmerman, Mar. Ecol. Prog. Ser. 344, 1-13 (2007).



ecology

Effects of Epiphyte Cover on Seagrass Growth Rates in Two Tidal Zones Kelly Aho ‘11 and Elin Beck ‘12

Epiphytic algae are the most important primary producers in seagrass ecosystems; yet, the impact of epiphytic algae on seagrass is not well understood. We experimentally tested the effects of epiphyte cover on the growth rate of turtle grass, Thalassia testudinum, at Little Cayman Island, British West Indies (BWI). We removed epiphytes from seagrass blades to reduce epiphyte cover and excluded grazers to increase epiphyte cover. We compared the effects of these treatments in a shallow water seagrass bed that is exposed to air at low tide and a seagrass bed in deeper water where seagrass is always submerged. Epiphyte cover had a positive effect on seagrass growth in shallow water but had no effect on growth in deeper water. The effect of epiphyte cover on seagrass growth varies with small-scale environmental differences, which has repercussions for understanding the response of seagrass beds to larger-scale disturbances.

Introduction

Epiphytes are extremely important to the ecology of seagrass beds, contributing more production than the grasses on which they grow (1,2). Epiphytes benefit from this relationship by gaining a structure on which to grow and by consuming nutrients that seagrasses leak (2,3,4). However, the net effect of epiphytes on seagrass has not been completely determined.

The negative effect of epiphytes on seagrass growth and production is well known. For instance, epiphytes can be so dense that they impair the photosynthesis of their host seagrass, decreasing growth or increasing mortality (5-8). In undisturbed seagrass beds and coral reefs, herbivory prevents epiphytic algae from outcompeting seagrasses and corals (5,6,8-13). Thus, in systems with natural levels of grazing, epiphytic algal cover does not significantly affect seagrass photosynthesis (1,14). Conversely, epiphyte cover may benefit seagrass by reducing desiccation during low tide or by protecting it from UV radiation (5,15-17). Epiphyte cover may therefore be more beneficial to seagrasses in shallow water, where the probability of desiccation and UV exposure is higher.

We tested the positive and negative effects of epiphytes on growth rates of turtle grass, Thalassia testudinum. We hypothesized that the positive effects of epiphyte cover would outweigh the negative effects in shallow water, whereas the negative effects would outweigh the positive effects in deep water. To test these hypotheses, we compared treatments with ambient epiphyte cover, reduced epiphyte cover (achieved by removing epiphytes), and increased epiphyte cover (achieved by excluding large grazers). The specific predictions are provided in Table 1.

SPRING 2011

Methods

We studied seagrass at Little Cayman Research Center (LCRC), Little Cayman Island, BWI, on March 4-8, 2011. There are two seagrass beds in front of the LCRC: one in the shallows close to shore, which is occasionally exposed to air at low tide, and one farther out, which is always submerged. The deeper seagrass bed is on average 35 cm deeper than the shallower bed and has a more constant, cooler temperature. We used these two habitat types to test the effects of epiphytes on T. testudinum growth.

We manipulated T. testudinum on March 4 and March 6, 2011. Ten plants were chosen for each treatment. The six treatments were: (1) ambient epiphyte cover above the low tide line (4 and 6 March), (2) ambient epiphyte cover below the low tide line (4 and 6 March), (3) epiphyte removal above the low tide line (4 March), (4) epiphyte removal below the low tide line (4 March), (5) grazer exclusion above the low tide line (6 March), and (6) grazer exclusion below the low tide line (6 March). All of the blades of each focal plant were punctured with a pin at their bases to measure growth over the experimental period. We removed epiphytes from every blade of epiphyte-removal plants by gently scraping them with a razorblade; we visually estimated that at least 90% of epiphytes were removed by this method. Grazer exclosures were constructed from GutterGuard™ plastic mesh with 1 cm openings. Each exclosure was a cylinder 10 cm in diameter and 15 cm in height, with a mesh top sewn together with fishing line. Wooden dowels and clothespins were used to hold each exclosure against the sediment.

On March 8, 2011, we retrieved all experimental plants and measured the growth from the base of the blade to the pin scar. Growth rates were measured for the youngest blade of each plant, which is the most central blade (18), as this blade exhibited the most rapid growth and the most detectable differences in growth rate. We analyzed these data with a two-way ANOVA in JMP 8.0, followed by Tukey’s post hoc test to determine significant differences among the three treatment means.
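The same two-way model can be fit outside JMP. The sketch below is a hypothetical Python equivalent using statsmodels with made-up growth values (the numbers and column names are illustrative, not our data): a two-factor ANOVA with an interaction term, followed by Tukey's post hoc comparisons.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Hypothetical growth (mm) for a 2 (tidal zone) x 3 (epiphyte treatment) design.
    df = pd.DataFrame({
        "growth":    [12, 15, 10, 18, 20, 17, 8, 9, 11,
                      13, 12, 14, 12, 13, 11, 12, 14, 13],
        "treatment": ["ambient", "ambient", "ambient",
                      "exclusion", "exclusion", "exclusion",
                      "removal", "removal", "removal"] * 2,
        "zone":      ["shallow"] * 9 + ["deep"] * 9,
    })

    # Two-way ANOVA with interaction, analogous to the JMP analysis.
    model = ols("growth ~ C(treatment) * C(zone)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))

    # Tukey's post hoc comparisons among treatments within the shallow zone.
    shallow = df[df["zone"] == "shallow"]
    print(pairwise_tukeyhsd(shallow["growth"], shallow["treatment"]))

A significant treatment-by-zone interaction in the ANOVA table is what licenses interpreting the treatment effect separately within each tidal zone, as in the Results below.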

Results

Above the low tide line, T. testudinum with more epiphytes grew 25% faster than T. testudinum with ambient cover of epiphytes and 67% faster than T. testudinum with epiphytes experimentally removed (epiphyte treatment effect: F2,35 = 3.86, p = 0.030; Tukey’s post hoc test p < 0.05; Fig. 1). In contrast, below the low tide line there was no effect of epiphyte cover on T. testudinum growth rate (tide level effect: F2,33 = 0.30, p = 0.75; Fig. 1). The effect of epiphyte cover on T. testudinum growth rate depended on water depth (epiphyte treatment × tide level interaction: F4,69 = 3.53, p = 0.035; Fig. 1). This interaction was driven by the difference in the effect of grazer exclusion between the two tidal zones.


Fig. 1: T. testudinum growth rates ± 1 SE in two tidal zones and three epiphyte treatments at Little Cayman Island. Different letters indicate significant differences based on Tukey’s post hoc test (p < 0.05).


Discussion

Above the low tide line, T. testudinum grew faster with greater epiphyte cover and slower when epiphytes were experimentally removed. This is consistent with our hypothesis that epiphytes increase growth rate in shallow zones because they protect seagrass from desiccation or UV radiation above the low tide level. The removal of epiphytes may diminish the protection they provide T. testudinum, stressing the seagrass and reducing growth rates. In contrast, grazer exclusion allowed epiphytes to accumulate above ambient levels, and the fastest growth rates in this treatment are consistent with our hypothesis that epiphytes can be beneficial.

The effect of epiphytes on T. testudinum growth varied with tidal zone. Below the low tide line, in deeper water, there was no relationship between epiphyte load and T. testudinum growth rate. Our results show that the effect of epiphyte cover is less important to T. testudinum growth below the low tide line, where other factors such as T. testudinum density may be more important. The effect of grazer exclusion was greater in deep water than in shallow water. Perhaps grazing pressure is greater in shallow water, so excluding herbivores there led to more epiphyte accumulation than in deep water, leading to a greater increase in protection from desiccation or UV radiation. Alternatively, the shading effect of increased epiphyte cover may have more impact in deep water, where light is more limited. If this is true, the greater negative effect of increased epiphyte cover could have balanced any positive effects, such as protection from UV radiation.

The effect of epiphytes on T. testudinum growth rates differs in magnitude, and perhaps even direction, under different environmental conditions. The threats of desiccation and UV radiation are not as pronounced in deeper water, and therefore epiphyte cover may not benefit seagrasses such as T. testudinum below the low tide line as strongly as it benefits seagrasses in shallower water. The shading of seagrass leaves by epiphytes may be more important in more light-limited deeper waters than in the shallows; however, we did not detect this effect on growth rate.

The relative effects of epiphytes (i.e., positive or negative) on seagrasses have also been shown to vary on a larger spatial scale. In systems that have undergone eutrophication, shading by a dense mat of epiphytes may increase seagrass mortality and lead to the collapse of seagrass beds (6,8). However, shading does not seem to threaten seagrass beds in undisturbed conditions (1,14). Integrating such large-scale research with examinations of the effects of epiphyte cover on smaller, within-site scales could further our understanding of how a seagrass bed responds to changes in nutrient availability.

Furthermore, the effect of epiphyte cover varies even within a single seagrass plant. Constant recruitment of epiphytes to seagrass blades means that older blades have a higher epiphyte load than younger ones (19). The accumulation of epiphytes on older blades decreases the amount of light reaching the blade and changes the wavelength of that light (7,19), and thus may contribute to the rapid senescence of older leaves (8,14). Moreover, if epiphyte-covered leaves are preferred by grazers, then accumulation on older leaves may increase survivorship of seagrass because it diverts grazing pressure away from new growth (5). Future studies could compare growth rates of blades of different ages under epiphyte manipulations to better understand the effects of epiphytes on the life history of seagrass.

Acknowledgements

We would like to thank the staff at Little Cayman Research Center for graciously hosting us and allowing us to conduct our study. We would like to thank Brad Taylor for his suggestions and help in this work.

References

1. L. Mazzella, R. Alberte, J. Exp. Mar. Bio. Eco. 100, 165-180 (1986).
2. C. Moncreiff, M. Sullivan, Mar. Ecol. Prog. Ser. 215, 93-106 (2001).
3. C. McRoy, J. Goering, Nature 248, 173-174 (1974).
4. M. Harlin, Aquat. Bot. 1, 125-131 (1975).
5. J. van Montfrans, R. Wetzel, R. Orth, Estuaries 7, 289-309 (1984).
6. R. Hughes, K. Bando, L. Rodriguez, S. Williams, Mar. Ecol. Prog. Ser. 282, 87-99 (2004).
7. J. Cebrián et al., Bot. Mar. 42, 123-128 (1999).
8. C. Fong, S. Lee, R. Wu, Aquat. Bot. 67, 251-261 (2000).
9. F. Tomas, X. Turon, J. Romero, Mar. Ecol. Prog. Ser. 287, 115-125 (2005).
10. P. Fong, T. Smith, M. Wartian, Ecology 87, 1162-1168 (2006).
11. D. B. Rasher, M. E. Hay, Proc. Natl. Acad. Sci. Early Edition (2010).
12. R. Howard, F. Short, Aquat. Bot. 24, 287-302 (1986).
13. M. Hootsmans, J. Vermaat, Aquat. Bot. 22, 83-88 (1985).
14. D. Bulthuis, W. Woelkerling, Aquat. Bot. 16, 137-148 (1983).
15. P. Penhale, W. Smith, Oceanogr. 22, 400-407 (1977).
16. F. D. Richardson, Rhodora 82, 403-439 (1980).
17. R. Trocine, J. Rice, G. Wells, Plant Physiol. 68, 74-81 (1981).
18. P. A. Cox, P. B. Tomlinson, Am. J. Bot. 75, 958-965 (1988).
19. K. Sand-Jensen, Aquat. Bot. 3, 55-63 (1977).



Article Submission

DUJS

What are we looking for?

The DUJS is open to all types of submissions. We examine each article to see what it potentially contributes to the Journal and our goals. Our aim is to attract an audience diverse in both its scientific background and interest. To this end, articles generally fall into one of the following categories:

Research

This type of article parallels those found in professional journals. An abstract is expected in addition to clearly defined sections of problem statement, experiment, data analysis, and concluding remarks. The intended audience can be expected to have an interest in and general knowledge of that particular discipline.

Review

A review article is typically geared toward a more general audience and explores an area of scientific study (e.g., methods of cloning sheep, a summary of options for the Grand Unified Theory). It does not require any personal experimentation by the author. A good example could be a research paper written for a class.

Features (Reflection/Letter/Essay or Editorial)

Such an article may resemble a popular science article or an editorial, examining the interplay between science and society. These articles are aimed at a general audience and should include explanations of concepts that a basic science background may not provide.

Guidelines:

1. The length of the article must be 3,000 words or less.

2. If it is a review or a research paper, the article must be validated by a member of the faculty. This statement can be sent via email to the DUJS account.
3. Any co-authors of the paper must approve of the submission to the DUJS. It is your responsibility to contact the co-authors.
4. Any references and citations used must follow the Science Magazine format.
5. If you have chemical structures in your article, please take note of the American Chemical Society (ACS) specifications for the diagrams.

For more examples of these details and specifications, please see our website: http://dujs.dartmouth.edu
For information on citing and references, please see: http://dujs.dartmouth.edu/dujs-styleguide
Specifically, please see Science Magazine's website on references: http://www.sciencemag.org/feature/contribinfo/prep/res/refs.shtml
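As a quick illustration of the Science reference format named in item 4 (a made-up entry; the authors, journal abbreviation, volume, and pages are placeholders), a journal citation follows this pattern:

    1. A. B. Author, C. D. Author, J. Abbrev. 12, 345-349 (2011).

The reference lists accompanying the research articles in this issue follow the same pattern.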

SPRING 2011

45


DUJS Submission Form

Statement from student submitting the article:

Name: __________________

Year: ______

Faculty Advisor: _____________________
E-mail: __________________ Phone: __________________
Department the research was performed in: __________________
Title of the submitted article: ______________________________
Length of the article: ____________
Program which funded/supported the research (please check the appropriate line):

__ The Women in Science Program (WISP)

__ Presidential Scholar

__ Dartmouth Class (e.g. Chem 63) - please list class: ______________________

__ Thesis Research

__ Other (please specify): ______________________

Statement from the Faculty Advisor:

Student: ________________________ Article title: _________________________

I give permission for this article to be published in the Dartmouth Undergraduate Journal of Science:

Signature: _____________________________ Date: ______________________________

Note: The Dartmouth Undergraduate Journal of Science is copyrighted, and articles cannot be reproduced without the permission of the journal.

Please answer the following questions about the article in question. When you are finished, send this form to HB 6225 or blitz it to “DUJS.”

1. Please comment on the quality of the research presented:

2. Please comment on the quality of the product:

3. Please check the most appropriate choice, based on your overall opinion of the submission:

__ I strongly endorse this article for publication
__ I endorse this article for publication
__ I neither endorse nor oppose the publication of this article
__ I oppose the publication of this article



Write


Edit

Submit

Design


