COVER IMAGE
SUBMISSIONS
ABOUT US
Image courtesy of NASA.
The DUJS welcomes submissions from all Dartmouth undergraduates. Please see dujs.dartmouth.edu for more information on the submission process. Letters to the Editor and requests for correction may be e-mailed to the DUJS or sent to:
The DUJS prints quarterly journals that include science news and review articles, along with research by undergraduates. Weekly Dartmouth Science News articles are also posted to DUJS Online.
A photograph of black hole candidate Cygnus X-1 taken by the Hubble telescope. Adjustments by Steven Chen ’15.
DUJS HB 6225 Dartmouth College Hanover, NH 03755
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE Dartmouth College Hinman Box 6225 Hanover, NH 03755 USA http://dujs.dartmouth.edu DUJS@Dartmouth.EDU
The Neukom Institute for Computational Science Co-sponsor
Note from the Editorial Board
The Dartmouth Undergraduate Journal of Science aims to increase scientific awareness within the Dartmouth community by providing an interdisciplinary forum for sharing undergraduate research and enriching scientific knowledge.

EDITORIAL BOARD
President: Andrew Zureick ’13
Editor-in-Chief: Daniel Lee ’13
Dear Reader,

By its nature, science is a study of the extreme. In pushing intellectual boundaries, science demands we seek an understanding of the unusual and the unknown. Consequently, some of the most dynamic and exciting subjects in science today concern extreme things. Thus, the theme of this issue of the Dartmouth Undergraduate Journal of Science is “Extreme Science.”

In tackling such a broad subject, the articles in this issue are especially diverse. We discuss topics including graphene, allergies, the cosmic microwave background, and natural antifreeze. This term, our faculty spotlight is an interview with Solomon Diamond, Assistant Professor of Engineering at the Thayer School of Engineering. Diamond is developing new, dynamic tools for studying neurobiology.

We feature three submissions in this issue. The first is an original research paper studying the effect of extra-organismal caffeine on the neuromuscular synapses in crayfish. The second is an editorial, which discusses the intellectual authority of science in society. Lastly, we have included the winner of the “Science Says” essay contest, organized by the Neukom Institute for Computational Science at Dartmouth College. Leonardo Motta discusses computational methods used to understand Einstein’s theories of gravity. We would like to thank the Neukom Institute for their generous support and co-sponsorship of this issue of the DUJS.

We would like to end this note by informing our readership of two exciting changes happening at the DUJS. First, we have begun distributing our print journal at major research universities around the country. We are excited to increase our readership and look forward to incorporating more undergraduate research submissions from these schools. While we have had a wide distribution base in the past, this is the first time we have targeted other institutions.
Second, through partnering with the Dartmouth Admissions Office, we have launched a national writing contest for high school students. The first-place winner will be featured in our Fall 2012 issue. We hope that this contest will promote interest in the sciences among high school students. More information can be found on our website. Thank you for reading the DUJS, and we hope you enjoy this issue. Sincerely, The DUJS Editorial Board
Managing Editors: Yoo Jung Kim ’14, Derek Racine ’14, Andrew Foley ’15
Layout & Design Editor: Steven Chen ’15
Assistant Managing Editors: Scott Gladstone ’15, Aaron Koenig ’14
Online Content Editor: Brendan Wang ’15
Public Relations Officer: Riley Ennis ’15
Secretary: Emily Stronski ’13

DESIGN STAFF
Derek Racine ’14
Rebecca Xu ’15

STAFF WRITERS
Shaun Akhtar ’12
Suyash Bulchandani ’15
Pranam Chatterjee ’15
Annie Chen ’13
Andrew Foley ’15
Scott Gladstone ’15
Thomas Hauch ’13
Betty Huang ’14
Yoo Jung Kim ’14
Aaron Koenig ’14
Sarah Morse ’15
Timothy Pang ’13
Derek Racine ’14
Sara Remsen ’12
Rui Shu ’15
Emily Stronski ’13
Brendan Wang ’15
Rebecca Xu ’15

FACULTY ADVISORS
Alex Barnett - Mathematics
William Lotko - Engineering
Marcelo Gleiser - Physics/Astronomy
Gordon Gribble - Chemistry
Carey Heckman - Philosophy
Richard Kremer - History
Roger Sloboda - Biology
Leslie Sonder - Earth Sciences
David Kotz - Computer Science

SPECIAL THANKS
Dean of Faculty
Associate Dean of Sciences
Thayer School of Engineering
Provost’s Office
R.C. Brayshaw & Company
Private Donations
The Hewlett Presidential Venture Fund
Women in Science Project

DUJS@Dartmouth.EDU
Dartmouth College
Hinman Box 6225
Hanover, NH 03755
(603) 646-9894
http://dujs.dartmouth.edu

Copyright © 2012 The Trustees of Dartmouth College
In this Issue

DUJS Science News - Andrew Foley ’15 and Scott Gladstone ’15
Graphene and Its Applications - Scott Gladstone ’15
Interview with Solomon Diamond, Assistant Professor of Engineering - Annie Chen ’13 (p. 9)
Exploring the Cosmic Microwave Background - Shaun Akhtar ’12 (p. 12)
Dissecting the Dartmouth Liver - Derek Racine ’14 (p. 15)
Robots: The Technology of the Future is Here - Rebecca Xu ’15 (p. 18)
Explosive Chemistry - Rui Shu ’15 (p. 21)
Visit us online at dujs.dartmouth.edu
Allergies: Immune System Turned Extreme - Yoo Jung Kim ’14 (p. 24)
Extremely Fun Animal Facts - Sara Remsen ’15 (p. 26)
Sequencing the Microbial World - Aaron Koenig ’15
Delegitimizing Scientific Authority - Patrick Yukman ’14
Effects of Extra-Organismal Caffeine on Crayfish Neuromuscular Synapses - Tara Kedia ’12 (p. 35)
Coding Einstein’s Legacy - Leonardo Motta (p. 38)
NEWS
DUJS Science News
For the latest news, visit dujs.dartmouth.edu
COMPILED BY ANDREW FOLEY AND SCOTT GLADSTONE
Building a Culture of Innovation

David Kelley, the founder of the global design firm IDEO, spoke this spring at the Thayer School of Engineering about building a culture of innovation in higher education. Kelley’s company focuses on user-centered design for products, services, and environments. In 1980, Kelley was on the IDEO team that created the computer mouse for Apple. In his talk, Kelley spoke about teaching a methodology to unleash creativity. He explained his four-part design thinking methodology, which combines human-centered design, a culture of prototypes, radical collaboration, and storytelling. Design thinkers should have empathy for the people who use their devices and services, so that they can identify a need and respond to it. According to Kelley, creating prototypes is a crucial step in achieving this empathy and replicating the true experience of having a product. Kelley also explained his idea of radical collaboration, emphasizing the importance of team diversity: “None of us is smarter than all of us.” Kelley also commented on general applications of a design thinking mindset, noting that a design thinker is now on the
Image courtesy of the NIH
Bovine pulmonary artery endothelial cells under the microscope. Nuclei are stained blue with DAPI, microtubules are marked green by an antibody bound to FITC, and actin filaments are labelled red with phalloidin bound to TRITC.
prerequisite skill list for start-ups. Kelley hopes to change the way we think about creativity and points out that we can easily apply design-thinking techniques in our personal lives. “As a student, it’s your job to build a passion for something,” Kelley advised. “The only way to build passion is to go out into the world and experience a lot of stuff.” Kelley emphasized understanding people and creating designs for the user, not the designer. Everything should be an experiment, he said, and the experimentation has to be transparent and, at the end of the day, fun.
Hybrid Ultrasound and Fluorescent Imaging System Used for Cancer Tumors

Surgical procedures are used to remove cancerous tumors from the body, but are less invasive procedures possible? Sason Torosean, a researcher at Dartmouth College, attempted to answer this question. Torosean, along with other researchers, developed a new ultrasound and fluorescence hybrid system that can be used to monitor drug delivery and distribution in tumor cells. “We want to understand drug therapy,” Torosean says, “because we don’t think the drugs always reach the tumors.” Although drug treatments are used to treat tumors, it is uncertain whether a drug effectively reaches its target tissue. However, ultrasound and fluorescent imaging can be used to monitor the activity of nanoparticles in tumor cells. “We used ultrasound to find where the tumor is,” Torosean explained. Using mice as experimental models, Torosean injected fluorescent nanoparticles carrying the drug ALA into induced tumors in the mice. Tumor cells convert ALA into PpIX, a compound that fluoresces under laser excitation and produces a reaction toxic to tumor cells. This fluorescence allowed imaging of the distribution of ALA molecules. Fluorescent
wavelengths were measured from samples of both normal and tumor tissues and then compared. The normalized measurements indicated how many nanoparticles were present in tumor tissues, based on their absorbance levels. Torosean found a linear relationship between the absorbance of light and the concentration of PpIX. “The project has some definite clinical applications,” says Torosean. By locating tumors and measuring drug concentration using a novel combination of ultrasound and fluorescent imaging, a safer and more efficient alternative to surgery may soon be introduced.
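The linear absorbance-concentration relationship reported here is an instance of the Beer-Lambert law, A = ε·l·c. A minimal sketch of how such a calibration can be used to infer concentration from a new absorbance reading (every number below is invented for illustration; none are from the study):

```python
# Illustrative Beer-Lambert calibration: fit A = k * c (k folds together the
# extinction coefficient and path length), then invert it for a new reading.
# All values are made up for the sketch, not Torosean's data.

def fit_slope_through_origin(concentrations, absorbances):
    """Least-squares slope for a line through the origin, A = k * c."""
    num = sum(c * a for c, a in zip(concentrations, absorbances))
    den = sum(c * c for c in concentrations)
    return num / den

# Hypothetical calibration samples: concentration (arbitrary units) vs. absorbance
conc = [0.5, 1.0, 2.0, 4.0]
absorb = [0.11, 0.20, 0.41, 0.80]

k = fit_slope_through_origin(conc, absorb)   # effective epsilon * path length
unknown_concentration = 0.30 / k             # invert A = k * c for a new reading
```

A line forced through the origin is used because zero fluorophore should give zero absorbance, which is the "direct correlation" the article describes.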
Health & Medicine
The Cost of Increased Transplantable Organ Screening

In a recent paper titled “A Consolidated Biovigilance System for Blood, Tissue and Organs: One Size Does Not Fit All,” published in the American Journal of Transplantation, researchers at the Geisel School of Medicine compared the various systems used to track diseases in donated blood, tissue, and organs. The researchers argued that viable organs should not be strictly screened for diseases, as the benefit of receiving a diseased organ may outweigh the costs of contracting such a disease. In recent years, fewer than 20 percent of patients on organ transplant waitlists actually received a transplant. Furthermore, nearly 10 percent of those on the waitlists became too severely ill or died before they were able to receive an organ. Unfortunately, disease is sometimes spread through transplanted organs. Since it is impractical to stop organ transplants altogether, proper disease screening is of utmost importance. While there is enough non-diseased blood and tissue available to meet demand, the same is not true of organs. As such, there is an overwhelming unmet need for transplantable organs. Therefore, the authors of the article
Serotonin Found to Excite Some Cortical Neurons
Image courtesy of CERN
An example of simulated data modelled for the CMS particle detector on the Large Hadron Collider (LHC) at CERN. Here, following a collision of two protons, a Higgs boson is produced which decays into two jets of hadrons and two electrons. The lines represent the possible paths of particles produced by the proton-proton collision in the detector, while the energy these particles deposit is shown in blue.
concluded that patients should embrace the fact that some disease will be spread through organ transplants. They suggest that a comprehensive biovigilance system should be established for organ transplantation, with the disease screening procedures for this system to differ from those for blood and tissue samples.
Update from the Large Hadron Collider: Closing in on the Higgs Boson

Professor John Butler of Boston University briefed the Dartmouth Physics Department this spring on the state of the Large Hadron Collider (LHC). The LHC is currently involved in the hunt for the Higgs boson, a theoretical particle that would resolve several gaps in the Standard Model. In particular, theorists predict that the W and Z bosons, which unify the electromagnetic and weak forces, must receive their mass from phenomena associated with the undetected Higgs boson.
Butler presented results from ATLAS experimentation that suggest the presence of a Higgs boson signature in a channel corresponding to its decay into gamma rays at 126 GeV. He acknowledged difficulties in attempting to locate the Higgs boson due to an excess of background interactions caused by more than one billion proton collisions per second in the LHC. Results from 2011 suggested that the probability of background noise in the 110-150 GeV range fluctuating to the level observed in the 126 GeV channel was roughly seven percent, a first clue that the Higgs boson may actually exist. Butler predicted a banner year for the LHC in 2012. The LHC is in the process of increasing the energy of its proton beams to a final collision energy of 14 TeV. Many eyes will be trained on data from the LHC in 2012. Over 3,000 collaborators, including roughly 1,000 graduate students, are currently working on the ATLAS project, which is only one of six detectors on the LHC circuit. Butler, who called the collaboration a “United Nations of Science,” expects the LHC to continue to both pose and answer fundamental questions in physics in the near future.
In a recent publication, Daniel Avesar and Allan T. Gulledge of the Gulledge lab in the Department of Physiology and Neurobiology at the Geisel School of Medicine described their discovery of a subset of cortical neurons that are excited, rather than inhibited, by serotonin. These neurons were found in layer 5 of the cortex, the layer responsible for the bulk of cortical output. Serotonin is currently the target of much research because of its ties to diseases such as depression, schizophrenia, eating disorders, and Parkinson’s. Serotonin is also often cited as the neurotransmitter responsible for feelings of happiness. The researchers found that while 84 percent of the 172 neurons tested were inhibited, 14 percent were either excited alone or excited as part of a biphasic response (an initial inhibitory response followed by a longer excitatory response). Avesar and Gulledge were able to draw close parallels between the morphology of the neurons they found to be excited in some form by serotonin and the morphology of callosal/commissural (COM) neurons. They found that labeled COM neurons exhibited the same responses as the unspecified neurons in their initial experiment, thus showing that it was COM neurons that were excited by serotonin. Avesar and Gulledge were also able to demonstrate that excitation in COM neurons was independent of changes in fast synaptic transmission. The discovery of a subpopulation of cortical neurons excited, rather than inhibited, by serotonin suggests a new framework with which to examine certain psychoses. This finding opens the door to treatments that target disease at the cellular, rather than the molecular, level.
MATERIALS
Graphene and Its Applications The Miracle Material of the 21st Century SCOTT GLADSTONE
Image courtesy of AlexanderAIUS retrieved from http://en.wikipedia.org/wiki/File:Graphen.jpg (accessed 10 May 2012)
Figure 1: Monolayer model of sp2-hybridization of carbon atoms in graphene. The ideal crystalline structure of graphene is a hexagonal grid.
Hailed a “rapidly rising star on the horizon of materials science,” graphene holds the potential to overhaul the current standards of technological and scientific efficiency and usher in a new era of flexible, widely applicable materials science. Graphene is the name given to the monolayer, honeycomb lattice of carbon atoms (1). The two-dimensional carbon structure is characterized by sp2-hybridization, yielding a continuous series of hexagons, as represented in Fig. 1 (2). Until its discovery in 2004, graphene had been hiding in plain sight, tucked away as one of millions of layers forming the graphite commonly found in the “lead” of pencils. A team of researchers from the University of Manchester was the first to demonstrate that single layers of graphene could be isolated from graphite, an accomplishment for which team members Andre Geim and Konstantin Novoselov were awarded the Nobel Prize in Physics in 2010 (3). Since then, the field of graphene research has exploded, with over 200 companies involved in research and more than 3,000 papers published in 2010 alone (4). Many proclaim graphene the 21st century’s “miracle material,” as it possesses powerful properties that other compounds do not: immense physical strength and flexibility,
unparalleled super-conducting capabilities, and a diverse range of academic and mainstream applications.
Physical Attributes Graphene boasts a one-atom-thick, two-dimensional structure, making it the thinnest material in the known universe (2). A single layer of graphene is so thin that it would require three million sheets stacked on top of one another to make a pile just one millimeter high (4). In fact, graphene is so thin that the scientific community has long debated whether its independent existence is even possible. More than 70 years ago, the band structure of graphite was discovered, revealing to the scientific community that graphite was composed of closely packed monolayers of graphene held together by weak intermolecular forces. However, scientists at the time argued that two-dimensional structures, like that of graphene, were thermodynamically unstable and thus could exist only as a part of three-dimensional atomic crystals (1). This belief was well established and widely accepted until the experimental discovery of graphene and the subsequent isolation of other freestanding two-dimensional crystals in 2004 (1). With its very discovery, graphene began to push the limits of traditional materials science.
Conventional wisdom dictates that “thin implies weak,” and most would agree that it is more difficult to break through a brick wall than a sheet of paper. Yet graphene defies expectations. According to mechanical engineering professor and graphene researcher James Hone of Columbia University, “Our research establishes graphene as the strongest material ever measured, some 200 times stronger than structural steel” (3). Recent research has also shown that it is several times tougher than diamond and suggests that it would take “an elephant balanced on a pencil” to break through a sheet of graphene the thickness of a piece of plastic wrap (4). The enormous strength of graphene is attributed to both the powerful atomic bonds between carbon atoms in the two-dimensional plane and the high flexibility of those bonds, which allows a sheet of graphene to be stretched by up to 20% of its equilibrium size without sustaining any damage (4). With the development of a new “wonder material” with properties like those of graphene, one might expect exorbitant prices and relative inaccessibility for mainstream applications. However, one of graphene’s most exciting features is its cost. Graphene is made by chemically processing graphite, the same inexpensive material
that composes the “lead” in pencils (3). Every few months, researchers develop new, cheaper methods of mass-producing graphene and experts predict prices to eventually reach as low as $7 per pound for the material (4). The thinnest, strongest material in the universe may be closer to commercial applications than initially imagined.
Conductivity

Graphene’s record-setting properties also extend to thermal and electrical conduction. A team of researchers led by Michael Fuhrer of the University of Maryland’s Center for Nanophysics and Advanced Materials recently performed the first measurements of the effect of thermal vibrations on the conduction of electrons in graphene (5). All materials are characterized by an intrinsic property known as electrical resistance, which results from the vibrations of atoms at any temperature above absolute zero. When the atoms vibrate in place, they impede the flow of electrons through the material. The only way to eliminate the vibrations is to reduce the temperature of a substance to absolute zero, a practical impossibility (5). Fuhrer’s research showed that thermal vibrations have an extraordinarily small effect on the electrons in graphene, yielding a resistivity about 35% lower than that of copper. Fuhrer attributes this difference, in part, to the fact that graphene has far fewer electrons than copper, so electrical current in graphene is carried by a few electrons moving much faster than the electrons in copper (5). Before the discovery of graphene, copper was thought to be the material with the lowest resistivity at room temperature, which is why the overwhelming majority of electrical wiring is made of copper. This strongly implies a practical use for graphene in high-frequency electrical systems in which copper limits overall performance. Graphene’s powerful conducting ability also makes it an ideal candidate for the next generation of semiconductor devices. Moore’s Law states that the number of transistors that can fit on a single processing chip doubles approximately every 18 months, which translates to faster, more advanced devices that rely on high-speed transfer of electric charge, such as computers and televisions (6).
Graphene’s flexibility allows the single flat carbon sheets to be “rolled” into
semiconducting carbon nanotubes (see Fig. 2). Recent research shows that graphene-based nanotubes have the highest levels of mobility, a measure used to quantify how fast electrons, and thus electric current, move. The limit to the mobility of electrons in graphene is about 200,000 cm2/Vs at room temperature, compared to about 1,400 cm2/Vs in silicon, a staple of computer processing chips, and 77,000 cm2/Vs in indium antimonide, the highest-mobility conventional semiconductor known (5). The practical impact of this result is well stated in a review of semiconductor research: “Mobility determines the speed at which an electronic device (for instance, a field-effect transistor, which forms the basis of modern computer chips) can turn on and off. The very high mobility makes graphene promising for applications in which transistors must switch extremely fast, such as in processing extremely high frequency signals” (5). Graphene’s conductive capabilities are also being utilized in the development of high-power-efficiency capacitors. Electrochemical capacitors, commonly known as supercapacitors or ultracapacitors, differ from the capacitors normally found in electronic devices in that they store substantially higher amounts of charge (7). These capacitors have recently gained attention because they can charge and discharge energy faster than batteries; however, they are limited by low energy densities where batteries are not (7). Therefore, an electrochemical capacitor that could combine the high energy density of a battery with the power performance
of a capacitor would be a significant advance in modern technology (Fig. 3). While this ideal capacitor is not yet within reach, researchers at UCLA have produced capacitor electrodes composed of expanded networks of graphene that allow the electrodes to maintain high conductivity while providing highly accessible surface area (7). Further developments on this technology could lead to innovations such as credit cards with more processing power than current smartphones and computers (3).
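To put the mobility figures quoted earlier in perspective: at a given electric field E, electron drift velocity scales as v = μE, so mobility ratios translate directly into speed ratios. A quick illustrative comparison (the field strength below is an assumed value, not taken from the cited research):

```python
# Rough comparison of electron drift velocities (v = mu * E) for the
# mobilities quoted in the article. The field strength is an assumed,
# illustrative value chosen only to make the ratios concrete.

MOBILITIES_CM2_PER_VS = {
    "graphene (room-temp limit)": 200_000,
    "indium antimonide": 77_000,
    "silicon": 1_400,
}

E_FIELD_V_PER_CM = 100  # assumed applied field, V/cm

for material, mu in MOBILITIES_CM2_PER_VS.items():
    v_cm_per_s = mu * E_FIELD_V_PER_CM  # drift velocity, cm/s
    print(f"{material}: {v_cm_per_s:.2e} cm/s")

# Ratio implied by the article's numbers: graphene vs. silicon
speedup = (MOBILITIES_CM2_PER_VS["graphene (room-temp limit)"]
           / MOBILITIES_CM2_PER_VS["silicon"])
```

Because the assumed field cancels out of the ratio, the roughly 140x mobility advantage over silicon holds regardless of the field value chosen.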
Diverse Applications

The creation of the first man-made plastic, Bakelite, in 1907 allowed for inventions like the plastic bag, PVC pipe, and plexiglass, which many now take for granted in daily life. Dr. Sue Mossman, curator of materials at the Science Museum in London, notes that graphene closely parallels Bakelite, saying: “Bakelite was the material of its time. Is [graphene] the material of our time?” (4). Dr. Mossman is one of many to compare graphene to plastics, citing the variety of applications and diversity of use as the strongest ties between the materials. Graphene has the potential to transform many different fields, covering a broad range of subject matter that encompasses everything from computational development to water purification. Current touch screen technology could see a massive overhaul with the introduction of graphene-based innovations. Modern touch-sensitive screens use indium tin oxide, a substance that is transparent but
Image courtesy of Arnero retrieved from http://en.wikipedia.org/wiki/File:Carbon_nanotube_zigzag_povray.PNG (accessed 10 May 2012)
Figure 2: Model of a carbon nanotube, formed by “rolling” a sheet of graphene into a cylinder.
that can ride any terrain and never break, or batteries with lifetimes ten times as long as current models as practical applications for graphene technology. The potential is there, but it is now up to those on the forefront of materials science to leverage these discoveries in the progression of mankind.

CONTACT SCOTT GLADSTONE AT SCOTT.W.GLADSTONE.15@DARTMOUTH.EDU

References
Image courtesy of Stan Zurek from Maxwell Technologies retrieved from http://en.wikipedia.org/wiki/File:Supercapacitors_chart.svg (accessed 10 May 2012)
Figure 3: Comparison of energy density and power output in batteries and capacitors.
carries electrical currents. However, indium tin oxide is expensive and, as some iPhone and other touch-screen gadget users have experienced firsthand, is likely to shatter or crack upon impact (4). Replacing indium tin oxide with graphene-based compounds could allow for flexible, paper-thin computer and television screens. One researcher proposes the following scenario: “Imagine reading your Daily Mail on a sheet of electronic paper. Tapping a button on the corner could instantly update the contents or move to the next page. Once you’ve finished reading the paper, it could be folded up and used afresh tomorrow” (4). Samsung has been one of the biggest investors in graphene research and has already developed a 25-inch flexible touch screen that uses graphene. Companies like IBM and Nokia have followed suit. IBM recently created a 150-gigahertz (GHz) transistor; in comparison, the fastest comparable silicon device runs at about 40 GHz (3). Even though graphene-based technology is beginning to emerge, scientists are faced with their fair share of problems. One of the biggest issues for graphene researchers is the fact that graphene has no “band gap,” meaning that its conductive ability cannot be switched on and off like that of silicon (2). For now, silicon and graphene operate in different domains, but as Nobel Prize winner Professor Geim states, “It is a dream” (3). There is good reason to believe that graphene research will be well worth the struggle. Most recently, researchers at the University of Manchester showed that
graphene is impermeable to everything but water. It is the perfect water filter. In an experiment, the researchers filled a metal container with a variety of liquids and gases and then covered it with a film of graphene oxide. Their most sensitive equipment was unable to register any molecules leaving the container except water vapor; even helium gas, a molecule that is particularly small and notoriously tricky to work with, was kept at bay (8). Dr. Rahul Nair, leader of this research project, attributes this ability to the fact that “graphene oxide sheets arrange in such a way that between them there is room for exactly one layer of water molecules. If another atom or molecule tries the same trick, it finds that graphene capillaries either shrink in low humidity or get clogged with water molecules” (8). It is hard to overstate the importance of graphene oxide’s potential as an ideal filter, as it could quickly and inexpensively replenish rapidly decreasing clean water supplies. More powerful than a steel beam, tougher than a diamond, a better conductor than copper, and the best water filter possible: these are but a few of what Nobel Prize winners Geim and Novoselov claim to be a “cornucopia of new physics and potential applications” of graphene (1). The potential uses of graphene are innumerable and run the gamut from supercomputers that process at over 300 GHz to super-distilled vodka with zero percent water. Some have gone so far as to suggest iPhones that users can roll up and tuck behind their ears like a pencil, car tires
1. A. K. Geim, K. S. Novoselov, The rise of graphene. Nature Materials 6, 183-191 (2007).
2. M. J. Allen, V. C. Tung, R. B. Kaner, Honeycomb carbon: a review of graphene. Chem. Rev. 110, 132-145 (2010).
3. A. Hudson, Is graphene a miracle material? BBC News (2011). Available at http://news.bbc.co.uk/2/hi/programmes/click_online/9491789.stm (May 2011).
4. D. Derbyshire, The wonder stuff that could change the world: graphene is so strong a sheet of it as thin as clingfilm could support an elephant. Daily Mail, Science & Tech (2011). Available at http://www.dailymail.co.uk/sciencetech/article-2045825/Graphene-strong-sheet-clingfilm-support-elephant.html (October 2011).
5. Graphene: the best electrical conductor known to man. AZoM (2008). Available at http://www.azom.com/news.aspx?newsID=11679 (March 2008).
6. G. E. Moore, Cramming more components onto integrated circuits. Electronics 38, 114-117 (1965).
7. Graphene capacitors to increase power efficiency. Times of India (2012). Available at http://articles.timesofindia.indiatimes.com/2012-03-20/infrastructure/31214537_1_graphene-capacitors-electrodes (March 2012).
8. S. Anthony, Graphene: the perfect water filter. ExtremeTech (2012). Available at http://www.extremetech.com/extreme/115909-graphene-the-perfect-water-filter (January 2012).
FACULTY SPOTLIGHT
Interview with Solomon Diamond, Assistant Professor of Engineering, Thayer School of Engineering
ANNIE CHEN
Tell us about your lab and research.

I call my workspace the multimodal neuroimaging lab: multimodal in that we use multiple modalities of studying human brain function. One example is EEG; in this lab specifically we use EEG caps, with which electrodes are placed directly on the scalp to measure the difference in potential from one location to another and over time. We have a standard cap and one that we have built ourselves. Each of the caps has an elastic web where we attach optodes, which carry near-infrared light to the head. That light is delivered into the head through the scalp, and we collect the light that comes back out again. Based on changes in the absorption at one wavelength versus another, we can calculate the oxygenation level of the brain. This method also allows us to track the oxygenation changes at the same time as the neurological changes. These measurements are important in certain diseases and in the study of brain function. For example, ischemic blockage of blood flow or a bleed in the brain from hemorrhagic stroke causes the loss of neurons and brain function in specific regions. In this case, disrupted blood circulation is closely tied to disrupted neuron function. We are currently looking at stroke subjects in an attempt to understand the relationship between neuron function and vascular function in the brain during the recovery process. Another machine that is useful in this pursuit is the near-infrared spectroscopy machine, or NIRS for short. It has a bank of 32 lasers, all in the near-infrared part of the spectrum. On our machine, they are specifically calibrated to 690, 785, 808, and 830 nm. There are repeating rows of four lasers in each color, paired with Avalanche Photo Diodes (APDs), which count the photons that come back out. Right now, we’re interested in looking at traumatic brain injury, Alzheimer’s, and stroke.
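The wavelength-comparison calculation Diamond describes is typically done with the modified Beer-Lambert law: the attenuation change at each wavelength is a weighted sum of oxy- and deoxy-hemoglobin concentration changes, and measuring at two wavelengths lets one solve a 2x2 linear system. A minimal sketch (the extinction coefficients, path length, and readings below are all assumed illustrative values, not calibration data from Diamond's instrument):

```python
# Sketch of converting dual-wavelength NIRS attenuation changes into
# oxy-/deoxy-hemoglobin concentration changes via the modified Beer-Lambert law:
#   dA(lambda) = (eps_HbO2(lambda) * dC_HbO2 + eps_Hb(lambda) * dC_Hb) * L
# Every number here is illustrative, not a real calibration value.

def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve [[a11, a12], [a21, a22]] @ [x, y] = [b1, b2] by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

# Assumed extinction coefficients at 690 nm and 830 nm (arbitrary units)
EPS = {690: {"HbO2": 0.35, "Hb": 2.10}, 830: {"HbO2": 1.00, "Hb": 0.78}}
L_PATH = 6.0  # assumed effective optical path length, cm

dA = {690: 0.040, 830: 0.055}  # measured attenuation changes (made up)

d_hbo2, d_hb = solve_2x2(
    EPS[690]["HbO2"] * L_PATH, EPS[690]["Hb"] * L_PATH,
    EPS[830]["HbO2"] * L_PATH, EPS[830]["Hb"] * L_PATH,
    dA[690], dA[830],
)
```

The two wavelengths are chosen on opposite sides of the hemoglobin isosbestic point (near 808 nm, one of the laser lines Diamond mentions) so that the system is well conditioned: deoxy-hemoglobin dominates absorption at 690 nm and oxy-hemoglobin at 830 nm.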
Alzheimer's disease is typically thought to involve beta-amyloid plaques and neurofibrillary tangles in the brain that lead to cell death. There's strong
Photo by John Sherman, courtesy of Thayer School of Engineering at Dartmouth
Solomon Diamond, Assistant Professor of Engineering at the Thayer School of Engineering.
evidence that the plaques are also causing inflammation of the local arterioles and small blood vessels in the brain that impairs the reactivity of these vessels. Normally, when the neurons are active, the blood vessels respond in a coordinated way called neurovascular coupling. But with inflammation this coupling is disrupted. This is thought to be one of the factors that accelerates the disease process. An accumulation of plaques leads to impaired function and hypoxic stress in the microenvironment, both of which lead to an enhanced progression of cell death and greater cognitive decline. By measuring the neuron activity and the blood dynamics at the same time, we hope to identify the decoupling, and therefore the breakdown of the physiological relationship between neural and hemodynamic factors, early in the disease. We also hope to track the disease progress over time to help evaluate which therapeutics affect the physiology of the brain.
What was your path to becoming an engineer? How did you become interested in engineering? I spent a lot of my childhood wanting to be an inventor. As a child, I designed and built things for fun. I also grew up with a family business. I used to help out in our woodshop a lot, and I became skilled with the tools there, which helped me to come up with ideas to build things. At the time, I liked to make dart guns. I went through a whole series of design iterations on different triggering mechanisms and propulsion mechanisms. I liked doing that sort of design work from a young age. I also just loved computers and technology. My family business also ran a small computer store, and I used to assemble the PC clones. That was "the thing" at the time. IBM had the market cornered, and other producers started making separate motherboards, hard drives, and control cards. When I was in junior high, my dad and I would buy all the pieces, put them together, and sell them
to local clients. I was also tech support, so I would take calls at home and help people with their computer troubles. Since I set up the computer environment for them, I had it all memorized. While talking with someone on the phone, I was able to close my eyes and guide them keystroke by keystroke through the computer. I just really enjoyed it. Overall, I would chalk it up to a combination of a desire to invent things and a love for computers. When I got to Dartmouth and went to the information session at Thayer, I thought, "Wow, this is for me." I decided pretty early on that engineering was going to be a lot of fun, especially here at Dartmouth. I also studied Chinese while I was here: I went on the Beijing FSP, spent a leave term in Taiwan continuing with my language studies, and maintained a high level of involvement with the Asian studies program through my senior year. The thing that really appealed to me about engineering at Thayer was the fact that it's problem-oriented, project-based learning.
In what ways did your undergraduate experience at Dartmouth help shape your career? My undergraduate experience at Dartmouth helped me to structure my approach to problem solving and also helped me to learn how to apply my creative energies in a productive way that could really make a difference in the world. I saw that process go through a full cycle multiple times in my experience at Dartmouth and Thayer. I think that we successfully connect students with the world, with real-world problems, and real-world thinking. During my time as a graduate student at Harvard in the School of Engineering and Applied Sciences, it was apparent to me that I had learned a problem-solving methodology that gave me an advantage over my peers. The strong academic training—a combination of mathematics and engineering sciences along with the breadth of other studies I had done here—helped me to refine my ability to define a problem, develop a strategy, and execute that plan while being creative in the process. I was aware that my peers hadn't had a comparable experience. In most cases, it was also apparent that they hadn't had nearly as much fun learning as I did while I was at Dartmouth.
How did you decide to tackle the problem of Alzheimer’s and stroke? I became interested in the human brain and mind when I was still very young. Before college, I was reading books on my own about the mind and the brain, and I was fascinated. I didn’t think it was something that I’d be able to combine with my interest in technology and engineering; I saw it as a side interest. Then, through the undergraduate program at Dartmouth, I connected with an adjunct professor at Thayer named Bob Dean. Bob is an extraordinary professor, inventor, and entrepreneur. He has made a huge mark on the Upper Valley, having co-founded or founded a number of companies in the area. Early on, I had to do an independent project for my fluid mechanics class, and I decided to contact Bob at his company Synergy Innovations to ask if he had a project that I could work on for my class. He agreed, and I ended up analyzing a submerged water jet that had the potential to break up kidney stones. The project went well, and when it came time to do my capstone design project, which is comparable to the currently offered Engineering Sciences 89 and 90 courses, I again contacted Bob. This time, I was motivated to work personally with Bob, and we decided to take on a project that he had originally co-invented with a physical therapist: an exercise machine for the bed-ridden elderly. The idea is that when someone is elderly, their strength is often diminished relative to the strength needed to walk. When one is bed-ridden for a long time, due to sickness or injury, a loss of basic motor abilities can result in an inability to stand up from a low chair, such as a toilet, or to walk independently— those sorts of things. They invented a machine that would attach to the bottom of a hospital bed. While in bed, it allows patients to do leg exercises to maintain leg strength and stay mobile. For me, this was a very exciting project. 
Bob was opening up a world for me where I could connect technology and engineering with an ability to address very direct human need. Bob had previously applied for funding from the NSF for this project and did not win the award, so I set about redesigning the device from scratch, built a prototype, and started working with some local elderly subjects who tried out the device. I gathered new data from the
machine and rewrote the NSF grant proposal, which we submitted together and won the award. One of the subjects to use this in-bed exercise machine, called IBEX, was a stroke patient. This was my first experience working with a stroke patient. No one in my family had had a stroke, and I had not had that personal experience before. There it was, right in front of me, a very direct connection between the brain and a physical disability that I was able to measure with technology. That was the moment when I decided I was going to find a way to connect what I was doing at the time in mechanical design and engineering with the human brain. I spent some time digging in and doing my own research on how these fields were connected and what sorts of technology were available to study the brain. Through this process, I learned about fMRI, EEG, and other technologies. I applied to graduate school to combine my interests in technology design, rehabilitation, and the brain. That's what I went to Harvard to study. When I got to Harvard, I studied under Robert Howe, whose lab focused on surgical robotics. I spent a lot of my first year developing a research plan. I had extra time because I'd taken some graduate courses while I was at Thayer, during the fifth year, the B.E. year. I was able to transfer those course credits to Harvard, so I had a reduced course load. I spent my extra time developing a research plan and writing a pilot grant to the Charles A. Dana Foundation, which had a grant mechanism that combined biomedical imaging of brain function with neurological disorders. I applied for the grant and won that award, which allowed me to pay my way through graduate school. There was a transitional period between when I was just focusing on mechanical design in my first couple of years at Thayer and when I was in touch with Bob Dean looking for projects that combined technology with human need.
I took a course here at Dartmouth in the education department taught by Professor Linda Mulley on special education. She was the one who first introduced me to the connection between engineering and technology and communication and mobility aids. For me, that was a really important bridge between my first forays into engineering design and seeking out Bob Dean, who was in the process of combining the two fields of engineering and rehabilitation technology.
have to be overcome. I’m not an expert on FDA review and those sorts of issues, but at some point, I hope to become one. Those issues have to be addressed in order for the probe to be advanced into clinical use. I’m looking to move from a phase of technological development and incubation into technological dissemination.
What would be your advice to students who are interested in research?
Photo by John Sherman, courtesy of Thayer School of Engineering at Dartmouth
Professor Solomon Diamond, left, Ph.D. candidate Paolo Giacometti, seated, and Alison Stace-Naughton ’11 are designing a system to measure brain function. Professor Diamond is one of Thayer’s eight new tenure-track assistant professors.
Given the interdisciplinary nature of your research, can you talk a bit about your collaborations with other professors? I collaborate with many people here at Dartmouth. I made extra efforts when I first arrived back on campus to reach out to as many people as I could who were involved with brain imaging technology, neuroscience research, and clinical research and practice related to brain disorders and brain injury. It is really a wonderful community, one I feel comfortable connecting with as a faculty member. Over time, my relationships have grown into research projects with others. I have some active projects with the Neurology department. I also have active collaborations with the Physiology and Neurobiology department on some neuroscience questions. I keep in regular contact with people in the Psychiatry department at Dartmouth Medical School, and with faculty from the department of Psychological and Brain Sciences. It has been great for me to be in a position of developing technology that helps so many people answer important questions in their research. When I go and knock on doors, I generally get a very friendly response and a very positive interaction. It is exciting to identify opportunities for new discovery and for new kinds of care that are enabled by advances in technology. In fact, this is
one of the fundamental ways that science and technology advance. You get a new microscope, you see new things. You have breakthroughs and discoveries that enable all sorts of advancements in the world. Engineers are valued and respected for those reasons.
What do you see as the next step after you finish developing this cap? With the cap, we've entered a licensing agreement with a company in Montreal called Rogue Research. We're going to work together to refine our research prototype into a product. First, it's going to be demonstrated at the annual meeting of the Organization for Human Brain Mapping in Beijing in June. Then it will be released officially as a product in November at the Society for Neuroscience meeting in New Orleans. We're hoping to build a user base in the research world that will grow, allowing us to refine the product and make it more effective and less expensive after a couple of years. At that point, the pilot clinical work that we are doing will hopefully have gone through a few cycles, and we will have preliminary clinical data using the device and be well equipped to aggressively promote its use in the clinical research world. At some point, we're going to transition to trying to advance the head probe from research use into actual clinical care. There are, however, some major hurdles that will
Research is fundamentally a creative enterprise, so it makes sense to spend time nurturing your creative energies, abilities, and interests. It is also very important to identify mentors, not only in research, but in all aspects of life. I had mentors carrying me through each phase of my life, and would not be here today if I didn't have that sort of guidance and people who really cared a lot about helping me to mature, develop, and learn to apply myself. So, seek out mentors and learn from them. There is also a different mindset in research, which is that most of human knowledge is yet to be discovered. It's not already known. We need to push ourselves to master what is known and take steps into the unknown, which can be very disorienting at first. High school is mostly about teaching students everything that's known. College is a lot about teaching them what is known, and some of the advanced classes begin to step a little bit beyond. However, to jump into a research career you need to be able to take a big leap out of the known into the unknown. Learning how to do that in a way that is going to lead to verifiably correct new knowledge and discoveries takes a lot of experience, a lot of trial and error, a certain amount of failure, and a willingness to take risks. But over time, it is a skill that can be mastered. So, I would say, begin to do that. If jumping into that new knowledge is something that you really enjoy, that you find is fulfilling to you personally, then maybe a research career is right for you. If you find that jumping into the unknown is something you don't enjoy so much, look for another direction to go in life. My fourth piece of advice is to always step back and examine yourself and see what it is you enjoy and what it is you don't enjoy so much. Don't worry so much about the skills you have or the skills you lack because those can be learned.
Your heart has to be in it because it's not an easy road, so make sure you enjoy it, and if you do, dive in.
COSMOLOGY
Exploring the Cosmic Microwave Background All-Encompassing Light from the Early Universe SHAUN AKHTAR
At every moment, night and day, the Earth is bombarded from all directions with microwave-range radiation. This radiation is characteristic of a black body, an opaque object in thermal equilibrium, at a temperature of approximately 2.725 kelvin (1). Arno Penzias and Robert Wilson, then working at Bell Labs, accidentally discovered this continuous background signal in 1964. They first described it as an "excess antenna temperature" they could not account for when calibrating their instruments (2). A team of researchers at Princeton University, led by Robert H. Dicke, soon realized that this radiation was the remnant of the "primordial fireball" that existed following the Big Bang (3). With these discoveries, research of the cosmic microwave background (CMB) had begun.
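The temperature quoted above can be checked directly: plugging T = 2.725 K into Planck's law shows that such a black body radiates most strongly near 160 GHz, squarely in the microwave band. A short numerical sketch, for illustration only and not tied to any instrument's analysis:

```python
import numpy as np

h = 6.626e-34   # Planck constant (J s)
k = 1.381e-23   # Boltzmann constant (J/K)
c = 2.998e8     # speed of light (m/s)
T = 2.725       # CMB black-body temperature (K)

# Planck spectral radiance B_nu evaluated over 1 GHz .. 1 THz
nu = np.linspace(1e9, 1e12, 200_000)
B = (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))
nu_peak = nu[np.argmax(B)]
print(nu_peak / 1e9)  # about 160 GHz, in the microwave range
```

The same calculation run at the roughly 3000 K temperature of the surface of last scattering would peak at a frequency about a thousand times higher, which is the redshift the article describes.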
The CMB and Cosmology The early universe was extremely crowded with highly energetic photons. These photons had sufficient energy to break the bonds that may have formed between any electrons and nuclei. Since the universe was very dense at this point, atoms would be ionized almost immediately after combination. As a result, the early universe was full of free electrons and nuclei, with photons continually scattering off both types of particles. As the universe expanded and cooled, the energies of photons decreased. Accordingly, their ability to regularly excite and ionize electrons from their host atoms was diminished. This point is known as decoupling—the period in which the frequency of interaction between photons and electrons decreased precipitously. Since the time of decoupling, many photons have traveled unimpeded all the way to our telescopes and antennae. As the universe has continued to grow, their wavelengths have been redshifted into the microwave range of the spectrum. From Earth's point of view, these photons composing the CMB appear to originate from the edge of a sphere. This edge is called the surface of last scattering (4). In 2006, astrophysicists George
Smoot and John Mather received the Nobel Prize in Physics for their contributions to the instruments aboard the Cosmic Background Explorer (COBE). COBE collected data that conclusively demonstrated the spectrum of the CMB to be remarkably well described by black-body radiation (Fig. 2), supporting the idea that this background was a remnant of a largely homogeneous early universe. This result confirmed that mathematical models could be made with some confidence about the expansion of the universe. In addition, COBE revealed that the CMB contained small-scale temperature fluctuations, called anisotropies, on the order of one part in one hundred thousand. The CMB's anisotropies were correctly predicted to originate from the interaction of early background radiation with perturbations in the density and velocity of extant matter (5). The subsequent study of these anisotropies allowed researchers to establish constraints on the cosmological parameters ascribed to the current standard model (4). In
a presentation speech for Mather and Smoot's 2006 Nobel Prize, Dr. Per Carlson stated that their research developed the foundation for cosmology's establishment as a "precision science" (6). The anisotropies of the CMB can be viewed in the form of an angular power spectrum (Fig. 3). Angular power spectra represent the state of oscillations in the fluid of photons, electrons, protons, and neutrons immediately preceding decoupling and the release of the background radiation. Fortunately for cosmologists, the science determining such oscillations is sufficiently well understood that power spectra can be drawn for many scenarios involving varied initial conditions, namely, various parameters of the cosmological model. These parameters include the Hubble constant, H0, the rate of expansion of the universe, whose reciprocal provides an estimate for the universe's current age. Another parameter is ρcrit, the critical density value determining the universe's
Photo courtesy of NASA
Figure 1: The cosmic microwave background was first detected by a horn antenna seeking satellite-created radio waves at Bell Labs in Holmdel, New Jersey.
spatial geometry. A third parameter is the cosmological constant Λ, which represents the pure energy available in a vacuum. It has recently been associated with the concept of “dark energy,” which appears to be driving an acceleration in the universe’s rate of expansion (7).
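The reciprocal of the Hubble constant mentioned above sets a rough age scale for the universe. A back-of-envelope check, using the illustrative round value H0 = 70 km/s/Mpc (an assumption for demonstration, not a quoted measurement):

```python
H0 = 70.0                # Hubble constant, km/s per megaparsec (illustrative)
km_per_mpc = 3.0857e19   # kilometers in one megaparsec
sec_per_yr = 3.156e7     # seconds in one year

# 1/H0 has units of time once the distance units cancel:
# (km/Mpc) / (km/s/Mpc) = seconds.
hubble_time_yr = km_per_mpc / H0 / sec_per_yr
print(hubble_time_yr / 1e9)  # roughly 14 billion years
```

The result is close to the WMAP-derived age quoted later in the article; the agreement is only approximate because the true age also depends on the universe's expansion history, not just its present rate.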
Historical Breakthroughs Research into the CMB has produced both theoretical and observational breakthroughs over the past half-century. During the 1970s, Rashid Sunyaev and Yakov Zel'dovich predicted, researched, and confirmed inverse Compton scattering as a major source of secondary anisotropy in the CMB. Inverse Compton scattering, which involves the transfer of energy from high-energy electrons to photons, takes place to a noticeable extent in hot electron gas. Electron gases are abundant in intergalactic regions within galactic clusters. Perturbations of the CMB caused by inverse Compton scattering, known as the Sunyaev–Zel'dovich effect, can therefore point astronomers toward the location of faraway galactic clusters (8). Understanding the apparent black-body spectrum of the CMB thus helped select the Big Bang model as the preferred model for describing universal origins (9). In 2001, the Wilkinson Microwave Anisotropy Probe (WMAP) was launched as a successor to COBE. Over seven years, WMAP plotted a full-sky map of the CMB and provided the best look yet at its angular power spectrum. The WMAP data correspond most closely to a cosmological model known as Λ-Cold Dark Matter, or Λ-CDM, which assumes the presence of dark energy and "cold" (non-relativistic) dark matter. WMAP provided not only improved estimates for cosmological parameters but also tight confidence intervals for these values. Highlights of these updated parameters include: the current age of the universe, at 13.75 ± 0.11 billion years; the dark energy density, at 72.8 ± 0.16 percent of the critical density; and the age of the universe at decoupling, at 377,730 ± 3,205 years (10).
Current Research A number of new projects are underway to investigate the properties of the CMB and refine current estimates. Two of the most notable are the Atacama Cosmology Telescope, located in elevated desert land on Cerro Toco in
Image courtesy of NASA
Figure 2: Measurements from the Cosmic Background Explorer (COBE) showed that the intensity of the background radiation was a precise match to the black-body spectrum predicted by the Big Bang theory.
Chile, and the Planck satellite, operated by the European Space Agency (ESA) since 2009.
Atacama Cosmology Telescope The Atacama Cosmology Telescope (ACT) is a six-meter reflecting telescope resting over 5,100 meters above sea level in the Chilean mountains. It is focused on three bands in the microwave range. It is tasked with both improving current estimates of cosmological parameters and observing faraway galactic clusters and their local environments (11). In 2011, researchers operating the ACT reported that telescopic data had provided evidence for the existence of dark energy derived solely from observation of the CMB. Analysis of the background radiation's power spectrum cannot simultaneously constrain values for the universe's curvature and rate of expansion. However, ACT detection of radio sources revealed gravitational lensing, or path distortion due to the pull of massive objects, affecting photons from the CMB. This lensing data, combined with the power spectrum, is sufficient to provide an estimate for the density of dark energy (12). In March, scientists announced that ACT sky maps had led to the first measurement of galaxy cluster motions using the kinematic variety of the Sunyaev–Zel'dovich effect. This component of the effect induces a
temperature shift in passing CMB photons proportional to the relative velocity of the galactic cluster. It is much less noticeable than the more standard thermal Sunyaev– Zel’dovich effect in high-mass clusters (13). Further breakthroughs derived from ACT data are expected in coming years.
The Planck Observatory The Planck satellite was launched by the ESA with the goal of mapping the CMB across 95 percent of the sky. The ESA boasted that Planck was intended to improve upon COBE's measurement of temperature variations by a factor of ten, and refine its angular resolution of the data by a factor of fifty. In addition to strengthening the estimates for a number of cosmological parameters, the Planck team hoped to determine whether certain anisotropies could be attributed to gravitational waves in the early universe. Such results would support the theory of cosmic inflation, which holds that the universe experienced a period of exponentially rapid expansion shortly after the Big Bang, converting small-scale density fluctuations into the seeds for the massive structures that would later develop. Another goal of the Planck project was to compare structures found in the high-resolution CMB maps with surveys of millions of known galaxies, in an attempt to connect the dots in the evolution
Image courtesy of NASA
Figure 3: Recent data, including that gathered by the Wilkinson Microwave Anisotropy Probe (WMAP), have placed constraints on the background’s angular power spectrum, and, with it, a number of cosmological parameters.
of galactic clusters over time (14). Early results from the instruments aboard Planck were first released in December of last year, through 26 papers published in Astronomy & Astrophysics. Detection of the Sunyaev–Zel'dovich effect indicated almost 200 potential galactic clusters, many of which were confirmed by additional observation. Planck results have also provided great detail of the cosmic infrared background, the radiation emitted from heated dust particles surrounding stars. However, data on the CMB itself remain unavailable while the project team separates the background radiation from stronger foreground sources. Planck's most exciting data therefore await a 2013 release (15).
Conclusion As investigation of the cosmic microwave background approaches its second half-century, cosmologists have a
wealth of observational resources at their disposal for investigating this signal from the early universe. Further refined analysis of the CMB will provide greater insights into the details of the infancy of the cosmos, and will reflect a greater understanding of the constraints that have driven our universe over the past 13.7 billion years. CONTACT SHAUN AKHTAR AT SHAUN.Y.AKHTAR.12@DARTMOUTH.EDU
References
1. D. J. Fixsen, Astrophys. J. 707, 916-920 (2009).
2. A. A. Penzias, R. W. Wilson, Astrophys. J. 142, 419-421 (1965).
3. R. H. Dicke, P. J. E. Peebles, P. G. Roll, D. T. Wilkinson, Astrophys. J. 142, 414-419 (1965).
4. A. Liddle, An Introduction to Modern Cosmology (Wiley, West Sussex, ed. 2, 2010).
5. P. D. Naselsky, D. I. Novikov, I. D. Novikov, The Physics of the Cosmic Microwave Background (Cambridge Univ. Press, Cambridge, 2006).
6. P. Carlson. Speech given at the presentation of the 2006 Nobel Prize in Physics, Stockholm, Sweden, 10 Dec 2006.
7. D. Scott, Can. J. Phys. 84, 419-435 (2006).
8. Y. Rephaeli, Annu. Rev. Astron. Astrophys. 33, 541-580 (1995).
9. P. J. E. Peebles, D. N. Schramm, E. L. Turner, R. G. Kron, Nature 352, 769-776 (1991).
10. N. Jarosik et al., Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Sky Maps, Systematic Errors, and Basic Results (2010). Available at http://lambda.gsfc.nasa.gov/product/map/dr4/pub_papers/sevenyear/basic_results/wmap_7yr_basic_results.pdf (4 April 2012).
11. ACT Atacama Cosmology Telescope, About Us (2012). Available at http://www.princeton.edu/act/about/ (4 April 2012).
12. B. D. Sherwin et al., Phys. Rev. Lett. 107, 021302 (2011).
13. N. Hand et al., Phys. Rev. Lett., in press (available at http://arxiv.org/pdf/1203.4219v1.pdf).
14. U. Seljak, Nature 482, 475-477 (2012).
15. European Space Agency, Objectives (2010). Available at http://sci.esa.int/science-e/www/object/index.cfm?fobjectid=30968 (4 April 2012).
ALCOHOL
Understanding Alcohol
Alcohol and its Effects on the Body
Despite the pervasiveness of alcohol, its physiological effects are often overlooked or misunderstood. Discussion of the effects of alcohol often brings to mind drunk driving and lowered inhibitions. However, the effects of alcohol on the body are far more subtle and comprehensive than impaired motor skills and judgment. Understanding the biological consequences of alcohol allows better decisions to be made.
Cellular Effects In aqueous solutions, ethanol forms hydrogen bonds with water, disrupting the organization of the water molecules. Consequently, the presence of alcohol affects the hydrophilic interactions occurring on the cell membrane, including receptor and channel function. Normal ion flow is thus disrupted. This can cause cells to become dehydrated and eventually lyse. Active ion transporters also utilize additional adenosine triphosphate (ATP)
Image courtesy of Benjah-bmm27 retrieved from http://en.wikipedia.org/wiki/File:Ethanol2D-flat.png (accessed 10 May 2012)
Figure 1: The structure of ethanol.
Chemistry Beverages are considered alcoholic if they contain the chemical ethanol. Ethanol, C2H6O, is a polar molecule consisting of a two-carbon chain and a hydroxyl (–OH) side group (Fig. 1). The non-polar covalent bonds between carbon and hydrogen and the polar covalent bonds between oxygen and hydrogen allow ethanol to be miscible in both hydrophilic (water-based) and hydrophobic (lipid-based) solutions. Moreover, the partial separation of charge between hydrogen and oxygen in the –OH group also permits hydrogen bonding with other substances.
Image courtesy of the NIH
Figure 2: Harmful byproducts resulting from alcohol metabolism.
Potentially toxic products result from the breakdown, or metabolism, of alcohol. (Metabolism is also referred to as "breakdown," "oxidation," and "degradation.") The major alcohol-metabolizing enzymes are alcohol dehydrogenase and cytochrome P450 2E1 (CYP2E1). Alcohol dehydrogenase converts alcohol to acetaldehyde, which can react with proteins in the cell to generate hybrid molecules known as adducts. CYP2E1 also generates acetaldehyde, as well as highly reactive oxygen-containing molecules called oxygen radicals, including the hydroxyethyl radical (HER). Elevated levels of oxygen radicals can generate a state of oxidative stress (excess levels of oxygen radicals and/or reduced levels of antioxidants), which leads through various mechanisms to cell damage. Oxygen radicals also can interact with fat molecules (lipids) in the cell in a process known as lipid peroxidation, resulting in reactive molecules such as malondialdehyde (MDA) and 4-hydroxy-2-nonenal (HNE). Both of these can react with proteins to form MDA–protein and HNE–protein adducts. MDA also can combine with acetaldehyde and protein to form mixed MDA–acetaldehyde–protein adducts (MAA). HER also interacts with protein to form HER–protein adducts. Source: Tuma, D.J. and Casey, C.A. Dangerous byproducts of alcohol breakdown—Focus on adducts. Alcohol Research & Health 27(4):285–290, 2003.
in an attempt to maintain proper balance, increasing the energy demand of the cell. The hyper-metabolic state that ensues can induce tissue-wide hypoxia (lack of oxygen) and eventually necrosis (cell death). Due to its small size and lipid solubility, ethanol can also move through the membrane by passive diffusion. Once inside the cell, it can induce apoptosis, or programmed cell death, by activating enzymes called caspases. When broken down, ethanol also produces acetaldehyde, which can react irreversibly with both nucleic acids and proteins, compromising their function and successful repair (7). Free radicals are another potent by-product of ethanol metabolism that can cause oxidative stress, leading to lipid peroxidation and further damage to nucleic acids and proteins (Fig. 2) (10). In addition to its direct effects, ethanol can bind to specific neurotransmitter receptors, including NMDA, GABA, acetylcholine, and serotonin types, thus affecting nervous system activity.
Consumption Before discussing the effects alcohol has on the different parts of the body, it is necessary to consider how alcohol is consumed. The acute and chronic effects of drinking alcohol depend on the amount consumed per sitting. Ethanol is ingested through the mouth and absorbed through the stomach and intestinal lining, where it enters the bloodstream. Because ethanol can cross the blood–brain barrier (BBB), it can affect virtually every cell in the body; however, it does not do so equally, as cell types are differentially sensitive. Finally, the alcohol is filtered out by the liver, where it is broken down.
Mouth Alcohol use is correlated with cancer of the mouth, a trend maintained for both moderate and heavy drinkers (4). This increased risk is facilitated in part by the activity of acetaldehyde, which impairs cellular DNA repair machinery (5). Consequently, mutations associated with cell division, which would normally be corrected, become permanent. Chronic alcohol abuse can also cause hypertrophy of the parotid gland, the largest salivary
gland (interfering with saliva excretion), glossitis (enlargement of the tongue), and stomatitis (inflammation of the mouth). Such long–term use also increases the risk of tooth decay, tooth loss, and gum disease.

Image courtesy of Mikael Häggström retrieved from http://en.wikipedia.org/wiki/File:Possible_long-term_effects_of_ethanol.png (accessed 10 May 2012)

Figure 3: Possible long-term effects of ethanol on the human body.

Esophagus
Heavy alcohol consumption weakens the lower esophageal sphincter, increasing gastroesophageal reflux and heartburn. Ethanol also damages the esophageal mucosa (the mucus lining of the esophagus), damage which, with chronic consumption, can progress to esophagitis (inflammation of the esophagus), Barrett’s esophagus, and Mallory–Weiss syndrome. Barrett’s esophagus is a condition in which the esophageal epithelium begins to resemble stomach epithelium as a result of frequent exposure to stomach acid, often leading to esophageal cancer. Mallory–Weiss syndrome is characterized by massive bleeding produced by tears in the mucosa. Such tears are caused by repeated retching and vomiting following excessive drinking.

Stomach
Acute ethanol consumption increases gastric acid production and the permeability of the gastric mucosa. As a result, drinking can cause erosive gastritis (inflammation of the stomach). Particularly heavy drinking can even induce hemorrhagic lesions (damage which causes bleeding) and permanently destroy parts of the mucosa. Ethanol also increases gastric transit time (how long food stays in the stomach). This allows for early bacterial degradation of food, leading to feelings of fullness and abdominal discomfort. Chronic alcoholism decreases the gastric secretory capacity of the stomach, reducing its ability to destroy the bacteria in food. These deleterious effects can result in bacterial overgrowth and colonization of the duodenum, the upper small intestine.

Intestine
Ethanol decreases impeding wave motility, a process that retains food for further digestion in the small intestine and compaction in the large intestine. Reducing this movement causes diarrhea and a loss of nutrients. In addition, ethanol inhibits nutrient absorption. Chronic alcoholism leads to decreased absorption of proteins, carbohydrates, fats, and some vitamins, which can contribute to malnutrition and weight loss (2). Ethanol also interferes with enzymes, including those involved in nutrient transport and digestion, like lactase. Heavy drinking can cause duodenal erosions and bleeding by damaging the intestinal mucosa and disturbing the integrity of the epithelium. Ethanol also decreases prostaglandin synthesis and induces the release of cytokines, histamine, and leukotrienes. The ensuing inflammatory response can damage capillaries and lead to blood clotting and impaired transport of fluids. Consequently, fluid accumulates under the tips of villi, causing their destruction. The resulting lesions increase intestinal permeability, allowing toxins into the bloodstream and lymph, potentially harming other organs.
Pancreas
Alcohol elevates the synthesis of digestive enzymes in pancreatic acinar cells. Concomitant with this upregulation is an increase in the fragility of lysosomes (vesicles containing digestive enzymes) and zymogen granules (cellular packages containing enzyme precursors). This is due to the ethanol–induced accumulation of fatty acid ethyl esters (FAEEs) and the reduced GP2 content of zymogen granule membranes. Consequently, these containers tend to lyse, causing auto–digestion of cells by trypsin and other enzymes. Chronic alcoholism can cause pancreatitis, which is characterized by widespread tissue atrophy, fibrosis (tissue scarring), and calcification of the pancreas (8).
Circulatory System
Ethanol increases plasma high–density lipoprotein (HDL) levels, decreasing the risk for coronary artery disease (CAD) (3). Ethanol also reduces the chance of thrombosis (formation of a blood clot) by disrupting platelet function. This effect is facilitated by increased formation of prostacyclin, which inhibits platelet aggregation. Elevated levels of the enzyme plasmin also increase the rate of clot dissolution. As a result, there is a decreased risk for embolus (detached intravascular mass) formation, myocardial infarction (heart attack), and ischemic (restricted blood supply) stroke in moderate drinkers. On the other hand, heavy drinking can cause dilated cardiomyopathy, which is characterized by low cardiac output and hypertrophy of the heart and can lead to congestive heart failure. This occurs because alcohol alters the permeability of the sarcoplasmic reticulum to Ca2+ ions, which are required for muscle contraction. In addition, ethanol decreases the synthesis of actin, myosin, and mitochondrial proteins. High blood alcohol content (BAC) also reduces the oxygen supply to cardiac muscle. Chronic alcoholism can induce atrial fibrillation (irregular heartbeat), premature beating, supraventricular tachycardia (rapid heart rhythm), and ventricular arrhythmias (abnormal heart rhythm originating in
the ventricles of the heart). In turn, these conditions can increase clot formation and the propagation of existing clots, which raises the risk for ischemic stroke. Additionally, ethanol blocks the action of folate, which is used in red and white blood cell synthesis, weakening immune defenses and overall blood health.
Kidneys
Acute ethanol consumption impairs kidney function by inhibiting the secretion of anti–diuretic hormone (ADH), or vasopressin, from the pituitary gland (1). As a result, diabetes insipidus (DI) can occur. DI is a condition in which water retention, which is normally regulated by ADH, is blocked, causing the body to become dehydrated. Chronic alcoholism can also cause permanent renal dysfunction, hypophosphatemia, hypomagnesemia, hypokalemia, and aminoaciduria.
Peripheral Nervous System Ethanol enhances the effects of GABA, an inhibitory neurotransmitter, by binding to its receptors. Consequently, intoxicated individuals experience decreased sensation in their peripheral nervous system (PNS). Chronic alcoholism, however, can lead to a condition called alcoholic polyneuropathy. Alcoholic polyneuropathy is characterized by the degeneration of motor and sensory neuron axons in the PNS caused by the segmental thinning of myelin. This is especially harmful because the thinning myelin increases action potential leakage, producing further degeneration. These effects manifest themselves as pain, motor weakness, and eventually muscle atrophy.
Liver
The liver is the primary site of alcohol metabolism. As such, it is exposed to the acetaldehyde and free radicals that are produced by ethanol catabolism. Exposure to these molecules facilitates cell apoptosis. The liver is also exposed to endotoxins, which enter the bloodstream through lesions in the small intestine and bind to the CD14 receptor on Kupffer cells (immune cells that reside in the liver). Bound cells release cytokines and interleukins, such as tumor necrosis factor (TNF), as well as radical oxygen species (9). These molecules induce an inflammatory response that damages the liver tissue. Ethanol also increases the expression of iron transporter proteins, causing iron stores in the liver to expand. Because iron catalyzes the production of free radicals, this increased storage can be harmful. By these mechanisms, chronic alcoholism can induce steatohepatitis, a condition in which fat accumulates in the liver tissue. When combined with cell death, steatohepatitis leads to hepatic fibrosis (scarring of the liver). The scar tissue that forms disrupts the structure of the liver, inhibiting the normal regeneration of hepatocytes (liver cells). From here, the pathology can progress to liver disease, cirrhosis of the liver, and cancer (Fig. 3) (6).

Conclusion
Alcohol can affect the body in a variety of harmful ways when used in excess. On the other hand, when consumed in limited amounts, alcohol can improve cardiovascular health, decreasing the risk for heart attack and stroke (Fig. 4). Like most things, moderation is key.

CONTACT DEREK RACINE AT DEREK.R.RACINE.14@DARTMOUTH.EDU

References
1. Brick, John. Handbook of the Medical Consequences of Alcohol and Drug Abuse. New York: Haworth, 2008. Print.
2. Griffith, Christopher M., and Steven Schenker. “The Role of Nutritional Therapy in Alcoholic Liver Disease.” NIAAA Publications. Web. 14 Apr. 2012. <http://pubs.niaaa.nih.gov/publications/arh29-4/296-306.htm>.
3. Mukamal, Kenneth J., and Eric B. Rimm. “Alcohol’s Effects on the Risk for Coronary Heart Disease.” NIAAA Publications. Web. 14 Apr. 2012. <http://pubs.niaaa.nih.gov/publications/arh25-4/255-261.htm>.
4. Pelucchi, Claudio, Silvano Gallus, Werner Garavello, Cristina Bosetti, and Carlo La Vecchia. “Cancer Risk Associated With Alcohol and Tobacco Use: Focus On Upper Aero-Digestive Tract and Liver.” NIAAA Publications. Web. 14 Apr. 2012. <http://pubs.niaaa.nih.gov/publications/arh29-3/193-198.htm>.
5. Quertemont, Etienne, and Vincent Didone. “Role of Acetaldehyde in Mediating the Pharmacological and Behavioral Effects of Alcohol.” NIAAA Publications. Web. 14 Apr. 2012. <http://pubs.niaaa.nih.gov/publications/arh29-4/258-265.htm>.
6. Seitz, Helmut K., and Peter Becker. “Alcohol Metabolism and Cancer Risk.” NIAAA Publications. Web. 14 Apr. 2012. <http://pubs.niaaa.nih.gov/publications/arh301/38-47.htm>.
7. Tuma, Dean J., and Carol A. Casey. “Dangerous Byproducts of Alcohol Breakdown.” NIAAA Publications. Web. 14 Apr. 2012. <http://pubs.niaaa.nih.gov/publications/arh27-4/285-290.htm>.
8. Vonlaufen, Alain, Jeremy S. Wilson, Romano C. Pirola, and Minoti V. Apte. “Role of Alcohol Metabolism in Chronic Pancreatitis.” NIAAA Publications. Web. 14 Apr. 2012. <http://pubs.niaaa.nih.gov/publications/arh301/48-54.htm>.
9. Wheeler, Michael D. “Endotoxin and Kupffer Cell Activation in Alcoholic Liver Disease.” NIAAA Publications. Web. 14 Apr. 2012. <http://pubs.niaaa.nih.gov/publications/arh27-4/300-306.htm>.
10. Wu, Defeng, and Arthur I. Cederbaum. “Alcohol, Oxidative Stress, and Free Radical Damage.” Alcohol Research and Health 27 (2003): 277–84. Web. 14 Apr. 2012. <http://pubs.niaaa.nih.gov/publications/arh27-4/277-284.pdf>.
ROBOTS
The Technology of the Future is Here
Changing the Course of Science, Industry, Medicine and Human Companionship

REBECCA XU
Meet Roxxxy, a five-foot-seven, 120-pound, lingerie-clad female, also known as the world’s first robotic “girlfriend.” She senses touch, carries on rudimentary conversation, and has a mechanical heart that pumps coolant liquid (1). Roxxxy is one of many realistic, humanoid robots available in today’s society. Robots and robotics pervade countless areas of modern life, from industrial production to medicine and now, human companionship. Bill Gates, founder of Microsoft and a leader of the computer revolution, believes that robots are the next stage in technological innovation: “I can envision a future in which robotic devices will become a nearly ubiquitous part of our day-to-day lives” (2). Indeed, in fields where science and technology intersect, robots are the newest source of research and excitement.
Introduction
The human creation of automata, or objects that have a degree of self-movement, has a long history. Some of the earliest records of automata include an automated chariot, a flying machine, and mechanized figures and animals from ancient China. Clockwork automata were first constructed in 14th-century Europe and became popular in cathedrals; these included mechanized birds, bears, and biblical figures (3). Leonardo da Vinci was known for making an automated lion and drawing sketches of a humanoid knight that could move its limbs (2). Throughout the 18th, 19th, and 20th centuries, automata became progressively more advanced in their construction and movement. One such mechanism was an automated duck that could walk, drink, eat, and defecate; another was a female automaton that could play the piano. In 1965, Disney began its embrace of automatons with a robotic Abraham Lincoln that delivered patriotic speeches. It was turned into a Disneyland attraction, and still runs to this day. In 2005, Disney World premiered Lucky, a robotic dinosaur that roamed freely in its amusement parks and interacted intelligently with visitors.
The late 20th century to the modern day saw a vast improvement in the construction of robots and automata, especially in humanoid and animal-like machines. The word “robot” comes from the Czech term “robota,” which means serf or slave; it entered English through the translation of Karel Capek’s 1921 play R.U.R., or Rossum’s Universal Robots (3). Robots have had a solid place in science fiction ever since, depicted as both a boon and a bane to human existence. In the 1960s cartoon “The Jetsons,” robots were imagined as friendly helpers, exemplified by Rosie the robot maid. Likewise, the robots of the Star Wars franchise are essential and beloved characters. On the other hand, movies like “Blade Runner” (1982) and “I, Robot” (2004) show robots rebelling against their human creators. In literature, Isaac Asimov proposed four laws of robotics that scientists still consider when building robots. First, a robot may not injure humanity, or, through inaction, allow humanity to come to harm. Second, a robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order law. Third, a robot must obey orders given it by human beings, except where such orders would conflict with a higher-order law. Last, a robot must protect its own existence as long as such protection does not conflict with a higher-order law (4).
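Asimov's hierarchy can be read as a priority-ordered rule cascade. The sketch below is purely illustrative; the action names, the set-of-violations encoding, and the scoring rule are all invented here, not taken from Asimov or from the article:

```python
# Toy model of Asimov-style law precedence: a lower-numbered law always
# overrides a higher-numbered one. All names here are hypothetical.

LAWS = [
    "do not harm humanity",        # index 0: highest priority
    "do not harm a human being",
    "obey human orders",
    "protect own existence",       # index 3: lowest priority
]

def least_bad_action(candidates):
    """candidates maps an action name to the set of law indices it violates.
    Prefer the action whose worst violation is lowest-priority (or absent)."""
    def severity(violated):
        # smaller index = more severe violation; none at all scores best
        return min(violated) if violated else len(LAWS)
    return max(candidates, key=lambda name: severity(candidates[name]))

# Sacrificing itself (fourth law) beats letting a human come to harm (second):
print(least_bad_action({"stand_by": {1}, "intervene": {3}}))  # -> intervene
```

The point of the toy is only that "unless this would violate a higher-order law" is a comparison on priorities, not a flat list of prohibitions.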
Modern Day Uses Modern day robots are often conceptualized around a practical utility. In fact, the Robot Institute of America defines a robot to be “a reprogrammable, multifunctional manipulator designed to move materials, parts, tools, or specialized devices through various programmed motions for the performance of a variety of tasks.” Less technically, Merriam-Webster describes a robot to be “an automatic device that performs functions normally ascribed to humans or a machine in the form of a human” (4).
Image courtesy of Vanillase retrieved from http://en.wikipedia.org/wiki/File:ASIMO_Conducting_Pose_on_4.14.2008.jpg (accessed 10 May 2012)
Figure 1: ASIMO, known as the world’s most advanced humanoid robot, has 34 degrees of freedom in its limbs and can perform many fine movements with fluidity.
Industrial Robots
Today, the most common robot is the industrial robot. In 2009, North America had 192,000 industrial robots in use, as opposed to 113,000 service robots and 20,000 military robots (5). Industrial robots are advantageous in terms of price, speed, accuracy, and safety. Almost all industrial settings utilize robotic arms at some point; robots are responsible for assembling cars, welding sheet metal, and placing parts on computer chips, among many other tasks. Robotic arms also overcome human limitations in the size of their digits, which can be made much larger or smaller than human fingers. The latest industrial robots have advanced abilities that further enhance performance, particularly in situations where they are required to adjust to each individual product, further simulating human conscious performance. Fanuc Robotics, a company based in Michigan, makes machines that can be equipped with a technology called iRVision. The camera detects where an object to be picked up is, then relays that information to a computer program. That program directs the arm of the robot to move toward the target object at intervals until it is centered in the field of vision. Since the arm moves toward the object, the robot is said to be more intelligent than the traditional industrial robot. Another technology seen in certain robots is touch sensors on robotic arms and hands. This is particularly useful for robots that place parts into frames. With the
sensors, the arms can detect when there is too much resistance during insertion, such as when the part is incorrectly oriented. The robot can then automatically change the orientation to fit the part correctly. Vision and touch allow industrial machines to be much more precise in the presence of variations in the objects they handle. Often, they are used together, such as when a car door is attached to the frame of the car. The iRVision uses 3D laser sensors to locate the brackets for the car door in space, and tactile sensors detect if the door is properly placed within those brackets (6).
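The "move toward the target at intervals until it is centered" loop described above is, in essence, iterative visual servoing. A minimal sketch, assuming a simple 2D workspace; the function name, step size, and tolerance are invented for illustration and are not part of any Fanuc interface:

```python
# Toy closed-loop positioning: repeatedly measure the error between the
# arm and the camera's target estimate, then move a fraction of the way.
# All coordinates and parameters are hypothetical.

def center_on_target(arm_xy, target_xy, step=0.5, tolerance=0.01):
    """Iteratively move the arm toward the target seen by the camera."""
    x, y = arm_xy
    tx, ty = target_xy
    for _ in range(1000):                      # safety bound on iterations
        dx, dy = tx - x, ty - y
        if abs(dx) < tolerance and abs(dy) < tolerance:
            return (x, y)                      # target centered: stop
        x += step * dx                         # close a fraction of the error
        y += step * dy
    return (x, y)

final = center_on_target((0.0, 0.0), (4.0, 2.0))
# final is within `tolerance` of (4.0, 2.0)
```

Moving a fraction of the remaining error each cycle, rather than jumping straight to the first estimate, is what lets such a system absorb measurement noise and part-to-part variation.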
Military Uses Robots are also used for military purposes. Most frequently seen are unmanned aerial vehicles (UAVs), commonly known as drones (7). Their impact on military efforts in the Middle East has been significant. UAVs were used to kill Mohammad Atef, Al Qaeda’s chief of military operations, in 2001 and Anwar al-Awlaki, a cleric and prominent Al Qaeda member, in 2011 (8). However, military technology today largely focuses on developing unmanned ground vehicles (UGVs) for combat. In 2001, Congress asked the Pentagon to have one-third of all ground vehicles be automated by 2015. Since then, 6,000 UGVs
have been in use in the Middle East, mostly for surveillance and bomb detection, but officials hope to create robotic “soldiers” that can replace human ones during combat. Several armed UGVs are in development. In 2009, the army released three Special Weapons Observation Remote Direct-Action Systems (SWORDS) to Iraq. SWORDS were equipped with an M249 machine gun and could travel up to four miles per hour. However, due to technical glitches in testing, they were not used for combative purposes (9). The same company that made SWORDS later developed the Modular Advanced Armed Robotic System (MAARS), a larger and more versatile version of the former. MAARS is battery-powered and equipped with an M240 machine gun, as well as thermal imaging sensors that allow for nighttime surveillance. Moreover, it can be programmed to run certain procedures, such as avoiding fire zones to prevent misfiring (9). Nonetheless, both MAARS and SWORDS were built to be operated by a human soldier. Current efforts in UGVs aim to achieve complete automation. As one may expect, robotic weapons are a source of public controversy. Some claim that robotic programs can get viruses and malfunction. Another issue is that robot controllers may be too “trigger-happy” when removed from the physical arena of war.
Medicine and Rehabilitation
Image courtesy of U.S. Army
Figure 2: The Special Weapons Observation Remote Direct-Action Systems (SWORDS) robot shown here is equipped with an M249 machine gun, but it can wield other weapons in the mounting device as well.
Medicine has also greatly benefited from advances in robotics, especially in surgery. Surgical robots are classified as active, semiactive, or passive. Active robots are programmed to act independently, while semiactive and passive robots mimic the movements of the surgeon’s hands. In 1985, the first active surgical robot successfully performed a stereotactic brain biopsy with 0.05-millimeter accuracy. In 1992, Robodoc was introduced for patients receiving hip prostheses; Robodoc drills the cavity into the femoral head much more precisely than traditional techniques. Others include the Acrobot, used for knee replacement surgeries, and the RX-130, used for temporal bone surgeries. Robodoc, Acrobot, and RX-130 are commonly used in Europe but currently do not have FDA approval. Semiactive and passive surgical robots can be considered a type of telerobot. Telerobots operate through telepresence,
where a human operator is immersed in a virtual world (created through 3D cameras) away from where the robot is actually operating. In 1997, the first commercial robotic devices using telepresence were created by Intuitive Surgical, Inc. Known as the daVinci surgical system, the first model consisted of a surgeon’s console and a robotic instrument drive system with three arms. The surgeon console has two optical viewers that give the surgeon a three-dimensional view of the surgery and is operated by the surgeon’s arm, wrist, and pincer movements. The instrument drive system has two surgical arms and a third arm for the endoscopic video camera. The arms track the surgeon’s movements 1,300 times per second, filter tremors, and scale motions down to create tiny and stable movements. Various improvements have been made since the first model, and over 500 daVinci systems are used worldwide today. Robotic surgeries are advantageous in that they reduce blood loss and postoperative complications. Currently, more research is being done on implementing tactile feedback for the surgeon, a vital tool used by traditional surgeons when dealing with softer and more sensitive tissues (11). The field of prosthetics is also increasingly taking advantage of robotic devices. Prosthetics today can use Bluetooth technology to synchronize movements between two legs. They may have microprocessors and programs to control pressure and balance for more natural movement for the patient, especially when walking. Two promising methods take advantage of electrical signals in the muscles and nerves, allowing the patient limited control of the prosthetic limb. The first technology is known as myoelectric connectivity. Prostheses with myoelectric technology are connected to muscles at the site of amputation and detect muscle movements. These movements are then electrically translated into specific movements in the prosthetic.
In this manner, certain myoelectric prosthetics can even allow for fully functional digits (12). The second technology is targeted muscle reinnervation (TMR). The newer of the two technologies, TMR takes nerves from the site of amputation and physically moves them to a different site on the body — e.g. the nerve that controls opening of the hand is relocated to the triceps. When the patient wishes to move the limb, an arm for example, the TMR prosthetic senses the electrical signal and sends the information
Image courtesy of U.S. Army
Figure 3: A military veteran wears a myoelectric prosthetic arm, which translates electrical activity of the muscles at the site of amputation into movements.
to a microprocessor in the arm. The result is that the person can consciously control the arm or hand to do tasks like picking up a coin, zipping up a jacket, or holding a bottle with fluidity. However, users of TMR arms must “train” the limb on a daily basis, since electrical signals and electrode placements frequently shift. Consequently, TMR limbs tend to make mistakes and have a limited repertoire of motions. Nonetheless, research into a fully controllable prosthetic is ongoing (13).
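At its simplest, the step from a sensed muscle signal to a prosthetic command can be sketched as a threshold classifier over the signal's envelope. Everything below (signal values, threshold, command names) is a hypothetical illustration, not an actual prosthetic control scheme:

```python
# Toy myoelectric decoder: rectify and average the electrical signal from
# the muscle, then compare against a threshold. Values are invented.

def emg_envelope(samples):
    """Mean absolute value, a common crude measure of muscle activation."""
    return sum(abs(s) for s in samples) / len(samples)

def classify(samples, threshold=0.5):
    """Map a window of signal samples to a prosthetic command."""
    return "close_hand" if emg_envelope(samples) > threshold else "rest"

print(classify([0.05, -0.1, 0.08, -0.02]))   # weak signal  -> rest
print(classify([0.9, -1.1, 1.0, -0.8]))      # strong signal -> close_hand
```

The daily "training" mentioned above corresponds, in this picture, to re-fitting thresholds as the measured signal and electrode contact drift.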
Personal Use and Creating the Robotic Human
Robots that resemble humans are of particular interest to developers today. An important breakthrough in humanoid robotics came with the unveiling of Honda’s ASIMO (Advanced Step in Innovative MObility) in 2002, with the tagline of the “world’s most advanced humanoid robot.” ASIMO was particularly impressive for the fluidity of its motions. Making a pair of smooth-walking legs alone took Honda almost ten years; it took a few more years to add on the torso, arms, and head. Each leg on ASIMO had a hip joint, knee joint, and foot joint. The range of joint movement, center of gravity, and the torques exerted at each joint were closely modeled after humans’. As a result, ASIMO is able to walk stably on uneven ground, when it is pushed, or on stairs and slopes. ASIMO is also able
to push carts, open and close doors, and carry objects and trays. Additionally, ASIMO has intelligent programming. It can see, hear, recognize faces and refer to people by name. It can also detect its environment, recognize moving objects, and even connect to the Internet. Since its introduction, ASIMO has been seen mostly in promotions and displays, but its makers hope it will one day serve as a helper in the office or home (14). While ASIMO is humanoid, its appearance is still robotic. Another side of humanoid robot development is in realism, looking as human as possible. Japanese entertainment company Kokoro has come close in 2010 with its Actroid F, a hyper-realistic android with a face modeled off an actual Japanese woman. Actroid F can move its eyes, mouth, head, and back independently, but even more realistic expressions can be done through telepresence, where the robot mimics the facial expressions and head movements of an operator looking at a camera. Actroid F can only remain in a sitting position. Its makers aim to sell around 50 units for around $110,000 each to museums and hospitals, where they expect Actroid F to serve as receptionists, patient attendants, or guides (15). In general, anthropomorphic robots, whether it’s ASIMO or Roxxxy, are often made for the purpose of providing social interaction to humans. While androids are currently rare objects in civilian homes, many people already use robot technology on a daily basis. One popular application is the robotic vacuum cleaner, which can vacuum without the need for human direction. Toy robots such as Furbies, AIBOs (a robotic dog), and My Real Babies are also relatively common. My Real Baby is particularly advanced, with various facial movements, touch sensors, and the ability to suck its thumb and eat (16). Paro, a robot seal developed by Japanese company AIST, is used for therapy in hospitals and extended care facilities, where it has been shown to soothe elderly or mentally ill patients. 
Paro has five different types of sensors — light, sound, tactile, temperature, and posture — that allow it to understand its environment and the behavior of its owner (17).
Conclusion Over the last few decades, developments in robotics have profoundly shaped the modern world. From car assembly to
surgery to social companionship, robots have become fully integrated into various aspects of human life. Whereas some people are concerned about the ethical and moral issues of using robots, others are optimistic about a future where robots and humans exist harmoniously. Bill Gates’ vision of a “robot in every home” is coming closer to realization.

CONTACT REBECCA XU AT REBECCA.S.XU.15@DARTMOUTH.EDU
References
1. Roxxy, the world’s first life-size robot girlfriend (11 January 2010). Available at http://www.foxnews.com/scitech/2010/01/11/worlds-life-size-robot-girlfriend/ (1 April 2012).
2. B. Gates, A robot in every home. Sci. Am. 296, 58-65 (January 2007).
3. S. Dixon, A brief history of robots and automata. The Drama Review. 48, 16-25 (2004).
4. K. Dowling, What is robotics? (19 August 1996). Available at http://www.cs.cmu.edu/~chuck/robotpg/robofaq/1.html (1 April 2012).
5. A. S. Brown, Industrial robots fall: service robots gain. Mechanical Engineering. 132, 18 (2010).
6. S. Prehn, The evolution of industrialized robots. Machine Design. 83, 44, 46, 48 (11 December 2011).
7. Robot wars. The Economist. 383, 9 (June 2007).
8. M. Mazzetti, E. Schmitt, R. F. Worth. Two-year manhunt led to killing of Awlaki in Yemen (30 September 2011). Available at http://www.nytimes.com/2011/10/01/world/middleeast/anwar-al-awlaki-is-killed-in-yemen.html?pagewanted=all (6 April 2012).
9. J. Markoff. War machines: recruiting robots for combat (27 November 2010). Available at http://www.nytimes.com/2010/11/28/science/28robot.html?pagewanted=all (6 April 2012).
10. E. Sofge. America’s robot army: Are unmanned fighters ready for combat? (18 December 2009). Available at http://www.popularmechanics.com/technology/military/robots/4252643 (6 April 2012).
11. N. G. Hockstein, C. G. Gourin, R. A. Faust, D. J. Terris. A history of robots: from science fiction to surgical robots. Journal of Robotic Surgery. 1, 113-118 (17 March 2007).
12. E. Dundon. 5 major advances in robotic prosthetics. Available at http://news.discovery.com/tech/five-major-advances-robotic-prosthetics.html (7 April 2012).
13. M. Chorost. A true bionic limb remains far out of reach (20 March 2012). Available at http://www.wired.com/wiredscience/2012/03/ff_prosthetics/all/1 (5 April 2012).
14. ASIMO technical information (September 2007). Available at http://asimo.honda.com/downloads/pdf/asimo-technical-information.pdf (9 April 2012).
15. T. Hornyak. Kokoro shows off its latest android Actroid-F (28 August 2010). Available at http://news.cnet.com/8301-17938_105-20014981-1.html (8 April 2012).
16. S. Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other (Basic Books, New York, 2011).
17. Paro therapeutic robot (2012). Available at http://www.parorobots.com/ (10 April 2012).
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
CHEMISTRY
The History and Chemistry of Explosives

RUI SHU
For many, it is easy to associate explosives with images of the outlandish use of TNT or dynamite, popularized by cartoons (perhaps Wile E. Coyote comes to mind). Yet, while most are familiar with explosives’ role in American pop culture, an understanding of the science behind explosive chemistry is not as common. Several intriguing questions may be posed: What constitutes an explosion? What causes an explosion? How do explosions actually work? Perhaps by the end of this paper, we might just find ourselves a little more impressed by the remarkable chemical processes behind the explosive wonders of exothermic, gas-producing reactions.
Explosions also release a considerable amount of gas very rapidly (1). Gay-Lussac’s Law states that the pressure of a fixed amount of gas at constant volume is proportional to its temperature. As the gas produced by the reaction heats up from the reaction’s exothermic nature, Gay-Lussac’s Law dictates a proportional temperature–pressure increase that is ultimately relieved in the form of an explosion (2).
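The proportionality in Gay-Lussac's Law can be checked with a one-line calculation. This is a generic ideal-gas sketch with made-up numbers, not data for any particular explosive:

```python
# Gay-Lussac's Law: P/T is constant for a fixed amount of gas at fixed
# volume, so P_final = P_initial * (T_final / T_initial).
# The temperatures and pressure below are hypothetical.

def pressure_after_heating(p_initial_atm: float,
                           t_initial_k: float,
                           t_final_k: float) -> float:
    """Return the final pressure, assuming volume and amount of gas are fixed."""
    return p_initial_atm * (t_final_k / t_initial_k)

# Gas generated at 1 atm and 300 K, then heated to 3000 K by the reaction:
print(pressure_after_heating(1.0, 300.0, 3000.0))  # -> 10.0 (atm)
```

A tenfold jump in absolute temperature means a tenfold jump in pressure, which a confined reaction vessel relieves explosively.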
Basic History and Chemistry
Black powder was one of the first explosives to be actively analyzed from a chemical perspective. A mixture of potassium nitrate, charcoal, and sulfur, black powder undergoes a highly reactive combustion which produces large quantities of heat and gas in the process (3). The charcoal, however, produces copious amounts of smoke, making black powder an inherently messy explosive. Smokeless powder quickly proved a safer, less messy alternative to black
Basic Thermodynamics
There are some important properties common among all chemical explosives. Explosive reactions are exothermic and spontaneous, giving off a significant amount of heat, and are self-sustaining once initiated.
Image courtesy of Rui Shu
Figure 1: Creation of nitronium ion.
Image courtesy of Rui Shu
Figure 2: First nitration step of nitroglycerin synthesis.
Image courtesy of Rui Shu
Figure 3: First nitration step of nitrocellulose synthesis.
powder as an explosive. By the 1840s, two particularly promising smokeless chemicals had been discovered: nitroglycerine and nitrocellulose. Colloquially called “guncotton,” nitrocellulose was first produced when Henri Braconnot treated cotton with nitric acid in 1833 (4). However, it was not until 1846 that two chemists, Christian Friedrich Schönbein and F. J. Otto, independently optimized the creation of nitrocellulose by placing the cotton into a mixture of nitric acid and sulfuric acid (4, 5). Nitrocellulose is simple in structure and aptly named; the compound consists only of cellulose with nitro-groups (-NO2) attached via the oxygen atoms in cellulose. The simplicity of its synthesis reflects its chemical makeup: one need only add the appropriate amounts of nitric acid, sulfuric acid, and cellulose in the right solvents to yield the explosive substance. The addition of sulfuric acid initiates nitration of the compound of interest. As a strong Brønsted–Lowry acid, sulfuric acid forces an additional proton upon nitric acid, causing nitric acid’s -OH group to become an -OH2 group. The -OH2 group is simply a water molecule, which is relatively low-energy and therefore inclined to leave the protonated nitric acid. The resulting NO2+ molecule is called a nitronium ion. Flanked by two highly electronegative oxygen atoms, the nitrogen loses most of its electron density and gains a partial positive charge. This nitronium ion now falls prey to one of cellulose’s oxygen atoms, whose negatively charged lone pairs attack the highly positively charged nitrogen center of the nitronium ion. After this process of nitration occurs three times on each glucose monomer of cellulose, the highly flammable nitrocellulose is produced (6, 7). Since the nitration results in an oxygen–nitrogen bond, the process is called O-nitration (6). Unlike high-order explosives, however, the combustion of nitrocellulose is not fast enough to be considered detonation.
An explosion involving nitrocellulose requires combustion in the presence of external oxygen, producing only subsonic pressure waves. Nitrocellulose is therefore said to deflagrate rather than detonate (8, 9).
Nitroglycerine was first synthesized by Ascanio Sobrero, an Italian chemist at the University of Turin (10). Like nitrocellulose, nitroglycerine can be thought of as a glycerol derivative bearing nitro-groups. The addition of sulfuric acid and nitric acid to glycerol leads to the O-nitration of glycerol, whereby nitro-groups replace the protons of glycerol’s hydroxyl groups. Unlike nitrocellulose, however, nitroglycerine’s molecular makeup allows it to decompose exothermically into gases without an external oxygen source; rather than deflagrating, nitroglycerine detonates. Unfortunately, the combustion of nitroglycerine has a very low activation energy barrier, which makes nitroglycerine susceptible to explosion upon physical contact and thus impractical for use in most contexts (11). By 1867, however, Alfred Nobel was able to tame the shock sensitivity of nitroglycerine to produce dynamite (12). While nitroglycerine may have been the first practical explosive stronger than black powder, dynamite was the first explosive that was also safe and manageable. The chemical marvel of dynamite lies not in its reaction (it is still nitroglycerine), but in its successful use of absorbents to stabilize nitroglycerine. Alfred Nobel found that
Figure 4: Synthesis of 2,4,6-trinitrotoluene.
Image courtesy of Rui Shu
liquid nitroglycerin is effectively absorbed by diatomaceous earth, a highly porous fossil product of diatoms. The absorption of the explosive into this new medium successfully defused the sensitivity of nitroglycerine and made the production and transportation of dynamite much more practical than that of nitroglycerine (13). Due to dynamite’s relative stability, it came to replace pure nitroglycerine in high-order explosives. In 1863, several years before Nobel’s invention of dynamite, a less appreciated molecule was prepared by the German chemist Julius Wilbrand (14). This solid had a yellow hue and was originally intended as a dye (15). The material was prepared in three nitration steps (a recurring theme in explosive chemistry); each time, a nitro-group would attach to the benzene ring bearing a single methyl group (6). When introduced to sulfuric acid in conjunction with nitric acid, the benzene temporarily disrupts its own stable, aromatic system so that one of its pi electron pairs can attack the positive nitrogen center of the nitronium ion. As soon as this happens, the benzene ring reclaims the two electrons that it shares in a covalent bond with the less electronegative hydrogen atom. Relieved of this bond, the methylated benzene
ring regains the two pi electrons it expended in the electrophilic attack and returns to its calmer, aromatic self, now with a nitro-group attached (7). Because this process effectively substitutes a proton on the aromatic ring with an electron-deficient group (in this case, a nitro-group), the reaction is called electrophilic aromatic substitution. In the first nitration step, the nitro-group selectively attaches to the carbon directly opposite the one bearing the methyl group, despite the presence of four other possible binding sites on the methylbenzene (toluene) ring. This regioselectivity is due to the methyl group, which directs incoming substituents to particular positions on the ring (7). The same directing influence of the methyl group determines the placement of the remaining two nitro-groups, positioning the three nitro-groups on alternating carbons and creating the molecule 2,4,6-trinitrotoluene, or TNT. TNT exhibits several important differences from the nitroglycerin in dynamite. Unlike nitroglycerin, the detonation of TNT has an unusually high activation energy barrier (16). This stability makes the transportation of TNT safe, with little chance of accidental detonation. The same stability, however, also means that TNT is much more difficult to detonate. In fact, its insensitivity prevented it from being seriously considered for use as an explosive until the early twentieth century.
Making an Explosive Explosive
As shown, the most famous, and often the most destructive, explosives frequently contain nitro-groups embedded within their molecular structures. The significance of the nitro-groups is twofold. First, the nitro-groups provide a source of nitrogen, which is reduced to inert (highly stable) nitrogen gas during the course of the reaction. Compared to its remarkable stability as N2 gas, nitrogen sits at a significantly higher energy level in its oxidized state within the nitro-group. The transition of nitrogen from this high energy state to a much lower one releases a great deal of heat, making the enthalpy of reaction very large and very negative. Nitro-groups also provide a source of oxygen with which the hydrocarbon parts of the molecule may react. This allows for the combustion of the hydrocarbon, creating carbon dioxide and water without the need for contact
Image courtesy of Rui Shu
Figure 5: First N-nitration step of RDX synthesis.
with an external source of oxygen gas. The fact that combustion can occur through an intramolecular rearrangement rather than an intermolecular reaction makes the combustion significantly more likely and, therefore, faster. By exploiting these concepts, chemists made ever stronger explosives over the twentieth century. RDX, HMX, and HNIW emerged as part of a new batch of explosives that made use of nitroamines. These compounds were far more powerful than other explosives of their time, capable of incredibly fast detonations. The synthesis of nitroamines involves the N-nitration of an already nitrogenous molecule, placing the nitro-groups on the nitrogen atoms. HNIW is the strongest of the three, followed by HMX and RDX. The trend is readily understood through the explosives’ densities and their molar ratios of C, H, O, and N. HMX has the same empirical formula as RDX, but differs in that it contains an additional nitroamine group in its ring. This increased density allows a greater amount of HMX to be used per unit volume, creating a larger reaction. The fact that HMX is an eight-membered ring (as opposed to RDX’s six-membered ring) also introduces slightly more angle strain, making HMX a higher-energy molecule. HNIW, however, differs in the molar ratio of each element present. The proportion of oxygen within HNIW allows complete combustion without need of an external oxygen source, making HNIW even more energetic (17). Pushing the limits of explosives further still are the molecules heptanitrocubane and octanitrocubane. These two molecules currently rank highest in terms of chemical explosive power, but involve impressively complex pathways of synthesis. Heptanitrocubane and octanitrocubane are derivatives of cubane, which is itself difficult to produce. While insensitive to shock, these molecules take advantage of their enormous carbon-ring strain to rapidly collapse and combust the entire structure. 
Octanitrocubane is unique in that it does not contain any hydrogen at all; its combustion produces only carbon dioxide and nitrogen gas. With eight nitro-groups crowded around the small carbon cube, it is no wonder that it is one of the most powerful explosives to date. However, while octanitrocubane outperforms heptanitrocubane in terms of enthalpy, heptanitrocubane compensates with higher density. Its physical structure has made it empirically easier to pack heptanitrocubane than octanitrocubane, making heptanitrocubane slightly more powerful as an explosive (18).
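The internal oxygen bookkeeping described above is commonly summarized by a figure of merit called the oxygen balance, though the article does not name it. Below is a minimal sketch in Python; the formula is the conventional one for a molecule CcHhNnOo (nitrogen leaves as N2 and consumes no oxygen), the molar masses are rounded, and the function name is our own.

```python
def oxygen_balance(c, h, n, o, molar_mass):
    """Percent oxygen balance of an explosive with formula C_c H_h N_n O_o.

    Negative values mean the molecule lacks the internal oxygen needed to
    burn all carbon to CO2 and all hydrogen to H2O; zero means it carries
    exactly enough, as octanitrocubane does.
    """
    # Nitrogen (n) leaves as N2 and consumes no oxygen, so it drops out.
    return 1600.0 * (o - 2 * c - h / 2) / molar_mass

# TNT, C7H5N3O6: strongly oxygen-deficient
print(round(oxygen_balance(7, 5, 3, 6, 227.13), 1))    # -74.0
# Nitroglycerine, C3H5N3O9: slight oxygen surplus
print(round(oxygen_balance(3, 5, 3, 9, 227.09), 1))    # 3.5
# Octanitrocubane, C8N8O16: perfectly balanced
print(round(oxygen_balance(8, 0, 8, 16, 464.13), 1))   # 0.0
```

The numbers track the qualitative claims in the text: nitroglycerine’s positive balance lets it detonate without external oxygen, while octanitrocubane’s zero balance reflects a combustion that yields only carbon dioxide and nitrogen gas.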
The Offshoots of Explosive Chemistry
Despite their initial designation as explosives, many of these molecules have gone on to fulfill much greater and more diverse roles. Nitroglycerin proved to be an effective vasodilator and has since been incorporated into several heart-related medications (13). Nitrocellulose exhibits a high affinity for proteins and nucleic acids. This property greatly aids the field of biotechnology, which now uses nitrocellulose membranes in techniques such as western blotting to assay proteins (19). The invention of dynamite also prompted Alfred Nobel to establish the Nobel Prize in an effort to contribute more positively to the world of science (20).
A Reflection on Chemical Explosives
Once we look past the “explosives” label, we may come to appreciate that these substances are no more intimidating than any other chemical. “Explosive” molecules obey the same chemical and physical rules as any other molecule or compound, and many have the potential to create positive effects beyond the realm of explosive chemistry.
CONTACT RUI SHU AT RUI.SHU.15@DARTMOUTH.EDU
References
1. Office for Domestic Preparedness, Explosive Devices. Available at http://cryptome.org/ieds.pdf (1 April 2012)
2. D. W. Oxtoby, H. P. Gillis, A. Campion, Principles of Modern Chemistry (Saunders College Pub., Philadelphia, ed. 6, 2008).
3. V. Summers, Gunpowder – The Chemistry Behind the Bang (30 September 2010). Available at http://voices.yahoo.com/gunpowder-chemistrybehind-6884202.html?cat=15 (3 April 2012)
4. D. Williams, Materials Compatability. Available at http://www.shsu.edu/~chm_dlw/index.html (28 March 2012)
5. Encyclopaedia Britannica, Nitrocellulose (2012). Available at http://www.britannica.com/EBchecked/topic/416152/nitrocellulose (1 April 2012)
6. T. Urbanski, Chemistry and Technology of Explosives (Pergamon Press, New York, Vol. 1, 1964).
7. K. P. Vollhardt, N. E. Schore, Organic Chemistry: Structure and Function (W. H. Freeman and Co., New York, 2003).
8. G. Berwick, The Executives Guide to Insurance and Risk Management (QR Consulting, Haberfield, 2001).
9. Exponent Engineering and Scientific Consulting, Explosions, Deflagrations, & Detonations (2012). Available at http://brochures.exponent.com/PDF.aspx?capability=explosions (1 April 2012)
10. L. J. Ignarro, Proc. Natl. Acad. Sci. U.S.A. 99 (12), 7816-7817 (2002).
11. H. Henkin, R. McGill, Industrial and Engineering Chemistry 44 (6), 1391-1395 (1952).
12. K. Fant, Alfred Nobel: A Biography (Arcade Pub., New York, 1993).
13. L. C. Holmes, F. J. DiCarlo, Journal of Chemical Education 48 (9), 573-576 (1971).
14. B. Stenuit, L. Eyers, S. E. Fantroussi, S. N. Agathos, Environmental Science and Bio/Technology 4, 39-60 (2005).
15. International Chemical Safety Cards, 2,4,6-Trinitrotoluene (1993). Available at http://hazard.com/msds/mf/cards/file/0967.html (1 April 2012)
16. G. Hill, J. Holman, Chemistry in Context (Nelson Thornes Ltd, UK, 2000).
17. U.S. Army Materials Command, Elements of Armament Engineering (1964). Available at http://www.dtic.mil/dtic/tr/fulltext/u2/830272.pdf (1 April 2012)
18. P. E. Eaton, M. Zhang, Propellants, Explosives, Pyrotechnics 27, 1-6 (2002).
19. R. Jahn, W. Schiebler, P. Greengard, Proc. Natl. Acad. Sci. U.S.A. 81, 1684-1687 (1984).
20. Nobelprize.org, Alfred Nobel. Available at http://www.nobelprize.org/alfred_nobel/ (1 April 2012)
ALLERGIES
Allergies: Immune System Turned Extreme Why our Immune System Turns Against Us YOO JUNG KIM
An allergy is generally described as “an exaggerated immune response or reaction to substances that are generally not harmful” (1). These substances can include pollens, specific foods, dusts, molds, latex, and insect stings (1, 2, 3). To those who suffer from allergies, the consequences of consuming the wrong food or being stung by the wrong insect can range from uncomfortable to life threatening. Symptoms can be as mild as itchiness or as extreme as breathing difficulties, vomiting, and death (4).
Overview of the Immune System
The immune system protects an organism from a wide spectrum of foreign pathogens, a feat requiring a tremendous degree of flexibility. In particular, the mammalian immune system is composed of three physiological components: the external barriers, the innate immune system, and the adaptive immune system. External barriers—the first level of protection against pathogens—are composed primarily of physical obstacles, such as epithelial cells, and chemical barriers such as sweat, tears, and saliva, all of which contain anti-microbial compounds (5). If a pathogen successfully breaches this first level of defense, it must then face the components of the innate immune system: a host of non-specific mechanisms such as phagocytic cells (neutrophils, macrophages), natural killer cells, the complement system, and cytokines, all of which act to eliminate a diverse array of pathogens and infections. Adaptive immunity is mediated by lymphocytes (B cells and T cells) and is stimulated by exposure to infectious agents. In contrast to innate immunity, adaptive immunity is characterized by specificity for distinct macromolecules. A striking feature of the adaptive immune system is its “memory,” which enables the immune system to respond with greater vigor to future exposures to the same pathogen (5). Both the innate and adaptive systems are composed of a vast spectrum of
molecules, cells, and organs that function collectively to prevent illness, but not all instances of immune response are beneficial. In certain cases, the immune system can react excessively to a potential threat, leading to acute or systemic cellular damage known as hypersensitivity (5).
Overview of Hypersensitivity
In the 1960s, two British immunologists, Robin Coombs and Philip Gell, created a classification system for sorting undesirable immunological reactions into four categories, distinguished by the “principal immunologic mechanism” and the time required for adverse reactions to occur. In 1963, Gell and Coombs published their findings in “Clinical Aspects of Immunology,” which soon became the authoritative textbook on medical immunology, running to five editions in thirty years (6). One of the adverse immune responses noted by Gell and Coombs was hypersensitivity. Hypersensitivity is a condition in which immune responses target self-antigens or result from uncontrolled, excessive, or inappropriate responses against foreign antigens, such as microbes and allergens. All hypersensitivity reactions require a pre-sensitized (immune) state of the host. Allergies—under the Gell and Coombs classification—fall under Type I hypersensitivity, also known as immediate or anaphylactic hypersensitivity (5).
Biological Mechanisms of Allergy
All hypersensitivity reactions, including allergies, require prior sensitization and memory, processes that involve components of the innate and adaptive immune systems. During the initial exposure, would-be allergens are captured, processed, and presented by antigen-presenting cells (APCs) to activate allergen-specific T-helper cells. These cells then enable B cells to differentiate and produce antibodies specific to the allergen (5). The most obvious symptoms of allergies may include urticaria (hives), eczema (rashes), conjunctivitis (inflammation of the eyelids), and asthma. As the name “immediate hypersensitivity” suggests, this reaction occurs rapidly, usually fifteen to twenty minutes from the time of exposure to the antigen. The initial symptoms may be followed by a second, delayed onset of symptoms after ten to twelve hours (7). Biologically, Type I hypersensitivity reactions are mediated by a specific class of antibody called immunoglobulin E (IgE). In a normal immune response, IgE protects against large pathogens such as helminths and protozoans by binding to IgE-specific receptors called FcεRI, located on mast cells, basophils, lymphocytes, and monocytes. Upon binding of IgE, these cells release biochemical mediators, such as histamine, prostaglandin, slow-reacting substance of anaphylaxis (SRS-A), and leukotrienes, which can cause contraction of smooth muscle, increased vascular permeability, and increased mucous secretion. The sum of these reactions may lead to “anaphylactic shock,” a potentially deadly, whole-body hypersensitivity reaction that includes dilation of blood vessels, massive edema, hypotension, and cardiovascular collapse (5, 8). While a specific allergy cannot be inherited, one’s predisposition to allergic disease is correlated with genetic factors. If one parent has any type of allergy, “chances are 1 in 3 that each child will have an allergy. If both parents
Image courtesy of Wolfgang Ihloff retrieved from http://commons.wikimedia.org/wiki/ File:Allergy_skin_testing.JPG (accessed 10 May 2012)
Figure 1: Allergy skin testing.
Image courtesy of SariSabban retrieved from http://en.wikipedia.org/wiki/File:The_Allergy_Pathway.jpg (accessed 10 May 2012)
Figure 2: Simplified diagram showing key events that lead to allergy initiation.
have allergies, it is much more likely (7 in 10) that their children will have allergies” (3).
Allergy Testing and Treatment
The most prevalent method of testing for allergies is through the skin, otherwise known as the prick test. To check for the presence of allergies, a medical provider places minute doses of suspected allergen samples under the skin. After a few minutes, the skin is observed for symptoms of inflammation (3). While there is no known cure for allergies, current over-the-counter and prescription medications can inhibit the production or release of the inflammatory mediators that cause the symptoms most commonly associated with allergies. These include non-steroidal anti-inflammatory drugs (NSAIDs) such as aspirin and indomethacin, and synthetic steroids such as glucocorticoids. Other potent drugs include those that inhibit mediator action, such as histamine receptor antagonists (antihistamines) like Benadryl and Dramamine, both of which prevent sneezing, runny nose, and itchy eyes (5). For potentially long-lasting relief, symptoms of allergies may be treated via immunotherapy. A patient may receive multiple subcutaneous injections of increasing doses of antigen. This enhances the immunoglobulin G (IgG) mediated immune response, boosting the generation of regulatory T cells. For reasons not yet clear, increasing levels of IgG coincide with decreased levels of IgE and allergic symptoms.
In potentially life-threatening allergic reactions, epinephrine can be injected to relax the muscles in the airways and tighten the blood vessels, thereby preventing asphyxiation and circulatory collapse (10).
Social and Environmental Trends in Allergy
The prevalence of allergies has reached pandemic levels in industrialized countries. In America, total cases of allergies have increased since the 1980s in virtually all demographics, making allergies one of the most common chronic medical conditions in the country (11). The costs accrued by treatment for airborne allergens such as dust and pollen total $21 billion annually in the United States (3). Researchers have postulated a correlation between the longevity and severity of allergic symptoms and climate change. Systemic increases in temperature, carbon dioxide levels, and precipitation have enabled the growth of various species of plants known to cause and exacerbate allergic symptoms (1). Studies also suggest that climate-related temperature changes will increase the potency and concentration of airborne allergens, lengthening the allergy season and increasing the severity of allergic symptoms (1). Additionally, climate change may enable the migration of allergy-causing plants to new environments in which they can proliferate more robustly.
Our understanding of the biological pathways of allergic diseases continues to grow, but some of the most important questions have yet to be answered. While several hypotheses regarding the origin of allergies have been raised, scientists have yet to reach a consensus on the evolutionary persistence of allergies (12). Furthermore, despite several conjectures, the steady increase in rates of allergic disease in industrialized countries remains largely unexplained (12). While commonplace, allergies can present a significant detriment to the overall lifestyle of individuals. The increase in cases of allergies in virtually all American demographics raises social, environmental, and public health concerns. Researchers from a vast spectrum of fields must work in tandem to address the prevalence of allergic diseases in industrialized countries.
CONTACT YOO JUNG KIM AT YOO.JUNG.KIM.14@DARTMOUTH.EDU
References
1. Environmental Protection Agency Research & Development (February 2011). Available at http://www.epa.gov/ord/gems/scinews_aeroallergens.htm (April 2012).
2. Allergies. PubMed Health (2011). Available at http://www.ncbi.nlm.nih.gov/pubmedhealth/PMH0001815/ (April 2012).
3. Allergy Facts and Figures. Asthma and Allergy Foundation of America. Available at http://www.aafa.org/display.cfm?id=9&sub=30 (April 2012).
4. C. Hadley, EMBO Reports 7, 1080-1083 (2006).
5. A. Abbas, Basic Immunology: Functions and Disorders of the Immune System (Saunders Elsevier, New York, ed. 3, 2011).
6. D. H. Pamphilon, British J. of Haematology 137, 401-408 (2007).
7. A. Ghaffar, University of South Carolina School of Medicine (2010). Available at http://pathmicro.med.sc.edu/ghaffar/hyper00.htm (April 2012).
8. A.D.A.M. Medical Encyclopedia. PubMed Health (2010). Available at http://www.ncbi.nlm.nih.gov/pubmedhealth/PMH0001847/ (April 2012).
9. Allergy Shots. WebMD. Available at http://www.webmd.com/allergies/guide/shots (April 2012).
10. Epinephrine Injection. MedlinePlus (2010). Available at http://www.nlm.nih.gov/medlineplus/druginfo/meds/a603002.html (April 2012).
11. Air Pollution. Asthma and Allergy Foundation of America. Available at http://www.aafa.org/display.cfm?id=8&sub=16&cont=38 (April 2012).
12. N. W. Palm, Nature 484, 465-471 (2012).
Extremely Interesting Animal Facts The Exciting Physiologies of Different Animals SARA REMSEN
Ever since I was young, I have been collecting fun facts about animals. By the time I was ten, I could name more than 200 mammals off the top of my head at the dinner table. Although my parents raised their eyebrows at me, they supported my voracious curiosity. They gave me Marine Mammal Biology for Christmas and The Illustrated Veterinary Guide for my birthday. I absorbed all the animal facts I could find and quoted them back to my surprised parents, finding some particularly interesting ones. Although we like to think the human species is the pinnacle of evolutionary success, I want to share some extraordinary facts that remind us of the many animal species whose abilities far exceed our own.
Sea Dragons: Creatures You Never Imagined
I like to believe that I knew about sea dragons before anyone else. In sixth grade, I chose to write my marine biology report on these understudied creatures when my classmates were writing reports on starfish and dolphins. Relatives of sea horses and pipefish, sea dragons are members of the family Syngnathidae that live in the waters off the coast of Perth, Australia. There are two species, weedy and leafy, named for their habitat and corresponding camouflage.
Sea dragons have long snouts and bony rings around their bodies, with leafy or weedy appendages. Their small, transparent dorsal and pectoral fins can propel them in the water, but sea dragons spend most of their time drifting in patches of seaweed. Sea dragons survive on a diet of Mysid shrimp and amphipods (6, 10). Like male seahorses, male sea dragons are responsible for child rearing. When a male sea dragon is ready to mate, his tail turns bright yellow, and the female deposits bright pink eggs onto the brood patch on the underside of his tail. The male carries the eggs for about four to six weeks, at which point he releases the fully–formed baby sea dragons.
The Wolverine Newt
Imagine you are a newt: you have the short stubby legs of a primitive amphibian, and it is impossible to outrun a predator. Most salamanders and newts have avoided extinction by either hiding or advertising their toxicity through bright warning colors. The Spanish ribbed newt takes an entirely different approach to survival. When confronted with a predator, it sticks its pointed ribs outward through its skin, exposing poison barbs (9). First noticed by a natural historian in 1879, this salamander was the subject of recent research that revealed that the defense comes in two parts. When the newt is threatened, it
Image courtesy of Derek Ramsey retrieved from http://en.wikipedia.org/wiki/File:Leafy_Seadragon_Phycodurus_eques_2500px_PLW_edit.jpg (accessed 10 May 2012)
Figure 1: A leafy sea dragon in the kelp forests off the coast of Perth, Australia.
Image courtesy of Richard Ling retrieved from http://en.wikipedia.org/wiki/ File:Phyllopteryx_taeniolatus1.jpg (accessed 10 May 2012)
Figure 2: The relative of the leafy sea dragon, the weedy sea dragon.
secretes a noxious substance on its skin and then contorts its body to force the sharp tips of its ribs through its skin. The ribs become fierce, poison-tipped barbs, deterring attackers. These spear-like ribs must break through the newt’s body wall every time it evokes the defense. However, as is characteristic of many amphibians, the Spanish ribbed newt possesses rapid tissue regeneration capabilities that allow it to recover from its puncture wounds (9).
Antifreeze Blood
Fish are ectotherms: the external environment determines their internal body temperature. This form of thermoregulation means that fish are the same temperature as the rivers and oceans they inhabit—even when the water is below freezing. Antarctic notothenioids are fish that inhabit the frigid waters of the Antarctic, where temperatures are constantly below 0 degrees Celsius. To keep ice crystals from forming in their blood, fish of the suborder Notothenioidei have unique antifreeze proteins in their circulatory systems. Unlike commercial antifreezes, these glycoproteins do not lower the freezing point of water. Instead, notothenioids’ antifreeze glycoproteins (AFGPs) bind to ice crystals as soon as they form, blocking other water molecules from binding and thereby preventing the development of a larger crystal. There are several structural varieties of AFGPs that have convergently evolved in several families of cold-dwelling fish. Recent research suggests that these
Image courtesy of Uwe kils retrieved from http://en.wikipedia.org/wiki/File:Icefishuk.jpg (accessed 10 May 2012)
Figure 3: An icefish off the coast of Antarctica.
proteins evolved from mutations in pancreatic trypsinogens and adapted for survival in frigid water (5).
Eat a Bedtime Snack or You’ll Die
Hummingbirds are among the only birds capable of hovering, thanks to their complex, figure-eight wing strokes. However, this agility comes at the cost of a metabolism that demands a near-constant supply of food. The aerial performance of hummingbirds approaches the upper bounds of oxygen consumption and muscle power input for all vertebrates (4). While nectar is popularly perceived as the food of choice for hummingbirds, insects actually provide their primary source of nutrition. Fasting becomes a problem at night, when hummingbirds need to sleep. In order to survive the night without starving to death, hummingbirds enter a state called torpor (8). When hummingbirds enter torpor, their body temperature drops and their metabolism slows. A cold hummingbird body has lower energy needs than a warm, flying one and burns fewer calories. Because hummingbirds cannot sleep and eat at the same time, entering torpor at night enables them to survive until their next meal in the morning.
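The energy savings of torpor can be roughed out with the standard Q10 rule from comparative physiology, under which metabolic rate falls by a fixed factor (often around 2–3) for every 10 °C drop in body temperature. The sketch below uses illustrative numbers, not measured hummingbird values.

```python
def torpid_metabolic_rate(active_rate, t_active, t_torpor, q10=2.5):
    """Estimate metabolic rate after cooling, using the Q10 rule:
    the rate changes by a factor of q10 per 10 degC change in body
    temperature. Inputs here are illustrative, not measured values."""
    return active_rate * q10 ** ((t_torpor - t_active) / 10.0)

# A hypothetical bird cooling from 40 degC to 20 degC with Q10 = 2.5
# burns only 16% of its normothermic calories per unit time.
print(round(torpid_metabolic_rate(1.0, 40.0, 20.0), 2))  # 0.16
```

Even this crude estimate shows why a 20-degree drop in body temperature can stretch a hummingbird’s fuel reserves through the night.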
Image courtesy of Derek Ramsey retrieved from http://en.wikipedia.org/wiki/File:Leafy_ Seadragon_Phycodurus_eques_2500px_PLW_edit.jpg (accessed 10 May 2012)
Figure 6: Hummingbirds must constantly forage for nectar and insects throughout the day to maintain their high metabolism.
Image courtesy of Peter Halasz retrieved from http://en.wikipedia.org/wiki/File:Pleorodeles_waltl_crop.jpg (accessed 10 May 2012)
Figure 4: The Spanish ribbed newt possesses sharp ribs that can puncture through its skin as a defense mechanism, exposing poisonous barbs to predators.
Mammals vs. Reptiles: Locomotion and Respiration
Reptiles and amphibians still retain the ancient form of locomotion practiced by our fishy ancestors: lateral undulation. Reptiles swing opposite pairs of their legs forward, their bodies scrunching from side to side as they move. Unfortunately for these reptiles, the muscles that control their legs also control their lungs (3). As reptiles flex sideways, they compress one lung, shunting air between their two lungs rather than expelling old air and inhaling fresh air. As a result, lizards must always travel in a run-stop-run pattern. Mammals, unlike reptiles, have diaphragms. The diaphragm is a muscle that contracts to create a vacuum inside the chest cavity, thereby causing the lungs to expand. Because the diaphragm is independent of the muscles that power locomotion in mammals, we are able to breathe and run at the same time. High breathing rates enable a greater oxygen intake and stronger, more frequent muscle contractions. Perhaps the diaphragm is a strong component of mammals’ evolutionary success; if migration was quick and easy, our rodent-like, primitive ancestors could travel more easily to find new resources.
Colors We Can’t See
In our eyes, we have two types of cells, called rods and cones, that send light information to our brain. Rods are more sensitive to stimulation than cones and are found mostly at the edge of the retina, where they contribute to the motion sensitivity of our peripheral vision. Cones give our world color. There are different types of cones, each containing a pigment that is sensitive to a particular wavelength of light. In humans, there are three types of cones: red, green, and blue. These are the same RGB colors of the LEDs in lampposts, stoplights, and street signs that mix to produce the spectrum of colors we are familiar with. Humans are
Image courtesy of Bjorn Christian Torrissen retrieved from http://en.wikipedia.org/wiki/ File:Reindeer-on-the-rocks.jpg (accessed 10 May 2012)
Figure 5: Caribou can migrate huge distances between patches of forage on the tundra, in part because it is so easy for them to breathe and move at the same time.
Image courtesy of Paul Hirst retrieved from http://en.wikipedia.org/wiki/File:Anole_Lizard_Hilo_Hawaii_edit.jpg (accessed 10 May 2012)
Figure 7: An anole perches on a branch to catch its breath.
thus classified as trichromats (7). When one of these cone types is non-functioning, it causes vision distortions, such as red-green colorblindness. Birds are tetrachromats: they have an extra type of cone and can see into the ultraviolet (UV) spectrum (7, 1). Most dinosaurs had four cone types, but mammals lost two of them when they became nocturnal and had little use for color vision. When mammals diversified later in their evolution, some regained a third cone sensitive to green wavelengths, completing our RGB set (7). Others did not and remained dichromats. Birds, being phylogenetically close to dinosaurs, retained their four cones, which are sensitive to UV wavelengths. Birds have another jump on mammals: specialized oil droplets in birds’ cones
narrow the spectral sensitivity of the pigments, enabling birds to distinguish more colors than with pigments alone (7). Recent studies suggest that bird plumage is even more colorful than we can imagine (1, 2). The more I learn about animal adaptations and evolution, the more the complexity of the world amazes me. Although I envy the naturalists of the 1800s who stepped on a new species in their backyard and went on to publish hundreds of papers about their findings, I know science is far from complete and much has yet to be discovered.
1. A. T. D. Bennett, I. C. Cuthill, Vision Res. 34(11), 1471-8 (1994). 2. D. Burkhardt, E. Finger, Naturwissenschaften 78(6), 279-80 (1991). 3. D. R. Carrier, Paleobiology 13, 326-341 (1987). 4. P. Chai, R. Dudley, Nature 377, 722-25 (1995). 5. L. Chen, A. L. DeVries, C. C. Cheng, PNAS 94(8), 3811-3816 (1997). 6. R. M. Connolly, A. J. Melville, J. K. Keesing, Marine and Freshwater Research 53, 777-780 (2002). 7. T. A. Goldsmith, M. C. Stoddard, R. O. Prum, Scientific American, 68-75 (2006). 8. O. P. Pearson, The Condor 52(4), 145-152 (1950). 9. M. Walker, “Bizarre newt uses ribs as weapons,” BBC News (21 August 2009; accessed 16 March 2011). 10. N. G. Wilson, G. W. Rouse, Zool. Scr. 39(6), 551-8 (2010).
CONTACT SARA REMSEN AT SARA.E.REMSEN.12@DARTMOUTH.EDU
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
Sequencing the Microbial World The Significance of Metagenomics AARON KOENIG
Microbes have been pioneers in molecular biology since the field’s inception. Discoveries made in microbial models are frequent in the scientific canon, from Hershey and Chase’s blender experiment in 1952, which indicated that DNA, not protein, is the heritable material of T2 bacteriophages, to the first examples of gene regulation in the metabolic pathways of Escherichia coli for lactose and tryptophan, discovered by Jacques Monod, François Jacob and André Lwoff. More recently, the synthesis of a new strain of the bacterium Mycoplasma mycoides by the Craig Venter Institute in 2010 was hailed as a technical milestone in synthetic biology. The same advances in DNA sequencing that enabled such feats also underlie the field of metagenomics. While bacteria and phages have provided a compact framework for researchers, those who seek to study the organisms for their own sake face the long-standing pitfalls of classical microbiology. For instance, only half of the 32 divisions of the domain Bacteria known to science in 2003 can be grown in culture, a prerequisite to any laboratory investigation (1). The limited subset of microbes available for study is examined ex situ, without biological context. Fortunately, the emerging technology of metagenomics permits the sequencing of entire microbial communities, providing a new investigatory model for studying microbes. Already, metagenomics has yielded insights into how changes in environments can affect microbial populations.
Background on Metagenomic Techniques Metagenomics is the sequencing of environmental DNA samples, with or without prior amplification (2). The first steps in metagenomic studies are sampling and filtration to separate microbes (i.e. microbial Eukarya, Archaea, Bacteria) from other organisms collected from the environment. The information about the environment in which the sample was collected is categorized as metadata and is crucial to the interpretation of results
(2). An estimate of the number of taxa present in the sample – the term “species” is avoided due to its limited applicability in microbes – can be obtained by analysis of variable regions of DNA that encode RNA subunits of ribosomes (rRNA). Bacteria and Archaea have different ribosomes than do Eukarya, so analysis of ribotypes is performed separately on prokaryotic 16S rRNA and eukaryotic 18S rRNA (2). Although rRNA sequencing has long been in use, only recently has metagenomics come to encompass full sequencing of an environmental sample. Second-generation sequencing technology, including rapid pyrosequencing, sequences DNA in small stretches of <1000 base pairs (2). The challenge is then to assemble contiguous fragments of DNA without generating hybrids between closely matching, but discontinuous, DNA (2). Assembled sequences are then compared against sequenced genomes for homology, providing information on the origin and function of genes (2). Advances in assembly have led to the ability to extract whole genomes of abundant species from multispecies samples (3). Most of the DNA present in environmental samples, however, is fragmented to begin with, and metagenomic analysis is necessary to analyze and interpret the multitude of short assemblies of varying provenance in the sample (2).
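The assembly challenge described here — joining short reads into longer contiguous sequences by their overlaps — can be illustrated with a toy greedy procedure. This is only a sketch with invented reads; production assemblers use overlap or de Bruijn graphs, model sequencing error, and guard against the chimeric joins mentioned above:

```python
# Toy greedy assembler: repeatedly merge the pair of reads with the
# longest exact suffix/prefix overlap. Real metagenomic assemblers use
# far more sophisticated graph methods; this sketch ignores errors and
# the risk of hybrid joins between closely related genomes.

def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads, min_overlap=3):
    reads = list(reads)
    while len(reads) > 1:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    n = overlap(a, b)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n < min_overlap:
            break  # no confident join left; remaining reads stay separate
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads

contigs = greedy_assemble(["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG"])
print(contigs)  # → ['ATTAGACCTGCCGGAA']
```

The three overlapping fragments collapse into a single contig; with real environmental DNA, millions of reads of varying provenance must be reconciled this way.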
Microbial Ecology There are several variables that are considered when analyzing metagenomic data. Consider a comparison between metagenomic data from the Amazon River and Lake Lanier: both are freshwater, but the first is tropical and flowing while the second is temperate and still. While a number of differences are visible in the dataset, there is a broad similarity in the contribution of Actinobacteria environmental DNA to the two communities (4, 5). In the example of Alaskan permafrost, a time delay and phase change in the soil environment has occurred,
accompanied by profound changes in community structure (6). Finally, the North Pacific example shows depth variation in a saltwater environment (7). Photosynthetic Cyanobacteria are seen to rapidly disappear from the samples with depth (7). Microbial ecology asks questions about the sources of variation in microbial communities and about the differences between microbial ecology and that of higher organisms. One prominent hypothesis regarding the spatial distribution of microbial populations is that all microbial taxa exhibit a cosmopolitan distribution within similar environments, meaning that, all abiotic factors being equal, a microbe plucked from a river in Brazil could also be found in India (8). Connections between the species composition of widely disparate habitats are not uncommon. For example, an rRNA study of Toolik Lake in the North Slope of Alaska indicated that 58% of bacterial and 43% of archaeal taxa present in the lake were also present in the soil of the lake’s catchment or the headwaters of its feeder stream (9). The same was true of only 18% of microbial eukaryotes, even when eukaryotic phytoplankton, which are less likely to be found in the dark recesses of soil, were excluded (9). By contrast, however, researchers concluded that the microbial community in Lake Lanier was more similar in composition to those of other freshwater lakes and the upper layer of the ocean than to communities in soil (5). While habitats may be hydrologically linked, water is not the only fluid that binds distant ecosystems. Global chemical transport modeling suggests that microbes can take to the skies to aid in their dispersal, with continental leaps possible within a year for microbes under 20 μm in diameter (10). Another force shaping microbial diversity is natural selection.
Bacterial communities colonizing the surface of the green alga Ulva australis show 70% similarity in gene function between individual algae, yet they share only 15% of total rRNA-estimated species composition (11). The results of the study suggest that bacteria colonize by a lottery method in
which the first bacterial species to reach the surface of the algae with appropriate functional groups of colonization genes (e.g. the ability to metabolize water-soluble sugars secreted by the algae) creates its own symbiotic community (11). Species with the right sets of genes are competitively equivalent, producing a “first come, first served” selective environment. The authors comment that Richard Dawkins’s vision of organisms as fleshy vehicles for competing genes appears to find its justification in bacteria, where there are no alleles for deleterious genes to quietly propagate (11). There is, however, metagenomic evidence to suggest that bacteria can rapidly diversify. To test the adaptive capabilities of microbes, a separate study went underground, into the anoxic far reaches of Richmond Mine in California (12). Over the course of nine years, researchers tracked the production of hybrid strains between two parental types of Leptospirillum group II (12). Estimates of the rate of single nucleotide polymorphism from a 5-year series at a single location allowed researchers to date the emergence of a series of increasingly recombinant yet prolific hybrids to the past 44 years, a mere blip in the evolutionary timescale (12). While the study was performed in a relatively simple bacterial community, the potential for bacterial species to respond to selective pressures is likely present in other environments.
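Habitat-overlap figures like the Toolik Lake percentages above come from comparing inventories of detected ribotypes between environments, which at its core is set arithmetic. A minimal sketch (the taxon labels are invented for illustration):

```python
# Fraction of taxa detected in one habitat that also appear in a second,
# as in rRNA-based comparisons of a lake with its catchment soil.
# The OTU (operational taxonomic unit) labels here are made up.

def shared_fraction(habitat_a, habitat_b):
    """Fraction of habitat_a's taxa that also occur in habitat_b."""
    a, b = set(habitat_a), set(habitat_b)
    return len(a & b) / len(a)

lake = {"OTU_01", "OTU_02", "OTU_03", "OTU_04", "OTU_05"}
soil = {"OTU_02", "OTU_04", "OTU_05", "OTU_09"}

print(f"{shared_fraction(lake, soil):.0%} of lake taxa also occur in soil")
# → 60% of lake taxa also occur in soil
```

Real studies must also correct for sampling depth and detection limits, since a taxon absent from a survey is not necessarily absent from the habitat.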
Environmental Chemistry The crown jewel of discoveries made using metagenomic techniques came in 2001, when a method of phototrophy, previously thought to be characteristic of more exotic Archaea taxa but found in an uncultured bacterium, was discovered by the same researchers to be ubiquitously encoded in global oceanic bacterial populations (13, 14). Proteorhodopsins, which are light-harvesting proteins, are structurally simpler than the photosystems of the photosynthetic pathway, generating only energy and not ingredients for carbohydrate synthesis in bacterial cells. In a process analogous to that used by the visual pigments of animal cells, proteorhodopsins convert light energy to chemical energy via bound retinal molecules. Proteorhodopsins are a broad class of proteins with absorbance spectra that are finely tuned to the differential light availability at the depths in which they occur (14). Further investigation of the properties of proteorhodopsin has revealed
that its gene is spread rapidly by lateral gene transfer and has, amazingly, vaulted itself from bacteria into the genomes of eukaryotic dinoflagellates on at least two separate occasions (15). While the function of these migrant proteins is not known, researchers speculate that the proteorhodopsin of dinoflagellates is used to acidify digestive vacuoles, a process that would otherwise burn chemical energy (15). Deep in the sea, some 200 meters below the surface, light becomes unavailable and phototrophy of all kinds becomes untenable. Nearby, an oxygen minimum zone forms to separate oxygen-rich upper waters from relatively higher oxygen concentrations below. In these largely anoxic habitats, chemolithoautotrophy, the oxidation of inorganic compounds to fix carbon, is prevalent. These regions are enriched in inorganic nitrogen, and the oxidation of ammonia was thought likely to be the driving force for bicarbonate fixation (16). Reliance on nitrogen alone, however, is insufficient to explain the scale of carbon fixation in the deep ocean (16). Sequence data from single cells revealed that certain clusters from Deltaproteobacteria, Gammaproteobacteria and Oceanospirillales encode pathways for sulfur oxidation and carbon fixation, suggesting that reduced sulfur compounds may balance the carbon budget of the deep ocean (16). The lifestyle adopted by these bacteria is made all the more mysterious by the fact that free sulfide is very rare in the ocean (17). Metagenomic studies found representation of known sulfide-oxidizing bacteria at 6.3-16.2% of reads, and sulfate-reducing bacteria at 2.1-2.4% of reads, in an oxygen minimum zone off the coast of Chile, despite the fact that sulfate reduction is less thermodynamically favorable than the reduction of nitrogenous compounds (17).
Industrial Applications The Deepwater Horizon disaster in 2010 left approximately 4.9 million barrels of oil in the Gulf of Mexico (18). Cleanup of the spill was complicated by the depth at which the spill occurred (1544 m, with plume formation at 1100 m), which necessitated the use of chemical dispersants such as dioctyl sodium sulfosuccinate for cleanup (19). Since the capping of the well, scientists have researched the ongoing response of bioremediators to
the initial spill. One often-overlooked contribution of microbes, specifically of the methanotrophic Archaea, is their ability to rapidly oxidize methane. During the Deepwater Horizon leak, it is estimated that oil and methane were emitted in almost the same molar ratio: 0.91–1.25 × 10^10 moles of CH4 versus 0.93–1.0 × 10^10 moles of oil (20). Incredibly, researchers sampling the leak were no longer able to detect methane in the water column four months after the beginning of the leak (20). The extraordinary feat was likely orchestrated by a transient burst in the populations of Methylophilaceae, Methylophaga and Methylococcaceae that had already begun to subside by the time of the researchers’ first rRNA analysis (20). In another study, researchers sampled surface waters to determine characteristics and rates of oil metabolism in bioremediators (19). Despite limited phosphate nutrients and arrested growth inside the slick, the surface microbial population was able to increase its respiration rate fivefold to aid in the metabolism of oil (19). Researchers estimated that the surface bacteria were capable of metabolizing oil at a rate comparable to that of the leak itself (19). Metagenomics has had some successes in drug discovery. A gene for a lactonase from pasture soil samples was found to be capable of degrading a number of different homoserine lactones, which aid in the infectivity of pathogens, and their derivatives when expressed in E. coli (21). Metagenomics also has the potential to be useful in the search for industrial catalysts. An esterase from an environmental sample has proved capable of cleaving the polymers of certain bio-plastics, for example (21). There are drawbacks to the use of metagenomic libraries in heterologous expression systems like E. coli, however, as the bacterium is predicted to be capable of expressing only 40% of genes from a selection of 32 fully sequenced bacteria (22).
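The near-parity of the methane and oil emission estimates cited earlier for the Deepwater Horizon leak can be checked with quick back-of-the-envelope arithmetic on the reported ranges (no new data, just the midpoints of the figures quoted in the text):

```python
# Midpoint molar ratio of CH4 to oil from the Deepwater Horizon
# estimates cited in the text: 0.91-1.25e10 mol CH4, 0.93-1.0e10 mol oil.
ch4_lo, ch4_hi = 0.91e10, 1.25e10
oil_lo, oil_hi = 0.93e10, 1.00e10

ch4_mid = (ch4_lo + ch4_hi) / 2   # 1.08e10 mol
oil_mid = (oil_lo + oil_hi) / 2   # 0.965e10 mol

print(f"CH4/oil molar ratio ≈ {ch4_mid / oil_mid:.2f}")  # → ≈ 1.12
```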
It is anticipated that the screening of organisms from hypersaline, hot and cold regions will yield novel biomolecules useful to industry (23).
Symbiotic Communities in Humans Each of us carries a bacterial metagenome in our bodies, on the order of 150 times the size of our own genome from intestinal microbes alone (24). Intriguingly, a body of sequence data suggests that our
bacterial metagenome is not common across all individuals. One of the most surprising scientific studies of 2011 used sequence data from the intestinal flora of 39 subjects to discover three distinct assemblages of microbes, characterized by the prevalence of Bacteroides, Prevotella or Ruminococcus bacteria. In the initial study, the enterotypes were not associated with sex, age, nationality or weight, although multiple functional correlations were made with host properties (25). Since the initial publication, an independent study with a greater number of subjects discarded the Ruminococcus enterotype as overlapping with Bacteroides. Long-term diet patterns affected enterotype, with self-reported vegetarians presenting Prevotella-enriched microbial communities at a rate of 27% versus 10% for Bacteroides (26). Simply receiving a short-term high fat/low starch or low fat/high starch diet was not sufficient to change enterotype (26). The impact of our resident microbes on our health has yet to be fully appreciated. Poliovirus and reovirus, two related enteric viruses, were recently found to enhance their ability to replicate in the presence of bacterial membrane polysaccharides (27). Treatment of immunodeficient mice with antibiotics reduced mortality from poliovirus to half that of the control group (27). A second group of viruses, the retroviruses, was also found to contain members whose activity is modulated by commensal microbes (28). Pregnant female mice treated with antibiotics and infected by mouse mammary tumor virus did not pass the retrovirus on to their offspring (28). Virus and bacteria appear to act in concert to evade immune responses in the host organism. Obese mice with mutant leptin were observed to possess a greater ratio of Firmicutes to Bacteroidetes than wild-type mice (29).
A healthy group of mice inoculated with the gut flora of obese mice was observed to increase body fat by 47% in a two-week interval, compared to an increase in a control group of 27%, with no associated increase in food consumption (29). Microbes that cohabit our body make their presence known in disease, making the understanding of their community dynamics all the more important.
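The enterotype studies described above grouped subjects by the composition of their gut communities. As an illustrative sketch only (the abundance values and subject labels are invented, and the published analyses used more sophisticated clustering and validation), assigning each subject to its dominant marker genus captures the flavor of the approach:

```python
# Hypothetical sketch of enterotype-style grouping: each subject is
# assigned to the most abundant of the three marker genera named in
# the text. All abundance values below are invented for illustration.
from collections import defaultdict

profiles = {
    "subj1": {"Bacteroides": 0.60, "Prevotella": 0.05, "Ruminococcus": 0.10},
    "subj2": {"Bacteroides": 0.55, "Prevotella": 0.10, "Ruminococcus": 0.08},
    "subj3": {"Bacteroides": 0.08, "Prevotella": 0.62, "Ruminococcus": 0.05},
    "subj4": {"Bacteroides": 0.10, "Prevotella": 0.58, "Ruminococcus": 0.07},
}

def dominant_genus(profile):
    # Simplest possible "enterotype" call: the most abundant marker genus.
    return max(profile, key=profile.get)

clusters = defaultdict(list)
for subject, profile in profiles.items():
    clusters[dominant_genus(profile)].append(subject)

print(dict(clusters))
# → {'Bacteroides': ['subj1', 'subj2'], 'Prevotella': ['subj3', 'subj4']}
```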
Future Directions Despite a surge of research interest in metagenomics and its applications, a number of interesting questions in
microbial ecology remain unresolved. For instance, a “rare biosphere” of species with DNA constituting a fraction of one percent of the total is present in all oceans (8). An open question for microbiologists is the origin of the rare biosphere, which could represent for oceans what seed banks do for soil or something else entirely, like a group of microbes with different life strategies than those chosen by more abundant species, or even newly emerging taxa arising from lateral gene transfer (8). Another pressing issue is the impact of climate change on microbial community activities. In a warming world, the 26% of land currently covered in permafrost, which hosts small microbial communities of its own, could be overrun by microbes actively respiring the 15% of the soil carbon sequestered in the thawing ice (30). As a field, metagenomics presents philosophical challenges to scientists. Maureen O’Malley of the University of Exeter commented on the ways in which scientists do research with metagenomics in a historical analysis of the proteorhodopsin discovery (31). There is a tendency in metagenomic research to avoid the use of hypotheses, foregoing the creation of specific research questions in favor of simply sequencing novel environments, describing the results of sequencing and looking for interesting biology in the sample (31). O’Malley argues that the discovery of proteorhodopsin represented neither blind “exploratory experimentation,” in which complex systems like microbial communities are examined without prior expectations to find correlations and regularities, nor “theory-driven experimentation,” in which well-understood theory is rationally tested through the manipulation of defined variables (31). Instead, metagenomic research should be termed “natural history/experimentation,” in which metagenomic results from different locations are compared systematically with respect to environmental variables (31). 
As O’Malley concludes, metagenomic studies operate in a natural laboratory, examining communities shaped by selective factors and environmental context (31). Microbes are inherently complex, thanks to their diversity of form and ability to form communities that provide functions not found in individual isolates, but metagenomics provides the toolset required to unravel the structures and functions of the vast microbial world.
References 1. M. S. Rappé, S. J. Giovannoni, Annu. Rev. Microbiol. 57, 369-394 (2003). 2. J. C. Wooley, A. Godzik, I. Friedberg, PLoS Comput. Biol. 6, e1000667 (2010). 3. V. Iverson et al., Science 335, 587-590 (2012). 4. R. Ghai et al., PLoS ONE 6, e23785 (2011). 5. S. Oh et al., App. Environ. Microbiol. 77, 6000-6011 (2011). 6. R. Mackelprang et al., Nature 480, 368-371 (2011). 7. M. V. Brown et al., ISME J. 3, 1374-1386 (2009). 8. L. Amaral-Zettler et al., in Life in the World’s Oceans: Diversity, Distribution, and Abundance, A. D. McIntyre, Ed. (Blackwell, 2010). 9. B. C. Crump, L. A. Amaral-Zettler, G. W. Kling, ISME J. (2012, online ahead of print). 10. D. M. Wilkinson, S. Koumoutsaris, E. A. D. Mitchell, I. Bey, J. Biogeogr. 39, 89-97 (2012). 11. C. Burke, P. Steinberg, D. Rusch, S. Kjelleberg, T. Thomas, Proc. Natl. Acad. Sci. U.S.A. 108, 1428814293 (2011). 12. V. J. Denef, J. F. Banfield, Science 336, 462-466 (2012). 13. O. Béjà et al., Science 289, 1902-1906 (2000). 14. O. Béjà, E. N. Spudich, J. L. Spudich, M. Leclerc, E. F. DeLong, Nature 411, 786-789 (2001). 15. C. H. Slamovits, N. Okamoto, L. Burri, E. R. James, P. J. Keeling, Nat. Commun. 2 (2011). 16. B. K. Swan et al., Science 333, 1296-1300 (2011). 17. D. E. Canfield et al., Science 330, 1375-1378 (2010). 18. Z. Lu et al., ISME J. 6, 451-460 (2012). 19. B. R. Edwards et al., Environ. Res. Lett. 6, 035301 (2011). 20. J. D. Kessler et al., Science 331, 312-315 (2011). 21. A. Beloqui, P. D. de María, P. N. Golyshin, M. Ferrer, Curr. Opin. Microbol. 11, 240-248 (2008). 22. J. Kennedy, J. R. Marchesi, A. D. W. Dobson, Microb. Cell Fact. 7 (2008). 23. C. Simon, R. Daniel, Appl. Microbiol. Biotechnol. 85, 265-276 (2009). 24. P. Lepage et al., Gut (2012, online ahead of print). 25. M. Arumugam et al., Nature 473, 174-180 (2011). 26. G. D. Wu et al., Science 334, 105-108 (2011). 27. S. K. Kuss et al., Science 334, 249-252 (2011). 28. M. Kane et al., Science 334, 245-249 (2011). 29. P. J. 
Turnbaugh et al., Nature 444, 1027-1031 (2006). 30. E. Yergeau, H. Hogues, L. G. Whyte, C. W. Greer, ISME J. 4, 1206-1214 (2010). 31. M. A. O’Malley, Hist. Philos. Life Sci. 29, 337-360 (2007).
SUBMISSION
Delegitimizing Scientific Authority The End of Reason PATRICK YUKMAN
On September 10, 2008, the Large Hadron Collider at CERN launched its first proton beam around its 17-mile circumference. Located under the French-Swiss border and constructed at a cost of $6 billion, the LHC represents one of the grandest undertakings in scientific experimentation of all time. Yet, in the weeks leading up to September 10th, newspaper headlines were not acknowledging the triumphs of science: rather, along with numerous blogs and magazine articles, they were quite literally proclaiming the end of the world. According to some, the LHC’s particle collisions had the capability to create “mini black holes” that could suck up the planet, despite reports reviewed by scientists outside of CERN that discredited this possibility. Similar collisions happen daily, without incident, between cosmic radiation and the matter that makes up the Earth, and even if these mini black holes were to somehow miraculously form inside the LHC, they would quickly evaporate due to a phenomenon known as Hawking radiation (1). Still, some civilians actively tried to halt the operations of the LHC. A German chemist named Otto Rossler tried to file a lawsuit on August 26th claiming that the LHC “would violate the right to life of European citizens and pose a threat to the rule of law,” and two Americans in Honolulu later filed a lawsuit to try and “force the U.S. government to withdraw its participation in the experiment” (1). Though both of these lawsuits failed, they became widely recognized by the media, which began to sensationalize the LHC’s upcoming activation. The Sun newspaper in Britain even ran a headline on September 1st saying “End of the World Due in 9 Days” (1). The startup of the LHC, an event that would otherwise only have been recognized by the scientifically savvy, was suddenly the subject of conversation around the water cooler.
And though many correctly believed that the hype was unsubstantiated, many others did not, prompting efforts like Rossler’s to restrict the LHC’s research as well as causing widespread and unnecessary
fear and dread. Why exactly did so few believe the CERN report and so many reject it? Considering that most of the concerned citizens were probably not scientifically literate enough to find legitimate flaws in the report’s findings, the underlying problem seems to stem from a basic and unfounded distrust of scientific authority, and the LHC is only one example of the fear and misinformation that can be caused by such distrust. On a larger scope, the origins of this distrust can be almost single-handedly blamed on a widespread delegitimization of scientific authority, and this underlying factor has also become magnified in recent times. Scientific authority’s delegitimization has been driven mainly by the continuing politicization and economization of scientific research, as well as by a widening “science literacy” gap between scientists and civilians. And as seen with the LHC, the distrust caused by delegitimization is ultimately harmful both to scientific progress and to the whole of modern society. The first underlying cause driving the public’s delegitimization of scientific authority is the widening knowledge gap between the scientific community and the average citizen. Though most adults may not outwardly appear scientifically illiterate, statistics suggest otherwise: even in the past dozen or so years, the general public has demonstrated a gross ignorance of basic scientific concepts. As Kenneth Brown, author of Downsizing Science: Will the United States Pay a Price?, explains it: Consider the surveys of public understanding that the Chicago Academy of Sciences conducts biennially. In the most recent survey, only 44 percent of the respondents said that electrons are smaller than atoms, and only 48 percent said that the earliest humans did not live at the same time as the dinosaurs. As these were true-or-false questions, a 50 percent score would be achievable by flipping a coin…. 
Given these hopeless results for the simplest questions imaginable, how could the public understand biotechnology or other complex scientific matters in the policy realm? (2)
And even though this specific survey was conducted in 1998, today’s numbers are scarcely better. According to a 2009 survey by the Pew Research Center, only 46 percent of the respondents said that electrons are smaller than atoms, and a full 53 percent claimed that lasers work by focusing sound, not light (3). The scientific knowledge gap between these survey respondents and the academic world is both evident and dangerous. These statistics show that the distinction between the scientifically “average” and the scientifically “elite” is so well-defined that the division sets up a class system of sorts: not economically, but intellectually. Since the layman lacks an understanding of even the simplest of scientific concepts, the academic elite are given the responsibility to consider “complex scientific matters in the policy realm,” despite the fact that these matters will affect both groups. This breeds resentment of the scientifically elite, as would any system in which “the few” make executive decisions for “the many.” However, our modern resentment of scientific authority does not fight against political tyranny but against reason, and the danger therein should be apparent to anyone. The average citizen cannot be trusted to make informed decisions based on science, but the resentment that stems from his/her own ignorance has the potential, as with Otto Rossler and the LHC, to actively hinder scientific progress. Yet there are other forces at work that drive the public’s disillusionment with scientific authority even further: namely, the increasing politicization and economization of science itself. Although scientific politicization and economization go hand in hand, it is best to start with a discussion of the former, specifically as it relates to two of today’s most polarized political issues involving scientific research: global warming and evolution. 
With the debates raging across television sets and computer monitors from coast to coast, it seems nearly impossible that anyone could be unfamiliar with these topics and their political divisions. In fact, conservatives and liberals alike
have so violently defended their respective sides of these “scientific” controversies that a person’s individual stance on global warming and evolution automatically aligns them along party lines, no matter what information (or misinformation) may have motivated that stance. Global warming is a choice example, for as Glen Scott Allen explains in his book, Master Mechanics & Wicked Wizards: Images of the American Scientist As Hero and Villain from Colonial Times to the Present: “The issue has become politicized to the point where opposition to global warming initiatives is considered a necessary demonstration of loyalty to the Republican party” (4). He goes on to cite U.S. Representative Wayne Gilchrest as an example, quoting a Baltimore newspaper that reported, “the Eastern Shore [Maryland] Republican was denied a seat on the bi-partisan Select Committee on Energy Independence and Global Warming… because he refused to argue that climate change was not caused by human actions” (4). The same political dogmatism is true of the “evolution debate.” As author Chris Mooney explains, “religious conservatives have aligned themselves with ‘creation science,’ a form of religiously inspired science mimicry that commands the allegiance of nearly half the American populace, according to polls” (5). Yet the blame does not fall entirely on conservatives: even the political left is guilty of politicizing science to some degree regarding genetically modified organisms and animal testing. There is no doubt that traditional science is becoming increasingly politicized, but how does this delegitimize scientific authority, and how is it harmful? The basic method by which politicization delegitimizes scientific authority is in setting up “controversy” where there is none in reality. This false dichotomy is so powerful in terms of breeding distrust that it is surprising so few recognize it as a complete fabrication. 
To once again cite Allen and use global warming as an example, “the most curious aspect of this topic is that it is called a ‘debate’ at all. A recent U.N. report, citing an overwhelming preponderance of scientific evidence supporting the argument that human activity of the last century has increased greenhouse gas emissions, concluded that global warming was ‘unequivocal.’ However,” continues Allen, “the Bush administration continues to refer to the ‘Global Warming Controversy,’ which
at minimum indicates confusion over the meaning of the word ‘controversy,’… [which is an issue] with two equally defensible positions” (4). He mentions that there do exist legitimate scientific controversies, specifically regarding questions like, “does life exist on other planets? Will the universe continue to expand indefinitely, or contract into another Big Bang? Will computers ever become ‘artificially’ intelligent? However,” Allen states, “global warming is not one of these questions; it is not, at least in scientific terms, a controversy. Neither is evolution” (4). When the data only point in one direction, Allen suggests, there is no legitimate controversy since there is no opposing viewpoint. This is what makes scientific politicization so dangerous: it creates an opposing viewpoint along ideological differences and manipulates data to back it up. Legitimate science is all about finding data, analyzing it, and using it to explain the world; it’s the kind of thing that gives us GPS technology and spaceflight and flash memory. Politicized science, on the other hand, is about forcing the data to fit into a predetermined explanation of the world, and unlike “real” science, it gives us nothing, working instead to take away our capacity for reason, our willingness to maintain an open mind, and, perhaps most destructively, the legitimacy of actual science. That last point is particularly important. Scientific politicization is, without a doubt, a dominant contributing factor to the decreasing legitimacy of scientific authority. If science is supposed to be objective, as many understand it to be, the dual-sided nature of politicized science calls this objectivity into question. If – as politicians would have it – there exist two more-or-less valid sides to an argument and both are supported by “science,” then the science itself no longer looks like unalterable data, as it instead resembles political opinion. 
This, in turn, denies credibility to both the scientific and political endeavors: as Mooney states especially well, “The whole process derails when our leaders get caught up in a spin game over who can better massage the underlying data” (5). Similarly, politicization calls the objectivity of scientists themselves into question, for if, as Mooney puts it, “Americans come to believe you can find a scientist willing to say anything, they will grow increasingly disillusioned with science itself. Ultimately, trapped in a tragic struggle between
‘liberal’ science and ‘conservative’ science, the scientific endeavor itself could lose legitimacy” (5). This is obviously an undesirable outcome for scientists, but few realize how harmful this can be to the rest of society. If Americans feel they can no longer place their trust in any kind of scientific authority, they will turn the relevant issues over to politicians, who, typically, are as scientifically illiterate as their constituents. This means that their policies will be decided not by facts and data, but by party lines and opinion, which can be genuinely dangerous in a very real way. Consider the debate about abstinence-only sex education in schools: to once again quote Mooney, “When conservatives distort health information – claiming that condoms don’t work very well in protecting against sexually transmitted diseases, for example – such abuses can quite literally cost lives” (5). When motivated by political dogma, politicians may unintentionally end up putting the reputation of science itself, as well as countless human lives, in harm’s way. The same is true of the global warming issue: “Similarly,” states Mooney, “when [conservatives] deny the science of global warming, their refusal to consider mainstream scientific opinion fuels an atmosphere of policy gridlock that could cost our children dearly (to say nothing of the entire planet)” (5). These are only two examples, but it should be apparent that similar issues are present in nearly every “science-backed” political debate today, from evolution to abortion to clean energy standards. And every one of these “controversies” is a clear demonstration that scientific politicization is not a victimless crime. Literally everyone is forced to suffer the dangerous consequences brought forth by the politicized delegitimization of scientific authority. 
In a very similar, but perhaps more subtle, vein lies the issue of scientific economization, by which I mean the manipulation of scientific principles for the benefit of big business and industry. As businesses compete in the capitalist market, even tiny advantages or disadvantages can translate to huge changes in profit. Businesses that have a close relationship with public health or environmental concerns also therefore tend to have a close relationship with science, since they can’t get ahead in the market if the science says their product is “bad” but can develop enormous profit margins if the science
determines their product is “good.” Under these circumstances, some businesses in these fields try to “economize” the science behind their product: that is, twist the science to say what it needs to say for the company to make a profit. And while it is perhaps too easy a target, no industry does this quite as prolifically as the tobacco industry, which has been in a long-standing battle with science regarding the health detriments of its products. Scientific evidence demonstrating that tobacco smoking was harmful to one’s health began amassing as early as the 1930s, and it was well known by the 1950s that tobacco could cause lung cancer, yet tobacco companies fought off health warnings on cigarette packages until 1965, due in large part to their “scientific” efforts to demonstrate tobacco’s safety (6). Even in recent times, “Big Tobacco’s” influence over tobacco science is abundantly clear, particularly regarding the research surrounding secondhand smoke: to cite statistics from Mooney, “A 1998 analysis of over one hundred review articles on the health risks of secondhand smoke, published in the Journal of the American Medical Association, found that the odds of an article reaching a ‘not harmful’ conclusion were ‘88.4 times higher’ if its authors had tobacco industry affiliations” (5). This is about as clear an indication as you can get of the tobacco industry’s efforts to economize science. And scientific economization has nearly the same overall consequences as scientific politicization: by creating controversy where there is none, and in demonstrating to the public that “you can find a scientist willing to say anything,” scientific authority ultimately loses its legitimacy in the eyes of the average citizen. This, in turn, has a similar outcome to that of politicized science. 
With both “sides” sure of their correctness in the matter, the populace has no good reason to believe either “side,” which, in the case of tobacco and countless other industries, can lead to unnecessary death and/or environmental harm. Yet an increase in scientific politicization and economization only causes the involved parties to redouble their efforts to stay ahead of their opponent, and the cycle repeats, straying further and further from reason with every successive cycle. Amidst everything else, though, it is important to realize that a basic skepticism and distrust of authority can be healthy
in moderation. Most of the time, it keeps powerful societal structures working honestly and efficiently, and in this sense, there is nothing wrong with reasonable skepticism or even distrust. There have, of course, been a number of scientific scandals brought to public attention even in recent years, from “cold fusion” to the discovery of element 118 to numerous frauds in cloning and genetics, and these were only caught because of a necessary skepticism within the scientific community itself. But the process works. When a “scientifically unaware” public develops the mistaken belief that the “scientists in charge” cannot be trusted, everyone suffers the consequences. So is there any way to restore the legitimacy of scientific authority? Ultimately, I believe there is, but it is not an easy process. The root of events like the LHC scare lies in the insubstantial science education most of the general public receives: it is no mystery that people fear what they do not know and resent the authority that they do not understand. It would take quite a bit of educational reform to counteract the damage already done, but closing the “literacy gap” is likely the only way to remove the fear. Once the underlying fear is gone, the power can return directly to the people, and “controversial issues” can be addressed using data and reason, the basic tenets of scientific inquiry. Though difficult, this is the route we must ultimately take if we care at all about the future progression and well-being of our species. After all, knowledge may be power, but it is science that brings us knowledge. CONTACT PATRICK YUKMAN AT PATRICK.R.YUKMAN.14@DARTMOUTH.EDU References: 1. E. Harrell, “Collider Triggers End-of-World Fears,” Time (4 Sep. 2008). Available at http://www.time.com/time/health/article/0,8599,1838947,00.html?imw=Y (8 Apr. 2012). 2. K. Brown, Downsizing Science: Will the United States Pay a Price? (The AEI Press, Washington, D.C., 1998), pp. 58. 3. 
Pew Research Center for the People & the Press, “Public Praises Science; Scientists Fault Public, Media: Overview” (9 Jul. 2009). Available at http://people-press.org/report/528/ (8 Apr. 2012). 4. G. S. Allen, Master Mechanics & Wicked Wizards: Images of the American Scientist As Hero and Villain from Colonial Times to the Present (Univ. of Massachusetts Press, Amherst, 2009), pp. 242-243. 5. C. Mooney, The Republican War on Science (Basic Books, New York, 2005), pp. 9-11. 6. US National Library of Medicine, “The 1964 Report on Smoking and Health,” The Reports of the Surgeon General. Available at http://profiles.nlm.nih.gov/NN/Views/Exhibit/narrative/smoking.html (8 Apr. 2012).
SUBMISSION
Effects of Extra-Organismal Caffeine
How Extra-Organismal Caffeine Affects Crayfish Neuromuscular Synapses

TARA KEDIA
The effect of extra-organismal caffeine exposure upon crayfish neuromuscular activity was explored as a novel method to characterize the physiology of synaptic contacts in the crayfish superficial flexor muscle (SFM) system. Crayfish were housed in 0.5 L of aerated tap water or 3 mM caffeine for one and five days. Muscle junction potentials (JPs) were measured at 1 Hz and 10 Hz frequencies of nerve stimulation. At both frequencies and at both lengths of exposure, external application of caffeine either unexpectedly appeared to enhance or failed to affect synaptic transmission, contradicting the well-documented depressive effect of caffeine upon direct application to the dissected SFM system. This finding merits further investigation to confirm the effect of external application of caffeine. This means of drug exposure has potential future applications to studies of nerve regeneration in the SFM system.
Introduction

Past work on the superficial flexor muscle (SFM) system of the crayfish Procambarus clarkii has looked at effects of exposing the dissected SFM system to various drugs in an effort to better characterize the physiological properties of these synaptic contacts (1-5). This project seeks to expand the possibilities and implications of drug research in the crayfish model system by exposing living crayfish to caffeine via more realistic pathways, namely extra-organismal exposure of a living crayfish to caffeine. Exploring this extra-organismal mode of exposure stands to expand the usefulness of the crayfish as a model organism for understanding the physiology of synaptic contacts. It was anticipated that depression of synaptic transmission in the SFM system would be observed upon external application of caffeine. This hypothesis was based upon prior studies that have exposed the dissected SFM system to caffeine and have observed a depression in synaptic transmission (1-3). Any detectable behavioral changes were therefore expected
to support the hypothesis of decreased overall activity, meaning that the crayfish was expected to spend more time hiding in pots in the tank, show less aggressive behavior to other crayfish, and perform the tail flip escape response less frequently and strongly. Initial studies of synaptic effects of exposure of the SFM system to caffeine hypothesized enhancement of synaptic transmission following caffeine exposure, based upon studies establishing that 5 mM to 10 mM caffeine exposure, far above toxic levels in humans, enhances synaptic transmission (6). On a molecular level, caffeine at mM concentrations enhances synaptic transmission by stimulating Ca2+ release, inhibiting phosphodiesterase, and blocking GABAA receptors (6). The mechanism of caffeine’s effect in crayfish is still unknown, other than that caffeine must act to decrease internal calcium concentrations, thereby inhibiting vesicle release into the synapse (2). There is ample precedent in the scientific literature for studying behavioral and synaptic effects of non-human ingestion of drugs, such as those among rats and mice (7-9). Precedent for external application of drugs to arthropods is found in NASA’s work on comparing shapes of webs spun by drugged spiders and non-drugged spiders (10). Additional work on extra-organismal drug exposure has been undertaken to study the effects of chronic exposure of crayfish to herbicides, though this study focused on the survival, growth, and other life-cycle consequences of drug exposure, rather than the neurobiological effects (11). This project thus explores a promising new direction for research with the crayfish model organism.
Materials and Methods
Experimental Design

Southern red crayfish Procambarus clarkii, ordered from Carolina Biological Supply Company, were used in this experiment. Prior to use, the crayfish were housed together in a large aerated
freshwater aquarium. Crayfish were divided into three groups: control, caffeine one day, and caffeine five days. Each crayfish was kept in its own medium square (946 mL size) plastic Ziploc container containing 0.5 L of water and a small terra-cotta pot for hiding. Containers were aerated and covered. Crayfish in the control group were kept in room-temperature tap water for one day. Crayfish in caffeine groups were kept in room-temperature tap water with 3 mM caffeine, a level previously determined to depress SFM synaptic activity when applied to the dissected SFM system (5). For all groups, data on neuromuscular activity were collected.
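As a rough cross-check of the bath preparation described above, the mass of caffeine needed per container follows from concentration × volume × molar mass. The helper function below is illustrative and not from the paper; the 194.19 g/mol molar mass of caffeine (C8H10N4O2) is a standard value.

```python
# Illustrative helper (not from the paper): mass of caffeine needed to
# prepare the 3 mM bath in each 0.5 L container described above.
CAFFEINE_MOLAR_MASS = 194.19  # g/mol for C8H10N4O2

def grams_needed(conc_mM, volume_L, molar_mass=CAFFEINE_MOLAR_MASS):
    """Mass in grams of solute for a conc_mM (millimolar) solution of volume_L liters."""
    return conc_mM / 1000.0 * volume_L * molar_mass

print(round(grams_needed(3.0, 0.5), 3))  # 0.291 g per container
```

That is, a bit under a third of a gram of caffeine dissolved in each 0.5 L container yields the 3 mM exposure level.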
Data Collection Methods

Crayfish were sacrificed, and the tail was dissected to expose the SFM neuromuscular system. Enzymes released by cutting abdominal deep muscle cells were rinsed away using cold Ringer’s solution, and then the tail was bathed in fresh cold Ringer’s solution. Ringer’s solution contains 210 mM NaCl, 14 mM CaCl2, 5.4 mM KCl, 2.6 mM MgCl2, and 2.4 mM NaHCO3, the ion concentrations found in crayfish plasma that maintain normal nerve and muscle activity in the SFM system (12). Data collection followed the conventional methods used to research the crayfish SFM system (1-5). Two suction electrodes were used, one to stimulate the SFM nerve and the other to record nerve activity. The nerve was stimulated using a Grass SD9 Stimulator, and all nerve signals were displayed on a Tektronix storage oscilloscope after passing through a Grass P15 Preamplifier. The nerve was stimulated at two frequencies, 1 Hz and 10 Hz, displaying facilitation at 10 Hz. One microelectrode, filled with 3 M KCl, was used to record resting potentials and junction potentials in muscle cells. All data were collected from the second-to-last abdominal segment during the evening hours. Some data were collected in morning hours, but were not included in analysis in order to avoid introducing another variable into the experiment.
Data Analysis

Junction potential (JP) sizes were averaged, and the sample standard deviation (s) was then calculated as

s = √[ Σ(xᵢ − x̄)² / (n − 1) ]

where xᵢ represents an individual JP, x̄ represents the average JP size, and n represents the number of JPs in the sample. Standard error of the mean (SE) was calculated as

SE = s / √n

Values for standard error of the mean were converted into error bars on a column graph of average JP sizes.
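The calculation above can be sketched in a few lines. The JP values below are made-up placeholders for illustration, not measurements from this study.

```python
import math

def mean_sd_se(jps):
    """Return the average JP size, the sample standard deviation s,
    and the standard error of the mean SE = s / sqrt(n)."""
    n = len(jps)
    mean = sum(jps) / n
    # n - 1 in the denominator gives the *sample* standard deviation
    s = math.sqrt(sum((x - mean) ** 2 for x in jps) / (n - 1))
    return mean, s, s / math.sqrt(n)

# Hypothetical JP sizes in mV (placeholder data, not from the experiment)
mean, s, se = mean_sd_se([4.2, 5.1, 3.8, 4.6, 4.9])
print(round(mean, 2), round(s, 3), round(se, 3))  # 4.52 0.526 0.235
```

The SE values returned this way are what become the error-bar half-widths on the column graph.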
Results and Discussion

In the control group, two crayfish were used to obtain measurements from 5 SFM muscle cells, and one crayfish was used for each of the caffeine exposure groups. Data were obtained from 9 cells for the group with one day of caffeine exposure and from 14 cells for the group with five days of caffeine exposure. It should be noted that, due to the extremely small sample size, none of the results obtained in this experiment were statistically significant. Additionally, to strengthen the methodology, there should ideally have been two separate control groups: a group of crayfish kept for one day and a group of crayfish kept for five days, both groups in tap water in the plastic Ziploc container. That said, the experiment yielded interesting and unexpected results (Fig. 1). Both caffeine groups had a larger average JP size than that of the control group, at stimulation frequencies of 1 Hz and 10 Hz. As expected, JPs obtained from 10 Hz stimulation displayed facilitation. Data on average JP size are also displayed as a column graph in Fig. 2, along with error bars representing standard error of the mean for each data series. Extra-organismal exposure of the crayfish to 3 mM caffeine both for one day and for five days induced an increase in average JP size.
However, the data on JP sizes fall within the range of the error bars, so it is possible that the external application of caffeine had no effect on neuromuscular synaptic transmission in the crayfish. These data are not statistically significant, as only one crayfish was measured at each of the time points for caffeine exposure, and only two crayfish were measured for the control group. This experiment gives only a preliminary indication that extra-organismal caffeine exposure may enhance or may not change synaptic transmission in the SFM system. While not statistically significant, these results are interesting and unexpected, and merit further investigation. If these results are accurate, it is unclear by what mechanism extra-organismal caffeine exposure would enhance or fail to affect synaptic transmission, whereas direct application of caffeine to the dissected SFM system has a well-documented depressive effect (1-3,5). It is further unclear why the mechanism of caffeine’s action upon the SFM system would depend upon whether caffeine exposure occurred via external application or via direct application to the dissected system. Behavioral data could not be collected in this experiment for two reasons. First, and most importantly, the size of the plastic container was too small to allow a great deal of crayfish movement, and thus any behavioral differences between the control and the caffeine groups were difficult to discern. Additionally, a number of the stated measures for crayfish behavior were based upon crayfish interactions, and these interactions were impossible to observe as only one crayfish was placed in each container. Future researchers wishing to observe behavioral effects of caffeine exposure should use a larger tank. An additional result observed over the course of the study was crayfish mortality at various time points in both the control and caffeine groups. One crayfish in the control group died at five days – effectively
Image courtesy of Tara Kedia
Figure 1: Average muscle junction potential (JP) size at 1 Hz and 10 Hz frequency of stimulation of the crayfish SFM nerve. Two crayfish were in the 1 day Control group, and one crayfish was in each of the caffeine groups. Crayfish were kept in 0.5 L of 3 mM caffeine in an aerated 1-L plastic Ziploc container for either 1 day or 5 days. Stimulated JPs were measured from the second-to-last SFM segment. Standard error of the mean was calculated for all data.
eliminating the control five days group – and crayfish in the caffeine groups died at two, three, and seven days. These results suggest that, independent of caffeine exposure, a space-constrained environment is not conducive to crayfish survival. Future experiments considering drug exposure at the organismal level should use a tank that is substantially larger than the approximately 1 L sized container used in this experiment. The plastic material of the container may have factored into the observed crayfish deaths; future experiments should consider using standard glass tanks. An additional confounding factor might be that the crayfish did not have enough food to survive, as crayfish might have been hungry even before being placed into the plastic container and might have starved in the plastic container. However, based upon prior lab experience, the likelihood of starvation seems very small, as this is a highly atypical cause of crayfish death. To limit mortality, future experiments should also limit the length of caffeine exposure, as much higher mortality rates were observed in the caffeine group versus the control group. For a number of reasons, this experiment would be best pursued as a multi-term project. First, there will inevitably be various troubleshooting issues that the researcher will need to address, including – but certainly not limited to – dealing with equipment problems and determining the appropriate method and concentration of caffeine exposure. Second, there is a very steep learning curve for the crayfish dissection and proper equipment usage, both of which take up to one or two months to gain sufficient experience to obtain good data consistently. Third, there is no guarantee that every crayfish that is dissected will provide good data. Any given crayfish may be molting, stressed, or may be demonstrating synaptic repression for other reasons, such as seasonal variation (13). 
A significant representative sample of muscle cells would constitute at least 7-10 cells from each crayfish, and, statistically, the minimum number of crayfish per group is three. Therefore, assuming approximately a 30% average success rate in obtaining sufficient JP data from a crayfish, the researcher would need to dissect 10 crayfish for each of the desired groups in this experiment – control one day, control five days, caffeine one day, and caffeine five days. This would total 40 crayfish, a number for which there was insufficient time in the
Image courtesy of Tara Kedia
Figure 2: Effects of extra-organismal 3 mM caffeine exposure on size of stimulated junction potentials (JPs) in crayfish SFM muscle cells. The vertical axis displays the average size of JPs. Data on JP size was collected at two different frequencies of stimulation: 1 Hz and 10 Hz. Error bars represent the standard error of the mean. The number of cells recorded is as follows: Control 1 day: 5; Caffeine 1 day: 9; Caffeine 5 days: 14.
one term that I pursued this experiment. The primary limiting factor in this research project was the lack of time to complete enough dissections to obtain sufficient data. Looking ahead, this external application of caffeine promises to have useful applications in many areas of future research on the crayfish SFM system. Past work on the crayfish SFM system has studied neuronal regeneration and transplantation and has attempted to characterize the physiology of these new synaptic contacts (14-18). This project proposes a novel method for characterizing the physiology of SFM synapses, and, once developed, this method would better simulate conditions that organisms are exposed to during recovery from real – rather than simply surgical – nerve damage. This method of extra-organismal exposure to caffeine would allow exploration of the effects of chronic drug exposure on nerve regeneration. In summary, the premise for this project was that exposure of the SFM system to drugs should occur via a more realistic mode of transmission. This project explored the effects of crayfish exposure to 3 mM caffeine dissolved in tank water, but other concentrations and/or modes of exposure might prove more effective. In human cases of nerve damage, nerve regeneration occurs while the organism is exposed to various
environmental factors, such as stress and ingested drugs, including caffeine and ethanol. In order to most effectively use data from the crayfish SFM system to model human nerve regeneration, it is proposed that the nerve-damaged crayfish be exposed to similar environmental factors throughout the process of nerve regeneration, rather than after the nerve has already regenerated. However, before such studies can be undertaken, we must establish the physiologically relevant concentration of caffeine, as well as the best method of chronic administration. CONTACT TARA KEDIA AT TARA.KEDIA.12@DARTMOUTH.EDU References 1. K. M. Celenza, A study of caffeine’s mode of action in the superficial flexor muscle system of the crayfish. Thesis, Dartmouth College (1997). 2. K. M. Celenza, E. Shugert, S. J. Vélez, Depressing effect of caffeine at crayfish neuromuscular synapses. II. Initial search for possible sites of action. Cell. Mol. Neurobiol. 27, 381-393 (2007). 3. K. Judd, E. Shugert, S. J. Vélez, Depressing effects of caffeine at crayfish neuromuscular synapses. I. Dosage response and calcium gradient effects. Cell. Mol. Neurobiol. 27, 367-380 (2007). 4. J. R. Platt, Facilitation studies in the crayfish neuromuscular junction using caffeine and serotonin. Thesis, Dartmouth College (2000). 5. R. L. Wilson, The effects of caffeine and alcohol on the crayfish SFM. Thesis, Dartmouth College (2009). 6. J. W. Daly, B. B. Fredholm, Caffeine – an atypical
drug of dependence. Drug Alc. Dep. 51, 199-206 (1998). 7. S. A. File, H. A. Baldwin, A. L. Johnston, L. J. Wilks, Behavioral effects of acute and chronic administration of caffeine in the rat. Pharm. Biochem. Behav. 30, 809-815 (1988). 8. M. E. Yacoubi et al., The stimulant effects of caffeine on locomotor behavior in mice are mediated through its blockade of adenosine A2A receptors. Br. J. Pharmacol. 129, 1465-1473 (2009). 9. O. P. Balezina, N. V. Surova, V. I. Lapteva, Caffeine- and ryanodine-induced changes in the spectrum of spontaneously secreted quanta of the mediator in the neuromuscular synapse of mice. Dok. Biol. Sci. 380, 834-836 (2001). 10. D. A. Noever, R. J. Cronise, R. A. Relwani, Using spider-web patterns to determine toxicity. NASA Tech Briefs 19, 82 (1996). 11. S. M. Naqvi, C. T. Flagge, Chronic effects of arsenic on American red crayfish, Procambarus clarkii, exposed to monosodium methanearsonate (MSMA) herbicide. Bull. Environ. Contam. Toxicol. 45, 101-106 (1990). 12. A. van Harreveld, A physiological solution for fresh-water crustaceans. Proc. Soc. Exp. Biol. Med. 34, 428-432 (1936). 13. P. Prosser, J. Rhee, S. J. Vélez, Synaptic repression at crayfish neuromuscular junctions. I. Generation after partial target area removal. J. Neurobiol. 24, 985-997 (1993). 14. P. Ely, S. J. Vélez, Regeneration of specific neuromuscular connections in the crayfish. I. Pattern of connections and synaptic strength. J. Neurophysiol. 47, 656-665 (1982). 15. W. P. Hunt, S. J. Vélez, Regeneration of an identifiable motoneuron in the crayfish. I. Patterns of reconnection and synaptic strength established in normal and altered target areas. J. Neurobiol. 20, 710-717 (1989). 16. W. P. Hunt, S. J. Vélez, Regeneration of an identifiable motoneuron in the crayfish. II. Patterns of reconnection and synaptic strength established in the presence of an extra nerve. J. Neurobiol. 20, 718-730 (1989). 17. K. M. Krause, S. J. 
Vélez, Regeneration of neuromuscular connections in crayfish allotransplanted neurons. J. Neurobiol. 27, 154-171 (1995). 18. J. P. Marcoux, Specific regeneration of an identifiable axon in a crayfish neuromuscular system. Thesis, Dartmouth College (1983).

Acknowledgements

Thanks to Dr. Samuel Vélez for his advice and guidance over the course of the experiment. This research was conducted for Biology 95 using departmental funds.
no longer pushing it, and if there w the rules of geometry were to chan Einstein conjectured that the ball NE U K O M geometry. For instance, if the geom would not move in straight lines w that the planets left alone in space reason using planets have this path was The story of how scientists have come to test Einstein’s theory of gravity computational different from Euclid’s? Einstein sp techniques. . B LEONARDO MOTTA tual abou no small achievement, for it took more Prologue than forty years of trial and error in the a se n 1916, Albert Einstein published implementation of elaborate computer he w the result of his eleven-year research programs. As an important byproduct project on the nature of gravity. One of the of these developments, in 2005, the ﬁrst the marvelous things of Einstein’s work try complete prediction of the shape of the was the discovery of a set of equations he radiation emitted by the collision of two trea believed encoded how we should describe black holes was made, which will in turn at a the evolution of everything in the universe, serve for present and future telescopes from the motion of the planets through say to ﬁnally put to direct test one of the Figure 2: A triangle drawn on a sphere. The lines the creation and death of stars to the Big from predictions of Einstein’s theory: that space that connect the vertices are not straight, but arcs Bang. It is an elegant and beautiful theory. of circles. The sum of the internal angles of the and time can wrinkle back and forth like a triangle is greater than 180 . The laws of geometry of h Despite these qualities, Einstein’s equation wave. In these next pages, we will examine onFigure a sphere 2: are different than the laws of geometry A triangle drawn on a the is very challenging solve. To this day, only use in everyday architecture. Einstein’s theory, black holes and tell the we sphere. The lines that connect a handful of solutions have been found. 
Coding Einstein's Legacy

But this is also a tale of the challenges faced by physicists in putting Einstein's ideas inside a computer. Thanks to computers and a whole new set of computational techniques developed in the last six years, we are finally discovering a previously hidden world of phenomena in the universe of black holes and neutron stars. Scientists are now able to describe the astrophysical dynamics of two black holes dancing with one another and the dramatic event of their collision. The blast of two black holes smashing into each other is brighter than all the stars of the visible universe combined! Finally, we will learn how astronomers intend to see, in the near future and with new telescopes, the most violent astrophysical phenomena of the universe.

Figure 1: Albert Einstein sitting alone at the Institute for Advanced Study. (From Life Magazine Archives, Nov. 1947, © Alfred Eisenstaedt.)

Einstein's Legacy

We have a daily interaction with gravity: we call it the phenomenon that makes stuff fall to the ground. But to understand gravity is another story. Where does it come from? Why is there gravity in the first place? These questions were tackled by Albert Einstein, and his answers are the reason he is considered one of the great minds of the 20th century.

To explain how we understand gravity nowadays, let us start with the observation that when we measure distances on Earth
times thesum Egyptians, insince spacethe is a ancient straight line andofthe of the you are no longer pushing it, and if there we do so by employing theistraditional rules were no friction, it would roll indeﬁnitely. internal angles of a triangle 180 degrees. 2 More of Euclid’s geometry. That is to say that However, if the rules of geometry weremove to arou precisely, the planets Why is this true? Because these rules work: the path of smallest distance between two architects and engineers employ them change such that paths could no longer be points intospace a straightand lineinterior and the straight lines, Einstein conjectured that the everyday designis buildings sum of the internal angles of a triangle is ball would follow the new curved paths in spaces. Nevertheless, mathematicians 180 degrees. Why is this true? Because since Gauss have found that one can build the modiﬁed geometry. For instance, if the these rules work: architects and engineers mathematically consistent geometries in geometry of the world were to be that of a employ them everyday to design buildings sphere, objects would not move in straight which Euclid’s rules do not hold. A simple and interior spaces. Nevertheless, mathlines when left alone, but in circles. Well, example is the geometry on the surface of ematicians since Gauss have found that a one sphere. In this case, there is no way you can build mathematically-consistent it just happens that the planets left alone can move between two Euclid’s points onrules a straight geometries in which do not in space move in circles around the Sun! line. Another property of the geometry onon Could it be that the reason planets have this hold. A simple example is the geometry
the surface of a sphere. In this case, there is no way you can move between two points on a straight line!1 Another property of the geometry on the sphere is that if we draw a triangle (we connect three points on the sphere
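The spherical-triangle test can be tried numerically. The short script below is my own illustration (not from the article): it builds the "octant" triangle, whose vertices are the north pole and two points a quarter-turn apart on the equator, and sums the angles at the vertices.

```python
import math

def angle_at(a, b, c):
    """Angle (in degrees) at vertex a of the spherical triangle abc,
    where a, b, c are unit vectors on the sphere."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    def normalize(u):
        n = math.sqrt(dot(u, u))
        return [x / n for x in u]
    # Project b and c onto the plane tangent to the sphere at a;
    # the angle between the projections is the triangle's angle at a.
    tb = normalize([b[i] - dot(a, b) * a[i] for i in range(3)])
    tc = normalize([c[i] - dot(a, c) * a[i] for i in range(3)])
    return math.degrees(math.acos(max(-1.0, min(1.0, dot(tb, tc)))))

# The "octant" triangle: two points on the equator and the north pole.
A, B, C = [1, 0, 0], [0, 1, 0], [0, 0, 1]
total = angle_at(A, B, C) + angle_at(B, A, C) + angle_at(C, A, B)
print(total)  # about 270: well above Euclid's 180 degrees
```

Each angle of this particular triangle is 90 degrees, so the sum is 270 degrees, exactly the kind of excess over 180 that Gauss's surveying experiments were looking for.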
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
But how does the geometry change? Einstein eventually found an answer after nine or so years thinking about this. He approached the question by demanding a series of reasonable requirements on the equations that he wanted. First, the equations could not depend on the choice of the coordinates in the map of the geometry when localizing objects. This makes sense because a treasure map in the universe can tell us that a star A is at 9 degrees longitude and 10 degrees latitude, or it can say that A is found somewhere on the straight line starting from Betelgeuse and ending at Venus; regardless of how we decide to describe the localization of the star, the distance from star A to the Earth should be the same. But if the shortest path from the Earth to the star A is an arc of a circle, this will correspond to a different distance when compared to a straight line. In other words, once we pick a geometry, the distances should not depend on how we map it; but in comparing different geometries, distances are allowed to change. So what can be used that is intrinsic to the geometry, but not to the map we choose? Mathematicians had known since the time of Gauss that the quantity with this property is what they call the curvature of the geometry. Intuitively, the curvature of a geometry measures by how much two persons moving side by side with the same constant speed may drift apart or get closer. For instance, in a plane geometry, being at a constant speed makes you move in a straight line, hence two persons moving side by side on straight lines will maintain a constant distance. We then say that the plane has zero curvature. Whereas on a sphere, as they move along circles they also move away from or closer to each other, so we say that the sphere has a non-zero curvature. Fig. 3 illustrates the idea. This nicely fits our intuition that a curved surface forces objects that are moving on top of it to follow non-straight paths.

Figure 3: Geometries with different curvature. On a plane (zero curvature), two persons that start to move parallel to each other will continue to be on parallel lines. On a sphere, they eventually meet. On a saddle, they can become very distant as they move.

So Einstein set up his equation demanding that gravity involve the geometry only via its curvature. This, however, is only half of the story: what about the matter, the stars, planets and everything else inside the universe? Einstein reasoned that it was the matter of the universe that shaped the geometry, so he completed his equation by equating the curvature of the geometry on one side to the energy content of the universe on the other. A star with a much larger mass than the planets near it would curve the space around it to the point that "moving straight ahead" becomes moving in circles around the star. Empty space behaves as a flat geometry, but the presence of a massive star bends it, in a manner as if space itself were a rubber sheet and the star were a heavy bowling ball placed on top of it.

In science, when we say that we have a theory of something, it means we have found an explanation of facts. Scientific theories are a set of fundamental principles, a logical framework, from which facts can be understood and predictions of new phenomena can be made. Einstein's theory has been put to several tests ever since it was proposed, and so far it has passed all of them. One of the most remarkable examples of such tests was a discovery made in 1974 by the American astronomers Russell Hulse and Joseph Taylor Jr. They found two stars orbiting each other that were getting closer to one another at the exact same rate predicted by Einstein's theory. They were awarded the Nobel Prize in Physics for this discovery in 1993.

But why are the stars found by Taylor and Hulse approaching one another? The reason is that in Einstein's gravity, objects can make ripples in space that will move away from them just like waves on the surface of water move away from the point where a rock fell in. The ripples in space, however, are of the geometry itself, and for that they are called gravitational waves. As a result, they carry away energy from the system. As the stars lose energy, they slowly get closer.

Figure 4: When two stars orbit each other, they periodically change the space and time around them. The oscillations in space move outward away from the stars, removing energy from the system. This phenomenon is analogous to waves on the surface of water, and for this reason they are called gravitational waves. (© T. Carnahan, NASA/GSFC.)

Even though we have seen from the Taylor-Hulse binary a strong case for the existence of gravitational waves, this phenomenon predicted by Einstein's theory has never been seen directly. In other words, we have not yet detected a wave-oscillation of gravity. It turns out that stars orbiting each other produce faint oscillations that are impossible to detect. If we would like to confirm Einstein's theory by searching for the missing gravitational waves, we must look elsewhere: black holes.
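The statement that losing energy brings the stars closer can be made concrete with Newtonian mechanics: a circular two-body orbit of separation a has total energy E = -G m1 m2 / (2a), which is negative, so radiating energy away makes E more negative and a smaller. The sketch below is my own illustration; the 1.4-solar-mass values and the initial separation are stand-in numbers, not figures from the article.

```python
G = 6.674e-11          # gravitational constant, SI units
M_SUN = 1.989e30       # kg
m1 = m2 = 1.4 * M_SUN  # assumed neutron-star masses (illustrative)

def separation(E):
    """Separation of a circular Newtonian orbit with total energy E < 0."""
    return -G * m1 * m2 / (2.0 * E)

a0 = 1.0e9                      # illustrative initial separation, metres
E0 = -G * m1 * m2 / (2.0 * a0)  # orbital energy is negative
E1 = E0 - 0.01 * abs(E0)        # waves carry away 1% of |E| (illustrative)
print(separation(E1) < a0)      # True: losing energy shrinks the orbit
```

This is the bookkeeping behind the Taylor-Hulse observation: the measured shrinking of the orbit matches the energy that Einstein's theory says the gravitational waves must carry off.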
Black Holes
The idea that every massive object in Nature exerts a gravitational attraction on others naturally led scientists to speculate whether there could be something like a star so heavy that not even light could escape its gravitational pull. If one shoots objects upwards from the surface of the Earth with increasing speed, eventually an object will be moving so fast that it will be able to escape the gravitational pull of the Earth and move indefinitely away into space. The velocity needed for an object to escape the gravitational pull of a planet or a star is called the escape velocity. Following Newtonian physics, one can conclude that if all the atoms of the Sun were compressed into a radius 4 million times smaller, then its escape velocity would be higher than the speed of light. The result would be a massive black void in space, since light would not be able to escape the Sun! The star would then be a black hole. The effect goes beyond the darkness of visible light: radio signals, microwaves and so forth also propagate at the speed of light and hence would be trapped at the surface of the black hole. A person inside a black hole would not be able to send any signals of his or her existence to the outside world. He or she would also not be capable of escaping: since nothing moves faster than light in the known universe, nothing can escape the gravitational force of this object. Everything that falls into a black hole is prevented by gravity from ever leaving.

That such an object could exist was thought to be a mild amusement by many scientists for a long time. The story started to become more serious when physicists unveiled the mechanism that keeps two objects from occupying the same place: the exclusion principle. In general, it was thought, electrons in atoms will relax to the smallest orbit allowed near the protons of the nucleus. However, if all electrons could do that, then their electric repulsion resulting from being in the same orbit would make many-electron atoms unstable. To circumvent this issue, the Austrian physicist Wolfgang Pauli postulated that two electrons in the universe could not have the same state of motion. An electron can be moving around a proton in the closest orbit allowed, and then it can spin around its own axis. But an electron can only spin clockwise or counter-clockwise, therefore only two electrons can be placed in an atom moving both in the path closest to the proton (each spinning in opposite directions). A third electron added to a two-electron atom then has to move to the next closest proton orbit in order to satisfy the exclusion principle.

Nonetheless, the exclusion principle cannot be an infinite repulsive force. This was first realized in 1928 by Subrahmanyan Chandrasekhar, at the time a graduate student at Cambridge University in England. He pointed out that since nothing is allowed to move faster than the speed of light, the repulsion force of the exclusion principle should not impart a speed to the electrons higher than the speed of light. Chandrasekhar showed that under this assumption, a star heavier than about one and a half times the mass of the Sun would suffer a gravitational force created by its own mass bigger than the maximum force of the exclusion principle. This mass of one and a half times the mass of the Sun is now called the Chandrasekhar limit. While the star is burning hydrogen nuclei to form heavier elements, the energy released by nuclear fusion keeps it from being smashed by gravity. However, as soon as all the nuclear combustible is used, gravity will act alone against the exclusion principle and will win, resulting in an object that can shrink to a point! If a star with a mass bigger than the Chandrasekhar limit shrinks to a radius of about 4.5 km, the escape velocity near its surface will be bigger than the speed of light. A black hole then should form.

Since the work of Chandrasekhar, the question of whether or not black holes existed in nature remained controversial until the mid-1990s. Most scientists, including myself, became convinced of the existence of black holes after seeing precise measurements on the motion of stars and interstellar gas. The first very compelling case was a discovery in 1994 made with the Very Long Baseline Array radio telescope. The astronomers William Watson and Bradley Wallin at the Department of Physics of the University of Illinois at Urbana-Champaign showed that a gas cloud in a nearby galaxy, known as Messier 106, was moving in an elliptical orbit around a small black void. Because the orbit of the cloud was so precisely elliptical, it could only be that it was moving around an object that was about the size of a star or smaller. But given the mass of the cloud and its orbit, the mass of the object that was exerting the gravitational force was calculated to be about 40 million times the mass of the Sun! Such an object is safely above the Chandrasekhar limit. Moreover, the object was roughly spherical and did not emit light. The conclusion was inevitable: Messier 106 has a black hole in its core.

Figure 5: A photograph of Messier 106. The motion of a gas cloud in the center of this galaxy was the first strong evidence for the existence of black holes. (© Sloan Digital Sky Survey.)

Figure 6: A photograph of the center of our galaxy made with the W. M. Keck Telescope. There is an object labeled Sgr A* that does not emit light but is surrounded by many stars. It is the supermassive black hole that lives in the center of the Milky Way. (© W. M. Keck Observatory / UCLA.)
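The compressed-Sun estimate is easy to reproduce. Setting an object's kinetic energy equal to its gravitational binding energy gives the Newtonian escape velocity v = sqrt(2GM/R). The script below (my sketch, with rounded constants) checks that shrinking the Sun's radius by the factor of 4 million quoted above pushes the escape velocity past the speed of light.

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # kg
R_SUN = 6.957e8    # m

def v_escape(mass, radius):
    """Newtonian escape velocity: (1/2) m v^2 = G M m / R."""
    return math.sqrt(2 * G * mass / radius)

print(v_escape(M_SUN, R_SUN) / 1e3)  # ~618 km/s for the Sun as it is
v = v_escape(M_SUN, R_SUN / 4.0e6)   # Sun squeezed 4 million times smaller
print(v > C)                         # True: light could no longer escape
```

With the quoted compression factor the escape velocity comes out around four times the speed of light, comfortably confirming the article's "black void" argument.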
The most direct evidence of a black hole to this day is the observation of the center of the Milky Way. It was discovered in 1974 by the astronomers Bruce Balick and Robert L. Brown that there existed some astrophysical object that was emitting a lot of radio signals in the center of our galaxy. The object became known as Sagittarius A* (pronounced "A-star"), or Sgr A*. Improved resolution of images of this region showed that Sgr A* was black, not emitting any light, but surrounded by many stars. From 1992 to 2002, a group led by Rainer Schödel of the Max Planck Institute for Extraterrestrial Physics in Garching, Germany, made observations of a star's orbit around Sagittarius A*. In a paper published in Nature on October 17th, 2002, they showed that the star was in an elliptical orbit (Fig. 7). The shape of the orbit implied that Sgr A* 1) has a mass of about 4 million times the mass of the Sun and 2) has a radius smaller than 26 thousand times the radius of the Sun. The escape velocity of such an object is bigger than the speed of light, which indicates it must be a black hole. There are also other sources of evidence for black holes, which lead us to believe that there exists at least one in the center of every galaxy. The existence of intense bright centers of many galaxies, which are called Active Galactic Nuclei (AGN), requires the existence of a small, compact, very heavy object that drags electrons from the nearby gases to create very intense light emissions. The best candidates for many of these AGNs are black holes. There are also systems in which a star is seen to be orbiting another object and emitting a great deal of X-rays. The best-known example is Cygnus X-1, which we now believe to be a star orbiting a black hole in the Cygnus constellation. The reason why the companion of the Cygnus X-1 star is likely to be a black hole is that the matter of the star is detected to be falling into the companion at a speed very close to that of light. No known star can have such a dragging effect, but a black hole can.
SPRING 2012

Finally, in 2005, Warren Brown and collaborators at Harvard University discovered a star in the Milky Way with a speed of 850 km/s, which is higher than the escape velocity of our galaxy. This means that the star was literally ejected from the galaxy by a giant astrophysical slingshot. The existence of these slingshots was predicted years before, in 1988, by the theoretical physicist Jack Hills at Los Alamos Laboratory. He showed that when two stars bound together pass by a black hole, one of them can be captured, falling into the black hole, while the second suffers a change in its velocity that can be as high as 4,000 km/s. This phenomenon is analogous to what happens when an ice skater holding a ball decides to throw the ball: he or she moves in the opposite direction of the pitch.
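The ice-skater analogy is just conservation of momentum: the pair starts at rest, so the skater's recoil momentum must cancel the ball's. A toy version with made-up masses (my illustration, not numbers from the article):

```python
# Before the throw, skater + ball are at rest, so total momentum is zero:
#   m_skater * v_skater + m_ball * v_ball = 0
m_skater, m_ball = 70.0, 5.0  # kg (illustrative)
v_ball = 10.0                 # m/s, speed of the thrown ball
v_skater = -m_ball * v_ball / m_skater
print(v_skater)  # about -0.71 m/s: the skater drifts opposite to the pitch
```

In the Hills mechanism the "ball" is the captured star falling into the black hole, and the enormous momentum it carries is what kicks its companion out at thousands of kilometres per second.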
It is Hard to Simulate a Black Hole
Since the mid-90s, when evidence for black holes became strong and we learned that all galaxies host at least one, astrophysicists started to focus more on understanding what these objects can do. Can black holes influence the formation of stars in galaxies? Does the emission of light by particles falling into gravity's trap demonstrate that these objects are truly described by Einstein's theory? One important line of research is this: what happens when two black holes collide?

Figure 7: The measured orbit of one of the stars that live in the center of the Milky Way. The black dots with crosses are the positions of the star. These are labeled with the year in which the star was found to be there. The location of the central black hole is indicated. (R. Schödel et al., Nature 419, 694, 17 Oct. 2002.)

It was always expected that when two black holes fuse together they release a giant amount of energy. Since these are astrophysical objects with the strongest gravitational force possible, their interaction was expected to produce a very large amount of gravitational waves. The direct detection of an oscillation in space became a major scientific challenge. Also during the 90s, a joint collaboration of scientists from Caltech and MIT was succeeding in implementing an experiment called the Laser Interferometer Gravitational Wave Observatory, or LIGO, to detect gravitational waves. However, deriving the exact shape of a gravitational wave from Einstein's theory turned out to be a very hard problem.

A general solution of Einstein's equations of gravity for two massive bodies is to this day not available. One of the reasons is that even though a solution for one isolated black hole is already known, Einstein's equations do not lend themselves to a simple addition property: we cannot add two black hole solutions together to obtain another solution of Einstein's equations. This means that two black holes close to one another curve the space around them in a way that is dramatically different from having just one. Both theoretical physicists and mathematicians have spent a great deal of time trying to figure out ways to solve Einstein's equation, but no general method has emerged.

With the advent of computers, physicists turned to the possibility of programming one to solve Einstein's equations. The first attempt in this direction was made in 1964 by Susan Hahn from IBM in New York City and Richard Lindquist from Adelphi University in Long Island, New York. They ran a program on an IBM 7090 mainframe to solve Einstein's equations, starting with an approximate solution of two black holes very far away but moving towards a head-on collision. When the holes are far apart, one can approximate the space to be that of empty space slightly curved by the two holes. As the two holes approach one another, space becomes very curved near them and can no longer be described as empty. Defining the curvature of the space was the task Hahn and Lindquist programmed the computer to calculate. They attempted to start with a simplified version of the problem by restricting the holes to move in a two-dimensional plane instead of three dimensions.

How Hahn and Lindquist wrote their program can be easily understood by an analogy to digital filming. The real world has an infinite number of points in space, but a digital camera recreates this world by splitting it into a grid of points, the pixels. The camera then records the color and intensity of light at each of these points of the grid. If there is a large number of points, then at some distance from the picture it will look very smooth to our eyes, instead of a collection of discontinuous colored points. We then say that the grid of pixels approximates the real world for the purposes of a certain picture size. To create a movie, the camera records a series of instantaneous photographs at a constant rate. For instance, standard movies are displayed on computers with 24 to 30 pictures per second.
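The pixel analogy can be made quantitative. Sample a smooth curve on a grid and the gap between the true curve and the straight segments joining neighbouring samples shrinks rapidly as the grid gets finer. The sketch below is my own illustration (not the Hahn-Lindquist code), using sin(x) as the stand-in for a smooth field:

```python
import math

def max_midpoint_error(n):
    """Sample sin(x) at n grid points over [0, pi] and measure how far the
    straight line between neighbouring samples strays from the true curve."""
    xs = [math.pi * i / (n - 1) for i in range(n)]
    worst = 0.0
    for i in range(n - 1):
        mid = 0.5 * (xs[i] + xs[i + 1])
        linear = 0.5 * (math.sin(xs[i]) + math.sin(xs[i + 1]))
        worst = max(worst, abs(math.sin(mid) - linear))
    return worst

coarse, fine = max_midpoint_error(10), max_midpoint_error(100)
print(coarse > fine)  # True: more pixels, smoother approximation
```

Tenfold more grid points cuts the worst error by roughly a factor of a hundred here, which is why a sufficiently fine grid "looks smooth" and can stand in for continuous space.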
Similarly, Hahn and Lindquist split the two-dimensional space in which the black holes were moving into almost 8,000 pixels separated by a distance equal to one hundredth of the radius of the holes. They were interested in creating the film showing the evolution of the two holes according to Einstein's equations. This is done in steps in time, like the frames per second of a digital movie. In the first frame, at each pixel they used Einstein's equations to calculate the space curvature. With the curvature of space from the first frame, one can calculate the force of gravity acting on each hole, which is then used to calculate where the holes will move. The holes then move to their new positions predicted by Einstein's equations in the second frame, and the process of computing the curvature is repeated, which then gives the position of the holes for the third frame. This goes on until the full film is created. After three hours of calculation by the IBM computer, the black holes were a distance apart equal to ten times their radius. It was at this point that the computer crashed: numerical errors were dominating the computation, and the numbers were out of control. After the work of Hahn and Lindquist, other physicists during the 1970's attempted to solve Einstein's equations in a computer to no avail. Even as computers improved, they were also always breaking down when faced with Einstein's theory!

The National Science Foundation established in 1997 a Grand Challenge grant to support solving Einstein's equations on a computer. An understanding of why the computers were crashing finally came with a groundbreaking work in 2002 by the physicists Gioel Calabrese, Jorge Pullin, Olivier Sarbach, and Manuel Tiglio, then at the Louisiana State University. They proved that Einstein's equations, once written in a computer, could be strongly chaotic depending on the choice of the grid used. The chaotic behavior means that small random numerical errors, such as rounding errors, were being amplified by Einstein's equations, leading the computers to crash. They showed that the amplification was independent of the resolution used; that meant that no matter how small the distances between points on the grid were, any numerical error would grow to dominate the whole calculation after a few steps, meaning after a few frames of the digital movie analogy.

We can understand this discovery in the analogy of a movie camera. Suppose that we are trying to shoot a film of a green screen that does not move. The camera will capture the first frame, which will be mostly green. However, there are always random errors in the filming process; for instance, one pixel in the picture may come out red. In digital filming and photography this is called "noise." Einstein's equations are such that every single small red dot in an otherwise green grid may be amplified to a giant red smudge until the picture becomes entirely red!

But since the problem was related to the way in which the grid was constructed, choosing different grids would get rid of the instabilities. What this means is that it was necessary to slice space in an astute manner to solve the problem. As an example, if the pixels are all at the same distance from one another, this is a rectangular grid, in which we move from one pixel to another in space along a line. But one can also move along a circle or an ellipse, giving rise to the so-called circular polar and elliptical grids (Fig. 8).

Figure 8: Three ways in which space can be pixelated, from left to right: rectangular, circular polar or elliptical grids. Each pixel is depicted as a blue dot on the grid.

After the discovery of Calabrese and his group, physicists sought alternative ways of splitting space to obtain a realization of Einstein's equations free of chaos. This finally led, less than three years later, to the first successful computer simulation free of numerical instabilities. The solution is more complicated than a simple static rectangular, elliptical or circular grid: the grid actually needs to be continuously changed as the simulation progresses.

Refinement and Harmony

In 1999, a young graduate student, Frans Pretorius of the University of British Columbia, started his Ph.D. work on the topic of solving Einstein's equations in a computer. That year was in the midst of increasing interest in the solution to this problem. Since at the time the discovery of Calabrese and collaborators had yet to be made, Pretorius and his advisor, Matthew Choptuik, decided to pursue the idea of increasing the resolution of the space pixelation near the black holes, hoping this could purge the computer crashes. The technique was already known in simulations of fluids used in problems of aerodynamics and engineering, but its implementation in gravitational physics was still lacking. They developed a computer code in which the density of pixels across the grid continuously changes with time, to make sure that at each instant the regions where the curvature of space is more pronounced have higher resolution than everywhere else. The curvature is bigger near the black holes, so the code continuously changes the resolution of the grid, keeping it high near the holes. This is known as adaptive mesh refinement. For his thesis work, Pretorius was awarded in 2003 the Metropolis Prize for best dissertation in Computational Physics from the American Physical Society.

Figure 9: Adaptive mesh refinement increases the number of pixels near the black holes, where resolution is more critical. (© Max Planck Inst. for Gravitational Physics, Potsdam, Germany.)

Pretorius went on to the California Institute of Technology in Pasadena as a post-doctoral fellow to continue this research. After the discovery of the chaotic nature of Einstein's equations dependent on the choice of grid, Pretorius started to study how to select the best pixelation.
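A one-dimensional cartoon of adaptive mesh refinement (my own sketch, not Pretorius's code) places grid points densely near the "hole" and lets the spacing grow with distance from it:

```python
def adaptive_grid(hole_x, x_min=-1.0, x_max=1.0, dx_min=0.001, slope=0.1):
    """1D sketch of adaptive refinement: the spacing between grid points
    grows with distance from the 'black hole' at hole_x, so resolution
    is highest exactly where the curvature would be strongest."""
    xs, x = [], x_min
    while x < x_max:
        xs.append(x)
        x += dx_min + slope * abs(x - hole_x)  # fine near the hole, coarse far away
    xs.append(x_max)
    return xs

grid = adaptive_grid(0.0)
near = sum(1 for x in grid if abs(x) < 0.1)      # pixels near the hole
far = sum(1 for x in grid if 0.8 < x < 1.0)      # same-width window far away
print(near > far)  # True: many more pixels where they matter
```

In a real code the holes move, so the function above would be re-evaluated at every frame, continuously dragging the high-resolution region along with them, which is the "continuously changed grid" described in the text.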
Image courtesy of Leonardo Motta
Figure 9: Adaptive mesh refinement increases the number of pixels near the black holes, where resolution is more critical (© Max Planck Inst. for Gravitational Physics, Potsdam, Germany).
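The idea in Figure 9 can be sketched as a toy refinement loop: start from a coarse grid and repeatedly split only those cells where the quantity being tracked varies sharply. Everything below is illustrative; the `curvature` stand-in and the threshold are invented for the example and are not the refinement criteria used in actual simulations.

```python
def refine(cells, field, threshold):
    """One pass of toy adaptive mesh refinement: split any cell whose
    field varies too sharply across it. `cells` is a list of
    (left, right) intervals; `field` is the function being resolved."""
    new_cells = []
    for left, right in cells:
        mid = 0.5 * (left + right)
        if abs(field(right) - field(left)) > threshold:
            new_cells += [(left, mid), (mid, right)]   # split in two
        else:
            new_cells.append((left, right))            # keep as is
    return new_cells

def curvature(x):
    """Toy stand-in for spacetime curvature, peaked at x = 0.5."""
    return 1.0 / (abs(x - 0.5) + 1e-3)

cells = [(i / 10, (i + 1) / 10) for i in range(10)]    # coarse grid
for _ in range(5):                                     # refine repeatedly
    cells = refine(cells, curvature, threshold=5.0)

widths = [r - l for l, r in cells]
print(len(cells), min(widths), max(widths))  # fine near the peak, coarse far away
```

After a few passes the cells near the peak are tiny while the distant ones keep their original size, which is exactly the pixel distribution Figure 9 shows around the black holes.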
Image courtesy of Leonardo Motta
Figure 10: Complete simulation of two black holes orbiting one another until they merge into a single hole. Black holes are shown as grey spheres, while the amplitude of the gravitational wave disturbance on space is shown in color, from the strongest (green) to the weakest (blue). Empty undisturbed space is shown as dark blue. Top row: the black holes are approaching in a spiral motion until the curvature of the space between them is no longer the same as empty space. In the last picture of the top row, the space near the holes is already highly curved and oscillatory. Middle row: as the two holes spiral down to merge, they emit several very strong bursts of radiation, signaling that the merger is about to occur. Bottom row: the merger occurs and the two holes smoothly amalgamate into a single hole, exploding in gravitational waves, after which a single remnant rotating black hole is seen. Full video available at http://numrel.aei.mpg.de/images (© C. Reisswig, L. Rezzolla (simulation), M. Koppitz (visualization), Max Planck Institute for Gravitational Physics, Potsdam, Germany).
Pretorius then came across an interesting work written in 2001 by David Garfinkle of Oakland University in Michigan. Not knowing of the detailed analysis of Calabrese and collaborators, Garfinkle reasoned, following intuition, that if the coordinates of the grid itself were to follow a wavelike equation, then numerical instabilities could be converted into oscillations that would not grow in time. However, he already knew the simplest approach could not work, because a simple wave-like equation for the coordinates could itself have numerical instabilities. Hence he proposed a generalization of the wave equation, free of such problems. Garfinkle then wrote a computer program that solved Einstein's equations in the presence of a single heavy blob of mass and saw no numerical instabilities coming from the grid. This became known as the generalized harmonic coordinates. Could it be that Garfinkle's method was the guarantee for the complete avoidance of chaotic behavior of Einstein's equations? The answer finally came three years later, on December 23, 2004, when Pretorius published a paper showing that no chaotic behavior was generated by the grid of generalized harmonic coordinates in the case of black holes. He then changed the wave equation of the coordinates a little bit to improve the accuracy of the computer simulation. This suggested that the problem of numerical instabilities could after all be solved with a slightly modified version of Garfinkle's method (15). In the following months, Pretorius worked on writing a computer program to implement this new idea together with his previous methods of adaptive mesh refinement. On the 4th of July of 2005, he stunned the scientific community with the very first 3D simulation of two black holes moving together, then orbiting each other several times until the emission of radiation brought both to a complete collision. He ran his simulation on a cluster of sixty-two Pentium III (850 MHz) Intel processors with 512 MB of RAM each. At last, the problem of coding Einstein's equations was solved.

From Pretorius's simulation, we learned that about 5% of the total initial mass of the two holes is converted by the fusion process into energy in the gravitational waves. The amount of energy released per second in gravitational waves was one million billion billion (10²⁴) times the energy per second in light released by the Sun. In the observable universe we can see roughly 100 billion galaxies, each with around one to ten billion stars, which means that we can observe a thousand billion billion stars (10²¹). Our Sun is known to be an average type of star. Thus, a single pair of merging black holes is an event brighter in gravitational waves than all the stars of the visible universe are in light combined. It is the single most explosive event in the universe known so far. If two black holes with the mass of Sgr A* merge, they will burst into gravitational waves within
roughly 16 minutes. The process is depicted in Fig. 10, which shows the result of an actual computer simulation.
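The comparison above can be checked with back-of-the-envelope arithmetic using only the figures quoted in the text, plus one labeled assumption: the black hole masses in the last step are invented for illustration.

```python
# Figures quoted in the text:
galaxies = 1e11             # ~100 billion galaxies in the observable universe
stars_per_galaxy = 1e10     # upper end of "one to ten billion" stars each
total_stars = galaxies * stars_per_galaxy        # ~1e21 stars
merger_brightness = 1e24    # merger output, in units of the Sun's light output

# Even crediting every star with the Sun's brightness, one merger
# outshines the whole visible universe by roughly a thousandfold:
print(total_stars)
print(merger_brightness / total_stars)

# Energy budget for an illustrative pair of black holes. The 5% figure
# is from the text; the 10-solar-mass value is assumed for illustration.
M_SUN = 2.0e30              # kg, approximate mass of the Sun
C = 3.0e8                   # m/s, approximate speed of light
radiated = 0.05 * (2 * 10 * M_SUN) * C**2        # E = 0.05 * M_total * c^2
print(radiated)             # joules
```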
Was Einstein Right, After All?
Pretorius's work made available a detailed shape of the gravitational wave emitted by coalescing black holes. This in turn will be used in the search for gravitational waves. At this point, you may ask: but have black holes collided in nature? The answer is yes. This is because astronomers have observed that galaxies very routinely meld. It is seen that most galaxies have by now merged at least once with another galaxy. Indeed, our own galaxy is right now in the process of merging with its satellite companion galaxies, the two Magellanic Clouds. When galaxies merge, their central supermassive black holes will come near each other and form a binary system. Dissipation caused by the interstellar gas around them will bring both together until they collide. With the present rate of galactic mergers, it is expected that between three and two hundred black hole binaries merge every year. If at least one of these can be seen from Earth, we will have finally detected the oscillations of space and time predicted by Einstein.

Nevertheless, gravity is very weak when it is not related to heavy stuff like a billion-solar-mass black hole. Even though our bodies are kept in contact with the Earth due to the gravitational pull of the planet, we can easily raise our arms and small objects against the work of gravity. How strong is the gravitational oscillation caused by a black hole merger on a nearby terrestrial object? Well, not much. If a merger occurs, a typical terrestrial object will move a billionth of a billionth of a meter (10⁻¹⁶ cm). This is about one million times smaller than the radius of the hydrogen atom!

Image courtesy of Leonardo Motta

Figure 11: Schematics of a light interferometer.
Measuring this type of small deviation is an outstanding technological challenge. It seems utterly absurd to even speak of measuring the motion of a macroscopic object to a fraction of the radius of an atom, but light, unlike matter, is not granulated in space, and thus small deviations in light signals can in principle be seen at arbitrarily small length scales. This is the concept behind LIGO, a joint collaboration started in the late 1980s between physicists Rainer Weiss, from MIT, and Ronald Drever and Kip Thorne, from Caltech. The driving mechanism behind LIGO is the light interferometer. In an L-shaped structure, a mirror is placed at the end of each leg of the L, and a semi-transparent material is placed at the vertex (see Fig. 11). Laser light is shot through the vertex. Half of it is transmitted straight to the mirror in front, while the other half is reflected towards the other mirror. The mirrors reflect the light back through the same path, and the re-combined light ray is then seen on an adequate camera (detector). Light is an electromagnetic wave, which means it is a wave-like oscillation of electricity and magnetism. The electric oscillation is picked up by the camera, and its pattern can then be seen on a computer. When left alone, the system will show the natural oscillation frequency of the light source used. But when a gravitational wave hits the Earth, space itself will contract and expand at the frequency of the gravitational wave, making the path of the light oscillate. This is a new oscillation on top of the natural one, and it causes the flashes of light detected in the camera to have an extra swinging. This extra fluctuation can be identified as a signal of the gravitational wave. Still, measuring a deviation in the light path of 10⁻¹⁶ cm is a great challenge. The LIGO experiment started with the construction of two interferometers in 2000, one in Livingston, Louisiana, and the other on the Hanford Nuclear Reservation near Richland, Washington.
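The readout described above can be sketched numerically under simplifying assumptions: an idealized, noise-free Michelson interferometer held at mid-fringe, with the strain amplitude and wave frequency below invented for illustration (the 4 km arm length and the 1064 nm laser line are LIGO's actual figures).

```python
import numpy as np

WAVELENGTH = 1.064e-6   # m; LIGO's infrared laser line

def detector_intensity(delta_L, operating_point=np.pi / 2):
    """Normalized light intensity at the camera of an idealized
    Michelson interferometer whose two arms differ by delta_L.
    The operating point is mid-fringe, where sensitivity is best."""
    phase = 4.0 * np.pi * delta_L / WAVELENGTH   # light runs each arm twice
    return 0.5 * (1.0 + np.cos(operating_point + phase))

# A passing gravitational wave of strain h stretches one arm and
# squeezes the other, so the arm-length difference oscillates:
ARM = 4.0e3                         # m, the length of each LIGO arm
h = 1e-21                           # illustrative strain amplitude
t = np.linspace(0.0, 0.01, 1000)    # 10 ms of an assumed 200 Hz wave
delta_L = 2.0 * h * ARM * np.sin(2.0 * np.pi * 200.0 * t)

signal = detector_intensity(delta_L)
swing = signal.max() - signal.min()   # the "extra swinging" in the text
print(swing)                          # tiny, but nonzero and periodic
```

The flicker is minute compared with the steady intensity, which is why the real instrument needs powerful lasers, seismic isolation, and years of noise hunting to pull it out.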
The project is funded by the National Science Foundation. The basic apparatus was completed shortly after construction started, and the first scientific tests of the machine were made in 2002. In June 2006, the LIGO collaboration established that the instrument was capable of measuring deviations as small as 10⁻¹⁶ cm in the path of light. In the fall of 2011, the instruments were closed for the next stage of updates, which includes several important improvements, such as a stronger laser beam and better control of seismic
effects. These refinements aim to extend the tiny sensitivity of 10⁻¹⁶ cm to a wide range of gravitational wave frequencies. The new band of sensitivity is expected to allow the detection of nearby black hole mergers. Both LIGO sites are scheduled to be ready for action in 2014. If the upgrade goes well, and if the experiment's sensitivity is confirmed, at least one to two additional years will be necessary to confirm whether or not gravitational waves can be found.
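Gravitational-wave physicists usually quote such a sensitivity as a dimensionless strain, the fractional change in arm length. Converting the quoted figure is a one-line calculation, assuming the displacement applies across a full 4 km LIGO arm:

```python
displacement = 1e-16 * 1e-2   # the quoted 1e-16 cm, converted to meters
arm_length = 4.0e3            # meters; each LIGO arm is 4 km long
strain = displacement / arm_length
print(strain)                 # a few parts in 1e22, dimensionless
```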
Outlook and Conclusion

It is now clear that the computational obstacles to solving Einstein's theory of gravity are settled. Nevertheless, the scientific exploration has just begun. Since the work of Pretorius, calculations have been extended in several directions for astrophysics. The complete evolution of the orbit of one black hole and a star, or of two stars, is one example. Another is the complete simulation of the evolution of nearby intergalactic gas during a black hole collision, or when two neutron stars merge. The latter is very important for understanding the astrophysical phenomenon of gamma-ray bursts, as merging neutron stars are currently speculated to be one of the sources of these events, though the evidence is far from conclusive. Simulations of Einstein's theory in a computer can lead in the future to a more detailed description of astrophysics. Physicists currently believe that many different phenomena in nature, such as nuclear reactions under very high pressures or superconductivity, can be described by mathematics identical to gravity. In the past, complicated gravitational models of these phenomena were out of reach, but now they are being explored in computer simulations. Still, the greatest achievement of numerical relativity for now is the provision of wave-forms for LIGO and other gravitational wave detectors. These will allow testing of Einstein's theory of gravity and may open a new field of gravitational-wave astronomy.

CONTACT LEONARDO MOTTA AT LEONARDO.DIAS.DA.MOTTA.GR@DARTMOUTH.EDU.

THIS SUBMISSION WAS AWARDED FIRST PLACE IN THE "SCIENCE SAYS" COMPETITION SPONSORED BY THE NEUKOM INSTITUTE FOR COMPUTATIONAL SCIENCE AT DARTMOUTH COLLEGE.
Images provided by the Neukom Institute.
What are we looking for?
The DUJS is open to all types of submissions. We examine each article to see what it potentially contributes to the Journal and our goals. Our aim is to attract an audience diverse in both its scientific background and interest. To this end, articles generally fall into one of the following categories:
This type of article parallels those found in professional journals. An abstract is expected in addition to clearly defined sections of problem statement, experiment, data analysis, and concluding remarks. The intended audience can be expected to have an interest in and general knowledge of that particular discipline.
A review article is typically geared towards a more general audience, and explores an area of scientific study (e.g. methods of cloning sheep, a summary of options for the Grand Unified Theory). It does not require any sort of personal experimentation by the author. A good example could be a research paper written for class.
Such an article may resemble a popular science article or an editorial, examining the interplay between science and society. These articles are aimed at a general audience and should include explanations of concepts that a basic science background may not provide.
The length of the article must be under 3,000 words.
If it is a review or a research paper, the article must be validated by a member of the faculty. This statement can be sent via email to the DUJS account.
Any co-authors of the paper must approve of the submission to the DUJS. It is your responsibility to contact the co-authors.
Any references and citations used must follow the Science Magazine format.
If you have chemical structures in your article, please take note of the American Chemical Society (ACS)'s specifications on
For more examples of these details and specifications, please see our website: http://dujs.dartmouth.edu For information on citing and references, please see: http://dujs.dartmouth.edu/dujs-styleguide Specifically, please see Science Magazine's website on references: http://www.sciencemag.org/feature/contribinfo/prep/res/refs.shtml
Dartmouth Undergraduate Journal of Science Hinman Box 6225 Dartmouth College Hanover, NH 03755 email@example.com
Article Submission Form Undergraduate Student: Name:_______________________________
Graduation Year: _________________
Research Article Title: ______________________________________________________________________________ ______________________________________________________________________________ Program which funded/supported the research ______________________________ I agree to give the DUJS the exclusive right to print this article: Signature: ____________________________________ Note: The Dartmouth Undergraduate Journal of Science is copyrighted, and articles cannot be reproduced without the permission of the journal. Please scan and email this form with your research article to firstname.lastname@example.org Faculty Advisor: Name: ___________________________
Please email email@example.com comments on the quality of the research presented and the quality of the product, as well as whether you endorse the student's article for publication. I permit this article to be published in the Dartmouth Undergraduate Journal of Science: Signature: ___________________________________ Visit our website at dujs.dartmouth.edu for more information