
Dartmouth Undergraduate Journal of Science SUMMER 2020




NO. 3



Hunting for Planet Nine by its Influence on Neptune

p. 8

Tripping on a Psychedelic Revolution: A Historical and Scientific Overview

p. 144

The Psychology of the Pandemic


Note from the Editorial Board

Journalism is a powerful endeavor, especially when practiced alongside science. It requires patience in data analysis and interpretation, dedication in learning the core scientific principles at the heart of the discipline, and spirit for the subject matter as a whole. By investing the time and energy these three ideals demand, writers are able to do justice to their subject while also making it accessible to an educated, albeit not specialized, audience.

This term, the Dartmouth Undergraduate Journal of Science saw another record number of submissions, exceeding expectations not only in quantity but also in breadth and diversity. The traditional individual articles continued, in which students wrote about topics of their choice, completing a thorough literature review followed by multiple weeks of drafting. Additionally, we had multiple group print articles – doubling in number from the pilot term in the spring – in which students voted on contemporary science topics and tackled the literature review and drafting process together, led by a board member. Learning how to read publications, understand the scientific principles that underlie a subject, and translate both into a cohesive piece accessible to all remained a continued focus of this journal.

Among the topics published in this edition are a timely investigation into the current state and future models of telemedicine by Chris Connors ’21, two original mathematical papers by Evan Craft ’20 featuring derivations of Einstein’s and Schrödinger’s equations, and a look into cellular autophagy and its implication in the onset of cancerous tissue by Zoe Chen ’23.
With this term being many writers’ second, third, or even fourth consecutive term writing for the journal, an emphasis was also placed on continuing lines of inquiry across multiple terms in order to better understand a general subject through a variety of different lenses. For instance, Dev Kapadia ’23 took a look into the contract research organizations (CROs) that are responsible for nearly all clinical trials run in the United States – an extension of the article on the economics of drug development that he contributed the term before.

This journal also features a whopping eleven group articles – one of which is an independent environmental analysis of trash accumulation as a result of currents and wind patterns by a group of junior and senior Dartmouth students: Ben Schelling ’21, Maxwell Bond ’20, Sarah Jennewein ’21, and Shannon Sartain ’21. Joining this article are an investigation of the genetic contribution to political affiliation, a critique of the relationship between capitalism and climate change, a look into the rapidly evolving genetic therapy landscape, and a powerful piece describing the use of computer algorithms and screening techniques to assist drug discovery and design, among others.

We as a board have been fortunate enough to see the growth of individual writers over their months of involvement at DUJS and are excited to witness their further growth as scientific journalists. Focus this term was placed on congruency of writing – ensuring a stepwise progression from background to an explanation of the subject matter in a contemporary context, inclusive of recent scientific findings – so that we as a journal act as advocates for our readers. This meant providing sufficient context for the topic, being up front about the implications of the findings, and then presenting those findings in a format that follows a logical line of reasoning. The summer allowed additional time to develop this skillset among our ranks of writers, and its application by both group and individual writers produced powerful pieces across the board.

Writers and editors alike put a tremendous amount of effort into developing these articles and conveying essential STEM concepts and findings. We all sincerely hope that you enjoy reading our works as much as we enjoyed researching, collaborating, and editing.

Sincerely,
Nishi Jain
Editor-in-Chief

The Dartmouth Undergraduate Journal of Science aims to increase scientific awareness within the Dartmouth community and beyond by providing an interdisciplinary forum for sharing undergraduate research and enriching scientific knowledge.

EXECUTIVE BOARD
President: Sam Neff '21
Editor-in-Chief: Nishi Jain '21
Chief Copy Editors: Anna Brinks '21, Liam Locke '21, Megan Zhou '21

EDITORIAL BOARD
Managing Editors: Anahita Kodali '23, Dev Kapadia '23, Dina Rabadi '22, Kristal Wong '22, Maddie Brown '22
Assistant Editors: Aditi Gupta '23, Alex Gavitt '23, Daniel Cho '22, Eric Youth '23, Sophia Koval '21

STAFF WRITERS
Dartmouth Students: Alex Gavitt '23, Anahita Kodali '23, Andrew Sasser '23, Anna Kolln '22, Anna Lehmann '23, Audrey Herrald '23, Ben Schelling '21, Bryn Williams '23, Carolina Guerrero '23, Chris Connors '21, Daniel Abate '23, Daniel Cho '22, Dev Kapadia '23, Dina Rabadi '22, Emily Zhang '23, Eva Legge '22, Evan Craft '20, George Shan '23, Gil Assi '22, Grace Lu '23, James Bell '21, Jenny Song '23, Jess Chen '21, Julia Robitaille '23, Kamila Zakowicz '22, Kamren Khan '22, Kay Vuong '22, Leandro Giglio '23, Maddie Brown '22, Maxwell Bond '20, Michael Moyo '22, Nephi Seo '23, Nina Klee '23, Roberto Rodriguez '23, Sam Hedley '23, Sarah Jennewein '21, Shannon Sartain '21, Sophia Arana '22, Sophia Koval '21, Sudharsan Balasubramani '22, Tara Pillai '23, Zoe Chen '23

Hampton High School (PA): Manya Kodali
Monta Vista High School (CA): Arushi Agastwar, Avishi Agastwar
Sage Hill School (CA): Michele Zheng
Waukee High School (WI): Sai Rayasam
University of Delaware (DE): Arya Faghri
University of Wisconsin Madison (WI): Timmy Davenport
University of Lincoln (Lincoln, England, UK): Bethany Clarkson

SPECIAL THANKS
Dean of Faculty
Associate Dean of Sciences
Thayer School of Engineering
Office of the Provost
Office of the President
Undergraduate Admissions
R.C. Brayshaw & Company

DUJS
Hinman Box 6225
Dartmouth College
Hanover, NH 03755
(603) 646-8714
http://dujs.dartmouth.edu
dujs.dartmouth.science@gmail.com

Copyright © 2020 The Trustees of Dartmouth College

Table of Contents

Individual Articles

Hunting for Planet Nine by its Influence on Neptune
Alex Gavitt '23, pg. 8


Racial Bias Against Black Americans in the American Healthcare System Anahita Kodali '23, pg. 18

Mechanochemistry – A Powerful and “Green” Tool for Synthesis Andrew Sasser '23, pg. 24

Fast Fashion and the Challenge of Textile Recycling


Arushi Agastwar, Monta Vista High School Senior, pg. 30

The Cellular Adhesion and Cellular Replication of SARS-COV-2 Arya Faghri, University of Delaware, pg. 36

Gasotransmitters: New Frontiers in Neuroscience Audrey Herrald '23, pg. 44

The Science of Anti-Aging


Avishi Agastwar, Monta Vista High School Senior, pg. 52

Algal Blooms and Phosphorus Loading in Lake Erie: Past, Present, and Future Ben Schelling '21, pg. 58

Differences in microbial flora found on male and female Clusia sp. flowers Bethany Clarkson, University of Lincoln (UK) Graduate, pg. 70

The Facial Expressions of Mice Can Teach Us About Mental Illness


Bryn Williams '23, pg. 78

How Telemedicine could Revolutionize Primary Care Chris Connors '21, pg. 84

Evidence Suggesting the Possibility of Regression and Reversal of Liver Cirrhosis Daniel Abate '23, pg. 90

CR-grOw: The Rise and Future of Contract Research Organizations



Dev Kapadia '23, pg. 96


Table of Contents Continued

Individual Articles

Preventative Medicine: The Key to Stopping Cancer in its Tracks
Dina Rabadi '22, pg. 104

Challenges and Opportunities in Providing Palliative Care to COVID-19 Patients


Emily Zhang '23, pg. 112

The Botanical Mind: How Plant Intelligence ‘Changes Everything’ Eva Legge '22, pg. 118

On the Structure of Field Theories I Evan Craft '20, pg. 128

On the Structure of Field Theories II


Evan Craft '20, pg. 136

The Modernization of Anesthetics Gil Assi '22, pg. 138

Tripping on a Psychedelic Revolution: A Historical and Scientific Overview with Dr. Rick Strassman and Ken Babbs Julia Robitaille '23, pg. 144


The Functions and Relevance of Music in the Medical Setting Kamren Khan '22 and Yvon Bryan, pg. 152

Meta-analysis Regarding the Use of External-Beam Radiation Therapy as a Treatment for Thyroid Cancer Manya Kodali, Hampton High School, and Dr. Vivek Verma, pg. 158

The Role of Epigenetics in Tumorigenesis


Michele Zheng, Sage Hill School Senior, pg. 164

Selective Autophagy and Its Potential to Treat Neurodegenerative Diseases Sam Hedley '23, pg. 172

The Role of Autophagy and Its Effect on Oncogenesis Zoe Chen '23, pg. 180


COVID-19 Response in Vietnam Kamila Zakowicz '22 and Kay Vuong '22, pg. 188



Table of Contents Continued

Group Articles

The Role of Ocean Currents and Local Wind Patterns in Determining Onshore Trash Accumulation on Little Cayman Island
Ben Schelling '21, Maxwell Bond '20, Sarah Jennewein '21, Shannon Sartain '21 Pg. 200


Astrobiology: The Origins of Life in the Universe Staff Writers: Sudharsan Balasubramani '22, Andrew Sasser '23, Sai Rayasam (Waukee High School Junior), Timmy Davenport (University of Wisconsin Junior), Avishi Agastwar (Monta Vista High School Senior) Board Writer: Liam Locke '21 Pg. 206

Capitalism and Conservation: A Critical Analysis of Eco-Capitalist Strategies


Staff Writers: Eva Legge '22, Jess Chen '21, Timmy Davenport (University of Wisconsin Junior), James Bell '21, Leandro Giglio '23 Board Writer: Anna Brinks '21 Pg. 222

The Chemistry of Cosmetics


Staff Writers: Anna Kolln '22, Anahita Kodali '23, Maddie Brown '22 Board Writer: Nishi Jain '21 Pg. 242

Inoculation to Operation Warp Speed: The Evolution of Vaccines Staff Writers: Andrew Sasser '23, Anna Lehmann '23, Carolina Guerrero '23, Michael Moyo '22, Sophia Arana '22, Sophia Koval '21, Sudharsan Balasubramani '22 Board Writer: Anna Brinks '21 Pg. 254



The Genetic Engineering Revolution Staff Writers: Bryn Williams '23, Dev Kapadia '23, Sai Rayasam (Waukee High School Junior), Sudharsan Balasubramani '22 Board Writer: Sam Neff '21 Pg. 274


Table of Contents Continued

Group Articles

An Investigation into the Field of Genopolitics
Staff Writers: Grace Lu '23, Jenny Song '23, Zoe Chen '23, Nephi Seo '23 Board Writer: Nishi Jain '21 Pg. 290

Mastering the Microbiome Staff Writers: Anahita Kodali '23, Audrey Herrald '23, Carolina Guerrero '23, Sophia Koval '21, Tara Pillai '23 Board Writer: Sam Neff '21 Pg. 298


The Psychology of the Pandemic Staff Writers: Daniel Cho '22, Timmy Davenport (University of Wisconsin Junior), Nina Klee '23, Michele Zheng (Sage Hill School Senior), Roberto Rodriguez '23, Jenny Song '23 Board Writers: Sam Neff '21 and Megan Zhou '21 Pg. 314


Rational Drug Design: Using Biology, Chemistry, and Physics to Develop New Drug Therapies Staff Writers: George Shan '23, Carolina Guerrero '23, Samantha Hedley '23, Anna Kolln '22, Dev Kapadia '23, Michael Moyo '22, Sophia Arana '22 Board Writer: Liam Locke '21 Pg. 332


The Rise of Regenerative Medicine Staff Writers: Bryn Williams '23, Sudharsan Balasubramani '22, Daniel Cho '22, George Shan '23, Jenny Song '23, Arushi Agastwar (Monta Vista High School Senior) Board Writers: Nishi Jain '21 and Megan Zhou '21 Pg. 356




Hunting for Planet Nine by its Influence on Neptune

BY ALEX GAVITT '23

Cover image: Artist’s impression of Planet Nine. The yellow ring around the Sun is Neptune’s orbit. Source: Wikimedia Commons/nagualdesign and Tomruen, CC-BY-SA 4.0



What is a Planet?

As more and more Kuiper belt objects have been discovered, astronomers have begun to notice that many of their orbits are aligned contrary to the expected random distribution around the Sun. Furthermore, some have extremely long or highly inclined orbits that cannot be explained by the gravitational influence of known objects. As a result, some astronomers have suggested that there may be a ninth planet, orbiting far beyond Neptune, that is influencing the orbits of these Kuiper belt objects. This paper provides scientific background for the Planet Nine hypothesis and describes a calculation of the transit timing variation (TTV) it would introduce in Neptune’s orbit. Ultimately, it finds that the TTV would be too small and occur over too long a timescale to be useful in finding Planet Nine.

Wanderers

The word “planet” comes from the ancient Greek word for “wanderer.” This definition offers a profound insight into humanity’s original understanding of the planets: they clearly move through the sky independent of the slow and fixed movement of the other stars. Under the geocentric model of the solar system, astronomers counted seven planets that fit this definition: Mercury, Venus, Mars, Jupiter, and Saturn, as well as the Sun and the Moon. The advent of the heliocentric solar system, then, was the first nail in the coffin of this definition, for it stripped the wandering Sun and Moon of the title “planet,” while also adding the Earth to the planetary ranks, despite its apparent lack of motion (Brown, 2010, pp. 18–21). What sealed the fate of the “wanderer” definition was the discovery of Uranus in 1781

by the astronomer William Herschel. While he initially assumed this new object was a comet, its near-circular orbit and lack of a tail quickly prompted the realization that it was, in reality, a new planet. As Uranus is not generally visible with the naked eye, it was not one of the “wanderers” known to the ancient Greeks. Soon after the discovery of Uranus, astronomers realized that all was not well with their models of the solar system. Predictions of Uranus’ orbit from Newton’s Law of Gravitation differed noticeably from its observed orbit, suggesting that there was another object nearby, massive enough to influence Uranus’ orbit. Calculations narrowed down this object’s likely position, and astronomers soon found the planet Neptune, whose gravitational influence resolved the errors in Uranus’ orbit (Krajnović, 2016; Brown, 2010, pp. 21–24).

The Rise and Fall of Planets

Once Uranus had established the precedent for new planets, astronomers began finding many more. The first of these was Ceres, located between Mars and Jupiter; the discovery of Ceres was soon followed by the discovery of the nearby objects Pallas, Juno, and Vesta. Further discoveries of objects in this region eventually resulted in those four losing their status as planets and being reclassified as something new: asteroids, part of a vast “asteroid belt” between Mars and Jupiter. Undaunted by this reclassification, some astronomers continued to search for new planets. In 1930, the astronomer Clyde Tombaugh found the object Pluto, orbiting out past Neptune, which became the solar system’s ninth and final planet. Pluto is a bit irregular compared to the other planets—its orbit is more elliptical (so much so that it crosses Neptune’s orbit) and inclined from the plane of the other planets’ orbits, and it is smaller and much less massive than even Mercury—but it was generally accepted as a planet.

By the 1990s, however, astronomers began discovering other objects orbiting in the same region as Pluto, a region that became known as the Kuiper belt. The discovery of these objects raised the specter that Pluto might fall from planethood, just as Ceres and its companions had (Brown, 2010, pp. 21–27). Pluto held on to its planetary status for about a decade and a half after the first Kuiper belt objects were discovered. In 2005, however, a team of astronomers led by Caltech professor Michael Brown discovered an object that forced the issue: a Kuiper belt object, now known as Eris, estimated at slightly larger than


Pluto. Astronomers were left with a choice: either accept Eris—and possibly many more—as a planet or reclassify Pluto as something else. Eventually, in 2006, the International Astronomical Union (IAU) adopted a formal definition of a planet, marking the first time that explicit requirements for planethood had been laid down. Under the IAU definition, a planet in the solar system must (IAU, 2006):

1. Be in orbit around the Sun;
2. Have sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape; and
3. Have cleared the neighborhood around its orbit.

Pluto, by virtue of the surrounding Kuiper belt objects, has not cleared its neighborhood, and so failed the new definition. A footnote in the IAU definition stated clearly that “the eight planets are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune,” and the matter was considered settled.

Past the eight planets are the trans-Neptunian objects, or TNOs. Originally, these objects were split into two main groups: the Kuiper belt and the Oort cloud. Kuiper belt objects (KBOs) orbit within about 20 astronomical units (AU) of Neptune, while the Oort cloud lies upwards of a thousand AU away. Kuiper belt objects are difficult to observe since they are so far away, which is why most were not discovered until the 1990s. The Oort cloud, on the other hand, while thought to be a repository of comets, is too far away to observe and remains theoretical.

“Pluto, by virtue of the surrounding Kuiper belt objects, has not cleared its neighborhood, and so failed the new definition.”

Planet Nine

Recent observations have suggested that the matter of planets may not be as settled as astronomers believed in 2006. In 2016, astronomers Michael Brown—the same one who discovered Eris—and Konstantin Batygin published a paper laying out the evidence for a ninth planet orbiting far beyond Neptune. Unlike Pluto and the other Kuiper belt objects, this would be a true planet; its mass is currently estimated at five to ten times that of the Earth (Batygin et al., 2019, p. 36). Just as the existence of Neptune was originally inferred from its gravitational influence on Uranus, the evidence for Planet Nine comes from anomalous observations, this time of Kuiper belt objects, that would be explained by Planet Nine’s gravitational presence. In their initial paper, Batygin and Brown put forth three main arguments for Planet Nine: the presence

Figure 1: Diagram showing the components of specifying an orbit, including the argument of perihelion (labeled as argument of periapsis, as perihelion refers specifically to objects that orbit the Sun) and the longitude of the ascending node. Source: Wikimedia Commons/ Lasunncty, CC-BY-SA 3.0

"... they also suggested that an unseen planet could be responsible for Sedna's orbit...”


of remote objects like Sedna, the clustering observed in the perihelia of many TNOs, and the presence of TNOs that orbit at a high inclination compared to the plane of the eight known planets (Batygin & Brown, 2016, p. 1).

Sedna and 2012 VP113

One of those remote objects that does not fit in the Kuiper belt is the TNO Sedna, which has a highly eccentric orbit with a perihelion, or minimum distance from the Sun, of 76 AU. While some other TNOs do travel that far from the Sun, at the time, no others were known that always stayed at least that far away—most objects that travel that far were launched outward as a result of a close encounter with Neptune’s gravitational field, but Sedna never even gets close to Neptune. When Sedna was first observed in 2004, its discoverers—Brown and fellow astronomers Chadwick Trujillo and David Rabinowitz—suggested a few possibilities for this strange orbit. They found that a star passing by the solar system perpendicular to the plane of the Earth’s orbit around the Sun, known as the ecliptic, at a speed of ~30 km/s and a distance

of ~500 AU could lift Sedna from a more normal orbit to its observed one. However, they suggested it would be more likely that the Sun formed as part of a stellar cluster and the other stars in the cluster pushed Sedna into its orbit before the solar system moved away from the cluster. Ironically, they also suggested that an unseen planet could be responsible for Sedna’s orbit, though they considered that very unlikely, and the planet they suggested was much smaller and closer than Planet Nine (Brown et al., 2004, p. 648). Then, in 2014, Trujillo and Scott Sheppard found another object like Sedna: 2012 VP113, whose perihelion is 80 AU. Interestingly, they did not find any Kuiper belt objects between 55 and 75 AU, despite the fact that such objects would be closer and therefore easier to detect. Based on that absence, Trujillo and Sheppard suggested that Sedna and 2012 VP113 may be better categorized as inner Oort Cloud objects (Trujillo & Sheppard, 2014, p. 471).
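The perihelion figures quoted above follow directly from an orbit's shape: for semi-major axis a and eccentricity e, the perihelion distance is q = a(1 − e). A minimal sketch in Python (the Sedna and Neptune elements used here are approximate published values, not taken from this article):

```python
def perihelion_au(a_au, e):
    """Perihelion distance q = a * (1 - e) of a bound orbit (0 <= e < 1)."""
    if not 0 <= e < 1:
        raise ValueError("eccentricity must be in [0, 1) for a bound orbit")
    return a_au * (1 - e)

# Approximate published orbital elements, assumed here for illustration only:
print(f"Sedna:   q = {perihelion_au(506, 0.85):.0f} AU")
print(f"Neptune: q = {perihelion_au(30.07, 0.009):.1f} AU")
```

This makes Sedna's strangeness concrete: even with an eccentricity near 0.85, its enormous semi-major axis keeps its closest approach tens of AU beyond Neptune's reach.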


Figure 2: Diagram showing some of the anomalous TNO orbits (left) and a theoretical orbit for Planet Nine (right), with the orbits of Saturn, Uranus, and Neptune in the center for reference. Source: Wikimedia Commons/ nagualdesign, CC-0

Perihelion Clustering

In their paper, Trujillo and Sheppard also noticed that Sedna and 2012 VP113 have something else in common: their arguments of perihelion are very similar. The argument of perihelion, represented by the variable ω, is the angle formed between a line from the object’s perihelion to the Sun and the ecliptic. The gravitational influence of the giant planets (Jupiter, Saturn, Uranus, and Neptune) is supposed to randomize these values over the lifespan of the solar system (Batygin & Brown, 2016, p. 1). Instead, Trujillo and Sheppard found that not only did Sedna and 2012 VP113 have similar arguments of perihelion but that every known Kuiper belt object with a semi-major axis greater than 150 AU and a perihelion distance greater than Neptune’s orbital distance (30 AU) has an argument of perihelion within 55° of about 340° (Trujillo & Sheppard, 2014, p. 472). In their paper, Trujillo and Sheppard suggested a planet of about 5ME orbiting the Sun with a semi-major axis of about 250 AU could be responsible for this alignment (Trujillo & Sheppard, 2014, pp. 472–473). Batygin and Brown were intrigued by this possibility and set out to explore it in more detail. In their paper, they noted that Neptune’s gravity can affect objects with perihelion distances greater than its own, which would disturb any effects caused by Planet Nine’s gravity. After simulating the orbits of the Kuiper belt objects identified by Trujillo and Sheppard, they found that objects with


perihelia less than 36 AU generally fall under Neptune’s influence. For the remaining objects, they found that the arguments of perihelion clustered around 318° ± 8°. Intriguingly, they also found that the longitudes of ascending nodes (the angle Ω, measured in the ecliptic between a defined reference point and the intersection of the ecliptic with the plane of the object’s orbit) for these objects clustered around 113° ± 13°. Taken together, the clustering of these measurements indicates that the orbits of these objects are actually aligned in physical space and therefore are likely being influenced in the same way (Batygin & Brown, 2016, p. 2).

Highly-inclined TNOs

To explain this alignment, Batygin and Brown began running simulations of a ninth planet whose gravity would pull the orbits of the Kuiper belt objects into the proper shape. After some trial and error, they found a scenario that predicted both remote objects like Sedna and the perihelion clustering: a planet of about 10ME and a semi-major axis of about 700 AU, orbiting 180° away from the clustered perihelia (Batygin & Brown, 2016, p. 11). This “antialignment,” with the planet orbiting opposite the Kuiper belt objects, was initially puzzling, as such an orbit would generally be unstable and lead to collisions. Batygin and Brown, however, realized that such an orbit could be stable if Planet Nine had sufficient gravity to trap the Kuiper belt objects in mean-motion resonance. This resonance develops over time

“After some trial and error, they found a scenario that predicted both remote objects like Sedna and the perihelion clustering: a planet of about 10ME and a semi-major axis of about 700 AU, orbiting 180° away from the clustered perihelia.”


as objects orbit and exchange energy. If one object is sufficiently more massive than the other, this effect ultimately results in the two bodies completing orbits in a ratio of small whole numbers. This effect is seen elsewhere in the solar system, for example, between Neptune and Pluto, where Pluto makes two orbits for every three that Neptune completes, which prevents them from colliding despite the fact that Pluto crosses Neptune’s orbit. Likewise, under this model, the aligned Kuiper belt objects would be in mean-motion resonance with Planet Nine, allowing the planet to orbit opposite the Kuiper belt objects and still be stable (Batygin & Brown, 2016, p. 6; Batygin & Morbidelli, 2017). However, because of Planet Nine’s high eccentricity, many of these resonances are more complex ratios (e.g. 2:7), and therefore are not helpful for narrowing down Planet Nine’s location in the sky (Bailey et al., 2018).
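The Neptune–Pluto resonance can be checked with Kepler's third law: for an orbit around the Sun, the period in years is a^(3/2) with the semi-major axis a in AU. A short sketch (the semi-major axes are standard catalog values, not drawn from this article):

```python
def orbital_period_years(a_au):
    """Kepler's third law for a solar orbit: P [years] = a**1.5, a in AU."""
    return a_au ** 1.5

p_neptune = orbital_period_years(30.07)  # roughly 165 years
p_pluto = orbital_period_years(39.48)    # roughly 248 years

# Pluto's period is ~1.5x Neptune's: a 3:2 period ratio, meaning Neptune
# completes three orbits in the time Pluto completes two.
print(f"period ratio: {p_pluto / p_neptune:.3f}")
```

The ratio comes out very close to 3/2, which is why the two bodies repeatedly meet at the same orbital phase and never collide.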

“One study found that Planet Nine's gravity could explain the solar obliquity, an unexplained misalignment between the Sun's axis of rotation and the celestial plane.”

Interestingly, this model made another prediction: that Planet Nine’s gravity would push some trans-Neptunian objects farther away and to a higher inclination, to the point where they would disappear from view, only to then bring them back into view in a highly inclined orbit, possibly even one perpendicular to the celestial plane. While Batygin and Brown were initially puzzled by that prediction, they soon realized there is such a population of trans-Neptunian objects: at the time, astronomers had already found several TNOs with highly inclined orbits, some of which are in fact roughly perpendicular to the celestial plane. The presence of these objects has never been explained, but the fact that the Planet Nine model offers an explanation for a phenomenon it was not trying to explain lends a measure of additional credence to its other predictions (Batygin & Brown, 2016, p. 10).

Further Developments

In their initial paper, Batygin and Brown noted that discovering additional Kuiper belt objects would be crucial to eliminating any sampling bias and refining the orbital parameters of Planet Nine. Since then, additional objects with orbits that fit the predictions of the Planet Nine hypothesis, such as 2015 TG387 and 2015 BP519, have been found (Sheppard et al., 2019, p. 10; Becker et al., 2018, pp. 10–11). One study found that Planet Nine’s gravity could explain the solar obliquity, an unexplained misalignment between the Sun's axis of rotation and the celestial plane (Bailey et al., 2016), though later revisions to estimates of Planet Nine’s mass and


orbit concluded that it cannot account for all of the solar obliquity (Batygin et al., 2019, p. 38). Some have suggested that the observed clustering of Kuiper belt objects may just be the result of observational biases (Shankman et al., 2017), but both subsequent observations and a detailed statistical analysis suggest that is not the case (Brown & Batygin, 2019). Since the clustering appears real, most alternate theories suggest that it is caused by a celestial body or bodies other than a planet, such as a ring of icy objects with an orbit and total mass similar to Planet Nine or a black hole with a mass similar to Planet Nine that was captured by the Sun’s gravity (Sefilian & Touma, 2019; Scholtz & Unwin, 2019). Nevertheless, Planet Nine remains the simplest and most likely explanation for the anomalous orbits observed in TNOs, though attempts to locate it with a telescope have yet to succeed. Depending on how far away Planet Nine is and where it is in its orbit, it may be just at the limit of the observational abilities of powerful telescopes like the Subaru Telescope. If it is too far away for current telescopes to see, the Vera Rubin Observatory in Chile, which is projected to begin observations in late 2022 (LSST, n.d.), should be able to find it. In 2019, Batygin and Brown published a thorough analysis of the evidence for Planet Nine thus far. Whereas the original paper suggested Planet Nine would be upwards of 10ME and have a semi-major axis of roughly 700 AU, the most recent data indicates it should be both smaller (mass between five and ten ME) and closer (semi-major axis between 500 and 800 AU); a higher mass indicates a greater distance. Despite the decrease in mass, a 5ME Planet Nine would actually be easier to detect than the original 10ME proposal because it would be closer to Earth (Batygin et al., 2019, p. 39). 
However, while the decrease in mass refines the Planet Nine hypothesis to better fit its core prediction—the alignment of objects in the Kuiper belt—it also means that Planet Nine cannot explain the entirety of the solar obliquity (Batygin et al., 2019, p. 38).
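The claim that a smaller but closer Planet Nine would be easier to detect can be made quantitative. An object seen in reflected sunlight has a received flux that falls off roughly as 1/d⁴ (sunlight dims on the way out, and the reflection dims again on the way back), so moving it inward brightens it sharply. A sketch with illustrative distances (the specific numbers are assumptions for illustration, not values from the article):

```python
import math

def delta_magnitude(d_near_au, d_far_au):
    """Apparent-magnitude gain from moving a sunlight-reflecting body closer.
    Flux scales ~1/d**4, and 2.5 magnitudes correspond to a factor of 10 in
    flux, so dm = 2.5 * log10((d_far / d_near)**4)."""
    return 2.5 * math.log10((d_far_au / d_near_au) ** 4)

# Illustrative only: an orbit near 500 AU versus one near 700 AU.
print(f"{delta_magnitude(500, 700):.2f} mag brighter at 500 AU")
```

A shift from ~700 to ~500 AU gains roughly 1.5 magnitudes, a factor of about four in flux, which can be the difference between falling below and rising above a survey's detection limit.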

Neptune's TTV as a Result of Planet Nine

Introduction

To prove or disprove the existence of Planet Nine, scientists need predictions that can be tested and observed; one possible method for generating these predictions can be borrowed from the techniques of exoplanet hunters. One of the primary ways astronomers search


Figure 3: Plot of Neptune's TTV over time in the 5ME (top) and 10ME (bottom) scenarios. Source: Created by the writer using the TTV2Fast2Furious code, CC-BY-SA 4.0

for exoplanets is by looking for stars that periodically dim slightly, which can indicate that a planet is moving between the star and the Earth. This method is very good at finding large planets that orbit close to their star (since such planets will block more light) and is dependent on the planet’s period—if the planet completes an orbit every 10 days, for example, astronomers should see the star dim every 10 days, like clockwork. If the timing of these transits instead rises and falls over time, that suggests that there may be another planet that is gravitationally tugging on the first. While stars are gravitationally dominant in their planetary system and generally keep their planets orbiting at a constant rate, they are not the only gravitationally noteworthy bodies. The planets themselves tug on each other, tweaking their orbits slightly. As a result, each time a planet goes around its host star, it takes a slightly different amount of time. The difference in these times is known as transit-timing variation (TTV). These variations


increase and decrease over time, like a wave. As a result, astronomers generally talk about the amplitude of a TTV (the maximum amount, positive or negative, by which one planet changes the period of another planet) and the period of the TTV cycle (how long it takes a planet’s TTV to come back to the same value and slope, not to be confused with the period of the planet’s orbit). TTV observations don’t always establish an exact orbit for the second planet, but they can help prove its existence and narrow down the range of possible orbits. Since TTV effects are usually small (on the order of minutes), they are easier to notice in planets with short orbital periods—which, as planets orbit more quickly the closer they are to their star, are exactly the kind of planets that are easy to detect by the dimming of a star. Nevertheless, the general principle holds and can also be applied to more remote planets like Neptune and Planet Nine. Calculating these orbital effects is complex and, in this paper, was accomplished with the

“While stars are gravitationally dominant in their planetary system and generally keep their planets orbiting at a constant rate, they are not the only gravitationally noteworthy bodies.”


TTV2Fast2Furious code (available at https://github.com/shadden/TTV2Fast2Furious). This code takes in information about the orbital characteristics of at least two planets and runs that information through a matrix equation before outputting a graph of both planets’ TTV over time. The code was originally developed for calculations with exoplanets, specifically the kind that orbit close to their host star and are easier to detect. While the program is capable of integrating the data over a sufficient number of transits to show a complete TTV cycle for Neptune, Planet Nine, even on its innermost proposed orbit, completes only about one orbit over that timespan (giving it, at most, two transits). Increasing the timespan to include enough Planet Nine transits returned an error in the code, as the integral exceeded its maximum number of subdivisions. Since the purpose of this calculation is to explore the feasibility of detecting a TTV in Neptune’s orbit assuming that Planet Nine exists, the TTV of Planet Nine itself is irrelevant anyway, and it was removed from the final graphs to avoid confusion. Because the orbital periods involved are extremely long compared to those of the planets the code was designed for, the code was also adjusted to display time on the graphs in years instead of days.

“In order to calculate a planet's TTV, the TTV2Fast2Furious code needs to know the mass, period, eccentricity, and initial transit time of both the ‘target’ planet and the planet perturbing that target planet's orbit.”


Calculation Details

In order to calculate a planet’s TTV, the TTV2Fast2Furious code needs to know the mass, period, eccentricity, and initial transit time of both the “target” planet and the planet perturbing that target planet’s orbit. This section details the steps taken to supply those parameters and to calculate Neptune’s TTV as a result of Planet Nine’s gravitational influence for the minimum and maximum predicted masses of Planet Nine (5 ME and 10 ME, respectively). While the mass of Planet Nine could also lie somewhere between these values, considering both extremes yields the maximum and minimum TTV values, essentially establishing the best- and worst-case scenarios for detecting Planet Nine’s influence on Neptune. Data for Neptune were taken from Williams (September 2018), and the full orbital parameters for both the 5 ME and 10 ME scenarios for Planet Nine were taken from Batygin et al. (2019). A complete list of adopted values is in Appendix A. The given mass values for Neptune and Planet Nine were converted to solar masses (the units expected by the code) using the values of MS and ME from NASA’s Sun and Earth data (Williams, February 2018, 2020). Given the period

of Neptune’s orbit in sidereal days, the period of Planet Nine, P9, can then be defined in terms of its semi-major axis (a9) and Neptune’s semi-major axis (aN) and period (PN) with Kepler’s Third Law, which yields:

P_9 = P_N (a_9 / a_N)^{3/2}

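This conversion can be sketched numerically. Neptune’s semi-major axis and period below are standard values, while the Planet Nine semi-major axis of 400 AU is only one illustrative value within the range proposed by Batygin et al. (2019), not necessarily the value adopted in Appendix A.

```python
# Kepler's Third Law: P^2 is proportional to a^3 when the Sun's mass
# dominates, so P9 = PN * (a9 / aN)**1.5 in any consistent time unit.
a_N = 30.07     # Neptune's semi-major axis (AU)
P_N = 164.8     # Neptune's orbital period (years)
a_9 = 400.0     # assumed Planet Nine semi-major axis (AU), for illustration

P_9 = P_N * (a_9 / a_N) ** 1.5
print(f"implied Planet Nine period: {P_9:,.0f} years")
```

An orbit of this size implies a period of thousands of years, which is why any TTV signal it induces cycles over comparably long timescales.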
The eccentricities of both Neptune and Planet Nine are given in the sources and required no further conversion. Since the initial transit time for the planets would depend on the location of a hypothetical alien observer, the code was tested with different values to see how much they would impact the results. This testing found that the value assigned to the initial transit time had only a small effect on Neptune’s initial TTV value and no noticeable effect on its TTV amplitude or period. In the final calculations, Neptune was assigned an initial transit time of half its period (indicating that it transits at aphelion, where the transit probability is higher) and Planet Nine an initial transit time of a quarter of its period. For the 10 ME scenario, the code was also adjusted to calculate results out to 200 transits of Neptune, which was necessary to produce a full TTV cycle because Planet Nine is farther away in that scenario and therefore has less of a gravitational impact on Neptune.

Results

Running the code resulted in a TTV amplitude of 0.4 minutes and a period of 12,300 years in the 5 ME scenario, and a TTV amplitude of 0.2 minutes and a period of 22,000 years in the 10 ME scenario. For comparison, the first significant planet discovered by TTV, Kepler-19c, gives the transiting planet Kepler-19b a TTV amplitude of 5 minutes with a period of 316 days (Ballard et al., 2011, p. 15). Since its discovery, Neptune has completed just a little more than one orbit. As such, even in the best-case scenario of a 5 ME Planet Nine, finding Planet Nine by looking for variations in Neptune’s orbit does not seem feasible.

Conclusion

Indirect evidence for the existence of Planet Nine has grown over time, but astronomers have yet to actually observe it. The principal challenge is that astronomers have had only a few decades to observe the TNOs it influences, which have orbital periods of hundreds of years. Astronomically speaking, they have barely


moved since they were discovered. In contrast, when the only other planet to be discovered by calculation—Neptune—was found, astronomers had already known about Uranus for 65 years and had been able to observe most of its orbit. Indeed, seeing the perturbations in Uranus’ orbit over time is what enabled Le Verrier to predict Neptune’s orbit. While the goal that motivated this paper was to help narrow down the range of possible orbits for Planet Nine by calculating the TTVs expected for Neptune as a result of Planet Nine’s gravitational influence, the calculation ultimately found that they are likely too small, and occur over too long a timescale, to be of much use in finding Planet Nine. Nevertheless, the search for Planet Nine continues. If it is not found within the next few years, the Vera Rubin Observatory, which is scheduled to begin observations in 2022, should have the observational capacity to detect Planet Nine; it will survey the whole sky every night it can, which should finally resolve the search for Planet Nine.

Acknowledgments

This paper would not exist without Dartmouth College Professor Elisabeth Newton, who taught me quite a bit about astronomy, guided me from a vague notion of writing something about Planet Nine to the idea of doing a TTV analysis, helped me understand the TTV2Fast2Furious code, and provided feedback on the paper. This paper also wouldn’t read anywhere near as well as it does without the efforts of the editors who worked on it; thank you to Nathalie Korhonen Cuestas in my Astronomy 15 class and the DUJS editors Sam Neff, Madeline Brown, and Eric Youth for all of their suggestions and comments. And finally, I am enormously grateful to a few people whose influence helped me personally write this paper: Professor Mike Brown of the California Institute of Technology, whose work inspired my interest in Planet Nine, and the friends who kept me sane through the writing process—they know who they are.

“If it is not found within the next few years, the Vera Rubin Observatory, which is scheduled to begin observations in 2022, should have the observational capacity to detect Planet Nine...”

Appendix A: Adopted Values




References

Bailey, E., Batygin, K., & Brown, M. E. (2016). Solar obliquity induced by Planet Nine. The Astronomical Journal, 152(5). https://doi.org/10.3847/0004-6256/152/5/126

Bailey, E., Brown, M. E., & Batygin, K. (2018). Feasibility of a resonance-based Planet Nine search. The Astronomical Journal, 156(2). https://doi.org/10.3847/1538-3881/aaccf4

Ballard, S., Fabrycky, D., Fressin, F., Charbonneau, D., Desert, J., Torres, G., Marcy, G., Burke, C. J., Isaacson, H., Henze, C., Steffen, J. H., Ciardi, D. R., Howell, S. B., Cochran, W. D., Endl, M., Bryson, S. T., Rowe, J. F., Holman, M. J., Lissauer, J. J., … Borucki, W. J. (2011). The Kepler-19 system: A transiting 2.2 RE planet and a second planet detected via transit timing variations. The Astrophysical Journal, 743(2). https://doi.org/10.1088/0004-637X/743/2/200

Batygin, K., Adams, F. C., Brown, M. E., & Becker, J. C. (2019). The Planet Nine hypothesis. Physics Reports, 805, 1-53. https://doi.org/10.1016/j.physrep.2019.01.009

Batygin, K. & Brown, M. E. (2016). Evidence for a distant giant planet in the solar system. The Astronomical Journal, 151(2). https://doi.org/10.3847/0004-6256/151/2/22

Batygin, K. & Morbidelli, A. (2017). Dynamical evolution induced by Planet Nine. The Astronomical Journal, 154(6). https://doi.org/10.3847/1538-3881/aa937c

Becker, J. C., Khain, T., Hamilton, S. J., Adams, F. C., Gerdes, D. W., Zullo, L., Franson, K., Millholland, S., Bernstein, G. M., Sako, M., Bernardinelli, P., Napier, K., Markwardt, L., Lin, H. W., Wester, W., Abdalla, F. B., Allam, S., Annis, J., Avila, S., … Walker, A. R. (2018). Discovery and dynamical analysis of an extreme trans-Neptunian object with a high orbital inclination. The Astronomical Journal, 156(2). https://doi.org/10.3847/1538-3881/aad042

Brown, M. E. & Batygin, K. (2019). Orbital clustering in the distant solar system. The Astronomical Journal, 157(2). https://doi.org/10.3847/1538-3881/aaf051

Brown, M. E., Trujillo, C., & Rabinowitz, D. (2004). Discovery of a candidate inner Oort cloud planetoid. The Astrophysical Journal, 617(1), 645-649. https://doi.org/10.1086/422095

Brown, M. E. (2010). How I killed Pluto and why it had it coming. Spiegel & Grau.

IAU. (2006, August 24). IAU 2006 General Assembly: Result of the IAU Resolution votes. Retrieved August 26, 2020, from https://www.iau.org/news/pressreleases/detail/iau0603/

Krajnović, D. (2016). The contrivance of Neptune. Astronomy and Geophysics, 57(5), 5.28-5.34. https://doi.org/10.1093/astrogeo/atw183

LSST. (n.d.). Fact sheets. Rubin Observatory. Retrieved August 26, 2020, from https://www.lsst.org/about/fact-sheets

Scholtz, J. & Unwin, J. (2019). What if Planet 9 is a primordial black hole? arXiv e-prints. https://arxiv.org/abs/1909.11090

Sefilian, A. A. & Touma, J. R. (2019). Shepherding in a self-gravitating disk of trans-Neptunian objects. The Astronomical Journal, 157(2). https://doi.org/10.3847/1538-3881/aaf0fc

Shankman, C., Kavelaars, J. J., Bannister, M. T., Gladman, B. J., Lawler, S. M., Chen, Y., Jakubik, M., Kaib, N., Alexandersen, M., Gwyn, S. D. J., Petit, J., & Volk, K. (2017). OSSOS. VI. Striking biases in the detection of large semimajor axis trans-Neptunian objects. The Astronomical Journal, 154(2). https://doi.org/10.3847/1538-3881/aa7aed

Sheppard, S. S., Trujillo, C. A., Tholen, D. J., & Kaib, N. (2019). A new high perihelion trans-Plutonian inner Oort cloud object: 2015 TG387. The Astronomical Journal, 157(4). https://doi.org/10.3847/1538-3881/ab0895

Trujillo, C. A. & Sheppard, S. S. (2014). A Sedna-like body with a perihelion of 80 astronomical units. Nature, 507, 471-474. https://doi.org/10.1038/nature13156

Williams, D. R. (2018, February 23). Sun fact sheet. NASA. Retrieved September 24, 2020, from https://nssdc.gsfc.nasa.gov/planetary/factsheet/sunfact.html

Williams, D. R. (2018, September 27). Neptune fact sheet. NASA. Retrieved September 24, 2020, from https://nssdc.gsfc.nasa.gov/planetary/factsheet/neptunefact.html

Williams, D. R. (2020, April 2). Earth fact sheet. NASA. Retrieved September 24, 2020, from https://nssdc.gsfc.nasa.gov/planetary/factsheet/earthfact.html





Racial Bias Against Black Americans in the American Healthcare System

BY ANAHITA KODALI '23

Cover Image: Pictured above is a doctor drawing blood as part of the Tuskegee Syphilis Study. The men in the study were promised free healthcare as an incentive for their participation, but the doctors used a litany of placebos and diagnostic procedures (including blood tests) in place of actual medical procedures that could treat syphilis. The lasting impacts of the study on Black patients’ trust in the healthcare system are still felt today. Source: Wikimedia Commons


Introduction

There are deep historic inequities among America’s different demographic groups. Specifically, Black Americans are at a significant disadvantage in many areas, from wage gaps to discrimination in hiring practices to lack of access to education. The current global pandemic has highlighted yet another area in which Black Americans are underserved: the healthcare industry. Black Americans have historically, on average, had poorer health than their White counterparts. Many have argued that this disparity in health outcomes is due to genetic differences, cultural practices, or socioeconomic issues. However, even when all such factors are held constant, Black individuals consistently have more indicators of poor health than White individuals (Charatz-Litt, 1992). Understanding why necessitates a deeper look at the issues that Black Americans face in medicine. This paper reviews the racist past of American medicine as well as modern

racism in physician attitudes toward Black people and in medical technologies. It is important to note that, for the purposes of simplicity and clarity, much of this paper discusses the differences between the treatment of Black and White Americans as a dichotomy. In reality, however, racism in American healthcare affects other people of color as well. Current rising racial tensions and protests across the United States highlight the struggles of Black Americans with predominantly White American institutions, which is why this paper focuses on their experience with the healthcare system.

Medicine's Ugly History

Slavery was the first major barrier to Black Americans receiving proper healthcare. The American Medical Association (AMA) was formed in 1847 as a result of years of

Figure 1: Generally, people only consider the obvious effects of the segregation laws in the Jim Crow codes. However, one critical piece is often ignored – Black people were often denied access to the best hospitals in the country, forcing them instead either to go to subpar healthcare facilities or to treat illnesses themselves at home. This resulted in overall poorer health for Black Americans relative to White Americans. Source: Wikimedia Commons

conversations between White physicians wanting to protect their practice from homeopathic, or alternative, medicine. They established standards of science and professionalism while securing for themselves the elite status of physicians in society, resulting in a boom in White physician practice. Unfortunately, these standards did not apply to slaves (“AMA History,” 2018). Slave masters often did try to obtain professional medical care for slaves who fell ill or were injured; though perhaps counterintuitive at first, this proved more economically viable than having to pay for new slaves if one became too ill to work (Lander & Pritchett, 2009). However, slave masters had to contend with several issues. First, physicians were less available in the South than in the North, so slave masters often could not find a doctor close enough to treat their slaves quickly. Second, even though it made economic sense to treat slaves for illness, services were often too expensive for individual owners. Third, even when slave masters had access to doctors and could afford to pay them, it was difficult to find a doctor willing to work on Black patients, and those who did often did shoddy work, as the standards set by the AMA did not apply to slaves. Finally, even doctors who attempted to do adequate work on slaves would give them different treatments than they would a White patient because they did not understand how illnesses presented in Black patients (Charatz-Litt, 1992).


Several doctors believed that Black people had less sensitivity to pain than White people, and some doctors still hold this belief today (Charatz-Litt, 1992). Others believed that Black people’s brains were smaller than the brains of other races, while their senses of smell, taste, and hearing were stronger. Additionally, physicians applied certain theories and disease models specifically to slaves, including “Drapetomania,” a theory that slaves ran away because of mental illness, and “Cachexia Africana,” a pathology in which Black people would eat dirt; this further worsened the level of care given to slaves (Sullivan, n.d.).

“This lack of understanding of Black anatomy (as well as a general lack of knowledge about human biology) led to the horrific practice of experimentation on slaves.”

This lack of understanding of Black anatomy (as well as a general lack of knowledge about human biology) led to the horrific practice of experimentation on slaves. As medicine advanced rapidly, demand for human specimens for physician training and research increased. Slaves represented a uniquely attractive group for researchers: from a scientific perspective, they could serve as research models, and from a legal perspective, they had no autonomy. Thus, slaves were often used as practice subjects for surgeons, as specimens for autopsy study, and as experimental models for new techniques (Savitt, 1982). Slaves had to endure amputations, electric shocks, tumor removals, and other terrible experiments, often without anesthesia (Kenny, 2015). The improper treatment of Black Americans by doctors persisted even after the emancipation of slaves. Though constitutionally equal to White Americans, Black people were

Figure 2: Having a more diverse medical field allows doctors and medical researchers to understand better how diseases (like Lupus) present themselves differently in different demographics of patients. This allows doctors to better and more precisely serve all of their patients. Source: Wikimedia Commons

“Generally, physicians are much less likely to believe their Black patients' descriptions of pain and therefore have skewed perceptions of their patients' pain.”

systemically denied access to proper healthcare by the implementation of segregation and the Jim Crow laws (see Figure 1). As a result, Black people were unable to receive necessary medical treatments or go to the best medical practices in the nation (Hunkele, n.d.). At the same time, Black people were exploited by White doctors to advance medicine. One of the most well-known examples is the Tuskegee Syphilis Experiment (see Cover Figure). The US Public Health Service conducted the experiment from 1932 to 1972 in order to observe the effects of untreated syphilis in 600 Black men; even when penicillin, which cures syphilis, became widely available in the 1950s, doctors did not give the men adequate care. By the end, at least 100 Black men had died of syphilis-related complications (Brandt, 1978). The story of Henrietta Lacks, a poor Black southern woman treated for cervical cancer at Johns Hopkins University in the 1950s, is another clear case of the American healthcare system failing its Black patients. Doctors took samples of Mrs. Lacks’s tumor cells without informed consent, which ultimately led to an immortal cell line that has opened the pathway for innovation in countless scientific and medical pursuits, including research into cancer, AIDS, radiation therapy, and genomic mapping. Mrs. Lacks’s family was not immediately informed and never received compensation for her contributions (Khan, 2011). While these two stories highlight the historic mistreatment of Black Americans and help explain why Black Americans lost trust in the healthcare system, there are many more specific reasons for this lack of trust. Unfortunately, as medicine improves and the inequities in healthcare for Black people persist, the level of disparity between White and Black Americans has stayed stagnant (National Center for Health Statistics, 2006).
Though there are more legal protections in place for Black Americans today than there were decades ago, there are still significant barriers to Black people receiving adequate care.

Physician Attitudes Towards Black Patients

There still exists a significant degree of bias in the American healthcare system that causes physicians to treat Black patients unfairly. These issues begin in physician training and education. Despite receiving a more comprehensive education than their 19th-century counterparts, a significant number of physicians share a fundamental and dangerous misunderstanding


of pain and pain management for Black patients (see Figure 2). Half of the White medical students and residents surveyed in 2016 held one or more incorrect beliefs about Black biology and pain tolerance, including that Black people have thicker skin than White people, that Black people’s nerves are less sensitive than White people’s, and that Black people’s blood coagulates more quickly than White people’s. These beliefs have all been proven wrong (Hoffman et al., 2016). Issues in training and misunderstandings of Black anatomy and biology translate directly into problems in the hospital. Generally, physicians are much less likely to believe their Black patients’ descriptions of pain and therefore have skewed perceptions of their patients’ pain (Staton et al., 2007). As a result, White patients consistently receive more pain medication than Black patients: researchers found that 74% of White patients received pain medication for injuries related to bone fractures, while only 50% of Black patients received pain medication for similar injuries (Josefson, 2000). The disparity extends to adolescent patients; for example, White children are much more likely than their Black counterparts to receive pain treatment for appendicitis (Goyal et al., 2015). Issues with pain management have led to significant trust issues between patients and physicians. Researchers have found that there are significant differences in physician-patient


trust that are related to racial differences. Interactions between Black patients and non-Black physicians are relatively unfavorable compared to interactions between Black patients and Black physicians (Martin et al., 2013). Black patients tend to trust Black doctors more than White doctors; similarly, when Black men receive care from Black physicians, they are significantly more likely to receive effective care. Specifically, Black men are more likely to receive preventative care, with these findings most pronounced for men who had strong distrust of the medical system (Alsan et al., n.d.). Thus, an emphasis on creating more diversity in the medical field could help alleviate some of the issues caused by biased physician training.

Bias in Medical Technologies

Researchers and entrepreneurs alike have increasingly voiced concerns about racial bias in technology over the past few years, and biotechnology in medicine is no exception. As the medical field becomes more technology driven, the existing issues with algorithms and biotechnology will continue to grow, which could aggravate the already dangerous racial disparities in medicine. Healthcare centers around the country use commercial algorithms to help guide clinical decisions and decide which patients receive additional care; these algorithms affect the healthcare that millions of Americans receive. Researchers studied one algorithm, representative of the most popular algorithms employed by hospitals, and found that Black patients were significantly sicker than their White counterparts at any given risk score (risk scores are how doctors decide what course of treatment to give their patients). Unwell Black patients were given the same risk score as healthy White patients, reducing the number of Black patients identified for extra care by more than half. This bias results from the algorithms predicting the cost of care instead of health needs directly. Costs and healthcare needs are correlated (sicker patients require more care and therefore more money), so at first glance, using cost as a proxy for need makes sense. However, the healthcare system has historically spent significantly less money on Black patients because of several issues, including direct discrimination and strained patient-physician relationships. Because less money is spent on Black patients, the algorithms conclude that Black patients are healthier than White patients. By solving the


issue, the team estimated that the share of Black patients receiving extra care would jump from 17.7% to 46.5% (Obermeyer et al., 2019). The advent of precision medicine also represents a major problem for Black patients. While precision medicine has the potential to greatly improve care by allowing for hyper-individualized treatment, unchecked, it will also propagate a host of biases against Black patients. This bias can enter precision medicine in three main ways. The first is the collection of biased data: Black people are historically underrepresented in research datasets, which leads to wider variability and less accurate results, as well as a poorer understanding of the nuances of how certain diseases present in Black patients, so biased conclusions are drawn. The second is the integration of biased data into precision medicine algorithms: the biases in existing datasets are reinforced when biased AI technologies are used to build clinical algorithms. The third is the influence of preexisting structural racism as hospitals adopt precision medicine: structural racism affects which hospitals adopt these tools and which patients receive access to them, and it will ultimately hurt Black patients, who are underprioritized relative to White patients because of both algorithmic bias and direct discrimination (Geneviève et al., 2020).
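The cost-as-proxy failure mode described above can be illustrated with a toy simulation. Everything in it is invented for illustration: the group labels, the uniform distribution of need, and the 0.6 spending factor are assumptions, not values from Obermeyer et al. (2019).

```python
import random

random.seed(0)

# Hypothetical sketch of the proxy-label problem: two groups have identical
# distributions of true health need, but group B (standing in for historically
# under-served patients) receives less spending for the same need. A "risk
# score" trained on cost alone then under-selects group B for extra care.
NEED_MAX = 10.0
SPENDING_FACTOR = {"A": 1.0, "B": 0.6}   # assumed historical spending gap

patients = []
for _ in range(10_000):
    group = random.choice("AB")
    need = random.uniform(0, NEED_MAX)        # true need: identical by group
    cost = need * SPENDING_FACTOR[group]      # observed cost: a biased proxy
    patients.append((group, need, cost))

# Flag the top 10% of patients by the cost-based "risk score" for extra care.
cutoff = sorted(p[2] for p in patients)[int(0.9 * len(patients))]
flagged = [p for p in patients if p[2] >= cutoff]
share_b = sum(1 for g, _, _ in flagged if g == "B") / len(flagged)
print(f"group B share of flagged patients: {share_b:.0%}")
```

Although the two groups are equally sick by construction, group B ends up badly under-represented among the patients flagged for extra care, which is the mechanism the study identified.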

“As the medical field becomes more technology driven, the existing issues with algorithms and biotechnology will continue to grow, which could aggravate the already dangerous racial disparities in medicine.”

Conclusions and Future Directions

It is abundantly clear that Black people are not cared for properly in the current American healthcare system relative to their White counterparts. From historic inequities to issues with modern physician attitudes and growing concerns about the prevalence of biased technology, there are myriad problems that need to be solved to ensure Black people receive equal, accessible, and adequate healthcare. By bringing more diversity to medicine, the American healthcare system could see immediate benefits in physician-patient trust. In the long term, however, significant steps need to be taken to dismantle the systemic issues that are the root causes of unequal access to medicine.


References

Alsan, M., Garrick, O., & Graziani, G. C. (n.d.). Does diversity matter for health? Experimental evidence from Oakland.

AMA History. (2018, November 20). Retrieved July 22, 2020, from https://www.ama-assn.org/about/ama-history/ama-history

Brandt, A. M. (1978). Racism and research: The case of the Tuskegee Syphilis Study. The Hastings Center Report, 8(6), 21. https://doi.org/10.2307/3561468

Charatz-Litt, C. (1992). A chronicle of racism: The effects of the White medical community on Black health. Journal of the National Medical Association, 84(8), 717-725.

Geneviève, L. D., Martani, A., Shaw, D., Elger, B. S., & Wangmo, T. (2020). Structural racism in precision medicine: Leaving no one behind. BMC Medical Ethics, 21(1), 17. https://doi.org/10.1186/s12910-020-0457-8

Goyal, M. K., Kuppermann, N., Cleary, S. D., Teach, S. J., & Chamberlain, J. M. (2015). Racial disparities in pain management of children with appendicitis in emergency departments. JAMA Pediatrics, 169(11), 996. https://doi.org/10.1001/jamapediatrics.2015.1915

Hoffman, K. M., Trawalter, S., Axt, J. R., & Oliver, M. N. (2016). Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between Blacks and Whites. Proceedings of the National Academy of Sciences, 113(16), 4296-4301. https://doi.org/10.1073/pnas.1516047113

Hunkele, K. L. (n.d.). Segregation in United States healthcare: From Reconstruction to deluxe Jim Crow.

Josefson. (2000). Pain relief in US emergency rooms is related to patients’ race. BMJ, 320(7228), 139A.

Kenny, S. C. (2015). Power, opportunism, racism: Human experiments under American slavery. Endeavour, 39(1), 10-20. https://doi.org/10.1016/j.endeavour.2015.02.002

Khan, F. A. (2011). The Immortal Life of Henrietta Lacks. Journal of the Islamic Medical Association of North America, 43(2). https://doi.org/10.5915/43-2-8609

Lander, K., & Pritchett, J. (2009). When to care: The economic rationale of slavery health care provision. Social Science History, 33(2), 155-182. https://doi.org/10.1215/01455532-2008-018

Martin, K. D., Roter, D. L., Beach, M. C., Carson, K. A., & Cooper, L. A. (2013). Physician communication behaviors and trust among Black and White patients with hypertension. Medical Care, 51(2), 151-157. https://doi.org/10.1097/MLR.0b013e31827632a2

National Center for Health Statistics. (2006). Health, United States, 2006, with chartbook on trends in the health of Americans. US Government Printing Office.

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. https://doi.org/10.1126/science.aax2342

Savitt, T. L. (1982). The use of Blacks for medical experimentation and demonstration in the Old South. The Journal of Southern History, 48(3), 331. https://doi.org/10.2307/2207450

Staton, L. J., Panda, M., Chen, I., Genao, I., Kurz, J., Pasanen, M., Mechaber, A. J., Menon, M., O’Rorke, J., Wood, J., Rosenberg, E., Faeslis, C., Carey, T., Calleson, D., & Cykert, S. (2007). When race matters: Disagreement in pain perception between patients and their physicians in primary care. Journal of the National Medical Association, 99(5), 532-538.

Sullivan, G. (n.d.). Plantation medicine and health care in the Old South.





Mechanochemistry – A Powerful and “Green” Tool for Synthesis

BY ANDREW SASSER '23

Cover Image: This scientist is using a mortar and pestle to grind different reagents and facilitate a mechanochemical reaction. Unlike most synthetic reactions, mechanochemical reactions do not require any solvent. Source: Flickr.com

Introduction

Over the past two centuries, the field of synthetic chemistry has experienced remarkable growth. Prior to the 19th century, scientists accepted the doctrine of vitalism, which held that living organisms were animated by a “vital force” separate from the physical and natural world and that organic compounds could not be synthesized from inorganic reagents. However, following Friedrich Wöhler’s 1828 synthesis of the organic molecule urea from the inorganic compound ammonium cyanate, the field of synthetic chemistry exploded, with chemists producing compounds such as the anti-cancer drug Taxol and the pesticide strychnine from simple organic and inorganic reagents (Museum of Organic Chemistry, 2011). Modern synthetic techniques, however, still have significant drawbacks. For one, synthetic schemes can be highly inefficient. On average, the synthesis of fine chemicals produces 5-50


kilograms (kg) of by-product per kg of product, and the synthesis of pharmaceuticals can generate over 100 kg of waste per kg of product. Second, most syntheses use toxic solvents, which often comprise the largest share of “auxiliary waste” (Li and Trost, 2008). Third, reactions can require large amounts of energy, especially if heating is necessary. As a result of these inefficiencies, the field of “Green Chemistry” has developed to maximize atom economy – the ratio of the mass of the atoms in the product to that of the reagents – and thus promote efficiency (Chen et al., 2015). In an effort to minimize waste, some chemists have turned toward solventless reactions to promote higher product recovery. One class of solventless reactions, referred to as mechanochemistry, uses mechanical force to promote chemical reactions. This paper will demonstrate how mechanochemical reactions have not only reduced energy requirements and improved atom economy but have also

led to new reaction pathways that are not achievable when a solvent is present.

History of Mechanochemistry
Although mechanochemical techniques have existed since the beginning of recorded history, their use in chemical synthesis has developed only recently. Prior to the late 20th century, mechanochemical techniques relied upon simple mortars and pestles. The famed English chemist Michael Faraday, for instance, used a mixture of zinc, tin, iron, and copper to reduce silver chloride (Takacs, 2013). In the 1880s, M. C. Lea discovered that mechanochemical processes can facilitate reactions not achievable by heating solutions: he found that mercuric chloride decomposes when ground in a mortar but sublimes (turns to gas) upon heating (Boldyreva, 2013). Research in mechanochemistry increased substantially over the 20th century thanks to the evolution of techniques. As the mortar and pestle proved too cumbersome, scientists instead turned to ball milling devices that could supply the large amounts of energy needed to initiate chemical reactions. First developed in 1923, small-scale ball mills became increasingly popular throughout the 1940s and 50s (Takacs, 2013). Mill development has since focused on increasing the motion and energy of the individual balls. The planetary mill, first developed in 1961, makes use of a centrifuge to increase impact velocity, and Calka and Radlinski developed the uni-ball mill in 1991, which makes use of magnets to better control the rate of milling (Calka and Radlinski, 1991).

Modern Techniques


Further technical advances have enabled an even greater degree of control over mechanochemical reactions. Of particular importance are techniques that add a small amount of catalytic material to a reaction mixture. One of these methods, liquid-assisted grinding (LAG), has opened reaction pathways not traditionally accessible by neat (liquid-free) grinding. LAG introduces a small amount of liquid into a reaction mixture, which greatly accelerates reaction rates while also promoting the formation of new isomeric products (Do & Friščić, 2017). LAG has also been shown to increase reaction yield; for example, when water was added in the oxidative addition step of the halogenation of rhenium(I) complexes, yield increased from 82% to 94%. Additionally, the isomeric excess of the diagonal isomer increased from 45% to 99%, improving reaction selectivity (Hernández et al., 2014). Another modern technique used in mechanochemistry is twin screw extrusion (TSE). In contrast to traditional “batch” mills, TSE provides a “continuous” mode of production, as mechanical force is constantly applied to the material, which is passed through a confined barrel by two rotating screws. The configuration of these screws can be modified to control the shear and compression forces on the material, and the feed rate can be adjusted to directly control the amount of force applied; this allows for greater temperature control than standard ball mills (Crawford et al., 2017). TSE has been particularly useful in synthetic processes; for instance, Medina et al. produced pharmaceutical cocrystals from caffeine and oxalic acid – a process previously achieved only via solution crystallization (Medina et al., 2010). As solution crystallization can lead to the formation of solvate by-products, TSE improves overall yield by eliminating the need for solvent.
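The atom economy metric mentioned in the introduction can be made concrete with a short calculation. The sketch below is purely illustrative: the condensation reaction and the molar masses are hypothetical examples chosen to show the formula, not values taken from any study cited here.

```python
# Atom economy = (molar mass of desired product) / (total molar mass of reactants) x 100.
# All numbers below are hypothetical, chosen only to illustrate the formula.

def atom_economy(product_mass: float, reactant_masses: list[float]) -> float:
    """Return atom economy as a percentage of reactant mass retained in the product."""
    return 100.0 * product_mass / sum(reactant_masses)

# Example: a condensation joining a 122 g/mol acid and a 46 g/mol alcohol,
# releasing water (18 g/mol). The product is 122 + 46 - 18 = 150 g/mol.
ae = atom_economy(150.0, [122.0, 46.0])
print(f"Atom economy: {ae:.1f}%")  # ~89%; the remaining ~11% of mass leaves as water
```

A perfectly atom-economical reaction (for example, a simple addition with no by-product) would score 100%; solvent use does not appear in this ratio at all, which is why solvent waste is tracked separately as “auxiliary waste.”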

Figure 1: Planetary ball mills in mechanochemistry take advantage of centrifugal force at high speeds. In planetary ball mills, “grinding jars” are arranged on a wheel and spin opposite to the direction of the wheel. The impact of the balls raises reaction rates by increasing the surface area of the reactants. Source: Wikimedia Commons

“As the mortar and pestle proved to be too cumbersome, scientists instead turned to ball milling devices that could supply the large amounts of energy needed to initiate chemical reactions.”

Mechanics of Mechanochemical Reactions
With regard to mechanochemical synthesis, the energy required to overcome activation energy barriers is provided by the transformation

Figure 2: MOFs are composed of metal ions coordinated to specific organic ligands. Normally synthesized under solution conditions, some MOFs have recently been produced through mechanical force. Source: Wikimedia Commons

“Mechanochemical reactions are also more sensitive than their solvent counterparts to changes in the stoichiometric ratio of reactants and temperature.”

of mechanical work into heat energy, often through deformation or friction forces. As suggested by P. G. Fox, prolonged direct contact between reactants is necessary for a solid-phase mechanochemical reaction to proceed (Fox, 1975). Evidence also suggests that mechanochemical reactions must occur in a liquid-like phase. The aldol condensation reaction, for instance, has been shown to produce at least partial melts, in which some of the reaction mixture is melted, in 67% of ketone/aldehyde mixtures (Rothenberg et al., 2001). Notably, when reactants are ground together, the melting point of the mixture drops significantly. For example, when alpha-glycine and beta-malonic acid were milled together, a melt was observed at 60˚C, significantly lower than the standard 112˚C melting point of the salt product. This reduction in melting point significantly lowers the thermal requirement for reactions and generally increases reaction rates (Michalchuk et al., 2014). While mechanochemical reactions occur at significantly lower temperatures than solvent reactions, they are still governed by many of the same principles. For instance, the success of any reaction depends heavily on the nucleophilic or electrophilic character of the reagents involved. As described by Machuca and Juaristi, the yield for the mechanochemical dipeptide-catalysed


aldol-condensation reaction increased from 7% for p-methoxybenzaldehyde to 84% for o-chlorobenzaldehyde. The increased electron-withdrawing effects associated with o-chlorobenzaldehyde raised the reaction rate through increased pi-stacking, as the electron-rich naphthyl ring on the catalyst formed a more rigid transition state with the electron-poor benzaldehyde (Machuca and Juaristi, 2015). Similarly, the mechanochemical synthesis of thioureas required 9 hours of milling when a 4-methoxy thiocyanate derivative was used as the electrophile, as reported by Štrukil. In contrast, only 3 hours of milling were required when the methoxy group was replaced with the more electron-withdrawing nitro group, as the benzyl ring became more electron-deficient and thus more susceptible to nucleophilic attack by the aniline (Štrukil, 2017). Mechanochemical reactions are also more sensitive than their solvent counterparts to changes in the stoichiometric ratio of reactants and in temperature. As reported by Užarević et al., the dry milling of cyanoguanidine and cadmium chloride produced a 1-dimensional polymer when the reagents were ground in a 1:1 ratio, whereas a 3-dimensional polymer formed when they were ground in a 1:2 ratio (Užarević et al., 2016). Notably, the group found that when the temperature of the milling reaction


Figure 3: The reaction mechanism for Suzuki-Miyaura cross-coupling, a common organometallic reaction used to form carbon-carbon single bonds. Source: Wikimedia Commons

was raised to 60˚C, the 3-dimensional polymer formed almost immediately, followed by rapid conversion to the 1-dimensional polymer once the cadmium chloride had been consumed. In contrast, at room temperature, an amorphous intermediate phase persisted for almost 20 minutes of ball-milling (Užarević et al., 2016).

Applications of Mechanochemical Synthesis
Due to modern technical achievements, mechanochemistry has proven to be a versatile synthetic tool. Recent studies have demonstrated its potential in metal-catalysed reactions. For instance, Fulmer et al. found that the Sonogashira coupling reaction could be replicated via high-speed ball-milling without the traditional copper iodide catalyst (Fulmer et al., 2009). Instead, the group used a copper vial and copper balls as a catalytic surface for the reaction, which produced yields of up to 89%, comparable to the solvent-based reaction. Similarly, Chen et al. found silver foil to be an effective catalyst in the cyclopropanation of alkenes with diazoacetates. In comparison to the standard catalysts for the reaction (dirhodium(II) salts), silver foil is both significantly more abundant and recyclable (Chen et al., 2015).


Another important application of mechanochemical reactions is in the synthesis of metal-organic frameworks, or MOFs. As porous materials made up of metal ions and organic ligands, MOFs are promising candidates for fuel storage, carbon capture, and catalysis (Furukawa et al., 2013). Pichon et al. demonstrated the potential for solvent-free synthesis of MOFs through the reaction of copper acetate monohydrate, Cu(O2CCH3)2·H2O, with isonicotinic acid under ball-milling conditions (Pichon et al., 2006). Similarly, Friščić & Fábián synthesized zinc fumarate, an MOF commonly used in functional materials, from zinc oxide via liquid-assisted grinding (Friščić & Fábián, 2009). Significantly, the simple metal oxides used in the mechanochemical synthesis of MOFs can replace the more expensive and toxic metal nitrates common in current MOF synthesis (Do & Friščić, 2017).

“Recent studies have demonstrated the potential use of mechanochemistry in metal-catalyzed reactions.”

Benefits of Mechanochemistry over Solvent Reactions
Although mechanochemistry is a still-emerging field, it has already demonstrated significant advantages over traditional solution synthesis methods. Some of these advantages are explored below.

Reaction Energetics and Efficiency: Compared to common methods of microwave irradiation and solution heating,

mechanochemical reactions may be significantly more efficient at transferring energy for comparable yield. Experimental studies suggest that ball mills can deliver anywhere between 95.4 and 117.7 kJ mol-1 (McKissic et al., 2014). Given that ball mills can deliver such a large quantity of energy, it is not surprising to see a corresponding increase in reaction rate compared to standard methods. A study by Rodríguez et al. suggests that, on average, high-energy ball milling can reduce the reaction time of an enantioselective aldol reaction by over 50%, with yields and enantiomeric excess similar to those of the corresponding stirred reactions (Rodríguez et al., 2006). Similarly, a study by Schneider et al. suggests that ball-milling has significant energy efficiency advantages over traditional microwave processes in Suzuki-Miyaura coupling reactions: microwave irradiation required 7.6 kWh mol-1 to produce 5 mmol of product, whereas planetary mills generated 100 mmol of product using just 5 kWh mol-1 of electrical energy (Schneider et al., 2009).
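The quoted figures can be put side by side with a quick back-of-the-envelope calculation. The sketch below simply converts the per-mole energy figures above into kJ and computes the relative saving; it is an illustration of the comparison only, not a reproduction of the original study's life-cycle assessment.

```python
# Energy comparison for the Suzuki-Miyaura coupling, using the per-mole
# electrical-energy figures quoted in the text (Schneider et al., 2009).
# Batch sizes differ (5 mmol vs 100 mmol), so the comparison is per mole of product.
KWH_TO_KJ = 3600.0  # 1 kWh = 3600 kJ

microwave_kwh_per_mol = 7.6   # microwave irradiation, 5 mmol scale
mill_kwh_per_mol = 5.0        # planetary ball mill, 100 mmol scale

saving = 1.0 - mill_kwh_per_mol / microwave_kwh_per_mol
print(f"Microwave: {microwave_kwh_per_mol * KWH_TO_KJ:.0f} kJ per mol of product")
print(f"Ball mill: {mill_kwh_per_mol * KWH_TO_KJ:.0f} kJ per mol of product")
print(f"Milling uses ~{saving:.0%} less energy per mole, at 20x the batch size")
```

Note that both figures dwarf the 95.4-117.7 kJ mol-1 the mill actually delivers to the reactants; most electrical energy in either method is lost as heat, which is why the per-mole accounting matters.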

“Mechanochemical reactions also enhance the selectivity of desired products as compared to solution conditions.”

Reaction Selectivity: Mechanochemical reactions can also enhance selectivity for desired products compared to solution conditions. Products with a higher degree of thermodynamic stability are generally preferred due to the different thermodynamic environment of solid-state mechanochemical reactions. For instance, Balema et al. observed that the mechanochemical Wittig reaction preferentially selected for the more thermodynamically stable E-stilbenes; in contrast, solution environments normally produce mixtures of Z- and E-stilbene isomers (Balema et al., 2002). Similarly, Belenguer et al. observed that mechanochemical conditions drove the thermodynamic equilibration of a disulfide dimerization reaction from two homodimers (reactants) to heterodimers (products). The group suggested that this may be because forcing one of the homodimers into an energetically unstable lattice structure raises its lattice energy by 12.7 kJ mol-1 relative to an isolated molecule in solution, reducing the activation energy and raising the reaction rate (Belenguer et al., 2011).

De Novo Synthesis: Finally, some mechanochemical reactions have enabled the synthesis of products that could not


be synthesized using other methods. One of these products is a dimer of C60, better known as fullerene. Although the dimer was previously believed to be thermally impossible to produce due to its highly strained and electron-deficient double bonds, Wang et al. successfully produced a C120 dimer using a high-speed vibration mill and potassium cyanide as a catalyst (Wang et al., 1997). In addition to circumventing problems of hindered stereochemistry, mechanochemical pathways also avoid the common problem of solvent interference. As reported by Rightmire et al., the synthesis of tris(allyl)aluminum complexes was possible only via ball-milling; attempts to conduct the synthesis in hexane solution were entirely unsuccessful due to solvent interference (Rightmire et al., 2014).

Conclusion
While mechanochemical techniques have been used for thousands of years, only recently have they evolved into a practical tool for synthesis. Thanks to modern technological innovations, synthetic chemists can more easily control the energy input of reactions while also drastically increasing reaction rates. Additionally, mechanochemistry has promoted greater atom economy and reduced solvent waste in modern syntheses, making it an excellent example of green chemistry. Furthermore, due to the different chemical environment associated with solid-state chemistry, synthetic pathways can be fine-tuned to raise reaction speed, change selectivity, and even synthesize “impossible” molecules. Though the physical underpinnings of mechanochemical techniques are still being explored, they present a highly promising alternative to traditional reaction pathways.

References
Balema, V. P., Wiench, J. W., Pruski, M., & Pecharsky, V. K. (2002). Mechanically Induced Solid-State Generation of Phosphorus Ylides and the Solvent-Free Wittig Reaction. Journal of the American Chemical Society, 124(22), 6244–6245. https://doi.org/10.1021/ja017908p
Belenguer, A. M., Friščić, T., Day, G. M., & Sanders, J. K. M. (2011). Solid-state dynamic combinatorial chemistry: reversibility and thermodynamic product selection in covalent mechanosynthesis. Chemical Science, 2(4), 696–700. https://doi.org/10.1039/C0SC00533A
Boldyreva, E. (2013). Mechanochemistry of inorganic and organic systems: what is similar, what is different? Chemical Society Reviews, 42(18), 7719–7738. https://doi.org/10.1039/C3CS60052A


Calka, A., & Radlinski, A. P. (1991). Universal high performance ball-milling device and its application for mechanical alloying. Materials Science and Engineering: A, 134, 1350–1353. https://doi.org/10.1016/0921-5093(91)90989-Z
Cano-Ruiz, J. A., & McRae, G. J. (1998). Environmentally conscious chemical process design. Annual Review of Energy and the Environment, 23(1), 499–536. https://doi.org/10.1146/annurev.energy.23.1.499
Chen, L., Bovee, M. O., Lemma, B. E., Keithley, K. S. M., Pilson, S. L., Coleman, M. G., & Mack, J. (2015). An Inexpensive and Recyclable Silver-Foil Catalyst for the Cyclopropanation of Alkenes with Diazoacetates under Mechanochemical Conditions. Angewandte Chemie International Edition, 54(38), 11084–11087. https://doi.org/10.1002/anie.201504236
Crawford, D. E., Miskimmin, C. K. G., Albadarin, A. B., Walker, G., & James, S. L. (2017). Organic synthesis by Twin Screw Extrusion (TSE): continuous, scalable and solvent-free. Green Chemistry, 19(6), 1507–1518. https://doi.org/10.1039/C6GC03413F
Do, J.-L., & Friščić, T. (2017). Mechanochemistry: A Force of Synthesis. ACS Central Science, 3(1), 13–19. https://doi.org/10.1021/acscentsci.6b00277
Fox, P. G. (1975). Mechanically initiated chemical reactions in solids. Journal of Materials Science, 10(2), 340–360. https://doi.org/10.1007/BF00540358
Friščić, T., & Fábián, L. (2009). Mechanochemical conversion of a metal oxide into coordination polymers and porous frameworks using liquid-assisted grinding (LAG). CrystEngComm, 11(5), 743–745. https://doi.org/10.1039/B822934C
Fulmer, D. A., Shearouse, W. C., Medonza, S. T., & Mack, J. (2009). Solvent-free Sonogashira coupling reaction via high speed ball milling. Green Chemistry, 11(11), 1821–1825. https://doi.org/10.1039/B915669K
Furukawa, H., Cordova, K. E., O'Keeffe, M., & Yaghi, O. M. (2013). The Chemistry and Applications of Metal-Organic Frameworks. Science, 341(6149). https://doi.org/10.1126/science.1230444
Hernández, J. G., Macdonald, N. A. J., Mottillo, C., Butler, I. S., & Friščić, T. (2014). A mechanochemical strategy for oxidative addition: remarkable yields and stereoselectivity in the halogenation of organometallic Re(I) complexes. Green Chemistry, 16(3), 1087–1092. https://doi.org/10.1039/C3GC42104J
Kinne-Saffran, E., & Kinne, R. K. H. (1999). Vitalism and Synthesis of Urea. American Journal of Nephrology, 19(2), 290–294. https://doi.org/10.1159/000013463
Li, C.-J., & Trost, B. M. (2008). Green chemistry for chemical synthesis. Proceedings of the National Academy of Sciences of the United States of America, 105(36), 13197–13202. https://doi.org/10.1073/pnas.0804348105
Machuca, E., & Juaristi, E. (2015). Organocatalytic activity of α,α-dipeptide derivatives of (S)-proline in the asymmetric aldol reaction in absence of solvent. Evidence for noncovalent π–π interactions in the transition state. Tetrahedron Letters, 56(9), 1144–1148. https://doi.org/10.1016/j.tetlet.2015.01.079


McKissic, K. S., Caruso, J. T., Blair, R. G., & Mack, J. (2014). Comparison of shaking versus baking: further understanding the energetics of a mechanochemical reaction. Green Chemistry, 16(3), 1628–1632. https://doi.org/10.1039/C3GC41496E
Medina, C., Daurio, D., Nagapudi, K., & Alvarez-Nunez, F. (2010). Manufacture of pharmaceutical co-crystals using twin screw extrusion: a solvent-less and scalable process. Journal of Pharmaceutical Sciences, 99(4), 1693–1696. https://doi.org/10.1002/jps.21942
Michalchuk, A. A. L., Tumanov, I. A., Drebushchak, V. A., & Boldyreva, E. V. (2014). Advances in elucidating mechanochemical complexities via implementation of a simple organic system. Faraday Discussions, 170(0), 311–335. https://doi.org/10.1039/C3FD00150D
Pichon, A., Lazuen-Garay, A., & James, S. L. (2006). Solvent-free synthesis of a microporous metal–organic framework. CrystEngComm, 8(3), 211–214. https://doi.org/10.1039/B513750K
Rightmire, N. R., Hanusa, T. P., & Rheingold, A. L. (2014). Mechanochemical Synthesis of [1,3-(SiMe3)2C3H3]3(Al,Sc), a Base-Free Tris(allyl)aluminum Complex and Its Scandium Analogue. Organometallics, 33(21), 5952–5955. https://doi.org/10.1021/om5009204
Rodríguez, B., Rantanen, T., & Bolm, C. (2006). Solvent-Free Asymmetric Organocatalysis in a Ball Mill. Angewandte Chemie, 118(41), 7078–7080. https://doi.org/10.1002/ange.200602820
Rothenberg, G., Downie, A. P., Raston, C. L., & Scott, J. L. (2001). Understanding Solid/Solid Organic Reactions. Journal of the American Chemical Society, 123(36), 8701–8708. https://doi.org/10.1021/ja0034388
Schneider, F., Szuppa, T., Stolle, A., Ondruschka, B., & Hopf, H. (2009). Energetic assessment of the Suzuki–Miyaura reaction: a curtate life cycle assessment as an easily understandable and applicable tool for reaction optimization. Green Chemistry, 11(11), 1894–1899. https://doi.org/10.1039/B915744C
Štrukil, V. (2017). Mechanochemical synthesis of thioureas, ureas and guanidines. Beilstein Journal of Organic Chemistry, 13(1), 1828–1849. https://doi.org/10.3762/bjoc.13.178
Takacs, L. (2013). The historical development of mechanochemistry. Chemical Society Reviews, 42(18), 7649–7659. https://doi.org/10.1039/C2CS35442J
Taxol – The Drama behind Total Synthesis. (2011, July 27). https://web.archive.org/web/20110727152818/http://www.org-chem.org/yuuki/taxol/taxol_en.html
Užarević, K., Štrukil, V., Mottillo, C., Julien, P. A., Puškarić, A., Friščić, T., & Halasz, I. (2016). Exploring the Effect of Temperature on a Mechanochemical Reaction by in Situ Synchrotron Powder X-ray Diffraction. Crystal Growth & Design, 16(4), 2342–2347. https://doi.org/10.1021/acs.cgd.6b00137
Wang, G.-W., Komatsu, K., Murata, Y., & Shiro, M. (1997). Synthesis and X-ray structure of dumb-bell-shaped C120. Nature, 387(6633), 583–586. https://doi.org/10.1038/42439


Fast Fashion and the Challenge of Textile Recycling BY ARUSHI AGASTWAR, MONTA VISTA HIGH SCHOOL SENIOR Cover Image: Mixed textiles and clothing recycling bin alongside bins for plastic, glass, and aluminum bottles. Source: Wikimedia Commons


Introduction
In recent decades, the textile industry has seen tremendous growth due to an increase in overall population, a growing middle class in particular, and higher disposable incomes. Alongside these broader trends, growth has also been driven by a new surge in textile and apparel consumption referred to as Fast Fashion. Consumer fashion brands like Zara, Forever 21, and H&M are disrupting the seasonal design cycles of clothing. Traditional legacy brands like Tommy Hilfiger, Levi's, and Dolce & Gabbana used to release around two or three seasonal lines a year; now, an entirely new collection of apparel is put out on a biweekly basis. Fast Fashion brands leverage the urgency created by short supply cycles, pushing people to either purchase immediately or miss out on the latest trends. The business model of these brands is to swiftly make clothes that are inexpensive and expendable, and the quick turnaround

of raw material into finished apparel has been facilitated greatly by social media platforms (Caro & Martínez-de-Albéniz, 2015). Social media influencers constantly fill feeds with new styles, creating an apparent psychological need to keep up with the latest trends. Unfortunately, this Fast Fashion model is neither cost-effective nor environmentally sustainable. Today, the average individual purchases more clothes than ever before. Due to a steady rise in clothing consumption over the past three decades, more than 80 billion garments were produced worldwide in 2019 (Thomas, 2019). For the American market, this increase translates to the average consumer purchasing more than 63 pieces of clothing per year, or 1.2 garments a week, more than 6 times the global average (Bick et al., 2018). Such large-scale consumption of clothing has led to significant problems, such

Figure 1: The long-term environmental impacts of the textile industry are often overlooked. Source: Flickr

as unwanted garments and textiles being prematurely discarded. Because consumers hold onto clothing for only half as long as they used to, the average American now throws away around 82 pounds of clothing each year. Of this, around 85 percent ends up in the landfill; of the remaining 15 percent, half is reused and the other half is recycled (Bick et al., 2018; Wicker, 2016).
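The percentages above can be turned into pounds with a short calculation. The sketch below is a rough illustration of the per-person figures quoted in the text; the inputs are the approximate numbers cited there, not new data.

```python
# Rough per-person breakdown of the clothing-waste figures quoted above
# (Bick et al., 2018; Wicker, 2016). All inputs are approximate.
discarded_lb = 82.0                       # clothing discarded per American per year

landfill = 0.85 * discarded_lb            # ~85% goes to landfill
remainder = discarded_lb - landfill       # the other ~15%
reused = remainder / 2                    # half of the remainder is reused...
recycled = remainder / 2                  # ...and half is recycled

print(f"Landfilled: {landfill:.1f} lb")   # 69.7 lb
print(f"Reused:     {reused:.2f} lb")     # 6.15 lb
print(f"Recycled:   {recycled:.2f} lb")   # 6.15 lb
```

In other words, only about one pound in thirteen of discarded clothing is currently recycled, which frames the scale of the opportunity discussed in the next section.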

Why We Need Recycling
According to the Ellen MacArthur Foundation, textile production is highly resource-intensive. In 2015, total greenhouse gas (GHG) emissions from the textile and apparel industry were 1,200 million tons. Additionally, the industry used 93 billion cubic meters of water, 8 million tons of fertilizer for cotton growth, and 200,000 tons of pesticides for cotton (Ellen MacArthur Foundation, 2017). The textile industry uses dozens of types of fabric. Synthetic fabrics like polyester, nylon, rayon, acrylic, and elastane are based on plastics and designed to last, which causes them to take hundreds of years to decompose. Natural fabrics, including cotton, linen, wool, and silk, can be similarly persistent: during manufacturing, the fibers go through processes like dyeing and finishing, which render the finished fabrics difficult to decompose (Mukhopadhyay, n.d.). Once in landfills, apparel made of natural fibers can be several hundred times slower to degrade


compared to unprocessed raw fiber that has not been converted into yarn or fabric. Additionally, its decomposition can release greenhouse gases into the atmosphere, contributing to rising global temperatures. Recycling textiles can have substantial environmental benefits, including a 53% reduction in greenhouse gas emissions, a 45% reduction in chemical pollution, and a 95% reduction in water pollution (ECOSIGN, 2017). Textile recycling also substantially reduces dependence on virgin natural resources and allows us to extract the maximum value out of fibers. This matters because rising population projections mean growing demand for resources for settlement and sustenance. As a result, societies will have to choose between using land for living, land for food cultivation, land for growing cotton, and land for grazing sheep for

“Recycling textiles can have a substantial environmental benefit, including a 53% reduction in greenhouse gas emissions, a 45% reduction in chemical pollution, and a 95% reduction in water pollution.”

Figure 2: Width of a synthetic fiber as compared to other materials Source: Wikimedia Commons


Figure 3: How mechanical recycling compares to other forms of municipal waste management. Source: Flickr

wool. Textile recycling is a major way to ease this pressure: for every piece of cloth recycled, the amount of resources needed to produce virgin fabric is significantly reduced (Watson, 2017).

“Textiles are recycled in two prominent ways today: mechanical and chemical."

The processes leading up to recycling include the collection of textiles from various sources, consolidation of the textiles, classification according to fabric type and color, and transportation from the collection sites to the sorting facilities. As each of these steps is highly labor-intensive, they generate a multitude of employment opportunities. This, in addition to the environmental benefits of reduced landfilling and incineration (which helps reduce air pollution), is a further positive aspect of textile recycling (Tai et al., 2011).

The Recycling Process
Textiles are recycled in two prominent ways today: mechanically and chemically. During mechanical recycling, fabrics are shredded into shorter fibers. These shorter fibers yield yarn of lower quality and strength and are difficult to turn back into fabrics; shredded materials are best suited to “non-woven” fabrics, which are formed directly from fibers and therefore require no yarn formation, weaving, or knitting. As shown in Figure 3, the percentage of municipal waste handled through recycling has been increasing over the past few decades, and the most recent data show that it is now comparable to the percentage of municipal

waste sent to landfill (Payne, 2013). During chemical recycling, natural fibers such as cotton, linen, or bamboo, which are made of cellulose, can be dissolved much like paper. However, with each successive dissolution, the cellulosic polymer chains get shorter, resulting in weak yarn and substandard fabric. Even after the first round of recycling, the cotton fibers are not long enough to be properly knitted or woven; to obtain higher fabric strength, textile producers must blend these fibers with virgin fibers (Payne, 2013).

Challenges in Textile Recycling
Globally, textile fibers are often blended together to achieve special optical effects or to improve performance by maximizing the beneficial properties of each fiber type. However, these blends pose significant challenges for recycling, as each fiber requires a different recycling process. Since the blending is done at the fiber stage, no mechanical process has yet been developed to separate blended fibers, making it difficult to recycle a blended fabric (Ellen MacArthur Foundation, 2017). Some other issues in recycling are:

1. The processing, dyes, and finishes on raw textile fabric make the material very difficult to recycle. Dyes and finishes are chemically bonded to the surface of the fabric, and the bonds between the dyes, finishes, and fiber polymers (such as cellulose, polyester, and nylon) are specifically engineered for each type of dye and fabric. This allows fabrics to hold onto dyes and finishes for a long time, resisting damage from successive washing or sunlight, but it also makes them particularly hard to separate from the fabric (Marconi et al., 2018).

2. Some apparel is now supplemented with metal wires and various polymer materials (as is the case with technical textiles). These electrical components are integrated at the weaving or knitting stage of fabric production and pose a serious challenge to recycling: mechanical recycling, which works by shredding the fabric, cannot separate the wires from the textile (Tai et al., 2011).

3. The fibrous material collected after shredding is often inconsistent, and it is difficult to adjust process parameters for each new batch, because the machine settings for spinning yarn from recycled fiber depend on fiber length. For the recycling process to run seamlessly and profitably, a facility needs a consistent supply of acceptable textile waste as input (Xiao et al., 2018).

Discussion
Consumers are becoming increasingly aware of the consequences of human consumption. As a result, clothing brands are introducing new, more ‘conscious’ lines to cater to this market segment. However, these eco-friendly

brands and clothing lines are often priced at a premium and are therefore largely inaccessible to the vast majority of the population, who have less disposable income to spend on clothes. Some companies are even criticized for ‘greenwashing’, or misleading eco-conscious consumers into thinking that their products are environmentally friendly when they are no more so than others on the market.

Figure 4: Public Recycling bin for textiles and shoes

Several brands have implemented approaches that do not directly address the issue of textile biodegradation and recycling. H&M stores, for instance, have collection bins where people can drop off their clothes and receive H&M cash coupons in exchange.

“Consumers are becoming increasingly aware of the consequences of human consumption. As a result, clothing brands are introducing new and more 'conscious' lines to cater to this market segment."

To be more environmentally sustainable, the textile industry needs to make significant changes regarding its use of resources. In the past, there has not been an adequate consideration of the environmental impacts of the textile production process. With fast fashion changing the way people purchase and interact with their clothes, the dangers of textile production to the environment have become increasingly pertinent.

Source: Geograph

References
Wicker, A. (2016, September 1). The earth is covered in the waste of your old clothes. Newsweek. https://www.newsweek.com/2016/09/09/old-clothes-fashion-waste-crisis-494824.html
Caro, F., & Martínez-de-Albéniz, V. (2015). Fast Fashion: Business Model Overview and Research Opportunities. https://link.springer.com/chapter/10.1007/978-1-4899-7562-1_9
Ellen MacArthur Foundation. (2017, November 28). A New Textiles Economy: Redesigning fashion's future. https://www.ellenmacarthurfoundation.org/publications/a-new-textiles-economy-redesigning-fashions-future
Bick, R., Halsey, E., & Ekenga, C. C. (2018). The global environmental injustice of fast fashion. Environmental Health. https://doi.org/10.1186/s12940-018-0433-7
Cuc, S., & Vidović, M. (n.d.). Environmental Sustainability through Clothing Recycling. https://www.semanticscholar.org/paper/Environmental-Sustainability-through-Clothing-Cuc-Vidovi%C4%87/81069e26c688be1d475a1337f40fe68e6414969f
Elander, M., & Ljungkvist, H. (2016). Critical aspects in design for fiber-to-fiber recycling of … http://mistrafuturefashion.com/wp-content/uploads/2016/06/MFF-report-2016-1-Critical-aspects.pdf
Marconi, M., Landi, D., Meo, I., & Germani, M. (2018, April 18). Reuse of Tires Textile Fibers in Plastic Compounds: Is this Scenario Environmentally Sustainable? Procedia CIRP. https://www.sciencedirect.com/science/article/pii/S2212827117308508
Mukhopadhyay, S. (n.d.). Textile Engineering - Textile Fibres. NPTEL. https://nptel.ac.in/courses/116/102/116102026/
Payne, A. (2015, August 7). Open- and closed-loop recycling of textile and apparel products. Handbook of Life Cycle Assessment (LCA) of Textiles and Clothing. https://www.sciencedirect.com/science/article/pii/B978008100169100006X
Tai, J., Zhang, W., Che, Y., & Feng, D. (2011). Municipal solid waste source-separated collection in China: A comparative analysis. Waste Management. https://pubmed.ncbi.nlm.nih.gov/21504843/
ECOSIGN. (n.d.). Textile recycling as a contribution to circular economy and production waste enhancement. http://www.ecosign-project.eu/news/textile-recycling-as-a-contribution-to-circular-economy-and-production-waste-enhancement/
Thomas, D. (2019, August 29). The High Price of Fast Fashion. The Wall Street Journal. https://www.wsj.com/articles/the-high-price-of-fast-fashion-11567096637
Watson, D., Gylling, A., Andersson, T., & Heikkilä, P. (2017). Textile-to-textile recycling: Ten Nordic brands that are leading the way. VTT's Research Information Portal. https://cris.vtt.fi/en/publications/textile-to-textile-recycling-ten-nordic-brands-that-are-leading-t
H&M. (n.d.). Women's Clothing & Fashion - shop the latest trends. https://www2.hm.com/en_gb/ladies.html
Zara. (n.d.). JOIN LIFE | ZARA United States. https://www.zara.com/us/en/sustainability-l1449.html
Xiao, S., Dong, H., Geng, Y., & Brander, M. (2018, March 22). An overview of China's recyclable waste recycling and recommendations for integrated solutions. Resources, Conservation and Recycling. https://www.sciencedirect.com/science/article/pii/S0921344918300910





The Cellular Adhesion and Cellular Replication of SARS-CoV-2

BY ARYA FAGHRI, UNIVERSITY OF DELAWARE '24

Cover Image: A depiction of the extracellular components of the SARS-CoV-2 virion. Each membrane protein has specific functions in both entry into the human cell and activity within the human cell. Source: Wikimedia Commons


Introduction

COVID-19 is a viral disease caused by infection with the Severe Acute Respiratory Syndrome Coronavirus 2, or SARS-CoV-2. The coronavirus itself is a sphere-shaped virus whose membrane, studded with various proteins, protects its single-stranded RNA genome (Patel, 2020). This virus is zoonotic, meaning that it initially jumped from another species, most likely a pangolin or a bat, to a human host (Patel, 2020). Once in the human body, the coronavirus enters cells so that it can multiply. It enters via the process of cellular adhesion, in which it attaches its surface-level proteins to a specific human receptor enzyme called Angiotensin Converting Enzyme II (ACE-2) (Patel, 2020). ACE-2 is a type I integral membrane enzyme categorized as a carboxypeptidase, responsible for the conversion of Angiotensin II to Angiotensin (1-7) (Warner, 2004). Once inside the human cell, the coronavirus hijacks the biosynthetic machinery within that cell to reproduce its genomic RNA and proteins and form new copies of itself. Having undergone this cellular replication, the newly produced coronavirus is then released into the extracellular fluid and can travel to other cells to repeat the same process.

Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) and its mechanics

Coronaviruses are positive-strand (5’ to 3’) RNA viruses that have a variety of proteins embedded within their plasma membrane, a set of intracellular proteins, and a 30,000-nucleotide-long RNA genome packed within the virion (Schelle et al., 2005). Proteins within the plasma membrane bind ACE-2 receptor enzymes on human cells to facilitate viral uptake (National Institute of Health NIAID Team, 2020). Spike (S) glycoproteins and hemagglutinin esterase dimers specifically facilitate this binding, while membrane (M) proteins form much of the protein coat that contains the virus interior. Envelope (E) proteins regulate viral assembly once the virus is inside the cell. Lastly, nonstructural proteins (nsps) are very small proteins that play an assisting role in the coronavirus replication and assembly processes (Schelle et al., 2005, Zeng et al., 2008, Boopathi et al., 2020). The coronavirus genome is highly protected within the viral capsid (protein coat). One set of protective proteins are the nucleocapsid (N) proteins, which form the intracellular component of the virus (Schelle et al., 2005). The viral RNA genome is constantly replicated, transcribed, and translated so the virus can multiply and spread throughout the human body. The virus can be compared to other viruses such as influenza; however, the timing and the degree of the effects the coronavirus produces in organ systems are distinctive. In particular, the coronavirus suppresses the activity of organs associated with the immune system, which allows it to remain mobile and capable of severe harm.

Angiotensin Converting Enzyme II (ACE-2) and its mechanics

Angiotensin Converting Enzyme II, more commonly known as ACE-2, is related to Angiotensin Converting Enzyme (ACE), an enzyme in the human body that, among other roles, helps maintain blood pressure (Warner, 2004, Turner, 2015). The main cellular function of ACE-2 is to convert the protein Angiotensin II to Angiotensin (1-7) (R&D Systems, 2015).

3.1 Structure: ACE-2 is a wrench-shaped receptor protein consisting of an N-terminal signal sequence, a transmembrane domain, a single catalytic domain, and a C-terminal cytosolic domain (Turner, 2020). The single catalytic domain is the outermost portion of the protein; the N-terminal signal sequence is also found outside the cell (extracellular), while the C-terminal sequence is found inside the cell (intracellular) (Clarke et al., 2011).

3.2 Location: ACE-2 is a receptor protein located on the plasma membranes of lung cells, skin cells, stomach cells, liver cells, kidney cells, bone marrow cells, and much of the respiratory tract (Hamming et al., 2004). Recent studies have found that ACE-2 is also located in the hypothalamus, brainstem, and cerebral cortex, raising the concern that brain cells are also in danger of being affected by the coronavirus (Kabbani et al., 2020).

3.3 Biological Function: The primary function of ACE-2 is to split the protein Angiotensin II into the protein Angiotensin (1-7), which serves to block organ damage and helps control blood pressure (R&D Systems, 2015, Hiremath, 2020, Sriram et al., 2020). This splitting occurs in the extracellular fluid. ACE-2 also acts in an anti-inflammatory capacity by reducing the activity of bradykinin, a peptide that instigates inflammation (Cyagen Newsletter, 2020, UniProt, 2020). Furthermore, ACE-2 helps regulate the human gastrointestinal microbiome, primarily by defending the digestion and metabolization processes of the large intestine from inflammation (Kuba, 2013).
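The carboxypeptidase activity described above removes exactly one residue from the C-terminal end of the substrate. Using the standard one-letter codes for Angiotensin II (DRVYIHPF: Asp-Arg-Val-Tyr-Ile-His-Pro-Phe), the cleavage to Angiotensin (1-7) can be sketched in a few lines of Python; this is a toy string manipulation to show the idea, not a biochemical model:

```python
# Toy sketch of ACE-2's carboxypeptidase activity: trimming the
# C-terminal phenylalanine (F) from Angiotensin II yields
# Angiotensin (1-7). One-letter amino-acid codes are used.

ANG_II = "DRVYIHPF"  # Asp-Arg-Val-Tyr-Ile-His-Pro-Phe

def carboxypeptidase_cleave(peptide):
    """Remove the single C-terminal residue, as a carboxypeptidase does."""
    return peptide[:-1], peptide[-1]

product, released = carboxypeptidase_cleave(ANG_II)
print(product)   # DRVYIHP -> Angiotensin (1-7)
print(released)  # F -> the cleaved phenylalanine
```

The same one-residue trim is what distinguishes ACE-2 from ACE, which removes a C-terminal dipeptide instead.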

“The primary function of ACE-2 is to split the protein Angiotensin II into the protein Angiotensin (1-7), which serves to block organ damage and helps control blood pressure.”

Affinity of the Coronavirus for ACE-2

The coronavirus is most likely to enter the body through the respiratory system, because respiratory droplets from afflicted people contain viral particles (Ghose, 2020). These droplets can travel a remarkable distance through the air and enter a healthy person’s body (Atkinson, 2009, Ghose, 2020). Within several hours, the coronavirus particles will cross the mucous membranes of the respiratory system and enter the cells of the human body (Anderson et al., 2020). When the coronavirus enters through the respiratory tract, it makes its way towards healthy cells containing ACE-2 receptors to begin the attachment process. Why does the coronavirus have such a high affinity for ACE-2? First, the structural features of ACE-2 make it favorable for the coronavirus: the compact conformation of its single catalytic domain gives it a strong affinity for the virus. Second, the coronavirus has an S protein whose structure is complementary to ACE-2 for binding (Shang, 2020). Once the coronavirus has identified a host cell, a series of reactions allows it to move intracellularly, beginning with action from the S protein (Zeng et al., 2008, Boopathi et al., 2020). The S protein itself consists of a receptor-binding domain known as subunit S1, an intracellular backbone, and a transmembrane anchor known as the S2 subunit (Li, 2016).

Figure 1: The initial translation process, which produces pp1a and pp1ab. Image created by author. Inspiration derived from: Sino Biological (2006), Burkard (2014), Smirnov (2018)

The S1 subunit makes up the outer surface of the S protein, and it is the boundary that faces ACE-2 during attachment (Lee, 2020). Fusion begins when the N-terminal of the S1 subunit on the S protein of the coronavirus binds to the ACE-2 receptor in a complementary fashion (Verdecchia et al., 2020, Lee, 2020). Once that initial binding occurs, a structural change in the S protein takes place, activating another receptor known as the transmembrane protease serine 2 receptor (TMPRSS2), which sits very close to ACE-2 on the plasma membrane and plays a large role in viral entry in general (Hiremath, 2020, Verdecchia et al., 2020, Lee, 2020). TMPRSS2 cleaves the S protein between the S1 and S2 subunits (Verdecchia et al., 2020). Once the S protein is cleaved, the S2 subunit completes the attachment process by exposing a fusion peptide that attaches to the phospholipid bilayer (Verdecchia et al., 2020, Lee, 2020, Di Giorgio et al., 2020). Once fusion occurs, the coronavirus passes through the phospholipid bilayer of the cell and is ready to undergo the series of reactions and changes needed to complete its goal of replicating (Lee, 2020).
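The entry cascade just described is strictly ordered: S1 must engage ACE-2 before TMPRSS2 can cleave, and cleavage must precede fusion. A hypothetical Python sketch of that ordering is below; the step labels are illustrative names for the events in the text, not biochemical entities:

```python
# Toy gating model of the SARS-CoV-2 entry cascade described above.
# Fusion can occur only if every prerequisite event happened in order;
# step names are illustrative labels drawn from the text.

ENTRY_CASCADE = [
    "S1 binds ACE-2",
    "S conformational change activates TMPRSS2",
    "TMPRSS2 cleaves S between S1 and S2",
    "S2 exposes fusion peptide",
    "membrane fusion",
]

def fusion_occurs(events):
    """Fusion requires every prior step, in order, with none skipped."""
    return events == ENTRY_CASCADE

print(fusion_occurs(ENTRY_CASCADE))                          # full cascade: True
print(fusion_occurs(ENTRY_CASCADE[:2] + ENTRY_CASCADE[3:]))  # cleavage skipped: False
```

The point of the sketch is simply that blocking any single step (for instance, inhibiting TMPRSS2) breaks the whole chain, which is why each step is a potential drug target.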

“Post-fusion, ACE-2 is bound by the S protein and is therefore unable to perform its designated function.”

Post-fusion, ACE-2 is bound by the S protein and is therefore unable to perform its designated function (Han et al., 2006, Xu et al., 2017, National Heart, Lung, and Blood Institute, 2020). This is dangerous because ACE-2 protects the pulmonary system from inflammation (Zhang, 2020, Povlsen et al., 2020). Specifically, ACE-2 has a protective role in acute lung failure, delivering key contributions to lung protection; the loss of ACE-2 function can therefore instigate inflammatory lesions in the lungs and throughout the respiratory tract (Kuba, 2005, Imai, 2005, Verdecchia et al., 2020).

The replication process of the coronavirus

The coronavirus has a pre-programmed objective of growing and generating an abundant supply of exact copies of itself in a process known as viral replication (Wessner, 2010). Viral replication allows the virus to survive and thrive, eventually invading organ systems. Because the coronavirus can’t divide and replicate on its own, it hijacks the organelles, molecular reactions, and metabolic processes of a human host cell to carry out the synthesis of the necessary nucleic acids and proteins (Boundless Microbiology, 2020). This allows the coronavirus to exploit host cell capabilities in membrane fusion, translation, proteolysis, replication, and transcription (Shereen et al., 2020). These processes are all different operations that can be conducted on the RNA in order to modify, edit, and produce a new RNA strand. Additionally, the coronavirus utilizes organelles: the rough endoplasmic reticulum, which works with ribosomes in protein synthesis; the Golgi apparatus, which modifies newly produced proteins and ships them to the appropriate location in the cell; vesicles, which serve as transporters of material across the cell; and the cytosol, the fluid between the plasma membrane and the organelles (Shereen et al., 2020). The coronavirus spends most of its time and effort in the ribosomes and the cytosol, with the rough endoplasmic reticulum, Golgi apparatus, and vesicles mostly serving as preparation for exocytosis, or exit from the cell. As a whole, the coronavirus replication process can be broken down into four steps: attachment and fusion as described in the preceding section, virus disassembly, viral biosynthesis, and virus assembly (Hammer, 2020).

Virus Disassembly:


Figure 2: The replicase/transcriptase complex, which produces the mRNA templates. Image created by author. Inspiration derived from: Sino Biological (2006), Burkard (2014), Smirnov (2018)

The coronavirus, consisting of all its proteins and RNA, is ready once in the cytosol to be disassembled via membrane uncoating (Haywood, 2010). Membrane uncoating is carried out by the host cell’s lysosomes, organelles whose enzymes hydrolyze and recycle excess material in the cell; it involves separating the viral membrane and exposing the intracellular single-stranded RNA so that it can be utilized throughout the replication process (Haywood, 2010). With this process complete, the 30,000-nucleotide genomic RNA is now uncoated and prepared to begin the viral biosynthesis step of replication.

Viral Biosynthesis:

The positive genomic RNA strand (5’ to 3’) will now begin a series of biosynthetic steps to produce new viral RNA and viral proteins. First, the RNA undergoes translation. The translation step begins with the coronavirus RNA attaching to a nearby ribosome in the cytosol (Sino Biological, 2006). Then ORF1a, one region of the viral RNA genome, is translated into polyprotein 1a (pp1a) before translation is halted by an RNA pseudoknot located just before the termination codon of ORF1a (Ziebuhr et al., 1999, Fehr et al., 2015). The RNA pseudoknot at the slippery sequence alters the reading frame by causing the translating ribosome to shift one nucleotide in the negative direction (-1) along the RNA (Lim et al., 2016). This process is known as programmed -1 ribosomal frameshifting (-1 PRF); the one-nucleotide shift allows the ribosome to bypass the ORF1a termination codon and continue translating through ORF1b (Plant et al., 2008). The translation process is then complete at the end of ORF1b and results in a larger hybrid protein spanning the two open reading frames, known as polyprotein 1ab (pp1ab) (Ziebuhr et al., 1999). With pp1a and pp1ab now produced through translation, the two polyproteins undergo proteolysis, a process in which proteins are degraded into smaller component polypeptides, resulting in the production of fifteen nonstructural proteins (nsps) (Wit et al., 2016, Nature, 2020).
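Programmed -1 ribosomal frameshifting can be illustrated with a toy model: a “ribosome” reads codons in frame until it hits a stop codon, unless it slips one nucleotide backwards at a designated position and thereby reads past the ORF1a stop. The short sequence and slip position below are invented for illustration (though the sequence deliberately contains the UUUAAAC slippery heptamer):

```python
# Toy model of programmed -1 ribosomal frameshifting (-1 PRF).
# The ribosome translates codon by codon; at the slippery site it
# shifts the reading frame one nucleotide backwards (-1), so the
# ORF1a stop codon is never read in frame and translation continues.
# Sequence and slip position are invented for illustration.

STOP_CODONS = {"UAA", "UAG", "UGA"}

def translate_with_frameshift(rna, slip_at=-1):
    """Return codons read; slip -1 once at nucleotide index slip_at (if >= 0)."""
    codons, pos = [], 0
    while pos + 3 <= len(rna):
        if 0 <= slip_at <= pos:
            pos -= 1       # the -1 frameshift, applied exactly once
            slip_at = -1
        codon = rna[pos:pos + 3]
        if codon in STOP_CODONS:
            break          # in-frame stop codon ends translation
        codons.append(codon)
        pos += 3
    return codons

#            frame 0: AUG GGU UUA AAC UGA ...  (UGA is the toy ORF1a stop)
rna = "AUGGGUUUAAACUGACCC"
print(translate_with_frameshift(rna))             # halts at the in-frame UGA
print(translate_with_frameshift(rna, slip_at=9))  # slips -1, reads past the stop
```

Without the slip, translation halts at the in-frame UGA (yielding the shorter “pp1a”); with the slip, the stop is never read in frame, mirroring how the ribosome continues into ORF1b to make the longer pp1ab.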

“The positive genomic RNA strand (5' to 3') will now begin a series of biosynthetic steps to produce new viral RNA and viral proteins.”

Second, the initial positive genomic coronavirus RNA strand (5’ to 3’) goes through a replicase-transcriptase complex. This begins with the initial coronavirus RNA traveling down the cytosol to link up with the fifteen nonstructural proteins produced in the translation phase (Wit et al., 2016). With the initial RNA covered in nonstructural proteins, it replicates itself and produces a negative genomic RNA strand (3’ to 5’), in the opposite order compared to the initial positive genomic RNA strand (Sino Biological, 2006, Wit et al., 2016). The first molecular process of replication uses the negative RNA strand to produce a positive genomic coronavirus RNA strand (5’ to 3’), one very similar to the initial RNA of the coronavirus that originally fused into the cytosol (Sino Biological, 2006, Wit et al., 2016). This newly produced positive genomic RNA strand will make up the intracellular portion of the new coronavirus (Sino Biological, 2006). The RNA strand travels further down the cytosol, where it waits until the assembly stage. The second


Figure 3: The final translation process, which produces the viral proteins that are assembled along with the RNA into the new coronavirus virion. Image created by the author. Inspiration derived from: Sino Biological (2006)

“Assembly of the coronavirus occurs largely in the endoplasmic reticulum-Golgi intermediate compartment (ERGIC) as follows...”


molecular process is discontinuous transcription, in which an enzyme called RNA polymerase transcribes the negative genomic RNA strand to produce a set of subgenomic mRNA strands (Wit et al., 2016, Smirnov et al., 2018). The negative genomic RNA strand contains a leader transcription regulatory sequence (TRS) and around nine body transcription regulatory sequences (TRSs), spaced at small distances from one another (Marle et al., 1999). RNA polymerase transcribes from the 5’ end until it arrives at the first body TRS (Sawicki et al., 2003). There it stops and jumps directly to the leader TRS located at the 3’ end (Komissarova et al., 1996). The segment of RNA that was jumped over is not included in the mRNA transcript (Wit et al., 2016). Once transcribed, the small mRNA strand is sent to a template just below the negative genomic RNA strand in the cytosol (Komissarova et al., 1996, Sawicki et al., 2003). The RNA polymerase then transcribes the RNA again from the 5’ end, this time until it arrives at the second body TRS, located farther along than the first (Marle et al., 1999, Sawicki et al., 2003). RNA polymerase again stops there, jumps to the leader TRS, performs transcription, and produces another mRNA strand that is sent to the template (Komissarova et al., 1996). The process is identical to that at the first body TRS; however, since the second body TRS lies farther along, RNA polymerase transcribes more RNA before reaching it, making the second mRNA strand slightly longer than the first. The same process is repeated for the remaining body TRSs in succession, with each mRNA transcript coming out slightly longer than the last. Discontinuous transcription thus produces a set of nine subgenomic mRNAs that gradually increase in size and exert their functions in the next phase of viral biosynthesis (Sino Biological, 2006).

Virus Assembly:

Assembly of the coronavirus occurs largely in the endoplasmic reticulum-Golgi intermediate compartment (ERGIC) as follows (Fehr et al., 2015, Lodish, 2020).

M protein: The M protein directs virus assembly in the ERGIC (Siu et al., 2008). To begin, the M protein binds the E protein, forming complexes that organize, mature, and produce the viral envelope of the new coronavirus (Siu et al., 2008, Lim et al., 2016). With the viral envelope produced, there is an opening for the extracellular viral proteins to attach and take up their positions on the envelope. Next, the M protein binds the S protein to configure the proper locations and number of S proteins to be placed on the viral membrane of the new coronavirus (de Haan et


al., 2015). If there is an oversupply of S proteins, the M proteins release the extras into the cytosol, where lysosomes digest them.

E protein: The E protein manipulates the plasma membrane to produce membrane curvature and ensure the coronavirus is a stable sphere-shaped virus (Raamsman et al., 2000, Fehr et al., 2015). The membrane curvature is particularly valuable, as it provides the mobility and transportability the virus requires to travel through the body. The E protein also monitors the M protein to ensure it is accurately completing its functions and progressing the assembly process (Boscarino et al., 2008).

S protein: The S protein, while extremely active in fusion, has a very limited role in the assembly of the coronavirus, functioning only as a trafficking and regulating agent in the ERGIC (Fehr et al., 2015).

N protein: The N protein moves virus-like proteins to the ERGIC, promoting a more stable viral envelopment when it binds with the other viral proteins to stabilize the newly constructed virus (Lim et al., 2016). The N protein is responsible for placing the genomic RNA strand in the correct position within the new coronavirus, the final step in viral assembly (Krijnse-Locker et al., 1994).

Altogether, the four viral proteins complete their functions, slowly but accurately finishing the assembly of the coronavirus. However, there is still a series of steps the coronavirus goes through to ensure that it can leave the human cell intact. The Golgi apparatus scans and packages the newly produced coronavirus into Golgi vesicles (Sino Biological, 2006, Antibodies Online, 2020). These vesicles transport the coronavirus from the Golgi apparatus to the plasma membrane of the human cell. When a vesicle arrives at the plasma membrane, the virus is released into the extracellular fluid, a process known as exocytosis (Antibodies Online, 2020).
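The discontinuous transcription scheme described earlier, in which the polymerase copies the template up to a body TRS and then jumps to the leader TRS, can be sketched as a toy model producing the nested set of progressively longer leader-bearing transcripts. The genome string and TRS positions below are invented for illustration:

```python
# Toy model of coronavirus discontinuous transcription, following the
# description in the text: for each body TRS, the polymerase copies the
# template up to that TRS, then jumps to the leader TRS, so every
# subgenomic mRNA carries the same leader and each is slightly longer
# than the last. Genome layout and TRS positions are invented.

LEADER = "leader-"

def subgenomic_mrnas(genome, body_trs_positions):
    """One transcript per body TRS: the leader plus everything up to that TRS."""
    return [LEADER + genome[:pos] for pos in body_trs_positions]

genome = "ORF:S|ORF:E|ORF:M|ORF:N"
for mrna in subgenomic_mrnas(genome, [5, 11, 17, 23]):
    print(mrna)  # each transcript is longer than the last; all share the leader
```

The nested, leader-sharing output mirrors the set of nine subgenomic mRNAs the article describes, each one TRS-to-TRS segment longer than the previous transcript.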
Exocytosis completes the process of coronavirus replication and results in another virulent particle that can now travel to other organ systems and continue hijacking cells to repeat this process. The coronavirus may be novel, but the biological sophistication and resilience it wields mark it as a formidable virus, capable of rapidly taking hold throughout an organism.

References

Anderson, E. L., Turnham, P., Griffin, J., & Clarke, C. (2020, May 1). Consideration of the Aerosol Transmission for COVID-19 and Public Health. PubMed. https://pubmed.ncbi.nlm.nih.gov/32356927/
Antibodies Online. (2020, March 31). SARS-CoV-2 Life Cycle: Stages and Inhibition Targets. https://www.antibodies-online.com/resources/18/5410/sars-cov-2-life-cycle-stages-and-inhibition-targets/
Atkinson, J. (2009). Respiratory droplets. In Natural Ventilation for Infection Control in Health-Care Settings. NCBI Bookshelf. https://www.ncbi.nlm.nih.gov/books/NBK143281/
Boopathi, S., Poma, A., & Kolandaivel, P. (2020, April 30). Novel 2019 coronavirus structure, mechanism of action, antiviral drug promises and rule out against its treatment. Taylor & Francis. https://www.tandfonline.com/doi/full/10.1080/07391102.2020.1758788
Boscarino, J. A., Logan, H. L., Lacny, J. J., & Gallagher, T. M. (2008, January 9). Envelope Protein Palmitoylations Are Crucial for Murine Coronavirus Assembly. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2258982/
Burkard, C. (2014, November 6). Coronavirus Entry Occurs through the Endo-/Lysosomal Pathway in a Proteolysis-Dependent Manner. PLOS Pathogens. https://journals.plos.org/plospathogens/article?id=10.1371/journal.ppat.1004502
Clarke, N., & Turner, A. (2011, November 10). Angiotensin-Converting Enzyme 2: The First Decade. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3216391/
Cyagen Newsletter. (2020, April 1). What Roles Does ACE2 Play? Cyagen. https://www.cyagen.com/us/en/community/technical-bulletin/ace2.html
de Haan, C. A. M., & Rottier, P. J. M. (2005, August 31). Molecular Interactions in the Assembly of Coronaviruses. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7112327/
Di Giorgio, S., Martignano, F., Torcia, M. G., Mattiuz, G., & Conticello, S. G. (2020). Evidence for host-dependent RNA editing in the transcriptome of SARS-CoV-2 in humans. Science Advances. https://advances.sciencemag.org/content/advances/early/2020/05/15/sciadv.abb5813.full.pdf
Fehr, A., & Perlman, S. (2015, February 12). Coronaviruses: An Overview of Their Replication and Pathogenesis. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4369385/
Ghose, T. (2020, April 7). How are people being infected with COVID-19? Live Science. https://www.livescience.com/how-covid-19-spreads-transmission-routes.html
Hammer, S. M. (2020). Viral Replication. Columbia University. http://www.columbia.edu/itc/hs/medical/pathophys/id/2004/lecture/notes/viral_rep_Hammer.pdf
Hamming, I., & Timens, W. (2004, June). Tissue distribution of ACE2 protein, the functional receptor for SARS coronavirus: A first step in understanding SARS pathogenesis. PubMed. https://pubmed.ncbi.nlm.nih.gov/15141377/
Han, D., Penn-Nicholson, A., & Cho, M. (2006, February 28). Identification of critical determinants on ACE2 for SARS-CoV entry and development of a potent entry inhibitor. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7111894/
Haywood, A. M. (2010, November 1). Membrane Uncoating of Intact Enveloped Viruses. Journal of Virology. https://jvi.asm.org/content/84/21/10946
Hiremath, S. (2020, March 14). Should ACE-inhibitors and ARBs be stopped with COVID-19? NephJC. http://www.nephjc.com/news/covidace2
Imai, Y. (2005, July 7). Angiotensin-converting enzyme 2 protects from severe acute lung failure. Nature. https://www.nature.com/articles/nature03712
Jia, H. P., Look, D., & Tan, P. (2009, May 1). Ectodomain shedding of angiotensin converting enzyme 2 in human airway epithelia. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2711803/
Kabbani, N., & Olds, J. (2020, March 27). Does COVID19 Infect the Brain? If So, Smokers Might Be at a Higher Risk. Molecular Pharmacology. http://molpharm.aspetjournals.org/content/molpharm/97/5/351.full.pdf
Komissarova, N., & Kashlev, M. (1996, December 26). RNA Polymerase Switches between Inactivated and Activated States by Translocating Back and Forth along the DNA and the RNA. Journal of Biological Chemistry. https://www.jbc.org/content/272/24/15329.full.pdf
Krijnse-Locker, J., Ericsson, M., Rottier, P., & Griffiths, G. (1994, January 1). Characterization of the budding compartment of mouse hepatitis virus: evidence that transport from the RER to the Golgi complex requires only one vesicular transport step. Journal of Cell Biology. https://rupress.org/jcb/article/124/1/55/56282/Characterization-of-the-budding-compartment-of
Kuba, K. (2005, July 10). A crucial role of angiotensin converting enzyme 2 (ACE2) in SARS coronavirus–induced lung injury. Nature Medicine. https://www.nature.com/articles/nm1267
Kuba, K. (2013, January 18). Multiple functions of angiotensin-converting enzyme 2 and its relevance in cardiovascular diseases. PubMed. https://pubmed.ncbi.nlm.nih.gov/23328447/
Lee, J. (2020, April 1). How the SARS-CoV-2 Coronavirus Enters Host Cells and How To Block It. Promega Connections. https://www.promegaconnections.com/how-the-coronavirus-enters-host-cells-and-how-to-block-it/
Li, F. (2016, September 29). Structure, Function, and Evolution of Coronavirus Spike Proteins. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5457962/
Lim, Y. X., Ng, Y. L., & Tam, J. P. (2016, July 25). Human Coronaviruses: A Review of Virus–Host Interactions. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5456285/
Lodish, H. (2020, August). Overview of the Secretory Pathway. In Molecular Cell Biology. NCBI Bookshelf. https://www.ncbi.nlm.nih.gov/books/NBK21471/
Marle, G., Dobbe, J., Gultyaev, A., Luytjes, W., Spaan, W., & Snijder, E. (1999, October 12). Arterivirus discontinuous mRNA transcription is guided by base pairing between sense and antisense transcription-regulating sequences. PNAS. https://www.pnas.org/content/96/21/12056.figures-only
National Heart, Lung, and Blood Institute. (2020). Respiratory Failure. National Institutes of Health. https://www.nhlbi.nih.gov/health-topics/respiratory-failure
Nature. (2020, August 21). Cell density-dependent proteolysis by HtrA1 induces translocation of zyxin to the nucleus and increased cell survival. https://www.nature.com/subjects/proteolysis
Patel, N. (2020, April 15). How does the coronavirus work? MIT Technology Review. https://www.technologyreview.com/2020/04/15/999476/explainer-how-does-the-coronavirus-work/
Plant, E., & Dinman, J. (2008, May 1). The role of programmed -1 ribosomal frameshifting in coronavirus propagation. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2435135/
Povlsen, A. L., Grimm, D., Wehland, M., Infanger, M., & Kruger, M. (2020, January 18). The Vasoactive Mas Receptor in Essential Hypertension. MDPI. https://www.mdpi.com/2077-0383/9/1/267/htm
R&D Systems. (2015). ACE-2: The Receptor for SARS-CoV-2. https://www.rndsystems.com/resources/articles/ace-2-sars-receptor-identified
Raamsman, M. J. B., Locker, J. K., de Hooge, A., de Vries, A. A. F., Griffiths, G., Vennema, H., & Rottier, P. J. M. (2000, March 1). Characterization of the Coronavirus Mouse Hepatitis Virus Strain A59 Small Membrane Protein E. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC111715/
Sawicki, S., Sawicki, D., & Siddell, S. (2003, August 23). A Contemporary View of Coronavirus Transcription. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1797243/
Schelle, B., Karl, N., Ludewig, B., Siddell, S., & Thiel, V. (2005, June 1). Selective Replication of Coronavirus Genomes That Express Nucleocapsid Protein. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1112145/
Shang, J. (2020, March 30). Structural basis of receptor recognition by SARS-CoV-2. Nature. https://www.nature.com/articles/s41586-020-2179-y
Shereen, M., Khan, S., Kazmi, A., Bashir, N., & Siddique, R. (2020, July 1). COVID-19 infection: Origin, transmission, and characteristics of human coronaviruses. ScienceDirect. https://www.sciencedirect.com/science/article/pii/S2090123220300540
Sigma Aldrich. (2020). Coronavirus (SARS-CoV-2) Viral Proteins. https://www.sigmaaldrich.com/technical-documents/protocols/biology/ncov-coronavirus-proteins.html
Sino Biological. (2006). Coronavirus Replication. https://www.sinobiological.com/research/virus/coronavirus-replication
Siu, Y. L., Teoh, K. T., Lo, J., Chan, C. M., Kien, F., Escriou, N., Tsao, S. W., Nicholls, J. M., Altmeyer, R., Peiris, J. S. M., Bruzzone, R., & Nal, B. (2008, November 15). The M, E, and N Structural Proteins of the Severe Acute Respiratory Syndrome Coronavirus Are Required for Efficient Assembly, Trafficking, and Release of Virus-Like Particles. Journal of Virology. https://jvi.asm.org/content/82/22/11318
Smirnov, E., Hornacek, M., Vacik, T., Cmarko, D., & Raska, I. (2018, January 30). Discontinuous transcription. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5973254/
Sriram, K., Insel, P., & Loomba, R. (2020, May 14). What is the ACE2 receptor, how is it connected to coronavirus and why might it be key to treating COVID-19? The experts explain. The Conversation. https://theconversation.com/what-is-the-ace2-receptor-how-is-it-connected-to-coronavirus-and-why-might-it-be-key-to-treating-covid-19-the-experts-explain-136928
Turner, A. (2015, January 1). ACE2 Cell Biology, Regulation, and Physiological Functions. ScienceDirect. https://www.sciencedirect.com/science/article/pii/B9780128013649000250
Turner, A. J. (2020, April). ACEH/ACE2 is a novel mammalian metallocarboxypeptidase and a homologue of angiotensin-converting enzyme insensitive to ACE inhibitors. PubMed. https://pubmed.ncbi.nlm.nih.gov/12025971/
UniProt. (2020). ACE2 - Angiotensin-converting enzyme 2 precursor - Homo sapiens (Human). https://www.uniprot.org/uniprot/Q9BYF1
Verdecchia, P., Cavallini, C., Spanevello, A., & Angeli, F. (2020). The pivotal link between ACE2 deficiency and SARS-CoV-2 infection. European Journal of Internal Medicine. https://www.ejinme.com/action/showPdf?pii=S0953-6205%2820%2930151-5
Viral Replication | Boundless Microbiology. (2020). Lumen Learning. https://courses.lumenlearning.com/boundless-microbiology/chapter/viral-replication/
Warner, F. J. (2004, November). Angiotensin-converting enzyme-2: a molecular and cellular perspective. PubMed. https://pubmed.ncbi.nlm.nih.gov/15549171/
Wessner, D. (2010). Origin of Viruses. Nature Scitable. https://www.nature.com/scitable/topicpage/the-origins-of-viruses-14398218/
Wit, E., van Doremalen, N., Falzarano, D., & Munster, V. (2016, June 27). SARS and MERS: recent insights into emerging coronaviruses. Nature Reviews Microbiology. https://www.nature.com/articles/nrmicro.2016.81
Xu, J., Fan, J., & Wu, F. (2017, May 8). The ACE2/Angiotensin-(1–7)/Mas Receptor Axis: Pleiotropic Roles in Cancer. Frontiers. https://www.frontiersin.org/articles/10.3389/fphys.2017.00276/full
Zeng, Q., Langereis, M., van Vliet, A. L. W., Huizinga, E., & de Groot, R. J. (2008, July 1). Structure of coronavirus hemagglutinin-esterase offers insight into corona and influenza virus evolution. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2449365/
Zhang, H. (2020, March 3). Angiotensin-converting enzyme 2 (ACE2) as a SARS-CoV-2 receptor: molecular mechanisms and potential therapeutic target. Intensive Care Medicine. https://link.springer.com/article/10.1007/s00134-020-05985-9
Ziebuhr, J., & Siddell, S. (1999, January). Processing of the Human Coronavirus 229E Replicase Polyproteins by the Virus-Encoded 3C-Like Proteinase: Identification of Proteolytic Products and Cleavage Sites Common to pp1a and pp1ab. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC103821/



Gasotransmitters: New Frontiers in Neuroscience BY AUDREY HERRALD '23 Cover Image: A three-dimensional representation of the nitric oxide synthase (NOS) protein, a critical player in the synthesis of nitric oxide (NO), the first identified gasotransmitter. Created by Jawahar Swaminathan and MSD staff at the European Bioinformatics Institute, from Wikimedia Commons


Introduction The average adult brain comprises about 86 billion neurons (Herculano-Houzel, 2009). Communication between the neurons in this vast network is important for cognition, behavior, and even physiological function; in short, neural signaling is a critical element of life (Neural Signaling, 2001). This signaling generally occurs through one or a combination of two mechanisms: electrical transmission and chemical transmission. The "all-or-nothing" action potential is a relatively well-known example of electrical transmission, and endogenous signaling molecules called neurotransmitters are an equally familiar example of chemical transmission. However, a lesser-known but profoundly important class of molecules regulates our cardiovascular, nervous, gastrointestinal, excretory, and immune systems, in addition to many cellular functions, including apoptosis, proliferation, inflammation, cellular metabolism, oxygen sensing, and gene

transcription (Handbook of Hormones, 2016). Their name? Gasotransmitters. The phrase "gasotransmitter" was first coined in 2002, when a team of researchers identified hydrogen sulfide (H2S) as the third gaseous signaling molecule of its kind (Wang et al., 2002). The rotten-egg-smelling molecule joined nitric oxide (NO) and carbon monoxide (CO) in the group of molecules referred to as gasotransmitters. Since then, advances in the understanding of cellular signaling have led to the proposed identification of other gasotransmitters, such as ammonia (NH3). These molecules dictate a wide variety of physiological processes, and their mechanisms of effect are accordingly varied. Section 1 of this paper addresses the various functional mechanisms of the three broadly accepted gasotransmitters, in addition to providing a clearer profile of what, exactly, these gasotransmitters look like and how they mediate neural connectivity. Section 2 addresses the role of gasotransmitters in a number of neurological diseases and psychiatric conditions, along with potential avenues for gasotransmitter-related treatment.

Form and Function

What are gasotransmitters? Gasotransmitters are gaseous signaling molecules produced within the body. Their characterization as such is recent, and discussions regarding the molecules' official designation are still ongoing; some suggest the term "gaseous messengers," while others advocate for "small-molecule signaling species" (Wang, 2018). The term "gasotransmitter," however, refers to a specific set of criteria that do not necessarily apply to the alternate namings, hence the term's selection for this article. Gasotransmitters, for example, are endogenous. This means that oxygen (O2), which can be classified as both a "gaseous messenger" and a "small-molecule signaling species," cannot be a gasotransmitter (Wang, 2018). The proposal of alternate terms invites ambiguity. However, these proposals also point to an important truth: interest in gasotransmitters is quickly growing within the scientific community. Accordingly, the synthesis, form, and function of gasotransmitters are emerging with increasing clarity. Currently, the three molecules widely recognized as gasotransmitters are nitric oxide (NO), carbon monoxide (CO), and hydrogen sulfide (H2S). Each of these primary gasotransmitters has a series of chemical derivatives (such as nitrite (NO2−), nitrate (NO3−), nitrous oxide (N2O), and nitroxyl (HNO) for NO). These derivatives are classified within the gasotransmitter family, even though some exist in non-gaseous forms, because they often perform signaling functions in place of their primary molecule or help to buffer fluctuations in gasotransmitter levels (Ida et al., 2014). The derivatives play important physiological roles, which are addressed in subsequent sections. First, though, it is important to understand the general functional mechanisms of the primary gasotransmitters. All molecules classified as gasotransmitters must fulfill six defining criteria (Wang, 2002):

Figure 1: Representation of the enzyme soluble guanylate cyclase (sGC), the only known target of the NO gasotransmitter. Created by Audrey Herrald, after Elango, 2017.

(i) They are small molecules of gas.

(ii) They are freely permeable through membranes and therefore act without specified membrane receptors.

(iii) They are generated endogenously.

(iv) They have well-defined, specific functions at physiologically relevant, generally exceptionally low, concentrations.

(v) Their functions can be mimicked by exogenous counterparts.

(vi) Their cellular effects may or may not be mediated by secondary messengers, but specific final targets are implicated.

The first three criteria refer to the molecules themselves, while the last three refer to their physiological effects. First, there is the size and state requirement. "Small" here is defined as having a molecular mass between 28 and 34 amu, which excludes a vast array of endogenous species (Wang, 2002). The requirement of a gaseous state is also important; gasotransmitters exist either in gaseous form or are dissolved in circulation, intracellular fluid, and/or interstitial (between cells) fluid (Wang, 2018). Next comes membrane permeability. Gasotransmitters do not require cognate membrane receptors to interact with cells; their gaseous state enables them to 'slip through the gate' without the need for a gatekeeper. This is where gasotransmitters differ from well-known transmitters like hormones, neurotransmitters, and drugs, all of which require and interact extensively with cellular receptors. Third, gasotransmitters must be produced endogenously. Thus far, all identified gasotransmitters are not only produced endogenously, but produced specifically through highly regulated enzymatic processes. The careful bioregulation of these molecules



Figure 2: Molecular orbital diagram for NO. Source: Wikimedia Commons

makes sense, as CO, NO, and H2S can all be highly toxic in unregulated conditions (CO and NO interfere with oxygen exchange and H2S is a respiratory tract irritant) (Wareham et al., 2018).
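Criterion (i) is concrete enough to sketch in a few lines of code. The helper name, the candidate list, and the rounded molecular masses below are all invented for illustration; only the 28-34 amu window itself comes from Wang (2002):

```python
# Illustrative sketch: screening candidate molecules against the
# gasotransmitter size criterion ("small" = 28-34 amu; Wang, 2002).
# Molecule masses are rounded approximations for demonstration only.

def passes_size_criterion(molar_mass_amu: float) -> bool:
    """Criterion (i): a gasotransmitter's molecular mass falls in 28-34 amu."""
    return 28.0 <= molar_mass_amu <= 34.0

# Approximate molecular masses (amu) of a few endogenous molecules.
candidates = {"NO": 30.0, "CO": 28.0, "H2S": 34.0, "glucose": 180.2}

small_enough = sorted(name for name, mass in candidates.items()
                      if passes_size_criterion(mass))
print(small_enough)  # ['CO', 'H2S', 'NO'] -- glucose is far too large
```

Of course, passing the size test is only one of six hurdles: oxygen, at roughly 32 amu, would clear this check yet fails the endogeneity criterion, as noted above.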



Now come the function-related stipulations for molecules in the gasotransmitter family. The fourth criterion has to do with both concentration and physiological effect: gasotransmitters must play a role that is both well-defined and specific to that molecule, and this specific molecular function must take place at a known (and generally low) endogenous concentration. This means that manipulating the concentration of gasotransmitters within the body should evoke specific physiological changes, in line with the identified role of the molecule. The fifth criterion has particularly noteworthy implications for the study of gasotransmitters. It states that exogenous (created outside of the body) substances can effectively mimic the activity of endogenous gasotransmitters. This possibility for mimicry allows scientists to selectively manipulate certain qualities (like concentration, for example) of the gasotransmitters, and then observe their physiological effects. Studies like these help researchers understand more about which characteristics of gasotransmitters hold particular significance. An often-utilized tool for these types of studies is a class of substances called NO-releasing compounds, which solve the issue of NO administration (high reactivity in air and concentration-dependent function) by reversibly storing and releasing NO under specific conditions (Cheng, 2019). The recent engineering of these “NO donors” is just one example of the many scientific developments

elicited by our emerging understanding of gasotransmitters. Finally, the sixth criterion refers to both signaling mechanism and signaling result. All gasotransmitters have certain physiological effects, from regulation of the immune, cardiovascular, and nervous systems to initiation of cellular events like apoptosis, inflammation, and proliferation (Handbook of Hormones, 2016). These physiological effects depend on a series of interactions between gasotransmitters and proteins. The sixth gasotransmitter qualification criterion clarifies that even though gasotransmitters must have specific physiological effects, these effects need not result from direct interaction with a gasotransmitter. In other words, the series of events from gasotransmitter to end-result can be long, provided that the path between the pair can be clearly traced. One simple example of this principle involves the dilation of blood vessels via NO signaling: while NO can induce vasodilation directly by binding to oxygen-carrying myoglobin (thus enabling oxygen-dependent muscle cells in vasculature walls to dilate), NO might also interact with the enzyme guanylyl cyclase (GC) to initiate a chain reaction with a series of intermediaries prior to inducing blood vessel dilation (Ormerod et al., 2011). Either way, according to the six generally accepted criteria listed above, NO passes the test and is classified as a gasotransmitter. Meet the threesome Though technological advancements and new research keep leading to the proposal


Figure 3: Electron configuration for Fe2+ Created with MEL VR Virtual Chemistry by Audrey Herrald.

of additional gasotransmitters, the family of signaling molecules currently consists of three definite members: nitric oxide (NO), carbon monoxide (CO), and hydrogen sulfide (H2S) (Shefa et al., 2017; Wang et al., 2020). Each of these molecules interacts differently with various physiological targets, and their mechanisms of synthesis are unique. First, the knowns: scientists believe that they have successfully identified many of the biological targets for NO, CO, and H2S. The mechanisms of interaction with these targets, however, remain somewhat ambiguous, and researchers still hope to uncover the identities of the proteins that modulate gasotransmitter function, the effect of gasotransmitter production on other gasotransmitters, and the characteristics of gasotransmitter sensor proteins, among other questions. Future research will build in part upon two key areas of gasotransmitter understanding, as outlined by field pioneer Rui Wang: interactions with producers, and interactions with targets (Wang, 2018). First, for producer interactions: the identification criteria for gasotransmitters stipulate that gasotransmitters must be created inside the body. This makes synthesis an important element of gasotransmitter function. Each of the three main gasotransmitters, H2S, NO, and CO, is produced through enzymatic processes. Hydrogen sulfide production depends primarily upon three enzymes: cystathionine γ-lyase (CSE), cystathionine β-synthase (CBS), and 3-mercaptopyruvate sulfurtransferase (MST) (Zhu, 2017). Enzymatic production of NO is catalyzed by three subtypes of nitric oxide synthase (NOS) enzymes, known as endothelial NOS (eNOS), inducible NOS (iNOS), and neuronal NOS (nNOS) (Wang, 2018). Lastly, endogenous CO is produced following a catalysis process that involves heme oxygenase (HO).


In these enzymatic processes, different amino acids (either obtained through the consumption of protein-rich foods or produced through the endogenous breakdown of other consumed proteins) bind to one of the enzymes mentioned above. When this binding occurs, slow-moving chemical reactions accelerate greatly (more than a millionfold), resulting in the production of new substances (Castro, 2014). Variations of this enzymatic synthesis generate H2S, NO, and CO. Upon production, gasotransmitters begin interacting with their physiological targets. Importantly, though one might read that gasotransmitters regulate the immune system or increase breathing rate (Handbook of Hormones, 2016), gasotransmitters (like other bodily signaling molecules) seldom act directly on these systems. In reality, newly synthesized molecules of H2S, NO, and CO tend to exert small effects on specific targets, the results of which trigger physiological processes that might indeed lead to improvements in immune function or an uptick in respiration. Each gasotransmitter has a wide range of initial targets and a wider range of physiological functions, many of which likely remain to be discovered. However, a few key mechanisms for physiological function have been thoroughly outlined. The next section provides a demonstrative example of one such pathway: the reduction of tension in blood vessel walls, also known as "vasorelaxation," initiated by the gasotransmitter NO (Wang, 2018).
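To keep the enzyme names straight, the gasotransmitter-enzyme pairings just described can be collected into a small lookup table. The structure below simply restates the text (Zhu, 2017; Wang, 2018); the `enzymes_for` helper is invented for illustration, not an established tool:

```python
# Enzymes catalyzing endogenous synthesis of each primary gasotransmitter,
# as described in the text. This table is a restatement, not a standard dataset.
SYNTHESIS_ENZYMES = {
    # CSE = cystathionine gamma-lyase, CBS = cystathionine beta-synthase,
    # MST = 3-mercaptopyruvate sulfurtransferase
    "H2S": ["CSE", "CBS", "MST"],
    # Endothelial, inducible, and neuronal nitric oxide synthase subtypes
    "NO": ["eNOS", "iNOS", "nNOS"],
    # HO = heme oxygenase
    "CO": ["HO"],
}

def enzymes_for(gas: str) -> list:
    """Return the known synthesizing enzymes for a primary gasotransmitter."""
    return SYNTHESIS_ENZYMES.get(gas, [])

print(enzymes_for("H2S"))  # ['CSE', 'CBS', 'MST']
```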


Nitric Oxide and Vasorelaxation: From synthesis to final effect Nitric oxide (NO) has only one known target, called soluble guanylate cyclase (sGC). sGC


is an enzyme (encoded by three different genes) that is composed of two subunits. Both subunits of the sGC enzyme have four domains, each involved in different elements of enzyme function. Figure 1 provides a visual representation of the two subunits and eight domains. One of the sGC subunits, the "beta" subunit, includes a small, adjunct molecule called a ligand. This specific ligand is histidine, a common amino acid, coordinated to an iron-containing molecule known as a "heme." The heme iron, Fe2+, is the specific target of NO.


Nitric oxide binds to the Fe2+ heme with particularly high affinity. Why? The answer lies in the arrangement of electrons associated with each substance. In NO, electrons from the nitrogen and oxygen atoms come together in such a way that one "molecular orbital," called the pi* orbital, remains only partially filled. (Molecular orbitals are mathematical functions that help describe the probability of finding electrons in certain regions around the molecule.) Figure 2, in which the partially filled pi* orbital appears as lines without arrows (that is, without electrons), helps visualize the outermost electrons associated with NO. The heme on the beta subunit of sGC contains Fe2+, which can be visualized with a representation like the one in Figure 3. As in Figure 2, the arrows represent electrons. In general, for two species to bond by sharing electrons, the two species must possess valence electrons in comparable energy levels to one another. Further, the orbitals (the regions where these electrons are most likely to be found) must match up in a particular way. In the case of NO and Fe2+, if both species were to retain the electron organizations depicted in Figures 2 and 3, the electrons wouldn't quite match up. However, a phenomenon known as "back bonding" enables electrons from Fe2+ to move from their highest-energy atomic orbitals (see the "3d" label in Figure 3) to the pi* molecular orbital (see the empty lines in the center of Figure 2) of NO (Cotton et al., 1999). Nitric oxide, with one unpaired electron, has a particular affinity for back bonding in appropriate situations (Cotton et al., 1999). A combination of back bonding, electrostatic attractions, and a number of related forces (all driven toward reducing the system's free energy) allows the gasotransmitter to bind tightly into a molecular complex with the Fe2+ heme and the amino acid ligand, all just one element of the much larger sGC enzyme.
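The electron bookkeeping behind that affinity can be made concrete. Filling NO's eleven valence electrons into its molecular orbitals in energy order leaves a single electron in the pi* orbital and gives the textbook bond order of 2.5. The sketch below is a generic counting exercise whose orbital list mirrors a standard diagram like Figure 2; it is not tied to any chemistry library:

```python
# Sketch: fill NO's valence molecular orbitals in energy order, then compute
# bond order = (bonding electrons - antibonding electrons) / 2.
# Entries are (name, capacity, is_bonding), lowest energy first, per a
# standard NO molecular orbital diagram.
ORBITALS = [
    ("sigma_2s", 2, True),
    ("sigma*_2s", 2, False),
    ("pi_2p", 4, True),
    ("sigma_2p", 2, True),
    ("pi*_2p", 4, False),
]

def fill(valence_electrons: int) -> dict:
    """Place electrons into orbitals from lowest energy up (Aufbau filling)."""
    occupancy = {}
    for name, capacity, _ in ORBITALS:
        placed = min(capacity, valence_electrons)
        occupancy[name] = placed
        valence_electrons -= placed
    return occupancy

def bond_order(occupancy: dict) -> float:
    bonding = sum(occupancy[n] for n, _, is_bonding in ORBITALS if is_bonding)
    antibonding = sum(occupancy[n] for n, _, is_bonding in ORBITALS if not is_bonding)
    return (bonding - antibonding) / 2

no = fill(11)  # N contributes 5 valence electrons, O contributes 6
print(no["pi*_2p"], bond_order(no))  # 1 2.5 -- one unpaired pi* electron
```

That lone, partially vacant pi* orbital is exactly the opening that makes back bonding from the Fe2+ 3d orbitals favorable.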


Upon bonding, the presence of NO leads to more than a 200-fold increase in sGC activity (Offermanns et al., 2008). Activation of the heme on the beta subunit of sGC (see Figure 1) initiates activity in the catalytic region of the enzyme, where sGC catalyzes the transition of a common nucleotide called guanosine 5′-triphosphate (GTP) to an important messenger compound known as cyclic guanosine monophosphate (cGMP). Remarkably, sGC is one of only two enzymes that produce cGMP. This means that sGC (and thereby NO, as a key sGC activator) is intimately involved in many signal transduction pathways. One of these pathways capitalizes on the rise in cGMP concentration (a result of NO-initiated sGC activation) to set off a signaling cascade. cGMP-dependent protein kinases, as well as cation channels that open and close in accordance with cGMP concentration, contribute to the activation of a downstream enzyme, myosin phosphatase. Finally, myosin phosphatase drives smooth muscle cells to release their stores of excess calcium, thereby relaxing the muscles and leading to NO-initiated vasodilation (Denninger et al., 1999). The dilation of blood vessels is one physiological result of NO signaling, but the molecule has a wide variety of additional functions. Due to its many roles, loss of NO bioactivity contributes to disease in many conditions, making restoration of NO bioavailability an attractive therapeutic avenue for many diseases and disorders (Helms & Kim-Shapiro, 2013). And even though NO signaling affects regions throughout the body, one of its most prominent areas of impact, together with CO and H2S, is the nervous system. In the next section, several relations between gasotransmitters and neurological conditions are outlined.

Gasotransmitters and Neurological Conditions Gasotransmitters have been implicated in a number of neurological conditions, including Alzheimer's disease, autism spectrum disorder, Parkinson's disease, and multiple sclerosis (MS) (Steinert et al., 2010). These disorders all have in common the degeneration or disruption of neural plasticity. Neural plasticity, or synaptic plasticity, is the ability of neurons in the brain and spinal cord to adapt in response to changes in the environment or damage to neural tissue (Sharma et al., 2016). This plasticity is critical for proper communication among neurons, for the maintenance of bodily homeostasis, and for the regulation of neural transmission (Shefa et al., 2018). It makes sense,


then, that disruptions to neural plasticity can have devastating effects; from psychiatric disorders like schizophrenia and bipolar disorder to neurodegenerative disorders like Alzheimer's disease, the link between synaptic plasticity and many common neurological conditions is becoming increasingly clear (Shefa et al., 2018; Kumar et al., 2017). Now, as gasotransmitters, NO in particular, have emerged as important regulators of synaptic plasticity, the transmitters' roles in neurological conditions are an increasingly common subject of investigation (Shefa et al., 2018). Among common neurodegenerative diseases, Alzheimer's disease is one of the most closely associated with synaptic plasticity (Kumar et al., 2017). While the loss of synaptic plasticity (closely related to working memory) is not the cause of Alzheimer's disease, it may be both a symptom and a pre-diagnosis warning sign; recent research provides in vivo evidence for reduced synaptic plasticity in the brain's frontal lobe among early- and mid-stage Alzheimer's disease patients (Kumar et al., 2017). Why might reduced plasticity be a problem? Dr. Sanjeev Kumar, lead author of a recent study on Alzheimer's disease and synaptic plasticity, explains that healthy neuronal plasticity supports the brain's "cognitive reserve," or the protection that offsets poorer functioning in other brain areas and shields against the development of neurodegenerative disease (Kassam, 2017). Thus, any physiological process with a significant effect on synaptic plasticity would likely have an effect on neurodegenerative diseases like Alzheimer's as well, and gasotransmitters fit precisely this description. The upregulation or downregulation of gasotransmitters may be a cause of neurodegenerative disorders, and targeted enhancement of gasotransmitter function may prove therapeutic by restoring synaptic plasticity (Shefa et al., 2018).
Another facet of the relationship between gasotransmitters and neurological disorders is the effect of gasotransmitters on oxidative stressors. The term "oxidative stress" describes an imbalance between harmful free radicals and their ameliorating counterparts, antioxidants, within the brain (Pizzino et al., 2017). This imbalance, when present, leaves neural cells vulnerable to attack from unchecked free radicals, the results of which include protein misfolding, glial cell over-activation, mitochondrial dysfunction, and (in the worst cases) subsequent cellular destruction (Kim et al., 2015). Enter


gasotransmitters. The molecules boast a variety of defenses against neurodegeneration-inducing oxidative stress. NO, perhaps the most ubiquitous of the gasotransmitters, has the ability to activate a subset of brain receptors known as NMDA receptors, or NMDARs, which help defend against the harmful effects of free radicals (Hoque et al., 2010). NMDARs also initiate a signaling pattern that leads to the generation of (more) endogenous NO. The increased levels of NO drive NMDAR activation, and this positive feedback loop can serve as a critical natural defender against oxidative stress (Shefa et al., 2018). The other gasotransmitters, too, can help protect the brain from neurological disorders. H2S appears to have a role in defending against major depressive disorder; studies have shown robust anti-depressive effects via a signaling pathway known as the tropomyosin receptor kinase B-mammalian target of rapamycin (TrkB-mTOR) pathway (Hou et al., 2017). CO has been shown to restore synaptic function in both Alzheimer's disease and schizophrenia, two vastly different disorders that nonetheless share the common characteristic of neural deterioration (McGlashan, 1998; Magalingam, 2018). Lastly, NO emerges again, this time as an upregulator of a messenger molecule called cyclic AMP (cAMP), which binds a restorative protein called CREB and subsequently reduces the influx of Ca2+ during neural signaling; this action has been linked to reduced symptomology in both schizophrenia and major depressive disorder.
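The NO-NMDAR positive feedback loop lends itself to a toy numerical illustration. The model below is entirely made up (the rate constants and the saturating activation term are arbitrary choices, not drawn from the cited studies); it is meant only to show how a production loop with saturation and clearance settles at a finite steady state rather than running away:

```python
# Toy model (invented for illustration): NO activates NMDARs with a
# saturating response, active NMDARs generate more NO, and first-order
# clearance removes NO at each step.

def simulate(steps: int = 200, no_level: float = 0.1) -> float:
    production_rate = 1.0  # hypothetical NO produced per unit NMDAR activation
    clearance_rate = 0.5   # hypothetical first-order NO clearance
    for _ in range(steps):
        activation = no_level / (1.0 + no_level)  # saturating NMDAR response
        no_level += production_rate * activation - clearance_rate * no_level
    return no_level

# From different starting levels, the loop converges to the same steady
# state, where production exactly balances clearance.
print(round(simulate(no_level=0.1), 3), round(simulate(no_level=2.0), 3))  # 1.0 1.0
```

The saturation term is what keeps this made-up loop self-limiting: without it, positive feedback alone would grow without bound.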


The possibilities for gasotransmitter-related treatment of neurodegenerative and neuropsychiatric disorders are undoubtedly numerous. The next hurdle to cross here will be the development of a more thorough understanding of the signaling mechanisms associated with these gaseous transmitters (Wang, 2018). The identified links between gasotransmitters and synaptic plasticity, as well as between gasotransmitters and oxidative stress, are both promising signs that further work on gasotransmitters may improve treatments for currently incurable conditions.

Conclusion The term "gasotransmitter" is truly only as old as a typical first-year college student. A keyword search reveals just four published books on the subject, though a host of additional works will undoubtedly follow. Since one of the first defining papers on gasotransmitters was


published 18 years ago, the molecular family has garnered significant scientific attention, and knowledge on the subject will only continue to grow. As it does, among the most exciting prospects is the development of new treatments for neurodegenerative diseases and other neurological disorders. Two of the most recent insights, that gasotransmitters seem to mediate neural repair and harbor protective effects against oxidative stress, are certain to generate subsequent investigation, and they might just offer insight into the most effective mechanisms for management of neurodegenerative conditions (Shefa et al., 2017). In the meantime, researchers will likely continue to investigate the signaling mechanisms of these transmitters, particularly the identity of sensor proteins and the interactions between gases. New molecules are bound to join ammonia in working their way toward gasotransmitter classification. The field is new and growing, with the potential for lifesaving clinical applications inching ever closer.

Kim, G. H., Kim, J. E., Rhie, S. J., & Yoon, S. (2015). The Role of Oxidative Stress in Neurodegenerative Diseases. Experimental Neurobiology, 24(4), 325–340. https://doi.org/10.5607/en.2015.24.4.325


Pizzino, G., Irrera, N., Cucinotta, M., Pallio, G., Mannino, F., Arcoraci, V., Squadrito, F., Altavilla, D., & Bitto, A. (2017). Oxidative Stress: Harms and Benefits for Human Health. Oxidative Medicine and Cellular Longevity, 2017. https://doi.org/10.1155/2017/8416763

Cheng, J., He, K., Shen, Z., Zhang, G., Yu, Y., & Hu, J. (2019). Nitric Oxide (NO)-Releasing Macromolecules: Rational Design and Biomedical Applications. Frontiers in Chemistry, 7. https://doi.org/10.3389/fchem.2019.00530

Declines in plasticity reveals promising treatments for Alzheimer's disease. (n.d.). Drug Target Review. Retrieved July 24, 2020, from https://www.drugtargetreview.com/news/26979/plasticity-alzheimers/

Donald, J. A. (2016). Chapter 103—Gasotransmitter Family. In Y. Takei, H. Ando, & K. Tsutsui (Eds.), Handbook of Hormones (pp. 601–602). Academic Press. https://doi.org/10.1016/B978-0-12-801028-0.00103-3

Herculano-Houzel, S. (2009). The Human Brain in Numbers: A Linearly Scaled-up Primate Brain. Frontiers in Human Neuroscience, 3. https://doi.org/10.3389/neuro.09.031.2009

Hoque, K. E., Indorkar, R. P., Sammut, S., & West, A. R. (2010). Impact of dopamine-glutamate interactions on striatal neuronal nitric oxide synthase activity. Psychopharmacology, 207(4), 571–581. https://doi.org/10.1007/s00213-009-1687-0

Hou, X.-Y., Hu, Z.-L., Zhang, D.-Z., Lu, W., Zhou, J., Wu, P.-F., Guan, X.-L., Han, Q.-Q., Deng, S.-L., Zhang, H., Chen, J.-G., & Wang, F. (2017). Rapid Antidepressant Effect of Hydrogen Sulfide: Evidence for Activation of mTORC1-TrkB-AMPA Receptor Pathways. Antioxidants & Redox Signaling, 27(8), 472–488. https://doi.org/10.1089/ars.2016.6737

Ida, T., Sawa, T., Ihara, H., Tsuchiya, Y., Watanabe, Y., Kumagai, Y., Suematsu, M., Motohashi, H., Fujii, S., Matsunaga, T., Yamamoto, M., Ono, K., Devarie-Baez, N. O., Xian, M., Fukuto, J. M., & Akaike, T. (2014). Reactive cysteine persulfides and S-polythiolation regulate oxidative stress and redox signaling. Proceedings of the National Academy of Sciences of the United States of America, 111(21), 7606–7611. https://doi.org/10.1073/pnas.1321232111


Kumar, S., Zomorrodi, R., Ghazala, Z., Goodman, M. S., Blumberger, D. M., Cheam, A., Fischer, C., Daskalakis, Z. J., Mulsant, B. H., Pollock, B. G., & Rajji, T. K. (2017). Extent of Dorsolateral Prefrontal Cortex Plasticity and Its Association With Working Memory in Patients With Alzheimer Disease. JAMA Psychiatry, 74(12), 1266–1274. https://doi.org/10.1001/jamapsychiatry.2017.3292

Magalingam, K. B., Radhakrishnan, A., Ping, N. S., & Haleagrahara, N. (2018, March 8). Current Concepts of Neurodegenerative Mechanisms in Alzheimer's Disease [Review Article]. BioMed Research International; Hindawi. https://doi.org/10.1155/2018/3740461

McGlashan, T. H. (1998). The profiles of clinical deterioration in schizophrenia. Journal of Psychiatric Research, 32(3–4), 133–141. https://doi.org/10.1016/s0022-3956(97)00015-0

Ormerod, J. O. M., Ashrafian, H., Maher, A. R., Arif, S., Steeples, V., Born, G. V. R., Egginton, S., Feelisch, M., Watkins, H., & Frenneaux, M. P. (2011). The role of vascular myoglobin in nitrite-mediated blood vessel relaxation. Cardiovascular Research, 89(3), 560–565. https://doi.org/10.1093/cvr/cvq299

Purves, D., Augustine, G. J., Fitzpatrick, D., Katz, L. C., LaMantia, A.-S., McNamara, J. O., & Williams, S. M. (2001). Neural Signaling. Neuroscience. 2nd Edition. https://www.ncbi.nlm.nih.gov/books/NBK10882/

Sharma, N., Classen, J., & Cohen, L. G. (2013). Neural plasticity and its contribution to functional recovery. Handbook of Clinical Neurology, 110, 3–12. https://doi.org/10.1016/B978-0-444-52901-5.00001-0

Shefa, U., Yeo, S. G., Kim, M.-S., Song, I. O., Jung, J., Jeong, N. Y., & Huh, Y. (2017, March 12). Role of Gasotransmitters in Oxidative Stresses, Neuroinflammation, and Neuronal Repair [Review Article]. BioMed Research International; Hindawi. https://doi.org/10.1155/2017/1689341

Steinert, J. R., Chernova, T., & Forsythe, I. D. (2010). Nitric oxide signaling in brain function, dysfunction, and dementia. The Neuroscientist: A Review Journal Bringing Neurobiology, Neurology and Psychiatry, 16(4), 435–452. https://doi.org/10.1177/1073858410366481

Wang, R. (2018). Chapter 1 Overview of Gasotransmitters and the Related Signaling Network. 1–28. https://doi.org/10.1039/9781788013000-00001

Wareham, L. K., Southam, H. M., & Poole, R. K. (2018). Do nitric oxide, carbon monoxide and hydrogen sulfide really qualify as 'gasotransmitters' in bacteria? Biochemical Society Transactions, 46(5), 1107–1118. https://doi.org/10.1042/BST20170311

Yakovlev, A. V., Kurmasheva, E. D., Giniatullin, R., Khalilov, I., & Sitdikova, G. F. (2017). Hydrogen sulfide inhibits giant depolarizing potentials and abolishes epileptiform activity of neonatal rat hippocampal slices. Neuroscience, 340, 153–165.


https://doi.org/10.1016/j.neuroscience.2016.10.051 Zabłocka, A. (2006). [Alzheimer’s disease as neurodegenerative disorder]. Postepy Higieny I Medycyny Doswiadczalnej (Online), 60, 209–216.



The Science of Anti-Aging

BY AVISHI AGASTWAR, MONTA VISTA HIGH SCHOOL SENIOR

Cover Image: Old man sitting at banks of river
Source: Image by Isakarakus from Pixabay


Introduction to Aging

Anti-aging medicine has long been a topic of discussion among biogerontologists – scientists who study the process of aging. They apply their understanding to discover methodologies and medicinal processes to delay the aging process. Anti-aging is the practice of delaying old age, or the onset of the ailments and medical conditions that accelerate aging and eventually lead to death (Juengst, 2013). Research in this domain has been highly sought after by individuals looking for ways to live longer and healthier lives. Scientific research and development-based companies like Calico, AgeX, BioAge, BioViva, and the Longevity Fund are working to address the large gaps in our understanding of the science of aging. This research involves interventions like DNA repair, enhancement of the immune system, synthesis and degradation of proteins, cell replacement, and cell regeneration.

From a scientific perspective, aging is defined as the gradual decline in the body's capacity for cellular repair and an increased risk of age-related pathology. At the cellular level, aging manifests in the form of nuclear DNA mutations, changes in the rates of synthesis of cells, and heightened levels of protein degradation (Hayflick, 2004). While bodily functions decline in everyone, the speed at which cellular aging happens varies between individuals as well as between different cells within a particular individual (Rattan, 2014). From an evolutionary perspective, this process of gradual degradation is explained by the notion that rapid and efficient repair of body functions is essential only until a species procreates. After procreation, there is decreased need for efficient functioning to continue indefinitely. In other words, evolution selects for a species' capacity to reproduce, not its capacity to live a very long life.


Figure 1. Life expectancy at birth by world region from 1950 to 2050 Source: Wikimedia Commons

Cellular repair mechanisms are monitored by genes called Longevity Assurance Genes. These genes oversee the maintenance and repair functions of our cells. DNA repairing ability, counteracting stress, and prevention of unregulated proliferation of cells are positively correlated with an enhanced lifespan of an individual. These abilities are deeply encoded in the genetics of the individual. This is supported by the fact that until very recently in the evolutionary timeline of our species – which spans millions of years – the life expectancy of Homo sapiens was close to 30 years (Rattan, 2004). In 1900, the average life expectancy was 49 years; today, as a result of antibiotics and vaccination, it has jumped to 73 years, surpassing the evolutionarily essential lifespan (Hayflick, 2004). However, biotechnology and medical science are yet to find a solution to the problem of chronic diseases.

Approaches to Counter-Aging

Anti-aging research is multifaceted and must be approached from multiple angles. Consequently, scientists have three general paradigms for framing their research. The first of these is to look at the eventual onset of chronic ailments brought about as a result of old age (Laberge et al., 2019). Scientists operating under this paradigm do not intend to increase the lifespan of the human body, but instead work to improve the quality of life of people as they age. As a result, people not only live longer but live healthier and remain a contributing part of society. This is valuable to society in that it has the potential to lower healthcare and


social security expenditures dramatically. The second anti-aging approach is to work towards a longer lifespan by delaying the processes that result in aging. Scientists working under this paradigm claim that an average life expectancy of 112 years is a reasonable prediction (Juengst, 2013). Finally, the third and most ambitious paradigm is to attempt to reconfigure and revitalize the basic cellular processes that take place in the human body, slowing down the process of aging and potentially extending the human lifespan. The goal is to reverse the damage already done by aging in adults and preemptively prevent such damage in young people. This would be accomplished by reconfiguring the DNA that encodes specific amino acids, making targeted changes to the operation of a person's body (Juengst, 2013).

“DNA repairing ability, counteracting stress, and prevention of unregulated proliferation of cells are positively correlated with an enhanced lifespan of an individual.”

Figure 2. The skin is part of the integumentary system and consists of multiple layers that have a role in anti-aging Source: Open Learning Initiative


Figure 3. Cryonics chambers at the Cryonics Institute. Source: Wikimedia Commons

Current Scientific and Commercial Research in Anti-Aging

“Another research firm named AgeX is working on induced tissue regeneration that can harness the regenerative capability of human tissue and repair it.”

As of right now, there are no robust anti-aging medicines on the market that have yielded substantial benefits for the human body. With that said, there is a lot of ongoing research in the field, as demonstrated by the examples that follow. In Brisbane, California, scientists at Unity Biotechnology are tackling the issue of cellular senescence. Senescence occurs when cells stop dividing further, causing age-associated illnesses like inflammation and degradation of tissues in the body and the extracellular environment that surrounds them. The researchers at Unity Biotechnology are developing therapeutics to selectively remove these senescent cells from the bodies of their patients (Unity Biotechnology, n.d.). Cellular senescence is irreversible because there is currently no known process that can cause a senescent cell to divide again. Cellular senescence does serve a purpose – it is known to suppress cancer development at an early stage – but it also contributes to many age-related pathologies later in life (Campisi, 2013). Another research firm named AgeX is working on induced tissue regeneration that can harness the regenerative capability of human tissue and repair it. This research looked at certain genes that switched on or off at the time of the loss of regenerative potential. The company has identified the gene COX7A1 as a marker for cells that have lost their regenerative potential and has subsequently formulated a technique to suppress the expression of this gene (AgeX


Therapeutics, n.d.). Scientists at the Cryonics Institute are working to preserve the body after legal death by cooling it to liquid nitrogen temperatures. This procedure is based on the idea that a recently deceased body still has its tissues intact, and that cryosleep preserves those tissues (Cryonics Institute, 2013). Scientists hope that future scientific advancements may one day restore the body to a healthy state (Best, 2008). Meanwhile, CRISPR biomedical technology offers the ability to locate, edit, and manipulate the DNA of any organism and, as a result, influence its functions; this capability has accelerated biotechnology and medicinal research (Hsu et al., 2014). Research into the phenomenon of hormesis has also gained momentum in the field of anti-aging. These efforts rest on the principle that the functional integrity and life of a cell can be maintained without having to make radical alterations to the mechanism controlling its reproductive lifespan. In general, hormesis involves the regulated administration of microdoses of an otherwise harmful stimulus, which has been shown to yield an increase in the lifespan of an organism (Rattan, 2004). The theory behind these microdoses is that a limited amount of stress for a controlled duration can cause a spike in the stress response of a cell, which facilitates tissue maintenance and therefore counteracts the process of tissue aging (Rattan, 2004). The researchers at the Sinclair Lab at Harvard Medical School are working with


Figure 4. Food and Drug Administration’s (FDA) Typical Drug Development and Approval Process Source: Wikimedia Commons

Nicotinamide Adenine Dinucleotide (NAD+), which has presented itself as a promising molecule for hormesis therapy. NAD+ concentration in the tissues gradually decreases as the human body ages. These researchers have used another compound, Nicotinamide Mononucleotide (NMN), to synthesize NAD+ in the body and thus increase its concentration in the tissues. NMN, which is made from Vitamin B and occurs naturally in the body, is a precursor to NAD+ and promotes its production. NMN is absorbed through the small intestine and carried into cells by the NMN transporter Slc12a8, where it is converted to NAD+ (Sinclair Lab, n.d.).

Process of Regulatory Approval

The field of anti-aging medicine is not devoid of false or misguided claims. One common example of this is the advice to take a high dosage of certain vitamins and hormones for anti-aging purposes, often without a prescription from medical professionals. Companies market these supplements directly, mostly on the basis of their purported anti-aging properties, yet these approaches are neither data-driven nor sufficiently backed by scientific research. Although some of these techniques have been used to treat certain diseases in elderly patients, they do not alter the aging process as a whole. Before a medical treatment can be put on the market, it must be approved by the US Food and Drug Administration (FDA). This process is rigorous and involves multiple steps, including discovery and research, pre-clinical trials, clinical trials, FDA review, and FDA post-market safety assessment, which increases the cost and time needed to develop a new treatment.

The FDA itself does not conduct trials for the drug, but oversees the validation process. The FDA also inspects the facilities where the drugs will be manufactured to ensure quality of production is maintained. Nevertheless, the approval process is important because it helps establish the effectiveness and validity of the treatment and helps consumers be aware of the risks involved (U.S. Food and Drug Administration).

Ethical Concerns Regarding Anti-Aging Research

The topic of anti-aging treatments also provokes many ethical questions regarding the science and technology involved. One concern is the actual benefit of living longer in the first place - a longer life does not necessarily mean more years in good health. In addition to these concerns about the quality of life, a longer lifespan would also mean an added burden to the healthcare system (Juengst, 2013). That being said, many companies working in this field of research are working towards improving healthspan (i.e., the number of years that a person lives a healthy life without any chronic ailments), not just extending people's lifespan (Crimmins, 2015). A second ethical obligation is for researchers and companies to adhere to FDA guidelines and not omit necessary steps in the drug development process in order to cut costs or time. Another ethical concern to be addressed is widespread, non-discriminatory access to any drugs or procedures that are developed.

“One concern is the actual benefit of living longer in the first place - a longer life does not necessarily mean more years in good health.”

Conclusion

Biogerontologists have classified aging as the decline in molecular repair that leads to age-related diseases. Factors like DNA mutation, inflammation, cellular senescence, and protein degradation determine the onset and the extent of age-related diseases. For the past few centuries, advances in medical science have increased average human life expectancy by nearly 50 years; this has led to an increasingly large elderly population and, at the same time, an increased incidence of pathology related to old age. Different groups of scientists follow varied approaches to tackle aging, ranging from lengthening the healthspan to restoring and revitalizing organ functionality. This is a principle that anti-aging scientists ought to consider carefully when proposing new therapeutics – improving lifespan while neglecting to preserve or improve quality of life would be a negative development from both a humanitarian and an economic perspective.

References
AgeX Therapeutics. Technology. https://www.agexinc.com/technology/.
Best, B. P. (2008, April). Scientific justification of cryonics practice. Rejuvenation Research. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4733321/.
Campisi, J. (2013). Aging, cellular senescence, and cancer. Annual Review of Physiology. https://www.annualreviews.org/doi/10.1146/annurev-physiol-030212-183653.
Center for Drug Evaluation and Research. Drug Development & Approval Process. U.S. Food and Drug Administration. https://www.fda.gov/drugs/development-approval-process-drugs.
Crimmins, E. M. (2015, November 10). Lifespan and healthspan: Past, present, and promise. The Gerontologist. https://academic.oup.com/gerontologist/article/55/6/901/2605490.
Cryonics Institute. The case for cryonics. https://www.cryonics.org/about-us/the-case-for-cryonics/.
Hayflick, L. (2004, June 1). Aging: The reality: “Anti-aging” is an oxymoron. The Journals of Gerontology: Series A. https://doi.org/10.1093/gerona/59.6.B573.
Hsu, P. D., Lander, E. S., & Zhang, F. (2014). Development and applications of CRISPR-Cas9 for genome engineering. Cell. https://www.cell.com/cell/fulltext/S0092-8674(14)00604-7.
Juengst, E. T., Binstock, R. H., Mehlman, M. J., & Post, S. G. (2003, February 28). Antiaging research and the need for public dialogue. Science. https://science.sciencemag.org/content/299/5611/1323.full.
Rattan, S. I. (2004). Aging, anti-aging, and hormesis. Mechanisms of Ageing and Development. https://pubmed.ncbi.nlm.nih.gov/15063104/.
Sinclair Lab. Research. Harvard Medical School, Department of Genetics. https://genetics.med.harvard.edu/sinclair/research.php.
Ullah, M., & Sun, Z. (2018). Stem cells and anti-aging genes: Double-edged sword – do the same job of life extension. Stem Cell Research & Therapy. https://doi.org/10.1186/s13287-017-0746-4.
Unity Biotechnology. The science. https://unitybiotechnology.com/the-science/.
Yohn, C., O'Brien, R., Lussier, S., Berridge, C., Ryan, R., Guermazi, A., … An, M. Senescent synoviocytes in knee osteoarthritis correlate with disease biomarkers, synovitis, and knee pain. ACR Meeting Abstracts. https://acrabstracts.org/abstract/senescent-synoviocytes-in-knee-osteoarthritis-correlate-with-disease-biomarkers-synovitis-and-knee-pain/.





Algal Blooms and Phosphorus Loading in Lake Erie: Past, Present, and Future

BY BEN SCHELLING '21

Cover Photo: Lake Erie Beach Sign
Source: Wikimedia Commons

History

“You’re glumping the pond where the Humming-Fish hummed! No more can they hum, for their gills are all gummed. So I’m sending them off. Oh their future is dreary. They’ll walk on their fins and get woefully weary in search of some water that isn’t so smeary. I hear things are just as bad up in Lake Erie” (Dr. Seuss, 1971)

Lake Erie has long held the reputation of being the most polluted of the Great Lakes. One of the most infamous environmental disasters in history took place on the Cuyahoga River in Cleveland, Ohio. In June of 1969, the river was so polluted from the surrounding industrial city that it caught on fire, with flames reaching over five stories high (Ohio History Connection). While this was not the first time a fire broke


out on the river (it has happened over a dozen times in recorded history), it did spark a movement that eventually led to the formation of the United States Environmental Protection Agency (EPA). While there has not been a fire on Lake Erie water since, there have been algal blooms, a much more complex environmental concern. In the 1960s, the population along the coast of Lake Erie was growing, and so was the pollution entering the lake. Laundry detergents with high phosphorus concentrations were common at this time as well. Along with the phosphorus-rich detergent, tons of human sewage made their way into the lake, unnaturally increasing the phosphorus concentration (Hasler, 1969). The excess phosphorus increased the phytoplankton biomass on the lake (Burns and Ross, 1972). This process is called eutrophication. Eutrophic simply means a body of water with abundant nutrients to support

Figure 1: A map of the bathymetry of Lake Erie. Red areas represent shallow water and blue represent deep water. Note that the west-most basin is the shallowest Source: NOAA, 2020

life within it. The shallower the lake, the more eutrophic (RMB Environmental Laboratories). This is because in shallow lakes the littoral zone, the portion of the lake where light can reach the bottom, is greater. When light reaches the bottom, photosynthetic life proliferates. Lake Erie is the shallowest of the Great Lakes, so it is the most susceptible to eutrophication. The western basin of the lake is extremely shallow, so it is even more susceptible than the rest of the lake (Figure 1). When the phytoplankton die, they sink to the bottom and decompose, which requires oxygen. When large masses of phytoplankton decompose, they require so much oxygen that they decrease the dissolved oxygen level of the lake and can create “dead zones”. Dead zones are portions of water that have such low dissolved oxygen concentrations that they cannot support life (NOAA, 2020). Phosphorus loading is highlighted as a key driver of eutrophication. Unlike other nutrients, such as nitrogen, phosphorus has no gaseous cycle. Once phosphorus is in the water or sediments, it remains there much longer than other nutrients (Schindler, 1977). The low oxygen condition of Lake Erie caused a trophic cascade, and consequently, the abundance of benthic invertebrate species decreased. Mayfly nymphs, a benthic invertebrate, were once found at an abundance of 500 per square meter, but in the late 1960s, they were found in densities of only five per square meter, which is a 99% decrease (Hasler, 1969). At the same time, the populations of popular commercial fish such as herring, walleye, and blue pike


were decreasing as well. Lake Erie’s reputation was so poor that reporters named it “the dead lake” (Ashworth, 1982). Notably, Dr. Seuss mentions the lake’s poor health in his book The Lorax (Figure 2). While popular media slammed Lake Erie, dedicated scientists worked hard to find the root of the issue and develop a solution. In 1969, Arthur Hasler published a compelling paper titled “Cultural Eutrophication is Reversible.” Hasler highlighted the main issue – the point-source pollution from detergents and sewage – and emphasized that, “It is of the greatest urgency to prevent further damage to water resources and to take corrective steps to reverse present damages.” In 1972, the newly formed EPA released an extensive report on the health of Lake Erie titled “Project Hypo”. Project Hypo investigated the causes of the low dissolved oxygen conditions in the lake’s hypolimnion (the deepest and coldest water) that caused the trophic cascade. The report suggests that if the phosphorus loading into the lake in 1970 had been much smaller, the total phytoplankton biomass in 1970 would have been much smaller as well (Burns and Ross, 1972).

“Unlike other nutrients, phosphorus has no gaseous cycle. Once phosphorus is in the water or sediments, it remains there much longer than other nutrients.”

Shortly after the publication of Project Hypo, the United States and Canadian governments jointly created the Great Lakes Water Quality Agreement (GLWQA) of 1972. This agreement set a goal of 6.0 mg/L of O2 and a limit of 1 mg/L of phosphorus. To achieve these goals, the agreement set a total annual limit of 11,000 metric tons (MT) of phosphorus discharge into

Figure 2: The Lorax, written by Dr. Seuss in 1971, contains a verse about Lake Erie and its dismal water conditions:

"You're glumping the pond where the Humming-Fish hummed! No more can they hum, for their gills are all gummed So I'm sending them off. Oh, their future is dreary They'll walk on their fins and get woefully weary in search of some water that isn't so smeary I hear things are just as bad up in Lake Erie." Source: Available under the Creative Commons License at Pixy.org; Book written by Dr. Seuss (1971)

“In the early 1990s, it seemed that the GLWQA had sufficiently reduced phosphorus loading. But as early as the mid-1990s, phytoplankton masses began to reappear."

Lake Erie. While the GLWQA of 1972 focused on point sources from sewage, the 1978 amendment also mandated the reduction of phosphorus-heavy detergents (United States Government and Canadian Government, 1972). For the most part, the GLWQA was a success. From 1972 to 1975, the nearshore phytoplankton biomass decreased by 42% (Nicholls et al., 1977). This was the first evidence to suggest that Lake Erie was making a comeback. Eventually, the total annual phosphorus discharge into the lake fell below the 11,000 MT limit and the health of Lake Erie improved. An assessment of phosphorus and phytoplankton from 1970 to 1987 suggested “cautious optimism on the phosphorus-reduction program” (Makarewicz and Bertram, 1991). Phosphorus loading was reduced from 15,260 MT per year to 2,445 MT per year, an 84% reduction. Similarly, the phytoplankton biomass across the whole lake was reduced to 89% of its original size before the GLWQA (Makarewicz et al., 1993; Makarewicz and Bertram, 1991). The water quality in the lake improved so much that Dr. Seuss decided to remove the portion of The Lorax that mentions the poor ecosystem health of Lake Erie (Figure 2; Fortner, 2019).
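The percent reduction quoted above is simple arithmetic on the reported loads. A minimal check, using only the figures stated in the text (no new data):

```python
# Percent-reduction arithmetic behind the GLWQA loading figures quoted above.

def percent_reduction(before: float, after: float) -> float:
    """Percent decrease from `before` to `after`."""
    return (before - after) / before * 100

# Annual phosphorus loads into Lake Erie (Makarewicz and Bertram, 1991).
p_before_mt = 15_260  # MT/year before phosphorus controls
p_after_mt = 2_445    # MT/year after

print(round(percent_reduction(p_before_mt, p_after_mt)))  # 84, as reported
```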

Soluble Reactive Phosphorus

In the early 1990s, it seemed that the GLWQA had sufficiently reduced phosphorus loading.


But as early as the mid-1990s, phytoplankton masses began to reappear (Bridgeman et al., 2013). On average, there was no increase in total phosphorus (TP), and the total annual discharge was under the 11,000 MT limit. So, what could have caused this increase in phytoplankton biomass? The answer is soluble reactive phosphorus (SRP). Phosphorus is found in many different forms. The biggest distinction is between bioavailable phosphorus (soluble reactive phosphorus, or “SRP”, sometimes called dissolved reactive phosphorus, “DRP”) and non-bioavailable phosphorus (non-reactive phosphorus, “NRP”). Bioavailable phosphorus is a form that is readily available for uptake by any organism. Non-bioavailable phosphorus, such as iron-bound phosphates, is transported as particles into the lake, settles in the sediment, and is never used by algae or bacteria (Hecky et al., 2004). In the 1990s, the fraction of TP made up of SRP increased (Figure 3).
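The SRP:TP ratio plotted in Figure 3 is a straightforward computation once a load is split into reactive and non-reactive fractions. A minimal sketch; the annual loads below are invented placeholders, not the Heidelberg National Center for Water Quality Research data behind the figure:

```python
# Sketch of the SRP:TP ratio shown in Figure 3 (bottom panel).
# Placeholder annual loads in metric tons: year -> (TP, SRP).
loads_mt = {
    1991: (2800, 210),   # hypothetical values for illustration only
    2000: (2600, 390),
    2010: (2900, 640),
}

# Bioavailable share of the total load; the remainder (TP - SRP) is NRP.
ratios = {year: srp / tp for year, (tp, srp) in loads_mt.items()}

for year in sorted(ratios):
    print(f"{year}: SRP/TP = {ratios[year]:.2f}")
```

With placeholder values like these, the ratio rises over time even though TP stays roughly flat, which is the pattern the text describes for the 1990s onward.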

Dreissenid Mussels

Around the time Lake Erie was making a comeback, invasive Dreissenid mussels were introduced into the Great Lakes from vessels arriving from the Ponto-Caspian region (Zhang et al., 2011). They were most likely transported in ballast water: water carried on a ship to help control buoyancy. It is estimated that 80% of the total bottom of Lake Erie was colonized by the mussels by 1992 (Makarewicz et al., 2000). There are two species of Dreissenid mussels, zebra and quagga. Both species are benthic feeders and play a major role in the nutrient cycling within the lake. Some organisms, such as Dreissenid mussels, can alter the chemistry of non-bioavailable phosphorus to a bioavailable form. This is a process known as nutrient remineralization: the transformation of particle nutrients (non-bioavailable) that have settled out of the water column into soluble forms (bioavailable) that can be transported through the water column and made available to growing algae and bacteria (Conroy et al., 2005). Dreissenid mussels can filter through 30% of the suspended matter each day, which is much faster than other organisms (Makarewicz et al., 2000). Benthic detritivores eat the feces of Dreissenid mussels, which furthers the nutrient remineralization process and produces even more bioavailable phosphorus (Hecky et al., 2004). The influence of Dreissenid mussels on a lake ecosystem depends on the physical conditions of the lake. In deeper water, Dreissenid mussels will decrease the SRP concentration, but in shallower water, they will enhance it (Steinmetz, 2015). A study of 61 lakes across Michigan found that lakes with more zebra mussels have a smaller population of phytoplankton because zebra mussels may graze on phytoplankton (Raikow et al., 2004). Interestingly, lakes with low nutrient concentrations showed an increase in Microcystis, a species of cyanobacteria, with an increase in zebra mussel abundance.
This may be due to the anti-grazing properties of the Microcystis strain. Similarly, in the Saginaw Bay, the introduction of Dreissenid mussels led to an outbreak of Microcystis (Bierman et al., 2005). This highlights the complex relationship between invasive Dreissenid mussels and their


ecosystem. Lake Erie, being the shallowest of the Great Lakes, meets the criteria for an ecosystem in which Dreissenid mussels increase SRP. While TP levels decreased, SRP levels increased partially due to the colonization of invasive Dreissenid mussels in Lake Erie (Conroy et al., 2005; Hecky et al., 2004; Kleinman et al., 2015; Steinmetz 2015; Zhang et al., 2011).

Tillage

Another explanation for the increase in SRP is the start of no till and conservation tillage. Tilling is the agricultural preparation of soil by mechanical agitations such as digging, stirring, and overturning. The 1983 revision of the GLWQA, which focused on reducing TP and preventing erosion, suggested no till and conservation tillage as methods to reduce TP run-off and erosion (United States Government and Canadian Government, 1972). No till practices also increase biological activity in the soil, which leads to better soil structure and improved water storage (Ulén et al., 2010). In the Maumee and Sandusky watersheds (the watersheds that drain into the western basin of Lake Erie), half of corn and soybean farmers practiced conservation tillage (Richards et al., 2002). At first, this practice appeared to work, as TP loads decreased. The no till practices were a major driver in reducing particulate phosphorus (another term for NRP) run-off into the western basin of Lake Erie (Richards et al., 2009). While reducing TP run-off, however, no till practices were also enhancing SRP run-off (Baker et al., 2017; Kleinman et al., 2011; Smith et al., 2015a; Ulén et al., 2010).

“Around the time Lake Erie was making a comeback, invasive Dreissenid mussels were introduced into the Great Lakes from vessels arriving from the Ponto-Caspian region...”

However, no till and conservation tillage practices allowed phosphorus to vertically stratify in the soil (Baker et al., 2017; Duiker and Beegle 2006; Jarvie et al., 2017). Soil stratification is the process of soil particles separating based on their physical properties, resulting in visible banding. Phosphorus stratification concentrates SRP near the soil surface, which heightens the possibility of run-off (Baker et al., 2017; Duiker and Beegle 2006). Another potential pathway for SRP run-off is through macropores (Smith et al., 2015b). Macropores are large, continuous pores in the soil through which water carrying SRP can flow quickly. Reduced tillage systems promote the development of macropores. In the watersheds of Lake Erie's western basin, no till practices decreased TP losses by 69%, but doubled the


Figure 3: The top figure displays total phosphorus (TP) discharged measured in metric tons (MT) from the Maumee River from 1991 to 2019. The red portion represents the nonreactivate phosphorus (NRP) and the blue represents the soluble reactive phosphorus (SRP). The bottom figure represents the ratio of soluble reactive phosphorus (SRP) to total phosphorus (TP). Data is from National Center for Water Quality Research Source: Heidelberg University, 2020. Image created by author in JMP 14.2.0.

loss of SRP (Smith et al., 2015a). In Norway, a study found no till practices to reduce NRP run-off but increase SRP run-off by a factor of 4 (Ulén et al., 2010). Once thought of as a conservation practice to reduce phosphorus loading, reduced tillage has provided a new phosphorus source that stimulated algal bloom growth into the new millennium.

Toledo Water Crisis

“From August 2nd to 4th, 2014, the water supply for the city of Toledo was contaminated.”


From August 2nd to 4th, 2014, the water supply for the city of Toledo was contaminated. On August 2nd, at 2 am, the governor of Ohio put out a no-drink order on the water supply (WTOL, 2019). The water supply was contaminated with microcystin, a toxin produced by the cyanobacterium Microcystis aeruginosa. When microcystin is found at a concentration over 1 µg/L, the water is considered unsafe to drink (Kasich et al., 2016). Toledo was not prepared to supply

400,000 people with alternative drinking water. There was no secondary water supply for emergencies (Jetoo et al., 2015). By 12 pm, the governor declared a state of emergency (WTOL, 2019). It would be three whole days before the water in Toledo was safe to drink again. The most common freshwater bloom-forming cyanobacterium is Microcystis aeruginosa. M. aeruginosa releases a toxin called microcystin that has many adverse health effects. Exposure to microcystin during recreational activities can lead to acute illness causing abdominal pain, headache, sore throat, vomiting, diarrhea, blistering around the mouth, and pneumonia (United States EPA, 2020a). Ingestion through drinking water can cause more serious health effects such as liver or kidney failure and may even increase the risk of ALS (Banack et al., 2015). While microcystin poisoning is rare, it can cause death. Dogs that encounter contaminated water are at high risk, and there


are many reported deaths due to M. aeruginosa ingestion in dogs (Stewart et al., 2008). There are reports of human and dog illnesses caused by the toxin in 50 countries and 27 states (United States EPA, 2020a). While M. aeruginosa naturally occurs in freshwater systems, the high abundance seen today is almost entirely anthropogenic, or due to human activity. Similar to other phytoplankton, an unnatural amount of phosphorus can promote excessive growth (Haney and Ikawa 2001; Michalak et al., 2013). Increasing water temperatures as a result of climate change also promote M. aeruginosa growth (Liu et al., 2011). When water temperatures are higher for longer periods of time, lake stratification is more pronounced for longer. Lake stratification occurs when the lake separates into different temperature zones. When the air is warm enough, the top layer of the lake warms while the bottom remains cold. This separation reduces mixing between temperature zones, allowing M. aeruginosa, which is more buoyant than water, to ascend to the surface, where blooms form (Paerl and Huisman, 2009; Wood et al., 2017). Higher atmospheric CO2 concentrations are also advantageous for M. aeruginosa growth. M. aeruginosa displays phenotypic plasticity: a change in morphology, behavior, or physiology based on environmental conditions. The phenotypic plasticity of M. aeruginosa allows it to fix carbon from the atmosphere at a greater rate than other aquatic organisms (Ji et al., 2020).

Current Legislation After the 2014 Toledo water crisis, the state of Ohio created new legislation to prevent a similar event in the future. The first legislation passed was Senate Bill 150. This bill requires certification by the Ohio Department of Agriculture (DOA) for anyone who applies fertilizer to more than 50 acres of land (Cera et al., 2014). The certified card holder must record the type of fertilizer being applied and who is applying it, as well as the time, place, and rate of application. This information must be available for audits by the DOA. Currently, Ohio is the only state with legislation of this kind (Snyder, 2018). Violation of the law can result in the loss of fertilizer registration or refusal to register someone for fertilizer application. This law was made effective on August 21, 2014. The second legislation was Senate Bill 1,


effective July 3, 2015. This bill set restrictions on fertilizer application methods to reduce the SRP discharge into the lake. Senate Bill 1 states that no person in the watershed of the western basin of Lake Erie may apply fertilizer or manure on snow-covered or frozen soil, if the top two inches of soil are saturated from precipitation, or if the weather forecast calls for a greater than 50% chance of precipitation exceeding 1 inch in a 12-hour period (Gardner and Peterson, 2015). It may seem odd, but December often has the greatest SRP loading of any month (Figure 4). The average discharge from the Maumee River varied significantly by month for TP (Figure 4: top, ANOVA: P<0.0001, F(1,12)=14.38) and for SRP (Figure 4: bottom, ANOVA: P<0.0001, F(1,12)=12.03). According to Dr. Baker, it is important to avoid winter application for two reasons. The first reason is that SRP is concentrated at the surface from no-till practices and the crop residue that is left to decompose at the surface. Crop residue is the plant biomass that remains in the ground after a harvest, and it is often made up of the stems of plants. The second reason that Dr. Baker highlights is that rainfall in December is elevated, causing large discharges of water carrying SRP. Senate Bill 1 also mandates that fertilizer and manure must be injected into the ground as opposed to broadcast application, where the fertilizer is applied to the soil surface. According to Dr. Baker, injection prevents phosphorus runoff in two ways. First, it reduces the buildup of phosphorus at the surface that is already enhanced by phosphorus stratification. This type of phosphorus loss is known as an acute loss: the loss of phosphorus from immediate rainfall before it interacts with the soil. The second reason is that the physical intrusion of injection disrupts the formation of macropores that act as a “pipeline” for SRP to travel from agriculture to the water.
This type of phosphorus loss is known as a chronic loss: the loss of phosphorus from the soil itself.

“After the 2014 Toledo water crisis, the state of Ohio created new legislation to prevent a similar event in the future.”

The GLWQA saw great success in only a matter of years, so should we expect the 2015 Senate Bill 1 to cause rapid change as well? Phosphorus data from 2000 to 2019 show no significant difference before and after the legislation for TP (ANOVA: P=0.26, F(1,12)=1.28) or for SRP (ANOVA: P=0.40, F(1,12)=0.71). Before Senate Bill 1, the average TP load by month was 89.6 MT; after, it is 108 MT (Figure 5). Before, the average SRP load by month was 24.4 MT; after, it is 26 MT (Figure 5).
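The before-and-after comparison described above can be reproduced in a few lines. Below is a minimal sketch of a two-group, one-way ANOVA F statistic on Log(X+1)-transformed loads; the monthly values are invented for illustration (the article's actual analysis used the Heidelberg data in JMP):

```python
import math

def log_transform(values):
    """Log(X+1) transform, as applied to the load data in this article."""
    return [math.log10(v + 1) for v in values]

def one_way_anova_F(group_a, group_b):
    """F statistic for a one-way ANOVA with two groups
    (equivalent to the square of a two-sample t statistic)."""
    groups = [group_a, group_b]
    all_vals = group_a + group_b
    grand_mean = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical average monthly TP loads (MT) before and after the 2015 bill
before = [60, 85, 120, 95, 70, 88, 110, 92, 75, 101, 83, 96]
after = [78, 102, 140, 118, 90, 105, 131, 108, 95, 120, 99, 110]
print(round(one_way_anova_F(log_transform(before), log_transform(after)), 2))
```

A large F relative to the appropriate F distribution would indicate a significant before/after difference; the small F values reported in the article are consistent with no detectable change.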


Figure 4: The average phosphorus loads into the Maumee River by month from 2000-2019. The top displays TP and the bottom displays SRP; both are measured in MT. Error bars represent 1 SE around the mean. Data are from the National Center for Water Quality Research, Heidelberg University, 2020. Image created by author in JMP 14.2.0.

“Agricultural practices are not the only factors influencing run off. The weather has a large impact on the amount of phosphorus discharge.”


Does this mean that Ohio Senate Bill 1 was a failure? It is hard to say. Agricultural practices are not the only factors influencing runoff. The weather has a large impact on the amount of phosphorus discharge. For example, the phosphorus loads in 2019 were extremely large (Figure 3). This is not a result of farmers violating the law. Rather, as suggested by Dr. Baker, it was the result of an unusual amount of rainfall in 2018 and 2019. The large amount of rain washed loads of phosphorus into the lake. Even so, the SRP levels were one third less than expected from this amount of rainfall. Dr. Baker predicts that this is because farmers had fewer opportunities to apply fertilizer due to the heavy rain. The result of the high loads in 2019 was one of the largest blooms on record. It was so large that it covered an area six times the size of Cleveland and could be seen from space (Johnston, 2019). Although it formed before the 2015 legislation, the record-setting 2011 bloom was also a result of extreme meteorological events and long-term agricultural practices, not short-term phosphorus loads that year (Michalak et al., 2013). The 2011 bloom biomass reached 40,000 MT, and the 2019 bloom biomass reached a predicted 46,300 MT, topping the previous record (Obenour et al., 2014; Scavia et al., 2019). The 2014 bloom was only 22,000 MT, less than half that of 2011 and 2019. Neither the 2011 nor the 2019 bloom caused a water crisis like the much smaller 2014 bloom.

Toledo was unlucky in 2014. The bloom formed around the intake for the water treatment plant. The M. aeruginosa-rich water that collected in Maumee Bay was blown east along the southern shoreline and past the Toledo water intake (Steffen et al., 2017). This assessment of the 2014 crisis is correct according to Dr. Baker. He also notes that, along with prevailing winds and biomass movement at the surface, the location of the mass within the water column contributed to the drinking water contamination. The water treatment plant intake is not at the surface, where blooms are most prevalent. When it is windy, as it was in August 2014, the bloom is stirred up and brought down to the intake. Research also suggests that a viral outbreak among the M. aeruginosa population in the bay caused an unusually high toxin concentration (Erickson, 2017; Steffen et al., 2017). A DNA analysis of the 2014 bloom reveals that a virus was spreading between the M. aeruginosa cells, causing them to lyse, or break open. The lysing cells released an unusual amount of microcystin toxin into the water, elevating the toxin concentration entering the water intake (Steffen et al., 2017). While the media labeled this crisis as “unlucky,” the combination of winds and a viral outbreak is likely to happen again.


Figure 5: The average monthly phosphorus loading for the Maumee River, 2000-2019 (top: TP, bottom: SRP), before and after the passing of Ohio Senate Bill 1 in 2015. The error bars represent 1 SE from the mean and the values reported are the means. Data are from the National Center for Water Quality Research, Heidelberg University, 2020. Image created by author in JMP 14.2.0.

The question still remains: was Ohio Senate Bill 1 successful at reducing the risk of water contamination? The answer is not as clear as the success of the GLWQA, which showed relatively rapid results. This is an understandable difference, since reducing point source pollution is simpler than reducing non-point source pollution. Ohio Senate Bill 1 may be the best mitigation tactic given what is known. Unfortunately, the most influential factor impacting bloom formation is one that cannot be regulated by law: the weather. The frequency of extreme weather events is much greater than it used to be, increasing the chances of another crisis. Therefore, a comparison between the two pieces of legislation is unfair. Today, it is an entirely different ballgame.

Future Legislation It is important to remember what is at stake when considering future policies for Lake Erie. There is a $15.1 billion tourism industry on the coast of Lake Erie, most of which is concentrated in the western basin (Briscoe, 2019). The closure of beaches due to health concerns from algal blooms hurts this extremely lucrative tourism industry that many Ohioans rely on. More importantly, algal


blooms threaten the water supply for over 11 million people (United States EPA, 2020b). The 2014 water crisis contaminated the water supply of over 400,000 people and cost an estimated $65 million (NOAA, 2015). The cost of damage from algal blooms is great but the potential health crisis from contaminating the water is much greater. In general, public health is most important. If scientists have identified agriculture as the major source of phosphorus causing blooms, then why don’t we eliminate the pollution source entirely (Briscoe, 2019)? Why did the state of Ohio not ban the use of fertilizers if public health is the number one priority? It is not that simple. The Ohio agriculture industry is massive. There are over 75,000 farms in Ohio, contributing $105 billion to the annual state economy and directly accounting for 13% of the state’s total business (Ohio Department of Agriculture, 2018). A law that completely restricts fertilizer application would destroy this massive industry, leaving thousands unemployed. The Ohio economy would take a huge hit, and the absence of products from Ohio’s agriculture industry to in-state and out-of-state markets would be disastrous. While public health still comes before economics, the downfalls of

“The 2014 water crisis contaminated the water supply of over 400,000 people and cost an estimated $65 million.”


eliminating fertilizer application are too great to ignore. The solution must protect clean drinking water supplies without a major disruption to the largest Ohio industry. This is what Senate Bill 1 has started to do. There are restrictions on application methods to reduce drinking water contamination concerns, while still allowing the agriculture industry to move forward. There are still possibilities for new policies that satisfy both public health and the agriculture industry. One possibility is the improvement of riparian zones. Riparian zones are vegetated areas that act as buffers between agricultural runoff and waterways. They have been successful in reducing phosphorus runoff (Jaynes et al., 2014). So, why have we not increased the abundance and size of these buffer zones? Riparian buffers do not catch runoff under the current tile drainage system in place on the majority of Ohio farms. Tile drainage is a drainage system that collects excess water underground, which is then transported through pipes that bypass the important riparian buffer zones. Currently, 49% of Ohio farms use tile drainage systems, much greater than the 14% national average (Zulauf and Brown, 2019). One possible solution is to redirect the drainage systems so the riparian buffers can catch excess nutrients. A tile drainage redirection method has been developed in Iowa and has proven to catch all nitrate runoff (Jaynes et al., 2014; Jaynes and Isenhart, 2014). While this has not been proven to work for phosphorus runoff, it is likely to do so, since riparian zones generally reduce all types of runoff. This is an excellent opportunity for future research and has great promise for reducing phosphorus runoff in Ohio.

“The most promising opportunity to decrease Ohio runoff, one not mentioned in any current legislation, is blind inlets.”


The most promising opportunity to decrease Ohio runoff, one not mentioned in any current legislation, is blind inlets. Blind inlets are mat drainage structures that act as a buffer at the lowest point of a field, interfering with water flow before it enters a drainage pipe or passes through a riparian buffer zone. In contrast to the limited research on tile drainage redirection, there has been significant research on blind inlets documenting reductions in phosphorus loading compared to tile drainage methods. Compared to tile drainage, blind inlets can reduce TP loads by 60-71.9% and SRP loads by 50-79.4% (Feyereisen et al., 2015; Smith et al., 2015b; Smith and Livingston, 2013). These numbers are extremely promising, and the implementation of these drainage systems, while a significant structural change, is relatively simple. A combination of eliminating tile drainage, allowing for passage

of water through riparian buffers and the inclusion of blind inlets is the ideal next step in reducing run off while supporting Ohio farms (See Table 1 for a breakdown of the potential improvements to Ohio agriculture legislation).

Statistical Methods The phosphorus data were retrieved from the National Center for Water Quality Research for the Maumee River (Heidelberg University, 2020). Negative concentrations and discharges were excluded from the data because they make no physical sense. All tests were performed on Log(X+1) transformed data. All statistics were done in JMP Pro 14.2.0.
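As a sketch of this preprocessing (the helper name is hypothetical; the original analysis was performed in JMP Pro 14.2.0), the cleaning and transformation steps amount to:

```python
import math

def clean_and_transform(loads):
    """Drop physically impossible negative values, then apply the
    Log(X+1) transform described in the Statistical Methods."""
    cleaned = [x for x in loads if x >= 0]
    return [math.log10(x + 1) for x in cleaned]

# A negative load is a measurement artifact and is excluded
print(clean_and_transform([12.4, -0.3, 0.0, 99.0]))
```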

Acknowledgements Special thanks to Dr. David Baker, a professor at Heidelberg University. Dr. Baker has 72 publications and has been cited more than 3,000 times. He has also been at the forefront of research on algal blooms and phosphorus loading in Lake Erie since the 1960s. His professional and informed opinions appear throughout the text. Thank you to Melissa Desiervo, a PhD candidate at Dartmouth College in the Ecology, Evolution, Environment and Society graduate program. Melissa advised the statistical analysis. This paper would not be possible without the general advising of Professor Michael Cox, Dartmouth College. Your guidance is greatly appreciated.

Key Terms Bio-available: The form of a nutrient that is available for a living organism to use. This is when the nutrient is not bound to an organic compound. Dreissenid mussels: A type of mussel (zebra and quagga) native to the Ponto-Caspian region. They are invasive in Lake Erie and alter aquatic phosphorus cycling. Eutrophication: The excessive growth of algae in a body of water due to an oversaturation of nutrients. Great Lakes Water Quality Agreement (GLWQA): The 1972 agreement between the United States and Canadian governments that sets phosphorus loading limits and targets the reduction of point source pollution.


Macropores: Relatively large pores and channels in soil that can form under no-till or conservation tillage. Microcystin: The toxin that is produced by many cyanobacteria, most notably Microcystis aeruginosa. Microcystis aeruginosa: The scientific name for the cyanobacteria that dominate the recent algal blooms in Lake Erie. Mineralization: The decomposition of chemical compounds in organic matter, releasing nutrients in a soluble inorganic form that is bioavailable. Trophic Cascade: Powerful indirect interactions that can control an entire ecosystem when a trophic level in a food web is suppressed. Non-Reactive Phosphorus (NRP): The portion of TP that is non-bioavailable. Ohio Senate Bill 1: Addresses agricultural regulations and application of fertilizer. Establishes Lake Erie water quality protections. Ohio Senate Bill 150: Sets the requirement of a permit for anyone who applies fertilizer to a plot of land greater than 50 acres. Phytoplankton: All aquatic autotrophic life between 50 and 60 microns in length. Soil Stratification: The separation of soil types by their physical properties resulting in abrupt porosity changes at various depths. Phosphorus Stratification: The accumulation of phosphorus at the soil surface where it can easily be transported by precipitation into a water source. Lake Stratification: The separation of water into three layers based on temperature and density: Epilimnion (shallowest layer, warmest, least dense), Metalimnion (middle layer), and Hypolimnion (deepest layer, coldest, most dense). Soluble Reactive Phosphorus (SRP): Also called dissolved reactive phosphorus (DRP), the portion of TP that is bioavailable. Tillage: The agricultural preparation of soil by


mechanical agitation such as digging, stirring, and overturning. Conservation tillage: The practice of reducing the amount of tilling. References Ashworth, W. (1982). The Late, Great Lakes: An Environmental History. Baker, D., Johnson, L., & Confesor, R. (2017). Vertical Stratification of Soil Phosphorus as a Concern for Dissolved Phosphorus Runoff in the Lake Erie Basin. Journal of Environmental Quality, 46, 1287–1295. Banack, S., Caller, T., & Henegan, P. (2015). Detection of Cyanotoxins, β-N-methylamino-L-alanine and Microcystins, from a Lake Surrounded by Cases of Amyotrophic Lateral Sclerosis. Toxins, 7, 322–336. Bierman, V., Kaur, J., & DePinto, J. (2005). Modeling the Role of Zebra Mussels in the Proliferation of Blue-green Algae in Saginaw Bay, Lake Huron. Journal of Great Lakes Research, 31, 32–55. Bridgeman, T., Chaffin, J., & Filbrun, J. (2013). A novel method for tracking western Lake Erie Microcystis blooms, 2002–2011. Journal of Great Lakes Research, 39, 83–89. Briscoe, T. (2019). The shallowest Great Lake provides drinking water for more people than any other. Algae blooms are making it toxic — and it’s getting worse. Chicago Tribune. Burns, N., & Ross, C. (1972). Project Hypo: An Intensive Study of the Lake Erie Central Basin Hypolimnion and Related Surface Water Phenomena. United States Environmental Protection Agency. To revise the law governing the abatement of agricultural pollution, to require a person that applies fertilizer for the purposes of agricultural production to be certified to do so by the Director of Agriculture, to make other changes to the Agricultural Additives, Lime, and Fertilizer Law., no. 150 (2014). Conroy, J., Edwards, W., & Pontius, R. (2005). Soluble nitrogen and phosphorus excretion of exotic freshwater mussels (Dreissena spp.): potential impacts for nutrient remineralisation in western Lake Erie. Freshwater Biology, 50, 1146–1162. Dr. Seuss. (1971). The Lorax. Duiker, S., & Beegle, D. (2006).
Soil fertility distributions in long-term no-till, chisel/disk and moldboard plow/disk systems. Soil and Tillage Research, 88, 30–41. Erickson, J. (2017). Virus infection may be linked to 2014 Toledo water crisis. Michigan News. Feyereisen, G., Francesconi, W., & Smith, D. (2015). Effect of Replacing Surface Inlets with Blind or Gravel Inlets on Sediment and Phosphorus Subsurface Drainage Losses. Journal of Environmental Quality, 44, 594–604. Fortner, R. (2019). There’s Nothing Smeary About Lake Erie Anymore. Ohio Sea Grant. Addresses agricultural regulations and application of fertilizer, no. 1 (2015). Haney, J., & Ikawa. (2001). A Survey of 50 NH Lakes for


Microcystins (MCs). University of New Hampshire Scholar’s Repository, 127.

Bathymetry of Lake Erie & Lake Saint Clair. https://www.ngdc.noaa.gov/mgg/greatlakes/erie.html

Hasler, A. (1969). Cultural Eutrophication Is Reversible. Bioscience, 19(5), 425–431.

National Oceanic and Atmospheric Administration. (2015). Harmful Algal Blooms (HABs) in the Great Lakes (p. 3).

Hecky, R. E., Smith, R. E. H., & Barton, D. R. (2004). The nearshore phosphorus shunt: a consequence of ecosystem engineering by dreissenids in the Laurentian Great Lakes. Canadian Journal of Fisheries and Aquatic Sciences, 61, 1285–1293.

National Oceanic and Atmospheric Administration. (2020). What is a Dead Zone? https://oceanservice.noaa.gov/facts/deadzone.html

Heidelberg University. (2020). National Center for Water Quality Research. Tributary Data Download. Jarvie, H., Johnson, L., & Sharpley, A. (2017). Increased Soluble Phosphorus Loads to Lake Erie: Unintended Consequences of Conservation Practices? Journal of Environmental Quality, 46, 123–132. Jaynes, D., & Isenhart, T. (2014). Reconnecting Tile Drainage to Riparian Buffer Hydrology for Enhanced Nitrate Removal. Journal of Environmental Quality, 43, 631–638.

Obenour, D., Gronewold, D., & Stow, C. (2014). 2014 Lake Erie Harmful Algal Bloom (HAB) Experimental Forecast: This product represents the first year of an experimental forecast relating bloom size to total phosphorus load. Ohio Department of Agriculture. (2018). Ohio Agriculture. Farm Flavor. https://www.farmflavor.com/ohio-agriculture/

Jaynes, D., Isenhart, T., & Parkin, T. (2014). Reconnecting riparian buffers with tile drainage (2). Leopold Center Completed Grant Reports.

Ohio History Connection. (n.d.). Cuyahoga River Fire. Ohio History Central. https://ohiohistorycentral.org/w/Cuyahoga_ River_Fire

Jetoo, S., Grover, V., & Krantzberg, G. (2015). The Toledo Drinking Water Advisory: Suggested Application of the Water Safety Planning Approach. Sustainability, 7, 9787–9808.

Paerl, H., & Huisman, J. (2009). Climate change: a catalyst for global expansion of harmful cyanobacterial blooms. Environmental Microbiology Reports, 1(1), 27–37.

Ji, X., Verspagen, J., & Van de Waal, D. (2020). Phenotypic plasticity of carbon fixation stimulates cyanobacterial blooms at elevated CO2. Science Advances.

Raikow, D., Sarnelle, O., & Wilson, A. (2004). Dominance of the noxious cyanobacterium Microcystis aeruginosa in low-nutrient lakes is associated with exotic zebra mussels. Limnology and Oceanography, 49(2), 482–487.

Johnston, L. (2019). How big did the 2019 Lake Erie harmful algal bloom get? See the progression. Cleveland.com. https://www.cleveland.com/news/g66l-2019/09/204be17826609/how-big-did-the-2019-lake-erie-harmful-algal-bloom-get-see-the-progression.html

Richards, Baker, D., & Crumrine, J. P. (2009). Improved water quality in Ohio tributaries to Lake Erie: A consequence of conservation practices. Journal of Soil and Water Conservation, 64(3).

Kasich, J., Taylor, M., & Butler, G. (2016). Public Water System Harmful Algal Bloom Response Strategy. Ohio Environmental Protection Agency.

Richards, R. P., Baker, D., & Eckert, D. (2002). Trends in Agriculture in the LEASEQ Watersheds, 1975–1995. Journal of Environmental Quality, 31.

Kleinman, P., Sharpley, A., & Johnson, L. (2015). Implementing agricultural phosphorus science and management to combat eutrophication. AMBIO, 44, 293–310.

RMB Environmental Laboratories Inc. (n.d.). Lake Eutrophication. Lakes Monitoring Program. https://www.rmbel.info/primer/lake-eutrophication/

Liu, X., Lu, X., & Chen. (2011). The effects of temperature and nutrient ratios on Microcystis blooms in Lake Taihu, China: An 11-year investigation. Harmful Algae, 10, 337–343.

Scavia, D., Manning, N., & Bertani, I. (2019). 2019 Western Lake Erie Harmful Algal Bloom (HAB) Forecast.

Makarewicz, J. (1993). Phytoplankton Biomass and Species Composition In Lake Erie, 1970 to 1987. Journal of Great Lakes Research, 19(2). Makarewicz, J., & Bertram, P. (1991). Evidence for the Restoration of the Lake Erie Ecosystem. Oxford University Press, 41(4), 216–223. Makarewicz, J., Bertram, P., & Lewis, T. (2000). Chemistry of the Offshore Surface Waters of Lake Erie: Pre- and Post-Dreissena Introduction (1983-1993). Journal of Great Lakes Research, 21(1), 82–93. Michalak, A., Anderson, E., & Chaffin, J. (2013). Record-setting algal bloom in Lake Erie caused by agricultural and meteorological trends consistent with expected future conditions. PNAS, 110(16), 6448–6452. National Oceanic and Atmospheric Administration. (n.d.).


Nicholls, K. H., Standen, D. W., Hopkins, G. J., & Carney, E. C. (1977). Declines in the Near-Shore Phytoplankton of Lake Erie’s Western Basin Since 1971. Journal of Great Lakes Research, 3, 72–78.

Schindler, D. W. (1977). Evolution of Phosphorus Limitation in Lakes: Natural Mechanisms Compensate for Deficiencies of Nitrogen and Carbon in Eutrophied Lakes. Science, 195, 260–262. Smith, D., Francesconi, W., & Livingston, S. (2015a). Phosphorus losses from monitored fields with conservation practices in the Lake Erie Basin, USA. AMBIO, 22, 319–331. Smith, D., King, K., & Johnson, L. (2015b). Surface Runoff and Tile Drainage Transport of Phosphorus in the Midwestern United States. Journal of Environmental Quality, 44, 495–502. Smith, D., & Livingston, S. J. (2013). Managing farmed closed depressional areas using blind inlets to minimize phosphorus and nitrogen losses. Soil Use and Management, 29, 94–102. Snyder, L. (2018). Overview: Senate Bill 150, Senate Bill 1. Together with Farmers. Steffen, Davis, T., & Bullerjahn, G. (2017). Ecophysiological


Examination of the Lake Erie Microcystis Bloom in 2014: Linkages between Biology and the Water Supply Shutdown of Toledo, OH. Environmental Science and Technology, 51, 6745–6755. Steinmetz, M. (2015). Dreissenid Mussels Impact on Phosphorus Levels in the Laurentian Great Lakes. The Duluth Journal of Undergraduate Biology. Stewart, I., Seawright, A., & Shaw, G. (2008). Cyanobacterial poisoning in livestock, wild mammals and birds – an overview. Ulén, B., Aronsson, H., & Bechmann, M. (2010). Soil tillage methods to control phosphorus loss and potential side-effects: a Scandinavian review. Soil Use and Management, 26, 94–107. United States Environmental Protection Agency. (2020a). Human Health Effects Caused by the Most Common Toxin-producing Cyanobacteria. Health Effects from Cyanotoxins. United States Environmental Protection Agency. (2020b). Lake Erie [Government]. The Great Lakes. https://www.epa.gov/greatlakes/lake-erie United States Government, & Canadian Government. (1972). The Great Lakes Water Quality Agreement. Wood, S., Borges, H., & Puddick, J. (2017). Contrasting cyanobacterial communities and microcystin concentrations in summers with extreme weather events: insights into potential effects of climate change. Hydrobiologia, 785, 71–89. WTOL Staff. (2019). 5 years since the Toledo water crisis: A timeline of what happened. WTOL 11. https://www.wtol.com/article/news/local/protecting-our-water/5-years-since-the-toledo-water-crisis-a-timeline-of-what-happened/51271a2414b-a34d-4b4a-9632-58e1c212d098 Zhang, H., Culver, D., & Boegman. (2011). Dreissenids in Lake Erie: an algal filter or a fertilizer? Aquatic Invasions, 6(2), 175–194. Zulauf, C., & Brown, B. (2019). Use of Tile, 2017 US Census of Agriculture. Farm Doc Daily.



Differences in microbial flora found on male and female Clusia sp. flowers BY BETHANY CLARKSON, UNIVERSITY OF LINCOLN (UK) GRADUATE Cover Image: A snapshot of the Santa Lucia Cloud Forest Reserve. This image was captured at the Santa Lucia Ecolodge, GPS coordinates N00°8.193 W078°36.458. Photograph captured by author.


Abstract Specialized rewards with known antimicrobials such as resin are being increasingly used to develop understanding of plant-pollinator relationships, though knowledge of how this varies between male and female dioecious plants, such as Clusia, is still somewhat limited. This research, carried out at the Santa Lucia cloud forest reserve in Ecuador on a Clusia sp., aimed to explore differences between microbial growth on male and female Clusia sp. flowers. In total, 16 flowers were collected and stored in 100ml of sterile water for 60 minutes so that any antimicrobial compounds would infuse the water. The infusion was then used for further testing. Spread plating, Petrifilm culturing, and biochemical testing were performed on sixteen samples from four different Clusia plants (two male and two female) using a mixture of open flowers and young buds. Microbial investigation revealed that female flowers yielded a lower density and diversity of

bacterial and fungal colonies present on Petri films than male flowers. Noticeably, there was a significant decrease in the number of fungal colonies present on female flowers. A possible reason is that female flowers have a shorter lifespan, and therefore support increased antimicrobial activity to make themselves more attractive to possible pollinators.

Keywords Pollinator reward systems, Clusia, Ecuador, Antimicrobial, Resin, Bees.

Introduction Antibiotic resistance is not only on the rise, but has been named “the greatest risk to public health” (Kumarasamy et al., 2010); antibiotic-resistant infections are projected to be responsible for an estimated 10 million deaths by 2050 (Davies et al., 2013). With increasing


Figure 1: A makeshift laboratory within the Ecolodge. This table was where most of the work was carried out. Equipment includes disposable 5ml pipettes, sterile water, ethanol, pH testing strips and samples of Clusia sp. flowers suspended in sterile water. Photograph captured by author.

resistance comes a rising urgency to not only find new sources of antibiotics but also to make antibiotic treatment as efficacious as possible. Current research explores many potential sources of antimicrobials, from cockroaches to deep-sea fungi (Zhang et al., 2013; Ali et al., 2018). Plants are a common source of antimicrobial compounds; ongoing research shows plant species including Calamus aromaticus, Croton lechleri, and many others possess significant antimicrobial potential. Therefore, research into understanding these specimens is of the utmost importance (Salam and Quave, 2018; Khan et al., 2018). Indeed, a study by Kumar et al. (2006) on over sixty species of plants used in traditional Indian medicine showed that substantial sources of antimicrobials are found in flowers, leaves, and resin. This phenomenon is the basis of natural relationships like pollinator reward systems, which involve mutually beneficial interactions in which pollinators, such as bees, receive ‘rewards’ from plants (primarily in the form of pollen or nectar). In return, the plants are pollinated by the bees (Simpson and Neff, 1981). In rarer cases, this pollinator reward system has been observed in nature using alternative rewards such as resins, particularly those with antimicrobial properties. Though pollinator reward systems using resin are fairly rare, the genus Clusia, from the family Clusiaceae, has several species that have these relationships with male euglossine bees (de Faria et al., 2006). The genus Clusia contains around 250 species native to tropical America, many of which produce a viscous resin containing high


amounts of polyprenylated benzophenones named guttiferones. Guttiferones have shown significant antifungal, antimicrobial, and anti-human immunodeficiency virus (HIV) effects (Gustafson et al., 1992; Cruz and Teixeira, 2004). When used in pollinator reward systems, the resin has many different uses depending on the species of the bee collecting the resin and the current needs of the hive. For example, Apis bees use collected resin to seal their hive, whereas Eulaema and Euglossa bees use it in the construction of their nests. Antimicrobial resin, such as the resin produced by Clusia, can be used to protect food and larvae from bacteria and fungi (Patiny, 2012).

“The genus Clusia contains around 250 species native to tropical America, many of which produce a viscous resin containing high amounts of polyprenylated benzophenones named guttiferones ”

Interestingly, almost all known Clusia are dioecious, meaning individual plants are either male or female and can differ in secondary sex characteristics, which could include the properties of these resins (Luján, 2019). Abe (2001) showed that in another dioecious species, Aucuba japonica, there are significant differences between male and female inflorescences in the proportion of open flowers and the period of time for which flowers opened. Not only did male inflorescences generally flower for longer, but they also had a higher proportion of flowers open at the peak of the flowering period. Therefore, it is possible that female flowers may show higher antimicrobial activity to compensate for the shorter time frame in which they are able to become pollinated, thus making them more desirable to pollinators. Using an array of microbial culturing, including spread plating, culturing of Petrifilms, and biochemical testing, this project aims to

Figure 2: A schematic representation of the Petrifilm grid. The areas highlighted are an example of four randomly selected squares, which would then be counted and multiplied up to get the total number of colonies on the Petrifilm.

“A rudimentary laboratory was set up with all the microbial culturing carried out close to a Bunsen burner to maintain as close to an aseptic environment as possible...”

Figure 3: Female sample number two. This image was captured as part of the cataloguing of the samples. This particular female flower measured 2cm in diameter and weighed 6.9g. Photograph captured by Samuel Shaw and used with permission.

develop a further understanding of the differences in microbial flora between male and female flowers of Clusia sp., as a predictor of possible differences in antimicrobial efficacy.

Materials and Methods This research was carried out in July 2018 in the Santa Lucia cloud forest reserve, located at the far south of the Choco-Andean Rainforest Corridor in the Pichincha province of Ecuador (Figure 1). Sixteen flowers were collected from four trees, two male and two female, of an unidentified species of the genus Clusia, family Clusiaceae. Female sample 1 was collected at N00°07.084 W078°36.706, female sample 2 at N00°06.944 W078°36.114, and male samples 1 and 2 at N00°07.110 W078°36.507 and N00°06.964 W078°36.155, respectively. A rudimentary laboratory was set up, with all microbial culturing carried out close to a Bunsen burner to maintain as aseptic an environment as possible and so avoid contamination (Fig. 2). All equipment was cleaned with ethanol and sterile water between uses, and a negative control of sterile water was used to ensure that the cultured colonies were representative of the microbial growth present on the flowers and not a result of contamination. Before testing, each flower was photographed, catalogued, stored in a plastic collection pot with 100 ml of sterile water, and left to infuse for sixty minutes. After infusion, the flowers were weighed using a 10 g spring balance. Next, the water pH was measured (with pH strips), as was

the sugar concentration (with a PAL-1 Atago digital refractometer, USA). The infused water was then set aside for microbial culturing. The microbial load was measured using a combination of spread plating and 3M Petrifilm Rapid Aerobic Count, Rapid Yeast and Mould, and Rapid Coliform/E. coli plates (3M United Kingdom PLC, Bracknell, England). Spread plating was carried out under aseptic technique using 1 ml of infused water; 1 ml of sterile water was also cultured as a negative control. Petrifilms for samples 1-4 of both sexes were prepared using 1 ml of the infused water pipetted directly onto the films. Upon initial examination, many of the male films had too many colonies to count. To account for this, samples 5-8 were prepared with 0.5 ml of sample water diluted with 0.5 ml of sterile water. For each set of Petrifilms cultured, a control plate of 1 ml of sterile water was also prepared. To estimate the total colony count per plate, four equal sections of the Petrifilm grid were chosen at random; the colonies in those sections were counted, averaged, and scaled up to the whole plate (Fig. 2). All cultures were incubated for 24 hours in a field incubator made from a tin foil-lined box containing bubble wrap and temperature-controlled hot water bottles, which maintained a temperature of 18-40°C, suitable for encouraging microbial growth. Some Petrifilms initially had too many colonies to count, and dilution as described above did not resolve this. It was therefore decided that quantifiable plates would be those with colony counts of 0-999; any higher count was recorded as 1000 colonies



Table 1: Key findings from biochemical testing. Means and standard deviations of key findings, including mass, height, pH, and sugar concentration, for samples taken from male and female Clusia sp. flowers.

for statistical purposes. Differences between colony counts of samples taken from male and female flowers were first tested for normality using the Anderson-Darling test. When the data were normally distributed, a two-sample t-test was used to test for significance between the number of female and male colonies formed on the Petrifilms; when they were not, a Mann-Whitney test was used instead.
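The counting and testing procedure above can be sketched in code. The snippet below is an illustrative reconstruction, not the authors' analysis: `total_squares` and the example counts are hypothetical, the Anderson-Darling normality check is omitted (in practice a statistics library would supply it), and the Mann-Whitney p-value is computed exactly by enumeration, which is feasible only for small sample sizes like those used here.

```python
from itertools import combinations

def estimate_plate_count(square_counts, total_squares):
    """Estimate the total colonies on a Petrifilm from a few randomly
    chosen grid squares: average them, then scale up to the whole grid.
    total_squares is grid-specific and treated here as a parameter."""
    return sum(square_counts) / len(square_counts) * total_squares

def cap_counts(counts, ceiling=1000):
    """Plates too crowded to count are recorded as the ceiling value."""
    return [min(c, ceiling) for c in counts]

def _ranks(values):
    """Ranks of the pooled data, averaging ranks over tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank for the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_exact(a, b):
    """Two-sided exact Mann-Whitney rank-sum test: enumerate every way
    the pooled ranks could be split between the two groups and count
    splits at least as extreme as the observed rank sum of group a."""
    values = list(a) + list(b)
    r = _ranks(values)
    n, na = len(values), len(a)
    observed = sum(r[:na])
    mean_w = na * (n + 1) / 2
    extreme = total = 0
    for idx in combinations(range(n), na):
        w = sum(r[i] for i in idx)
        total += 1
        if abs(w - mean_w) >= abs(observed - mean_w) - 1e-9:
            extreme += 1
    return observed, extreme / total
```

For the fungal counts, for example, the capped female and male plate totals would be passed straight to `mann_whitney_exact`; the W reported is the rank sum of the first group.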

Results The Clusia flowers used in this experiment had red and black petals, which were, on average, 2 cm in diameter. Their weights differed slightly between the sexes, with male inflorescences weighing slightly more (8.75±1.94g) than those of females (7.01±1.18g). Male trees also grew on average 7.5m taller than female trees (22.5±3.54m vs 15±4.24m). Measurements of the sugar levels present revealed trace amounts in the sample, with concentrations low in both males (0.05±0.15) and females (0.034±0.05). Female samples

showed a minimally lower pH than the males (6.06±0.90 vs 6.13±0.35), but this difference is negligible and not statistically significant (Table 1). Spread plating produced a wide variety of results, as seen in Figure 4A. The presence of aerobic bacteria on the Petrifilms is shown by a blue-green stain, which spread much more extensively on the male samples. All of the plates had fewer fungi than bacteria, and all fungal growths were seen exclusively on male samples. This correlates well with the Petrifilm data recorded; the negative controls showed no clear microbial growth, which suggests contamination did not play a significant role in these results. There was a large difference between the number of yeast and mould colonies which grew on samples taken from female flowers (13±16.52) compared to those taken from male flowers (815±342.10) (Fig. 5). Most male samples had too many colonies to count (>1000 colonies) and, after capping these counts at 1000 colonies per plate, males had significantly more colonies, around 802 more on average, than female

“The presence of aerobic bacteria on the Petrifilms is shown with a blue-green stain, which spread much more extensively on the male samples.”

Figure 4: (A) A representative sample of the spread plates produced. A wide selection of colony morphologies is seen across the different samples; one fungal growth can be seen in the bottom right-hand plate. (B) All the cultured Petrifilms for aerobic bacteria, with both female and male samples included. Above, the negative controls for all three Petrifilm types are also visible. Female samples are labelled F1-8 and males M1-9. Photographs captured by the author.



Figure 5: Mean and standard deviation of the number of yeast and mold colonies grown on Petrifilm samples taken from male and female Clusia sp. flowers.

“Female flowers had lower colony counts across all findings, with a 60% decrease in the mean number of fungal colonies per female sample compared to male samples.”

samples (Fig. 5; P<0.01, W=93.0, n=4). Aerobic bacteria appeared to grow well on all samples (Fig. 6), though male samples did show a higher number of colonies than female samples (508±422.76 in males as opposed to 346±436.92 in females). The spread plated samples yielded similar results (Fig. 4A). This equates to an average of 72 more colonies per Petrifilm on male samples, though the difference was not statistically significant (T=0.75, P=0.464, DF=14). The coliform findings were consistent with the other results: colonies were found on all the samples cultured, though there was no statistically significant difference between the number of colonies growing on male (143±142.88) and female (56±112.45) Petrifilm samples (Fig. 7; P=0.712, W=72.0, n=4). There was relatively little growth on the Escherichia coli Petrifilms, though samples taken from male flowers showed some growth, with a mean of 2±3.94 colonies per plate. No colonies were observed on the female samples (Fig. 8); these data were therefore not sufficient for statistical analysis.


Figure 6: Mean and standard deviation of the number of colonies of aerobic bacteria grown on Petrifilm samples taken from male and female Clusia sp. flowers.


Discussion Female flowers had lower colony counts across all findings, with a 60% decrease in the mean number of fungal colonies per female sample compared to male samples. This could be due to intrinsic antifungal properties in the female flowers. A study by Fernandes et al. (2016) on Clusia hilariana showed that female plants not only have higher levels of specific essential oils than male plants, but also possess some oils that male plants lack entirely. One of these components is alpha-cadinol, a powerful

fungicide, which is around four times more prevalent in females (2.2%) than in males (0.5%) (Ho et al., 2012). Furthermore, Cruz and Teixeira (2004) showed that Clusia tends to produce large quantities of polyprenylated benzophenones, which have been shown to exhibit a wide variety of pharmacological effects, including anti-inflammatory, anti-HIV, fungicidal, and antimicrobial activity. Alternatively, the difference in fungicidal action may be due not to discrepancies in the components of the resin itself, but to a difference in the amount of resin present in males and females. This is supported by the work of Porto et al. (2000), which found no divergence between the major chemical structures of male and female resin but did find that the concentration of resin varied between the sexes. Though these findings may offer an explanation as to how these specimens produce an antimicrobial effect, it is also important to understand why this occurs; one popular theory involves pollinator reward systems. Pollinator reward systems are symbiotic relationships between plants and pollinators such as birds and insects. Usually, these systems offer pollinators nectar in exchange for pollinating the plants; however, some use non-sugar rewards, such as resin. This has been observed previously between several species of Clusia and several species of bees; the bees even select specific plants depending on the purpose for which the resin is required. Resin which hardens quickly can be used by Apis bees to cement together structures for the hive or to seal the outside, whereas a softer, more malleable resin can be used for structuring cells within the hive, which may have to be remolded (Patiny, 2012). Antimicrobial resins may be used to defend structures from pathogens, therefore protecting food stores,

larvae, and the colony itself from attack (Farias et al., 2011; Ribeiro et al., 2011). In addition, Lloyd and Webb (1977) showed that, in a wide range of species, male inflorescences not only tend to appear earlier in life and more frequently than female ones, but male plants also tend to produce a higher number of inflorescences. If the plants are using the resin produced by the inflorescences to attract bees and allow for pollination, it could be that the females had a lower bacterial or fungal load due to an intrinsically higher antibacterial activity to compensate for the lower number of inflorescences (Evers et al., 2018). This outside pressure of selective pollinators choosing plants which offer specific rewards, such as antimicrobial resins, may explain why only trace levels of sugar were found; this particular species of Clusia may produce large amounts of resin rather than an abundance of nectar. On average, female samples also had a 32% lower sugar concentration than male samples, which may likewise be due to higher amounts of resin produced as part of this compensatory mechanism. Another explanation for the differences between male and female inflorescences could be the antifungal effect of the female Clusia itself. More than 90% of terrestrial plants form mycorrhizal associations, in which the roots of the plant form a symbiotic relationship with a species of fungus. This relationship increases the phosphorus available to the plant, largely through the increased surface area of the arbuscular mycorrhizal fungal hyphal network (Bolduc, 2011). Though literature on Clusia-mycorrhiza associations is fairly limited, it has been documented that Clusia develops much better when mycorrhized. This has also been shown to be the

case in a range of other species, leading to larger plants with an increase in the number of inflorescences, so it is possible this could also apply to Clusia (Lüttge, 2007; Scagel, 2003). The antifungal activity of a female may disrupt this relationship, leading to an average decrease in height of 7.5m compared to the male trees and fewer inflorescences. A similar effect has been seen in species growing close to Alliaria petiolata, which also shows antifungal activity (Stinson et al., 2006). This relationship could be investigated in future research by conducting staining on sections of male and female root systems, to search for the presence of fungi. Plant tissue analysis could also be used to look for a difference in phosphorus levels between male and female leaves, which would suggest a mycorrhizal association.

Figure 7: Mean and standard deviation of the number of colonies of coliforms grown on Petrifilm samples taken from male and female Clusia sp. flowers.

Several outliers were observed in this study, with certain flowers showing an abnormal bacterial count. One possible cause of these outliers is age. For example, male sample 2—a closed bud—showed no fungal growths. It is possible that because the bud was sealed shut, the fungi were unable to enter, hence none were present. Another outlier is female sample 4, which—despite the low microbial load of most female flowers—showed an extremely high bacterial count. Research by Tawaha and Hudaib (2012) on Thymus capitatus examined essential oils, which can show antibacterial effects, from buds and flowers and found that mature flowers yield 0.93% more oil. Though this research was not carried out on Clusia, it was carried out on members of the Lamiaceae family, which are also eudicots. This suggests the plant may produce more oil as it gets older; if the resin uses the same mechanism, that would suggest the buds in female sample

“If the plants are using the resin produced by the inflorescences to attract bees and allow for pollination, it could be that the females had a lower bacterial or fungal load due to an intrinsically higher antibacterial activity to compensate for the lower number of inflorescences.”

Figure 8: Mean and standard deviation of the number of colonies of E. coli grown on Petrifilm samples taken from male and female Clusia sp. flowers.


“The resin produced by the female samples of this species of Clusia may also be able to undergo toxicity testing, such as on cell lines and LD50 testing, as it is a potent antifungal and may well be able to be used in human or veterinary medicine in the future.”

4 were not old enough to produce as much antimicrobial resin as the mature flowers, resulting in higher levels of bacteria. Therefore, future studies should analyze flowers from a range of ages to test for differences in the level of antifungal activity.

Conclusion Female Clusia sp. flowers were found to have a statistically significantly lower number of fungal colonies than males (p<0.009); female samples also had fewer bacterial colonies than males, though this difference was not statistically significant. This may be due to a symbiotic relationship developed with bees as a pollinator reward system using antimicrobial resin. It is also possible that this inherent antimicrobial activity is due to the presence of compounds such as polyprenylated benzophenones or alpha-cadinol.

This research is an exciting start to what is a relatively understudied area of science. Future work could focus on obtaining a larger amount of data, with an increased number of repeats, so as to increase the validity of any findings. This could also be achieved by using more precise equipment, such as laboratory-grade incubators and automated colony counters, to obtain more accurate results and a higher throughput of data. The next step would be to analyze the resin produced by both female and male samples of this species, looking for differences between the sexes in the compounds produced (such as alpha-cadinol) and in the concentrations of resin. This could help establish definitively why females show a stronger antimicrobial effect than males. Treatment of the resin with diazomethane allows methylation, which would prevent the resin from decomposing and allow it to later be purified using silica column chromatography; this technique has been used successfully to analyze resin produced by Clusia flowers in the past (Porto et al., 2000).

The resin produced by the female samples of this species of Clusia could also undergo toxicity testing, such as cell-line and LD50 testing, as it is a potent antifungal and may well find use in human or veterinary medicine in the future. It is important to consider all modes of toxicity in such testing, as another species of Clusia, Clusia alata, has previously been shown to have mutagenic properties at very high doses (2000 mg/kg) (Moura et al., 2008). Currently, there appear to be no treatments available using Clusia extract, though previous testing for a potential antibiotic has been carried out using the stem, with poor results (Suffredini et al., 2006). It is possible that extracts using resin would be much more potent.

Bolduc, A. (2011). The Use of Mycorrhizae to Enhance Phosphorus Uptake: A Way Out the Phosphorus Crisis. Journal of Biofertilizers & Biopesticides, 02(01).

Many thanks to my fantastic project leaders Adrian Goodman and Catrin Gunther for all their support, advice, and patience on this project, both in the field and afterwards. Thanks also to my fellow academic, Sophie Ellis, for her continued advice and help throughout this research and its editing, and to Clare Miller for her support and guidance. Finally, thanks to the University of Lincoln funding department for funding received towards travel so that this research could be undertaken. References Abe, T. (2001). Flowering phenology, display size, and fruit set in an understory dioecious shrub, Aucuba japonica (Cornaceae). American Journal of Botany, 88(3), 455-461. Ali, S., Siddiqui, R. and Khan, N. (2018). Antimicrobial discovery from natural and unusual sources. Journal of Pharmacy and Pharmacology, 70(10), 1287-1300. Armbruster, W. (1997). Exaptations link evolution of plant-herbivore and plant-pollinator interactions: A phylogenetic inquiry. Ecology, 78(6), 1661-1672.

Brundrett, M. (2002). Coevolution of roots and mycorrhizas of land plants. New Phytologist, 154(2), 275-304. Davies, S., Fowler, T., Watson, J., Livermore, D. and Walker, D. (2013). Annual Report of the Chief Medical Officer: infection and the rise of antimicrobial resistance. The Lancet, 381(9878), 1606-1609. de Faria, A., Matallana, G., Wendt, T. and Scarano, F. (2006). Low fruit set in the abundant dioecious tree Clusia hilariana (Clusiaceae) in a Brazilian restinga. Flora - Morphology, Distribution, Functional Ecology of Plants, 201(8), 606-611. Evers, H., Jackson, B. and Shaw, S. (2018). Overseas Field Course Report. University of Lincoln. Farias, J., Ferro, J., Silva, J., Agra, I., Oliveira, F., Candea, A., Conte, F., Ferraris, F., Henriques, M., Conserva, L. and Barreto, E. (2011). Modulation of Inflammatory Processes by Leaves Extract from Clusia nemorosa Both In Vitro and In Vivo Animal Models. Inflammation, 35(2), 764-771. Fernandes, C., Cruz, R., Amaral, R., Carvalho, J., Santos, M., Tietbohl, L. and Rocha, L. (2016). Essential Oils from Male and Female Flowers of Clusia hilariana. Chemistry of Natural Compounds, 52(6), 1110-1112. Gustafson, K., Blunt, J., Munro, M., Fuller, R., McKee, T., Cardellina, J., McMahon, J., Cragg, G. and Boyd, M. (1992). The

guttiferones, HIV-inhibitory benzophenones from Symphonia globulifera, Garcinia livingstonei, Garcinia ovalifolia and Clusia rosea. Tetrahedron, 48(46), 10093-10102.

Stinson, K., Campbell, S., Powell, J., Wolfe, B., Callaway, R., Thelen, G., Hallett, S., Prati, D. and Klironomos, J. (2006). Invasive Plant Suppresses the Growth of Native Tree Seedlings by Disrupting Belowground Mutualisms. PLoS Biology, 4(5), 140.

Ho, C., Hua, K., Hsu, K., Wang, E. and Su, Y. (2012). Composition and antipathogenic activities of the twig essential oil of Chamaecyparis formosensis from Taiwan. Natural Product Communications, 7(7), 933-936.

Suffredini, I., Paciencia, M., Nepomuceno, D., Younes, R. and Varella, A. (2006). Antibacterial and cytotoxic activity of Brazilian plant extracts - Clusiaceae. Memórias do Instituto Oswaldo Cruz, 101(3).

Khan, B., Bakht, J. and Khan, W. (2018). Antibacterial potential of a medically important plant Calamus aromaticus. Pakistan Journal of Botany, 50(6), 2355-2362.

Tawaha, K. and Hudaib, M. (2012). Chemical composition of the essential oil from flowers, flower buds and leaves of Thymus capitatus hoffmanns. Journal of Essential Oil Bearing Plants, 15(6), 988-996.

Kumar, V., Chauhan, N., Padh, H. and Rajani, M. (2006). Search for antibacterial and antifungal agents from selected Indian medicinal plants. Journal of Ethnopharmacology, 107(2), 182-188.

Zhang, X., Zhang, Y., Xu, X. and Qi, S. (2013). Diverse Deep-Sea Fungi from the South China Sea and Their Antimicrobial Activity. Current Microbiology, 67(5), 525-530.

Kumarasamy, K., Toleman, M., Walsh, T., Bagaria, J., Butt, F., Balakrishnan, R., Chaudhary, U., Doumith, M., Giske, C., Irfan, S., Krishnan, P., Kumar, A., Maharjan, S., Mushtaq, S., Noorie, T., Paterson, D., Pearson, A., Perry, C., Pike, R., Rao, B., Ray, U., Sarma, J., Sharma, M., Sheridan, E., Thirunarayan, M., Turton, J., Upadhyay, S., Warner, M., Welfare, W., Livermore, D. and Woodford, N. (2010). Emergence of a new antibiotic resistance mechanism in India, Pakistan, and the UK: a molecular, biological, and epidemiological study. The Lancet Infectious Diseases, 10(9), 597-602. Lloyd, D. and Webb, C. (1977). Secondary sex characters in plants. The Botanical Review, 43(2), 177-216. Luján, M. (2019). Playing the Taxonomic Cupid: Matching Pistillate and Staminate Conspecifics in Dioecious Clusia (Clusiaceae). Systematic Botany, 44(3), 548-559. Lüttge, U. (2007). Clusia: A Woody Neotropical Genus Of Remarkable Plasticity And Diversity. Berlin: Springer, 235-239. Moura, A., Perazzo, F., Maistro, E. (2008). The mutagenic potential of Clusia alata (Clusiaceae) extract based on two short-term in vivo assays. Genetics and Molecular Research, 7(4), 1360-1368. Patiny, S. (2012). Evolution of plant-pollinator relationships. Cambridge: Cambridge University Press, 51-54 Peres, M., Monache, F., Cruz, A., Pizzolatti, M. and Yunes, R. (1997). Chemical composition and antimicrobial activity of Croton urucurana Baillon (Euphorbiaceae). Journal of Ethnopharmacology, 56(3), 223-226. Ribeiro, P., Ferraz, C., Guedes, M., Martins, D. and Cruz, F. (2011). A new biphenyl and antimicrobial activity of extracts and compounds from Clusia burlemarxii. Fitoterapia, 82(8), 12371240. Salam, A. and Quave, C. (2018). Opportunities for plant natural products in infection control. Current Opinion in Microbiology, 45, 189-194. Scagel, C. (2003). Inoculation with arbuscular mycorrhizal fungi alters nutrient allocation and flowering of Freesia x hybrida. 
Journal of Environmental Horticulture, 21(4), 196. Simone-Finstrom, M., Gardner, J. and Spivak, M. (2010). Tactile learning in resin foraging honeybees. Behavioral Ecology and Sociobiology, 64(10), 1609-1617. Simpson, B. and Neff, J. (1981). Floral Rewards: Alternatives to Pollen and Nectar. Annals of the Missouri Botanical Garden, 68(2), 301.



The Facial Expressions of Mice Can Teach Us About Mental Illness BY BRYN WILLIAMS '23 Cover Image: Highlighted neural connections and pathways in the brain for various brain regions. These connections can be related to various emotions in humans and other animals. Source: Wikimedia Commons


Introduction Emotions play a large role in our lives, yet scientists know very little about their neurobiological roots. If researchers can identify regions of the brain that correlate with certain emotions, the medical profession would be one step closer to providing better treatments for mental illness, which affects one in five Americans over the course of their lives (NIH, 2017). To establish this connection, emotional expressions need to be analyzed objectively, but this can be especially difficult in animals given how much their expressions differ from those of humans (Dolensek et al., 2020). Almost 150 years ago, Darwin posited that animals, like humans, have facial expressions that can provide insight into their emotions; however, researchers did not have the proper tools to analyze these minute facial changes until recently. Using optogenetics and two-photon imaging, scientists deciphered the complex facial expressions of mice, taking the

first step in an attempt to explain emotions in neurobiological terms (Abbott, 2020).

The Importance of Emotions in Both Humans and Animals The definition of the term “emotion” is widely disputed and differs between fields, but most researchers would agree that the term includes, but is not limited to, expressive behaviors connected with internal brain states that are classified as “feelings.” In humans, these expressive behaviors include actions like smiling or frowning, as well as vocal expressions like laughing or crying. These expressions are based on an internal central (nervous system) state which is activated by certain stimuli, and this internal central state consists of neural pathways that produce a certain result when triggered (Anderson et al., 2014). Researchers currently know very little about these neural pathways, but analyzing emotions across phylogeny, or evolutionarily distinct species, may provide answers (Encyclopedia Britannica, 2020). Animals and humans may share the same “primitive emotions,” which are considered evolutionary building blocks of emotion (Anderson et al., 2014). Even single-celled organisms have ways to detect and respond to certain events. Darwin asserted that emotions evolve just as other biological features do. Humans across the globe use similar expressions of emotion, and animals do as well (LeDoux, 2012). It is plausible that “primitive emotions” are conserved across different types of animals given their proposed evolutionary advantage. The events that elicit emotional responses have occurred many times over evolutionary history, so responses to these events can be selected for evolutionarily. Emotions help animals, including humans, process external stimuli by sorting out various brain states or “feelings” (Adolphs, 2017). Though different animals may have similar emotional building blocks, that does not necessarily mean their emotional states are homologous to human ones; this is where many studies have fallen short in the past (Anderson et al., 2014). Controversy still surrounds the extent to which emotional states are conserved across phylogeny. To determine this, scientists sought to analyze the facial expressions of mice in response to certain stimuli. If an observable external facial change occurs in response to a stimulus, researchers can attempt to link the external changes to internal neural circuits (Dolensek et al., 2020).

Figure 1: Plate 1 from Charles Darwin’s The Expression of the Emotions in Man and Animals. Source: Wikimedia Commons

Varying Facial Expressions in Response to Stimuli


A study conducted by Nejc Dolensek et al. at the Max Planck Institute of Neurobiology sought to explore the question of conserved emotional states in animals, using mice to map a connection between emotional expressions and neural circuits. The researchers exposed mice to wide-ranging stimuli assumed to produce an emotional response. The stimuli included tail shocks for pain, sucrose for sweetness, quinine for bitterness, and lithium chloride for malaise, as well as stimuli evoking escape or freezing behaviors. The mice were head-fixed and video-monitored as they responded to the stimuli. The resulting facial expressions were clearly noticeable, even to naïve, untrained observers, yet the type of underlying emotion was not immediately recognizable. To obtain an objective reading of the mice’s facial expressions, the researchers used the “histogram of oriented gradients” (HOG) feature from machine vision. HOG descriptors measure and track facial movements objectively by reducing each video frame to a numerical vector. The facial expressions were recorded for each stimulus and corresponding emotion. Each stimulus exposure was repeated three times, and HOGs from before and after exposure were compared. Two main clusters of facial expressions emerged: an expression from before the exposure and an expression from during or immediately after it. The group then analyzed the post-exposure data to determine whether distinct facial expressions resulted from particular stimuli, and they observed a distinct facial expression in response to each event type. These unique expressions suggest that there were underlying emotions associated with the facial expressions.
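As an illustration of what a HOG descriptor computes, the sketch below implements a minimal version in pure Python; real studies would use an optimized library implementation, and details such as cell size, bin count, and normalization are simplified assumptions here. Each frame becomes a fixed-length vector of per-cell gradient-orientation histograms, which can then be compared across time points.

```python
import math

def hog_descriptor(image, cell=4, bins=8):
    """Minimal histogram-of-oriented-gradients descriptor.
    image: 2-D list of grayscale values. Returns one orientation
    histogram per cell x cell block, concatenated into a single vector."""
    h, w = len(image), len(image[0])
    descriptor = []
    for cy in range(0, h - cell + 1, cell):
        for cx in range(0, w - cell + 1, cell):
            hist = [0.0] * bins
            for y in range(cy, cy + cell):
                for x in range(cx, cx + cell):
                    # central differences, clamped at the image borders
                    gx = image[y][min(x + 1, w - 1)] - image[y][max(x - 1, 0)]
                    gy = image[min(y + 1, h - 1)][x] - image[max(y - 1, 0)][x]
                    mag = math.hypot(gx, gy)
                    ang = math.atan2(gy, gx) % math.pi  # unsigned orientation
                    hist[min(int(ang / math.pi * bins), bins - 1)] += mag
            total = sum(hist) or 1.0
            descriptor.extend(v / total for v in hist)  # per-cell normalization
    return descriptor
```

A frame containing only a horizontal brightness ramp, for instance, yields a descriptor whose weight sits entirely in the first (horizontal-gradient) bin of each cell.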

“Animals and humans may share the same 'primitive emotions', which are considered evolutionary building blocks of emotion.”

To test this theory, the researchers trained a “random forest classifier,” a machine-learning model serving as an unbiased third party, to predict the event or stimulus behind an observed facial expression. This classifier could predict the underlying emotion with 90% accuracy. Given that the stimulus could be predicted from the expression, the team explored similarities between the emotional categories of mice and humans. To test this, Dolensek et al. constructed the typical HOGs for each stimulus and then correlated a stimulus to

Figure 2: A patient undergoing transcranial magnetic stimulation, a type of brain stimulation that can be used to treat mental illnesses. Source: Wikimedia Commons

an emotion. Quinine would correspond to disgust, sucrose to pleasure, tail shock to pain, lithium chloride to malaise, escape to active fear, and freezing to passive fear. They found that each facial expression (or its HOG numerical equivalent) was unique to an emotional state.
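The prototype-matching step can be sketched as follows. This is an illustrative reconstruction, not the study's code: it builds a “typical” HOG per emotion by averaging, then labels a new frame with the emotion whose prototype correlates best. The study's random forest classifier is replaced here by this simpler nearest-prototype rule, and the tiny example vectors are hypothetical.

```python
import math

def mean_vector(vectors):
    """Element-wise mean of equal-length vectors (the 'typical' HOG)."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def pearson(u, v):
    """Pearson correlation between two equal-length vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = math.sqrt(sum((a - mu) ** 2 for a in u) *
                    sum((b - mv) ** 2 for b in v))
    return num / den if den else 0.0

def build_prototypes(labelled_hogs):
    """labelled_hogs: {emotion: [hog_vector, ...]} -> {emotion: prototype}."""
    return {label: mean_vector(vs) for label, vs in labelled_hogs.items()}

def classify(hog, prototypes):
    """Assign the emotion whose prototype HOG correlates most strongly."""
    return max(prototypes, key=lambda label: pearson(hog, prototypes[label]))
```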

“A new technology called optogenetics allows scientists to control events within specific cells of the brain.”

These emotional states may correspond to a central internal state, or they may simply be reflex-like reactions. To distinguish the two, the research team scaled the intensity of the stimuli to see whether the expressions scaled as well. They discovered that facial expressions during higher-intensity events were closer to the prototypical expression for that emotion, meaning the expressions scaled with stimulus intensity. Additionally, the team tested the underlying emotions behind the expressions by providing alternative stimuli that should elicit similar emotions, yielding evidence that mice, like humans, have facial expressions tied to different categories of emotion (Dolensek et al., 2020).

Connection of External Expressions to Internal Neural Connections Even though researchers can test facial expressions in response to stimuli, the connection between emotions and the cells inside the brain remains largely unknown. As Nobel laureate Francis Crick observed, a major challenge facing neuroscience is the need to observe and control individual cells while leaving others unaffected. A new technology called optogenetics allows scientists to control events within specific cells in


the brain. Optogenetics works by transfecting certain cells with genes that confer light responsiveness and then using technologies that allow light to reach those specific cells in vivo (Deisseroth, 2010). The study conducted by Dolensek et al. at the Max Planck Institute of Neurobiology used optogenetics in the brains of mice to determine whether facial expressions would result from the activation of certain brain regions. Regions known to be associated with emotions were activated and the resulting facial expressions were observed. These regions were located in the insular cortex (IC), a portion of the brain that has been shown to evoke emotional sensations and their related behaviors in both animals and humans. Additionally, the researchers activated neurons in the ventral pallidum (VP), which processes the rewarding properties of pleasurable stimuli. Each of these activations resulted in strong facial expressions. Again, an unbiased third-party classifier analyzed the facial expressions and was able to match them with the previously analyzed HOGs corresponding to certain emotions. When the anterior IC and VP were activated using optogenetics, the mice produced the facial expression associated with pleasure. For every brain region tested, the mice responded with an expression similar to the prototypes from the earlier stimuli. Projections from the IC to the amygdala were found to influence emotional reactions to the stimuli. For example, the anterior IC to basolateral amygdala pathway is associated with pleasure, and when the


pathway was activated internally, the reaction to the bitter quinine was attenuated. The data resulting from these optogenetic studies suggest that facial expressions are sensitive reflections of internal emotional states, and that these emotional states correspond to brain states (Dolensek et al., 2020). The connection between facial expressions and emotional states suggests that facial expressions have neuronal connections in brain regions associated with emotion, like the IC and VP. To further study this concept, facial videography was combined with two-photon imaging in the posterior IC (Dolensek et al., 2020). Two-photon imaging is used to study the activity of specific neurons in vivo: calcium-sensitive fluorescent dye is loaded into neurons, so that when a neuron is active it illuminates and can be specifically identified through its fluorescence (Mitani et al., 2018). Two types of neurons were identified in the posterior IC: those which responded to the sensory stimuli and those which corresponded to facial expressions. The neurons receiving the sensory inputs were multisensory, but the neurons responsible for facial expression were highly segregated (Dolensek et al., 2020).

Improved Mental Illness Diagnoses and Treatments

By identifying neural connections and regions associated with certain emotions, scientists can better diagnose and treat mental illnesses. The study at the Max Planck Institute of Neurobiology by Dolensek et al. provides an early but profound insight into how emotions originate from the activation of specific neural pathways. If physicians can identify abnormalities in these brain regions and neural pathways early on, more effective preventative treatments can begin. Beyond prevention, this knowledge can help researchers discover more targeted medical interventions, which could aid physicians in providing more personalized treatments (Wojtalik et al., 2018). For example, brain stimulation is currently being tested as a targeted treatment for mental illnesses such as depression. Multiple variations of brain stimulation treatment can be further explored and made more effective by understanding how targeting specific brain regions affects certain emotions. By targeting more


specific regions, the side effects of this type of treatment can be significantly reduced (NIMH, 2019). In the future, a better understanding of the psychopathology of varying emotions and their associated diseases will allow scientists to better identify and treat patients suffering from mental illness (Etkin et al., 2013).

Conclusion

Recent studies have characterized facial expressions in mice and connected these expressions with emotional states and brain states. Additionally, these expressions and their underlying emotions can be linked to specific neurons (Dolensek et al., 2020). This connection between expressions, emotions, and neurons is a first step toward uncovering the neurobiological basis of emotions. Given the evidence that emotions are highly conserved across phylogeny due to their evolutionary advantage, researchers can learn from the emotions and associated brain states of mice and apply the findings to human brains. As scientists learn more about emotions and how they are actuated in the brain, many medical applications could arise, allowing physicians to create more effective diagnoses and treatments for the mental illnesses from which many people currently suffer.

“The connection between facial expressions and emotional states suggests that facial expressions have neuronal connections in brain regions associated with emotion like the IC and VP.”

References

Abbott, A. (2020). Artificial intelligence decodes the facial expressions of mice. Nature. https://doi.org/10.1038/d41586-020-01002-7

Adolphs, R. (2017). How should neuroscience study emotions? By distinguishing emotion states, concepts, and experiences. Social Cognitive and Affective Neuroscience, 12(1), 24–31. https://doi.org/10.1093/scan/nsw153

Anderson, D., & Adolphs, R. (2014). A framework for studying emotions across phylogeny. Retrieved August 5, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4098837/

Deisseroth, K. (n.d.). Optogenetics: Controlling the brain with light [extended version]. Scientific American. Retrieved August 5, 2020, from https://www.scientificamerican.com/article/optogenetics-controlling/

Dolensek, N., Gehrlach, D. A., Klein, A. S., & Gogolla, N. (2020). Facial expressions of emotion states and their neuronal correlates in mice. Science, 368(6486), 89–94. https://doi.org/10.1126/science.aaz9468

Etkin, A., Gyurak, A., & O’Hara, R. (2013). A neurobiological approach to the cognitive deficits of psychiatric disorders. Dialogues in Clinical Neuroscience, 15(4), 419–429.

LeDoux, J. E. (2012). Evolution of human emotion. Progress in Brain Research, 195, 431–442. https://doi.org/10.1016/B978-0-444-53860-4.00021-0

Mitani, A., & Komiyama, T. (2018). Real-time processing of two-photon calcium imaging data including lateral motion artifact correction. Frontiers in Neuroinformatics, 12. https://doi.org/10.3389/fninf.2018.00098

NIMH. (2019). Discover NIMH: Personalized and targeted brain stimulation therapies. Retrieved August 5, 2020, from https://www.nimh.nih.gov/news/media/2019/discover-nimh-personalized-and-targeted-brain-stimulation-therapies.shtml

NIMH. (n.d.). Mental illness. Retrieved August 5, 2020, from https://www.nimh.nih.gov/health/statistics/mental-illness.shtml

Phylogeny. (n.d.). In Encyclopædia Britannica. Retrieved August 5, 2020, from https://www.britannica.com/science/phylogeny

Wojtalik, J. A., Eack, S. M., Smith, M. J., & Keshavan, M. S. (2018). Using cognitive neuroscience to improve mental health treatment: A comprehensive review. Journal of the Society for Social Work and Research, 9(2), 223–260. https://doi.org/10.1086/697566





How Telemedicine Could Revolutionize Primary Care

BY CHRIS CONNORS '21

Cover Image: A physician and patient engaging in a telemedicine consult. Source: Wikimedia Commons


Introduction

The United States primary healthcare system is broken. As stated by the American College of Physicians, “primary care, the backbone of the nation’s health care system, is at grave risk of collapse” (American College of Physicians, 2006). There is an alarming shortage of primary care physicians, and current practitioners are overworked and face administrative challenges. Fortunately, there are feasible solutions on the horizon–one of which is telehealth. Since the beginning of the 20th century, there has been discussion of how telehealth could be integrated to supplement the primary healthcare system (Lustig and Nesbitt, 2012). Yet the utilization of telehealth has significantly lagged behind its intended use until the current COVID-19 pandemic, as healthcare systems in the United States and around the world have entered an unprecedented era of telemedicine in response to widespread quarantine and social distancing

measures. While it is expected that the prevalence of telehealth use will subside once the pandemic ends, there is hope that the momentum of telehealth will carry forward, as it has the potential to revolutionize the healthcare system by improving access to care. While telehealth impacts all specialties, this paper focuses on the realm of primary care–specifically, the potential of telehealth to ameliorate issues in rural primary care.

Overview of the Current Primary Health Care System and its Limitations

Importance of Primary Care

Primary care is often the first point of contact a patient has with a healthcare system. Primary care physicians (PCPs) are responsible for treating common conditions such as diabetes, hypercholesterolemia, arthritis, and depression

Figure 1: An illustrative map showing Health Professional Shortage Areas in Primary Care. Source: Wikimedia Commons

or anxiety, as well as routine health maintenance, and act as a point of continuing care for patients (Finley et al., 2018). If necessary, a PCP will refer a patient to a specialist for additional treatment. Thus, primary care is considered the backbone of any healthcare system. It has been shown that even small increases in access to primary care significantly improve the health of a community’s population. In areas where primary care is insufficient, communities have higher death and disease rates, higher rates of emergency department visits, and generally worse health outcomes than areas with better primary care access. For example, in northern, rural Canada, access to primary care is one of the largest reasons that a healthcare gap still exists between indigenous and non-indigenous peoples (Jong et al., 2019).

Limitations of the Current Primary Health Care System in the United States

There is a variety of limitations in the current primary care system that can be partially, if not fully, resolved by telemedicine. Arguably the largest issue in the United States regarding healthcare is the immense shortage of PCPs. In fact, over 65 million Americans live in primary care shortage areas (Bodenheimer and Pham, 2010). Furthermore, current primary care practitioners experience a whole host of inconveniences, including burdensome administrative tasks, distracting


work environments, and crammed schedules (Bodenheimer and Pham, 2010). These problems are amplified by the fact that medical students are avoiding primary care: only 7% of medical students plan to go into primary care today, a figure that has been steadily declining from 40% in 1997 (Bodenheimer and Pham, 2010). The reason for this decline is in part the significantly higher salary offered by many specialties. This monetary incentive becomes particularly attractive when one considers the steadily rising cost of medical school tuition that leaves many recent graduates in significant debt.

“Evidently, the U.S. primary care system as a whole has many shortcomings, and these problems are significantly worse when considering rural primary care.”

Evidently, the U.S. primary care system as a whole has many shortcomings, and these problems are significantly worse when considering rural primary care. While the ratio of primary care physicians to population in urban areas of the United States is 100 per 100,000, in rural areas the ratio is just 46 per 100,000 (Bodenheimer and Pham, 2010). The situation looks even more dire when we take into account that 21% of the U.S. population lives in rural areas. Effectively, about 10% of PCPs are currently responsible for providing for 21% of Americans (Bodenheimer and Pham, 2010). In more absolute terms, over 16,000 additional PCPs are needed to meet the demand in these rural areas (Rieselbach et al., 2010). The effects of these shortages in rural regions are substantial. Rural populations face significant health disparities, are less likely to have health insurance,

Figure 2: An example of a common synchronous telemedicine appointment conducted from a patient’s home. Source: Shutterstock

and have higher rates of chronic conditions such as diabetes and obesity when compared to urban populations (Marcin et al., 2016).
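The figures quoted above — 100 PCPs per 100,000 urban residents, 46 per 100,000 rural residents, and 21% of the population living in rural areas — imply that roughly one in ten PCPs serves rural America. A quick arithmetic check (variable names are ours) spells this out:

```python
# Quoted figures from Bodenheimer and Pham (2010)
rural_pop_share = 0.21
urban_pop_share = 1 - rural_pop_share

# PCPs per 100,000 Americans overall, split by where they practice
rural_pcps = 46 * rural_pop_share    # about 9.7
urban_pcps = 100 * urban_pop_share   # 79.0

rural_pcp_share = rural_pcps / (rural_pcps + urban_pcps)
# about 0.11: roughly 10% of PCPs serve the 21% of Americans in rural areas
```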

“The first known mention of using technology to facilitate distance healthcare dates back to 1879, when The Lancet noted using a telephone to help reduce unnecessary office visits.”

Ultimately, the United States has a precarious geographic maldistribution of primary care that is being exacerbated by ever-worsening PCP shortages (Bodenheimer and Pham, 2010). However, the natural solution of producing more primary care physicians and incentivizing them to practice in rural areas may not work. It has been tried before unsuccessfully and, put simply, it is very difficult to motivate physicians to work in rural areas. Rural physicians face additional challenges compared to their urban or suburban counterparts, including feelings of professional isolation, reduced access to continuing medical education, and a lack of collaboration with other specialists or support services (Anderson et al., 1994). Given this acute situation and the shortcomings of previous solutions, there is a gap to be filled by a novel solution–and telemedicine might be the perfect fit.

Overview of Telemedicine

Introduction to Telemedicine

Telemedicine, which is synonymous with telehealth and virtual care, is defined as the “use of electronic information and communications technologies to provide and support healthcare when distance separates the participants” (Marcin et al., 2016). Generally, there are three


types of applications: live videoconferencing between patient and provider (synchronous), transmission of information and medical images (store-and-forward/asynchronous), and remote patient monitoring (Marcin et al., 2016). The most common form today–synchronous telemedicine–consists of patients using a videoconferencing platform to communicate with a physician at a scheduled time. During the call, verbal communication about symptoms and visual tests, such as displaying a skin condition or demonstrating range of motion, can be used by the physician to confirm a diagnosis. However, telemedicine has come a long way since its inception.

History of Telemedicine

The first known mention of using technology to facilitate distance healthcare dates back to 1879, when The Lancet noted using a telephone to help reduce unnecessary office visits (Lustig and Nesbitt, 2012). While telemedicine in its early stages was seldom used, there are instances of the radio being used throughout the 1920s to diagnose patients or communicate with clinics on ships (Lustig and Nesbitt, 2012). However, medical video communications did not truly begin in the United States until 1959 (Institute of Medicine, 1996).

Applications of Telemedicine Today

Today, telemedicine is much more advanced and complex, as it can now facilitate a


Figure 3: Image of a surgeon operating remotely on a patient. Source: Wikimedia Commons

patient and provider meeting over a video communications platform at a scheduled time in lieu of an in-person visit. Outside of primary care, telemedicine has been used extensively in certain specialties. For example, teleradiology has been used by physicians for over 60 years (Lustig and Nesbitt, 2012), likely because the highly image-based practice is easily transferable to an online format. Similarly, dermatology, another highly visual specialty, has been well integrated with telemedicine. Telemedicine has also been applied to primary care: primary care telehealth has been used often throughout rural regions of northern Canada, and in some areas of Nunavut and Labrador, telehealth is routine–almost entirely replacing in-person visits (Jong et al., 2019).

Future of Telemedicine

While telemedicine has made significant advances since its early days in the late 19th century, there are still large improvements to be implemented. For example, future research into telemedicine plans to explore avenues in which patients could be provided with small physiological monitoring devices in their own homes. These wearable devices would be able to function as a stethoscope, accelerometer, and electrocardiogram, among other functions such as measuring heart rate or blood pressure, in order to allow a physician to perform these tests from a distance in real


time (Lustig and Nesbitt, 2012). This would be a significant advance in telehealth, as it would eliminate one of the biggest barriers: the inability to perform diagnostic tests from a distance. In fact, more basic versions of these devices have already been successfully used by some physicians and have been an effective method of measuring vital signs from a distance. Successful implementation of these devices would significantly reduce the need for in-person appointments. Additionally, while not in the realm of primary care, some future telemedicine advancements seem like science fiction. For example, telesurgery–in which a patient is operated on remotely by a physician–has been successfully performed several times. The first such operation occurred in 2001, when a New York surgeon successfully operated on a patient in Strasbourg, France (Choi et al., 2018). In this case, the surgeon used remote-controlled, robot-assisted laparoscopic surgery to remove the gall bladder of a patient on the other side of the Atlantic. Overall, it is clear that telemedicine has made enormous strides since its inception and has a promising future; what matters more, however, is the success of its implementation–particularly in primary care.

“... future research into telemedicine plans to explore avenues in which patients could be provided with small physiological monitoring devices in their own homes.”

Healthcare Outcomes of Telemedicine in Primary Care

Outcomes

Perhaps the most important aspect of


“Since many chronic and preventative care appointments (which account for the majority of primary care visits) can effectively be managed using virtual care, more clinic time becomes available for those who need in-person visits”

telemedicine, when assessing its ability to supplement a healthcare system in rural areas, is its convenience for patients and patients’ healthcare outcomes. Overall, most outcome studies involving telehealth use in primary care have shown positive results. One study showed that patients receiving synchronous, remote care demonstrated similar outcomes to those receiving in-person care for various conditions (Portnoy et al., 2020). Of course, telemedicine has not been without its weaknesses. Some patients participating in telehealth have indicated that they prefer traditional office care. Additionally, the current lack of testing equipment available in patients’ homes means that many diagnostic tests that PCPs typically perform in office are unable to take place. The lack of medical technology and equipment at home also necessitates that some procedures take place in person. Furthermore, many in-office procedures that traditionally occur the same day as the diagnosis have to be scheduled at a later date during an in-office visit, inconveniencing patients. However, these downsides are counterbalanced by reports where telemedicine has been shown to have even better healthcare outcomes than traditional in-person care. For example, a survey conducted in Canada concerning telemedicine for primary care found that use of telehealth in rural regions resulted in improved patient care, reduced transfers, and better collaboration between patients and providers when compared to in-person care (Jong et al., 2019). Similarly, some monitoring programs have demonstrated better management of certain chronic conditions such as diabetes, hypertension, and congestive heart failure (Lustig and Nesbitt, 2012). Fortunately, it appears that telemedicine utilization does not imply less effective healthcare in many cases.
Additional Benefits

Outside of health outcomes, telemedicine has several other advantages, such as the ability to streamline clinics, allowing PCPs to be more efficient. Since many chronic and preventative care appointments (which account for the majority of primary care visits) can effectively be managed using virtual care, more clinic time becomes available for those who need in-person visits (Bodenheimer and Pham, 2010). Similarly, if more sick, contagious patients stay at home using telemedicine, the potential for these patients to expose others to their illness is minimized. Thus, widespread use of telemedicine has the added benefit of minimizing patient exposure


to infectious diseases within a clinic. Given successful outcomes of telemedicine as well as a host of other benefits, it appears that the implementation of telehealth in rural communities to supplement primary care is a feasible and effective solution. However, successful healthcare outcomes will not be the only factor weighed when it comes to deciding whether to implement telemedicine infrastructure.

Economic Evaluation of Telemedicine

Economic Overview

What is equally important–and in policy makers’ eyes, perhaps more important–in the push toward implementing telemedicine in rural primary care is the economics of implementing such a system. This is particularly relevant in rural primary care due to the extreme financial burden that healthcare places on governments in these areas. For example, health care per capita in rural Canada costs more than double what it costs in the rest of Canada (Jong et al., 2019). Clearly, if a government could alleviate this expenditure while maintaining the same quality of care, it may be interested in the opportunity.

Economic Evaluation

Financially, telemedicine has had mixed success among different specialties. For example, in dermatology, telemedicine was found not to be cost-effective compared to conventional care (Delgoshaei et al., 2017). However, integrating telemedicine into primary care has shown significant economic benefits, for a multitude of reasons including cost-saving opportunities concerning access, unnecessary in-office visits, and patient travel. First, a lack of access to primary care has become an enormous financial drain on healthcare systems. For example, emergency rooms have been overwhelmed by patients who could have otherwise been treated by a PCP. In fact, a 2006 California survey estimated that about 46% of emergency room visits could have been addressed by a family medicine practitioner (Bodenheimer and Pham, 2010). Seeing that emergency room visits tend to be more expensive than primary care appointments and that they overwhelm hospitals, a lack of access to primary care imposes a significant financial problem for healthcare systems. Improved primary care access through telemedicine has already been shown to reduce hospitalizations (Bodenheimer and Pham, 2010). Thus, if telemedicine is implemented widely in order to improve health care access, health care


systems stand to benefit from substantial savings. Second, telemedicine offers an enormous cost-savings benefit by reducing unnecessary office visits. In fact, about 75% of American healthcare expenditures are linked to the presence of chronic disease (Lustig and Nesbitt, 2012). A majority of these chronic diseases are managed by PCPs, and since many chronic disease visits could be transferred to a telemedicine appointment, the American healthcare system could benefit greatly from an increased use of virtual care (Bashshur et al., 2014). Third, telemedicine would eliminate patient travel, which has been shown to provide significant cost-savings opportunities for most patients (Marcin et al., 2016). In fact, in rural locations, almost 50% of in-office appointments require significant travel for patients (Jong et al., 2019). This travel not only results in transportation expenses, but also lost time from work and family–all of which can impose a financial burden on patients. Telemedicine has the potential to drastically reduce patient travel and, in turn, provide financial savings to patients. In fact, a recent study of 47 cancer patients demonstrated that 27,000 miles of travel were saved due to telepharmacy (Lustig and Nesbitt, 2012). Evidently, using telemedicine in rural primary care stands to offer many economic benefits to both healthcare systems and patients.

Conclusion

The need for improvement in our primary health care system is dire, especially in rural areas where a lack of patient access is having a detrimental effect on health outcomes. With past measures failing to remedy this situation, it is time for an innovative solution: telehealth. Like any solution, telehealth has its limitations, but its potential to drastically improve healthcare outcomes in rural areas and its economic benefits are far-reaching. Since the COVID-19 pandemic has sparked the widespread implementation of telemedicine, it will hopefully give health care systems and policy makers an unprecedented chance to evaluate this technology on a large scale. However, it is imperative that this trial run is not abandoned once the pandemic subsides. The implementation of telehealth in rural healthcare systems has the potential to improve millions of lives, and policy makers must take full advantage of this opportunity.


References

American College of Physicians. (2006, January 30). The impending collapse of primary care medicine and its implications for the state of the nation’s health care. American College of Physicians.

Anderson, E. A., Bergeron, D., & Crouse, B. J. (1994). Recruitment of family physicians in rural practice. Minn Med, 8, 29–32.

Bashshur, R. L., Shannon, G. W., Smith, B. R., Alverson, D. C., Antoniotti, N., Barsan, W. G., Bashshur, N., Brown, E. M., Coye, M. J., Doarn, C. R., Ferguson, S., Grigsby, J., Krupinski, E. A., Kvedar, J. C., Linkous, J., Merrell, R. C., Nesbitt, T., Poropatich, R., Rheuban, K. S., Sanders, J. H., … Yellowlees, P. (2014). The empirical foundations of telemedicine interventions for chronic disease management. Telemedicine Journal and e-Health, 20(9), 769–800. https://doi.org/10.1089/tmj.2014.9981

Bodenheimer, T., & Pham, H. H. (2010). Primary care: Current problems and proposed solutions. Health Affairs, 29(5), 799–805. https://doi.org/10.1377/hlthaff.2010.0026

Choi, P. J., Oskouian, R. J., & Tubbs, R. S. (2018). Telesurgery: Past, present, and future. Cureus, 10(5), e2716. https://doi.org/10.7759/cureus.2716

Delgoshaei, B., Mobinizadeh, M., Mojdekar, R., Afzal, E., Arabloo, J., & Mohamadi, E. (2017). Telemedicine: A systematic review of economic evaluations. Medical Journal of the Islamic Republic of Iran, 31, 113. https://doi.org/10.14196/mjiri.31.113

Finley, C. R., Chan, D. S., Garrison, S., Korownyk, C., Kolber, M. R., Campbell, S., Eurich, D. T., Lindblad, A. J., Vandermeer, B., & Allan, G. M. (2018). What are the most common conditions in primary care? Systematic review. Canadian Family Physician, 64(11), 832–840.

Institute of Medicine: Committee on Evaluating Clinical Applications of Telemedicine. (1996). Telemedicine: A guide to assessing telecommunications in health care. Washington: National Academies Press.

Jong, M., Mendez, I., & Jong, R. (2019). Enhancing access to care in northern rural communities via telehealth. International Journal of Circumpolar Health, 78(2), 1554174. https://doi.org/10.1080/22423982.2018.1554174

Lustig, T. A., & Nesbitt, T. S. (2012). The role of telehealth in an evolving health care environment: Workshop summary. Washington: National Academies Press.

Marcin, J. P., Shaikh, U., & Steinhorn, R. H. (2016). Addressing health disparities in rural communities using telehealth. Pediatric Research, 79(1), 169–176. https://doi.org/10.1038/pr.2015.192

Portnoy, J., Waller, M., & Elliott, T. (2020). Telemedicine in the era of COVID-19. The Journal of Allergy and Clinical Immunology: In Practice, 8(5), 1489–1491. https://doi.org/10.1016/j.jaip.2020.03.008

Rieselbach, R. E., Crouse, B. J., & Frohna, J. G. (2010). Teaching primary care in community health centers: Addressing the workforce crisis for the underserved. Annals of Internal Medicine, 152, 118–122. https://doi.org/10.7326/0003-4819-152-2-201001190-00186


Evidence Suggesting the Possibility of Regression and Reversal of Liver Cirrhosis

BY DANIEL ABATE '23

Cover Image: Comparison of a liver from a healthy individual with the liver of a person affected by liver cirrhosis. The irregular surface on the cirrhotic liver is the scar tissue that accumulates on the liver after extensive damage over a long period of time. Source: healthdirect

Introduction & Statistics

Liver cirrhosis, or simply cirrhosis, refers to damage that the liver accumulates over a long period of time. This damage may have multiple causes, including excessive consumption of alcohol (known as alcoholic liver cirrhosis), fatty liver disease, chronic hepatitis B, and chronic hepatitis C (NIH, 2014). As the liver sustains damage, it forms scar tissue to repair cells in a process known as liver fibrosis. Liver cirrhosis may be thought of as an advanced stage of liver fibrosis, as it involves the accumulation of scar tissue to such an extent that the normal functioning of the liver is impaired. The disease is asymptomatic during the early stages, and symptoms only begin to surface after the liver has sustained significant damage. Some of the common symptoms of liver cirrhosis are fatigue, lack of appetite accompanied by weight loss, nausea, itching, and easy bruising of the skin (Mayo, 2018).


To diagnose liver cirrhosis, a liver biopsy is performed, which involves the removal of a small amount of liver tissue using a needle for laboratory analysis. Liver cirrhosis is a major health concern with a high mortality rate. According to the World Health Organization (WHO), liver cirrhosis is the third leading cause of non-communicable deaths in Africa, with 174,420 people dying from the disease in 2016. More broadly, liver cirrhosis was the ninth leading cause of death in lower-middle-income countries (WHO, 2018), and in the United States, 41,743 deaths were attributed to the disease in 2017, as reported by the National Vital Statistics Report (Kochanek et al., 2019). The consensus among physicians and researchers is that the effects of liver cirrhosis are largely irreversible. In general, medical practitioners employ preventative measures

Figure 1: A surgeon performing a liver biopsy. A syringe and needle are used to extract a small piece of liver tissue, which is then sent to the laboratory for analysis. This is the main process by which liver fibrosis and cirrhosis are detected and diagnosed Source: Flickr

such as vaccination against hepatitis A and B, as well as advising patients to minimize consumption of acetaminophen and alcohol. However, in cases involving advanced liver cirrhosis, a liver transplant is usually required for survival. Unsurprisingly, a liver transplant has many risks and can be problematic. As with any transplantation procedure, there is always the possibility of transplant rejection, where the body’s immune system attacks the transplanted liver in a self-destructive effort to protect itself from foreign tissue. According to the United Network for Organ Sharing, there are currently 12,420 people in the United States waiting for a liver transplant, with a median national waiting time of 149 days (Organ Procurement, 2020). Unfortunately, this long wait often leads to the development of clinical depression in patients awaiting a liver transplant, with some even dying before an organ becomes available (Mandal, 2019).

Liver Fibrosis Regression

Contrary to contemporary views on the permanence of liver cirrhosis, research in recent years indicates that reversal may be at least partly possible. Most of this research has focused on liver fibrosis, the scarring that results from the repair of damaged liver tissue and is a precursor to liver cirrhosis. In particular, the process of hepatic fibrogenesis involves the accumulation of myofibroblasts in the liver. The process occurs when hepatic stellate cells (HSCs) are activated from their quiescent forms and then themselves initiate an immune response that ultimately results in the accumulation of collagen and


other extracellular matrix (ECM) materials in the liver (Jung & Yim, 2017). HSCs are pericytes found between the sinusoids of the liver and the hepatocytes; while the role of their quiescent form is not clearly understood, their role in the production of collagen scar tissue has been well documented. ECM, on the other hand, refers to the complex grid of water, proteins, and proteoglycans that anchors and supports the cells in the liver, maintaining the organ’s structure. In normal quantities, ECM plays a vital role in maintaining the liver’s normal functioning by, among other things, maintaining hydrolysis and homeostasis. However, various liver infections not only cause increased and unregulated production of ECM but also change its structure and components such that the normal functioning of hepatocytes is impaired (Arriazu et al., 2014). Since the activation of HSCs is integral to the formation of ECM, agents of this activation have become a target for researchers seeking medications or treatments that could lead to regression or even reversal of liver cirrhosis. Hepatic inflammation is usually the main process that results in the activation of HSCs, and it is common among various liver diseases such as viral hepatitis and alcoholic hepatitis (Bataller & Brenner, 2005). Hepatic inflammation is caused by oxidative stress, an imbalance between free radicals and antioxidants in the liver in which the former are in excess. Oxidative stress has also been shown to play a vital role in fibrogenesis, and it is prompted by the release of reactive oxygen species (ROS) by Kupffer cells (Nieto, 2006). Additionally, cytokines like TGF-β1, as well

“Contrary to contemporary views on the permanence of liver cirrhosis, research in recent years seems to indicate that reversal may be at least partly possible.”


Figure 2: The above diagram illustrates the process by which quiescent Hepatic Stellate Cells (HSCs) are activated. Activated HSCs play a major role in the accumulation of extracellular matrix (ECM) in the liver, which ultimately leads to liver fibrosis and cirrhosis. Source: ResearchGate

as some growth factors such as platelet-derived growth factor, have been shown to play a role in the activation of HSCs (Hellerbrand et al., 1999).

“...experimental models have indicated that the apoptosis (cell death) and clearance of HSCs (hepatic stellate cells) leads to fibrosis regression”

The previous consensus that liver fibrosis is irreversible has been challenged by a handful of studies. First, experimental models have indicated that the apoptosis (cell death) and clearance of HSCs lead to fibrosis regression (Kisseleva & Brenner, 2007). This mainly occurs in early stages of fibrosis; if ECM has not been deposited in the liver to any significant degree, the liver may revert almost completely to its previous structure once the cause of the liver damage has been removed. Additionally, enzymes known as matrix metalloproteinases (MMPs) may play a role in fibrosis reversal. MMPs are responsible for the degradation of ECM in the liver, while tissue inhibitor of metalloproteinases (TIMP-1) may promote liver fibrosis (Guimarães et al., 2010). Studies of liver fibrosis in rats have indicated that when the injury-causing agent is removed, internal TIMP-1 levels decrease and ECM is degraded (Iredale et al., 1998). Restoration of macrophages in the liver has also been shown to mediate degradation of ECM, though the mechanism by which this is achieved is not clearly understood (Duffield et al., 2005). Regression of liver cirrhosis also involves the metabolic processes that regulate liver fibrosis. It should be noted that in order for activated HSCs to maintain production of ECM, a continuous supply of intracellular energy is required;


inhibition of the metabolic pathways that provide activated HSCs with energy may ultimately lead to regression of fibrosis. In one study, researchers found that a sublethal dose of the energy blocker 3-bromopyruvate (3BrPA) transformed activated LX-2 cells (a human HSC line) into a less-active form, thus blocking the progression of liver fibrosis. Regression of the fibrosis was detected using biomarkers such as increased levels of MMPs and decreased collagen mRNA (Karthikeyan et al., 2016). Some clinical evidence also supports the regression of liver fibrosis caused or facilitated by certain diseases and infections. For example, the standard treatment for chronic hepatitis B involves a protein known as interferon type I (IFN-α), which has been shown to exhibit antifibrotic activity by inhibiting the action of the cytokine transforming growth factor beta (TGF-β), decreasing the activation of HSCs and encouraging their death (Chang et al., 2005). The same is true for hepatitis C, where patients who received IFN-associated treatment demonstrated a reduced risk of developing cirrhosis, among other liver complications (Everson, 2005). As for liver disease associated with alcoholism, there is little evidence suggesting possible regression. For example, a colchicine treatment of alcoholic liver fibrosis showed no statistically significant improvements in liver health (Rambaldi et al., 2005). However, abstinence in those diagnosed with alcoholic liver cirrhosis did improve the prospect of long-term


Figure 3: Illustration of the Hepatitis C virus attacking a human liver. The virus is spread through blood-to-blood contact, and while treatments such as antiviral medications exist, Hepatitis C currently has no vaccine. Source: Wikimedia Commons

survival, suggesting that addiction treatment may be an avenue for reducing alcoholic liver cirrhosis mortality rates. Most pharmacological substances proposed as antifibrotic agents have targeted TGF-β1. However, a difficulty of this approach is that systemic inhibition of TGF-β1 has been shown to increase inflammation in the liver, which is closely associated with activation of HSCs (Samarakoon et al., 2013). Researchers have sidestepped this issue by shifting their focus to specific steps in the activation of TGF-β1. Alternative targets that indirectly influence TGF-β1 activation include the integrin αvβ6 and connective tissue growth factor (CTGF), against which agents such as anti-αvβ6 and anti-CTGF monoclonal antibodies have respectively demonstrated efficacy (Patsenker et al., 2008; Wang et al., 2011). The inhibition of a cannabinoid receptor known as CB1 using a CB1 antagonist has been linked to apoptosis of HSCs (Giannone et al., 2012). Additionally, NOX inhibitors have been shown to reduce oxidative stress by reducing the production of ROS by Kupffer cells (Jiang et al., 2012). Melatonin is another chemical that has been studied for its potential antifibrotic effects. It is a hormone secreted by the pineal gland in the brain and is mostly known for its role in regulating the sleep-wake cycle. However, scientists are beginning to look into the hormone’s latent role in reducing excessive fibrosis due to its antioxidant and anti-inflammatory properties. In mice that were


given ethanol to induce acute liver disease, melatonin was found to reduce oxidative stress by downregulating MMPs and upregulating the tissue inhibitor TIMP-1 (Mishra et al., 2011). Moreover, in a study involving rats with liver fibrosis induced by carbon tetrachloride, melatonin was found to decrease levels of substances such as transforming growth factor β1 and α-smooth muscle actin, which were initially increased through exposure to carbon tetrachloride (Choi et al., 2015). Researchers have also found that melatonin prevented liver fibrosis by inhibiting an inflammatory signaling pathway associated with necroptosis, the programmed death of inflammatory cells (Zhang et al., 2017). Indeed, the relationship between liver cirrhosis and melatonin is further cemented by the fact that imbalances in melatonin levels have been observed in patients with liver cirrhosis. Furthermore, melatonin was found to ameliorate the condition of rat livers affected by thioacetamide-induced liver

“Scientists are beginning to look into the hormone's [melatonin's] latent role in reducing excessive fibrosis due to its antioxidant and anti-inflammatory properties.”

Figure 4: A ball-and-stick model of the molecule melatonin. Melatonin is a hormone that is produced in the pineal gland and is mainly involved in the regulation of the sleep cycle. Moreover, research suggests that it may have a role to play in liver fibrosis regression. Source: Wikimedia Commons


“It would immensely benefit the research community to develop an alternative strategy of assessing liver fibrosis development and regression that is more representative of the entire liver.”

cirrhosis by mitigating destructive changes caused by oxidative stress (Ellis et al., 2012).

While contemporary research into regression of fibrosis and cirrhosis is quite promising, these studies are not without limitations. For example, most of the research has focused on reversing the effects of liver fibrosis rather than liver cirrhosis, and while the two are linked, they are not the same. There is little information in the literature about reversal of liver cirrhosis, which may be attributed to the lack of a clear boundary between liver fibrosis and cirrhosis. Another major barrier to the study of liver fibrosis regression is the methodology used to assess the level of regression. Recall that the main way doctors diagnose liver cirrhosis (and fibrosis) is by performing a liver biopsy. However, the liver is a large organ (the largest internal organ in the body), and the inconsistency of biopsy samples must be considered: due to the liver’s size, the tiny piece of tissue extracted for a biopsy may not be representative of the state of the liver as a whole (Celli & Zhang, 2015). Results from clinical studies in which a statistical decrease in liver fibrosis is observed must therefore be carefully evaluated. It would immensely benefit the research community to develop an alternative strategy of assessing liver fibrosis development and regression that is more representative of the entire liver. Additionally, increasing the sample size of research studies would help ensure that results are more meaningful, since many of the studies on regression have involved only a handful of patients.

Conclusion

Despite the commendable progress that has been made in the study of liver fibrosis and cirrhosis, scientists and researchers still have a long way to go before they develop effective therapies that mitigate or even reverse the damage caused to a liver by cirrhosis. While some substances hold promising implications, they are still highly localized to specific cells of the liver, and their efficacy has only been demonstrated in rat models. That being said, a growing body of research gives hope for a future where liver cirrhosis is manageable without a transplant.

References

“Disease Burden and Mortality Estimates.” World Health Organization, March 26, 2019. https://www.who.int/healthinfo/global_burden_disease/estimates/en/

“The Top 10 Causes of Death.” World Health Organization, May 24, 2018. https://www.who.int/news-room/fact-sheets/detail/the-top-10-causes-of-death

Kochanek, K. D., Murphy, S. L., Xu, J. Q., & Arias, E. (2019). Deaths: Final data for 2017. National Vital Statistics Reports, 68(9). Hyattsville, MD: National Center for Health Statistics. https://www.cdc.gov/nchs/data/nvsr/nvsr68/nvsr68_09-508.pdf

Organ Procurement and Transplantation Network. (2020, August 16). Number of patients on the waitlist by organ. US Department of Health and Human Services, Health Resources and Services Administration. https://optn.transplant.hrsa.gov/data/view-data-reports/national-data/#

Mandal, A. (2019, April 22). Waiting for a liver transplant. News Medical Life Sciences. https://www.news-medical.net/health/Waiting-for-a-liver-transplant.aspx

Jung, Y. K., & Yim, H. J. (2017). Reversal of liver cirrhosis: Current evidence and expectations. The Korean Journal of Internal Medicine, 32(2), 213–228. https://doi.org/10.3904/kjim.2016.268

Arriazu, E., Ruiz de Galarreta, M., Cubero, F. J., Varela-Rey, M., Pérez de Obanos, M. P., Leung, T. M., Lopategi, A., Benedicto, A., Abraham-Enachescu, I., & Nieto, N. (2014). Extracellular matrix and liver disease. Antioxidants & Redox Signaling, 21(7), 1078–1097. https://doi.org/10.1089/ars.2013.5697

Bataller, R., & Brenner, D. A. (2005). Liver fibrosis. Journal of Clinical Investigation, 115(2), 209–218. https://doi.org/10.1172/JCI24282

Nieto, N. (2006). Oxidative-stress and IL-6 mediate the fibrogenic effects of Kupffer cells on stellate cells. Hepatology, 44(6), 1487–1501. https://doi.org/10.1002/hep.21427

Hellerbrand, C., Stefanovic, B., Giordano, F., Burchardt, E. R., & Brenner, D. A. (1999). The role of TGFbeta1 in initiating hepatic stellate cell activation in vivo. Journal of Hepatology, 30(1), 77–87. https://doi.org/10.1016/s0168-8278(99)80010-5

Kisseleva, T., & Brenner, D. A. (2007). Role of hepatic stellate cells in fibrogenesis and the reversal of fibrosis. Journal of Gastroenterology and Hepatology. https://onlinelibrary.wiley.com/doi/full/10.1111/j.1440-1746.2006.04658.x

National Institute of Diabetes and Digestive and Kidney Diseases. (2014, April 23). What causes cirrhosis? https://web.archive.org/web/20150609090212/http://www.niddk.nih.gov/health-information/health-topics/liver-disease/cirrhosis/Pages/facts.aspx

Mayo Clinic. (2018, December 7). Symptoms of cirrhosis. Mayo Foundation for Medical Education and Research. https://www.mayoclinic.org/diseases-conditions/cirrhosis/symptoms-causes/syc-20351487

Guimarães, E. L., Empsen, C., Geerts, A., & van Grunsven, L. A. (2010). Advanced glycation end products induce production of reactive oxygen species via the activation of NADPH oxidase in murine hepatic stellate cells. Journal of Hepatology, 52(3), 389–397. https://doi.org/10.1016/j.jhep.2009.12.007

Iredale, J. P., Benyon, R. C., Pickering, J., et al. (1998). Mechanisms of spontaneous resolution of rat liver fibrosis: Hepatic stellate cell apoptosis and reduced hepatic expression of metalloproteinase inhibitors. Journal of Clinical Investigation, 102(3), 538–549. https://doi.org/10.1172/JCI1018

Duffield, J. S., Forbes, S. J., Constandinou, C. M., et al. (2005). Selective depletion of macrophages reveals distinct, opposing roles during liver injury and repair. Journal of Clinical Investigation, 115(1), 56–65. https://doi.org/10.1172/JCI22675

Karthikeyan, S., Potter, J. J., Geschwind, J. F., Sur, S., Hamilton, J. P., Vogelstein, B., Kinzler, K. W., Mezey, E., & Ganapathy-Kanniappan, S. (2016). Deregulation of energy metabolism promotes antifibrotic effects in human hepatic stellate cells and prevents liver fibrosis in a mouse model. Biochemical and Biophysical Research Communications, 469(3), 463–469. https://doi.org/10.1016/j.bbrc.2015.10.101

Chang, X. M., Chang, Y., & Jia, A. (2005). Effects of interferon-alpha on expression of hepatic stellate cell and transforming growth factor-beta1 and alpha-smooth muscle actin in rats with hepatic fibrosis. World Journal of Gastroenterology, 11(17), 2634–2636. https://doi.org/10.3748/wjg.v11.i17.2634

Everson, G. T. (2005). Management of cirrhosis due to chronic hepatitis C. Journal of Hepatology, 42(Suppl 1), S65–S74. https://doi.org/10.1016/j.jhep.2005.01.009

Rambaldi, A., & Gluud, C. (2005). Colchicine for alcoholic and non-alcoholic liver fibrosis and cirrhosis. Cochrane Database of Systematic Reviews, (2), CD002148. https://doi.org/10.1002/14651858.CD002148.pub2

Samarakoon, R., Overstreet, J. M., & Higgins, P. J. (2013). TGF-β signaling in tissue fibrosis: Redox controls, target genes and therapeutic opportunities. Cellular Signalling, 25(1), 264–268. https://doi.org/10.1016/j.cellsig.2012.10.003

Patsenker, E., Popov, Y., Stickel, F., Jonczyk, A., Goodman, S. L., & Schuppan, D. (2008). Inhibition of integrin alphavbeta6 on cholangiocytes blocks transforming growth factor-beta activation and retards biliary fibrosis progression. Gastroenterology, 135(2), 660–670. https://doi.org/10.1053/j.gastro.2008.04.009

Wang, Q., Usinger, W., Nichols, B., et al. (2011). Cooperative interaction of CTGF and TGF-β in animal models of fibrotic disease. Fibrogenesis & Tissue Repair, 4(1), 4. https://doi.org/10.1186/1755-1536-4-4

Giannone, F. A., Baldassarre, M., Domenicali, M., et al. (2012). Reversal of liver fibrosis by the antagonism of endocannabinoid CB1 receptor in a rat model of CCl4-induced advanced cirrhosis. Laboratory Investigation, 92(3), 384–395. https://doi.org/10.1038/labinvest.2011.191

Jiang, J. X., Chen, X., Serizawa, N., et al. (2012). Liver fibrosis and hepatocyte apoptosis are attenuated by GKT137831, a novel NOX4/NOX1 inhibitor in vivo. Free Radical Biology and Medicine, 53(2), 289–296. https://doi.org/10.1016/j.freeradbiomed.2012.05.007

Mishra, A., Paul, S., & Swarnakar, S. (2011). Downregulation of matrix metalloproteinase-9 by melatonin during prevention of alcohol-induced liver injury in mice. Biochimie. https://www.sciencedirect.com/science/article/pii/S0300908411000630

Choi, H.-S., Kang, J.-W., & Lee, S.-M. (2015). Melatonin attenuates carbon tetrachloride–induced liver fibrosis via inhibition of necroptosis. Translational Research. https://www.sciencedirect.com/science/article/pii/S1931524415001097

Zhang, J. J., Meng, X., Li, Y., Zhou, Y., Xu, D. P., Li, S., & Li, H. B. (2017). Effects of melatonin on liver injuries and diseases. International Journal of Molecular Sciences, 18(4), 673. https://doi.org/10.3390/ijms18040673

Ellis, E. L., & Mann, D. A. (2012). Clinical evidence for the regression of liver fibrosis. Journal of Hepatology. https://www.sciencedirect.com/science/article/pii/S016882781200044X

Celli, R., & Zhang, X. (2015). Pathology of alcoholic liver disease. Journal of Clinical and Translational Hepatology, 2, 103–109. https://doi.org/10.14218/JCTH.2014.00010

*ResearchGate has made this image available for use via license (i.e. it may be copied and reproduced in any medium). Commercialization of the image, however, is prohibited. For more information, please visit https://www.researchgate.net/figure/Hepatic-Stellate-Cell-HSC-activation-Both-alcohol-and-inflammation-damage-the-liver_fig2_281745019



CR-grOw: The Rise and Future of Contract Research Organizations

BY DEV KAPADIA '23

Cover Image: Because of the increasing resistance of bacteria to antibiotics, the pharmaceutical industry has seen steep increases in research and development costs every year. One of the solutions to combat these increasing prices is to outsource some of the operations of the industry, including the clinical trials process. This outsourcing trend is how the Contract Research Organization industry gained a foothold in the research and development process, and it has since expanded to be just as much a part of the process as the pharmaceutical and medical device companies. Source: Wikimedia Commons


Introduction

One of the biggest problems in the pharmaceutical industry today is the exorbitant cost of researching, developing, and testing a new drug. A recent study at the Tufts Center for the Study of Drug Development estimated that, as of 2016, producing a drug that receives market approval costs drug makers $2.6 billion, including $1.4 billion in estimated cash costs (DiMasi et al., 2016). This number is more concerning within the context of Eroom’s Law. In 1965, Gordon Moore, then of Fairchild Semiconductor and later a co-founder of Intel, coined Moore’s Law, which states that the number of transistors that can fit on a microchip doubles every two years, doubling computing power with it. Conversely, Eroom’s Law, with “Eroom” being “Moore” spelled backwards, was first observed in the 1980s, when it was documented that the cost of research and development (R&D) for pharmaceutical drugs doubles every nine years. While the doubling time has since extended slightly, the

growth continues to threaten pharmaceutical companies, and the increasingly high-cost phenomenon is now seen across the medical product segment, not just pharmaceuticals (Ringel et al., 2020). There are several methods, such as machine learning and drug re-design, that the pharmaceutical, biotechnology, and academic research industries use to expedite timelines and lower the high costs they incur from R&D. However, one of the most popular approaches, used across nearly every laboratory research industry, is the Contract Research Organization (CRO). CROs provide outsourced research services to the pharmaceutical, biotechnology, and academic research industries, as well as virtually any other industry that requires the development and testing of life science products. These services come in several forms, including biologic assay development, clinical trials management, toxicology reports, research models, and much

more. Outsourcing these services has started to become common practice in the medical product pipeline and shows no signs of stopping its expansion throughout the medical product development process.

The History of CROs

CROs sprouted in the wake of the Cold War. Increased regulation and competition in pharmaceutical research brought sudden increases in R&D timelines and subsequent increases in cost; these factors caused the R&D costs for pharmaceutical companies to double every five years (Dimachkie Masri et al., 2012). The CRO industry was popularized as an easy way to cut down on the costs and headaches caused by supply chain and clinical trial management in the drug discovery process (Mirowski & Van Horn, 2005). Eventually the benefits of the industry were seen far beyond clinical trials testing exclusively in the pharmaceutical industry, and these organizations spread like wildfire. It is estimated that CROs make the clinical trials process 30% faster and save research teams more than $150 million per development process (A. Miller, 2019). Now, not only do CROs provide cost-effective services, but, for many organizations like PPD, Charles River Laboratories, and Covance, their industry experience allows them to claim more effective services than would be performed in-house. In 2010, it was estimated that CROs were involved


in about 64% of studies. That number is now estimated to have risen to 80% and shows no signs of stopping (A. Miller, 2019). Even better for the industry, despite the increased competition of recent years, revenues and profits continue to rise due to a sharp increase in demand and the incorporation of strategic partners that concentrate the supply available to these CROs. Profit margins (calculated as net income as a percentage of revenue) have increased from 17.2% in 2014 to 24.1% in 2019, meaning that CROs are charging more for their services relative to their costs than they have in the past (A. Miller, 2019). Clearly, this industry is doing very well and growing. However, because CROs are expected to be a big part of the research industry in the upcoming years, it is important to assess the factors affecting the industry’s business as well as the services and the innovation happening in-house at many of these companies.

The Development Process

In order to truly understand the dynamics of the CRO industry, it is important to understand their specialty: the medical product development process. While drug and medical device development is an increasingly costly and time-consuming endeavor that is always changing due to regulation, the overall structure of the

Figure 1: The traditional medical product timeline begins with the exploratory and preclinical phases, then moves into the clinical trials phase, and finally the manufacturing phase. While CROs started by outsourcing services to manage the clinical trials process, many have since expanded to include software and animal models for the exploratory and preclinical testing phases, as well as operational optimization services and more for the manufacturing of medical products. Although COVID-19 has caused an expedited and overlapping timeline, as shown below the traditional timeline, CROs are still needed, perhaps more than ever, for their expertise, experience, and ability to accelerate the timeline of clinical trials. In the future, CROs might expand their offerings even further or innovate on offerings already discussed to further integrate across the medical product industry. Source: Flickr

“The CRO industry was popularized as an easy way to cut down on costs and headaches caused by supply chain and clinical trial management in the drug discovery process.”


process is constant. The average time for a drug or device to be developed and gain FDA approval is about twelve years, not including the years of research put in beforehand to finalize the idea and secure internal approval to pursue development (Van Norman, 2016). The development process consists of four stages: discovery, preclinical research, clinical trials, and FDA review. The discovery stage consists of actually developing the product; in preclinical research, researchers conduct internal tests to estimate the efficacy and safety of their product. Once the researchers see signs of adequate safety and efficacy within a laboratory environment, the product is tested on real individuals in the clinical trials stage. Lastly, once the product has been tested on a sufficient number of individuals to establish its efficacy and safety, it is submitted for FDA review. Given the multiple steps and countless hours of research and analysis in each stage, the entire development process can often take over a decade to complete. Moreover, the vast majority of drugs never make it to FDA approval. For example, of 10,000 drugs that enter the discovery stage, on average only about 250 will be cleared to enter preclinical research, in which researchers conduct pharmacology and toxicology reports. Of those 250 drugs, only a portion enter clinical trials, with five reaching the final clinical trials stage. Eventually, only one, on average, will gain FDA approval (Dimachkie Masri et al., 2012).

“Many international CROs include the following offerings: exploratory research, animal and cellular models, clinical trial management, and manufacturing processes”


While CROs now operate over every stage of the development process, their specialty has always been facilitating the clinical trials phases. The clinical trials process itself can be broken into three phases. Phase I trials are intended to ensure the safety of the product to permit further testing and usually take around one year to complete. Phase I trials use only 15 to 80 patients, a smaller group relative to other phases, as researchers don’t want to expose too many participants to the product in case it causes adverse effects. For this reason, participants in every trial phase are selected extremely carefully: if there are any potential signs of characteristics or health conditions that might cause adverse effects from use of the medical product, volunteers are rejected from participation in the trial. Next, Phase II trials can use anywhere from 100 to 500 subjects, with the goal of assessing efficacy, necessary concentration, biological interactions, and more. Phase II trials usually last around two

years because researchers must determine how effective the drug is at different doses, what parts of the body are affected and in what way, and much more, to form a clearer idea of the product’s efficacy. Lastly, the product enters Phase III trials, a comprehensive review of both the safety and efficacy of the drug intended to build a strong case for FDA approval. Because this is the last step before regulatory approval, groups can consist of anywhere from 1,000 to 5,000 subjects, and trials can take one to four years depending on the complexity of the product and the conclusiveness of the initial results (Mohs & Greig, 2017).

Offerings of CROs

The development process is similar across drugs, devices, and other medical products that require FDA approval. Although the process seems straightforward at this high-level description, there are actually many complexities in each of the steps before and during the process, and CROs can assist with a variety of them. At first, CROs operated simply as a means of outsourcing clinical trials management, but, as they saw more demand for outsourcing across the process, they continued to expand their offerings (Getz et al., 2014). Many international CROs include the following offerings: exploratory research, animal and cellular models, clinical trial management, and manufacturing processes. The first offering is exploratory research. Before researchers can even think of testing a drug or medical device, they must first conceive and produce it in the discovery stage. This takes an immense amount of research and planning, and this is where CROs can come in to assist, helping research teams validate the intended method of bioanalysis of the product. CROs can also help with determining the intended stability of the product and ensuring that proper Investigational New Drug filing studies are planned and filed with the FDA. Because these processes deal with the development, or exploration, of the product, this is the “exploration” process (A. Miller, 2019). The next major offering that CROs provide is animal and cellular models in the preclinical phase. Especially for drug development, researchers must test their products on animals and human cells before they can test them on human subjects; because this process can be cumbersome and infeasible for many laboratories, CROs took the opportunity to offer


Figure 2: With the rise of the pandemic, telehealth has garnered a lot of attention throughout 2020. The technology allows physicians to consult, monitor, and potentially diagnose their patients remotely. Because of this widespread attention, telehealth has now been proposed to supplement many industries, including the CRO industry. By allowing CROs to monitor clinical trials participants remotely, telehealth can help CROs conduct trials even more efficiently and comprehensively than before. Source: Wikimedia Commons

to outsource the production of these animal models. Using gene editing techniques, CROs can alter the genomes of rabbits, rats, mice, human stem cells, and other models. The end goal of this editing is to simulate human phenotypes and the conditions of the diseases studied. If the genes are not accurately altered to model the real conditions of a disease, researchers would not be able to predict the efficacy of proposed medical products (Huang, 2019). The service that has been the “bread-and-butter” of CROs is clinical trials management. There are many complexities in the clinical trials process that make it extremely frustrating for researchers at pharmaceutical companies, academic institutions, and biotechnology companies to conduct trials themselves. Challenges include finding patients who meet the many eligibility requirements for the study, staying in contact with each patient throughout the duration of the trial, navigating the various regulations involved in running clinical trials, and paying the high costs of running these trials. Because CROs specialize in the management of clinical trials, they can run them more efficiently at lower costs. Through patient networks, they can match patients with clinical trials much more quickly, and they have even begun to develop methods of remote monitoring to better maintain connection with these patients through the duration of the trial. Further, CROs are also starting to provide data analysis services that give researchers instant, actionable insight into the results of


their trials (A. Miller, 2019). Lastly, many CROs provide services to assist in the manufacturing process of product development. These services can entail an evaluation and optimization of a firm’s entire drug or device discovery protocol, the design of machines to produce these products, consulting services to identify best practices for the firm to follow, and much more. Manufacturing services can also be applied to individual projects when a firm faces a particular challenge that could use the expertise of those who specialize in the discovery process. Whatever the method, the end goal is to ensure the most time- and cost-effective path is taken from FDA approval to introduction of the product to the market (Huang, 2019).

The Future of CROs Much of the work that CROs do seems routine, as if it has not changed in past decades and is not expected to in the near future. However, this would be an oversimplification of the large amount of innovation that CROs are pursuing both in-house and through partnerships and acquisitions (Huang, 2019). One of the largest sources of innovation in the CRO industry is in animal models. These animal models often require mutations in order to fit the needs of the research team, meaning that CROs need to change the gene expression of many of the models they send to researchers. This requirement makes CROs one of the primary testers of innovation in the gene editing

“One of the largest sources of innovation in the CRO industry is in animal models...”


landscape, particularly in CRISPR technology (Labant, 2020). For instance, Charles River Laboratories, one of the largest global CROs, announced in 2016 the introduction of in vivo and in vitro genome editing through a licensing partnership with the Broad Institute of MIT and Harvard. These editing techniques allow the firm to use gene knock-outs and knock-ins, processes that remove and add genome sequences in the target cell, to alter the phenotypic expression of the models far more effectively than before (Charles River Laboratories, 2016).

“The opportunity for at-home clinical trials has been present for years, even before the inception of telehealth platforms, but CROs have always been resistant to the change.”

Along with innovation in current service offerings, CROs are also looking to expand their reach beyond the services they have provided in the past. Beyond clinical trial recruitment, organizational optimization, and the other services researchers have traditionally demanded, the industry is looking ahead to what it expects will be needed. For instance, many CROs have now expanded their offerings to include data analytics services and integration of EHR data into clinical trial management (Landhuis, 2018). The opportunity for at-home clinical trials has been present for years, even before the inception of telehealth platforms, but CROs have always been resistant to the change. The reasons included the lack of a commonly accepted way to facilitate these types of trials, and the regulatory hurdles around telehealth, which would have taken significant money and time to overcome. Now, a combination of the desire to ensure participant safety, improve efficiency, and increase patient-monitoring capability could bring the entrance of telehealth far quicker than expected (Lahiri, 2013). Telehealth is the technology that enables medical professionals to connect with their patients virtually via video or other engagement platforms. By allowing researchers this remote communication capability, clinical trials can be conducted on a larger scale and much more efficiently. Just before the pandemic, IQVIA, one of the largest CROs in the world, introduced its Avacare Clinical Research Network, which connects clinical trial participants to research teams while also providing a patient engagement platform that allows for remote patient monitoring. Integrated with artificial intelligence, the network automatically matches patients with clinical trial leaders while also allowing for remote monitoring of progress, greatly improving the efficiency of the


entire clinical trials process from recruitment to analysis. Now, with the new effects of the pandemic, IQVIA expects to grow this technology in-house and through acquisitions in the telehealth industry in hopes of creating an entire home-trial environment, allowing recruitment, instruction, monitoring, and reporting all to be done from the comfort of a participant’s home (Adams, 2020). There is also concern regarding safety and ethical conduct in the industry, particularly in the way clinical trials are planned and managed; greater transparency in the CRO process could help address these problems. Because of the important position that CROs will play in the future, along with increased government attention to the industry as the political landscape refocuses on health, firms are projected to be far more open about their gene editing techniques, clinical trials protocols, and a myriad of other steps in the process through public reports to the FDA and other health organizations (Roberts et al., 2016).

Industry Outlook There are several key growth drivers of the CRO industry: rising research and development expenditure, demand for outsourcing solutions, demand for drugs, medical devices, and other biotechnology, and the growing number of elderly individuals. Historically, the industry has enjoyed robust, stable growth, and the outlook continues to be favorable for a number of reasons (A. Miller, 2019). First, the CRO industry is expected to benefit greatly from the ever-increasing discovery pipeline costs of products in the medical industry. These rising costs benefit the industry by increasing research and development expenditure, thereby increasing the commissions that CROs receive for their services, and they also put pricing pressure on research teams. That pressure encourages teams to find ways of increasing efficiency while decreasing costs, such as outsourcing certain activities to CROs (Foster & Malik, 2012). Therefore, the projected increase in costs, driven by the increasing complexity of diseases and the antimicrobial resistance of bacteria, will factor into the success of the CRO industry (A. Miller, 2019). Second, the number of elderly individuals


in the world is expected to increase in the coming decades, as baby boomers live beyond past median ages thanks to more advanced healthcare services and rising standards of living worldwide. In fact, by 2060, the median age of the United States is expected to be 43, a five-year increase from the current median age of 38 (Bureau, 2018). This shift in the median age is not specific to America; it is also clearly reflected in global demographics. Because the number of health problems sustained increases as individuals age, there will likely be a surge in demand for drugs, medical devices, and other biotechnology worldwide to treat these problems. This will in turn increase the number of products entering the discovery and clinical trial process, benefitting CROs (J. Miller, 2012).

Further, while the coronavirus pandemic has put many industries in jeopardy, the CRO industry has suffered only temporarily and is expected to benefit from long-term trends. Because of the closing of many businesses, including pharmaceutical companies and laboratories, CROs experienced a brief hiccup in revenue that is expected to hurt their total 2020 revenue and lower growth relative to 2019 (A. Miller, 2019). However, the expected increase in attention to health and treatment going forward, relating not only to the coronavirus but to all aspects of health, bodes very well for the CRO industry in the long run. This shift is expected to increase not only the demand for medical products but also the grants and other funding that drive research and development expenditures. Therefore, while the coronavirus pandemic spelled trouble for many industries, it likely signals opportunity for CROs (Margherita & Valeria, 2017).

Conclusion As seen, the world of CROs is not only complex but extremely dynamic. As the price of research and development of medical products rises, the demand for outsourcing solutions will only go up. Coupled with the increased attention that the health sector as a whole will receive in the future, it would be shocking if companies like PPD, Charles River Laboratories, PRA Health Sciences, and others did not take a more dominant position in the healthcare sector (Margherita & Valeria, 2017). However, with the saturation of competitors in the market, it will be up to these firms to continue to innovate on their current offerings and expand in order to avoid irrelevancy. This comes in the form of expanding into the digital sector through data analytics and telehealth, as well as further research on genome editing and embryonic stem cells. Regardless, it seems as though CROs have a winning strategy that will last the industry for years to come.

References

Adams, B. (2020, February 27). IQVIA launches new research network to better match patients to trials. FierceBiotech. https://www.fiercebiotech.com/cro/iqvia-launches-new-research-network-to-better-match-patients-to-trials

Bureau, U. C. (2018, March 13). Older People Projected to Outnumber Children. The United States Census Bureau. https://www.census.gov/newsroom/press-releases/2018/cb18-41-population-projections.html

Charles River Laboratories. (2016, December 1). Charles River Laboratories Demonstrates Expertise in CRISPR/Cas9 Genome Engineering Technology | Charles River Laboratories International, Inc. https://ir.criver.com/news-releases/news-release-details/charles-river-laboratories-demonstrates-expertise-crisprcas9/

Dimachkie Masri, M., Ramirez, B., Popescu, C., & Reggie, E. M. (2012). Contract research organizations: An industry analysis. International Journal of Pharmaceutical and Healthcare Marketing, 6(4), 336–350. https://doi.org/10.1108/17506121211283226

DiMasi, J. A., Grabowski, H. G., & Hansen, R. W. (2016). Innovation in the pharmaceutical industry: New estimates of R&D costs. Journal of Health Economics, 47, 20–33. https://doi.org/10.1016/j.jhealeco.2016.01.012

Foster, C., & Malik, A. Y. (2012). The Elephant in the (Board) Room: The Role of Contract Research Organizations in International Clinical Research. The American Journal of Bioethics, 12(11), 49–50. https://doi.org/10.1080/15265161.2012.719267

Getz, K. A., Lamberti, M. J., & Kaitin, K. I. (2014). Taking the Pulse of Strategic Outsourcing Relationships. Clinical Therapeutics, 36(10), 1349–1355. https://doi.org/10.1016/j.clinthera.2014.09.008

Huang, J. (2019). Contract Research Organizations Are Seeking Transformation in the Pharmaceutical Value Chain. ACS Medicinal Chemistry Letters, 10(5), 684–686. https://doi.org/10.1021/acsmedchemlett.9b00046

Labant, M. (2020, April 1). As Needs Change, the CRO Industry Adapts. GEN - Genetic Engineering and Biotechnology News. https://www.genengnews.com/insights/as-needs-change-the-cro-industry-adapts/

Lahiri, K. (2013). Telemedicine, e-Health and Health related IT enabled Services: The Indian Situation. Globsyn Management Journal; Calcutta, 7(1/2), 1–16.

Landhuis, E. (2018). Outsourcing is in. Nature, 556(7700), 263–265. https://doi.org/10.1038/d41586-018-04163-8

Margherita, B., & Valeria, L. (2017). The increasing role of


contract research organizations in the evolution of the biopharmaceutical industry. African Journal of Business Management, 11(18), 478–490. https://doi.org/10.5897/AJBM2017.8360

Miller, A. (2019, April). Contract Research Organizations. https://my.ibisworld.com/us/en/industry-specialized/od5708/about

Miller, J. (2012). Contract Services in 2012. Biopharm International; Monmouth Junction, 25(1), 18–19.

Mirowski, P., & Van Horn, R. (2005). The Contract Research Organization and the Commercialization of Scientific Research. Social Studies of Science, 35(4), 503–548. https://doi.org/10.1177/0306312705052103

Mohs, R. C., & Greig, N. H. (2017). Drug discovery and development: Role of basic biological research. Alzheimer’s & Dementia: Translational Research & Clinical Interventions, 3(4), 651–657. https://doi.org/10.1016/j.trci.2017.10.005

Ringel, M. S., Scannell, J. W., Baedeker, M., & Schulze, U. (2020). Breaking Eroom’s Law. Nature Reviews Drug Discovery. https://doi.org/10.1038/d41573-020-00059-3

Roberts, D. A., Kantarjian, H. M., & Steensma, D. P. (2016). Contract research organizations in oncology clinical research: Challenges and opportunities. Cancer, 122(10), 1476–1482. https://doi.org/10.1002/cncr.29994

Van Norman, G. A. (2016). Drugs, Devices, and the FDA: Part 1: An Overview of Approval Processes for Drugs. JACC: Basic to Translational Science, 1(3), 170–179. https://doi.org/10.1016/j.jacbts.2016.03.002





Preventative Medicine: The Key to Stopping Cancer in its Tracks BY DINA RABADI '22 Cover Image: The field of medicine is on the cusp of a paradigm shift towards preventative medicine. This paper examines a potential model of prevention and analysis for cancer, minimizing patient costs and improving patient response to the disease. Source: Nick Youngson, obtained from Alpha Stock Images, original at https://www.picpedia.org/medical/p/preventive-medicine.html


Introduction “For the next quantum leap, fundamentally different strategies have to be developed. The two immediate steps should be a shift from studying animals to studying humans and a shift from chasing after the last cancer cell to developing the means to detect the first cancer cell.” – Dr. Azra Raza, The First Cell: And the Human Costs of Pursuing Cancer to the Last (Raza, 2019, 48). According to the World Health Organization (WHO), cancer is the second leading cause of death globally, responsible for about one in every six deaths. Between 1975 and 2015, cancer mortality rates stagnated around 40% (Falzone et al., 2018). Why is it that, despite such great advancements in the field like immunotherapy, cancer still accounted for approximately 9.6 million deaths in 2018, making it one of the leading causes of death worldwide (World Health Organization)? Why is the triad of

surgery, radiotherapy, and chemotherapy, also known as the “slash, burn, poison” method, still the standard for treatment (Hunter, 2017)? Perhaps one reason lies in the fact that much of medicine is reactive rather than preventative. Physicians often find themselves facing a cancer that has already become malignant, and even when it is not malignant, the intricate and heterogeneous nature of a tumor presents a challenge in and of itself. This suggests a needed shift in the way researchers and clinicians view cancer, from reactive treatment to preventative interventions, such as biosensors informed by an international biomarker database.

Peto’s Paradox and How Cancer Affects Animals Why are humans so susceptible to cancer? Considering that humans have trillions of cells, and that each cell has billions of base pairs of DNA, mistakes are unavoidable (Tollis et al.,

Figure 1: This figure displays the global burden of disease, with cancer causing the second highest number of deaths worldwide. Source: Max Roser and Hannah Ritchie (2015). Published online at OurWorldInData.org.

2017). Therefore, it would make sense that organisms even larger than humans, such as elephants and whales, would be even more likely to get cancer. Surprisingly, elephants have about a 5% risk of developing cancer, much lower than the roughly 16% lifetime risk in humans according to the WHO (Tollis et al., 2017). This contradiction is known as Peto’s Paradox, named after epidemiologist Richard Peto, who studied tumor progression after carcinogen exposure in mice. He quickly realized that despite the fact that humans have one thousand times more cells than mice, the rates of cancer incidence in the two organisms were very similar (Tollis et al., 2017). There are several possible explanations of Peto’s Paradox. One possible mechanistic answer is evolution. First, one must examine p53, also known as the “guardian of the genome.” Humans have only one copy of p53 in their genome, and a mutation in a single TP53 allele indicates a 90% chance of developing cancer (Tollis et al., 2017). Interestingly, elephants have twenty copies of p53, which, when exposed to ionizing radiation, switch on the apoptotic pathway to destroy mutated cells (Tollis et al., 2017). When the number of copies of p53 was increased in mice, cancer risk was significantly reduced, suggesting that the number of copies of p53 an organism has plays a critical role in determining its risk of cancer (Tollis et al., 2017). However, whales do not have extra copies of any tumor suppressor gene, and whales have an even lower rate of cancer than elephants. This fact contradicts


the possible solution that extra copies of p53 are necessary to prevent cancer (Tollis et al., 2017). There are other exceptions to Peto’s Paradox: some smaller organisms, specifically naked mole rats and blind mole rats, have very low incidences of cancer (Tollis et al., 2017; Tidwell et al., 2017). This is due, respectively, to a very sensitive tumor-suppressor pathway and to over-proliferation that triggers necrotic cell death (Tollis et al., 2017). Another, perhaps more satisfying, explanation of Peto’s Paradox is the Warburg effect, as it explains the metabolic transition of a cancerous cell. Essentially, a cancerous cell alters its methods of metabolism, which permits the epigenetic regulation of gene expression and causes rapid turnover of metabolic substrates (Tidwell et al., 2017). Perhaps, then, larger organisms like whales developed slower metabolic rates to compensate for their incredibly high cell numbers. A slower metabolism reduces reactive oxygen species, which are very damaging to DNA and are a major cause of cancer (Tidwell et al., 2017). Indeed, it was found that elephants and whales have slower metabolisms than smaller animals (Tidwell et al., 2017). An additional factor that allows these organisms to survive for so long with low incidences of cancer is the lack of predation and other causes of death, which allows energy to be directed toward maintaining cells (Tidwell et al., 2017). It is also critical to acknowledge that humans are the only species to have significantly extended their lifespan well beyond reproductive years. Interestingly, organisms in captivity, such as tigers, also have a lifespan far past

“... it would make sense that organisms that are even larger than humans, such as elephants and whales, are even more likely to get cancer. Surprisingly, elephants have about a 5% risk of developing cancer, much lower than the 16% that a human will develop cancer according to the WHO."


reproductive years, and it has been found that they develop cancer at higher rates when compared to their wild relatives (Tidwell et al., 2017). This extension of life beyond reproductive years is another possible solution to Peto’s Paradox, since cancer is strongly associated with aging.
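The scaling argument behind the paradox can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the per-cell risk value and the cell counts are rough assumptions, not measured quantities. It shows that if every cell carried the same independent chance of turning cancerous, lifetime risk would rise steeply with cell count, which is exactly what is not observed across species.

```python
# Illustrative back-of-the-envelope model of Peto's Paradox.
# Assumes (unrealistically) that each cell independently becomes
# cancerous with the same tiny lifetime probability p.
import math

def naive_lifetime_risk(n_cells: float, p_per_cell: float) -> float:
    """P(at least one cell turns cancerous) = 1 - (1 - p)^N."""
    # log1p/expm1 keep the arithmetic stable for tiny p and huge N.
    return -math.expm1(n_cells * math.log1p(-p_per_cell))

P_PER_CELL = 1e-16              # arbitrary illustrative value
ORGANISMS = {                   # rough cell-count orders of magnitude
    "mouse": 3e9,
    "human": 3e13,              # ~1,000x a mouse, per the text
    "blue whale": 1e17,
}

for name, cells in ORGANISMS.items():
    risk = naive_lifetime_risk(cells, P_PER_CELL)
    print(f"{name:>10}: naive lifetime risk = {risk:.4f}")
```

Under this naive model a whale's risk approaches certainty while a mouse's stays negligible; the observed similarity in cancer incidence across body sizes is the paradox the text describes.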

How does cancer work? “There are three main factors that play a major role in tumor biology and development: genetic mutations, the immune system, and epigenetics."

There are three main factors that play a major role in tumor biology and development: genetic mutations, the immune system, and epigenetics. Two classifications of genes regulate cancer growth and are critical in preventing cancer: proto-oncogenes and tumor suppressor genes. Proto-oncogenes provide the ‘gas’ that fuels the cell’s growth. However, when mutated, a proto-oncogene becomes an oncogene, causing the cell to grow out of control, much like a gas pedal stuck to the floor. Tumor suppressor genes provide the ‘brake’ for cell growth, meaning that they stop cell growth when necessary and initiate apoptosis if there are errors in the cell. If there is a mutation in a tumor suppressor gene, cell division will also run out of control, causing cancer. Mutations in either or both types of genes are major causes of cancer. There are many other ways in which gene expression is regulated. For example, microRNAs, small non-coding RNAs that terminate protein translation at a specific point, can act as oncogenes or tumor suppressors under different conditions, and they are found to be highly dysfunctional in cancer (Peng and Croce, 2015). MicroRNAs affect many characteristics of cancer growth and spread, so recently there have been more studies exploring their role (Peng and Croce, 2015). The immune system plays another critical role in how to treat and prevent cancer. Cancer-associated inflammation contributes to genomic instability, epigenetic modification, enhancement of anti-apoptotic pathways, and other methods by which a tumor successfully establishes itself within the body (Gonzalez et al., 2018). Initially, it may seem that the chronic inflammation associated with tumor progression draws the immune system in to kill the cancer.
In reality, inflammatory immune cells, such as macrophages, neutrophils, dendritic cells, and myeloid-derived suppressor cells (MDSCs), can be manipulated to play a tumor-promoting role in the tumor microenvironment, thereby protecting the tumor from the immune system (Gonzalez et al., 2018). Another way to describe this phenomenon is that the cancer creates its own micro-immune system by manipulating


immune cells, allowing the tumor to evade the body’s immune system. While weaker cancer cells are in fact eliminated by the immune system, the stronger ones survive and give rise to further generations that can avoid immune detection and build immune tolerance (Gonzalez et al., 2018). Examining just one of the cell types that leads to immune tolerance sheds light on how the others generally work when transformed from effector cells to tumor-protecting cells. For example, macrophages play a major role in the innate, or immediate, immune response and are among the first responders to infection and injury. However, if recruited by the tumor, macrophages transform into tumor-associated macrophages (TAMs), which attack the body’s immune cells. Unsurprisingly, higher levels of TAMs are associated with poorer prognosis and lower overall survival rates, so they have become a potential clinical target (Li et al., 2019). Furthermore, there are many critical environmental factors that can increase a person’s likelihood of developing cancer. The importance of understanding how epigenetic factors regulate the reading of the genome cannot be overstated in cancer prevention. Examples of epigenetic causes of the disease include tobacco, alcohol, diet, and pollution, all of which induce low levels of inflammation and lead to an elevated cancer risk over time (Gonzalez et al., 2018). Epigenetic changes occur in humans during all stages of development, from the early embryonic stages through adulthood (Roberti et al., 2019). Exposure to certain environmental factors can change how DNA is transcribed and later expressed as protein. These modifications typically occur through at least one of the following regulators of gene expression: DNA methylation, histone modification, and non-coding RNAs (Roberti et al., 2019). These modifications are often interdependent, which makes identification of the specific cause of the disease even more challenging.
Clearly, tumor heterogeneity makes identifying treatments extremely complex.

Current Treatments The “slash, burn, poison” model is still the standard of cancer care. As described earlier, this method entails surgery, radiotherapy, and chemotherapy. While surgery can remove a mass, if any cancerous cells remain, cancer recurs. Radiotherapy is effective, but it kills cells nonspecifically, so a patient’s healthy cells are


Figure 2: Genes can be turned on and off based on methylation or acetylation, as seen above, which can impact whether DNA is open or condensed. Source: Wikimedia Commons

killed along with the cancer cells. This treatment therefore causes drastic side effects and decreases quality of life for the individual. Additionally, excess radiation itself can cause cancer. Chemotherapy has the same issue of nonspecific cell killing, causing severe side effects in patients as well. Despite the development and implementation of new treatments, cancer mortality rates have remained mostly unchanged over the past several decades. Newer treatments, such as checkpoint inhibitors, cancer vaccines, chimeric antigen receptor (CAR) T-cells, and antibody-drug conjugates (ADCs), are being used more frequently in the clinic, showing variably promising results depending on the patient. Immunotherapy has been hailed as a great stride in the history of cancer treatments and even as a fourth pillar added onto the “slash, burn, poison” method (Hunter, 2017). One type of immunotherapy is the use of checkpoint inhibitors. One of the most well-known checkpoint inhibitors is anti-programmed death-1 (anti-PD-1). PD-1 is a receptor expressed on T cells; by engaging it, cancer cells can deflect the immune system. Anti-PD-1 blocks this interaction and allows T-cells to better kill cancer cells. While this checkpoint inhibitor has shown immense promise in mouse models and some patients, much is still to be learned about the mechanisms by which the checkpoint functions (Hunter, 2017). Essentially, the efficacy of these checkpoint inhibitors is highly variable, with relatively better results before metastasis.


There are other types of promising immunotherapies that are becoming more widely used or are undergoing clinical trials. Personalized anti-cancer vaccines may help improve responses in patients whose tumors fail to respond to checkpoint inhibition. These vaccines are not currently preventative, as they work by targeting T cells, dendritic cells, peptides, DNA, or whole cells (Thomas and Prendergast, 2016). CAR T-cell therapy is another immunotherapy receiving much recognition for its potential. These engineered T cells bind to a tumor-associated antigen, causing the death of the tumor cells (Golubovskaya, 2017). While quite promising in clinical trials, obstacles arise in the tumor microenvironment, where suppressive MDSCs continue to protect the tumor. Strategies such as chemotherapy and checkpoint inhibition can work to reduce this suppressive factor (Baruch et al., 2017). Some serious toxicities, such as anaphylaxis and cytokine release syndrome, can occur, so significant research must be done to evaluate the clinical safety of this therapy (Baruch et al., 2017). Finally, ADCs are also a promising immunotherapy, essentially combining chemotherapy and immunotherapy (Pondé et al., 2019). The main concern with ADCs is off-target toxicity, in which cytotoxic agents released into the bloodstream kill healthy cells (Baruch et al., 2017). As with the other new and upcoming immunotherapies, more research must be done to minimize serious side effects and enhance efficacy in patients. These methods are certainly promising, but their focus is curative rather than preventative.

“Newer treatments, such as checkpoint inhibitors, cancer vaccines, chimeric antigen receptor (CAR) T-cells, and antibody-drug conjugates (ADCs), are being used more frequently in the clinic, showing variably promising results depending on the patient."


Figure 3: This is an example of what a lung-on-a-chip looks like, which would be similar to the cancer-on-a-chip model. Source: Wikimedia Commons

While many new therapies appear promising, over 90% will likely fail during clinical trials (Goossens et al., 2015). Furthermore, a major drawback that cannot be overstated is the exorbitant cost of the treatments that are available. A single year of a new cancer treatment in the United States costs at least $100,000, and costs increased by 13% each year from 2000 to 2015 (Nakashima, 27). These massive costs must be minimized in order to make treatment accessible to all.

The Human Cancer Biomarker Project “Broadly speaking, a biomarker is defined as any biological substance that is indicative of disease...”

Thus far, the problems in understanding and treating cancer stem from the complex factors that cause a tumor to grow: genetics, the immune system, and epigenetics. Challenges with cancer treatment lie in the variable success rates of treatments, as well as the physical, mental, and financial costs of treatment. Here, the Human Cancer Biomarker Project is proposed: an attempt to create an affordable, preventative solution for evading cancer. The project focuses on scaling up efforts to determine cancer biomarkers and precursors, and it also seeks to ensure that all of this information is shared in a large, interoperable database. This was done for the Human Genome Project, a thirteen-year international effort to discover every gene in humans; a human cancer biomarker project could be equally groundbreaking. After discovery, these biomarkers can be sorted in several ways, such as by type of cancer, genetics, risk factors, likelihood of metastasis, and prognostic factors, in order to determine the best treatment plan for a patient. There are thousands of publications that discuss such biomarkers. Broadly speaking, a biomarker is defined as any biological substance that is indicative of disease. According to the WHO, biomarkers fall into the following categories: predictive, prognostic, and diagnostic (Goossens et al., 2015). There are already several cancer biomarker databases, some more specific than others. For example, the National Cancer Institute has a large database that can filter results by organ, then lists aliases, a brief description of the biomarker itself, studies, publications, and resources for each biomarker. There also exist more specialized biomarker databases: some are based on specific types of mutations, while others focus on a specific cancer. For example, BioMuta focuses on cancer-associated single-nucleotide variations, while the LACEBio project focuses specifically on prognostic biomarkers in patients with non-small-cell lung


cancer (Dingerdissen et al., 2018; Seymour et al., 2019). Clearly, significant research is being done on cancer biomarkers at many levels, but this research is not yet comprehensive. What makes the Human Cancer Biomarker Project different is that it would be a collaborative, international project with a unified goal, clear direction, and unparalleled focus, resulting in a much more comprehensive database, in both breadth and depth. A major challenge to the use of biomarkers is validation and clinical implementation, considering that only approximately 0.1% of biomarkers are successfully translated to the clinic (Goossens et al., 2015). Too often, biomarker discovery is at best an afterthought of experiments, but this challenge can be overcome if the search for biomarkers becomes a worldwide goal. Carefully designed studies and high-quality sample sizes can certainly raise the currently low percentage of biomarkers that reach the clinic. If the world can place greater emphasis on the identification of human cancer biomarkers, then those biomarkers can be implemented translationally as the preventative solution to cancer.
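One way to picture the interoperable database the project calls for is as a collection of structured biomarker records, filterable along the axes mentioned above (cancer type, WHO category, and so on). The sketch below is purely hypothetical: the field names and example entries are invented for illustration and do not reflect the schema of any existing database.

```python
# Hypothetical sketch of a filterable biomarker record store.
# Field names and sample entries are invented for illustration.
from dataclasses import dataclass

@dataclass
class BiomarkerRecord:
    name: str
    cancer_type: str    # e.g. organ/site, as in the NCI database
    category: str       # WHO categories: predictive, prognostic, diagnostic
    description: str

RECORDS = [
    BiomarkerRecord("marker-A", "breast", "predictive", "example entry"),
    BiomarkerRecord("marker-B", "lung", "prognostic", "example entry"),
    BiomarkerRecord("marker-C", "breast", "diagnostic", "example entry"),
]

def query(records, cancer_type=None, category=None):
    """Filter records on any combination of axes; None means 'any'."""
    return [r for r in records
            if (cancer_type is None or r.cancer_type == cancer_type)
            and (category is None or r.category == category)]

# e.g. all breast-cancer biomarkers, regardless of WHO category:
breast_markers = query(RECORDS, cancer_type="breast")
print([r.name for r in breast_markers])
```

In a real interoperable database the same idea would be served by a shared schema and query API rather than in-memory records, but the sorting axes would be the same.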

Implementation of Biosensors

A biosensor equipped with a global database of cancer biomarkers and high biological sensitivity to subtle yet cancerous changes could be used to catch a tumor before it can evade therapies or metastasize. The biosensor could be programmed with the Human Cancer Biomarker Project’s database and optimized to detect the most frequent and broad indicators of cancer. Furthermore, each person’s biosensor could be personalized to their genome and environment. An example


of genome-based personalization could be for BRCA-1, a key gene in breast cancer susceptibility, as its mutation is found in more than 80% of inherited breast and ovarian cancers (Yang et al., 2016). A modification to the biosensor would allow it to detect levels of targets of the BRCA-1 gene. Such modifications can also utilize epigenetic signatures as potent biomarkers for early detection, noninvasive screening, prognosis, and prediction of therapeutic response. Epigenetic therapy, based on a biomarker test, is another way of sensitizing tumors to current treatments, allowing the cancer to be stopped in its earlier stages (Roberti et al., 2019). Both types of personalization, genetic and epigenetic, will lead to unheard-of levels of sensitivity and specificity, optimized to the patient and their needs. This is hypothetically within our technological capabilities, and it is both achievable and affordable. There are numerous types of available biosensors, including fluorescent biosensors and electrochemical DNA biosensors, among many others, all of which are being continuously studied and tested. In particular, fabrication of low-cost, highly sensitive, stable, and specific sensors is critical in the movement toward preventative care (Sokolov et al., 2009). The use of organic materials and technology like 3D printing will allow for the eventual accessibility, affordability, and commercialization of the biosensor. Another approach involves nanomaterial-based biosensors, such as those used to detect ovarian cancer, which are advantageous due to their high levels of sensitivity and selectivity (Sha and Badhulika, 2020). For lung cancer, a cancer-on-a-chip platform has been developed, which can monitor various tumor characteristics in real time and consequently optimize subsequent treatment (Khalid et al., 2020). Thus, each individual could have a set of cancer-on-a-chip devices that mimic that person’s genetics, lifestyle, and other present conditions.
This could help physicians screen at-risk patients early, and these chips could be used to test the effects of various therapies to determine which would be most beneficial for that individual and their cancer.

Conclusion – The Future for Biosensors and Preventative Cancer Care

So, what is required to put these two ideas – a comprehensive biomarker panel and a personalized predictive biosensor – together? Interdisciplinary research involving data science, computer science, basic science, clinical research, engineering, machine learning, and more. With this technology, a patient could monitor themselves, and their physician could also monitor them in real time. This way, the moment conditions within the body go awry, both the patient and their physician can be notified to determine the best course of action. This method will allow physicians to detect cancer far earlier than ever before, giving patients dramatically higher chances of survival. The earlier that cancer is detected, the more efficacious our current treatments will be at eliminating the tumor.

“For lung cancer, a cancer-on-a-chip platform has been developed, which can monitor various tumor characteristics in real time and consequently optimize subsequent treatment.”

It is time to do away with the “slash, burn, poison” method. If the world wants to rid itself of cancer, it must place preventative cancer care at the forefront of research and funding, and scientists must work collaboratively. “The toll of human suffering should serve as a tool with which to pry open new ways of critical thinking, a grander global vision, a positive outlook toward our world… the future is in preventing cancer by identifying the earliest markers of the first cancer cell rather than chasing after the last” (Raza, 2019, p. 290).

References

Baruch, Erez Nissim, et al. Adoptive T Cell Therapy: An Overview of Obstacles and Opportunities. Cancer, 123(S11), 2154–62. doi:10.1002/cncr.30491.
Dingerdissen, Hayley M., et al. BioMuta and BioXpress: Mutation and Expression Knowledgebases for Cancer Biomarker Discovery. Nucleic Acids Research, 46(D1), D1128–36. doi:10.1093/nar/gkx907.
Falzone, Luca, et al. Evolution of Cancer Pharmacological Treatments at the Turn of the Third Millennium. Frontiers in Pharmacology, 9(1300), 2018. doi:10.3389/fphar.2018.01300.
Golubovskaya, Vita. CAR-T Cell Therapy: From the Bench to the Bedside. Cancers, 9(11), 150. doi:10.3390/cancers9110150.
Gonzalez, Hugo, et al. Roles of the Immune System in Cancer: From Tumor Initiation to Metastatic Progression. Genes & Development, 32(19–20), 1267–84. doi:10.1101/gad.314617.118.


Goossens, Nicolas, et al. Cancer Biomarker Discovery and Validation. Translational Cancer Research, 4(3), 256–69. doi:10.3978/j.issn.2218-676X.2015.06.04.
Hunter, Philip. The Fourth Pillar. EMBO Reports, 18(11), 1889–92. doi:10.15252/embr.201745172.
Khalid, Muhammad Asad Ullah, et al. A Lung Cancer-on-Chip Platform with Integrated Biosensors for Physiological Monitoring and Toxicity Assessment. Biochemical Engineering Journal, 155, 107469. doi:10.1016/j.bej.2019.107469.
Li, Xiaolei, et al. Harnessing Tumor-Associated Macrophages as Aids for Cancer Immunotherapy. Molecular Cancer, 18(1), 177. doi:10.1186/s12943-019-1102-3.
Nakashima, Lynne. Evolution of Cancer Treatment and Evolving Challenges. Healthcare Management Forum, 31(1), 26–28. doi:10.1177/0840470417722568.
Peng, Yong, and Carlo M. Croce. The Role of MicroRNAs in Human Cancer. Signal Transduction and Targeted Therapy, 1(1), 1–9. doi:10.1038/sigtrans.2015.4.
Pondé, Noam, et al. Antibody-Drug Conjugates in Breast Cancer: A Comprehensive Review. Current Treatment Options in Oncology, 20(5), 37. doi:10.1007/s11864-019-0633-6.
Raza, Azra. The First Cell: And the Human Costs of Pursuing Cancer to the Last. Basic Books, 2019.
Roberti, Annalisa, et al. Epigenetics in Cancer Therapy and Nanomedicine. Clinical Epigenetics, 11(1), 81. doi:10.1186/s13148-019-0675-4.
Seymour, Lesley, et al. LACE-Bio: Validation of Predictive and/or Prognostic Immunohistochemistry/Histochemistry-Based Biomarkers in Resected Non–Small-Cell Lung Cancer. Clinical Lung Cancer, 20(2), 66–73. doi:10.1016/j.cllc.2018.10.001.
Sha, Rinky, and Sushmee Badhulika. Recent Advancements in Fabrication of Nanomaterial Based Biosensors for Diagnosis of Ovarian Cancer: A Comprehensive Review. Microchimica Acta, 187(3), 181. doi:10.1007/s00604-020-4152-8.
Sokolov, Anatoliy N., et al. Fabrication of Low-Cost Electronic Biosensors. Materials Today, 12(9), 12–20. doi:10.1016/S1369-7021(09)70247-0.
Thomas, Sunil, and George C. Prendergast. Cancer Vaccines: A Brief Overview. In: Thomas S. (ed.) Vaccine Design. Methods in Molecular Biology, 1403, 755–61. doi:10.1007/978-1-4939-3387-7_43.
Tidwell, Tia R., et al. Aging, Metabolism, and Cancer Development: From Peto’s Paradox to the Warburg Effect. Aging and Disease, 8(5), 662–76. doi:10.14336/AD.2017.0713.
Tollis, Marc, et al. Peto’s Paradox: How Has Evolution Solved the Problem of Cancer Prevention? BMC Biology, 15. doi:10.1186/s12915-017-0401-7.
Yang, Hui, et al. In Situ Hybridization Chain Reaction Mediated Ultrasensitive Enzyme-Free and Conjugation-Free Electrochemical Genosensor for BRCA-1 Gene in Complex Matrices. Biosensors and Bioelectronics, 80, 450–55. doi:10.1016/j.bios.2016.02.011.





Challenges and Opportunities in Providing Palliative Care to COVID-19 Patients

BY EMILY ZHANG '23

Cover Image: A palliative care physician holds the hands of a patient. Source: Flickr

Introduction

Palliative care is a method of treatment used to prevent and relieve suffering for patients with serious life-threatening illnesses ("WHO | WHO Definition of Palliative Care," n.d.). While traditional medical approaches seek to postpone or “fight” death with all possible means regardless of how painful these treatments might be for patients, palliative care strives to minimize patients’ pain and suffering at the end of their lives. While palliative care does not intend to postpone death, it is also different from physician-assisted suicide, as it does not intend to hasten death either. Instead, palliative care both “affirms life and regards death as normal processes” without forcefully pushing patients in either direction ("WHO | WHO Definition of Palliative Care," n.d.). As the American surgeon and writer Dr. Atul Gawande describes in Being Mortal: Illness, Medicine, and What


Matters in the End, while assisted suicide focuses on “a good death,” palliative care focuses on “a good life to the very end” (2014). Palliative care not only includes medical treatments that focus on patients’ direct physical health but also integrates psychosocial and spiritual support to ensure a satisfying and comfortable end-of-life experience. This support often involves a comprehensive network of physicians, nurses, social workers, psychological and spiritual counselors, and family members (Chidiac et al., 2020). Palliative care is often regarded as an essential component of a complete medical system, especially during mass-casualty emergencies like the COVID-19 pandemic, when so many patients are experiencing the end of their lives and are in need of the pain and symptom relief that palliative care provides (Farrell et al., 2020). However, the scarcity of medical resources during this emergency as well as the required

Figure 1: Four components of palliative care: biological, psychological, social, and spiritual. Image created by author

physical isolation caused by the virus’ highly transmissible nature adds great challenges to providing comprehensive palliative care to COVID-19 patients. Nevertheless, many care providers are highly aware of the importance of palliative care for COVID-19 patients and are integrating innovative technologies to improve the feasibility of palliative care.

Why is Palliative Care Important?

For patients infected by COVID-19, palliative care becomes increasingly crucial due to the highly transmissible nature of the virus and the fatality of COVID-19 infection. According to the World Health Organization (WHO), there are three major components of palliative care: physical, psychosocial, and spiritual ("WHO | WHO Definition of Palliative Care," n.d.). All three categories of support are relevant and important for patients suffering from COVID-19 infection. Firstly, palliative medicine can effectively relieve the physical pain of COVID-19 patients. The high symptom burdens of COVID-19 patients are traditionally managed through mechanical ventilation; however, invasive ventilation that requires intubation is often considered a last resort that is not suitable for every patient because it can be uncomfortable and dangerous. For example, for unstable or end-of-life patients who are unlikely to recover from COVID-19, continuing oxygen therapy or invasive interventions may bring them more burden and discomfort (Fusi-Schmidhauser et al., 2020). In this case, palliative methods would call for pharmaceuticals such as diazepam or opioids, which are more effective at relieving patients’ symptoms and pain in


conjunction with other psychosocial and spiritual support systems from their family or other professionals. Even for stable patients who have a high chance of recovering, breathing difficulties, excessive shivering, and anxiety can be effectively managed by morphine, lorazepam, or other less aggressive, pain-relieving palliations when mechanical ventilation is not suitable or beneficial (Fusi-Schmidhauser et al., 2020). On the psychosocial front, Guo et al. at Shanghai Jiao Tong University School of Medicine in China found that, compared to non-COVID-19 patients, hospitalized COVID-19 patients suffer from significantly higher levels of depression, anxiety, and post-traumatic stress symptoms – and this does not even include critical COVID-19 patients who are not likely to recover (Guo et al., 2020). The social stigma and pressure of being infected and potentially spreading the virus to more people, as well as the uncertainty of their individual disease progression, trigger a great amount of fear, guilt, and helplessness among infected patients (Guo et al., 2020).

"... for unstable or end-of-life patients who are unlikely to recover from COVID-19, continuing oxygen therapy or invasive interventions may bring them more burden and discomfort.”

Similar to patients’ psychosocial needs, their spiritual needs also become increasingly important. During the pandemic, patients infected by COVID-19 often suffer from excessive isolation, loneliness, and vulnerability, since those infected during a pandemic are often hospitalized in isolation rather than having the option of staying at home or in a hospice to receive palliative care (Ferrell et al., 2020). As a result, many forms of social and spiritual interaction become unavailable, which makes spiritual care that can be easily practiced in a hospital setting

Figure 2: A senior man uses virtual reality goggles. Source: Shutterstock

increasingly crucial for patients facing death. Given the pressing need for spiritual assessment and care, Ferrell et al. recommend that all health care providers be trained to provide spiritual care during such emergencies (Ferrell et al., 2020).

Is Palliative Care Feasible At All?

“New technologies give us new ways to simulate our palliative care settings in normal times. For example, video conferencing and virtual reality (VR) technologies are gaining popularity in palliative care for COVID-19 patients.”


Providing palliative care to patients during a pandemic poses unique challenges. Since the virus is highly contagious, COVID-19 patients are often treated in physical isolation to limit transmission to others. In this situation, palliative care providers and patients’ families can visit COVID-19 patients only infrequently and under strict protective measures. Since palliative care strives to support patients to live as fully and actively as possible until death, it normally occurs at the patient’s home, a hospice, or another less institutionalized place than a hospital to ensure the autonomy of the patient’s life (Gawande, 2014). However, since most of society is quarantined and/or social distancing, palliative care for COVID-19 is difficult to implement. Moreover, since personal protective equipment is needed by physicians at all times, patients and physicians are separated by an additional barrier, making it even harder for either side to form a close relationship with the other. Therefore, it may seem impossible to provide high-quality palliative care amidst this

time, since even the best palliative care practices right now cannot fully provide holistic personal care, family support, and an active, autonomous end-of-life experience. Fortunately, palliative care physicians have been seeking innovative measures to counter this additional challenge posed by physical isolation. New technologies give us new ways to simulate our palliative care settings in normal times. For example, video conferencing and virtual reality (VR) technologies are gaining popularity in palliative care for COVID-19 patients. Currently, care providers mostly use video conferencing tools to facilitate palliative care for patients suffering from COVID-19 who are under physical isolation (Wang et al., 2020). In this way, patients can both receive palliative care from medical professionals and connect with family and friends to meet their social needs without having to worry about virus transmission (Chua et al., 2020). However, while video conferencing provides a tool for verbal and visual communication, it cannot fulfill patients’ need for physical interactions and real-life experiences. Therefore, VR technology has gained attention in the field of palliative care as the next best alternative to the actual physical interactions and support that are needed most by patients in end-of-life stages (Wang et al., 2020). For example, VR technology can help end-of-life patients simulate vacations, outdoor

Figure 3: A doctor wearing personal protective equipment in a hospital in Italy during the COVID-19 pandemic. Source: Wikimedia Commons

settings, memorable places, or facilitate social interactions for patients at home, in the hospital, or in physical isolation (Baños et al., 2013; Niki et al., 2019). Wang et al. from the National University Hospital System in Singapore thus recommend using VR technology to help provide COVID-19 patients with both psychosocial and physical palliation (Wang et al., 2020). Psychosocially, VR technology is able to support patients in palliative care as a source of distraction, entertainment, and relaxation (Baños et al., 2013). In a clinical trial done at Spain’s Universitat de València, patients under palliative care were led through an immersive experience of either an urban park or a rural forest in virtual reality (Baños et al., 2013). Patients in the trial reported significant improvements in satisfaction after the VR intervention, and the usage of the VR devices was “minimally uncomfortable,” suggesting that there was no significant psychological or physical rejection of the VR device among patients receiving this novel form of palliative care treatment (Baños et al., 2013). Physically, the adoption of VR technology can relieve patients’ pain. A study done at Osaka University found that using VR with palliative care patients can relieve their symptom burdens, such as pain, shortness of breath, and drowsiness (Niki et al., 2019). Interestingly, this study found that participants who went to a memorable place in VR tend to

experience more physical benefits than those who traveled to a place they had never visited, showing that the physical benefits of virtual reality can be amplified by triggering patients’ positive memories (Niki et al., 2019). Therefore, the positive psychological effects brought by virtual reality can improve patients’ physical experiences and help relieve their suffering and pain. Thus, both video conferencing and VR are viable methods for providing palliative care in physical isolation during the COVID-19 pandemic.

Luxury or Necessity?

Palliative care is not always prioritized in healthcare delivery during emergencies like the COVID-19 pandemic, and people have different opinions on its priority level. Even though many healthcare professionals argue that palliative care is important for COVID-19 patients and that current challenges to providing it can be solved by new technology, many people still would not prioritize palliative care for COVID-19 patients, especially for patients who are unlikely to recover. Critics claim that such methods are a waste of critically needed medical resources.

“A study done at Osaka University found that using VR with palliative care patients can relieve their symptom burdens, such as pain, shortness of breath, and drowsiness.”

For instance, the World Health Organization issued guidance this March on maintaining essential health services during the pandemic. While it highlighted the essential maintenance of maternal care, immunization, chronic


diseases, and many other methods of care, there was no mention of palliative care as one of those essential health services (World Health Organization, 2020a). Only in the updated version issued in June did the WHO include a short section on the safe delivery of palliative care during the pandemic (World Health Organization, 2020b).

“... providing palliative care can actually increase the efficiency of our other medical resources because it prevents resource loss from insisting on other more aggressive and resource-consuming medical treatments on critical patients when those treatments are likely to be futile.”

It generally seems that during mass-casualty events like the COVID-19 pandemic, when medical resources become extremely scarce, the primary goal of healthcare becomes utilitarian: saving the greatest number of people eclipses the need to provide the best individual care for each patient (Matzo et al., 2009). This utilitarian view could lead providers to focus resources on healthier patients rather than using scarce resources on those who are unlikely to survive. Since the total amount of medical resources becomes increasingly limited as the pandemic spreads, critics have suggested adopting an altered paradigm of medical care in which palliative care is no longer a necessity for all patients but only a luxury of lower priority (Rosoff, 2010). There are three main ways that advocates for palliative care counter these utilitarian concerns. First, many believe that palliative care should be granted a high priority simply out of humanitarian considerations (Nouvet et al., 2018). Though in such a mass-casualty context the limitation on medical resources requires a coordinated response to save as many lives as possible, people in a civil society demand a secondary goal: supporting the quality of life of people whose lives are shortened unexpectedly by the crisis (Matzo et al., 2009). From this point of view, palliative care ought to be provided as much as possible even during emergencies, to preserve not only quantity of life but also quality. Second, advocates argue that additional human resources can be mobilized during emergencies to sustain palliative care for end-of-life COVID-19 patients.
For instance, emergency palliative care can involve the mobilization of personnel beyond traditional palliative care physicians, such as physicians and nurses who did not previously specialize in palliative care or non-clinician volunteers who can be trained to provide basic palliative care (Matzo et al., 2009). Moreover, critical COVID-19 patients near the end of life can also be directed to alternative care sites, such as hospices or other non-hospital settings, to avoid occupying


hospital resources needed by patients who are more likely to recover, though specific protocols and training are needed before these new staff can provide palliative care (Matzo et al., 2009). Finally, proponents argue that palliative care does not mean taking resources from patients who are more likely to survive and wasting them on people destined to die. Rather, providing palliative care can actually increase the efficiency of our other medical resources because it prevents resource loss from insisting on other more aggressive and resource-consuming medical treatments on critical patients when those treatments are likely to be futile (Powell et al., 2017). Therefore, if the option of palliative care is presented to both the physician and the patient, not only can the patient choose a better, less painful end-of-life experience, but the physician can also choose a more cost-effective approach that helps the patient while conserving other resources.

Conclusion

While palliative care can provide important physical, psychosocial, and spiritual support for COVID-19 patients at the end of their lives, the challenges posed by the pandemic have limited its beneficial effects. Physical isolation and ethical objections to providing palliative care to end-of-life patients amid resource scarcity in particular make it harder for palliative care to be labeled a priority during the COVID-19 pandemic. Nevertheless, VR technology now enables palliative care physicians to address the problem of physical isolation by continuing to deliver psychosocial support and physical pain relief. As for the ethical dilemma in resource distribution, even though the conflicting interests of different sectors of the healthcare system may seem insurmountable, healthcare providers can break these macroscopic dilemmas down into small, individual decisions between physicians and patients in the specific context of each care center, countering this challenge and providing more effective and desirable treatments to their patients.

References

Baños, R. M., Espinoza, M., García-Palacios, A., Cervera, J. M., Esquerdo, G., Barrajón, E., & Botella, C. (2013). A positive psychological intervention using virtual reality for patients with advanced cancer in a hospital setting: A pilot study to assess feasibility. Supportive Care in Cancer, 21(1), 263–270. https://doi.org/10.1007/s00520-012-1520-x

Chidiac, C., Feuer, D., Naismith, J., Flatley, M., & Preston, N. (2020). Emergency Palliative Care Planning and Support in a COVID-19 Pandemic. Journal of Palliative Medicine, 23(6), 752–753. https://doi.org/10.1089/jpm.2020.0195

Chua, I. S., Jackson, V., & Kamdar, M. (2020). Webside Manner during the COVID-19 Pandemic: Maintaining Human Connection during Virtual Visits. Journal of Palliative Medicine. https://doi.org/10.1089/jpm.2020.0298

Farrell, T. W., Ferrante, L. E., Brown, T., Francis, L., Widera, E., Rhodes, R., Rosen, T., Hwang, U., Witt, L. J., Thothala, N., Liu, S. W., Vitale, C. A., Braun, U. K., Stephens, C., & Saliba, D. (2020). AGS Position Statement: Resource Allocation Strategies and Age-Related Considerations in the COVID-19 Era and Beyond. Journal of the American Geriatrics Society, 68(6), 1136–1142. https://doi.org/10.1111/jgs.16537

Ferrell, B. R., Handzo, G., Picchi, T., Puchalski, C., & Rosa, W. E. (2020). The Urgency of Spiritual Care: COVID-19 and the Critical Need for Whole-Person Palliation. Journal of Pain and Symptom Management. https://doi.org/10.1016/j.jpainsymman.2020.06.034

Fusi-Schmidhauser, T., Preston, N. J., Keller, N., & Gamondi, C. (2020). Conservative Management of COVID-19 Patients: Emergency Palliative Care in Action. Journal of Pain and Symptom Management, 60(1), e27–e30. https://doi.org/10.1016/j.jpainsymman.2020.03.030

Gawande, A. (2014). Being Mortal: Illness, Medicine, and What Matters in the End. Profile Books.

Guo, Q., Zheng, Y., Shi, J., Wang, J., Li, G., Li, C., Fromson, J. A., Xu, Y., Liu, X., Xu, H., Zhang, T., Lu, Y., Chen, X., Hu, H., Tang, Y., Yang, S., Zhou, H., Wang, X., Chen, H., … Yang, Z. (2020). Immediate psychological distress in quarantined patients with COVID-19 and its association with peripheral inflammation: A mixed-method study. Brain, Behavior, and Immunity, 88, 17–27. https://doi.org/10.1016/j.bbi.2020.05.038

Matzo, M., Wilkinson, A., Lynn, J., Gatto, M., & Phillips, S. (2009). Palliative Care Considerations in Mass Casualty Events with Scarce Resources. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science, 7(2), 199–210. https://doi.org/10.1089/bsp.2009.0017

Niki, K., Okamoto, Y., Maeda, I., Mori, I., Ishii, R., Matsuda, Y., Takagi, T., & Uejima, E. (2019). A Novel Palliative Care Approach Using Virtual Reality for Improving Various Symptoms of Terminal Cancer Patients: A Preliminary Prospective, Multicenter Study. Journal of Palliative Medicine, 22(6), 702–707. https://doi.org/10.1089/jpm.2018.0527

Nouvet, E., Sivaram, M., Bezanson, K., Krishnaraj, G., Hunt, M., de Laat, S., Sanger, S., Banfield, L., Rodriguez, P. F. E., & Schwartz, L. J. (2018). Palliative care in humanitarian crises: A review of the literature. Journal of International Humanitarian Action, 3(1), 5. https://doi.org/10.1186/s41018-018-0033-8

Powell, R. A., Schwartz, L., Nouvet, E., Sutton, B., Petrova, M., Marston, J., Munday, D., & Radbruch, L. (2017). Palliative care in humanitarian crises: Always something to offer. The Lancet, 389(10078), 1498–1499. https://doi.org/10.1016/S0140-6736(17)30978-9

Richardson, P. (2014). Spirituality, religion and palliative care. Annals of Palliative Medicine, 3(3), 10.

Rosoff, P. M. (2010). Should palliative care be a necessity or a luxury during an overwhelming health catastrophe? The Journal of Clinical Ethics, 21(4), 312–320.

Wang, S. S. Y., Teo, W. Z. W., Teo, W. Z. Y., & Chai, Y. W. (2020). Virtual Reality as a Bridge in Palliative Care during COVID-19. Journal of Palliative Medicine, 23(6), 756–756. https://doi.org/10.1089/jpm.2020.0212

WHO | WHO Definition of Palliative Care. (n.d.). WHO; World Health Organization. Retrieved July 19, 2020, from https://www.who.int/cancer/palliative/definition/en/

World Health Organization. (2020a). COVID-19: Operational guidance for maintaining essential health services during an outbreak: interim guidance, 25 March 2020 (WHO/2019-nCoV/essential_health_services/2020.1). https://apps.who.int/iris/handle/10665/331561

World Health Organization. (2020b). Maintaining essential health services: Operational guidance for the COVID-19 context: interim guidance, 1 June 2020 (WHO/2019-nCoV/essential_health_services/2020.2). https://apps.who.int/iris/handle/10665/332240



The Botanical Mind: How Plant Intelligence ‘Changes Everything’

BY EVA LEGGE '22

Cover Image: Plants have a highly manipulative relationship with their environment and are able to respond to environmental stimuli with a remarkable complexity that some have deemed to be intelligent. However, the emerging field of plant intelligence has become an increasingly controversial subject in recent years. Source: Flickr


Introduction to Plant Intelligence

In November 2017, Fred Adams, a philosophy professor at the University of Delaware, published a paper entitled “Cognition Wars” in a leading journal published by Elsevier. He wrote, “In case you missed it, there is a war going on over what counts as cognition. Luckily, it is a war among academics, so no one will get hurt, but it is a war nonetheless” (Adams, 2018). The war to which Adams refers is that over plant intelligence, a topic that has become the subject of cutting-edge research by plant scientists over the past few decades. Adams is an outspoken skeptic of plant cognition. Plants, Adams asserts, are hard-wired to display knee-jerk reactions to environmental stimuli and are not capable of complex learning and cognitive processing. In other words, Adams believes plants are not capable of intelligence. This is not an isolated belief; skepticism of plant intelligence is widespread among plant biologists and has been the subject of multiple back-and-forth

journal articles debating the topic. Soon after Adams’ piece, philosopher Segundo-Ortin from the University of Wollongong and plant neurobiologist Paco Calvo from the University of Murcia published “Are plants cognitive? A reply to Adams,” in which they argued that plants display intelligent behavior that is remarkably reminiscent of that of animals. Plants, they contend, display remarkably adaptive behavior and are capable of complex decision-making, learning, memory, and even anticipation of future events. There is nothing “metaphorical” about plant intelligence, and it is not a placeholder; plants are “genuine cognitive agents.” However, the root of the discrepancy is deeper than a debate over adaptive behavior, and instead stems from how “intelligence” has been conceptualized and bounded. Even today, the definitions of intelligence

are complex and varied. “Intelligence is a term fraught with difficult definitions,” writes University of Edinburgh plant scientist Anthony Trewavas (2003). Instead of one elegant definition, intelligence is most commonly defined as an assemblage of behaviors that display a complex, sensitive adaptivity to one’s environment. The list of “intelligent behaviors” is extensive. As Trewavas writes in his article “Plant Intelligence,” “Biologists suggest that intelligence encompasses the characteristics of detailed sensory perception, information processing, learning, memory, choice, optimization of resource sequestration with minimal outlay, self-recognition, and foresight by predictive modeling” (Trewavas, 2005). In other words, intelligence is the ability of an organism to problem-solve through predicting, learning, and adapting, among other capacities. However, these definitions of intelligence come with a set of assumptions and biases. Many scientists, including Adams, believe that displaying intelligent behavior is not enough to deem an organism intelligent. For an organism to be intelligent, it must also have a central nervous system, which in humans comprises the brain and the spinal cord. Intelligent behavior “is the kind of thing that creatures with minds do, that creatures with cognitive processes do” (Adams, 2018, p. 29). Plants, however, do not have a centralized organ for computation. Rather, the cells that make up their ‘nervous’ system are decentralized and distributed over a


large area of the plant (Garzón & Keijzer, 2011). This ‘decentralized’ intelligence, according to many biologists, immediately negates the possibility that plants are intelligent. Intelligence, according to Adams, is the ability of an organism to problem-solve — to predict, learn, and adapt — so long as the pathway through which these behaviors are displayed is similar to that of humans (Adams, 2018). This perception of intelligence carries its own host of social, ethical, and environmental issues, ranging from people’s tendency to anthropomorphize (i.e., to attribute human characteristics to nonhumans) to the flawed idea of human primacy; it can even be claimed to be one of the many roots of speciesism and racism (Segundo-Ortin & Calvo, 2019; Franks et al., 2020). Essentially, the current definition of intelligence carries the assumption that if a creature does not look like humans, it cannot be as smart.

Figure 1. Boquila trifoliolata in its “normal” phenotypic expression

In addition, this human-centered perception of intelligence, as something inherently centralized in a computational organ like the brain, is used as a counterargument to immediately disqualify any argument in support of plant intelligence. “The notion that learning takes place in the tissues of a collection of individuals, not a single individual, sounds like a very different or metaphorical extension of this conception of learning” (Adams, 2018). Instead, Adams suggests choosing a new term, rather than “cognition” or “intelligence,” with which to describe plant behavior. However, to choose a new term would be a rapid response to a complicated issue. Plant neurobiologists are just beginning to understand the ways in which plants can display intelligent behavior, and a widening pool of evidence suggests that even though their cognition is decentralized, plants possess many of the same neurobiological pathways present in humans and other animals (Baluška et al., 2004). Perhaps, then, it is time to change the rules of the game: to consider diverse pathways that can lead to intelligent behavior. The Extended Cognition Hypothesis has been accepted by many as a way to bridge the gap between intelligent plant behavior and our current conception of intelligence. Originating in the 1990s, the Extended Cognition Hypothesis calls for the recognition of decentralized intelligence. The hypothesis claims that intelligence isn’t necessarily bound to one centralized organ but instead can occur beyond the limits of the body, including objects from the environment (Parise et al., 2020). By


Source: Wikimedia Commons


Figure 2. Mimosa pudica with its leaves closed. Source: Wikimedia Commons

including extended cognition, this hypothesis could be the key to legitimizing the concept of plant intelligence.

Phenotypic Plasticity


At the cornerstone of animal intelligence is plasticity, the ability of an organism to change its observable characteristics in response to environmental stimuli (West-Eberhard, 2008). In an analysis of the evolution of intelligence, Stenhouse (1974) defines plasticity as “adaptively variable behavior within the lifetime of the individual.” Further, Stenhouse claims that “the more intelligent the organism, the greater the degree of individual adaptively variable behavior” (Trewavas, 2003). Plasticity as a marker of animal intelligence isn’t a novel idea; it has been accepted by scientists for half a century. Contrary to Adams’ claim, researchers have reported that plant behavior is not “purely reactive and mechanical” (Segundo-Ortin & Calvo, 2019). Plants have an incredible capacity for adaptable behavior. They display directional behavior, such as growing in the direction of a light source, as well as non-directional movement, such as the folding of leaves in response to an external stimulus. In addition, plants are capable of both positive behavior (such as growing towards the light) and negative behavior (growing away from a gravity vector) (Segundo-Ortin & Calvo, 2019). In fact, plants are able to process over 20 different environmental stimuli at any one time, including water, gravity, minerals, alien roots, and chemicals (Baluška et al., 2006; Yokawa & Baluška, 2018).

A unique case study with which to examine phenotypic plasticity is Boquila trifoliolata, a ground-rooted vine native to the temperate rainforests of Chile and Argentina (Mancuso, 2017). It possesses mimetic capabilities, allowing it to imitate the phenotype of another species. In 2013, botanist Ernesto Gianoli noticed that Boquila can mimic the phenotype of every plant it grows upon. Boquila plants can change the size, color, and shape of their leaves to match even the most complex leaf. Moreover, a single vine growing upon multiple different plants can change its leaves accordingly. Boquila is also able to adjust its leaves over time to compensate for any change in the host plants’ leaves. The purposes for and pathways leading to this behavior remain unknown, but Boquila plants are widely celebrated as the “veritable Zelig of the plant world” (Mancuso, 2017, p. 44).

Plants display nuanced responses to small shifts in environmental stimuli. In a study of the saline affinity of Arabidopsis thaliana (known as “the model lab plant” due to its simple genomic structure and rapid life cycle), Li and Zhang (2008) of the Chinese Academy of Sciences distributed salt unevenly throughout the soil (National Science Foundation). The plant’s roots could sense the presence of the salt and began growing toward the high-salt areas in order to acquire necessary nutrients. However, after Li and Zhang increased the concentration of the salt in those areas to above a healthy threshold, the roots turned back before they had even reached the salt. The plants and their roots noticed the smallest shift in the salinity gradient and made the “decision,” as Li and Zhang put it, to turn back.

Plants also have the capability of adjusting to temporal shifts in nutrient availability. A study of Pisum sativum showed that pea plants are able to plastically shift their behavior in response to a changing availability of nutrients in the soil, in order to access the ideal nutrient concentration for that species (Dener et al., 2016). Plants have also been shown to modify their behavior in the presence of predators (Segundo-Ortin & Calvo, 2019). Trewavas (2014) even found that a species of wetland grass was capable of making “compromises.” When placed in an environment where, in any one place, only two out of three environmental factors — competition, warmth, or light — were optimal, the plants prioritized warm soil and light by growing primarily in those conditions.

Perhaps one of the better-known experiments about plant adaptability and decision-making is the study of plants using “sound” to locate water. Tree roots respond to a water-saturation gradient to locate a water source in a similar fashion as they use a mineral gradient to locate a mineral source. However, the specific mechanism the plants use to comprehend and respond to the auditory stimulus is unknown (Gagliano et al., 2017). Even without the presence of substrate moisture, the model plant Pisum sativum was able to detect vibrations caused by water moving in a nearby pipe. Somehow, the plant could hear the water and could decide to grow toward it, even without a moisture gradient leading the way.

The method by which plants perceive these acoustic cues (such as the sound of rushing water) requires further study, as does the method by which the plant compares different environmental stimuli. Nevertheless, it is becoming more and more evident that even without a brain, plants are able to intelligently perceive, choose, and act on a vast array of environmental factors, in some ways that humans cannot.

Figure 3. Pisum sativum has been shown to use ‘sound’ to locate moving water. Source: Wikimedia Commons

Learning, Memory, and Prediction


In 1815, French botanist René Desfontaines asked one of his students to take a collection of plants on a tour of Paris. Among the plants collected were a few jars of Mimosa pudica, a tropical plant best known for its sensitive response to touch. When someone runs their fingers along the leaves of Mimosa pudica, or jostles the plant, the leaves will close in on themselves (Mancuso, 2019). The reaction is an evolved response to predators, but the act of closing up its leaves exhausts a large portion of the plant’s precious energy reserves. Desfontaines wondered if the plants had the capacity to differentiate between threatening and non-threatening stimuli. In other words, could the plant learn when it should close its leaves? During the tour, Desfontaines asked his student to carefully observe the plants for even the slightest movement as they traveled in their carriage. For most of the ride, the up-and-down motion of the carriage caused the plants to close up their leaves. But as the tour continued, the plants suddenly relaxed. Almost simultaneously, every Mimosa pudica opened its leaves and remained in this position for the duration of the carriage ride (Lamarck, 1815). “The plants are getting used to it,” the student observed in his field notebook (Mancuso, 2019).

Two centuries later, Gagliano et al. (2014) replicated the carriage ride experiment. Gagliano built an apparatus that would drop Mimosa pudica at a controlled speed from a height of four inches, simulating the bumpy, repetitive movement of the carriage ride. After seven or eight drops, the plant stopped closing its leaves. To rule out the effects of fatigue, Gagliano then shook that same plant, and it immediately closed its leaves. In other words, the plant learned



Figure 4. Lavatera cretica orients in anticipation of the light before sunrise. Source: needpix.com



to differentiate between a drop, a movement deemed non-threatening, and a shake, a new and threatening movement. After forty days, the plants still exhibited the learned response. More complex forms of learning have been recorded in plants as well. Gagliano et al. (2016) designed an experiment in which garden peas (Pisum sativum) exhibited “Pavlovian” learning, meaning learning by association. In a Y-maze, a single tube that branches off in two directions, the growing pea tendril had to make a choice: to grow right or left. After being exposed to two external factors, a positive stimulus (a light source) and a neutral one (a light wind), the tendril was conditioned to associate wind with the presence of light (Gagliano et al., 2016). These results showed that “associative learning represents a universal adaptive mechanism shared by both animals and plants” (Gagliano et al., 2016).

Plants do not only have the capacity to modify their behavior due to memory. They are also able to predict future events and modify their behavior in accordance with their predictions. Lavatera cretica, a flowering plant in the mallow family, orients its leaves east before the sun rises (Garzón & Keijzer, 2009). Lavatera predicts the position from which the sun will rise each day and optimizes its sunlight intake through the reorientation of its leaves. In addition, Lavatera is able to retain this anticipatory behavior for multiple days, even in the absence of any light input to guide it (Garzón & Keijzer, 2009). Plants are also able to grow towards areas in which they anticipate future shade, and their roots modify their behavior in anticipation of water and minerals (Calvo & Friston, 2017). The ability of plants to anticipate future events, or minimize “surprise over time,” is an adaptation vital to plants’ fitness (Calvo & Friston, 2017). However, it is important to note that the research field of plant intelligence is still in its infancy. For these results to have the traction they deserve, they must be successfully replicated, and the experiments must also be transposed into field settings. But even in the absence of sufficient replication, it is hard not to revel in the precise, ingenious observations of two centuries of botanists.

Plant Neurobiology and Intelligent Communication Networks

Figure 5. A large portion of plants’ cognitive processing occurs underground. The root apices have been proposed to act as ‘brain-like’ command centers in the roots, capable of synaptic communication previously thought to be limited to animals and humans (Baluška et al., 2004). Source: Wikimedia Commons

In recent years, studies supporting the intelligent behavior of plants (such as learning, memory, prediction, and phenotypic plasticity) have grown in number. However, the fact that plants do not possess a central nervous system remains the crux of many arguments against the existence of plant intelligence. Scientists have therefore begun to explore the neurobiological pathways that, although decentralized, display uncanny resemblances to the nervous systems of animals. According to Bullock and Horridge in Structure and Function in the Nervous Systems of Invertebrates (1965), a nervous system is defined as “an organized constellation of cells (neurons) specialized for the repeated conduction of a repeated state from receptor sites or from other neurons to effectors, or to other neurons.” Although plants do not possess neurons, they may possess a decentralized set of cells that is functionally similar to a central nervous system. The emerging field of plant neurobiology grapples with this very concept: to understand how the “integrated signaling and electrophysiological properties of plant networks of cells” can meet the requirements of a central nervous system that the definition of “intelligence” mandates (Garzón & Keijzer, 2011).

In plant neurobiology, roots are often considered to serve as the main harbor of plants’ decentralized nervous systems. In this model, root apices, the regions of cells at the tips of the roots responsible for root extension, are likened to “brain-like” units (Britannica; Garzón & Keijzer, 2011). In fact, some scientists have argued that the highly specialized group of cells at the root apex “has almost all the attributes of a brain-like tissue” (Baluška et al., 2004, p. 2). Vascular strands connecting these apices are likened to plant neurons, and their polarly-transported auxin is likened to a plant neurotransmitter, capable of extracellular communication and the propagation of electrical signals (Baluška et al., 2004, 2010). By identifying these neuron-adjacent mechanisms in plant roots, plant neurobiologists posit that “the integration and transmission of information at the plant level involves neuron-like processes such as action potentials, long-distance electrical signaling, and vesicle-mediated transport of (neurotransmitter-like) auxin” (Garzón & Keijzer,


2011). There are three major similarities between plant intelligence networks and animal nervous systems: 1) the common presence of long-distance electrical signaling, 2) the similarity between certain plant molecules and animal neuroreceptors/neurotransmitters, and 3) the similarity between the plant hormone auxin and neurotransmitters in animals (Garzón & Keijzer, 2011). Plants, like animals, exhibit action potentials in response to environmental stimuli that allow coordination between different cells and parts of the body. Originally thought to be limited to plants that display rapid, observable movements (like insectivores), action potentials are now becoming widely accepted as occurring in all plants (Baluška et al., 2004). These signals are able to travel long distances within the plant axis (Garzón & Keijzer, 2011). Even though the observed responses to most action potentials may be hard for the human eye to see, the action potentials themselves may be just as rapid as those in animal nerves (Baluška et al., 2004). For example, Barlow (2008) observed a rapid change in CO2 assimilation after a small shift in soil moisture caused an action potential to be sent from the roots to the leaves. Next, plants possess many neurotransmitters that are also present in animal nervous systems, including but not limited to GABA, glutamate, dopamine, serotonin, and acetylcholine. It remains unknown whether these substances




have the same role in signaling as they do in animals. However, some substances, such as glutamate, have been shown to act as neurotransmitters in plant cell communication (Garzón & Keijzer, 2011). Finally, synapses, the microscopic spaces between two animal nerve cells through which neurotransmitters are exchanged, may have a parallel in plants. Auxin, which is exchanged extracellularly, is known to cause fast electrical responses in the receiving plant cell (Garzón & Keijzer, 2011). Thus, auxin is thought to resemble a neurotransmitter, facilitating rapid extracellular communication and electrical signaling. These similarities have led scientists to the “root-brain hypothesis,” a concept that both builds on and specifies the Extended Cognition Hypothesis. According to this hypothesis, plants have a widespread, underground center for cognitive processing. The fact that the ‘brain-like units’ occur at root apices speaks to a larger point: although scientists may be able to identify specific parts of the plant that resemble animal cognitive centers, the way cognition is expressed in plants is fundamentally different. Therefore, the public must shift their understanding of cognition to include the vast world below their feet.

Reciprocal Benefits of Plant Intelligence

When formulating the Extended Cognition Hypothesis, crafting salinity gradients in soil, or comparing neurotransmitters in animals and plants, it may be easy to wonder why such a study is important, especially when its driving goal has been met with so much skepticism. Plant intelligence research has given scientists reason not just to continue researching botany, but to advocate aggressively for the preservation and promotion of plants. When plants are perceived as passive agents in the carbon cycle, it is much more difficult to make a compelling argument for their preservation. But when plants are seen in their entirety, as objects of beauty, agents in the carbon cycle, and intelligent organisms capable of sensing, perceiving, thinking, and choosing, the defense of plants becomes much more fortified. Continued research on plant intelligence is crucial to the preservation of many ecosystems, as well as the preservation of the biosphere. As Baluška and Mancuso (2020) write, “Considering plants


as active and intelligent agents has therefore profound consequences not just for future climate scenarios but also for understanding mankind’s role and position within the Earth’s biosphere.” Plant roots actively control the carbon in the soil and may even manipulate the amount of carbon in the air. Current climate models therefore don’t accurately reflect the extremely nuanced carbon manipulation pathways in plants (Baluška & Mancuso, 2020). To form more accurate predictive models, further research into plant intelligence is necessary.

A deeper knowledge of plant intelligence may also alleviate plant blindness. Coined by University of Tennessee botanist Elisabeth Schussler and Louisiana State University science educator James Wandersee in 1998, “plant blindness” is the tendency for people to overlook plants and to consider them a backdrop to our lives (Wandersee and Schussler, 1999). Plant blindness feeds into the belief that animals are superior to plants, rendering us unable to recognize the crucial role that plants play in our well-being and survival. This mindset has led to a lack of funding and interest in plant conservation, far below the funding directed toward animal conservation efforts (Havens et al., 2014).

The issue of plant conservation has become particularly pressing in recent years. According to a comprehensive study by Humphreys et al. (2019), over 600 plant species have gone extinct in the last 250 years: twice the number of amphibian, mammalian, and avian extinctions combined. This extinction rate is 500 times as fast as it would be without the anthropogenic contribution to species decline. These findings are devastating not only for the floral world, but for all species on Earth. As primary producers, plants form the foundation of ecosystems across the world and are responsible for producing the air we breathe. Without plants, life on Earth would be inexorably altered. And for the human race, it could threaten survival and would certainly alter human life for the worse.
According to Humphreys et al. (2019), combating plant blindness is necessary to slow the rapid decline in plant biodiversity. Researching and supplementing arguments for plant intelligence is, in turn, crucial to combating plant blindness. The more humans perceive plants as conscious, intelligent organisms, the more likely it is that plant biodiversity will be preserved. This is


not just good news for plants, but lifesaving for the whole human race. In her paper “The Force of Things: Steps Towards an Ecology of Matter,” political theorist Jane Bennett argues for “a renewed emphasis on our entanglement with things.” This, she believes, will allow us to “tread more lightly upon the earth, both because things are alive and have value and because things have the power to do us harm” (Bennett, 2004). Indeed, this is how we should approach plants. Plants should be preserved both because they have value and because their absence has the power to do us harm. Without considering the fact that plants are intelligent, it is impossible to create a relationship with them. And according to Balding and Williams (2016), empathy is crucial for conservation efforts. “We argue that support for plant conservation may be garnered through strategies that promote identification and empathy with plants,” write Balding and Williams (2016). Therefore, to consider the possibility that plants can hear the sound of rushing water, fire neurotransmitters across synapses, and learn and remember past events is to begin to see ourselves in the plant, and perhaps to begin to see the plant in ourselves.

Humans do not just have a lot to learn about plants. By observing how plants cultivate a sensitive, highly adaptive relationship with their environment, humans may be able to learn from them too, adapting our own behavior in ways that work towards a better, greener planet. As Baluška and Mancuso write, “plant intelligence changes everything” (Baluška & Mancuso, 2020, p. 1).

References

Adams, Fred. “Cognition Wars.” Studies in History and Philosophy of Science Part A, vol. 68, 2018, pp. 20–30. doi:10.1016/j.shpsa.2017.11.007.

“Anthropomorphize.” Merriam-Webster, www.merriam-webster.com/dictionary/anthropomorphize.

“Arabidopsis: The Model Plant.” National Science Foundation, 24 Mar. 2017, www.nsf.gov/pubs/2002/bio0202/model.htm.

Balding, Mung, and Kathryn J. H. Williams. “Plant Blindness and the Implications for Plant Conservation.” Conservation Biology, vol. 30, no. 6, 2016, pp. 1192–1199. doi:10.1111/cobi.12738.

Baluška, F., Hlavacka, A., Mancuso, S., and Barlow, P. W. “Neurobiological View of Plants and Their Body Plan.” Communication in Plants: Neuronal Aspects of Plant Life, edited by F. Baluška, S. Mancuso, and D. Volkmann, Springer, 2006, pp. 19–35.

Baluška, F., Mancuso, S., Volkmann, D., and Barlow, P. “Root Apices as Plant Command Centres: The Unique ‘Brain-like’ Status of the Root Apex Transition Zone.” Biologia, 13(1), 2004, pp. 1–13.

Baluška, F., Mancuso, S., Volkmann, D., and Barlow, P. “Root Apex Transition Zone: A Signalling–Response Nexus in the Root.” Trends in Plant Science, vol. 15, 2010, pp. 402–408.

Baluška, František, and Stefano Mancuso. “Plants, Climate and Humans.” EMBO Reports, vol. 21, no. 3, 27 Feb. 2020. doi:10.15252/embr.202050109.

Bennett, Jane. “The Force of Things: Steps Towards an Ecology of Matter.” Political Theory, vol. 32, no. 3, 2004, pp. 347–372. doi:10.1177/0090591703260853.

Book Reviews: Stenhouse, David, The Evolution of Intelligence. London: Allyn & Unwin, 1974. Reviewed in Gifted Child Quarterly, 19(2), 1975, p. 102.

Bullock, T. H., and Horridge, G. A. Structure and Function in the Nervous Systems of Invertebrates, vol. 1. W. H. Freeman, 1965.

Burton, N. “What Is Intelligence?” Psychology Today, 28 Nov. 2018, www.psychologytoday.com/us/blog/hide-and-seek/201811/what-is-intelligence.

Calvo Garzón, P., and Keijzer, F. “Cognition in Plants.” Plant-Environment Interactions, edited by F. Baluška, Springer, 2009, pp. 247–266.

Calvo, P., and Friston, K. “Predicting Green: Really Radical (Plant) Predictive Processing.” Journal of The Royal Society Interface, vol. 14, no. 131, 2017, 20170096. doi:10.1098/rsif.2017.0096.

Dener, E., Kacelnik, A., and Shemesh, H. “Pea Plants Show Risk Sensitivity.” Current Biology, vol. 26, no. 13, 2016, pp. 1763–1767. doi:10.1016/j.cub.2016.05.008.

Firn, R. “Plant Intelligence: An Alternative Point of View.” Annals of Botany, vol. 93, no. 4, 2004, pp. 345–351. doi:10.1093/aob/mch058.

Franks, B., et al. “Conventional Science Will Not Do Justice to Nonhuman Interests: A Fresh Approach Is Required.” Animal Sentience, 300, 2020, pp. 1–5.

Gagliano, M., Renton, M., Depczynski, M., and Mancuso, S. “Experience Teaches Plants to Learn Faster and Forget Slower in Environments Where It Matters.” Oecologia, vol. 175, no. 1, 2014, pp. 63–72. doi:10.1007/s00442-013-2873-7.

Gagliano, M., Vyazovskiy, V., Borbély, A., et al. “Learning by Association in Plants.” Scientific Reports, vol. 6, 2016, 38427. doi:10.1038/srep38427.

Gagliano, M., Grimonprez, M., Depczynski, M., et al. “Tuned In: Plant Roots Use Sound to Locate Water.” Oecologia, vol. 184, 2017, pp. 151–160. doi:10.1007/s00442-017-3862-z.

Humphreys, Aelys M., et al. “Global Dataset Shows Geography and Life Form Predict Modern Plant Extinction and Rediscovery.” Nature Ecology & Evolution, vol. 3, no. 7, 2019, pp. 1043–1047. doi:10.1038/s41559-019-0906-2.

Jørgensen, S. E., and Fath, Brian. Encyclopedia of Ecology. Elsevier, 2008.

Lamarck, J. B., and A. P. de Candolle. French Flora, or Short Summaries of All the Plants that Naturally Grow in France. Paris: Desray, 1815.

Li, X., and Zhang, W. “Salt-Avoidance Tropism in Arabidopsis thaliana.” Plant Signaling & Behavior, vol. 3, no. 5, 2008, pp. 351–353. doi:10.4161/psb.3.5.5371.

Mancuso, Stefano. The Revolutionary Genius of Plants: A New Understanding of Plant Intelligence and Behavior. Atria Books, 2018.

Novoplansky, A. “Future Perception in Plants.” Anticipation Across Disciplines, edited by N. Mihai, Springer, 2016, pp. 57–70. doi:10.1007/978-3-319-22599-9_5.

Parise, André Geremia, et al. “Extended Cognition in Plants: Is It Possible?” Plant Signaling & Behavior, vol. 15, no. 2, 2020, 1710661. doi:10.1080/15592324.2019.1710661.

Segundo-Ortin, Miguel, and Paco Calvo. “Are Plants Cognitive? A Reply to Adams.” Studies in History and Philosophy of Science Part A, vol. 73, 2019, pp. 64–71. doi:10.1016/j.shpsa.2018.12.001.

Seward, A. C. Plants: What They Are and What They Do. Cambridge University Press, 2011.

Trewavas, Anthony. “Aspects of Plant Intelligence.” Annals of Botany, vol. 92, no. 1, 2003, pp. 1–20. doi:10.1093/aob/mcg101.

Trewavas, Anthony. “Plant Intelligence.” Naturwissenschaften, vol. 92, no. 9, 2005, pp. 401–413. doi:10.1007/s00114-005-0014-9.

Trewavas, A. J. Plant Behaviour and Intelligence. Oxford University Press, 2014.

Wandersee, James H., and Elisabeth E. Schussler. “Preventing Plant Blindness.” The American Biology Teacher, vol. 61, no. 2, 1999, pp. 82–86. doi:10.2307/4450624.

Yokawa, K., and Baluška, F. “Sense of Space: Tactile Sense for Exploratory Behavior of Roots.” Communicative & Integrative Biology, vol. 11, no. 2, 2018, pp. 1–5.





On the Structure of Field Theories I

By Evan Craft ’20

Cover Image: ATLAS Experiment. Source: CERN

Abstract

We rewrite the gravitational action solely in terms of the gamma matrices and spinor connection (as defined in the curved-space Dirac equation). The requirement that the variation vanish yields the condition that the affine connection be Levi-Civita, along with Einstein’s equations in vacuum. We then propose an extension of this theory to the case of the gravitational field interacting with another, external field.

Preliminary Remarks

Hilbert used the action principle to formalize the gravitational field equations and to extend them further. His original approach, however, was not unique. It was soon realized that different pairs of variables, such as the vierbein and spin connection or the metric and affine connection, could be used to rewrite Hilbert’s action. From these, one could also reproduce Einstein’s theory. In the current discourse, we describe another formulation. Dirac’s variables seemingly coincide with the vierbein formalism. Both the tetrad and the gamma matrices can be used to construct the metric, and the spin and spinor connections are intimately related by formulae. From this, we are led to believe that a correspondence must exist between the theories. This paper is the result of that conviction and serves as a proof of the aforementioned correspondence. In Appendix A, we derive the closed-form action in terms of Dirac’s variables. Taking the variation, one arrives at the desired result. This was not my original direction, however. Presented is my initial proof.


Gravitation

We will first show that the Palatini Lagrangian may be rewritten so that

where all integrations are over a certain four-dimensional volume. Here, we are referring to the spinor connection and gamma matrices as used in the covariant Dirac equation

where the covariant derivative of spinors is defined as
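For reference, a standard form of these two definitions is sketched below. This is a reconstruction under the usual sign conventions; the paper's own conventions (and its lost displayed equations) may differ. Here ψ is a spinor, γ^μ are the curved-space gamma matrices, and Γ_μ is the spinor connection.

```latex
% Curved-space Dirac equation and spinor covariant derivative (standard form; conventions assumed)
\left( i\gamma^{\mu} D_{\mu} - m \right)\psi = 0,
\qquad
D_{\mu}\psi = \partial_{\mu}\psi + \Gamma_{\mu}\psi .
```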

A. Re-expressing the Metric

We can then take the anti-commutator on the R.H.S. of (7) so that

where we have taken the trace over the spinor indices. Hence both metric and affine connection can be written as functions of the spinor connection and gamma matrices alone.
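A standard version of the step described here, sketched under the usual conventions, is the defining anti-commutation relation and the trace inversion it implies (the trace runs over the spinor indices, and 𝟙 is the identity in spinor space):

```latex
% Anti-commutation relation and the metric recovered from it by tracing
\{\gamma^{\mu}, \gamma^{\nu}\} = 2\,g^{\mu\nu}\,\mathbb{1},
\qquad
g^{\mu\nu} = \tfrac{1}{4}\,\operatorname{tr}\!\left(\gamma^{\mu}\gamma^{\nu}\right).
```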


Equations of Motion

In order for

in (1), we must have

Each term must vanish separately so that

The defining property of the gamma matrices is the anti-commutation relations.

A. Variation with respect to the Gamma Matrices

Inverting this, we find

B. Re-expressing the Affine Connection

We have that the gamma matrices are covariantly constant,
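In standard notation this condition reads as follows (a hedged reconstruction; the sign of the commutator term depends on conventions):

```latex
\nabla_{\lambda}\gamma^{\mu}
= \partial_{\lambda}\gamma^{\mu}
+ \Gamma^{\mu}{}_{\lambda\nu}\,\gamma^{\nu}
+ \left[\Gamma_{\lambda},\, \gamma^{\mu}\right]
= 0 .
```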

This assertion is due to equation (8) of [1]. We can solve this relation for the affine connection; rearranging,

Take the anti-commutator on the L.H.S.

From above, the first condition must be true for any arbitrary variation of the gamma matrices. We must then require

This can be expanded in terms of the metric and affine connection


after a change of indices in the gamma matrices. We can perform this chain rule expansion because the original Palatini Action could be written solely as a function of the metric and affine connection. The metric can then be rewritten in terms of the gamma matrices by formula (5). Furthermore, since the spinor connection is fixed, the affine connection is completely determined by these same matrices using (11). For the second term in (15), we find

Figure 2: Maternal mortality rate (per 100,000 births) across the United States. Source: Wikimedia Commons (published in 2015).

Hence raising the index, we get where we have abbreviated



Since Dirac’s matrices are covariantly constant (6), we may conclude

Taking the trace over the spinor indices we get

From (15), (19), and (23) we conclude And therefore

“The tetrad (or vierbein) projects the curved geometry down to flat space and vice versa."

Hence the second term in (15) vanishes. For the first term, we can compute the variation of the metric using (5). This is done in Appendix B. The result is

B. Variation with respect to the Spinor Connection

We may now focus on the spinor connection. From before,

We may then say

where we have used Palatini’s Variation

to evaluate the first element. Now, taking the transpose over the spinor indices,

where we have abbreviated Einstein's tensor to

We may take the anticommutator with

We may obtain a formula for in terms of the other variables. We will need equation (6), and for convenience, we restate it here:

Contracting with

The contraction on the L.H.S. can be evaluated with the help of the tetrad. The tetrad (or vierbein) projects the curved geometry down to flat space and vice versa. In the new coordinate system, the curved space vector becomes

And its scalar product gives
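A minimal sketch of these tetrad relations (standard definitions, reconstructed here since the displays are missing from the scan):

```latex
V^{a} = e^{a}{}_{\mu}\, V^{\mu},
\qquad
\eta_{ab}\, V^{a} W^{b} = g_{\mu\nu}\, V^{\mu} W^{\nu}
\;\Longrightarrow\;
g_{\mu\nu} = \eta_{ab}\, e^{a}{}_{\mu}\, e^{b}{}_{\nu} .
```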

This results in the condition

Using this result in (23),

which is equation (3) of [2]. We may use (33) and (35) to re-express the anti-commutation relation (4)

In the next section, we will show that the connection is symmetric and hence the Einstein tensor is symmetric so that this reduces to


Hence inverting the original relation we get


Contracting over the Latin indices, we find

Since the inner product is a scalar, it will not depend on coordinates, so that

Using this fact in (32),

In Appendix C, we show that the solution to the above equation is given by
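The missing display can be sketched as follows (a reconstruction in one common sign convention; here a_μ denotes the arbitrary four vector and I the identity on the spinor indices):

```latex
\Gamma_{\mu}
= -\tfrac{1}{4}\,\gamma_{\nu}\!\left(\partial_{\mu}\gamma^{\nu}
+ \Gamma^{\nu}{}_{\mu\lambda}\,\gamma^{\lambda}\right)
+ i\, a_{\mu}\, I ,
```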

where is a four vector. Hence the spinor connection can be written solely as a function of the gamma matrices, the affine connection, and an arbitrary four vector. We can now rewrite the in (30) as

The vector potential is fixed and the gamma matrices are fixed so the spinor connection is solely a function of the affine connection. We can therefore eliminate the chain rule in this variable leaving

Now that we have simplified the variation of the spinor connection, we can insert it into (30):

Since this variation holds the curved space gamma matrices fixed, the action is solely a function of the spinor connection. Removing the chain rule,

hence, after a change of indices in the spinor connection. Since the affine connection transforms as a connection, it can be written solely as a function of the transformation matrix (the vierbein) and its corresponding connection in flat space (the spin connection). We may therefore express its variation as

where we have used to denote the spin connection. We can now rewrite the variation of the spinor connection. Using this in (40) we get

The tetradic version of Palatini’s Theorem allows us to write the action solely as a function of the vierbein and spin connection. The requirement that the variation vanish with respect to the spin connection gives the condition that the connection be symmetric. For equation (48), since the vierbein is fixed (Appendix D), we then have that the connection is symmetric. This along with (35) gives the Levi-Civita connection.


A Comprehensive Action Principle

For the total action, one could propose

The curved space gamma matrices are being held fixed so that the first term vanishes. At this point we may also eliminate the variation in the vector potential. The reason for this is explained in Appendix D. We are left with

where the primed functional corresponds to the external field. In the case of a Dirac particle we get

where we treat the gamma matrices, spinor connection, spinor, and adjoint spinor as independent variables. The variation then gives

In Appendix D, we also show that whenever the curved space gamma matrices are held fixed, the vierbein is also fixed. The above equation then implies



We can compute the variation of the covariant derivative as

Forcing the variation to vanish, we obtain the equations of motion for the gravitational field

And the equations of motion for the spinor field
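The two missing displays above can be sketched in standard form (a hedged reconstruction; κ denotes the gravitational coupling and T_{μν} the stress tensor of the Dirac field, names assumed):

```latex
R_{\mu\nu} - \tfrac{1}{2}\, g_{\mu\nu} R = \kappa\, T_{\mu\nu},
\qquad
\left( i\gamma^{\mu} D_{\mu} - m \right)\psi = 0 .
```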

We may add extra terms to the external action by using (5) and (11) to rewrite the metric and affine connections in terms of Dirac’s variables.


Subtracting, we obtain the commutator


2. Two-Spinors

We can create a two-spinor by contracting onto the gamma matrices. Choose some arbitrary four vector and let

This can be inverted to

Appendix A: The Closed Form Action

Here we derive the closed form of the Einstein-Hilbert Action in terms of the gamma matrices and spinor connection alone. The scalar density can be written solely as a function of these variables using (4). All that is left is re-expressing the Ricci Scalar.

1. Spinor Curvature and Torsion

We can follow suit with the previous section and take the commutator on this object

Using (6), we have for the first term:

Expanding the covariant derivative,

The covariant derivative of spinors is of the form

We want to calculate the commutator of covariant derivatives . This is analogous to how the Riemann curvature tensor is obtained. For the first term, we have


Expanding the covariant derivative,

Subtracting these, we arrive at the commutator

Similarly,



3. The Riemann Tensor in terms of the Gamma Matrices and Spinor Connection

We can use the result for two-spinors to help calculate the commutator on the gamma matrices. From (6),

So that the first term in the anti-commutator gives

We can expand out the covariant derivative:

so that the Riemann Tensor,

the Ricci Tensor,

and the Ricci Scalar

can all be written in terms of the gamma matrices and spinor connection alone.
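Schematically, in one common convention (signs and normalizations vary), the objects involved are the spinor curvature and the traces that extract the Riemann tensor, Ricci tensor, and Ricci scalar from it:

```latex
[D_{\mu}, D_{\nu}]\psi = \Phi_{\mu\nu}\,\psi,
\qquad
\Phi_{\mu\nu} = \partial_{\mu}\Gamma_{\nu} - \partial_{\nu}\Gamma_{\mu}
+ [\Gamma_{\mu}, \Gamma_{\nu}],
\qquad
R_{\mu\nu\alpha\beta} = \tfrac{1}{4}\,\mathrm{tr}\!\left(\Phi_{\mu\nu}\,[\gamma_{\alpha}, \gamma_{\beta}]\right),
\qquad
R_{\nu\beta} = g^{\mu\alpha} R_{\mu\nu\alpha\beta},
\qquad
R = g^{\nu\beta} R_{\nu\beta} .
```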

Appendix B: The Variation of the Metric


The metric is related to the gamma matrices by the anti-commutation relation

This can be inverted to

Taking the commutator and using (B13), we find

We can use this to take the variation of the metric:

The gamma matrices are covariantly constant, so the entire L.H.S. and the middle term on the R.H.S. vanish. We are left with


On the L.H.S of the above equation, we may take the anti-commutator with .

The anti-commutator on the R.H.S of (A19) gives


Use the fact that tr(AB) = tr(BA) so that

The trace also commutes with the variation:

Working out the parenthesis,

We can use this in (B5) so that


By the definition of the trace,

Giving the coefficient

Appendix C: Solving for the Spinor Connection

Begin with differential equation (38)

Assume there exists a solution

From this, we use the ansatz

where is to be determined. Inserting equation (C4) in (C1), we are left with the condition

So that

The solution to which is given by

for an arbitrary four vector. So in total,

Rewriting the gamma matrices using the vierbein, one can show that an object of the form (C3) exists and satisfies (C2) so that the above expression is valid.

Appendix D: Some Comments on the Variation w.r.t. the Spinor Connection

1. Fundamental Variables

Given the vierbein, the choice of metric for the curved space is completely unambiguous. One simply takes the Minkowski metric and constructs it by projecting up,

Now, given the vierbein, it would seem ambiguous what to choose for the curved space gamma matrices. However, this ambiguity does not exist. In flat space, in order to solve the Dirac equation, one makes a choice of gamma matrices to use. From these, we can then construct the curved space gamma matrices by projecting up:

We would then have

The tetrad is the only variable.

2. A Relation between the Gamma Matrices and Vierbein

We can now use the argument of the preceding section. The claim is that whenever the curved space gamma matrices are fixed, the vierbein must also be fixed. Begin by taking the variation of (D2),

If the curved space gamma matrices are held fixed then the L.H.S vanishes leaving

Now, since the flat space gamma matrices are not variables, we get

Take the anti-commutator with


We may take the trace over the spinor indices to eliminate the identity matrix. Finally, we can contract with the Minkowski metric to obtain

References

[1] E. Schrödinger, Sitz. Preuss. Akad. Wiss. Berlin (1932).
[2] A. Einstein, Sitz. Preuss. Akad. Wiss. Berlin 217 (1928).
[3] P. A. M. Dirac, Proc. R. Soc. Lond. A 126 (1928).

which is the desired result. From (D3) we also see that whenever the vierbein is fixed, the curved space gamma matrices are also fixed. Finally, we note that the vierbein is completely determined by the gamma matrices as
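The missing display can be reconstructed from the projection γ^μ = e^μ_a γ^a and the standard trace identity tr(γ^a γ_b) = 4 δ^a_b (a sketch under those assumptions):

```latex
\mathrm{tr}\left(\gamma^{\mu}\gamma_{b}\right)
= e^{\mu}{}_{a}\,\mathrm{tr}\left(\gamma^{a}\gamma_{b}\right)
= 4\, e^{\mu}{}_{b}
\;\Longrightarrow\;
e^{\mu}{}_{b} = \tfrac{1}{4}\,\mathrm{tr}\left(\gamma^{\mu}\gamma_{b}\right).
```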

3. The Vector Potential

We may apply a similar argument to the vector potential. Recall that the connection for spinors is given by (C8),


Any choice of four vector could be used in (D9) to satisfy the equation (C1). It would therefore seem ambiguous how to determine the nature of . This ambiguity, however, does not exist. Given a particular choice of Dirac spinor, there is a fixed, predetermined four potential with which it is associated. It is precisely the four potential contained in the interaction term of its Lagrangian. From this, we can then construct the covariant derivative. The variation of this object is therefore vanishing; it is not a variable.



On the Structure of Field Theories II
BY EVAN CRAFT '20
Cover Image: ATLAS Experiment. Source: CERN

Abstract

Given any action functional, we may construct a path integral. Many theories, however, have actions that are not unique. In those cases, the functional of the fields may be rewritten in terms of other variables yet yield the exact same equations of motion. This symmetry of the action leads to a corresponding symmetry of the path integral. We discuss these ideas in the context of gravitation.

Introduction

The basis of any field theory is the principle of least action. We begin with some functional S describing our system, and we impose the condition that its variation vanish, rather

If the action is a functional of n independent fields, this gives the equations of motion

for all n. Oftentimes, the functional of the fields can be rewritten in terms of different variables. Take, for instance, gravitation. It is known that Palatini’s treatment of the metric and affine connection as independent yields Einstein’s theory. In a recent paper, we’ve shown that Dirac’s variables will also do the trick [1]. This ambiguity of the action allows us to rewrite the path integral. For a field theory, we define the latter functional to be

where we are integrating over all possible field configurations [2]. In the following, we consider this expression with respect to the various formulations of gravity.
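In standard notation, the statements above can be sketched as follows (a reconstruction; φ_i denote the n independent fields and 𝓛 the Lagrangian density, conventions assumed):

```latex
\delta S = 0
\quad\Longrightarrow\quad
\frac{\partial \mathcal{L}}{\partial \phi_i}
- \partial_{\mu}\,\frac{\partial \mathcal{L}}{\partial(\partial_{\mu}\phi_i)} = 0
\quad (i = 1, \dots, n),
\qquad
Z = \int \mathcal{D}\phi \; e^{\,i S[\phi]/\hbar} .
```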

Single Variable Theories

A. Hilbert

Hilbert’s original formulation treated the action solely as a functional of the metric. From this, we may construct the path integral

where the integration is over all possible metric configurations.

B. Tetrad

The existence of a vierbein would allow us to project curved space vectors down to flat space (and vice versa). This leads to the relationship

between the corresponding metrics. We can use this to reformulate our functional as

since the Minkowski metric is fixed. In this case, we are integrating over all possible vierbein configurations.

C. Spinor

Dirac’s theory proposes the existence of gamma matrices such that

These are used to determine the dynamics of spinors in his namesake equation

where the covariant derivative is defined by

The relationship (7) may be inverted to

And hence the path integral can be transformed into

Multivariable Extension

A. Palatini

In the Palatini Formulation, the metric and affine connection are treated independently. From this, we can construct

where we are integrating over field configurations for both the connection and metric.

B. Tetradic Palatini

For a given affine connection of curved space, there is a corresponding connection in flat space termed the spin connection. This object satisfies

where the Greek indices refer to curved space and the Latin correspond to flat. It is known that the action, written in terms of this new connection and the vierbein, produces Einstein’s equations in vacuum. From this, we construct the path integral

where we are integrating over field configurations for both the connection and vierbein.

C. Spinor

As we’ve shown in [1], the Palatini action may be reformulated solely in terms of Dirac’s variables. This leads to a new path integral given by

where we are integrating over field configurations for both the gamma matrices and spinor connection.

References

[1] E. Craft, “On the Structure of Field Theories I”, Dartmouth Undergrad J. Sci., 20X (2020).
[2] R. P. Feynman, Rev. Mod. Phys. 20, 367 (1948).


The Modernization of Anesthetics
BY GIL ASSI '22
Cover Image: Robert Liston performing an amputation at University College London. Source: Wikimedia Commons


Introduction

In the 19th century, London's University College Hospital became known for its agonizing medical procedures. On many occasions, hundreds of men and women gathered at the operating theater to observe a surgery. The surgeon was typically accompanied by his medical assistants, who were not yet regarded as nurses. Operating theaters were filled with a mixture of medical students and miscellaneous people with no real medical credentials. Given the pain and risk involved, surgery was often used only as a last resort. Out of fear, doctors restricted themselves to external and superficial skin wounds; internal procedures, although necessary, were very dangerous (Fitzharris, 2018). This was largely because there was no concept of anesthesia, so surgery was extremely painful, and postoperative infection made death rates extremely high. Robert Liston, commonly known as the “fastest knife in the West End,”

was a Scottish surgeon who built his reputation on his speed and agility when amputating his patients. Speed was a determining factor for many patients, as they preferred surgeons who could perform quick albeit painful surgeries (Wright et al., 2014). One afternoon in 1846, Liston prepared for a mid-thigh operation in the College’s renowned hospital. As the spectators rushed in to find their seats, Liston readied himself to perform his very first surgery using a primitive form of anesthesia. The anesthetic properties of ether, vaporized into a gaseous compound, had recently been discovered by Boston dentist William T.G. Morton to leave patients unconscious and numb (Chang et al., 2015). Liston was able to amputate his patient in under thirty seconds (Magee, 2000). The ether prevented the patient from fighting and being agitated on the surgery table, and as he woke up, he and the crowd were surprised to find

Figure 1: This diagram depicts a presynaptic and a postsynaptic neuron. GABA released into the synaptic cleft will bind to GABAA receptor on the postsynaptic neuron, which results in a reduction of neuronal excitation. Source: Wikimedia Commons

that the surgery had been completed without any pain. Thus, 1846 marked the birth of a new science in the medical field: anesthesiology.

The Different Types of Anesthesia

It has now been nearly two centuries since Robert Liston conducted that famed operation. What does anesthesia look like today? Surgeons are now at liberty to choose among four different types of anesthesia: general anesthesia, regional anesthesia, local anesthesia, and sedation. Depending on the situation, patients may choose the anesthetic they prefer. The most common anesthetic known to any person is general anesthesia. Through medication, doctors render their patients unconscious and unaware of their surroundings. Some general anesthetics are gases (like the ether used by Liston) that can be administered in a tube or mask. Others are given as a liquid through an IV to induce sleep and treat pain. Regional anesthesia, on the other hand, is precise. It targets a specific area of the body and numbs it to prevent pain. One great example is nerve blocks; femoral nerve blocks numb just the thigh and knee (UCLA Health, Los Angeles, CA, n.d.). In eye surgery cases, doctors sometimes use sedation, which involves a medication that induces drowsiness and relaxation. If the patient needs

to be awake to follow instructions from the surgeon, a moderate sedation may be used, where the patient may doze off but awakens easily. Medications such as lidocaine that are injected through a needle or applied as a cream to an area are known as local anesthetics. Many doctors tend to use local anesthetics in combination with sedation during minor outpatient surgery (UCLA Health, Los Angeles, CA, n.d.). Although anesthetics are prevalent today and used in every hospital, the general population, and sometimes even doctors, still lack a general understanding of their molecular basis.


Molecular Pathway

In recent years, researchers have used model organisms to find the targets of general anesthetics. These studies have shown that different types of general anesthetics act through distinct mechanisms. For example, one study focused on the classification of general anesthetics by their relative potencies and effects on EEG. This ultimately shed light on certain molecular targets associated with loss of consciousness.

Group 1

The first class of anesthetics consists of



Figure 2: This diagram shows a GABA receptor with its subunits and where various ligands bind. α and β subunits determine anesthetic sensitivity for group 3—sensitive to volatile anesthetics. Source: Wikimedia Commons


intravenous drugs such as etomidate, propofol, and barbiturates, which tend to induce a state of unconsciousness rather than immobilization. A subset of γ-aminobutyric acid type A (GABAA) receptors mediates the loss of righting reflexes (LORR) and immobility produced by these drugs. GABAA receptors, located both postsynaptically and extrasynaptically, can trigger a reduction in neuronal excitation when activated. These anesthetics enhance GABA-mediated channel activation and prolong inhibitory postsynaptic currents (IPSCs), ultimately suppressing neuronal excitability. This was discovered when researchers performed tests on rats showing that intravenous drugs induced LORR through GABAA receptors containing β3 subunits, whereas sedation utilized GABAA receptors containing β2 subunits (Forman and Chin, 2008).

Group 2

The second group includes the inhaled anesthetics (nitrous oxide, xenon, and cyclopropane) and ketamine, an intravenous drug. This group has the lowest potency of the three in its ability to render patients unconscious and immobilized. At the molecular level, these drugs have little to no effect on GABAA receptors (Raines et al., 2001). However, this class of anesthetics could be the major factor in inhibiting N-methyl-D-aspartate (NMDA) receptors, cation channels activated by glutamate (Jevtović-Todorović et al., 1998).


For example, in the presence of xenon, NMDA receptor-mediated excitatory postsynaptic currents are inhibited, and it has been shown that reduced excitatory signaling in neuronal circuits causes unconsciousness (Forman and Chin, 2008).

Group 3

The volatile halogenated anesthetics—halothane, enflurane, isoflurane, sevoflurane, and desflurane—are drugs used to induce amnesia, unconsciousness, and immobility in a more predictable way. Compared to the other groups, the volatile halogenated anesthetics lack significant selectivity with regard to their target molecules (Campagna et al., 2003). They have been shown to enhance the function of inhibitory GABAA receptors, which suggests that they produce unconsciousness via different GABAA receptor subunits—α and β subunits determine anesthetic sensitivity for this group (Forman and Chin, 2008). These subunits are different from those targeted by group 1 drugs. TREK-1 is an anesthetic-sensitive K+ channel that plays a role in setting the resting membrane potential of neurons (Franks & Honoré, 2004). It can be activated by volatile anesthetics (Honoré et al., 1999). Recent studies show that TREK-1 knockout mice required an increased dosage of volatile anesthetic to trigger LORR and immobility. This suggests that a wild type


TREK-1 channel is sensitive to these drugs (Linden et al., 2007). Furthermore, other ion channels have been discovered to be sensitive to volatile anesthetics including serotonin type 3 receptors (Stevens et al., 2005), Na+ channels (Roch et al., 2006), mitochondrial ATP-sensitive K+ channels (Turner et al., 2005), neuronal nicotinic acetylcholine receptors (Flood & Role, 1998), and cyclic nucleotide-gated HCN channels (Chen et al., 2005). A relationship between these channels and unconsciousness has yet to be established.

A Proposed Biological Pathway of General Anesthetics

At St. John’s University, Professor Mahmud Arif Pavel’s lab found that the chemical properties of anesthetics allow them to target lipid rafts found in the cell plasma membrane. They discovered that anesthetics such as chloroform and diethyl ether target GM1 rafts and activate ion channels in a two-step mechanism. As Professor Pavel described, “for 100 years, anesthetics were speculated to target cellular membranes, yet no plausible mechanism emerged to explain a membrane effect on ion channels” (Pavel et al., 2020). Knowing that inhaled anesthetics are hydrophobic molecules that can activate TREK-1 channels and ultimately induce loss of consciousness, his lab was able to demonstrate that the inhaled anesthetics chloroform and isoflurane activate TREK-1 channels by disrupting phospholipase D2 (PLD2) localization to lipid rafts, which subsequently produces signaling phosphatidic acid (PA) (Pavel et al., 2020). The lab elucidated a mechanism in which PLD2 activates TREK-1 by binding to a disordered C terminus, in turn producing a high level of PA that activates the channel. The researchers therefore tested anesthetic sensitivity by blocking PLD2’s activity, using an inactive mutant of PLD2 (xPLD2). Since chloroform failed to activate TREK-1 in the presence of xPLD2, they concluded that PLD2 is a necessary enzyme in the biological pathway for the activation of TREK-1 by general anesthetics (Pavel et al., 2020). However, more research will be necessary to uncover the other factors involved in this specific mechanism.

Side Effects of Anesthesia

While modern anesthesia is safe, it can cause side effects during and after the medical


procedure. Although most of these side effects, such as a sore throat, rash, or hypothermia, are minor and temporary, there are serious effects that should be highlighted. In some surgeries where general anesthesia is used, an individual can exhibit confusion, long-term memory loss, and subsequent learning problems. Known as postoperative cognitive dysfunction, this is more common in older people who have preconditions like heart disease, Alzheimer’s, or Parkinson’s. In other instances, patients can suffer from pneumothorax as a result of sedation; when anesthesia is injected through a needle near the lungs, the needle may accidentally puncture a lung, causing it to collapse. A chest tube will then be required to re-inflate the lung. Of all the different types of anesthetics, local anesthesia is the least likely to cause side effects (“When Seconds Count,” n.d.). However, it too has minor side effects such as itchiness or soreness.

The Future of Anesthesia

Sonya Pease, anesthesiologist and chief medical officer of Team Health Anesthesia, has discussed the intricacies of present-day anesthesiology and the potential of the field in the future. One of the major issues she raised concerned the possibility of eliminating anesthesia-related harm entirely. According to Dr. Pease, a future with “zero harm” anesthesia will require anesthesiologists to consider redesigning healthcare delivery methodologies as well as improving patient-specific anesthesiology (Pease, 2020).


Fortunately, there have been many predictions as to what the future holds to improve patient care. One of the most popular predictions is the application of artificial intelligence. For instance, it has been hypothesized that patients will have 24/7 access to virtual nurse avatars who will assist them with home medication and provide them with healthier lifestyle choices after the surgery. This will allow doctors to continue tracking patients’ care and health after a medical procedure (Pease, 2020). In terms of the potential loss of memory or other severe side effects, preoperative testing will be developed in virtual clinics in combination with a “cognitive assistant” that will repetitively run algorithms to determine the clearest and safest procedural pathway prior to the actual procedure. Additionally, there will be ingestible sensors accompanied with drug-devices that will track and monitor the effectiveness of anesthetics as they are


injected in patients. This will allow doctors to prevent specific complications during the procedures and track the patients’ health in the days that follow (Pease, 2020).

Anesthetic Agents in Development

Novel drug development is costly and risky. Only one tenth of drugs in the early stages of development will later be approved by the Food and Drug Administration (Hay et al., 2014). There have been instances where approved drugs were withdrawn from the market because of unanticipated limitations and side effects. In recent years, drug innovation has focused on modifying the chemical structures of existing drugs to improve their pharmacodynamic and pharmacokinetic properties and to reduce their side effects (Mahmoud and Mason, 2018).

Remimazolam


Remimazolam is a new ester-based anesthetic agent that combines the properties of midazolam (a sedative) and remifentanil, two already established anesthetic drugs. Remimazolam acts on GABA receptors and exhibits pharmacokinetic properties similar to remifentanil. In animal studies, remimazolam induced a quicker onset and faster recovery than midazolam (Upton et al., 2010). Initially, remimazolam was developed for procedural sedation, but more studies have begun to focus on utilizing the agent for induction and maintenance of general anesthesia. A recent study examined the properties of inhaled remimazolam alone and as an adjunct to remifentanil in rodents. The results showed that remimazolam significantly potentiated the analgesic effect of remifentanil without lung irritation, bronchospasm, or other pulmonary complications (Bevans et al., 2017).

ADV6209

A novel generation of oral midazolam has been formulated by combining sucralose, orange aroma, and γ-cyclodextrin with a citric acid solution of midazolam (Marçon et al., 2009). Initial research has indicated that this drug can help by offering anxiolysis and sedation with improved patient acceptance and tolerance (Mahmoud and Mason, 2018). At present, substantial evidence shows that the formulation improves the longevity of oral midazolam’s shelf-life (Mathiron et al., 2013).

Conclusion

Liston’s surgery at University College London


in 1846 marked the birth of anesthesiology. Long before that era, clinicians desired immobile patients to improve the rate of procedural success. However, unwanted consequences and side effects have emerged, which has prompted the anesthesiology community to invest in further research into the molecular pathways and mechanisms of general anesthesia. In doing so, novel drugs such as remimazolam, ADV6209, and many more are currently in the research and animal testing phase. As novel anesthetics are studied and uncovered, a future with zero defects in anesthesiology-related procedures may be near.

References

Bevans, T., Deering-Rice, C., Stockmann, C., Rower, J., Sakata, D., & Reilly, C. (2017). Inhaled remimazolam potentiates inhaled remifentanil in rodents. Anesthesia and Analgesia, 124(5), 1484–1490. https://doi.org/10.1213/ANE.0000000000002022

Campagna, J. A., Miller, K. W., & Forman, S. A. (2003). Mechanisms of actions of inhaled anesthetics. The New England Journal of Medicine, 348(21), 2110–2124. https://doi.org/10.1056/NEJMra021261

Chang, C. Y., Goldstein, E., Agarwal, N., & Swan, K. G. (2015). Ether in the developing world: Rethinking an abandoned agent. BMC Anesthesiology, 15. https://doi.org/10.1186/s12871-015-0128-3

Chen, X., Sirois, J. E., Lei, Q., Talley, E. M., Lynch, C., & Bayliss, D. A. (2005). HCN subunit-specific and cAMP-modulated effects of anesthetics on neuronal pacemaker currents. The Journal of Neuroscience, 25(24), 5803–5814. https://doi.org/10.1523/JNEUROSCI.1153-05.2005

Effects of Anesthesia on Brain & Body—When Seconds Count. (n.d.). When Seconds Count | Anesthesia, Pain Management & Surgery. Retrieved August 16, 2020, from https://www.asahq.org/whensecondscount/anesthesia-101/effects-of-anesthesia/

Fitzharris, L. (2018). Prologue: The age of agony. In The Butchering Art: Joseph Lister’s Quest to Transform the Grisly World of Victorian Medicine. Penguin Books.

Flood, P., & Role, L. W. (1998). Neuronal nicotinic acetylcholine receptor modulation by general anesthetics. Toxicology Letters, 100–101, 149–153. https://doi.org/10.1016/s0378-4274(98)00179-9

Forman, S. A., & Chin, V. A. (2008). General anesthetics and molecular mechanisms of unconsciousness. International Anesthesiology Clinics, 46(3), 43–53. https://doi.org/10.1097/AIA.0b013e3181755da5

Franks, N. P., & Honoré, E. (2004). The TREK K2P channels and their role in general anaesthesia and neuroprotection. Trends in Pharmacological Sciences, 25(11), 601–608. https://doi.org/10.1016/j.tips.2004.09.003

General anesthesia—Sedation—UCLA Anesthesiology & Perioperative Medicine—UCLA Health, Los Angeles, CA. (n.d.).


Retrieved August 16, 2020, from https://www.uclahealth.org/ anes/types-of-anesthesia

Korean Journal of Anesthesiology, 59(1), 3–8. https://doi. org/10.4097/kjae.2010.59.1.3

Hay, M., Thomas, D. W., Craighead, J. L., Economides, C., & Rosenthal, J. (2014). Clinical development success rates for investigational drugs. Nature Biotechnology, 32(1), 40–51. https://doi.org/10.1038/nbt.2786

Stevens, R. J. N., Rüsch, D., Davies, P. A., & Raines, D. E. (2005). Molecular properties important for inhaled anesthetic action on human 5-HT3A receptors. Anesthesia and Analgesia, 100(6), 1696–1703. https://doi.org/10.1213/01. ANE.0000151720.36988.09

Jevtović-Todorović, V., Todorović, S. M., Mennerick, S., Powell, S., Dikranian, K., Benshoff, N., Zorumski, C. F., & Olney, J. W. (1998). Nitrous oxide (laughing gas) is an NMDA antagonist, neuroprotectant and neurotoxin. Nature Medicine, 4(4), 460–463. https://doi.org/10.1038/nm0498-460 Linden, A.-M., Sandu, C., Aller, M. I., Vekovischeva, O. Y., Rosenberg, P. H., Wisden, W., & Korpi, E. R. (2007). TASK-3 knockout mice exhibit exaggerated nocturnal activity, impairments in cognitive functions, and reduced sensitivity to inhalation anesthetics. The Journal of Pharmacology and Experimental Therapeutics, 323(3), 924–934. https://doi. org/10.1124/jpet.107.129544

Turner, L. A., Fujimoto, K., Suzuki, A., Stadnicka, A., Bosnjak, Z. J., & Kwok, W.-M. (2005). The interaction of isoflurane and protein kinase C-activators on sarcolemmal KATP channels. Anesthesia and Analgesia, 100(6), 1680–1686. https://doi. org/10.1213/01.ANE.0000152187.17759.F6 Upton, R. N., Somogyi, A. A., Martinez, A. M., Colvill, J., & Grant, C. (2010). Pharmacokinetics and pharmacodynamics of the short-acting sedative CNS 7056 in sheep. British Journal of Anaesthesia, 105(6), 798–809. https://doi.org/10.1093/bja/ aeq260

Magee, R. (2000). Surgery in the Pre-Anaesthetic Era: The Life and Work of Robert Liston. Health and History, 2(1), 121–133. JSTOR. https://doi.org/10.2307/40111377

Weir, C. J. (2006). The molecular mechanisms of general anaesthesia: Dissecting the GABAA receptor. Continuing Education in Anaesthesia Critical Care & Pain, 6(2), 49–53. https://doi.org/10.1093/bjaceaccp/mki068

Mahmoud, M., & Mason, K. P. (2018). Recent advances in intravenous anesthesia and anesthetics. F1000Research, 7. https://doi.org/10.12688/f1000research.13357.1

Wright, A. S., & Maxwell, P. J. (2014). Robert Liston, M.D. (October 28, 1794-December 7, 1847): The fastest knife in the West End. The American Surgeon, 80(1), 1–2.

Marçon, F., Mathiron, D., Pilard, S., Lemaire-Hurtel, A.-S., Dubaele, J.-M., & Djedaini-Pilard, F. (2009). Development and formulation of a 0.2% oral solution of midazolam containing gamma-cyclodextrin. International Journal of Pharmaceutics, 379(2), 244–250. https://doi.org/10.1016/j. ijpharm.2009.05.029 Mathiron, D., Marçon, F., Dubaele, J.-M., Cailleu, D., Pilard, S., & Djedaïni-Pilard, F. (2013). Benefits of methylated cyclodextrins in the development of midazolam pharmaceutical formulations. Journal of Pharmaceutical Sciences, 102(7), 2102–2111. https://doi.org/10.1002/jps.23558 Patel, A. J., Honoré, E., Lesage, F., Fink, M., Romey, G., & Lazdunski, M. (1999). Inhalational anesthetics activate twopore-domain background K+ channels. Nature Neuroscience, 2(5), 422–426. https://doi.org/10.1038/8084 Pavel, M. A., Petersen, E. N., Wang, H., Lerner, R. A., & Hansen, S. B. (2020). Studies on the mechanism of general anesthesia. Proceedings of the National Academy of Sciences, 117(24), 13757–13766. https://doi.org/10.1073/pnas.2004259117 Pease, Sonya MD (2020). Future of Anesthesiology: Anesthesia Industry Predictions for 2028. (n.d.). Retrieved August 16, 2020, from https://www.teamhealth.com/blog/ anesthesiology-2028-bd/?r=1 Raines, D. E., Claycomb, R. J., Scheller, M., & Forman, S. A. (2001). Nonhalogenated alkane anesthetics fail to potentiate agonist actions on two ligand-gated ion channels. Anesthesiology, 95(2), 470–477. https://doi. org/10.1097/00000542-200108000-00032 Roch, A., Shlyonsky, V., Goolaerts, A., Mies, F., & SaribanSohraby, S. (2006). Halothane directly modifies Na+ and K+ channel activities in cultured human alveolar epithelial cells. Molecular Pharmacology, 69(5), 1755–1762. https://doi. org/10.1124/mol.105.021485 Son, Y. (2010). Molecular mechanisms of general anesthesia.



Tripping on a Psychedelic Revolution A Historical and Scientific Overview, with Dr. Rick Strassman and Ken Babbs BY JULIA ROBITAILLE '23 Cover Image: Psilocybin mushrooms have been used as a psychedelic for centuries and grow naturally in many parts of the world. Source: Wikimedia Commons [Credit: Thomas Angus, Imperial College London]


Overview

Lysergic acid diethylamide, known more commonly as “LSD,” was first synthesized by Swiss chemist Albert Hofmann in 1938; its psychoactive effects were discovered in 1943. It was initially used as an investigational drug for clinical research, but by the 1960s, it was widely used among Americans as a recreational drug. LSD’s mainstream use and connection to various drug-fueled social movements led to its criminalization in 1968, bringing the field of psychedelic research to a sudden halt. The U.S. Drug Enforcement Administration (DEA) claimed that the ban on LSD was prompted by medical reasons, noting the dangers of psychedelic drugs. But the research to support these claims was, and still is, scarce. Social and political motives played a major role in the decision to criminalize psychedelics. Through the expertise of two psychedelic pioneers, this paper explores the effects of LSD on the brain and its validity within a clinical setting.

I first interviewed Dr. Rick Strassman, a leading researcher in the field of psychedelics and their use in psychotherapy and integrative medicine. Dr. Strassman has worked as a psychiatrist for many years and has received numerous research grants. He has published over forty-one peer-reviewed articles and written four books. I also interviewed Ken Babbs, a central figure in the psychedelic revolution that began in the late 1950s. He is a certified Merry Prankster and novelist – he was on the original famed cross-country bus trip with Ken Kesey (author of One Flew Over the Cuckoo’s Nest, 1962) that became a hallmark of the psychedelic era. Both Babbs and Strassman were kind enough to talk to me and provide their expert insight on the risks and benefits of psychedelics.


Figure 1: A storefront window displaying a poster that reads, “Hippies use backdoor.” Source: Wikimedia Commons [Runran, Public Domain]

History of Psychedelics in America

Following its discovery, the clinical effects of LSD were thoroughly researched. LSD was utilized in psychiatry as a psychotomimetic – a drug that mimics disordered psychiatric states by inducing psychotic symptoms. Researchers administered LSD to healthy participants in order to study their delusions and hallucinations, both of which were present in patients with psychosis and schizophrenia. LSD was also used as an investigational drug in clinical settings, as it was piloted for the treatment of depression (Novak, 1997). Within ten years of LSD’s synthesis, more than thirty scientific publications had appeared on its clinical effects (Nichols, 2013). In 1956, Dr. Sidney Cohen – who had personally taken LSD – began to use psychedelics in psychotherapy to treat depression and alcohol dependence (DiPaolo, 2018). LSD was even informally endorsed by the founders of Alcoholics Anonymous for its potential in combatting addiction. During this time, the rhetoric surrounding psychedelics shifted significantly: LSD went from being an experimental psychotomimetic to something with tangible therapeutic potential (Novak, 1997). This inevitably led to the wider recreational use of LSD. By 1960, the roots of counterculture movements were taking hold in America. Young adults and liberal intellectuals, who would eventually become known as “hippies,” began to use psychedelics in creative endeavors.

It was around this time that Ken Babbs met Ken Kesey – and was first introduced to LSD. Both Babbs and Kesey were students in the Stanford Graduate Writing Program and met at a cocktail party held by a professor before the start of the term. “We hit it off right away. We had the same kind of temperaments. We were both imaginative, and both outgoing, and both athletic. We just really meshed,” Babbs says. In 1961, Ken Kesey was working in the Menlo Park VA Hospital, where psychedelic drug studies were being conducted. Kesey willingly volunteered in the experiments, in which researchers would administer different drugs to him and observe his symptoms. “Sometimes it would be a placebo, and then other times it would be different. Then, there was one drug that was just really a knockout,” Ken Babbs says. That drug was LSD. There were no laws against psychedelic drugs at the time, so Kesey sought to obtain some LSD of his own. “He went into the office where that guy was running the show at night, when nobody was there. And he opened a drawer, and found a bottle from Sandoz lab, from Switzerland – pure LSD tabs,” explains Ken Babbs. Kesey took the LSD and brought it back to his house on Perry Lane in Palo Alto, where Ken Babbs had his first dose of the drug. “That’s where people started drinking wine and playing banjos and getting high on LSD.”



Figure 2: Dr. Robin Carhart-Harris presents at the Centre for Psychedelic Research. Dr. Carhart-Harris’ Entropic Brain Model explains how psychedelics may allow one to experience multiple combinations of cognitive, emotive, and perceptive functions. Source: Wikimedia Commons [Thomas Angus, Imperial College London]

" ' When LSD hit the West Coast, it was like a big tsunami came roaring in and everybody was high'..."


Meanwhile, on the East Coast, two psychology professors at Harvard were using LSD for psychological studies on graduate students. The professors coordinating these unconventional experiments (the famed Timothy Leary and Richard Alpert) were fired once the administration caught wind of the studies. But Leary and Alpert soon became leaders of the new psychedelic movement, just as LSD was gaining popularity among Hippies, Bhikkhus, and gypsies. Psychedelics were flowing through these groups in subcurrents and tributaries, waiting to permeate the river of mainstream American consciousness. “When LSD hit the West Coast, it was like a big tsunami came roaring in and everybody was high,” says Ken Babbs. Recreational use of LSD was growing as psychedelics percolated through the wider American population. Famous artists began using the drug to boost creativity. Hippies took the drug for an intimate spiritual experience often described as more meaningful than earthly existence. LSD soon became a major player in American counterculture movements. The Sixties Psychedelic Revolution, led primarily by free thinkers, liberals, and “leftists,” both cascaded into and coincided with the Civil Rights, Peace, and Hippie Movements that followed. However, the increased use of LSD also brought about its increased misuse. The dangers of black-market distribution and unsupervised use led to negative experiences with the drug, which threatened to overshadow LSD’s therapeutic and positive uses. Abusers of the drug and others with bad experiences helped perpetuate false myths about LSD – such as its ability to “fry” one’s brain or make someone permanently insane. In addition, Dr. Sidney Cohen, the same scientist who had presented the world with a beneficial view of psychedelics, turned his back on LSD in 1962. He publicly stated his wariness of LSD’s potential for abuse. He worried about the impurities that arose in street drugs and proposed that the unsupervised use of LSD was too dangerous. This claim led to regulations passed in order to “crack down” on the use of psychedelics in America (Novak, 1997).

The Criminalization of LSD

In 1968, as part of Nixon’s ‘War on Drugs,’ the federal government banned LSD in the United States. It is not clear, however, whether the drug’s criminalization was prompted by actual medical concerns or by socioeconomic and political motives (Novak, 1997). It is often speculated that these laws were enacted not for the purpose of medical safety, but rather for political and social reasons. “It was thrown in with the rest of those drugs – like heroin and methedrine, so it was kind of demonized for a while,” Ken Babbs says. It is commonly believed that the criminalization of LSD and the ‘War on Drugs’ were part of an effort to marginalize and discriminate against the Hippies in order to quell the anti-war movement. “The War on Drugs was lost before it began, because it was all based on a lie,” says Mumia Abu-Jamal in Turning the Tide. John Ehrlichman, a Nixon White House aide, opened up about the ‘War on Drugs’ in a 1994 interview with Harper’s Magazine, admitting that the so-called ‘War’ was actually an effort to “attack” the enemies of the Nixon campaign (Abu-Jamal, 2016). The media portrayed the Hippies as dirty, classless, barefooted drug addicts on the fringes of society, posing a threat to the morals of an upstanding country. Discrimination was evident in billboards that displayed “Keep America Clean: Take a Bath,” “Hippies not served here,” and “Get a Haircut.” In 1968, police sweeps enacted violence and brutality against Hippies in San Francisco streets (DiPaolo, 2018). “It wasn’t about drugs. It never was. It was about Politics,” writes Abu-Jamal.

Psychedelics and the Brain

The field of psychedelic research was significantly stunted in the 1960s when psychedelics were criminalized. Researchers have since developed multiple theories on how these drugs act on the brain as a physical system, but there is more work to be done. In 1953, Aldous Huxley proposed that psychedelics act on the brain’s “cerebral reducing valve,” a function which he defined as the brain’s method of filtering sensory information from everyday experiences. According to Huxley, psychedelics reduce the efficacy of this valve, allowing the brain to experience multiple processes – such as emotion, cognition, and perception – all at once. Huxley believed that young children, whose “cerebral reducing valves” were not yet fully solidified, could experience this phenomenon to an extent (Swanson, 2018). This is consistent with users’ accounts, which report a return to a child-like sense of wonder and delight. Recent theories on psychedelic effects focus on the brain as a dynamic and physical system. Dr. Robin Carhart-Harris postulates that psychedelics produce effects on the brain at three different levels. The first is the brain receptor level. Most psychedelics work as serotonin 2A receptor agonists, meaning they bind to and activate a response in these receptors (Carhart-Harris, 2019). Usually this binding causes the depolarization of a neuron, increasing its likelihood of firing. The excitability of individual neurons, however, does not correlate with general excitability of the whole brain. In fact, functional magnetic resonance imaging studies have found that the frequency and amplitude of brain waves under the influence of psychedelic drugs are lower than they would be during resting states. This phenomenon can be explained by the possible excitement of inhibitory neurons. Serotonin 2A receptors are not the only receptors involved in the effects of psychedelics. Some drugs produce downstream effects that eventually trigger other neurotransmitters, such as glutamate and dopamine, which cause differing symptoms and experiences (Swanson, 2018). The second level at which Dr. Carhart-Harris believes psychedelics work is the functional level. Psychedelics increase the brain’s plasticity, or its ability to change and adapt in accordance with experience (Carhart-Harris, 2019). A 2017 paper published by Nichols et al. states that psychedelics increase the expression of genes that affect the synaptic plasticity of the brain. This triggers a cascade of actions, resulting in the activation of serotonin 2A receptors in the brain and the onset of what is referred to as the psychedelic experience (Nichols et al., 2017). Psychedelics also work on the dynamic level, which resembles Huxley’s cerebral reducing valve theory. This theory suggests that psychedelics produce effects when normal brain entropy is disturbed. In the Entropic Brain Theory (EBT), developed by Carhart-Harris, the brain normally functions in a state of suppressed entropy (or disorder), in which modes of perception and cognition are constrained (Carhart-Harris, 2019). This normal state of suppressed entropy allows for optimal focus and survival – making it evolutionarily favorable. According to EBT, psychedelics interfere with these information-suppressing systems, increasing the brain’s entropy.
This allows for temporarily expanded combinations of cognitive, perceptive, and emotive functions which would not normally occur (Swanson, 2018). These combinations result in hallucinations and intense sensory experiences that make up a psychedelic “trip.”
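The notion of brain entropy in EBT can be made concrete with Shannon entropy, which measures how spread out a probability distribution is over possible states. The sketch below is an illustration added here, not taken from the cited papers; the four-state distributions are hypothetical toy values. It shows that a distribution dominated by one mode (constrained cognition) has low entropy, while a flatter distribution over many modes has higher entropy:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical toy distributions over four "modes of cognition":
constrained = [0.97, 0.01, 0.01, 0.01]   # ordinary waking state: one dominant mode
expanded    = [0.25, 0.25, 0.25, 0.25]   # flattened distribution: all modes equally likely

print(shannon_entropy(constrained))  # ≈ 0.24 bits (suppressed entropy)
print(shannon_entropy(expanded))     # 2.0 bits, the maximum for four states
```

On this toy picture, the "information-suppressing systems" keep the distribution peaked (low entropy), and psychedelics flatten it, allowing combinations of states that would otherwise be rare.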


Ultimately, psychedelics are thought to set off a cascade of downstream effects that eventually lead to an experience often described as a heightened sense of relaxation (Carhart-Harris, 2019). Although some liken this to a mystical experience, Dr. Rick Strassman envisions psychedelics as “super placebos.”


Figure 3: The striking similarities in the molecular structures of LSD (A) and serotonin (B) suggested that the drug may act on these receptors in the brain. Source: (A) Wikimedia Commons [D0ktorz, 2006]; (B) Wikimedia Commons [U3117276]




“They reinforce more or less conscious preexisting beliefs, aspirations, goals, and so on,” he says. “Look at Charles Manson. He used LSD in shaping his followers from being halfhearted psychopaths to fully committed ones.” Alongside this, Strassman warns the psychedelic community against “overreaching” in their interpretations of these experiences. “Psychologists are not theologians, and psychiatrists are not ministers,” Strassman cautions. “When people start making claims about areas in which they are not qualified to do so, there’s backlash. So, talking about ‘God’ as a psychologist will be criticized by those with authority in the field.”

Potential Benefits & Risks

Psychedelics can also provide a vast array of potential benefits. For Ken Babbs, psychedelics changed the course of his life. “It opens all kinds of doors,” he says. “Before I had LSD, I was kind of a frat rat, know-it-all. But when I took LSD, it wised me up.” “When you’re high on LSD, you’re roaming through the cosmos, you’re back in time having fights with dragons with a sword, and you’re having huge parties with people. I mean, you’re going everywhere… Sometimes, if you’re really lucky, you can leave your body and roam around,” Babbs says. “Now, they’re exploring [LSD] along with psilocybin mushrooms as beneficial for people in certain ways,” Babbs says. “I think it’s a good idea because it could be used in therapy. But you know, there’s always the problem – with all those drugs – that if people do too much of it, it’s not good for them.” Psychedelics haven’t been widely used in psychiatry since they were criminalized in the 1960s. But in the past decade, a “psychedelic renaissance” has taken place, as researchers reevaluate the value of these drugs in psychiatry. Rick Strassman confirms the potential applications of psychedelic-assisted psychotherapy for disorders including obsessive-compulsive disorder, alcohol or tobacco dependence, depression, and end-of-life issues – he even notes that “the list continues to grow.” There seem to be two ways in which psychedelic therapies can be implemented. One such method is described by Dr. Rick Strassman as “the production of a very intense psychedelic or peak experience which sets into motion a series of downstream effects, resulting in the desired outcome.” Another method involves a low dosage of the drug in conjunction with other treatments. This method “enhances the beneficial effects of more traditional therapies, while being used in the context of those traditional therapies—talk therapy, for example.”


Carhart-Harris similarly reiterates that “psychedelics initiate a cascade of neurobiological changes that manifest at multiple scales and ultimately culminate in the relaxation of high-level beliefs” (Carhart-Harris, 2019). The ability of psychedelics to induce a relaxed and flexible neuroplastic state may allow for groundbreaking intervention in individuals with disordered pathology (Nichols et al., 2017). Integrating psychedelics into practical psychotherapy also has potential risks that must be considered. According to Dr. Strassman, “The risks are primarily psychological and must be addressed by careful screening, supervision of drug sessions, and post-session integration. Some people have flashbacks, some develop psychosis, anxiety, depression.” In addition, patients may experience drawbacks when psychedelic therapy does not work as promised or is improperly administered. “There was a suicide at Hopkins in the terminally ill patient study because the patient got a low dose of drug and didn’t attain the experience everyone told her would be healing and curative. While this may not have been a drug side effect, it can be interpreted as a side effect of the model. Another reason why we must remain open-minded regarding how psychedelics cure,” says Dr. Strassman. Along with treatment, researchers must work to manage patient expectations with regard to psychedelic-assisted psychotherapy. An argument commonly made against the use of psychedelics is that they lack medical or societal value. Yet existing research demonstrates their versatility and utility in psychiatry. Psychedelic research has even proven useful in other ways: LSD was integral to the discovery of the relationship between brain chemistry and behavior. In 1954, the tryptamine moiety in the structure of LSD was found to be present in serotonin as well. The similarities between the molecules led researchers Woolley and Shaw to hypothesize that LSD works by interacting with serotonin receptors in the brain. This discovery emerged from psychedelic research and was integral to establishing the role of chemical makeup in brain functions and disorders (Nichols, 2013).

A Cautious Future

The potential risks associated with using psychedelics must be critically analyzed before these drugs are implemented in a clinical setting. Ken Babbs cautions that the reactions elicited by these drugs can vary from person to person. “People have been drinking [alcohol] since the beginning of time. It can be fun, but also too much of it can be bad. I think the same sort of thing is true of all drugs,” Babbs says. He also emphasizes the necessity of preparation. “We had to be as set, mentally, physically, and morally, as the astronauts did when they went up into space – to be able to come back and be able to resume our regular lives – and not be left out there in some weird place,” Babbs says. “We had good launching pads. When we left, we had good places to come back to. Now I’m not sure that everybody can do that.” Many claim that LSD is a dangerous drug. But it is difficult to find unbiased evaluations, because these claims are often confounded with stigmas and subliminal biases, partially as a result of the ‘War on Drugs’. Many stigmas surrounding psychedelic drugs persist today. US law currently classifies LSD, peyote, and other psychedelics, along with heroin, as Schedule I substances, due to their high potential for abuse and potential to create severe psychological and/or physical dependence (“Drug Scheduling,” n.d.). But according to researcher David E. Nichols, psychedelics “are generally considered physiologically safe and do not lead to dependence or addiction.” Psychedelic drugs have even shown potential to aid in combatting substance addiction disorders (Nichols, 2016). Dr. Strassman agrees. “Physiologically, psychedelics are safe. The only exception might be the parenterally active tryptamines which raise blood pressure and heart rate, and which should be avoided by people with cerebrovascular, cardiovascular disease, or epilepsy—common sense kinds of contraindications,” Dr. Strassman says.
In a 1967 publication, Abram Hoffer and Humphry Osmond respond quite frankly to claims about the dangers of LSD: “Is LSD a dangerous drug? Of course it is. So is salt, sugar, water, and even air. There is no chemical which is wholly safe nor any human activity which is completely free of risk. The degree of toxicity or danger associated with any activity depends on its use. Just as a scalpel may be used to cure, it may also kill. Yet we hear



no strong condemnatory statements against scalpels” (Hoffer & Osmond, 1967).

Conclusion


There are still several barriers to break and stigmas to overcome in psychedelic research. There will always be dissenting opinions regarding its use, and according to Ken Babbs, this is simply the way of everything in life. “A certain class of people doesn’t want other people doing certain things. It’s always been like that. It’s one of the nice things about society. There are so many different attitudes… so you just got to work your way through all that and not let it hang you up,” he says. While some psychedelics have been cleared for research, society is a long way from legalizing psychedelics for wider public use. Dr. Strassman says, “The move to decriminalize psychedelics will also create a backlash. This move to decriminalize is heavily dependent on the promising results coming out of laboratory research on beneficial effects. As more people take psychedelics, more adverse effects will be reported—suicides, crazy behavior, psychosis, flashbacks, and the like. We need to be ready for this.” “Downplaying the adverse effects is a recipe for problems; that is, the media and larger public will say, ‘Why didn’t you tell us about this?’ If we can get ahead of the adverse effects before it’s even a problem, we will be in a much better position to deal with these types of reports,” says Dr. Strassman. History has shown that some of the most influential breakthrough discoveries were achieved by thinking outside the box and breaking from mainstream processes. As Hoffer and Osmond wrote in 1967, “To the extent to which discovery changes our interpretive framework, it is logically impossible to arrive at it by the continued application of our previous interpretative framework. In other words, discovery is creative also in the sense that it is not to be achieved by the diligent application of any previously known and specifiable procedure” (Hoffer & Osmond, 1967).
Moving forward with psychedelics, but with a healthy amount of skepticism and caution, seems to be the consensus of the experts. According to Dr. Strassman, “One of the reasons this field was shut down in the late ‘60s and early ‘70s was because of the zealotry of its advocates.

They thought they had found the silver bullet and stopped looking at mechanisms. Instead, they just applied it to everything that they could see, in kind of a religious fervor.” As for Ken Babbs, he believes that the use of psychedelics is just one of the many ways to stay happy in life. “Oh yeah, well, life is a groove, and you have to find the groove.”

References

Abu-Jamal, M. (2016). Ehrlichman: ‘War On Drugs Really ’Bout Blacks & Hippies’. Turning the Tide. Retrieved from https://search-proquest-com.dartmouth.idm.oclc.org/docview/1787805206?accountid=10422

Baum, D., Kroll-Zaidi, R., & Indiana, G. (2016, March 31). Legalize it all. Harper’s Magazine. Retrieved June 6, 2020, from https://harpers.org/archive/2016/04/legalize-it-all/

Carhart-Harris, R. L. (2019). How do psychedelics work? Current Opinion in Psychiatry, 32(1), 16–21. https://doi.org/10.1097/YCO.0000000000000467

Carhart-Harris, R. L., & Friston, K. (2010). The default-mode, ego-functions and free-energy: A neurobiological account of Freudian ideas. Brain, 133, 1265–1283. https://doi.org/10.1093/brain/awq010

DiPaolo, M. (2018). LSD and the Hippies: A focused analysis of criminalization and persecution in the Sixties. PIT Journal. Retrieved June 4, 2020, from http://pitjournal.unc.edu/content/lsd-and-hippies-focused-analysis-criminalization-and-persecution-sixties

Drug Scheduling. (n.d.). Retrieved June 6, 2020, from http://www.dea.gov/drug-scheduling

Friedenberg, E. Z. (1971). The Anti-American Generation. Chicago: Aldine. Retrieved from https://www.taylorfrancis.com/books/9781315082240

Hoffer, A., & Osmond, H. (1967). Criticisms of LSD therapy and rebuttal. In The Hallucinogens. Retrieved June 4, 2020, from http://www.psychedelic-library.org/lsd1.htm

Nichols, D. E. (2013). Serotonin, and the past and future of LSD. MAPS Bulletin Special Edition. Retrieved from https://maps.org/news-letters/v23n1/v23n1_p20-23.pdf

Nichols, D. E. (2016). Psychedelics. Pharmacological Reviews, 68(2), 264–355. https://doi.org/10.1124/pr.115.011478

Nichols, C. D., Gainetdinov, R. R., Nichols, D. E., & Kalueff, A. V. (2017). Psychedelic drugs in biomedicine. Trends in Pharmacological Sciences, 38(11). https://doi.org/10.1016/j.tips.2017.08.003

Novak, S. J. (1997). LSD before Leary: Sidney Cohen’s critique of 1950s psychedelic drug research. Isis: An International Review Devoted to the History of Science and Its Cultural Influences, 87–110. https://doi.org/10.1086/383628

Swanson, L. R. (2018). Unifying theories of psychedelic drug effects. Frontiers in Pharmacology, 9, 172. https://doi.org/10.3389/fphar.2018.00172




The Functions and Relevance of Music in the Medical Setting BY KAMREN KHAN '23 AND YVON BRYAN Cover Image: The application of music within the clinical setting. Source: Shutterstock

Introduction

This article explores the role of music in clinical contexts with respect to the psychologically demanding nature of the medical field. The discussion begins with a consideration of the levels of anxiety in both patients and medical professionals and the effects of this anxiety in clinical contexts. The focus then turns to several studies that explore the influence of musical intervention on patient experience and physiological state, and finally to a collection of studies assessing the influence of music exposure on the surgical performance of clinicians.

Music in the Medical Setting Almost universally, the thought of impending surgery induces a gripping sensation of anxiety characterized by negative emotional valence, feelings of tension, and increased autonomic activation (Kazdin & Alan 2000). At Yirgalem 152

Zonal hospital in Ethiopia, researchers found the incidence of preoperative anxiety for patients undergoing elective surgery to be 47 % (Bedaso & Ayalew, 2019). Unsurprisingly, preoperative anxiety can contribute to a traumatic and stressful experience for patients. More concretely, the severity of a patient’s anxiety is predictive of the amounts of intravenous propofol and sevoflurane gas that a patient will require under general anesthesia, where greater levels of anxiety correspond to greater doses needed to achieve sedation (Kil et al, 2012). Furthermore, preoperative symptoms of anxiety correspond to lower levels of patient satisfaction (Kavalnienė et al, 2018). Preoperative anxiety therefore directly affects not only the patient experience but also the outcome of the medical procedures themselves. However, the effects of anxiety are not limited to patients alone. There is also a high prevalence of anxiety DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE

among healthcare personnel. A cross-sectional survey of Chinese nurses in public city hospitals found the incidence of anxiety symptoms to be 43% (Gao et al., 2012). Similarly, physicians suffer from the emotional demands of their job, including feelings of obligation to the health of the patient, feelings of responsibility or powerlessness in response to the declining health of patients, grief, and concerns about contracting illnesses, all of which may ultimately degrade both the physician’s well-being and the quality of the care they provide (Meier, Back & Morrison, 2001). These emotions are often shared by family members of patients and accentuated by the uncertain and foreign nature of surgery for those outside the medical field. Essentially, surgery acts as a source of anxiety that affects a broad range of people, including patients, families of patients, and healthcare personnel themselves, which in turn may affect the quality of care given to the patient and the well-being of all those involved. The prevalence and effect of anxiety related to medical care necessitate intervention in order to improve the experience and general quality of healthcare. While the sources of this anxiety (the uncertainty, invasiveness, and implicit risk of surgery) are themselves fundamental to medical care, the anxiety may be treated symptomatically. Of all available methods of intervention, music meets the demands of universal and undemanding applicability with low potential for adverse outcomes. In fact, a recent study found that the primary reason people listen to music is to manage and regulate mood (Lonsdale & North, 2011). Additionally, music therapy has been shown to decrease levels of anxiety and depression (Jasemi, Aazami & Zabihi, 2016). How then might the efficacy of music in regulating mood and managing anxiety translate into a clinical setting?

The Effects of Musical Intervention on Patients Perioperative (around the time of surgery) music therapy decreases patient anxiety and elicits a wide variety of related benefits. The benefits of this therapy unsurprisingly vary with respect to the features of the chosen music. Studies have shown that exposure to music of the patient’s preference can positively affect patients undergoing surgery. In a prospective randomized double-blind study evaluating patients undergoing abdominal surgery under general anesthesia, researchers


exposed an intervention group to music of their preference immediately following induction of anesthesia. As referenced in Figure 4, the intervention group had more stable systolic arterial blood pressure, a calmer recovery, higher satisfaction, and lower pain as reported on the Visual Analog Scale – a subjective yet reliable method of reporting pain wherein patients mark a location along a continuum from no pain to extreme pain corresponding to their pain level (Kahloul et al., 2017). Patients most frequently selected Tunisian music, perhaps indicating a bias toward familiarity (as the study occurred in Tunisia) when seeking anxiety-reducing effects. Ultimately, the study demonstrated the efficacy of perioperative exposure to patient-selected music in objectively improving the experience of patients undergoing surgery. However, allowing patients to select whatever music they want fails to account for the breadth of criteria behind that choice. Some patients may choose music because it calms them, others because of its association with happy memories, and still others because they find it particularly engaging. This variation in decision-making criteria allows for variance in the role of the music and likely corresponds to variation in affective response to it.

Alternatively, another study exposed patients to self-selected music followed by a mandatory playlist curated by a music therapist to promote serenity and relaxation. The plurality of music selections (41%) consisted of indigenous or religious music, affirming the preference for familiarity demonstrated in the previous study. Ultimately, the researchers found that musical intervention decreased scores on the Hospital Anxiety and Depression Scale, a fourteen-item questionnaire that gauges patient symptoms of both anxiety and depression (Tan et al., 2020). One limitation of the study was the notion that the mandatory playlist promoted relaxation and serenity.
This presumed an overly simplistic level of universality in music perception, as perception of music varies culturally and generationally (Thompson, 2010). As music therapy operates through the regulation of mood, one might therefore expect the efficacy of music therapy to vary by cultural, and perhaps even personal, background. In some cases, musical intervention was supplemented by the involvement of a music therapist. In a three-group randomized controlled trial,

“Perioperative (around the time of surgery) music therapy decreases patient anxiety and elicits a wide variety of related benefits.”


Figure 2. An example of the Visual Analog Scale (VAS) used to measure pain. Source: Created by Author

patients were divided into two experimental groups, live music and recorded music, and a control group. Patients in the two experimental groups took part in five-minute music therapy sessions in which they listened to and discussed a preferred song with the music therapist. The patients in these two groups then listened to music from a playlist curated by the music therapist and characterized by smooth melodic lines, stable rhythms, and consistent dynamics. Though the two experimental groups did not differ significantly from the control group in the amount of propofol needed to achieve moderate sedation or in satisfaction scores, the experimental groups had larger reductions in preoperative anxiety. Additionally, recovery times of patients in the live music group were shorter than those of patients in the recorded music group, suggesting live music was a more effective form of intervention (Palmer et al., 2015). The comparatively modest benefit of musical intervention in this study perhaps implicates duration of intervention as a factor, as the other, more successful studies involved longer exposure to music.

“... recovery times of patients in the live music group were shorter than those of patients in the recorded music group, suggesting live music was a more effective form of intervention."


Palmer’s study was perhaps also limited by the narrow instrumentation that resulted from the use of live music: the live music included only piano and guitar, which may alter the effect of the music. For example, a song with guitar and drums might elicit significantly different neurophysiological activation than the same song without drums, given the potential for communication of emotion through drumming (Rojiani, Zhang, Noah, & Hirsch, 2018). Lastly, the choice to subject the live music and recorded music treatment groups to the same intraoperative music seems to contradict the previous reliance on patient preference. This raises the question of how much intraoperative music influences patients’ psychological states and neurophysiological activation under general anesthesia (referring to the medically induced loss of consciousness).

In contrast to the aforementioned brief preoperative intervention, other studies have assessed more prolonged intervention spanning the entire perioperative period. In a randomized controlled trial, patients undergoing mastectomies under general anesthesia in the experimental group listened to music of their preference (of four genres: classical, easy listening, inspirational, and new age) with earphones throughout the preoperative, intraoperative, and postoperative periods. Patients in the experimental group had greater reductions in mean arterial pressure, greater reductions in anxiety, and less pain spanning from the preoperative period to the time of discharge from the recovery room (Binns-Turner et al., 2008).

General Trends in Musical Intervention and Patient Experience The large degree of variation in experimental design and operational contexts prevents the simplification of perioperative music therapy into a universal procedure. However, the varying success of different studies reflects the existence of general trends that predict positive or negative outcomes. Firstly, it appears that music of patient preference outperforms music deemed to be ubiquitously ‘calming.’ Next, the benefit of music therapy may vary by duration of intervention, with longer duration corresponding to greater reductions in anxiety and pain. However, this apparent trend may arise from the resultant variation in exposure to anxiety-inducing aspects of medical care; subjects who spend less time exposed to music consequently spend more time fully immersed in their stressful environment, which presumably acts as the source of their anxiety. Lastly, it appears music therapy can benefit patients throughout the entire perioperative period, and exposure should therefore not be limited simply to the preoperative period.


Figure 3. A depiction of surgeons during an operation Source: Shutterstock

The Effects of Musical Exposure on Surgical Performance The relevance of music in clinical contexts is not limited to patients. In fact, a survey conducted by physicians at the University Hospital of Wales found that music is played 62–72% of the time in the operating theater. Specifically, instrumental and classical music appear to be among the most popular genres in the OR (Ullmann et al., 2008; George, Ahmed, Mammen & John, 2011). Naturally, one might wonder why music is so prevalent in operating rooms. In a questionnaire-based cross-sectional prospective study, researchers found that 63% of respondents agreed that music improves their concentration and 59% agreed that it helps reduce autonomic reactivity in stressful surgeries (George, Ahmed, Mammen & John, 2011). Ultimately, it appears that a majority of healthcare providers believe in some form of benefit from music in the operating room. These beliefs about the functions of music in the operating theater are to some degree reinforced by experimental findings. For example, researchers conducted a crossover study involving 12 plastic surgery residents to determine the effect of music on the quality and duration of a surgical task. In the study, the residents conducted layered closures of standardized incisions on pigs’ feet both while listening to music of their choice and then


in the absence of music. Listening to music led to a 10% faster completion of the closure and increased repair quality according to the judgment of blinded faculty (Lies & Zhang, 2015). Though rather narrow in scope (the study refers specifically to the surgical performance of relatively inexperienced surgeons within the field of plastic surgery), the study affirms the notion, expressed in the aforementioned surveys, of music as beneficial to surgical performance. Listening to music has even been shown to enhance the learning of surgical procedures. Researchers conducted a crossover study in which a total of 31 surgeons performed manual tasks similar to those required in surgery but not requiring surgical knowledge. These tasks were performed under four conditions: silence, dichotic music, mental loading, and relaxing music, with performance measured in terms of speed and accuracy. The tasks, executed using a computerized laparoscopy simulator (which may not translate directly to the OR), were repeated after a ten-minute break. During the break, subjects completed a manual number alignment quiz in order to divert their focus from the previously accomplished task so as to best assess memory consolidation and recall of motor performance. The results suggest that classical music (in this case, slow movements from Mozart’s piano sonatas) leads to significantly

“Listening to music has even been shown to lead to enhanced learning of surgical procedures.”


“...The finding that performance was most significantly improved by hip-hop and Jamaican music likely indicates the importance of familiarity and surgeon preference rather than some element of musical form.”


improved memory consolidation (Conrad et al., 2012). Additionally, some studies suggest that certain types of music are better suited to altering surgical performance. In a crossover study investigating the effect of music on robot-assisted laparoscopic surgical performance, subjects completed both a suture-tying and a mesh-aligning task using the da Vinci robotic surgical system. Subjects completed the tasks in the absence of music and under four musical conditions: jazz, classical, Jamaican, and hip-hop music. The subjects then repeated the tasks in the absence of music to ensure that performance varied by musical intervention rather than serial position. Researchers found that accuracy (as measured by total travel distance of instrument tips) and time of task completion both improved in the presence of music. Music also led to reduced muscle activation and increased median muscle frequency (as given by electromyography), implying decreased muscle fatigue (Siu et al., 2010). Notably, in this study, the performance-enhancing effects of music were greatest in the presence of Jamaican or hip-hop music. This trend can perhaps be explained by the fact that 7 out of 10 participants rated hip-hop as one of their two favorite types of music and that the subjects were all young medical students (Siu et al., 2010). Therefore, the finding that performance was most significantly improved by hip-hop and Jamaican music likely indicates the importance of familiarity and surgeon preference rather than some element of musical form. Additionally, this instance potentially demonstrates the generational nature of music within clinical contexts; given the generational homogeneity of the subjects in this study, hip-hop and Jamaican music emerged as the most beneficial. However, one might expect a similar study with a broader age range to yield different results, as reflected by the fact that classical and instrumental music, not hip-hop or Jamaican music, appear to be the most common genres in the OR.
Ultimately, music exposure can increase surgical performance in terms of speed, accuracy, and even memory consolidation and motor recall.

As we have seen, there exists a definite role for music in the OR for everyone involved. For patients, preoperative, intraoperative, and postoperative music exposure can significantly decrease self-reported anxiety and depression and increase patient satisfaction. More concretely, music exposure can lower patient blood pressure and heart rate while reducing the quantity of anesthetics necessary for induction of anesthesia. Additionally, music exposure improves clinician speed and accuracy when performing surgical tasks, as well as motor learning. While these findings call for greater application of music in the medical setting, several questions are left unanswered. One might wonder how the effect of music varies with culture, generation, musicality, and many other factors. Additionally, one may ask how the altered neurological state arising from anesthetic induction might influence a patient’s neural response to music. Ultimately, while the subtler details remain obscure, the general applicability and relevance of music within the medical setting are clear.

References

Bedaso, A., & Ayalew, M. (2019). Preoperative anxiety among adult patients undergoing elective surgery: a prospective survey at a general hospital in Ethiopia. Patient Safety in Surgery, 13, 18. https://doi.org/10.1186/s13037-019-0198-0

Binns-Turner, P. G., Wilson, L. L., Pryor, E. R., Boyd, G. L., & Prickett, C. A. (2008). Perioperative music and its effects on anxiety, hemodynamics, and pain in women undergoing mastectomy. AANA Journal, 79(4 Suppl), S21–S27.

Conrad, C., Konuk, Y., Werner, P. D., Cao, C. G., Warshaw, A. L., Rattner, D. W., Stangenberg, L., Ott, H. C., Jones, D. B., Miller, D. L., & Gee, D. W. (2012). A quality improvement study on avoidable stressors and countermeasures affecting surgical motor performance and learning. Annals of Surgery, 255(6), 1190–1194. https://doi.org/10.1097/SLA.0b013e318250b332

Cunningham, L. L., & Tucci, D. L. (2017). Hearing loss in adults. The New England Journal of Medicine, 377(25), 2465–2473. https://doi.org/10.1056/NEJMra1616601

Gao, Y., Pan, B., Sun, W., et al. (2012). Anxiety symptoms among Chinese nurses and the associated factors: a cross-sectional study. BMC Psychiatry, 12, 141. https://doi.org/10.1186/1471-244X-12-141

George, S., Ahmed, S., Mammen, K. J., & John, G. M. (2011). Influence of music on operation theatre staff. Journal of Anaesthesiology, Clinical Pharmacology, 27(3), 354–357. https://doi.org/10.4103/0970-9185.83681

Groarke, J. M., & Hogan, M. J. (2019). Listening to self-chosen music regulates induced negative affect for both younger and older adults. PLoS ONE, 14(6), e0218017. https://doi.org/10.1371/journal.pone.0218017

Jasemi, M., Aazami, S., & Zabihi, R. E. (2016). The effects of music therapy on anxiety and depression of cancer patients. Indian Journal of Palliative Care, 22(4), 455–458. https://doi.org/10.4103/0973-1075.191823

Kahloul, M., Mhamdi, S., Nakhli, M. S., Sfeyhi, A. N., Azzaza, M., Chaouch, A., & Naija, W. (2017). Effects of music therapy under general anesthesia in patients undergoing abdominal surgery. The Libyan Journal of Medicine, 12(1), 1260886. https://doi.org/10.1080/19932820.2017.1260886

Kavalnienė, R., Deksnyte, A., Kasiulevičius, V., Šapoka, V., Aranauskas, R., & Aranauskas, L. (2018). Patient satisfaction with primary healthcare services: are there any links with patients’ symptoms of anxiety and depression? BMC Family Practice, 19(1), 90. https://doi.org/10.1186/s12875-018-0780-z

Kazdin, A. E. (2000). Encyclopedia of psychology. Washington, DC: American Psychological Association.

Kil, H. K., Kim, W. O., Chung, W. Y., Kim, G. H., Seo, H., & Hong, J. Y. (2012). Preoperative anxiety and pain sensitivity are independent predictors of propofol and sevoflurane requirements in general anaesthesia. British Journal of Anaesthesia, 108(1), 119–125. https://doi.org/10.1093/bja/aer305

Lies, S. R., & Zhang, A. Y. (2015). Prospective randomized study of the effect of music on the efficiency of surgical closures. Aesthetic Surgery Journal, 35(7), 858–863. https://doi.org/10.1093/asj/sju161

Lonsdale, A. J., & North, A. C. (2011). Why do we listen to music? A uses and gratifications analysis. British Journal of Psychology, 102, 108–134. https://doi.org/10.1348/000712610X506831

Meier, D. E., Back, A. L., & Morrison, R. S. (2001). The inner life of physicians and care of the seriously ill. JAMA, 286(23), 3007–3014. https://doi.org/10.1001/jama.286.23.3007

Palmer, J. B., Lane, D., Mayo, D., Schluchter, M., & Leeming, R. (2015). Effects of music therapy on anesthesia requirements and anxiety in women undergoing ambulatory breast surgery for cancer diagnosis and treatment: a randomized controlled trial. Journal of Clinical Oncology, 33(28), 3162–3168. https://doi.org/10.1200/JCO.2014.59.6049

Rojiani, R., Zhang, X., Noah, A., & Hirsch, J. (2018). Communication of emotion via drumming: dual-brain imaging with functional near-infrared spectroscopy. Social Cognitive and Affective Neuroscience, 13(10), 1047–1057. https://doi.org/10.1093/scan/nsy076

Siu, K. C., Suh, I. H., Mukherjee, M., Oleynikov, D., & Stergiou, N. (2010). The effect of music on robot-assisted laparoscopic surgical performance. Surgical Innovation, 17(4), 306–311. https://doi.org/10.1177/1553350610381087

Tan, D. J. A., Polascik, B. A., Kee, H. M., Lee, A. C. H., Sultana, R., Kwan, M., Raghunathan, K., Belden, C. M., & Sng, B. L. (2020). The effect of perioperative music listening on patient satisfaction, anxiety, and depression: a quasiexperimental study. Anesthesiology Research and Practice, 2020, 3761398. https://doi.org/10.1155/2020/3761398

Thompson, W. (2010). Cross-cultural similarities and differences. In Music and Emotion. https://doi.org/10.1093/acprof:oso/9780199230143.001.0001

Trehub, S. E., Becker, J., & Morley, I. (2015). Cross-cultural perspectives on music and musicality. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 370(1664), 20140096. https://doi.org/10.1098/rstb.2014.0096

Ullmann, Y., Fodor, L., Schwarzberg, I., Carmi, N., Ullmann, A., & Ramon, Y. (2008). The sounds of music in the operating room. Injury, 39(5), 592–597. https://doi.org/10.1016/j.injury.2006.06.021



Meta-analysis Regarding the Use of External-Beam Radiation Therapy as a Treatment for Thyroid Cancer BY MANYA KODALI, HAMPTON HIGH SCHOOL, AND DR. VIVEK VERMA, MD Cover Image: Pictured above is an overview of the hormones released by the thyroid and their relation to multiple bodily functions. The graph depicts the relationship between basal metabolic rate and hormone production rate. Source: Wikimedia Commons


Background - The Thyroid Gland The thyroid is vital to the function and metabolic health of the human body. This organ is part of the endocrine system and lies under the voice box, towards the front of the neck. It has a bilobular structure with an appearance like that of a butterfly. Weighing between 20 and 60 grams on average, it is surrounded by two fibrous capsules. Thyroid tissue itself consists of individual lobules enclosed in layers of connective tissue; these lobules contain vesicles which store droplets of thyroid hormones (Information, 2018). As with other glands in the endocrine system, the thyroid gland is controlled through feedback systems involving the pituitary gland and hypothalamus. Although both positive and negative feedback systems are exhibited, the negative feedback system dominates and is responsible for maintaining constant levels of circulating hormones.

Two hormones are produced by the thyroid gland (in addition to calcitonin, a hormone that regulates calcium, secreted by the parafollicular C-cells): tetraiodothyronine (T4, or thyroxine) and triiodothyronine (T3). T3 and T4 are not produced in equal amounts; the thyroid produces all the T4 in the body, but only around 20% of total T3 in the body is produced in the thyroid. The other 80% of T3 is produced through extrathyroidal deiodination of T4, typically in the kidney or liver. These hormones circulate through the body bound to one of three plasma proteins: thyroxine-binding prealbumin (TBPA), thyroxine-binding globulin (TBG), and albumin. TBG binds to most of the circulating T3 and T4, while albumin has the lowest affinity for binding (Fritsma, 2013). Hormones produced by the thyroid gland have a wide variety of functions which affect metabolism, growth, and maturation. The hormones are calorigenic, meaning they

result in the generation of body heat and consumption of oxygen. They increase lipid metabolism, glucose utilization, heart rate, protein catabolism, myocardial contractility, and cardiac output. They also stimulate the production of cytokines, proteins that play a key role in cell signaling. Thyroid hormones additionally promote gluconeogenesis, cell differentiation, and increased motility of the gastrointestinal system (Fritsma, 2013).

Background - Thyroid Cancer Thyroid nodules, caused by growth of cells in the thyroid gland, are relatively common in the general population. When nodules are discovered, either through an exam or as an incidental finding, patients are tested for thyroid-stimulating hormone (TSH) levels in the blood; measuring TSH levels allows doctors to distinguish between functional and nonfunctional nodules. Nodules with low TSH levels undergo radioiodine imaging to determine whether they are autonomously functioning or hypofunctioning. Autonomously functioning nodules are typically benign and thus don't usually require further treatment; however, further diagnostic tests are used if deemed necessary. Hypofunctioning nodules and those with elevated TSH either undergo ultrasound-guided fine needle aspiration based on features described by the American Thyroid Association, or the patient's doctors will monitor the nodule; hypofunctioning nodules are often malignant and require surgery (Haddad et al., 2020). Diagnostic neck ultrasounds are performed on suspicious nodules to check for unusual morphologic features; a nodule that is large, spiculated, or ragged in appearance is more likely to be cancerous than one that is small and rounded (Lee et al., 2011). Repeat biopsies are performed if the initial test is indeterminate; nodules are also monitored through ultrasound or surgery if results are unclear or suspicious in any way (Mayo). Thyroid cancer is divided into two main categories: 1) well-differentiated, which includes papillary and follicular cancers, and 2) poorly differentiated, which includes anaplastic and medullary cancers. If found to be malignant, thyroid cancer is further subdivided by differentiation and by the cell type of origin. Papillary carcinomas (PTC) account for around 80% of thyroid cancer cases (Nguyen et al., 2015); they are differentiated and slow-growing, and can develop in one or both lobes of the thyroid gland.
These can spread to the lymph nodes, but generally are

treatable and have a good prognosis, with a survival rate close to 100% after five years (American Cancer Society). Follicular thyroid cancer makes up approximately 14% of cases (Nguyen et al., 2015). This cancer is more aggressive than papillary cancer and is more likely to spread to other organs, specifically to bones and the lungs. It is often associated with iodine deficiency. Hürthle-cell carcinomas are a subtype of follicular thyroid cancer and are treated similarly to other follicular carcinomas. The survival rate for distant follicular cancer is 63%, but 98% for all stages combined (American Cancer Society). Medullary thyroid cancer begins in C-cells and accounts for roughly 3% of thyroid cancers (Nguyen et al., 2015). It is typically associated with multiple endocrine neoplasia, a genetic syndrome that predisposes people to developing various types of endocrine cancers at multiple places in the body (Multiple Endocrine Neoplasia); this form of cancer produces an excess of calcitonin. Distant medullary thyroid cancers have a survival rate of 40%, significantly lower than the 90% survival rate of localized medullary cancer (American Cancer Society). Anaplastic thyroid cancer (ATC) is the most aggressive form of thyroid cancer. It is found in less than 2% of patients and typically occurs in people over 60 years of age (Nguyen et al., 2015). It is the most undifferentiated form of thyroid cancer and spreads rapidly to other parts of the neck and body. The prognosis for ATC is grim, with a 5-year survival rate of 7% and virtually no effective therapy (American Cancer Society).

“Thyroid cancer is divided into two main categories: 1) well-differentiated, which includes papillary and follicular cancers, and 2) poorly differentiated, which includes anaplastic and medullary cancers.”

Treatments - Overview Thyroid cancer is typically treated with a combination of treatments depending on the stage and cell type of the cancer, patient preference, the general health of the patient, and possible side effects. Non-radiation treatments include surgery, hormone treatment, targeted therapies, and chemotherapy; surgery is the most common therapy. Radiotherapies include external-beam radiation and radioactive iodine (RAI). Radiotherapy is typically used in patients with residual cancer activity due to incomplete surgery. Radiation therapy can be used for stage III papillary and follicular thyroid cancer, stage IV papillary and follicular thyroid cancer, localized medullary thyroid cancer, and anaplastic thyroid cancer (alone or in conjunction with other treatments).

Figure 1: A histopathological image of a papillary thyroid carcinoma, obtained through total thyroidectomy, under a hematoxylin and eosin stain. Source: Wikimedia Commons


Introduction: External-Beam Radiation Therapy

“Tubiana et al. reported on 97 patients who had pathologic tissue macroscopically remaining after surgery; the study found a 57% 15-year survival rate and a 40% 15-year relapse-free survival rate.”

Reports on the use of external-beam radiation therapy (EBRT) are largely retrospective, with nonuniform criteria for the selection of patients, thereby leading to contradictory conclusions. Many studies have shown no effect or detrimental effects, while others have shown positive effects. EBRT is typically only considered for patients with a significant risk of relapse and/or when surgery and RAI are less effective (Kiess et al., 2016). Because radiation therapy affects both normal and cancerous cells, it often involves a multitude of side effects. Common side effects of EBRT for thyroid cancer include dry mouth, cough, appetite loss, nausea, fatigue, trouble swallowing, and dry skin. Other symptoms include skin erythema, mucositis, hyperpigmentation of the skin, and esophageal and tracheal stenosis (Wexler, 2011).

External-Beam Radiation for Residual Cancer Tubiana et al. reported on 97 patients who had pathologic tissue macroscopically remaining after surgery; the study found a 57% 15-year survival rate and a 40% 15-year relapse-free


survival rate (Tubiana et al., 1985). A more recent study looked at patients with differentiated thyroid carcinoma from the Royal Marsden Hospital. The patients received EBRT [a dose of 60 Grays (Gy) in 30 fractions] over a period of six weeks. Complete regression was seen in 37% of patients and partial regression in another 35%; the 5-year survival rate was 27% (O'Connell et al., 1994). Additionally, a review of 33 patients with residual disease at the Princess Margaret Hospital was performed. Of these, 20 patients were treated solely with EBRT, and the other 13 were given RAI along with EBRT. The 5-year local relapse-free rate was found to be 62%, and the cause-specific survival rate was found to be 65% (Tsang et al., 1998). Brierley and Tsang suggest the administration of RAI followed by EBRT for patients with residual disease following thyroidectomies. For young patients with limited residual disease, EBRT may be unnecessary if the patient shows appropriate iodine uptake (Brierley and Tsang, 1999).

External-Beam Radiation as Adjuvant Therapy The American Head and Neck Society suggests that EBRT should not be used routinely as


Figure 2: The image to the left depicts a thyroidectomy, the most common treatment for patients with thyroid cancers. Source: Wikimedia Commons

an adjuvant therapy after patients have undergone complete resection, but that it should be considered for patients older than 45 years with a low likelihood of responding to RAI treatment and a high likelihood of microscopic residual disease (Kiess et al., 2016). Several studies support this suggestion for the use of EBRT in select patients. Patients with resected stage IV papillary thyroid cancers were shown to have a 10-year local failure-free survival of 88% after EBRT along with RAI treatment, far better than the same rate for RAI therapy alone (72%) (Chow et al., 2006). Another study performed on patients with papillary and follicular thyroid cancer similarly found that adjuvant EBRT improved recurrence-free survival (Farahati et al., 1996). A study reporting on 114 patients post-surgery with no macroscopic disease left behind found significantly improved local relapse-free survival (Ésik et al., 1994). In Essen, Germany, EBRT was shown to significantly increase the 10-year survival rate for patients older than 40 years who had stage III or IV tumors; those given EBRT had a survival rate of 58% while those


without had a rate of 48% (Benker et al., 1990). EBRT as an adjuvant therapy has also been found to vastly improve control rates. Postsurgery, 23 patients received EBRT (with and without RAI therapy), and another 68 were treated with RAI therapy alone. Survival rates at 7 years were not statistically different between the two groups, but the 5-year locoregional control rate was 95.2% for the EBRT group and 67.5% without EBRT (Kim et al., 2003). A Korean study showed that EBRT significantly decreased locoregional recurrence from 51% to 8% in 68 patients who underwent excision of thyroid tumors off the trachea (Keum et al., 2006). Tubiana et al. reported on 66 patients who received adjuvant radiation for regional lymph node involvement; they found that the rate of local recurrence was 14% when EBRT was involved compared with 21% recurrence for patients who did not receive EBRT (Tubiana et al., 1985). All the studies discussed above suggest benefits for patients given adjuvant EBRT. However, these studies are retrospective and

“Patients with resected stage 4 papillary thyroid cancers were shown to have a 10-year local failure-free survival of 88% after EBRT along with RAI treatment, far better than the same rate for RAI therapy alone (72%)."


do not have clear criteria for patient selection or standardization of therapy. Thus, the studies leave room for future improvement and standardization.

External-Beam Radiation for Recurrent Cancer

Patients with relapse in the neck are typically given RAI therapy and TSH suppression; in patients with nodal recurrence, neck dissections are often performed (Brierley and Tsang, 1999). If the recurrence occurs in the thyroid bed, or if there is extracapsular lymph node involvement and the soft tissues of the neck are infiltrated, EBRT is also given (Brierley and Tsang, 1999).

In one study, five patients with locally recurring papillary thyroid cancer all had no second relapses following EBRT (Sheline et al., 1966). Another study on patients with well-differentiated papillary thyroid carcinoma identified patients who relapsed in the thyroid bed and treated fourteen with EBRT; seven of these patients did not relapse regionally a second time (Vassilopoulou-Sellin et al., 1996). EBRT of greater than 50 Gy has been shown to be useful for long-term control of sites with recurrent lesions of differentiated thyroid cancer (Makita et al.). Finally, in patients with extensive extrathyroidal extension, EBRT should be considered to aid patient outcomes (Brierley and Tsang, 1999). Other studies suggest reserving EBRT and instead using salvage surgery or RAI therapy for recurrence, to avoid the morbidity and side effects associated with EBRT (Shaha, 2004).

External-Beam Radiation for Bone Metastases

Metastatic thyroid cancer is typically treated with RAI therapy; however, its effectiveness has been shown to vary greatly with the site of the metastasis (Casara et al., 2018; Brown et al., 1984). In these situations, surgical resection is recommended. For unresectable bone metastases, EBRT is warranted (Brierley and Tsang, 1999). Few studies have examined the efficacy of EBRT as management for bone metastases. However, studies on the general principles of EBRT have shown that approximately 70% of patients receive some pain relief through palliative EBRT. Patients reported improved symptoms within 2-3 days, but in some cases relief was delayed for one month following radiotherapy (Frassica, 2003). Other studies found complete or partial pain relief in 50% of patients who received EBRT (Simpson et al., 1988; Simpson, 1990). More research is required for definitive evidence regarding the use of EBRT to treat metastases due to thyroid cancer.

Conclusion

EBRT, when used as a treatment for residual disease, achieves increased rates of relapse-free survival and both partial and complete regressions. Local relapse-free rates, regional control, and overall survival rates in patients with poor uptake of RAI all benefit from the use of EBRT, either alone or in conjunction with RAI. Some research has suggested that EBRT should only be used as a last resort for recurrent cancers, but other studies have shown that in certain patient groups the therapy helped to avoid second relapses. Finally, EBRT can be used to help relieve pain in patients with skeletal metastases. After reviewing the existing literature, it becomes clear that EBRT often provides added benefit to the management of thyroid cancer, especially when used in addition to thyroidectomy and radioiodine therapy. Thus, doctors continue to utilize EBRT as a treatment for thyroid cancer due to its efficacy.

When the most common treatment for thyroid cancer, thyroidectomy, doesn't work, doctors often recommend radiotherapy. While not the most common method, external radiation has been shown to benefit patient outcomes in a variety of scenarios.


American Cancer Society. (2020, September). Survival rates for thyroid cancer. AmericanCancer.org. https://www.cancer.org/cancer/thyroid-cancer/detection-diagnosis-staging/survival-rates.html

Benker, G., Olbricht, T., Reinwein, D., Reiners, C., Sauerwein, W., Krause, U., . . . Hirche, H. (1990). Survival rates in patients with differentiated thyroid carcinoma: Influence of postoperative external radiotherapy. Cancer, 65(7), 1517–1520.

Brierley, J. D., & Tsang, R. W. (1999). External-beam radiation therapy in the treatment of differentiated thyroid cancer.

Brown, A. P., Greening, W. P., McCready, V. R., Shaw, H. J., & Harmer, C. L. (1984). Radioiodine treatment of metastatic thyroid carcinoma: The Royal Marsden Hospital experience. The British Journal of Radiology, 57(676), 323–327. https://doi.org/10.1259/0007-1285-57-676-323

Casara, D. D., Rubello, D., Saladini, G., Gallo, V., Masarotto, G., & Busnardo, B. (2018). Distant metastases in differentiated thyroid cancer: Long-term results of radioiodine treatment and statistical analysis of prognostic factors in 214 patients. Tumori Journal. https://doi.org/10.1177/030089169107700512

Chow, S.-M., Yau, S., Kwan, C.-K., Poon, P. C. M., & Law, S. C. K. (2006). Local and regional control in patients with papillary thyroid carcinoma: Specific indications of external radiotherapy and radioactive iodine according to T and N categories in AJCC 6th edition. Endocrine-Related Cancer, 13(4), 1159–1172. https://doi.org/10.1677/erc.1.01320

Ésik, O., Németh, G., & Eller, J. (1994). Prophylactic external irradiation in differentiated thyroid cancer: A retrospective study over a 30-year observation period. Oncology, 51(4), 372–379. https://doi.org/10.1159/000227368

Farahati, J., Reiners, C., Stuschke, M., Müller, S. P., Stüben, G., Sauerwein, W., & Sack, H. (1996). Differentiated thyroid cancer: Impact of adjuvant external radiotherapy in patients with perithyroidal tumor infiltration (stage pT4). Cancer, 77(1), 172–180. https://doi.org/10.1002/(SICI)1097-0142(19960101)77:1<172::AID-CNCR28>3.0.CO;2-1

Frassica, D. A. (2003). General principles of external beam radiation therapy for skeletal metastases. Clinical Orthopaedics and Related Research, 415, S158. https://doi.org/10.1097/01.blo.0000093057.96273.fb

Haddad, R. I. (2020). Thyroid carcinoma (Vol. 2). National Comprehensive Cancer Network.

InformedHealth.org. (2018). How does the thyroid gland work? Institute for Quality and Efficiency in Health Care (IQWiG). https://www.ncbi.nlm.nih.gov/books/NBK279388/

Keum, K. C., Suh, Y. G., Koom, W. S., Cho, J. H., Shim, S. J., Lee, C. G., Park, C. S., Chung, W. Y., & Kim, G. E. (2006). The role of postoperative external-beam radiotherapy in the management of patients with papillary thyroid cancer invading the trachea. International Journal of Radiation Oncology*Biology*Physics, 65(2), 474–480. https://doi.org/10.1016/j.ijrobp.2005.12.010

Kiess, A. P., Agrawal, N., Brierley, J. D., Duvvuri, U., Ferris, R. L., Genden, E., Wong, R. J., Tuttle, R. M., Lee, N. Y., & Randolph, G. W. (2016). External-beam radiotherapy for differentiated thyroid cancer locoregional control: A statement of the American Head and Neck Society. Head & Neck, 38(4), 493–498. https://doi.org/10.1002/hed.24357

Kim, T.-H., Yang, D.-S., Jung, K.-Y., Kim, C.-Y., & Choi, M.-S. (2003). Value of external irradiation for locally advanced papillary thyroid cancer. International Journal of Radiation Oncology*Biology*Physics, 55(4), 1006–1012. https://doi.org/10.1016/S0360-3016(02)04203-7

Lee, Y. H., Kim, D. W., In, H. S., Park, J. S., Kim, S. H., Eom, J. W., Kim, B., Lee, E. J., & Rho, M. H. (2011). Differentiation between benign and malignant solid thyroid nodules using an US classification system. Korean Journal of Radiology, 12(5), 559–567. https://doi.org/10.3348/kjr.2011.12.5.559

Multiple endocrine neoplasia. (n.d.). Genetics Home Reference. Retrieved July 28, 2020, from https://ghr.nlm.nih.gov/condition/multiple-endocrine-neoplasia

Nguyen, Q. T., Lee, E. J., Huang, M. G., Park, Y. I., Khullar, A., & Plodkowski, R. A. (2015). Diagnosis and treatment of patients with thyroid cancer. American Health & Drug Benefits, 8(1), 30–40.

O'Connell, M. E. A., A'Hern, R. P., & Harmer, C. L. (1994). Results of external beam radiotherapy in differentiated thyroid carcinoma: A retrospective study from the Royal Marsden Hospital. European Journal of Cancer, 30(6), 733–739. https://doi.org/10.1016/0959-8049(94)90284-4

Shaha, A. R. (2004). Implications of prognostic factors and risk groups in the management of differentiated thyroid cancer. Laryngoscope, 114, 393–402.

Sheline, G. E., Galante, M., & Lindsay, S. (1966). Radiation therapy in the control of persistent thyroid cancer. American Journal of Roentgenology, 97(4), 923–930. https://doi.org/10.2214/ajr.97.4.923

Simpson, W. J. (1990). Radioiodine and radiotherapy in the management of thyroid cancers. Otolaryngologic Clinics of North America, 23(3), 509–521.

Simpson, W. J., Panzarella, T., Carruthers, J. S., Gospodarowicz, M. K., & Sutcliffe, S. B. (1988). Papillary and follicular thyroid cancer: Impact of treatment in 1578 patients. International Journal of Radiation Oncology, Biology, Physics, 14(6), 1063–1075. https://doi.org/10.1016/0360-3016(88)90381-1

Thyroid cancer—Symptoms and causes. (n.d.). Mayo Clinic. Retrieved July 23, 2020, from https://www.mayoclinic.org/diseases-conditions/thyroid-cancer/symptoms-causes/syc-20354161

Tsang, R. W., Brierley, J. D., Simpson, W. J., Panzarella, T., Gospodarowicz, M. K., & Sutcliffe, S. B. (1998). The effects of surgery, radioiodine, and external radiation therapy on the clinical outcome of patients with differentiated thyroid carcinoma. Cancer, 82(2), 375–388.

Tubiana, M., Haddad, E., Schlumberger, M., Hill, C., Rougier, P., & Sarrazin, D. (1985). External radiotherapy in thyroid cancers. Cancer, 55(S9), 2062–2071. https://doi.org/10.1002/1097-0142(19850501)55:9+<2062::AID-CNCR2820551406>3.0.CO;2-O

Vassilopoulou-Sellin, R., Schultz, P. N., & Haynie, T. P. (1996). Clinical outcome of patients with papillary thyroid carcinoma who have recurrence after initial radioactive iodine therapy. Cancer, 78(3), 493–501. https://doi.org/10.1002/(SICI)1097-0142(19960801)78:3<493::AID-CNCR17>3.0.CO;2-U

Wexler, J. A. (2011). Approach to the thyroid cancer patient with bone metastases. The Journal of Clinical Endocrinology & Metabolism, 96(8), 2296–2307. https://doi.org/10.1210/jc.2010-1996



The Role of Epigenetics in Tumorigenesis BY MICHELE ZHENG, SAGE HILL SCHOOL SENIOR

Cover Image: A structural representation of the DNA molecule with methylated cytosines. DNA methylation is integral to the regulation of gene expression and a critical contributor to cancer progression. Source: Wikimedia Commons [Christoph Bock of Max Planck Institute for Informatics, CCPL]


Introduction

Often, DNA is regarded as a fixed, rigid blueprint that directs the lives of every living organism. In humans, it determines our health and, ultimately, our identities. Though we are born with a fixed set of genes, the study of epigenetics has illuminated other factors, including our environment and lifestyle choices, that have just as much influence on the way our genome manifests. The epigenome shapes aspects of behavior and appearance as well as susceptibility to various kinds of diseases and cancers. Moreover, epigenetic patterns are inherited by offspring; consequently, the lives and health of future generations remain dependent upon the environmental contexts and lifestyles that affected earlier generations. The advancing field of epigenetics is crucial to the understanding of cancer etiology and progression. DNA methylation and other epigenetic mechanisms reveal a dynamic yet delicate epigenetic landscape that regulates

the expression of the human genome and that, when disrupted, can initiate the progression of cancer. Thus, a better understanding of the mechanisms and processes involved will allow for the creation and implementation of effective risk reduction strategies that may improve quality of life and healthcare for future generations.

The Rise of Epigenetics

From Mendelian Genetics to Epigenetics

The study of genetics is a complex, ever-changing subject. Its foundations were established in the revolutionary work of Gregor Mendel, whose famous experiments on plant hybridization set the stage for genetics to become "the science of heredity" (Gayon, 2016). With increasing research in molecular biology, the theories of Mendelian genetics were expanded and revised, while the study of epigenetics grew to become a field of its own.

While Mendelian inheritance describes the stable replication of the genome, epigenetics illuminates the ways in which inheritance involves more than just a predetermined set of genes. First introduced in 1942 by embryologist Conrad Waddington, epigenetics today is defined as "the study of changes in gene function that are mitotically/meiotically heritable and that do not entail a change in DNA sequence" (Dupont et al., 2009).

Figure 1: Gregor Mendel, known as the "father of modern genetics," discovered the fundamental principles of inheritance through his plant hybridization experiments. Source: Wikimedia Commons

Key Experiment: The Agouti Mice Model

The ground-breaking agouti mice experiment is an example of the powerful influence of epigenetic gene regulation upon phenotype and transgenerational inheritance. Conducted by Randy Jirtle of Duke University, the experiment explored the link between prenatal diet and susceptibility to certain diseases such as diabetes and cancer (Murphy, 2003). By manipulating the nutritional intake of pregnant agouti yellow mice, the researchers were able to identify a link between genotype and phenotype. These mice contain a gene named agouti that, when expressed, produces a yellow, obese phenotype and increases susceptibility to various cancers and diabetes. When turned "off," the gene remains unexpressed and produces a brown, thin phenotype. Jirtle and his team took two genetically identical strains of agouti mice whose mothers were given different diets during pregnancy. While one pregnant mother was given regular mouse chow, the other was given a nutritious diet supplemented with vitamin B12, folic acid, choline, and betaine, all rich in methyl groups (Murphy, 2003). The results were drastic. The mother given the nutrient-poor diet produced yellow, obese newborns with an increased risk of developing cardiovascular disease, diabetes, and cancer (Murphy, 2003). On the other hand, the mother that received the nutrient-rich diet produced thin, brown, healthy mice with a lower incidence of disease (Murphy, 2003).

What explains these results? In utero, the brown, lean offspring received methyl groups through their mother's nutritious diet that acted upon the agouti gene to effectively turn it "off" (Murphy, 2003). Thus, this study revealed that environmental cues, such as prenatal diet, are able to affect disease risk and alter phenotype not by changing the fundamental DNA code, but by modulating the epigenome through DNA methylation.


The Role of Twin Studies in Epigenetics Research

Twin studies are also a unique tool in illuminating the effects of environment and lifestyle upon the epigenome. Though monozygotic twins share almost all their genetic information, phenotypic differences, such as in birth weight, are not uncommon. In one survey, monozygotic twins showed lifetime disparities in type 1 diabetes (61%), type 2 diabetes (41%), autism (58-60%), schizophrenia (58%), and various cancers (0-16%) (Castillo-Fernandez et al., 2014). Though genetic inheritance remains a factor in disease risk, it is clear that differing environmental exposures acquired throughout life, not genotype alone, determine phenotype (Castillo-Fernandez et al., 2014). In one of the largest twin studies on epigenetics, the epigenetic patterns of 80 sets of monozygotic twins from ages 3 to 74 years were analyzed (Choi, 2005). Researchers also recorded dietary habits, physical exercise, drug consumption, alcohol intake, height, and weight (Choi, 2005). Results indicated that older twins are more epigenetically dissimilar to each other than those who were younger. For the two sets of twins in which the individual siblings were most epigenetically opposed, there were "four times as many differentially expressed genes in the older pair than in the younger pair" (Choi, 2005). This




Figure 2: Mouse with agouti gene activated (left), mouse with inactive agouti gene (right) Source: Wikimedia Commons [Randy Jirtle, CCPL]

further suggests that accumulated differences in environmental exposures can influence the magnitude of expression of certain genes (Choi, 2005).

Epigenetic Mechanisms

By altering chromatin structure and therefore DNA accessibility, epigenetic modification determines how the genome manifests through cellular identity and development as well as the onset of various disease states. The expression of genes within the mammalian genome is regulated by three primary epigenetic components: DNA methylation, post-translational histone modifications, and non-coding RNAs.

DNA Methylation

DNA methylation is by far the most extensively researched epigenetic mechanism due to its expansive influence upon the expression of the mammalian genome as well as its critical role in maintaining homeostasis and genetic continuity. DNA methylation is involved in many biological processes including cellular differentiation, genomic imprinting, X-chromosome inactivation, and aging. Its primary mechanism of action, however, involves silencing the expression of certain genes by mobilizing regulatory proteins (Roberti et al., 2019). Methylation occurs via the transfer of a methyl group to a cytosine residue in CpG dinucleotides within the genome. These segments of DNA,


in which a cytosine is followed by a guanine, are concentrated in CpG-rich regions of DNA (CpG islands) that remain unmethylated in differentiated tissues (Sharma et al., 2010). In some instances, however, these CpG promoter sites may become methylated, both in embryonic and adult somatic cells, resulting in long-term gene silencing (e.g., X-chromosome inactivation) (Roberti et al., 2019; Sharma et al., 2010). Other CpG sites, not located in islands, are heavily methylated, ensuring chromosomal stability (Sharma et al., 2010). DNA methylation is mediated by three main enzymes, known as DNA methyltransferases (DNMTs), that serve to both establish and maintain epigenetic patterns. Following fertilization, DNMT3A and DNMT3B enable the differentiation of embryonic stem cells, silencing or activating certain genes by establishing methylation patterns during early development. As these differentiated cells replicate, DNMT1 maintains these epigenetic patterns by methylating CpG sites on daughter DNA strands during DNA replication (Roberti et al., 2019). In addition, ten-eleven translocation enzymes (TETs) carry out DNA demethylation, in which methyl groups are removed through the oxidation of methylcytosine to hydroxymethylcytosine, followed by reversion to unmethylated cytosine (Roberti et al., 2019). Together, methylation, regulated by DNMTs, and demethylation, regulated by TETs, strike a delicate balance that


Figure 3: The epigenetic landscape is shaped by a variety of environmental and lifestyle factors that affect gene expression. Source: Wikimedia Commons [National Institutes of Health]

enables proper cell functioning.

Histone Post-Translational Modifications

Histone post-translational modifications (HPMs) are a group of epigenetic mechanisms that influence gene expression through the alteration of chromatin structure. This class of epigenetic modification includes histone acetylation, methylation, and phosphorylation. Chromatin consists of repeating units called nucleosomes, each comprised of about 146 base pairs of DNA wrapped around an octamer, or eight-piece complex, of four core histone proteins. Each octamer consists of two subunits each of the H2A, H2B, H3, and H4 proteins, with NH2-terminal tails extending outward from the nucleosome structure (Kanwal et al., 2012). Through the covalent modification of these histone tails, the chromatin can either be condensed and transcriptionally inactivated (heterochromatin) or decondensed in a way that facilitates transcription (euchromatin) (Kanwal et al., 2012). By altering the compaction state of the chromatin, histone modifications can either block or enable the recruitment of transcriptional proteins such as RNA polymerase to nearby genes (Chuang et al., 2007). The principal enzymes that enable this process include histone acetyltransferases (HATs), histone deacetylases (HDACs), histone methyltransferases (HMTs), and histone demethylases, among others, that act


in combination to activate or repress gene expression at specific gene bodies (Kanwal et al., 2012). The actions of these opposing proteins allow histone modification to be a highly reversible process.

Non-coding RNAs

Non-coding RNAs (ncRNAs) are divided into two main categories: short-chain non-coding RNAs and long non-coding RNAs (lncRNAs) (Roberti et al., 2019). Operating on both the transcriptional and post-transcriptional levels, ncRNAs regulate gene expression and, in turn, the way phenotype manifests. Short-chain ncRNAs include miRNAs, siRNAs, and piRNAs. The most researched are endogenous miRNAs, which can control gene expression by binding to mRNAs. In doing so, they block the mRNA from being translated into protein. miRNAs, extending ~22 nucleotides long, play a major role in regulating the cell cycle, mainly cell proliferation, differentiation, and apoptosis (Sharma et al., 2010). Thus, they are often targets of interest in the study of tumorigenesis. miRNAs, as well as siRNAs, also play a part in establishing DNA methylation and histone modification patterns. By regulating the expression of DNMT1, DNMT3A, and DNMT3B, these ncRNAs are predicted to affect the activity of these enzymes and facilitate the expression of epigenetic mechanisms (Chuang et al., 2007). However, the activity of miRNAs can, in



Figure 4: A computer representation of the molecular structure of DNA methyltransferase DNMT3B. Source: Wikimedia Commons [Jawahar Swaminathan and MSD staff of the European Bioinformatics Institute]

turn, be regulated by histone modifications and DNA methylation as well (Chuang et al., 2007). Long non-coding RNAs (lncRNAs) are also an integral part of global gene regulation. These include small nucleolar RNAs and enhancer RNAs, among other types (Roberti et al., 2019). In both embryonic and adult stem cells, they help to maintain pluripotency and facilitate differentiation (Gayon, 2016). lncRNAs also have well-established roles in transcriptional interference and other biological processes, including the repair of damaged DNA and DNA replication (Roberti et al., 2019). In post-transcriptional processes, lncRNAs regulate splicing and protein translation as well as maintain both protein and mRNA stability (Roberti et al., 2019).

Disruptions of the Epigenetic Landscape in Tumorigenesis


In tumorigenesis, the delicate epigenetic landscape is significantly dysregulated and distorted, enabling cancer progression and the development of other diseases. Ultimately, cancer is the result of combined epigenetic events that influence each other in a way that alters the normal cell cycle.

Aberrations in DNA Methylation

In general, cancer cells exhibit both global hypomethylation and promoter hypermethylation, ultimately leading to the destabilization of the genome and inducing abnormal cell functioning. Hypermethylation of promoter regions is associated with the permanent silencing of tumor suppressor genes that help to maintain correct cell division and regulate cell death. These genes, normally unmethylated in healthy cells, are inappropriately methylated in tumor tissues, which enables irregular cellular growth (Cheng et al., 2019). For example, the p16 gene, essential in cell cycle inhibition, undergoes heavy methylation in a multitude of tumor types including lung and breast carcinomas (Esteller et al., 2001). Hypermethylation is also prevalent in gastrointestinal tumors, in which mutated p14 and APC genes lead to uninhibited cell growth (Esteller et al., 2001). Ultimately, these alterations can be passed down from generation to generation, such as those of the BRCA1 gene, which increase the risk of developing breast and ovarian carcinomas in families (Esteller et al., 2001). Global hypomethylation at repetitive elements, transposons, and various gene bodies is also a main contributor to cancer initiation, leading to the deregulation of the genome (Sharma

et al., 2010). This process is not only prevalent in cancer but also in many other disease states, such as systemic lupus erythematosus and ICF syndrome, in which DNMT mutations may lead to precancerous conditions (Kelly et al., 2010).

Aberrations in Histone Modification Patterns and Non-Coding RNAs

In addition to aberrations in DNA methylation, abnormalities in histone modifications as well as decreased miRNA expression are prevalent in cancerous tissues. The hypoacetylation and hypermethylation of histones are hallmarks of cancer progression that contribute to the permanent repression of tumor suppressor genes. For example, global loss of H4K16 acetylation and the overexpression of HDAC proteins have been identified in numerous cancers and contribute to the silencing of tumor-suppressor genes (Sharma et al., 2010). The dysregulation of histone methylation is also a cause for concern, as loss of H4K20 tri-methylation can lead to the silencing of suppressor genes in addition to the overexpression of regulatory proteins and DNA hypermethylation (Sharma et al., 2010). Various HMTs, such as those that regulate histones H3K27 and H3K9, are overexpressed in breast, prostate, and liver cancer in a way that aberrantly alters chromatin structure and levels of transcription (Sharma et al., 2010). Disruptions in miRNA expression, which plays a central role in regulating cell growth, transcription, and cell death, can have detrimental consequences that serve to promote tumorigenesis and further cancer progression. Many miRNAs that target proapoptotic genes like Bim are often overexpressed, while those targeting antiapoptotic genes including BCL2 are significantly downregulated, ultimately accelerating the cell cycle and inhibiting cell death (Sharma et al., 2010). For example, the overexpression of miR-146, which represses the BRCA1 gene, has been observed in multiple cancers as well as several autoimmune disorders (Kasinski et al., 2011). Furthermore, alterations of non-coding RNA processes have a significant impact on the functioning of the DNMTs that regulate DNA methylation patterns. Decreased activity of miR-143 and miR-29 both lead to increased DNMT expression, specifically of those that regulate de novo methylation, including DNMT3A and DNMT3B (Kelly et al., 2010).

Molecular Markers for Risk Reduction

Armed with this understanding of epigenetic processes, the potential for new and improved cancer treatment is tremendous. By targeting specific epigenetic mechanisms and identifying epigenetic markers, it is possible to develop effective risk reduction strategies that can transform the way we perceive and treat cancer. Over the course of life, humans accumulate a multitude of different epigenetic markers as a result of environmental exposures, lifestyle, and ancestral inheritance. In analyzing salivary and peripheral blood DNA, scientists are able to draw upon the relationship between DNA methylation patterns and risk association (Park, 2020). Methylation markers in DNA can act as an "epigenetic memory" that details past exposures including drugs, air pollution, radiation, and pesticides (Park, 2020). Thus, DNA methylation is a unique tool that reveals the impact of environmental and lifestyle factors upon physiological traits such as body weight, physical activity, depression, and even alcohol consumption. For example, children whose mothers smoked while pregnant are found to be at increased risk for asthma (Li et al., 2005). Parental exposure to pesticides has also been found to be associated with an increased risk for hematologic cancers, such as leukemia, in children (Park, 2020). Thus, a better understanding of one's DNA methylation profile and cancer risk is essential to the development of personalized prevention strategies.

Risk reduction strategies include changing or quitting lifestyle habits, proactively seeking medical assistance, and acquiring medication that can potentially reverse risk markers altogether (Park, 2020). In reversing these epigenetic markers, one may significantly decrease risk for a disease (Park, 2020). Breast cancer risk, linked to obesity and a lack of physical activity, can be reduced by ~20% for women across the risk spectrum, including women at higher risk due to family history, according to a joint study by Columbia University and the University of Melbourne (Kehm et al., 2020). Further studies comparing DNA methylation patterns of former smokers revealed that those whose methylation patterns resembled those of non-smokers were at lower risk of developing lung cancer than those whose patterns matched those of current smokers (Zhang et al., 2016). These findings suggest the possibility of reversing cancer-inducing epigenetic markers imposed by one's lifestyle (Park, 2020). Thus, in understanding the ways in which people can assert control over personal cancer risk, we can push the needle forward in making conscious lifestyle choices to better our health, as well as develop new and improved epigenetic therapies to counteract genetic and environmentally imposed risk factors.

Figure 5: A representation of the different stages of the cell cycle. In tumorigenesis, checkpoints that serve to halt the cell cycle at various stages are bypassed, inducing unregulated cell growth. Source: Wikimedia Commons [Richard Wheeler, COMGFDL]

“Methylation markers in DNA can act as an 'epigenetic memory' that details past exposures including drugs, air pollution, radiation, and pesticides.”

Conclusion Epigenetics is still a relatively new field, but with tremendous potential in the realm of cancer treatment. Epigenetic mechanisms, such as DNA methylation, histone modification, and the action of non-coding RNAs, are greatly dynamic and essential to any comprehensive view of the interactions between the genome and environment in the context of cancer progression and etiology. In understanding how this epigenetic landscape becomes


disrupted in tumorigenesis, we can both devise better treatment strategies and improve quality of health for living at-risk individuals and future generations to come.

Schneider AP 2nd, Zainer CM, Kubat CK, Mullen NK, Windisch AK. The breast cancer epidemic: 10 facts. Linacre Q. 2014;81(3):244-277. doi:10.1179/2050854914Y.0000000027


Sharma S, Kelly TK, Jones PA. Epigenetics in cancer. Carcinogenesis. 2010;31(1):27-36. doi:10.1093/carcin/bgp220

Castillo-Fernandez, J.E., Spector, T.D. & Bell, J.T. Epigenetics of discordant monozygotic twins: implications for disease. Genome Med 6, 60 (2014). https://doi.org/10.1186/s13073-0140060-z

Zhang, Y., Elgizouli, M., Schöttker, B. et al. Smoking-associated DNA methylation markers predict lung cancer incidence. Clin Epigenet 8, 127 (2016). https://doi.org/10.1186/s13148-0160292-4

Cheng, Y., He, C., Wang, M. et al. Targeting epigenetic regulators for cancer therapy: mechanisms and advances in clinical trials. Sig Transduct Target Ther 4, 62 (2019). https://doi.org/10.1038/ s41392-019-0095-0 Choi, C.Q. How epigenetics affects twins. Genome Biol 5, spotlight-20050708-02 (2005). https://doi.org/10.1186/gbspotlight-20050708-02 Chuang, J., Jones, P. Epigenetics and MicroRNAs. Pediatr Res 61, 24–29 (2007). https://doi.org/10.1203/pdr.0b013e3180457684 Dupont, C., Armant, D. R., & Brenner, C. A. (2009). Epigenetics: definition, mechanisms and clinical perspective. Seminars in reproductive medicine, 27(5), 351–357. https://doi.org/10.1055/ s-0029-1237423t Esteller M, Corn PG, Baylin SB, Herman JG. A gene hypermethylation profile of human cancer. Cancer Res. 2001;61(8):3225-3229. Gayon J. From Mendel to epigenetics: History of genetics. C R Biol. 2016;339(7-8):225-230. doi:10.1016/j.crvi.2016.05.009 Kanwal R, Gupta S. Epigenetic modifications in cancer. Clin Genet. 2012;81(4):303-311. doi:10.1111/j.1399-0004.2011.01809.x Kasinski AL, Slack FJ. Epigenetics and genetics. MicroRNAs en route to the clinic: progress in validating and targeting microRNAs for cancer therapy. Nat Rev Cancer. 2011;11(12):849864. Published 2011 Nov 24. doi:10.1038/nrc3166 Kelly, T. K., De Carvalho, D. D., & Jones, P. A. (2010). Epigenetic modifications as therapeutic targets. Nature biotechnology, 28(10), 1069–1078. https://doi.org/10.1038/nbt.1678 Kehm RD, Genkinger JM, MacInnis RJ, et al. Recreational Physical Activity Is Associated with Reduced Breast Cancer Risk in Adult Women at High Risk for Breast Cancer: A Cohort Study of Women Selected for Familial and Genetic Risk. Cancer Res. 2020;80(1):116-125. doi:10.1158/0008-5472.CAN-19-1847 Li YF, Langholz B, Salam MT, Gilliland FD. Maternal and grandmaternal smoking patterns are associated with early childhood asthma. Chest. 2005;127(4):1232-1241. doi:10.1378/ chest.127.4.1232 Murphy, G. Mother's diet changes pups' colour. 
Nature (2003). https://doi.org/10.1038/news030728-12 Park HL. Epigenetic Biomarkers for Environmental Exposures and Personalized Breast Cancer Prevention. International Journal of Environmental Research and Public Health. 2020; 17(4):1181. Roberti, A., Valdes, A.F., Torrecillas, R. et al. Epigenetics in cancer therapy and nanomedicine. Clin Epigenet 11, 81 (2019). https:// doi.org/10.1186/s13148-019-0675-4





Selective Autophagy and Its Potential to Treat Neurodegenerative Diseases BY SAM HEDLEY '23 Cover Image: Image of mouse cortical neurons after 15 days in culture. Mouse models are often used to study protein aggregation in neurons, a leading cause of neurodegenerative diseases. Source: Wikimedia Commons


What is Autophagy? Autophagy, meaning “self-eating�, is a degradative process contributing to the maintenance of cellular homeostasis. By using the autophagic pathway to deliver cargo to the lysosome, the cell is able to eliminate cytoplasmic material like proteins or organelles that would otherwise inflict harm. Autophagy is highly regulated within the cell and is evolutionarily conserved, with significant overlap between the Atg (autophagy) proteins in yeast and in mammals (Glick et al., 2010). Given a renewed attention to autophagy within the scientific community in recent years, we now know much more about the inner workings of this elusive process. Autophagy was once thought to only induce bulk-degradation, but three subdivisions of autophagy have since been discovered: macroautophagy, microautophagy, and chaperone-mediated autophagy (CMA) (Parzych & Klionsky, 2014).

The general autophagy pathway is conserved throughout the three subtypes, but each division confers its own targeting processes and mechanisms of specificity (Figure 1). Macroautophagy, or selective autophagy, is the most studied of these three autophagy processes and involves the synthesis of a double-membrane vesicle called the autophagosome. This vesicle will ultimately carry the cargo to the lysosome to be degraded. During microautophagy and chaperonemediated autophagy (CMA), cargo is absorbed into the lysosomes directly. In microautophagy, invaginations in the membrane of the lysosome entrap cargo and transport it into the organelle (Parzych & Klionsky, 2014). This process confers the lowest level of target specificity because it does not involve the targeting of individual proteins. CMA involves the recognition of an amino acid sequence in the cargo protein by a highly specific chaperone protein complex, which then binds to and carries the cargo to DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE

Figure 1: Each subtype of autophagy has a different mechanism of cargo targeting and delivery to the lysosome. Macroautophagy utilizes vesicle fusion to the lysosome. Chaperone-mediated autophagy transports cargo via the lysosome-associated membrane protein 2 (LAMP-2A) and in microautophagy, the cargo is absorbed directly into the lysosome. Source: Original figure created using BioRender; see (Parzych & Klionsky, 2014) for figure inspiration.

the lysosome. This cargo is then transported into the lysosome through the membrane (Glick et al., 2010). Selective autophagy is activated in response to stress and has been shown to endanger cells when unregulated (Parzych & Klionsky, 2014). Dysregulation of selective autophagy is linked to a range of health issues, prominently neurodegenerative diseases (Pyo et al., 2012). Discovering more about the selective autophagy pathway and the structures of the associated Atg proteins provides the potential to alter autophagy functions in patients through drug therapies. By restoring dysregulated autophagic processes, normal cellular processes are promoted, lessening the impact of neurodegeneration.

The Process of Selective Autophagy Selective autophagy, hereafter referred to as autophagy, involves the de novo1 formation of an autophagosome around targeted cargo. The autophagosome travels to the lysosome and fuses with its membrane to form an autolysosome, where the cargo is subsequently degraded (Figure 2). The type of cargo being transported for degradation depends on the autophagy subtype. For instance, mitophagy refers to mitochondrial degradation while aggrephagy involves the targeting of protein aggregates.The same autophagic pathway, however, occurs during each subtype, independent of the nature of the cargo. Recent


research has identified over 30 Atg proteins involved in selective autophagy. These Atg proteins have been grouped into complexes based on their role in the autophagic pathway, which will be outlined in the following sections. Autophagy Initiation via the ULK1 Complex

“Autophagy can be initiated in response to deprivation of nutrients such as insulin and glucose in mammalian cells.�

Autophagy can be initiated in response to deprivation of nutrients such as insulin and glucose in mammalian cells (Moruno et al., 2012). In such starvation conditions, the GAAC (general amino acid control) pathway upregulates amino acid synthesis while the autophagy pathway degrades unnecessary proteins, recycling amino acids to be used in new processes (Chen et al., 2014). The regulator kinase of autophagy is the mTOR complex, which is deactivated by starvation or the presence of the drug rapamycin, an mTOR inhibitor. Under normal conditions, mTORC1 (a protein of the mTOR complex) phosphorylates and inhibits the autophagy proteins ULK1 and Atg13. ULK1 and Atg13 are part of the mammalian autophagy initiation complex, so phosphorylation of these proteins by mTORC1 inactivates the autophagy pathway. This ULK1 initiation complex is comprised of ULK1 and Atg13, as well as the proteins FIP200 and Atg101 (Zachari & Ganley, 2017). When mTOR is inactivated, ULK1 and Atg13 are dephosphorylated and the autophagy pathway 173

Figure 2: Autophagy is initiated through the inhibition of mTOR (1). The first groups of autophagy proteins are recruited to the PAS (2) where lipids are processed and used to form the isolation membrane (3). Expansion of the vesicle continues to form the phagophore (4), where cargo is then targeted and enclosed by the completed autophagosome (5). The autophagosome fuses with the lysosome (6) to degrade the cargo and release it back into the cell (7). Source: Original figure created using BioRender; see (Nixon, 2013) for figure inspiration.

“Following the initial synthesis of the phagophore through the PI3K complex, the Atg12-Atg5-Atg16L1 complex expands the phagophore to create the doublemembraned vesicle known as the autophagosome.�

is activated. FIP200 has recently been identified as the largest protein in the initiation complex and is the source of interaction for the other complex components. Atg13:Atg101 is a dimer that forms and interacts with the N-terminal of FIP200. Once Atg13:Atg101 localizes to FIP200, ULK1 is recruited to and interacts with FIP200 as well (Shi et al., 2020). This complex localizes to the endoplasmic reticulum, where the autophagosome is formed. ULK1 is the only protein in the complex with kinase activity and phosphorylates Beclin1, Vps34, and Atg14 to initiate the nucleation of the autophagosome membrane via the class III PI3K complex (Zachari & Ganley, 2017).

of the autophagosome to the lysosome, a later stage in the autophagic cycle (Chun & Kim, 2018). The PI3K-Atg14 complex helps localize the ULK1 complex to the phagophore initiation sites close to the endoplasmic reticulum membrane (Figure 4). Additionally, Beclin1 and Vps34 interact to phosphorylate the lipid phosphatidylinositol (PI), generating phosphatidylinositol I 3-phosphate (PI3P). The lipid PI3P then elongates the phagophore and recruits other Atg proteins to the complex (Pyo et al, 2012). This process is regulated by BCL2, which binds to Beclin1 and inhibits interaction with Vps34, preventing the formation of the autophagosome (Parzych & Klionsky, 2014).

Autophagy Nucleation via the PI3K Complex

Autophagosome Expansion via Atg12-Atg5Atg16 and LC3B-II

The class III PI3K complex enlists three primary proteins: Beclin1, Vps34, and p150, which are activated by phosphorylation. This PI3K complex plays two different roles in the autophagy pathway; there is a 4th protein that interacts with the Beclin1/Vps34/p150 complex which changes based on the stage of autophagy. When PI3K interacts with Atg14, the complex facilitates membrane nucleation2, an early phase of autophagy. If the complex interacts with the protein UVRAG, however, it facilitates the fusion


Following the initial synthesis of the phagophore through the PI3K complex, the Atg12-Atg5-Atg16L1 complex expands the phagophore to create the double-membraned vesicle known as the autophagosome. Through an ATP-dependent process involving other Atg proteins, Atg12 and Atg5 are attached covalently, initiating the noncovalent interaction between Atg5 and Atg16L1 to form the Atg12-Atg5-Atg16L1 complex (Glick et al.,


LC3B-II (Bjørkøy et al. 2005). By the end of autophagosome expansion, the membrane has enclosed ubiquitin-targeted proteins, and the vesicle proceeds to the lysosome. Autophagosome Completion and Fusion

2010). These proteins associate with the phagophore and induce both membrane expansion and curvature through the recruitment of processed LC3B-II, then dissociate after autophagosome completion (Figure 4). The production and processing of the protein LC3B increase during autophagy. Through the interaction of various Atg proteins, LC3B is cleaved to form LC3B-I and is then conjugated with the phospholipid phosphatidylethanolamine (PE) to form LC3B-II (Figure 4). Drawn to the phagophore by Atg12Atg5-Atg16L1, LC3B-II recruits the cargo to be enclosed by the autophagosome (Glick et al., 2010). The main molecule interacting with LC3B-II is the receptor protein p62. Proteins in the cell that are targeted for degradation via the lysosome are tagged by the protein ubiquitin. p62 recognizes and binds to these ubiquitinated proteins, delivering them to the autophagosome via interaction with

The last step in the selective autophagy pathway is the least studied but involves the trafficking of the autophagosome to the lysosome via microtubule transport and the subsequent formation of the autolysosome through fusion (Figure 1). The endosomal complex required for transport, ESCRT III, is said to play a role in the closure of the autophagosome as well as the fusion of the autophagosome and the lysosome (Pyo et al., 2012). Deletion of the complex causes autophagosome accumulation, indicating that ESCRT III is necessary to form the autolysosome. Additional components include the aforementioned PI3K-UVRAG complex, which activates the G-protein Rab7, therefore promoting microtubule transport (Parzych & Klionsky, 2014). Furthermore, SNARE proteins facilitate the fusion process by tethering the autophagosome membrane to the lysosome membrane and drawing the two components closer together (Nixon, 2013).

Figure 3: The orientation of the ULK1 initiation complex is centered around the N-terminal domain of the protein FIP200 (FIP200NTD). FIP200NTD has a c-shaped conformation and mediates direct interactions with both the Atg13:Atg101 dimer and the kinase ULK1. Source: Original figure created using BioRender; see (Shi et al., 2020) for figure inspiration.

“The last step in the selective autophagy pathway is the least studied but involves the trafficking of the autophagosome to the lysosome via microtubule transport and the subsequent formation of the autolysosome through fusion.”

Figure 4: Following autophagy initiation (1), the ULK1 and PI3K complexes induce membrane nucleation through the processing of the lipid PI3P (2). The Atg12-Atg5Atg16L1 complex is localized to the phagophore, where it recruits processed LC3B-II. LC3B-II interacts with the cargo receptor p62, which carries the ubiquitinated proteins to the phagophore (3). Source: Original figure created using BioRender; see (Quan & Lee, 2013) for figure inspiration.



Figure 5: Neurodegenerative diseases impair the autophagy at various points in the pathway, exacerbating the aggregation of toxic proteins in cells. Source: Original figure created using BioRender; see (Nah et al., 2015) for figure inspiration.

Causes of Neurodegenerative Diseases A significant cause of neurodegenerative diseases is the accumulation of mutant proteins in affected cells. Given that selective autophagy removes such protein aggregates in normally functioning cells, mutations in the autophagy pathway have been studied as a potential source of these diseases. The dysfunction of autophagy in these cases can be attributed to insufficient autophagy initiation, reduced degradation function, or increased levels of autophagic stress due to protein aggregates (Nah et al., 2015). The next section explores the function of autophagy in three of the most prominent neurodegenerative diseases: Alzheimer’s, Parkinson’s, and Huntington’s (Figure 5). Alzheimer’s Disease (AD)

“A significant cause of neurodegenerative diseseases is the accumulation of mutant proteins in affected cells... mutations in the autophagy pathway have been studied as a potential source of these diseases.”


Alzheimer’s Disease is the leading cause of dementia and develops from cell death in the hippocampus and cerebral cortex, both of which play roles in memory (Irvine et al., 2008). Two toxic protein structures can be attributed to the onset of Alzheimer’s Disease: plaques of beta-amyloid (Aβ) and tangles of tau. When an amyloid precursor protein (APP) is targeted to autophagosomes for degradation, it undergoes a cleavage and produces Aβ peptides. In diseased cells, the functions of autophagosome transport and fusion are compromised, leading to an accumulation of Aβ-containing vesicles (Figure 5). These Aβ peptides are released back into the extracellular space and aggregate to form insoluble toxic Aβ plaques (Nah et al.,

2015). This Aβ accumulation in turn initiates abnormal downstream tau activity. Under normal conditions, the protein tau interacts with tubulin and stabilizes microtubules, promoting vesicle transport. The Aβ peptides, however, induce the hyper-phosphorylation of tau and lead to the formation of aggregate tau structures called neurofibrillary tangles. These tangles are not specific to Alzheimer’s, but exacerbate the neuronal toxicity caused by the Aβ peptides and are good indicators of disease severity (Irvine et al., 2008). Further autophagy impairments include the downregulation of PI3K complex protein Beclin1, which plays a role in autophagy initiation. This phenotype is observed in early AD patients and leads to increased Aβ peptide accumulation (Nah et al., 2015). Additionally, mutations in the protein presenilin-1 (PS1) impair lysosomal degradation (Figure 5) leading to early-onset Alzheimer’s (Nixon, 2013). The compromised autophagy pathway degrades the tau and Aβ protein aggregates at a much lower rate than normal, ultimately leading to further disease progression. Parkinson’s Disease (PD) Parkinson’s Disease is characterized by a loss of motor skills, coordination, and cognitive decline – functions controlled by a region of the brain called the substantia nigra. The neurons in this region are involved in a pathway that controls voluntary movement and use dopamine as a neurotransmitter (Irvine et al.,


2008). Motor function is impaired by death of these dopamine neurons, which results from an accumulation of “Lewy Bodies” in these cells. These “Lewy Bodies” are aggregates of mutated ⍺-synuclein protein, which is involved in the transport of neurotransmitters like dopamine. The mutation of several genes in PD, primarily SNCA and LRRK2, inhibit autophagy and lead to the toxic accumulation of ⍺-synuclein (Figure 5). SNCA encodes ⍺-synuclein, mutations in which lead to the generation of “Lewy Bodies” (Albanese et al., 2019). Cells which overexpress ⍺-synuclein also demonstrate a downregulation of the protein Rab1A, which is necessary for proper autophagosome synthesis (Rahman & Rhim, 2017). There are 6 mutations in LRRK2 linked to PD; mutated LRRK2 has been identified as the most common genetic cause of PD. However, the mechanism by which LRRK2 impairs autophagy remains unclear due to a lack of models effectively isolating LRRK2 function in the pathway (Albanese et al., 2019). Additionally, Parkinson’s cells have compromised mitophagy functions, which inhibits their ability to degrade mitochondrial waste. Mitophagy is impeded by mutations in PINK1 and parkin, which comprise the pathway by which p62 targets cargo to the autophagosome. Infected cells have also been shown to have both compromised mitophagy and CMA via LRRK2 mutations (Nixon, 2013). Huntington’s Disease (HD) Huntington’s Disease is accompanied by abnormal motor movements and personality changes. These symptoms are most directly influenced by the cerebellum, spinal cord, and striatum, the latter being the region of the brain with control over rewards and decision making (Arraste & Finkbeiner, 2012). Degeneration of such functions is caused by the accumulation of toxic mutant huntingtin (HTT) protein in neural cells, leading to cell death and cognitive decline (Rahman & Rhim, 2017). 
The HTT mutation is a trinucleotide repeat created by aberrant DNA replication and exhibits autosomal dominant inheritance pattern. The resulting protein is more prone to misfolding and aggregation. The process by which mutant HTT accumulates in neurons is still largely unknown, but disruptions in the autophagy pathway have been identified. In the presence of mutant HTT, autophagosomes form correctly and the HTT is successfully ubiquitinated, but the HTT cargo is not recruited to the autophagosome (Figure 5). This is possibly due to an inability of the cargo receptor p62 to recognize the mutant


protein (Rahaman & Rhim, 2017). Additionally, these mutant HTT aggregates can interact with Beclin1 and therefore impede Beclin1 function in autophagosome nucleation (Nah et al., 2015). However, in an effort towards self-preservation, HTT aggregates have been shown to form structures called “inclusion bodies” that reduce the load of protein aggregates targeted by the autophagic pathway (Arraste & Finkbeiner, 2012). Little is known about this process, but it is thought to function as a way to increase protein degradation, providing support for autophagyrelated neurodegenerative therapies.

Drug Induced Manipulation of Autophagy The most prominent mechanism being studied to treat neurodegenerative diseases is the upregulation of the autophagy pathway. By inducing greater levels of autophagy in affected cells, toxic protein aggregates can be degraded at a higher rate, alleviating the effects of the disease. Selective autophagy can be upregulated through two different pathways: mTORC1 dependent, and mTORC1 independent.

“By inducing greater levels of autophagy in affected cells, toxic protein aggregates can be degraded at a higher rate, alleviating the effects of the disease.”

Rapamycin directly inhibits mTORC1, inducing autophagy. Treatment with rapamycin has been shown to alleviate neurodegeneration in mice via autophagy upregulation (Nixon, 2013). In Drosophila (flies) and mouse models, rapamycin treatment enhanced mutant HTT clearance as well as mutant ⍺-synuclein clearance (Metcalf et al., 2012). In transgenic mice with PS1/APP/ Tau proteins, rapamycin was shown to diminish Aβ plaque formation (Rahman & Rhim, 2017). One study compared a mouse strain of normal neurological function to a senescence-prone (SAMP) mouse strain that exhibits age-related neurodegenerative decline. The SAMP mice demonstrated hyperphosphorylated Tau proteins and increased autophagy-inhibiting mTOR activity. Treatment of the SAMP mice with rapamycin greatly reduced the levels of phosphorylated Tau (Wang et al., 2017). There are three main approaches to mTORC1independent upregulation. The first is direct activation of the ULK1 complex via the kinase AMPK. This treatment is the least studied because activation of such a central signaling pathway will affect multiple cellular processes and create unwanted side effects (Nixon, 2013). The second approach involves intracerebral delivery of Beclin1. Deletion of the Beclin1 gene in a mouse model of Alzheimer’s Disease was shown to increase Aβ peptide accumulation,


and patients with Alzheimer’s have demonstrated reduced Beclin1 production (Pickford et al., 2008). In mouse models, of both Alzheimer’s and Parkinson’s the intracerebral delivery of Beclin1 reduced aggregation of both Aβ peptides and ⍺-synuclein (Metcalf et al., 2012). Additionally, the introduction of Beclin1 into HeLa cells via viral packaging was shown to be sufficient in inducing autophagy (Nah et al., 2015). The third pathway of increased autophagy is the inhibition of the phosphoinositol (IP3) cycle in cells. The production of I3P is necessary to generate PI3P for autophagosome formation, so the reduction of free IP3 in the cell through phosphoinositol cycle inhibition upregulates autophagy (Metcalf et al., 2012). The main drug used to accomplish this is lithium, which is more commonly used as a mood stabilizer in the treatment of bipolar disorder. In neurodegenerative disorders, lithium enhances clearance of aggregates of mutant HTT and ⍺-synuclein aggregates (Nixon, 2013). Additionally, inhibition of the IP3 cycle inhibits the protein GSK-3B, which interrupts tau phosphorylation in transgenic mice (Rahman & Rhim, 2017). Some FDA approved drugs have the same function as lithium in acting on the IP3 cycle, including Ca2+ blockers like loperamide. The dual treatment with mTORC1-dependent drugs like rapamycin and mTORC1-independent drugs like lithium leads to further upregulation of autophagy in flies in vivo (Metcalf et al., 2012).

Conclusion (Looking to the Future) “The dual treatment with mTORC1dependent drugs like rapamycin and mTORC1independent drugs like lithium leads to further upregulation of autophagy in flies in vivo.”


Autophagy as a treatment for neurodegenerative diseases holds a great deal of promise but has yet to see the same amount of success in human trials as in mouse models. Lithium has been administered to small trial groups with adverse results, attributed to the small population size and the variability in the degree of neurodegeneration (Nixon, 2013). Selective autophagy, while becoming increasingly understood, still involves proteins that have undefined functions. The mechanisms of certain complexes are understood, but the roles proteins play in this process remain unknown. Thus, many of these preclinical studies have not been tested in humans to the same degree as animals. Transgenic mice models simulate these neurodegenerative diseases by overexpressing causative proteins, but the individual proteins leading to neurodegeneration are not always obvious (Sweeny et al., 2017). Yet, increases in accessible patient data in recent years, as well as strides in drug discovery technology, shape a promising future for neurodegenerative disease treatment through autophagic mechanisms.

References Albanese, F., Novello, S., & Morari, M. (2019). Autophagy and LRRK2 in the Aging Brain. Frontiers in Neuroscience, 13, 1352. https://doi.org/10.3389/fnins.2019.01352 Arrasate, M., & Finkbeiner, S. (2012). Protein aggregates in Huntington’s disease. Experimental Neurology, 238(1), 1–11. https://doi.org/10.1016/j.expneurol.2011.12.013 Bjørkøy, G., Lamark, T., Brech, A., Outzen, H., Perander, M., Overvatn, A., Stenmark, H., & Johansen, T. (2005). P62/ SQSTM1 forms protein aggregates degraded by autophagy and has a protective effect on huntingtin-induced cell death. The Journal of Cell Biology, 171(4), 603–614. https://doi. org/10.1083/jcb.200507002 Chen, R., Zou, Y., Mao, D., Sun, D., Gao, G., Shi, J., Liu, X., Zhu, C., Yang, M., Ye, W., Hao, Q., Li, R., & Yu, L. (2014). The general amino acid control pathway regulates mTOR and autophagy during serum/glutamine starvation. The Journal of Cell Biology, 206(2), 173–182. https://doi.org/10.1083/ jcb.201403009 Chun, Y., & Kim, J. (2018). Autophagy: An Essential Degradation Program for Cellular Homeostasis and Life. Cells, 7(12). https://doi.org/10.3390/cells7120278 Glick, D., Barth, S., & Macleod, K. F. (2010). Autophagy: Cellular and molecular mechanisms. The Journal of Pathology, 221(1), 3–12. https://doi.org/10.1002/path.2697 Irvine, G. B., El-Agnaf, O. M., Shankar, G. M., & Walsh, D. M. (2008). Protein aggregation in the brain: The molecular basis for Alzheimer’s and Parkinson’s diseases. Molecular Medicine (Cambridge, Mass.), 14(7–8), 451–464. https://doi. org/10.2119/2007-00100.Irvine Metcalf, D. J., García-Arencibia, M., Hochfeld, W. E., & Rubinsztein, D. C. (2012). Autophagy and misfolded proteins in neurodegeneration. Experimental Neurology, 238(1), 22–28. https://doi.org/10.1016/j.expneurol.2010.11.003 Moruno, F., Pérez-Jiménez, E., & Knecht, E. (2012). Regulation of autophagy by glucose in Mammalian cells. Cells, 1(3), 372–395. https://doi.org/10.3390/cells1030372 Nah, J., Yuan, J., & Jung, Y.-K. 
(2015). Autophagy in neurodegenerative diseases: From mechanism to therapeutic approach. Molecules and Cells, 38(5), 381–389. https://doi. org/10.14348/molcells.2015.0034 Nixon, R. A. (2013). The role of autophagy in neurodegenerative disease. Nature Medicine, 19(8), 983–997. https://doi.org/10.1038/nm.3232 Parzych, K. R., & Klionsky, D. J. (2014). An overview of autophagy: Morphology, mechanism, and regulation. Antioxidants & Redox Signaling, 20(3), 460–473. https://doi. org/10.1089/ars.2013.5371 Pickford, F., Masliah, E., Britschgi, M., Lucin, K., Narasimhan, R., Jaeger, P. A., Small, S., Spencer, B., Rockenstein, E., Levine, B., & Wyss-Coray, T. (2008). The autophagy-related protein beclin 1 shows reduced expression in early Alzheimer disease and regulates amyloid beta accumulation in mice. The Journal of Clinical Investigation, 118(6), 2190–2199. https://doi. org/10.1172/JCI33585 Pyo, J. O., Nah, J., & Jung, Y. K. (2012). Molecules and their functions in autophagy. Experimental & Molecular Medicine,


44(2), 73–80. https://doi.org/10.3858/emm.2012.44.2.029 Quan, W., & Lee, M.-S. (2013). Role of Autophagy in the Control of Body Metabolism. Endocrinology and Metabolism, 28(1), 6. https://doi.org/10.3803/EnM.2013.28.1.6 Rahman, M. A., & Rhim, H. (2017). Therapeutic implication of autophagy in neurodegenerative diseases. BMB Reports, 50(7), 345–354. https://doi.org/10.5483/ bmbrep.2017.50.7.069 Shi, X., Yokom, A. L., Wang, C., Young, L. N., Youle, R. J., & Hurley, J. H. (2020). ULK complex organization in autophagy by a C-shaped FIP200 N-terminal domain dimer. Journal of Cell Biology, 219(7), e201911047. https://doi.org/10.1083/ jcb.201911047 Sweeney, P., Park, H., Baumann, M., Dunlop, J., Frydman, J., Kopito, R., McCampbell, A., Leblanc, G., Venkateswaran, A., Nurmi, A., & Hodgson, R. (2017). Protein misfolding in neurodegenerative diseases: Implications and strategies. Translational Neurodegeneration, 6(1), 6. https://doi. org/10.1186/s40035-017-0077-5 Wang, Y., Ma, Q., Ma, X., Zhang, Z., Liu, N., & Wang, M. (2017). Role of mammalian target of rapamycin signaling in autophagy and the neurodegenerative process using a senescence accelerated mouse-prone 8 model. Experimental and Therapeutic Medicine, 14(2), 1051–1057. https://doi. org/10.3892/etm.2017.4618 Zachari, M., & Ganley, I. G. (2017). The mammalian ULK1 complex and autophagy initiation. Essays in Biochemistry, 61(6), 585–596. https://doi.org/10.1042/EBC20170021



The Role of Autophagy and Its Effect on Oncogenesis

BY ZOE CHEN '23 Illustration depicting autophagy with a western spin! Directed by signaling complexes, autophagy wrangles cellular components in the wild cytosolic landscape Created by the author

Introduction Just two decades ago, autophagy had little foothold in the world of research. This quickly changed as the mechanism erupted into relevance on the scientific stage, revolutionizing current knowledge of neurodegeneration, longevity, and immune diseases. Autophagy is also a critical area of interest in the development of cancer therapies. This mechanism is suspected to either aid or combat chemotherapy resistance in tumor cells; whether it is friend or foe remains contentious. Defining autophagy’s involvement may thus be key in the fight against cancer. Autophagy, or self (auto) eating (phagy), is a homeostatic cellular process that degrades debris present within the cell. It serves as a well conserved survival mechanism in eukaryotes (Feng et al., 2014), also present among different tumor types (Tan et al., 2016). Essentially, autophagy can be thought of as a


lassoing cowboy, corralling unnecessary or harmful components in the cell’s cytosol for roundup. The autophagosome—an organelle that, in this simile, is akin to a lasso—then drives these components into the lysosome, where they are degraded and broken into basic monomers. The double-membraned autophagosome is a mediating organelle that forms upon appropriate signaling by the cell. It exists temporarily to envelop relevant cytosolic matter and transport it to the lysosome, with which it fuses membranes. Enzymes within the lysosome then break down the autophagosome’s cargo for disposal or reuse by the cell (Mizushima, 2007). A visual translation of the mechanism can be seen in Figure 1. Autophagy’s operation is regulated by several signaling pathways (Mizushima, 2005) that determine whether the process needs to take place. This most general role of autophagy in homeostasis gives rise to its versatile range of function.

A Glimpse into the Rodeo To better understand autophagy’s unique abilities, it is helpful to break down the process stage by stage. Autophagy occurs at a baseline level, but the process can be triggered more extensively by a state of nutrient starvation—most notably nitrogen, carbon, and amino acid starvation—though the trigger varies by the type of cellular organism (Takeshige et al., 1992). For multicellular organisms, the endocrine system is believed to be the major regulator of autophagy, because nutrient sensing occurs at the level of the whole organism (Mortimore, 1987). The autophagic pathway involves several signaling factors. An overall decrease in glucose transport (denoting starvation) inhibits mTOR, relieving its suppression of the ULK1 complex, which is responsible for vesicle nucleation. The protein beclin 1, phosphorylated by ULK1, transports autophagic proteins to the forming phagophore, a double membrane that is the precursor to the autophagosome. When the activating molecule in beclin 1-regulated autophagy (AMBRA1) of the PI3K complex binds to beclin 1, the phagophore becomes stabilized. It is now prepared to collect the cytosolic matter. These signaling molecules appropriately call autophagy to action and are also important as targets, frequently used by scientists to arrest autophagy. Research has targeted autophagy pharmacologically upstream with ULK1, PI3K, and beclin 1 inhibition. Drugs have also been used on downstream targets to prevent autophagy. Chloroquine, hydroxychloroquine, and bafilomycin block the autophagosome from fusing with the lysosome (Levy, 2017), thereby preventing the breakdown of the debris. These drugs are commonly used in clinical practice to observe the effects of autophagy and what happens when it is absent. The next step is the emergence of the autophagosome, a semi-randomly occurring process. Upon formation, the phagophore sequesters components of the cytosol; the autophagosome is thus formed when the sequestration is complete.
The noose of the lasso closes around the unsuspecting herds of cytosolic matter. Subsequently, the autophagosome merges membranes with the lysosome, releasing its cargo to be degraded by hydrolases present inside. The resulting structure is called the autolysosome or the autophagolysosome. When the contents of the autophagolysosome have been broken down into individual units, they are then returned to the cytosol for reuse. Recycling proteins, for


instance, yields useful monomers for the cell (Newsholme, Crabtree, & Ardawi, 1985). These amino acids can then be digested for energy or used as the raw ingredients to synthesize proteins (Onodera & Ohsumi, 2005). These abilities render autophagy a vital tool in cell survival during times of limited resources; reusing or repurposing its own contents increases the cell’s endurance.

Purpose of Autophagy Autophagy is chiefly responsible for disposal of cytosolic contents and acts as a sort of protein and organelle quality control. Researchers find that cells lacking autophagy tend to accumulate long-lived or misfolded proteins and abnormal organelles under regular homeostatic conditions (Komatsu et al., 2005). Other studies demonstrated autophagy’s ability to clear away excess organelles, such as damaged mitochondria (Kim, Rodriguez-Enriquez, & Lemasters, 2007) and surplus peroxisomes (Iwata et al., 2006). Autophagy’s core function guards the homeostasis of the cell, in turn promoting cell survival and endurance.

“Another hat autophagy wears is as a promoter of longevity. Recent findings show autophagy plays a key role in preventing cellular senescence, or the aging of the cell that effectively ends its reproductive and growth potential.”

Because of the autophagosome’s ability to round up and direct cytosolic elements, autophagy can also serve as a standalone transportation pathway. For instance, vacuole-specific enzymes formed in the cytosol are delivered by an autophagic pathway to the vacuole (instead of the lysosome) as part of a biosynthetic pathway (Klionsky, 2005). Furthermore, making further use of this transport ability, autophagy is employed by dendritic cells to aid the innate immune response. The autophagosome patrols for and rounds up viral single-stranded RNA (ssRNA) in the cytosol. Once the foreign ssRNA is captured, toll-like receptors are able to identify the ssRNA as it is being transported to the lysosome. Dendritic cells then secrete interferons, which stimulate the immune response, attacking the foreign matter (Lee et al., 2007). Autophagy also assists with adaptive immune responses, informing various steps including neutrophil extracellular trap formation, antigen processing, and type I interferon production and regulation (Zhang et al., 2012). Another hat autophagy wears is as a promoter of longevity. Recent findings show autophagy plays a key role in preventing cellular senescence, or the aging of the cell that effectively ends its reproductive and growth

Figure 1: The stages of autophagy: from initiation to degradation. The schematic condenses the steps of macroautophagy, starting with initiation. Upstream, starvation conditions relieve mTOR inhibition of the ULK1 complex. A PI3K complex mediates ULK1 (in charge of nucleation). This sequence appropriately allocates necessary proteins to the site of the phagophore. The double-membraned phagophore extends, evolving into an autophagosome and collecting cytosolic debris. In the stage of vesicle fusion, the lysosome fuses with the autophagosome to create the autophagolysosome. The acidic lysosomal interior and its hydrolytic enzymes break down the cargo of the autophagosome, which results in the return of the degraded content to the cytosol. Created by the author

“Although autophagy is outfitted as a defense mechanism, certain pathogens and viruses exploit the autophagic pathway.”


potential. Senescence begins when DNA damaged within the cell is shed from the nucleus and leaked into the cytosol. Environmental stressors such as chemotherapy, UV rays, and radiation can instigate genomic damage and breakages in the DNA strands. The self-DNA can exit the nucleus through pores or from budding of the membrane. Autophagosomes are responsible for trafficking this extranuclear DNA to the lysosome, where the nuclease Dnase2a can degrade it. Autophagy’s essential role in self-DNA removal was highlighted in several experiments: when autophagy was arrested in different cell lines, DNA accumulated in the cytosol. Another sensing pathway, STING, triggers an inflammatory response upon detection of DNA in the cytosol (Lan et al., 2014). Typically, cytosolic DNA is considered suspect by the cell’s immune system—it could have originated from bacterial or viral sources, thereby indicating infection. In this case, the inflammatory response is pro-survival and defends the cell against foreign threats (Gaidt et al., 2017). This inflammatory immune response, although beneficial in the short term, contributes to cellular senescence in the long run. In the case of self-DNA, when cytosolic nucleic acid is sourced from the cell itself and not a foreign attacker, this inflammatory

response is inappropriate. As a direct result, premature aging occurs in cells that build up self-DNA without the help of autophagy (Santoro et al., 2018). Senescence is accelerated when autophagy cannot collect and break down cytosolic DNA fast enough before STING detection. From this perspective, autophagy can be thought of as a senescence suppressant, prolonging life and staving off early aging.

Both Sheriff and Outlaw Although autophagy is outfitted as a defense mechanism, certain pathogens and viruses exploit the autophagic pathway. Pathogens have been known to fuse with the autophagosome to survive inside the cell and block its policing function (Swanson & Fernandez-Moreira, 2002). Forms of hepatitis, poliovirus, and picornavirus replicate within vesicles of autophagosomal origin, increasing proliferation in a protected setting (Jackson et al., 2005). Beyond the dangers that arise when autophagy’s function is corrupted, problems are equally created when autophagy cannot be carried out to completion. Studies find that when autophagy is disrupted,


Figure 2: Stained images of lung cancer cells treated with the antitumor drug etoposide show autophagy. Here, autophagy serves as a line of defense against chemotherapy, contributing to the cancer’s resistance. By removing waste, autophagy restores the cell, enabling its survival. The lung cells, outlined in green, contain several nuclei (shown as bright green bodies). In the surrounding cytosol are several autophagosomes, shown in orange. Source: Gewirtz and Saleh, 2016

the resulting accumulation of organelles causes neurodegenerative diseases like Alzheimer’s (Okamoto et al., 1991) and Parkinson’s disease (Anglade et al., 1997). Researchers find that if autophagosome formation and maturation into autolysosomes are not fast enough, the incompletely degraded matter can lead to toxic peptides and Alzheimer’s precursors. As a result, the up-regulation of autophagy is a beneficial therapy for neurodegeneration (Ravikumar et al., 2004). Another interesting example is the autoimmune disease lupus, specifically systemic lupus erythematosus (SLE). In elucidating how the disease develops, scientists have linked variants of autophagic genes to SLE susceptibility. Defects in autophagy, especially in adaptive immune cells, are suspected to further the development of SLE (Qi et al., 2019). Although autophagy’s mechanism is purported to maintain and protect the cell, it can turn harmful when abused by foreign agents or executed incorrectly.

Effects of Autophagy on Tumorigenesis The role of autophagy is important in non-cancer-related topics, and it is well established


as a multifaceted tool. However, it is equally important to uncover its complex entanglement with both cancer growth and prevention. Anti-Cancer On the one hand, existing research in the field demonstrates that autophagy shields cells against cancer development and growth. Autophagy often precedes apoptosis and/or acts in conjunction with this form of cell death. Evidence is mounting that autophagy enhances apoptosis when available. When apoptosis is unavailable, autophagy mediates autophagic cell death as an alternative. Type II autophagic cell death is also important in killing tumor cells that have no resistance to anticancer drugs. Recent studies demonstrate the potency of cannabinoids and temozolomide (TMZ) as anticancer actors via autophagic cell death in some cancers. Other signaling pathways like AMPK/AKT1/mTOR also regulate autophagic cell death. In cases of multidrug resistance (MDR)—defined as the resistance of cancer cells to varied avenues of chemotherapy—where apoptosis is absent, autophagy arrests the growth of tumor cells. In essence, the normal autophagic pathway becomes excessive, digesting the cell itself and killing it. Evidence has even shown that increasing autophagy could

“Evidence is mounting that autophagy enhances apoptosis when available. When apoptosis is unavailable, autophagy mediates autophagic cell death as an alternative.”


facilitate MDR reversal by triggering oxidative stress (Li et al., 2017). Autophagy and apoptosis also make up the first line of defense against hypoxia, a deficiency of oxygen (Bohensky et al., 2007; Degenhardt et al., 2006). When both these mechanisms are suppressed, cells die from hypoxia. Necrotic cell death then triggers the inflammatory immune response, which in turn promotes tumorigenesis. Thus, researchers find that autophagy is important in fending off tumorigenesis by stopping necrosis.

“Further reports contend that autophagy may act directly as a tumor suppressor. Mutation of some autophagic genes results in tumorigenesis and a dysregulation of cell proliferation in certain lines of mice.”

Further reports contend that autophagy may act directly as a tumor suppressor. Mutation of some autophagic genes results in tumorigenesis and a dysregulation of cell proliferation in certain lines of mice (Karantza-Wadsworth et al., 2007). Another theory for its antitumor character is that autophagy’s prevention of genome damage armors the cell against tumor progression. Without autophagy, a cell under metabolic stress exhibits more genome damage, which increases the likelihood of tumor growth, since tumors may arise out of genome instability and mutation. By clearing out abnormal material (such as defunct proteins or organelles) that threatens genome stability, autophagy improves the functionality of the metabolic elements. Other theories argue that autophagy aids the T cell immune response to cancer (Townsend et al., 2012). Autophagy shapes cell death in tumor cells by making it possible for the immune system to recognize the tumor’s properties and instigate an immune response. When autophagy was enhanced, one study found a boost in antitumor responsiveness to immune counters (Pietrocola et al., 2016). These findings suggest that bundling chemotherapy with enhanced autophagy may prove effective. In all, various properties of autophagy, ranging from its role as an alternative cell death route and its genetic link to tumor growth to its ability to clear harmful cytosolic matter and its connection to the immune response, make it a strong defender against cancer. Pro-Cancer On the other hand, some of the same properties that render autophagy an antitumor actor can simultaneously promote tumor development in the cell. Ironically, autophagy is suggested to help cells acquire chemotherapy resistance, a rather stark contrast to the previously cited research. Because of autophagy’s capacity to promote survival under tough conditions by breaking down and reusing its own components, its operation in tumor cells defends them against chemotherapy.
Without autophagy to support tumor cell survival, the cells become


all the more susceptible. In studies, when autophagy was suppressed, the tumor cells became more sensitized to the treatment, which in turn increased its cytotoxic potential. In breast cancer, evidence suggests that autophagy actually prevents apoptosis, thereby denying the cell either form of programmed death and helping tumor cells cling to life. Using drugs that prevent autophagy in conjunction with chemotherapeutic drugs has thus been shown to enhance the cytotoxicity of the treatment (Maycotte et al., 2012). Prescribing chloroquine or hydroxychloroquine with chemotherapy has proven to magnify antitumor activity (Levy, 2017). When chemotherapy is combined with autophagy suppression, similar effects are observed in a variety of human cancers, including esophageal cancer, glioblastoma, hepatocellular carcinoma, leukemia, lung cancer, pancreatic adenocarcinoma, prostate cancer, renal cell carcinoma, and ovarian cancer (Sui et al., 2013). Another factor influencing autophagy is epidermal growth factor signaling, responsible for three subsequent signaling pathways (Ras/MAPK, PI3K/Akt, and JAK/STATs) that are linked to cancer initiation (Henson, 2006). Research indicates that epidermal growth factor signaling can trigger autophagy, which then protects the nascent tumor as it develops in the cell. An extension of autophagy’s resistance-acquisition ability is multidrug resistance (MDR), which arises after long-term chemotherapy. In clinical studies, higher levels of autophagy frequently appeared in patients with poor prognoses. This indicates that autophagy could trigger MDR development. Further examination of genes involved in the promotion of autophagy shows that they also mediate MDR. This interaction culminates in the protection of MDR cells from appropriate cell death. Silencing autophagy-related genes has sensitized cells with MDR to chemotherapy treatments, making them more effective (Li et al., 2017).
Autophagy’s genetic and signaling ties to cancer, along with its alliance with tumor cells as a pro-survival mechanism, reveal its dangers. These results pave the way for exciting possibilities in oncology, specifically pairing chemotherapy agents with autophagy suppressants to improve the lethality of anticancer treatments. These results also show how contradictory the current knowledge in the field is. Just looking at MDR tumor cells


gives readers a conflict: with evidence showing that autophagy protects MDR tumor cells against chemotherapies and evidence showing that autophagy terminates MDR tumor cells, how can a verdict be reached on autophagy as an instrument of life or death?

Conclusion Based on current research in the field, autophagy’s positive or negative effect on cancer remains ambiguous. Autophagy inducers or inhibitors could be fundamental to a prospective immunotherapeutic strategy against cancer; the matter is one of discerning the circumstances in which autophagy is hurtful or helpful. Although progress is being made in this branch of research, ranging from genetic to clinical studies, more work is still necessary. As with many elements of the immune system, autophagy’s seemingly simple job is in reality a complex and unnavigated web filled with contradictions. With certainty, autophagy’s role in cancer growth and development can currently only be defined as context-dependent and varied. A clearer understanding of how autophagy controls chemotherapy sensitivity is necessary to elucidate its effect on tumor cells. Currently, autophagy’s functionality hinges on its circumstances; the type of anticancer treatment, the type of cancer, and more can dictate cancer outcomes. While the fundamental mechanisms and applications of autophagy are well explored, defining autophagy’s role in cancer remains a matter of debate. Grasping the connections between cancer and autophagy, the immune system, signaling pathways, and associated genes is necessary moving forward. To promote or suppress autophagy? The decision could be potentially lifesaving or death-affirming.

References Anglade, P., Vyas, S., Javoy-Agid, F., Herrero, M. T., Michel, P. P., Marquez, J., Mouatt-Prigent, A., Ruberg, M., Hirsch, E. C., & Agid, Y. (1997). Apoptosis and autophagy in nigral neurons of patients with Parkinson’s disease. Histol. Histopathol. 12:25–31. Bohensky, J., Shapiro, I. M., Leshinsky, S., Terkhorn, S. P., Adams, C. S., & Srinivas, V. (2007). HIF-1 regulation of chondrocyte apoptosis: Induction of the autophagic pathway. Autophagy 3:207–214.


Degenhardt, K., Mathew, R., Beaudoin, B., Bray, K., Anderson, D., Chen, G., Mukherjee, C., Shi, Y., Gelinas, C., Fan, Y., et al. (2006). Autophagy promotes tumor cell survival and restricts necrosis, inflammation, and tumorigenesis. Cancer Cell 10:51–64. Feng, Y., He, D., Yao, Z., & Klionsky, D. J. (2014). The machinery of macroautophagy. Cell Res. 24, 24–41. Gewirtz, D., & Saleh, T. (2016). Lung Cancer Autophagy. National Cancer Institute Up Close 2016. Hacohen, N., & Lan, Y. Y. (2019). Damaged DNA marching out of aging nucleus. Aging, 11(19), 8039–8040. Hara, T., Nakamura, K., Matsui, M., Yamamoto, A., Nakahara, Y., Suzuki-Migishima, R., Yokoyama, M., Mishima, K., Saito, I., Okano, H., et al. (2006). Suppression of basal autophagy in neural cells causes neurodegenerative disease in mice. Nature 441:885–889. Henson, E. S., & Gibson, S. B. (2006). Surviving cell death through epidermal growth factor (EGF) signal transduction pathway: implications for cancer therapy. Cell Signal 18:2089–2097. Iwata, J., Ezaki, J., Komatsu, M., Yokota, S., Ueno, T., Tanida, I., Chiba, T., Tanaka, K., & Kominami, E. (2006). Excess peroxisomes are degraded by autophagic machinery in mammals. J. Biol. Chem. 281:4035–4041. Jackson, W. T., Giddings, T. H., Taylor, M. P., Mulinyawe, S., Rabinovitch, M., Kopito, R. R., & Kirkegaard, K. (2005). Subversion of cellular autophagosomal machinery by RNA viruses. PLoS Biol. 3:e156, 10.1371/journal.pbio.0030156. Takeshige, K., Baba, M., Tsuboi, S., Noda, T., & Ohsumi, Y. (1992). Autophagy in yeast demonstrated with proteinase-deficient mutants and conditions for its induction. J. Cell Biol. 119(2):301–311. Karantza-Wadsworth, V., Patel, S., Kravchuk, O., Chen, G., Mathew, R., Jin, S., & White, E. (2007). Autophagy mitigates metabolic stress and genome damage in mammary tumorigenesis. Genes & Dev. 21:1621–1635. Kim, I., Rodriguez-Enriquez, S., & Lemasters, J. J. (2007). Selective degradation of mitochondria by mitophagy. Arch. Biochem. Biophys. 462:245–253.
Klionsky, D. J. (2005). The molecular machinery of autophagy: Unanswered questions. J. Cell Sci. 118:7–18. Komatsu, M., Waguri, S., Ueno, T., Iwata, J., Murata, S., Tanida, I., Ezaki, J., Mizushima, N., Ohsumi, Y., Uchiyama, Y., et al. (2005). Impairment of starvation-induced and constitutive autophagy in Atg7-deficient mice. J. Cell Biol. 169:425–434. Lan, Y. Y., Londono, D., Bouley, R., Rooney, M. S., & Hacohen, N. (2014). Dnase2a Deficiency Uncovers Lysosomal Clearance of Damaged Nuclear DNA via Autophagy. Cell Reports 9(1):180–192. Lee, H. K., Lund, J. M., Ramanathan, B., Mizushima, N., & Iwasaki, A. (2007). Autophagy-dependent viral recognition by plasmacytoid dendritic cells. Science 315:1398–1401. Levy, J., Towers, C., & Thorburn, A. (2017). Targeting autophagy in cancer. Nat Rev Cancer 17, 528–542. Li, Y., Lei, Y., Yao, N., et al. (2017). Autophagy and multidrug resistance in cancer. Chin J Cancer 36, 52.


Maycotte, P., Aryal, S., Cummings, C. T., Thorburn, J., Morgan, M. J., & Thorburn, A. (2012). Chloroquine sensitizes breast cancer cells to chemotherapy independent of autophagy. Autophagy 8:200–212. Mizushima, N. (2007). Autophagy: process and function. Genes Dev 21, 2861–2873. Mizushima, N. (2005). The Pleiotropic Role of Autophagy: From Protein Metabolism to Bactericide. Cell Death & Differentiation, 12, 1535–1541. Gaidt, M. M., et al. (2017). The DNA Inflammasome in Human Myeloid Cells Is Initiated by a STING-Cell Death Program Upstream of NLRP3. Cell. Mortimore, G. E., & Pösö, A. R. (1987). Intracellular protein catabolism and its control during nutrient deprivation and supply. Annu. Rev. Nutr. 7:539–564. Nakai, A., Yamaguchi, O., Takeda, T., Higuchi, Y., Hikoso, S., Taniike, M., Omiya, S., Mizote, I., Matsumura, Y., Asahi, M., et al. (2007). The role of autophagy in cardiomyocytes in the basal state and in response to hemodynamic stress. Nat. Med. 13:619–624. Newsholme, E. A., Crabtree, B., & Ardawi, M. S. (1985). Glutamine metabolism in lymphocytes: Its biochemical, physiological and clinical importance. Q. J. Exp. Physiol. 70:473–489. Okamoto, K., Hirai, S., Iizuka, T., Yanagisawa, T., & Watanabe, M. (1991). Reexamination of granulovacuolar degeneration. Acta Neuropathol. (Berl.) 82:340–345. Onodera, J., & Ohsumi, Y. (2005). Autophagy is required for maintenance of amino acid levels and protein synthesis under nitrogen starvation. J. Biol. Chem. 280:31582–31586. Pietrocola, F., et al. (2016). Caloric restriction mimetics enhance anticancer immunosurveillance. Cancer Cell 30, 147–160. Qi, Y.-y., Zhou, X.-j., & Zhang, H. (2019). Autophagy and immunological aberrations in systemic lupus erythematosus. Eur. J. Immunol. 49:523–533.
Ravikumar, B., Vacher, C., Berger, Z., Davies, J. E., Luo, S., Oroz, L. G., Scaravilli, F., Easton, D. F., Duden, R., O’Kane, C. J., et al. (2004). Inhibition of mTOR induces autophagy and reduces toxicity of polyglutamine expansions in fly and mouse models of Huntington disease. Nat. Genet. 36:585–595. Santoro, A., Spinelli, C. C., Martucciello, S., et al. (2018). Innate immunity and cellular senescence: The good and the bad in the developmental and aged brain. J Leukoc Biol 103:509–524. Swanson, M. S., & Fernandez-Moreira, E. (2002). A microbial strategy to multiply in macrophages: The pregnant pause. Traffic 3:170–177. Tan, Q., et al. (2016). Role of autophagy as a survival mechanism for hypoxic cells in tumors. Neoplasia 18, 347–355. Townsend, K. N., et al. (2012). Autophagy inhibition in cancer therapy: metabolic considerations for antitumor immunity. Immunol. Rev. 249, 176–194. Zhang, Y., Morgan, M. J., Chen, K., Choksi, S., & Liu, Z. G. (2012). Induction of autophagy is essential for monocyte-macrophage differentiation. Blood 119(12):2895–2905.






Fig. 1: Map of COVID-19 infections as of September 7, 2020. By Raphaël Dunant, Gajmar (maintainer) - Own work, data from Wikipedia English (e.g. COVID-19 data and Population), maps from File:BlankMap-World.svg and File:Blank Map World Secondary Political Divisions.svg, CC BY 4.0, Wikimedia Commons

Introduction The end of December 2019 marked the beginning of the coronavirus outbreak, now a global pandemic. As medical professionals in China’s Hubei province noticed a rise in peculiar pneumonia cases that exhibited SARS-like symptoms, they attempted to warn their government and the world about its potentially deadly consequences. It took only weeks, from December 31, 2019, when the coronavirus was identified, until January 13, 2020, when the first case of COVID-19 outside China was confirmed, for COVID-19 cases to start multiplying on a global scale and turn into a pandemic (World Health Organization, 2020). It was declared as such on March 11th by the WHO. During the first meeting of the International Health Regulations Emergency Committee regarding the outbreak of novel coronavirus on January 22 and 23, the representatives from China’s Ministry of Health confirmed


that there were 557 cases, located mainly in the Hubei province, with some clusters outside of this region. According to the World Health Organization’s (WHO) 205th Situation Report, there have been 20,162,474 confirmed cases and 737,417 fatalities globally as of August 12th, 2020, with North America being the current epicenter (WHO, 2020). Since the coronavirus outbreak is the first pandemic of such severity since the catastrophic Spanish flu emerged in 1918, most people who are currently alive have never experienced the severity of a pandemic’s impact. A pandemic such as the Spanish flu, or the COVID-19 pandemic the world is experiencing now, affects almost every essential sector of the economy, including healthcare, education, and trade. Among those affected are world leaders who bear the burden of protecting all of these aspects and making the right sacrifices. A large number of governments, particularly those of the Western

world, have been heavily criticized for their untimely or ill-fitting pandemic-containment policies; yet, in the face of it all, Vietnam and other Asian countries seem to have had the firmest grasp on how to contain the 21st century’s worst public health disaster so far. The official COVID-19 statistics in Vietnam as of August 2020 are 1,029 confirmed cases with 27 deaths. From April 17 to May 2020, however, there were 324 confirmed cases (all contracted overseas) and 0 deaths (Ministry of Health Vietnam, 2020). Due to the steady decrease in cases, the Vietnamese government began to ease lockdown restrictions on April 23, 2020 and entered a “new normal” – a step that few affected countries had been able to take at that point (Ministry of Health Vietnam, 2020). It is therefore valuable to assess what Vietnam accomplished to achieve this outcome at such an early date, especially since it has more than 97,000,000 citizens and shares a border with China that spans roughly 1,400 kilometers, or 870 miles (World Population Review, 2020; Thao, 2009).

Addressing the credibility of Vietnam’s Coronavirus statistics Vietnam’s status as a one-party Communist state inevitably provokes an instinctive reaction in Western, non-Communist societies to deny the credibility of its COVID-19 statistics. It is true that there is a lack of transparency when it comes to official details of what the medical process looks like for Vietnamese COVID-19 patients and the country’s infected case numbers, as the Ministry of Health has yet to release any data to those who are not directly involved in the medical sector. Thus, the credibility of the number of infected cases remains uncertain. The lack of public access to data at this point during the pandemic, however, neither supports nor undermines the successful results that the Vietnamese government has achieved in containing an outbreak. As of right now, there have been confirmations from independent international sources that argue in favor of Vietnam’s credibility. The director of the Centers for Disease Control and Prevention (CDC) in Thailand, Dr. John MacArthur, stated that he had faith in Vietnam’s case numbers in a telephone briefing with Dr. Barbara Marston, CDC COVID-19 International Task Force Lead: “[Our] team that’s up in Hanoi working very, very closely with ministry of health counterparts on many, many aspects of


this outbreak response, providing technical assistance in the areas of surveillance, data analysis, laboratory testing, the actual going into the field and doing investigations, contact tracing with the Vietnamese counterparts, and really, sort of trying to support them and their approach. [...] And so those relationships are strong, and it kind of allows us to get a sense of whether the numbers are real, the numbers are not real, only because of deficiencies in testing and some such, or whether there’s some reason to hide those numbers. And from the communications I’ve had with my Vietnam team, is they, at this point in time, don’t have any indication that those numbers are false.” In addition, Reuters, the world’s largest multimedia news provider, lauds Vietnam for having the highest ratio of tests to each confirmed case in the world, which stands at roughly 791:1. Although these numbers may contain bias from the Vietnamese Ministry of Health, international news outlets such as Reuters mostly acknowledge the effectiveness of Vietnam’s approach regardless of questions of numerical transparency.

Historical Pandemic Response: Severe Acute Respiratory Syndrome (SARS), 2003

“Reuters, the world’s largest multimedia news provider, lauds Vietnam for having the highest ratio of tests to each confirmed case in the world, which stands at roughly 791:1.”

The COVID-19 global outbreak is not the first pandemic to sweep through Vietnam in the 21st century. In February 2003, the SARS coronavirus, which initially appeared in southern China’s Guangdong province around mid-November 2002, spread to 26 countries and proceeded to infect roughly 8,437 people in the following 3-4 month period (“Cumulative Number,” 2020; CDC, 2020). The number of worldwide fatalities reached about 800 (“Cumulative Number,” 2020). Among the affected countries outside of China, Vietnam was one of the first four to report alarmingly atypical cases of pneumonia, following the hospitalization in Hanoi of a 47-year-old businessman who had previously traveled to mainland China and Hong Kong (Kamps & Hoffman, 2006). These cases were later confirmed to be SARS and were all treated in Hanoi’s Vietnam-French Hospital. Out of 63 SARS cases in Hanoi, 36 were healthcare workers who were in direct contact with the first patient, making the percentage of infected medical personnel during the pandemic 57%, the highest of all affected countries (Hörmansdorfer, Campe, & Sing, 2008). The 5 recorded deaths in Vietnam entirely consisted of doctors and nurses at the

Figure 2: Graph depicting daily COVID-19 tests in South Korea and Vietnam from January 28th to April 27th. Daily testing rose significantly in South Korea starting in February, while in Vietnam daily testing did not start until March. Retrieved from Our World in Data.

French Hospital (Phuong, 2018). Two important factors contributed to Vietnam’s success in becoming SARS-free after just two months (February 26 to April 28, 2003): a timely response from the hospital responsible for SARS patients, and the government’s quick and transparent efforts to enforce policies that ensured effective containment of the disease.

“Only a few days after the initial report to the WHO made by Dr. Carlo Urbani, the French Hospital issued a hospital-wide quarantine order on March 5, 2003.”

The amount of commitment and resourcefulness that went into controlling SARS infections was especially impressive because, at the time, medical resources in Vietnam were still relatively scarce (Phuong, 2018). Only a few days after the initial report to the WHO made by Dr. Carlo Urbani, a WHO representative who traveled to Hanoi for a situational assessment, the French Hospital issued a hospital-wide quarantine order on March 5, 2003 (National Hospital for Tropical Diseases Vietnam, 2013). When interviewed about the situation that led to the hospital’s lockdown, nurse Xuân, one of the first responders, told a VnExpress journalist: "Tôi lờ mờ nhận thấy dịch ngày càng nguy hiểm. Hôm đó (5/3/2003), tôi đi chợ mua rất nhiều đồ, dặn chồng nấu nướng và chăm sóc các con. Quả nhiên, tối ấy bệnh viện phát lệnh đóng cửa, toàn bộ nhân viên ở lại viện" (“I vaguely sensed that the outbreak was becoming more and more dangerous.


That day [March 5, 2003], I went to the market and bought a lot of groceries, and instructed my husband to cook and take care of the kids. Sure enough, that night the hospital sent out the order to close, and all staff stayed at the hospital”). All working hospital staff and residing patients were required to remain at the hospital for a period of more than 30 days. This swift move to isolate SARS cases largely prevented the spread of the disease within the community and minimized the possible damage of the outbreak. According to Doctor Võ Văn Bản, the vice director of the French Hospital during the SARS outbreak, most of the hospital’s medical personnel were unaware of the impending SARS pandemic when they came into contact with patient zero on February 26, 2003. As a result, they did not take drastic preventive measures such as isolating the patient or wearing sufficient personal protective equipment, and a disproportionate number of the hospital’s staff became infected and died (Phuong, 2018). This devastating unintended consequence prompted the hospital to act to protect its staff and control infection. First, after identifying the SARS threat, the hospital allowed high-risk personnel such as those who


Figure 3: Chart depicting Vietnamese tracing levels F0-F5. Created by authors.

were pregnant or had young dependents to take time off work (Anh, 2020). Second, the hospital began providing additional PPE for its workers, specifically N95 masks (Nishiura et al., 2005). The hospital addressed its urgent need for essential life-saving equipment by cooperating with Bach Mai hospital and its French associates to “borrow” five more ventilators (Phuong, 2018). After the first few frantic days of coordinating additional medical supplies and working intensively to minimize the further spread of the outbreak, the Vietnamese Ministry of Health directed the French Hospital to move its patients to the National Hospital of Tropical Diseases, whose medical personnel and resources were better equipped for this type of emergency (National Hospital for Tropical Diseases Vietnam, 2013). The fast and efficient cooperation among this network of hospitals enabled Vietnam to mitigate the consequences of medical institutions being caught unprepared by an unprecedented and deadly medical emergency. By March 15, the CDC had issued warnings and guidelines for global health departments on the SARS pandemic (CDC SARS Response Timeline, 2013). From this point onwards, the Vietnamese government strengthened


“By March 15, the CDC issued warnings and guidelines for global health departments on the SARS pandemic. From this point onwards, the Vietnamese government strengthened border control and enforced extensive medical screenings at ports of entry as well as airports across the country.”

border control and enforced extensive medical screenings at ports of entry as well as airports across the country. The testing procedures were conducted by medical personnel from the National Institute of Hygiene and Epidemiology in cooperation with the National Hospital of Tropical Diseases (Phuong, 2018). Despite being a struggling developing country at the time, Vietnam was committed to funding efforts to fight the transmittable disease. The Finance Ministry spent about $2 million on medical equipment and activities related to SARS prevention and targeted a little more than $1 million for Vietnam’s border provinces tasked with preventing SARS from leaving or entering Vietnam (Congressional Research Service, 2003). In addition, the Ministry of Health directed the formation of the National Steering Committee

for the Prevention of SARS with the assistance of other governmental sectors with authority over national security and governmental media. This Committee consisted of divisions responsible for community tracing as well as regional disease-monitoring task forces. According to the General Department of Preventive Medicine in Vietnam, the task forces included the following sub-committees (Executive Hazardous Infectious Diseases, 2017):


Figure. 4: Map depicting tracing levels of different countries. Retrieved from Our World in Data.

1. The sub-committee for disease monitoring
2. The sub-committee for disease treatment
3. The sub-committee for public health education and media
4. The sub-committee for logistics

“Among the countries most affected by the SARS outbreak, Vietnam was the poorest.”

Among the countries most affected by the SARS outbreak, Vietnam was the poorest (Congressional Research Service, 2003). The country acknowledged that it lacked the necessary medical and monetary resources at the time, making the prospect of fighting the SARS pandemic alone unfeasible. To overcome this vulnerability, Vietnam actively sought international aid. As soon as the first severe SARS cases appeared, the French Hospital was financially assisted by France, receiving more than $100,000 for sterilization and disinfection of hospital equipment. Vietnam also asked Japan to dispatch medical experts and received two ventilators along with other medical supplies from the Japanese government. In addition to requesting assistance from other nations, Vietnam cooperated extensively with large public health organizations: it was supplied with a large amount of PPE such as masks and gowns from the WHO and CDC, and received direct consultation from a team of medical professionals sent by Doctors Without Borders (Congressional Research Service, 2003). On April 28, 2003, a mere two months after


the appearance of its first SARS case, Vietnam became the first country in the world to contain the outbreak (Update 95 – SARS: Chronology of a serial killer, WHO, 2003). Vietnam’s holistic approach to battling this public health emergency allowed it to minimize the spread of the deadly virus, and its success in 2003 would later become the blueprint for how the country dealt with the COVID-19 pandemic in 2020.

Strategies for COVID-19: Systemic Level

Where is Vietnam now, and what strategies did the government employ to get there? From its first case of COVID-19, reported on January 23rd of this year, Vietnam went almost 100 days without a locally transmitted case until the recent July 2020 outbreak in Da Nang. The number of positive cases reported by the Vietnamese Ministry of Health reached 1,009 in August, up from 324 between May and July, due in large part to Vietnamese tourists from all over the country visiting a popular resort in Da Nang, as reported by Vietnam Briefing (Vietnam Briefing News, 2020). Meanwhile, the CDC reported roughly 50,000 new cases per day in the United States in July and August 2020 (CDC Covid Data Tracker, 2020). The question remains: how did Vietnam do so well in the


face of this pandemic? A report in the Journal of Travel Medicine claimed Vietnam’s response was similar to those of other countries, but that Vietnam’s strategies were particularly effective because they were implemented much earlier (Dinh et al., 2020). This paper will investigate several key systemic strategies employed by the Vietnamese government to prevent the spread of COVID-19: early and enforced quarantine, government tracing, distribution of supplies and testing, utilization of technological tools, and leadership organization.

Early and Enforced Isolation

i. Budget-friendly Approach: A Brief Comparison to South Korea

Much of Vietnam’s success can be attributed to its proactive and effective lockdown measures. Vietnam took a preventative approach because, with a GDP per capita of $2,715.30 reported in 2019, it could not afford to mitigate an outbreak through comprehensive testing and generous spending. To illustrate these economic constraints, South Korea provides a useful counterexample. Both Vietnam and South Korea have demonstrated relatively effective COVID-19 mitigation outcomes, but they adopted different approaches. With a significantly higher GDP per capita ($31,362), South Korea could afford a pandemic response strategy built on comprehensive testing and generous spending (World Bank, 2020). As of August 25th, South Korea had reported just under 18,000 cases, with over 1.8 million tests performed (Ministry of Health and Welfare South Korea, 2020). Back in January, officials from the South Korean health ministry convened representatives from more than 20 medical companies to produce COVID-19 testing kits. Just a week after the January 27th meeting, one company was approved, with others following soon after (Terhune et al., 2020).
So far, Asia Times has reported that South Korea’s national health insurance system has spent $310 million on COVID-19 treatment. This pales in comparison to the trillions of dollars the United States has spent thus far in its COVID-19 response, but Dr. Kim Sun-min, president of the Health Insurance Review and Assessment Service (HIRA), nonetheless called COVID-19 a “relatively cheap disease” because it does


not require MRIs, surgery, or other expensive equipment for treatment, aside from ventilators in extreme cases (Salmon, 2020). By contrast, Vietnam spent only around $3 million in total on the SARS outbreak in 2003 and could not afford this “low cost, high tech” strategy. Vietnamese test kits were not launched until March, and only after the WHO awarded several grants to the Vietnamese government to fund their development. As a result, daily testing was implemented more slowly in Vietnam than in South Korea. The Vietnamese government instead opted to launch a prevention plan suited to its resources: travel restrictions, quarantine centers, school closures, and lockdowns were executed early in order to prevent, not mitigate, the spread of the virus.

ii. Travel

Immediately after its first case on January 23rd, Vietnam cancelled flights to and from Wuhan, China, and set up health screenings at airports (Tuoi Tre News, 2020). A 14-day government-sanctioned quarantine was instituted for travelers on February 15th, and by March 22nd all foreign travelers were banned from entering Vietnam (Ministry of Foreign Affairs, 2020). Health declaration forms are mandatory upon entry into the country, and foreigners who arrived before March 1st were considered for an automatic extension of stay until August 31st. The country went into a limited national lockdown on April 1st, shutting down its borders, public transportation, and any gatherings of more than 10 people (Gardaworld, 2020; Shira et al., 2020; Sullivan, 2020; U.S. Embassy & Consulate in Vietnam, 2020). These measures were taken as a preventative strategy and likely contributed to Vietnam’s low number of positive cases and deaths. After March, domestic restrictions loosened and airlines resumed domestic travel until the recent evacuation and resuspension due to the outbreak in Da Nang.
The foreign travel ban, however, is still in effect in August, and citizens returning from abroad must abide by the mandatory government quarantine.

“Immediately after its first case on January 23rd, Vietnam cancelled flights to and from Wuhan, China and set up health screening at airports.”

iii. Isolation Centers

There are several sites that the Ministry of Health has listed as candidates for establishing a centralized isolation facility. These include army and police barracks, school dormitories, factories and enterprises, new and unused


Figure 5: Application logos for the NCOVI (left) and Bluezone (right) applications. Retrieved from https://apps.apple.com/us/app/ncovi/id1501934178 and https://play.google.com/store/apps/details?id=com.mic.bluezone

apartment buildings, hotels and resorts, schools, and communal health facilities (Ministry of Health, 2020).

“The Ministry of Health has outlined the system and protocols for how to carry out tracing, most of which rely on collaboration between individual citizens and local health department officials.”

Returning citizens and travelers are kept in centralized isolation facilities for 14 days, where they are given government-subsidized sustenance and accommodations. The Ministry of Health lists the requirements for isolation facilities: essential living conditions (electricity, water, bathrooms), ventilation, security and safety, fire prevention, and a location away from residential areas yet convenient for transportation and waste removal. Where possible, rooms are equipped with television or internet. In the facilities, people are isolated as much as possible but may receive packages from friends and loved ones provided they do not contain money or alcohol. The conditions are reported to be minimal but comfortable, with volunteer translators for foreigners and fences between bunk beds. The People’s Committees of each district or province are responsible for implementing these facilities, which are then staffed and guarded by health workers, the military, and law enforcement (Nguyen, K., 2020; Chau & Nguyen, Xan Q., 2020; Nguyen, S., 2020; Pearson, J. & Nguyen, P., 2020).

iv. School Closings

Hanoi was the first province to cancel in-person schooling in late January, after Vietnam’s first COVID-19 case on January 23rd. Other provinces and cities followed suit, and by February 16th all 63 provinces had opted to extend school closures. After either a three-month school break or three months of online learning, schoolchildren began to return to school with physical distancing


measures in place starting early May, with gradual reopening in phases (Insider, V., 2020; UNESCO, 2020; Saigoneer, 2020; Tuan, V. & Tung, M., 2020; Vu, K. & Nguyen, M., 2020).

Government Tracing

The government has resourced a comprehensive program for tracing positive cases. The Ministry of Health has outlined the system and protocols for carrying out tracing, most of which rely on collaboration between individual citizens and local health department officials. The system classifies cases on a scale from F0 to F5: F0 cases are infected patients, F1 cases are people who had direct contact with an F0 case, F2 cases have had contact with an F1 case, and so on down to F5 cases, which have had contact with an F4 case (Times, V., 2020). F5 cases are notified of possible exposure and asked to self-monitor, F1–F4 cases are isolated and quarantined in the home, and contacts of F0 cases are traced for 14 to 28 days. Each level of tracing calls for a different level of action, and the Vietnamese Ministry of Health has reported the following protocols for each. F0 individuals must be hospitalized, treated, and isolated to prevent further infection. F1 individuals must wear a mask immediately, notify the local county health department, isolate at a hospital, and notify F2 individuals. F2 individuals must wear a mask immediately, notify the local health department, follow isolation instructions from department staff, and notify F3 individuals. F3 individuals must wear a mask immediately,


notify the local health department, follow isolation instructions from department staff, and notify F4 individuals. F4 individuals must enact home isolation and notify the local health department. These protocols are documented on the Ministry of Health website, on the Vietnamese news site Vietnam+, and in work by Dinh et al. (2020). Vietnam has one of the most comprehensive tracing policies in the world. While parts of Europe and other countries are stepping up to trace as thoroughly as Vietnam, the United States still employs limited tracing measures despite being the current global epicenter.

Supplies, PPE, and Testing

i. Developing, Distributing, and Exporting Test Kits

Reuters and The Japan Times reported that the collaboration between the Vietnamese government, the medical supply company Viet A Corps, and the research facilities at the state-run Military Medical University (MMU) was key to Vietnam’s comprehensive testing and supply strategy. By late February, MMU and Viet A Corps had designed a mass-producible test kit for the coronavirus. The government issued a license, and by the end of March 250,000 kits had been distributed in Vietnam. The number of labs that could test for COVID-19 also increased, from 3 in January to 112 in April (Japan Times, 2020; Vu, K., Nguyen, P., & Pearson, 2020). By the end of April, the World Bank reported that Vietnam was conducting 967 tests for every positive case it found (Minh Le, S., 2020). The same week, Vietnam received the WHO’s seal of approval to start exporting tests. The WHO had sent Vietnam lab supplies two months earlier to start developing the kit, and since then Vietnam has developed two internationally used tests to avoid false negatives: COVID-19 antibody tests and polymerase chain reaction (PCR) tests, which detect the presence of the virus itself. These kits were researched and developed by multiple organizations and political bodies in Vietnam working together.
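Stepping back to the F0–F5 classification described in the tracing section above: the cascade amounts to a breadth-first walk of the contact graph outward from each confirmed case, with the required action determined by how many steps a person is from an F0 patient. The following is a minimal sketch of that idea; the function names, data structures, and example contact graph are illustrative assumptions, not the Ministry of Health's actual software.

```python
from collections import deque

# Required action per tracing level, paraphrased from the protocol above.
ACTIONS = {
    0: "hospitalize, treat, and isolate",
    1: "mask, notify local health department, isolate at hospital",
    2: "mask, notify local health department, follow staff isolation instructions",
    3: "mask, notify local health department, follow staff isolation instructions",
    4: "home isolation, notify local health department",
    5: "self-monitor after exposure notification",
}

def assign_levels(confirmed, contacts):
    """Breadth-first walk of the contact graph from confirmed (F0) cases.

    `contacts` maps a person to the people they had direct contact with.
    The first (lowest) level reached for a person is kept; tracing stops at F5.
    """
    levels = {person: 0 for person in confirmed}
    queue = deque(confirmed)
    while queue:
        person = queue.popleft()
        level = levels[person]
        if level == 5:  # the protocol does not trace beyond F5
            continue
        for contact in contacts.get(person, []):
            if contact not in levels:
                levels[contact] = level + 1
                queue.append(contact)
    return levels

# Hypothetical example: A is a confirmed case; B met A; C met B; D met C.
graph = {"A": ["B"], "B": ["C"], "C": ["D"]}
levels = assign_levels(["A"], graph)
# levels == {"A": 0, "B": 1, "C": 2, "D": 3}
```

Because the first level assigned to a person is the lowest one possible, someone who appears in several chains is always handled under the most severe applicable protocol.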
The government consulted the Vietnam Academy of Science and Technology (VAST), the consulting business IMM, and the Vietnamese University of Technology (UoT), which worked together to develop publicly funded test kits. Vietnam used a collaborative, top-down approach to the kits; IMM’s kit was developed with the military and mass-produced by Viet A Corps. Vietnam’s


kits have been tested and found to be on par with those distributed by the WHO and the CDC, and producers can make 10,000 kits per day (Klingler-Vidra, R., Tran, Ba L., & Uusikyla, I., 2020). By the beginning of May, the American news outlet Voice of America reported that Vietnam had received 20 orders for test kits from nations around the world (Voice of America, 2020).

ii. Manufacturing of Related Goods

Production of other pandemic-related supplies has also boosted Vietnam’s exports and economy. Amid the global economic downturn caused by the virus, Vietnam found a solution in producing face masks. The government converted numerous textile and clothing factories into face mask producers. Producing the masks is quick and efficient and does not require complicated imports of raw materials, and factories can produce thousands of masks per day for domestic use and export. By mid-April, the Ministry of Industry and Trade reported that 50 producers had the combined capacity to manufacture 8 million masks per day, or roughly 200 million masks per month (Dougn, D., 2020).

“With the global economic downturn caused by the virus, Vietnam found a solution in producing face masks.”

Technology and Media

i. Ministry of Health Website

Along with information from the CDC and WHO, Vietnamese citizens can access information and announcements from their own government through the Ministry of Health website. The site is updated daily with announcements on new cases and links to news regarding COVID-19. The homepage features a live database, which is crucial for citizens to stay up to date on infections and the status of different regions. There are additional tabs for articles on the latest COVID research, recommendations from the Ministry of Health on a variety of topics, and updates on industry support. There is also a section where people can submit questions, find health support locations, look up instructions, and take quizzes on various health protocols. The Ministry of Health has also created digital posters on subjects such as hygiene, social distancing measures, tracing protocol, ways to manage stress, testing protocols, and advice on maintaining good health during a pandemic. Information is centralized, organized, and easy to access. The live webpage has a plethora of information that connects citizens to their government directly through the Ministry of Health.


ii. Applications: NCOVI and Bluezone

“Military resources have been extensively utilized in the wake of the pandemic and have been supplemented by resources from police as well as preventative medicine.”

The Ministry of Health and the Ministry of Information and Communications launched the NCOVI application, a platform where users can find recommendations on COVID-19 protocols and submit voluntary health declarations. The application asks users to fill out contact information and family details and to take a health and disease survey. In March, the app reached the #1 spot in the Vietnamese Apple App Store, and the Google Play store shows that by August 13th, 41,293 people had downloaded the app. Many users have left comments describing the application as useful, informative, and accessible. On April 21st, the technology firm Bkav and the Ministry of Information and Communications launched an additional application called Bluezone. Bluezone uses Bluetooth to link smartphones within two meters of each other and notifies users if they have had contact with an infected individual in the last 14 days. 42,849 users had downloaded the app from Google Play as of August 13th (Nguyen, D., 2020). The benefit of the app lies in its efficiency: the government does not have to systematically collect information from people and dispatch valuable resources for tracing.

Leadership and Culture

A strong, centralized leadership was vital to Vietnam’s swift and effective response to COVID-19. The key players in Vietnam’s coordinated pandemic response were the Deputy Prime Minister, the military, and the Ministry of Health. These leaders covered a lot of ground in little time, which was exactly what was needed when COVID-19 surfaced in Vietnam back in January.

i. Deputy Prime Minister

Deputy Prime Minister Vu Duc Dam, also the Vice Chair of the Communist Party, is the first prong of government leadership in Vietnam’s response. Mr. Vu Duc Dam was delegated to lead the Ministry of Health after the previous MOH leader stepped down in 2019, and his tenure lasted through the first wave of the pandemic (Vietnam Investment Review, 2019). Despite not having a background in MOH leadership, Mr.
Vu Duc Dam has been praised for his role in the early and swift campaign against COVID-19. The government and the country, he said in the Hanoi Times, were prepared for the pandemic “before even the first infection,” presumably referring to the country’s earlier battles with SARS in 2003 and with MERS (Pham, 2020). Working with


scientists, leaders in the MOH, and other parts of the government, he secured the MOH’s position in spearheading campaigns for tracing, quarantine, and health resources for those who needed them. As of July 7th, his post in the MOH has been delegated to Mr. Nguyen Thanh Long, a former MOH Vice Minister (Linen, 2020).

ii. Military Leadership

Military resources have been extensively utilized in the wake of the pandemic and have been supplemented by resources from the police as well as preventative medicine officials from the MOH. Local and federal troops have maintained isolation centers since their creation, the military research center MMU was central to developing the country’s globally used testing kit, and recent reports state that military student volunteers were mobilized to carry out contact tracing protocols and collect samples in Da Nang (Vu, K., Nguyen, P., 2020). In a speech translated by the Vietnam Law and Legal Forum back in March 2020, Commander-in-Chief and Secretary of the Central Military Commission Nguyễn Phú Trọng expressed that “each citizen must be a soldier in the battlefield against the disease,” advocating for the cooperation of all levels of leadership and community in the fight against COVID-19. The diversity of the military’s involvement showcases the extent to which its resources can be utilized across sectors, from technological research to facility management.

iii. The Ministry of Health

Simply put, the Ministry of Health had been preparing for a pandemic long before COVID-19. In 1961, the Direction of Healthcare Activities (DOHA) was established by Vietnamese leader Ho Chi Minh in order to facilitate central communication and guidance between higher-tier administrative healthcare and lower-tier hospital care. A report published in the Journal of Environmental Health and Preventive Medicine describes the DOHA’s two missions: “1.
To build a sound collaboration network and support system among health facilities, particularly those at higher and lower levels, to help ensure equity of health and deliver quality healthcare services to all Vietnamese people. 2. To address the burden of too many patients in higher level centers. This means supporting improvements in the quality


of healthcare services provided at lower levels, particularly training and technical skills transfer activities to improve trust and respond to social demands.” (Takashimi et al., 2017) Dr. Lily Hue, a doctor at the central Bach Mai hospital, described how this initiative has established “a strong preventative medicine sector from [a] central level...down to [a] provincial, district, and communal level.” Hue believes it was thanks to this effective communication between central hospitals like Bach Mai and smaller hospitals that Vietnam’s healthcare network was able to act quickly and effectively under the leadership of the MOH to respond to COVID-19. Within the MOH, leadership prior to Mr. Vu Duc Dam and Mr. Nguyen Thanh Long had also prepared the country for an effective pandemic response. Dr. Nguyen Thi Kim Tien served as the Minister of Health for 8 years before stepping down in 2019, and during her time as minister she raised the country’s capacity for disease prevention by investing in the healthcare sector. Although the price increases she put in place were controversial and eventually led to her dismissal, she brought innovative technology into hospitals, raised healthcare worker salaries, and provided patients with amenities like expanded hospital rooms and furnished waiting rooms (Tuổi Trẻ Online, 2019). The country’s response would not have been possible without these measures implemented by Dr. Tien before leaving her post. Although no official successor had yet been appointed, the Ministry of Health, led temporarily by Deputy Prime Minister Vu Duc Dam, was able to implement the health screenings, tracing, and patient treatment described earlier in this report during the first wave of COVID-19.

Vietnam: A Model for Pandemic Response?

The United States, as well as other countries around the world and the global scientific community, has a lot to learn from Vietnam and its response to COVID-19. First, Vietnam had built up a thorough pandemic-preparedness framework through previous outbreaks of infectious disease such as SARS in 2003. Vietnam also employed a preventative approach to COVID-19 that required effective coordination among government, science, technology, military resources, and the Ministry of Health. Vietnam serves as a model for the U.S. and others to build a pandemic response


plan that serves each country’s needs and best utilizes its resources. The key takeaways from the Vietnamese response are a review and improvement of previous pandemic response tactics, early and enforced quarantine, effective testing and technology, and an organized leadership team. Of course, the citizens of Vietnam also played a major role in the success of these strategies. In the end, as Dr. Thanh Quang of the Vietnamese National Children’s Hospital put it, “everyone, from a 5 [year] old to a 90 [year] old, [it didn’t] matter what social-economical background, understood what COVID-19 [is] and how dangerous it is.” Without the compliance and understanding of its citizens, none of what Vietnam did would have been possible. An informed public, organized leadership, and effective strategies are vital to an effective pandemic response, and Vietnam had them all.

“Hue believes it was thanks to this effective communication between central hospitals like Bach Mai and smaller hospitals that Vietnam's healthcare network was able to act quickly and effectively under the leadership of the MOH to respond to COVID-19.”

References

After aggressive mass testing, Vietnam says it contains coronavirus outbreak. (2020, April 30). Reuters. https://www.reuters.com/article/us-health-coronavirus-vietnam-fight-insiidUSKBN22B34H

A line runs through it: Vietnam and China complete boundary marking process. (n.d.). Retrieved August 25, 2020, from https://vietnamlawmagazine.vn/a-line-runsthrough-it-vietnam-and-china-complete-boundary-markingprocess-3227.html

Salmon, A. (2020, June 15). Inside Korea’s low-cost, high-tech Covid-19 strategy. Asia Times. https://asiatimes.com/2020/06/the-secrets-behind-south-koreas-covid-19success/

Archived: WHO Timeline – COVID-19. (n.d.). Retrieved August 25, 2020, from https://www.who.int/news-room/detail/2704-2020-who-timeline---covid-19

BaiViet—Foreigners residing in provinces and cities ... (n.d.). Retrieved May 18, 2020, from https://lanhsuvietnam.gov.vn/Lists/BaiViet/B%C3%A0i%20vi%E1%BA%BFt/DispForm.aspx?List=dc7c7d75%2D6a32%2D4215%2Dafeb%2D47d4bee70eee&ID=1007

Bộ trưởng Bộ Y tế Nguyễn Thị Kim Tiến: “Tôi cảm ơn những lời chỉ trích” [Minister of Health Nguyen Thi Kim Tien: “I am thankful for the criticism”]. (n.d.). Tuổi Trẻ Online. Retrieved August 13, 2020, from https://tuoitre.vn/bo-truong-bo-y-te-nguyen-thi-kimtien-toi-cam-on-nhung-loi-chi-trich-20191121083316875.htm

CDC. (2020, August 13). Coronavirus Disease 2019 (COVID-19) in the U.S. Centers for Disease Control and Prevention. https://www.cdc.gov/coronavirus/2019-ncov/cases-updates/casesin-us.html

CDC SARS Response Timeline | About | CDC. (2018, July 18). https://www.cdc.gov/about/history/sars/timeline.htm

Chau, Mai N., & Nguyen, Xan Q. (2020, March 19). Vietnam Military Increasing Isolation Housing to 60,000 Beds.


Bloomberg. https://www.bloomberg.com/news/articles/2020-03-19/vietnam-is-increasing-quarantinecapacity-to-house-60-000-people

Minh Le, S. (2020). Containing the coronavirus (COVID-19): Lessons from Vietnam. Retrieved May 20, 2020, from https://blogs.worldbank.org/health/containing-coronaviruscovid-19-lessons-vietnam

Coronavirus Disease (COVID-19) Situation Reports. (n.d.). Retrieved August 25, 2020, from https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situationreports

COVID-19 Information. (n.d.). U.S. Embassy & Consulate in Vietnam. Retrieved May 18, 2020, from https://vn.usembassy.gov/u-s-citizen-services/covid-19-information/

Dinh, L., Dinh, P., Nguyen, P. D. M., Nguyen, D. H. N., & Hoang, T. (2020). Vietnam’s response to COVID-19: Prompt and proactive actions. Journal of Travel Medicine, 27(3). https://doi.org/10.1093/jtm/taaa047

Ministry of Health and Welfare, South Korea. (n.d.). Coronavirus disease 19 (COVID-19). Retrieved August 25, 2020, from http://ncov.mohw.go.kr/en/

Dougn, D. (2020, April 15). Coronavirus offers opportunity for face mask production business. Vietnam Insider. https://vietnaminsider.vn/coronavirus-offers-opportunity-for-facemask-production-business/

Executive hazardous infectious disease. (n.d.). Retrieved August 25, 2020, from http://vncdc.gov.vn/vi/danh-muc-benh-truyen-nhiem/1082/benh-viem-duong-hohap-cap-nang-do-vi-rut

GDP per capita (current US$)—Singapore, Taiwan, China, Hong Kong SAR, China, Korea, Rep. | Data. (n.d.). Retrieved May 19, 2020, from https://data.worldbank.org/indicator/ny.gdp.pcap.cd?locations=sg-tw-hk-kr&name_desc=false

Hahn, R. (2020, April 11).
COVID-19, From F0 to F5, Government Centralized Quarantine and Cohort Quarantine Guide A to Z in…. Medium. https://medium.com/@rachelhahn/covid-19from-f0-to-f5-government-centralized-quarantine-and-cohortquarantine-guide-a-to-z-in-2fc08d0a3f7a Hồi ức 45 ngày kinh hoàng chống dịch SARS. (n.d.-a). Retrieved August 25, 2020, from http://benhnhietdoi.vn/tin-tuc/chi-tiet/ hoi-uc-45-ngay-kinh-hoang-chong-dich-sars/220 Hồi ức 45 ngày kinh hoàng chống dịch SARS. (n.d.-b). Retrieved August 25, 2020, from http://benhnhietdoi.vn/tin-tuc/chi-tiet/ hoi-uc-45-ngay-kinh-hoang-chong-dich-sars/220 Hörmansdorfer, S., Campe, H., & Sing, A. (2008). SARS – Pandemie und Emerging Disease. Journal Für Verbraucherschutz Und Lebensmittelsicherheit, 3(4), 417–420. https://doi.org/10.1007/s00003-008-0374-0 https://plus.google.com/+UNESCO. (2020, March 4). COVID-19 Educational Disruption and Response. UNESCO. https:// en.unesco.org/covid19/educationresponse


Insider, V. (2020, February 14). Hanoi extends school closure for another week over coronavirus concerns. Vietnam Insider. https://vietnaminsider.vn/hanoi-extends-school-closure-foranother-week-over-coronavirus-concerns/ In the 15 years of the SARS pandemic, the horror has not yet faded. (n.d.). Retrieved August 25, 2020, from https:// vnexpress.net/15-nam-dai-dich-sars-noi-kinh-hoang-chuaphai-3723214.html Klingler-Vidra, R., Tran, Ba L., & Uusikyla, I. (April 9, 2020) Testing Capacity: State Capacity and COVID-19 Testing | Global Policy Journal. (n.d.). Retrieved May 20, 2020, from https://www.globalpolicyjournal.com/blog/09/04/2020/ testing-capacity-state-capacity-and-covid-19-testing Linen, T. T. (2020, July 7). Ông Nguyễn Thanh Long làm quyền Bộ trưởng Bộ Y tế. TUOI TRE ONLINE. https://tuoitre.vn/news20200706161424781.htm NCOVI - Ứng dụng trên Google Play. (n.d.). Retrieved May 27, 2020, from https://play.google.com/store/apps/ details?id=com.vnptit.innovation.ncovi&hl=vi Newsletter translated COVID-19 in the last 24 hours: Stop social isolation, people still have to wear masks when going out and keeping distance—Details—Ministry of Health— Newsletter of acute respiratory infections COVID- 19. (n.d.). Retrieved August 25, 2020, from https://ncov.moh.gov.vn/-/ ban-tin-dich-covid-19-trong-24h-qua-ngung-cach-ly-xahoi-nguoi-dan-van-phai-eo-khau-trang-khi-ra-ngoai-va-giukhoang-cach Nguyen, D. (2020, April 21). Vietnam launches Covid-19 contact tracing app. Vietnam Insider. https://vietnaminsider. vn/vietnam-launches-covid-19-contact-tracing-app/ Nguyen, K. (April 6, 2020). Quarantined In Vietnam: Scenes From Inside A Center For Returning Citizens. (n.d.). NPR.Org. Retrieved May 19, 2020, from https://www.npr.org/sections/ pictureshow/2020/04/06/823963731/quarantined-invietnam-scenes-from-inside-a-center-for-returning-citizens Nguyen, S. (March 24, 2020) Coronavirus: Life inside Vietnam’s army-run quarantine camps. South China Morning Post. 
https://www.scmp.com/week-asia/health-environment/ article/3076734/coronavirus-life-inside-vietnams-army-runquarantine Outbreak of Severe Acute Respiratory Syndrome— Worldwide, 2003. (n.d.). Retrieved August 25, 2020, from https://www.cdc.gov/mmwr/preview/mmwrhtml/ mm5211a5.htm Pearson, J. & Nguyen, P. (March 6, 2020). Vietnam quarantines tens of thousands in camps amid vigorous attack on coronavirus. (2020, March 26). Reuters. https://www.reuters. com/article/us-health-coronavirus-vietnam-quarantineidUSKBN21D0ZU Pham, L. Why does Vietnam gain international praise for fight against Covid-19? (n.d.). Hanoitimes.Vn. Retrieved August 25, 2020, from http://hanoitimes.vn/why-does-vietnam-gaininternational-praise-for-fight-against-covid-19-311680.html SARS Reference | SARS Timeline. (n.d.). Retrieved August 25, 2020, from http://sarsreference.com/sarsref/timeline.htm Severe Acute Respiratory Syndrome (SARS): The International Response. (n.d.). Retrieved August 25, 2020, from https:// www.everycrsreport.com/reports/RL32072.html


Shira et al., (April 15, 2020). COVID-19 in Vietnam: Travel Updates and Restrictions. Vietnam Briefing News. https:// www.vietnam-briefing.com/news/covid-19-vietnam-travelupdates-restrictions.html/ Vu, K., Nguyen, P., Pearson, J. (2020, May 1). After mass testing, Vietnam says coronavirus outbreak contained. The Japan Times. https://www.japantimes.co.jp/news/2020/05/01/asiapacific/vietnam-coronavirus-outbreak-contained/ Sullivan, M., (April 16, 2020). In Vietnam, There Have Been Fewer Than 300 COVID-19 Cases And No Deaths. Here’s Why. (n.d.). NPR.Org. Retrieved May 20, 2020, from https://www.npr.org/sections/coronavirus-liveupdates/2020/04/16/835748673/in-vietnam-there-havebeen-fewer-than-300-covid-19-cases-and-no-deaths-hereswhy Takashima, K., Wada, K., Tra, T. T., & Smith, D. R. (2017). A review of Vietnam’s healthcare reform through the Direction of Healthcare Activities (DOHA). Environmental Health and Preventive Medicine, 22(1), 74. https://doi.org/10.1186/ s12199-017-0682-z Telephonic Briefing with Dr. Barbara Marston, CDC COVID-19 International Task Force Lead; and Dr. John MacArthur, CDC Thailand Country Director. (n.d.). United States Department of State. Retrieved August 25, 2020, from https://www.state. gov/telephonic-briefing-with-dr-barbara-marston-cdc-covid19-international-task-force-lead-and-dr-john-macarthur-cdcthailand-country-director/ Terhune, C., Levine, D., Jin, H., & Lee, J.L., Special Report: How Korea trounced U.S. in race to test people for coronavirus. (2020, March 18). Reuters. https://www.reuters.com/article/ us-health-coronavirus-testing-specialrep-idUSKBN2153BW The Ministry of Health issues guidelines on medical isolation at concentrated COVID-19 disease isolation facilities — Ministry of Health—COVID-19 acute respiratory disease epidemic page. (March 14, 2020).(n.d.). Retrieved May 20, 2020, from https://ncov.moh.gov.vn/web/guest/-/bo-y-teban-hanh-huong-dan-cach-ly-y-te-tai-co-so-cach-ly-taptrung-phong-chong-dich-covid-19 Times, V. 
(2020, March 16). Prevention of Covid 19: Vietnamese Customs continues giving directives about exporting face masks. Vietnam Times. https://vietnamtimes. org.vn/prevention-of-covid-19-vietnamese-customscontinues-giving-directives-about-exporting-facemasks-18450.html Top leader calls for solidarity against COVID-19. (n.d.). Retrieved August 13, 2020, from http://vietnamlawmagazine. vn/top-leader-calls-for-solidarity-against-covid-19-27105. html TRANG TIN VỀ DỊCH BỆNH VIÊM ĐƯỜNG HÔ HẤP CẤP COVID-19—Bộ Y tế—Trang tin về dịch bệnh viêm đường hô hấp cấp COVID-19. (n.d.). Retrieved August 25, 2020, from https://ncov.moh.gov.vn/ Tuan, V. & Tung, M. (April 11, 2020) VnExpress. (n.d.). Vietnam schools set to reopen in June after four-month break— VnExpress International. VnExpress International – Latest News, Business, Travel and Analysis from Vietnam. Retrieved May 18, 2020, from https://e.vnexpress.net/news/news/ vietnam-schools-set-to-reopen-in-june-after-four-monthbreak-4083091.html Vietnam: Government issues new coronavirus-related


travel restrictions February 15 /update 8. (n.d.). GardaWorld. Retrieved May 18, 2020, from https://www.garda.com/ crisis24/news-alerts/314431/vietnam-government-issuesnew-coronavirus-related-travel-restrictions-february-15update-8 Vietnam aviation authority ceases all flights to and from coronavirus-stricken Wuhan. (n.d.). Tuoi Tre News. Retrieved May 18, 2020, from http://tuoitrenews.vn/news/ business/20200124/vietnam-aviation-authority-ceases-allflights-to-and-from-coronavirusstricken-wuhan/52707.html Vietnam Business Operations and the Coronavirus: Updates. (2020, August 13). Vietnam Briefing News. https://www. vietnam-briefing.com/news/vietnam-business-operationsand-the-coronavirus-updates.html/ Vietnam Business Operations and the Coronavirus: Updates. (2020, August 13). Vietnam Briefing News. https://www. vietnam-briefing.com/news/vietnam-business-operationsand-the-coronavirus-updates.html/ Vietnam Continues Nationwide School Shutdown Due to Covid-19 | Saigoneer. (February 16, 2020) (n.d.). Retrieved May 18, 2020, from https://saigoneer.com/saigon-health/18327vietnam-continues-nationwide-school-shutdown-due-tocovid-19 VietnamPlus. (2020, March 9). [Infographics] Phân loại cách ly người nhiễm, nghi nhiễm COVID-19 | Sức khỏe | Vietnam+ (VietnamPlus). VietnamPlus. https://www.vietnamplus.vn/ infographics-phan-loai-cach-ly-nguoi-nhiem-nghi-nhiemcovid19/627447.vnp Vietnam Poised to Export COVID-19 Test Kits | Voice of America—English. (April 30, 2020) (n.d.). Retrieved May 20, 2020, from https://www.voanews.com/covid-19-pandemic/ vietnam-poised-export-covid-19-test-kits Vietnam Population 2020 (Demographics, Maps, Graphs). (n.d.). Retrieved August 25, 2020, from https:// worldpopulationreview.com/countries/vietnam-population VIR VIR-. DPM Vu Duc Dam appointed as Secretary of MoH Party Affairs Committee. Vietnam Investment Review - VIR. Published October 15, 2019. Accessed August 25, 2020. 
https://www.vir.com.vn/dpm-vu-duc-dam-appointed-assecretary-of-moh-party-affairs-committee-71144.html Vu, K., Nguyen, P., Vietnam says origin of Danang outbreak hard to track as virus cases rise. (2020, August 2). Reuters. https://www.reuters.com/article/us-health-coronavirusvietnam-idUSKBN24Y0CL Vu, K. & Nguyen, M. (May 11, 2020) Vietnam reopens schools after easing coronavirus curbs. Reuters. https://www. reuters.com/article/us-health-coronavirus-vietnam-schoolsidUSKBN22N0QB Vượt qua “tử thần” SARS -Kỳ 2: Trong “tâm bão” SARS - Tuổi Trẻ Online. (n.d.). Retrieved August 25, 2020, from https:// tuoitre.vn/vuot-qua-tu-than-sars-ky-2-trong-tam-baosars-20200131102513948.htm WHO | Cumulative Number of Reported Probable Cases of SARS. (n.d.). WHO; World Health Organization. Retrieved August 25, 2020, from https://www.who.int/csr/sars/ country/2003_07_11/en/ WHO | Viet Nam SARS-Free. (n.d.). WHO; World Health Organization. Retrieved August 25, 2020, from https://www. who.int/mediacentre/news/releases/2003/pr_sars/en/


The Role of Ocean Currents and Local Wind Patterns in Determining Onshore Trash Accumulation on Little Cayman Island

STAFF WRITERS: BEN SCHELLING ’21, MAXWELL BOND ’20, SARAH JENNEWEIN ’21, SHANNON SARTAIN ’21
TAs: MELISSA DESIERVO, CLARE DOHERTY | FACULTY EDITOR: CELIA CHEN

Cover Image: A rainbow assortment of plastic accumulated on the beach. Source: Needpix.com


Abstract

Each year, humans deposit billions of pounds of plastic and other trash into the ocean. These ocean plastics are distributed worldwide and pose a significant threat to marine ecosystems. The Caribbean Islands as a whole are the largest plastic polluter per capita, and the interconnected nature of the Caribbean Sea promotes trash transport among all of the islands. We quantified the onshore accumulation of plastics and other trash over four days at four sandy beaches on Little Cayman Island (two on the north side of the island and two on the south) to determine whether the deposition of trash on the island is driven by ocean currents or local wind patterns. Though previous research suggests that major ocean currents play a considerable role in the accumulation of trash on beaches, other studies have found that powerful local winds can overcome the influence of ocean currents. On Little Cayman Island, where ocean

currents come from the southeast, we might expect to see higher trash accumulation rates at sites on the southern side of the island. Alternatively, if local wind patterns have a greater influence than do ocean currents on where trash is deposited, we would expect to see trash accumulation variation on each side of the island depending on local wind patterns. We found that more southerly wind increases trash accumulation rate on the south side of the island and more northerly wind increases trash accumulation rate on the north side of the island. Based on our findings, we recommend focusing beach cleanup efforts for Little Cayman on recent wind patterns. Key Words: ocean, currents, wind, plastic, trash, Little Cayman Island, onshore accumulation


Figure 1: Sites chosen for trash surveys on Little Cayman Island. Image from Google Earth.

Introduction

Plastic has become pervasive in every aspect of our lives. Each year, humans deposit over five million tonnes of plastic and other trash into the ocean, which later washes up on beaches or converges in oceanic gyres (Jambeck et al., 2015). Plastic waste has even been found in the guts of amphipods in the deepest parts of the sea (Jamieson et al., 2019). The widespread distribution of plastic and other trash is not without consequences. Oceanic plastic pollution poses a huge threat to marine ecosystems. Seabirds, turtles, seals, and other wildlife are dying at alarming rates from ingesting plastic or getting tangled in plastic products (Laist, 1997). A large portion of these plastics and other trash has accumulated in the five major oceanic gyres. The largest of these is the Great Pacific Garbage Patch, which is located between the coast of California and Hawaii and contains an estimated 79 thousand tonnes of plastic brought there by ocean currents (Lebreton et al., 2018). In addition to accumulating in oceanic gyres, plastic also washes up on shorelines around the world. In the Caribbean islands, which produce the most plastic pollution per capita (Ritchie & Roser, 2020), plastic waste is easily transported between islands due to the connected nature of the Caribbean Sea. The once pristine beaches in the Caribbean islands now have substantial amounts of plastic and other trash, alarming residents and visitors, harming wildlife, and potentially negatively


affecting the vital tourism industry. One area that receives large amounts of plastic and trash from the other Caribbean islands is Little Cayman Island. Though environmental activists on Little Cayman are advocating for a single-use plastics ban on the island (Young, 2020), the issue of onshore trash accumulation still persists. Researchers from the Central Caribbean Marine Institute on Little Cayman believe that most of the plastic and other trash that washes onto the island is brought primarily to the southeastern faces of the island from Haiti, Jamaica, and the Dominican Republic by major ocean currents (personal communication, L. Forbes). In response to this accumulation, there have been consistent grassroots efforts to remove the tonnes of plastic and other trash that wash up onto the beaches of Little Cayman. We aimed to develop baselines for trash accumulation to determine where and when to focus local cleaning efforts. We examined the role of ocean currents and local wind patterns in determining where and how much trash accumulates by quantifying the trash accumulation rate on various parts of the island. The Caribbean current, similar to that which created the Great Pacific Garbage Patch, is relatively constant throughout the year, and likely plays a major role in the accumulation of trash in the ocean. However, the local prevailing winds on the island change seasonally, and in the winter, notably, vary in magnitude and direction on short time scales (Burton, 1994). Studies have shown that strong, persistent local winds can overcome the influence of prevailing climate patterns on trash accumulation,

“Researchers from the Central Caribbean Marine Institute on Little Cayman believe that most of the plastic and other trash that washes onto the island is brought primarily to the southeastern faces of the island from Haiti, Jamaica, and the Dominican Republic by major ocean currents.”


Figure 2: Composition of total trash items collected, separated into plastic, Styrofoam, glass, and other. Created in Excel by the authors.

affecting or even reversing them (Swanson & Zimmer, 1990). If ocean currents are the primary determinant of trash deposition, we would expect more onshore trash accumulation on the southern side of the island regardless of wind direction, due to the southeastern current hitting Little Cayman. However, if daily wind changes drive trash deposition, we would expect no overall difference in trash accumulation between sites, but instead day-to-day differences at each site depending on wind direction.

Methods

Observational Design

"We studied trash abundance at four beaches on Little Cayman Island, two on the south side of the island: the Department of Environment and Nighthawk; and two on the north side of the island: Bloody Bay and Cumber’s."

We studied trash abundance at four beaches on Little Cayman Island, two on the south side of the island: the Department of Environment and Nighthawk; and two on the north side of the island: Bloody Bay and Cumber’s (Figure 1). The beach substrate can have a large influence on the quantity and composition of trash accumulation (Pace, 1996). To normalize for these variations, we conducted all studies on sandy beaches. On the first day of our study (considered “day zero”), we cleared all trash present along three ten-meter transects at each of the four sites. Transects ran parallel to the waterline, two meters above the waterline, and were four meters wide. Specifically, we removed trash visible to the naked eye. Then, for four subsequent days, we counted and collected new trash pieces accumulated at each transect (N=48). In the lab, we categorized the trash we picked

up each day as either plastic, glass, Styrofoam, or “other.” We also counted the number of bottles, bottle caps, and plastic bags. Because pieces we picked up may have broken apart while being transported to the lab, the number of trash pieces counted there was sometimes greater than our counts in the field; we therefore used our field values for analyses. To estimate the number of microplastics present at each site, we took 10 cm cores (N=32) and used two- and five-mm sieves to separate and count microplastics between these sizes. Microplastics smaller than two mm were the same size as sand grains or smaller and therefore could not be sieved or identified easily. We took three cores per site, each in the middle of a transect, two meters up from the waterline. We also took five cores per site, four meters up from the waterline, to avoid the area where microplastics would be washed away by waves. We used wind data taken every five minutes from Tropical Runaway Station ICAYMANB3, located on the west end of Cayman Brac (Tropical Runaway, 2020). Ocean current direction was determined from historical hydrographic surveys (Roemmich, 1981).

Statistical Analyses

To analyze the differences in daily trash accumulation between the sides of the island and between days, we performed an ANOVA on log(1+x) transformed data. To test the significance of the different sites on trash accumulation, we nested site within side of the island. For each period between trash collections, we found the average north-south and east-west components of wind. For each site, we related the number of new pieces of trash, averaged across transects, to the average wind values since the previous collection. To test how this daily trash accumulation rate was affected by side of the island and by each wind component, we made two general linear models, one for the north-south component and one for the east-west component, again nesting site within side of the island.

Figure 3: New trash items per day for north and south side sites. Values are per 10 meters. The six transects from the north side sites (three from each of Bloody Bay and Cumber’s) are shown in blue, and the six transects from the south side sites (three from each of the Department of Environment and Nighthawk) are shown in red. Wind vectors represent the average wind, found by averaging the wind components over the day leading up to the time of the survey. They are shown in miles per hour, with the length and angle of the arrow corresponding to the speed and direction of the wind, respectively. Mean ± SE. Image created in JMP 14.0 by authors.

Results

We found 1,773 pieces of trash in total. Of these, 1,424 (80 percent) were plastic, 275 (16 percent) were Styrofoam, 61 (3 percent) were glass, and 15 (1 percent) were “other” (Figure 2). Of the plastic pieces, 523 (36 percent) were below 1 centimeter in size. In addition, we found 78 bottle caps, 21 straws, and 144 pieces of plastic packaging, including items such as plastic bags. The average accumulation rate per meter per day was 2.5 pieces for the Department of Environment, 1.6 pieces for Bloody Bay, 3.4 pieces for Nighthawk, and 3.6 pieces for Cumber’s. On average at our sites, 2.7 pieces of trash are deposited per meter of shoreline per day. In total, we collected 2.01 kg of trash. On average at our sites, 1.95 g accumulates per meter of shoreline per day. The total pieces of new trash per day did not differ between days (ANOVA: F3,11=2.68, p=0.06; Figure 3) nor between sides of the island (ANOVA: F1,11=3.77, p=0.06). The interaction between day and side of the island was significant (ANOVA: F3,11=21.80, p<0.0001). Site had no influence on trash accumulation within the sides of the island (ANOVA: F2,11=0.11, p=0.89). The east-west component of wind had no effect on onshore trash accumulation on either side

Figure 4: Linear relationships between north-south wind component and trash accumulation rate, by side of the island. Points represent average trash accumulation at one site for one day, located either on the north or south side of the island. Negative wind component values correspond to northerly wind, and positive values correspond to southerly wind. Image created in JMP 14.0 by authors.
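The per-side relationships in Figure 4 amount to two simple linear regressions of accumulation rate on the north-south wind component. The sketch below uses made-up illustration data (not the study's measurements) and `numpy.polyfit` in place of the JMP linear models:

```python
import numpy as np

# Hypothetical (north-south wind component, new trash pieces per 10 m) pairs;
# positive wind values denote southerly wind, negative values northerly wind,
# matching the sign convention in the Figure 4 caption.
data_by_side = {
    "south": np.array([[-4.0, 12.0], [-1.0, 20.0], [2.0, 35.0], [5.0, 48.0]]),
    "north": np.array([[-4.0, 44.0], [-1.0, 30.0], [2.0, 18.0], [5.0, 9.0]]),
}

slopes = {}
for side, data in data_by_side.items():
    # Ordinary least-squares line: accumulation = slope * wind + intercept
    slope, intercept = np.polyfit(data[:, 0], data[:, 1], deg=1)
    slopes[side] = slope
    print(f"{side} side: slope = {slope:+.2f} pieces per mph of southerly wind")
```

With these invented points, the fitted slope comes out positive for the south side and negative for the north side, mirroring the direction of the reported effect.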

"On average at our sites, 2.7 pieces of trash are deposited per meter of shoreline per day. In total, we collected 2.01 kg of trash. On average at our sites, 1.95 g accumulates per meter of shoreline per day."

Supplemental Figure 1: Trash accumulation over the course of our study. Values are per 10 meters. Blue lines represent sites on the north side of the island and red lines represent sites on the south side of the island. Wind vectors are the same as those in Figure 3. Mean ± SE. Image created in JMP 14.0 by authors.
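The wind vectors in Figure 3 and Supplemental Figure 1 are vector averages of the station observations, found by averaging the wind components. A minimal sketch of that computation (the observations below are hypothetical, and the meteorological from-direction convention is assumed):

```python
import math

def wind_components(speed, direction_deg):
    """Split a wind observation into east-west (u) and north-south (v) parts.
    direction_deg is the compass heading the wind blows FROM (meteorological
    convention), so wind from the south (180 degrees) gives a positive v."""
    rad = math.radians(direction_deg)
    u = -speed * math.sin(rad)  # positive u: wind blowing toward the east
    v = -speed * math.cos(rad)  # positive v: wind blowing toward the north
    return u, v

def average_wind(observations):
    """Vector-average (speed, direction) observations by averaging components.
    Averaging components, rather than raw headings, avoids the 0/360-degree
    wrap-around problem and lets opposing winds cancel out."""
    us, vs = zip(*(wind_components(s, d) for s, d in observations))
    return sum(us) / len(us), sum(vs) / len(vs)

# Equal winds from the east (90 degrees) and west (270 degrees) cancel:
u, v = average_wind([(10.0, 90.0), (10.0, 270.0)])
print(round(u, 6), round(v, 6))  # both near zero
```

The sign convention matches the Figure 4 caption: a positive north-south (v) component corresponds to southerly wind.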



of the island (general linear model: F1,11=0.92, p=0.36). Trash accumulation on the north side of the island increased with northerly winds, and trash accumulation on the south increased with southerly winds (general linear model: F1,11=17.40, p=0.0019; Figure 4). Site nested within the side of the island had no significant effect (general linear model: F2,11=0.88, p=0.44). The average wind speed during our study was 6.1 mph, and the average wind direction was ESE. Wind speeds ranged from 0 to 16 mph, with directions ranging from northwest to south. This is weaker than the average wind for Little Cayman: 13 mph out of the east. No microplastics were found in any of the core samples.

"On the time scale of our study, trash accumulation on Little Cayman Island was primarily driven by wind strength and direction."

Discussion

On the time scale of our study, trash accumulation on Little Cayman Island was primarily driven by wind strength and direction. Because trash accumulation did not differ between sides of the island, we can conclude that ocean currents had little influence on trash amounts relative to wind during the course of our study. Additionally, because the amount of trash accumulated did not differ across days over which there was varying wind, we can conclude that wind did not influence overall trash amounts. However, the amount of trash accumulated on one side of the island is dependent on the day, and therefore on the direction and strength of the wind. Unfortunately, though we did not find any microplastics in the sand, we cannot conclude that there are no microplastics on Little Cayman Island; the absence was likely due to a sampling error. The influence of wind direction and magnitude on trash accumulation was also evident in examining trash accumulation over the course of this study (Supplemental Figure 1). On March 6th (Day 3), the leveling off in cumulative trash amounts was concurrent with the small wind vector on that day. If our calculated rates of 2.7 pieces of trash and 1.95 g per meter per day are representative of all of Little Cayman Island, we could expect almost 100,000 pieces and 74.4 kg of trash to be deposited each day over the roughly 37 kilometers of shoreline of Little Cayman. This piece-accumulation rate is much higher than the results of a similar study, which found an average of 0.0034 pieces per meter per day of anthropogenic debris stranding on Atlantic Ocean shores (Barnes & Milner, 2005). It is possible that these rates are not representative of all shoreline on Little Cayman.
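The island-wide extrapolation above follows from simple arithmetic on numbers stated in the text (the 37 km shoreline, the 2.7 pieces/m/day rate, and the Barnes & Milner comparison rate); a quick check, not part of the original analysis:

```python
# Back-of-envelope extrapolation, assuming the four sandy beaches are
# representative of all of Little Cayman's shoreline (a strong assumption).
shoreline_m = 37_000             # roughly 37 km of shoreline
pieces_per_m_per_day = 2.7       # average rate across our sites
atlantic_rate = 0.0034           # pieces/m/day on Atlantic shores (Barnes & Milner, 2005)

island_pieces_per_day = pieces_per_m_per_day * shoreline_m
ratio = pieces_per_m_per_day / atlantic_rate

print(f"~{island_pieces_per_day:,.0f} pieces per day island-wide")
print(f"~{ratio:,.0f}x the reported Atlantic stranding rate")
```

Multiplying the rounded mass rate (1.95 g/m/day) by the same shoreline length gives roughly 72 kg/day, close to the 74.4 kg quoted in the text, which was presumably computed from unrounded per-site rates.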


Because we normalized for substrate by specifically choosing sandy beaches, we did not capture the diverse shorelines of the island, which include rocky substrates, mangrove forests, and other vegetated areas. Additionally, our samples were not evenly spaced around the island; Bloody Bay, Cumber’s, and the Department of Environment are all located on the west side of the island, while Nighthawk is on the east side. A future study could choose sites more equally distributed around the perimeter of the island to learn more about how wind influences this entire area. Our sample period also may not represent how trash accumulates all year on Little Cayman. In winter, the winds are more variable than they are in the summer. Given our results, these seasonal wind patterns could influence seasonal trash accumulation patterns. Future studies should survey during different seasons and for longer periods of time. This will help reveal the effects of ocean currents on trash accumulation over longer timescales. Nevertheless, given the magnitude of trash accumulation on Little Cayman, a strong contingent of conservationists gathers weekly to clean up local beaches. Though these efforts are already very successful at cleaning affected areas, studies such as ours can help inform these practices to be as efficient as possible because they can predict where trash will accumulate on a short time scale. In the future, trash cleanups on Little Cayman should focus on areas that have been receiving stronger winds. Onshore accumulation studies are also important for understanding the concentration and movement of trash around the world’s oceans. Mid-ocean trash concentrations steadily increased from 1960 to 1990; from 1990 to 2010, however, there was no trend despite increases in human trash production (Law et al., 2010). This suggests that the sinks and sources of marine plastic are poorly understood.
By studying onshore trash accumulations, we can contribute to our understanding of how marine trash concentrations, and our world’s oceans, are changing.

Acknowledgements

Biggest of all shout-outs to the Central Caribbean Marine Institute staff, namely Lowell, Niki, and Miriam, who were absolutely essential in all parts of this study. Also, great thanks to our Professor and TAs, especially Clare, who spent several hours waiting in a boiling-hot van


for us to finish picking up plastic, and drove us to the store several times to get snacks when we were tired, hot, and hungry.

Author Contributions

All authors contributed equally to this study. Benjamin Schelling was especially useful in finding tiny, tiny, tiny pieces of plastic as well as accurately counting everything, and Shannon Sartain was our sand-core and sand-sieving guru.

References

Barnes, D. K. A., & Milner, P. (2005). Drifting plastic and its consequences for sessile organism dispersal in the Atlantic Ocean. Marine Biology, 146, 815–825.

Burton, F. J. (1994). Climate and tides of the Cayman Islands. In The Cayman Islands (pp. 51–60). Springer, Dordrecht.

Eriksson, C., Burton, H., Fitch, S., Schulz, M., & van den Hoff, J. (2013). Daily accumulation rates of marine debris on sub-Antarctic island beaches. Marine Pollution Bulletin, 66(1–2), 199–208.

Jambeck, J. R., Geyer, R., Wilcox, C., Siegler, T. R., Perryman, M., Andrady, A., Narayan, R., & Law, K. L. (2015). Plastic waste inputs from land into the ocean. Science, 347(6223), 768–771.

Jamieson, A. J., Brooks, L. S. R., Reid, W. D. K., Piertney, S. B., Narayanaswamy, B. E., & Linley, T. D. (2019). Microplastics and synthetic particles ingested by deep-sea amphipods in six of the deepest marine ecosystems on Earth. Royal Society Open Science, 6(2), 180667.

Lebreton, L., Slat, B., Ferrari, F., et al. (2018). Evidence that the Great Pacific Garbage Patch is rapidly accumulating plastic. Scientific Reports, 8, 4666. https://doi.org/10.1038/s41598-018-22939-w

Pace, L. (1996). Factors that influence changes in temporal and spatial accumulation of debris on an estuarine shoreline, Cliftwood Beach, New Jersey, USA. Theses, 1076. https://digitalcommons.njit.edu/theses/1076

Ritchie, H., & Roser, M. (2020). Plastic pollution. Our World in Data. https://ourworldindata.org/plastic-pollution#citation

Roemmich, D. (1981). Circulation of the Caribbean Sea: A well-resolved inverse problem. Journal of Geophysical Research: Oceans, 86(C9), 7993–8005.

Swanson, R. L., & Zimmer, R. L. (1990). Meteorological conditions leading to the 1987 and 1988 washups of floatable wastes on New York and New Jersey beaches and comparison of these conditions with the historical record. Estuarine, Coastal and Shelf Science, 30(1), 59–78.

Tropical Runaway - ICAYMANB3. (2020). Weather Underground. https://www.wunderground.com/dashboard/pws/ICAYMANB3?cm_ven=localwx_pwsdash

Young, K. (2020). The plastics problem: Cayman contends with a regional menace. Cayman Compass. https://www.caymancompass.com/2020/01/16/the-plastics-problem-cayman-contends-with-a-regional-menace/



Astrobiology: The Origins of Life in the Universe

STAFF WRITERS: SUDHARSAN BALASUBRAMANI ’22, ANDREW SASSER ’23, SAI RAYASAM (WAUKEE HIGH SCHOOL JUNIOR), TIMMY DAVENPORT (UNIVERSITY OF WISCONSIN JUNIOR), AVISHI AGASTWAR (MONTA VISTA HIGH SCHOOL SENIOR)
BOARD WRITER: LIAM LOCKE ’21

Cover image: Astrobiology is the study of life in the cosmos. Understanding how life arose on Earth gives insights into where and what to look for when searching for life in outer space. Source: Flickr



The universe began 13.7 billion years ago with a tremendous and vibrant explosion known as the Big Bang, and went through immense transformations within fractions of a second. In 10⁻³² seconds, the universe grew from the size of a fingertip to the size of a galaxy. 10⁻⁶ seconds after the Big Bang, the expansion of the universe lowered the temperature just enough to allow quarks to form and remain stable. Soon thereafter, the universe cooled further, allowing these quarks to form protons and neutrons. At about one second, the universe’s temperature was about 1000 times that of the sun. A few minutes later, protons and neutrons started fusing to make light elements and isotopes such as helium and deuterium. Some 15-20 minutes later, temperatures cooled down enough to stop further fusion, resulting in a universe consisting of about 92% hydrogen and 8% helium (along with a minute amount of lithium and other radioactive isotopes) (“A Short History of the Universe,” 2020).

For the next hundreds of thousands of years, nothing eventful happened aside from further cooling and expansion. But at around 380,000 years old, electrons and atomic nuclei within the universe started to combine to form neutral atoms. With these neutral atoms, the universe became transparent to a broad range of radiation wavelengths; yet with no stars to emit light, a period of total darkness followed: the Dark Ages. As the universe further expanded, the neutral atoms were spread out relatively evenly. However, by chance, there were some irregularities in the distribution of matter, which caused clusters of matter to be abnormally close together. At around 380 million years after the Big Bang, gravity caused these atoms to clump together, which gave birth to the first stars. With these stars came the first emission of light, ending the Dark Ages (“A Short History of the Universe,” 2020).

To supply this light, and to keep gravity from collapsing it inward, a star fuses lighter elements into heavier elements to create energy. When massive stars died, they exploded as ‘supernovae,’ releasing the heavier elements into the universe. At around 600 million years after the Big Bang, the Milky Way started accumulating matter, and at around 5 billion years, it was fully formed. At around 9.3 billion years after the Big Bang, the Earth and the rest of the solar system formed from gas, dust, and the elements given off during supernovae. The first few hundred million years on Earth were rather rough; huge asteroids and comets regularly crashed into Earth, and the surface was completely molten. But after around 500-600 million years, things calmed down and Earth had a climate that offered steady temperatures and water. It was not much later, about a billion years perhaps, that the first life forms developed. And the rest is history (“A Short History of the Universe,” 2020).

When scientists uncovered this sequence of events, many new questions arose. Was Earth unique? Or were there other objects in the universe that developed life? Enter ‘astrobiology.’ Astrobiology is an interdisciplinary scientific field concerned with the origins, distribution, early evolution, and future of life in the universe. This field of study strives to answer basic unanswered questions about the long-term adaptation of living organisms to other environments. Astrobiology not only gazes up into space, but also dives into the deepest parts of the Earth, hoping to uncover the truth of how life came


to be. Given the timeless fascination with the origins and prevalence of life, astrobiology will endure long into the future (Hubbard, 2017). In 1944, the physicist Erwin Schrödinger published an essay titled What is Life? The Physical Aspect of the Living Cell. In it, he wrote that the “obvious inability of present-day physics and chemistry to account for [biological] events is no reason at all for doubting that they can be accounted for by these sciences” (Schrödinger, 1944). Since Schrödinger's essay, researchers have made remarkable progress in understanding the physical and chemical underpinnings of life. In 1952, Hershey and Chase showed for the first time that deoxyribonucleic acid (DNA), not protein, is the molecule that carries hereditary information (Hershey & Chase, 1952). Just one year later, X-ray diffraction patterns produced by Rosalind Franklin and nucleotide ratios published by Erwin Chargaff and his colleagues enabled Watson and Crick to determine the double-helical structure of the DNA molecule (Zamenhof et al., 1952; Watson & Crick, 1953). The ‘central dogma’ of molecular biology – DNA makes RNA makes protein – has fueled an incredible amount of research on the structure and function of these biomolecules. Today, humankind possesses complete genome sequences for about 3,500 species and has determined the structures of over 150,000 proteins (Burley et al., 2019; Lewin et al., 2018).

Figure 1: Structure of DNA, the self-replicating molecule of modern biochemistry

At its most basic level, life is simply the existence of a self-replicating molecule capable of evolving and adapting to its environment. However, spontaneous replication of a biopolymer (for instance, DNA or protein) from a soup of free monomeric units (nucleotides or amino acids) is largely prohibited by large energy barriers and unfavorable reaction conditions (Ram Prasad & Warshel, 2011). On Earth, additional molecules have been introduced to make self-replication more successful. Proteins speed up DNA replication, provide the cell with energy, facilitate efficient signaling, and serve as life’s ‘molecular machines.’ Instructions on how to make these proteins are passed to the next generation by specific DNA sequences. Additionally, replication of DNA takes place at an optimal pH, salt concentration, and temperature, so a barrier has evolved to create a constant internal environment separated from the harsh external environment. The semi-permeable outer membrane of a cell, known as the plasma membrane, is made of a bilayer of phospholipids, molecules with hydrophilic (water-loving) phosphate heads on the exterior and hydrophobic (water-repelling)

"At its most basic level, life is simply the existence of a self-replicating molecule capable of evolving and adapting to its environment."

Source: Wikimedia Commons


Figure 2: Most common elements in biological systems. Source: Wikimedia Commons

"The molecules mentioned above – nucleic acids, proteins, phospholipids, and water – represent the basis of life on Earth, but our search for life in the universe must not be constrained by these criteria."


fatty acid chains on the interior. A cell is defined by this plasma membrane and is considered the smallest unit of life (Ruiz-Mirazo et al., 2014). It is worth noting that water is required for nearly all reactions in biochemistry, whether as a reactant, a product, or the polar solvent facilitating the making and breaking of chemical bonds. Water also contributes to the folding of proteins, the stability of DNA, and the formation of lipid bilayers through hydrogen bonding (Nelson et al., 2017). The molecules mentioned above – nucleic acids, proteins, phospholipids, and water – represent the basis of life on Earth, but our search for life in the universe must not be constrained by these criteria; astrobiology requires a more inclusive definition of what it means to be alive. Scientists at NASA have defined life as a “self-sustaining chemical system capable of Darwinian evolution” (Benner & Hutter, 2002). However, some researchers disagree with the effort to define life because it narrows the search and neglects the possibility of discovering new and interesting chemical systems in outer space that might be viewed as a different kind of life (Cleland & Chyba, 2002). Nonetheless, much of our search for life in the universe has focused on water and small organic compounds, as these are the molecules known to work for life on Earth. This review will provide an overview of current research efforts in the field of astrobiology. First, theories surrounding the origins of life on Earth will be presented, including further description of modern biochemical processes, the formation of early biopolymers, chemical mutation and evolution, and a discussion of organisms that use alternative biochemistries or live in extreme environments (possibly resembling those encountered in space).

Next, civilization’s search for extraterrestrial life forms will be addressed. This will include a discussion of possible requirements and criteria that have guided the search effort, how understanding life on Earth may advise this search, where and what scientists are looking for, efforts and techniques used to characterize the extraterrestrial environments in our solar system (mostly Mars and some gas giant moons) as well as the efforts and techniques used to examine celestial objects outside our solar system. Finally, a discussion of the probability and search for intelligent life will be presented.

Life on Earth

The origin of life on Earth is still an unresolved and hotly debated topic. This discussion is not meant to be a definitive explanation for how life arose on Earth, but rather a review of prevailing theories presented by experts in the field. A qualitative chemical approach will be employed to understand modern biochemistry, Earth's prebiotic environment, and possible mechanisms for the creation of early biopolymers. Speculating about the origins of life on Earth is largely an exercise in examining the current state of biological systems and turning back the clock, and a brief introduction to modern biochemistry will provide an endpoint for understanding what had to form at the beginning. All known forms of life on Earth are largely composed of six standard elements: carbon, hydrogen, nitrogen, and oxygen, with lesser quantities of phosphorus and sulfur. Among these six elements, carbon is the most abundant; estimates suggest that the total biomass on Earth contains up to 550 gigatons of carbon


Figure 3: RNA polymerase (blue) is a protein responsible for the synthesis of mRNA (green) from a DNA template (orange). This figure shows an example of a protein-DNA complex and demonstrates the role of proteins in catalyzing biochemical reactions. Source: Wikimedia Commons

(Bar-On et al., 2018). This abundance of carbon and the stability of its bonds are important features of biomolecules; carbon-carbon and carbon-hydrogen bonds do not generally react at standard temperatures (National Research Council, 2007). However, carbon itself is not sufficient to drive the reactions required for life; in order to promote reactivity, many biomolecules make use of oxygen and nitrogen, as these atoms have higher electronegativity values than carbon and induce the partial electric dipoles needed to promote reactivity and drive the creation of macromolecules (National Research Council, 2007). Phosphorus and sulfur also play significant roles in biochemical reactions; for example, sulfur is used in the amino acids cysteine and methionine (Brosnan & Brosnan, 2006). Phosphate groups are prevalent in phospholipids and nucleic acid backbones and are also required for the synthesis of adenosine triphosphate (ATP) – a key source of energy for many biochemical reactions (Schirber, 2012). Another important feature of carbon, nitrogen, oxygen, and phosphorus is their ability to form branched structures which can polymerize into long chains (Kitadai & Maruyama, 2018). DNA and RNA are composed of a long, negatively charged backbone of phosphates and ribose sugars linked to one another by phosphodiester bonds. Considering that the human genome is about 3 billion base pairs spread over 46 chromosomes, the average length of a DNA molecule in a human cell is


around 65 million base pairs (although the small size of some chromosomes means that these molecules can certainly be longer). The polymerization of nucleotides into DNA allows for the storage of an incredible amount of information in a single molecule. Proteins, composed of strings of amino acids, are not nearly as long as DNA (the average length of a protein is about 300 amino acids), but the great variability among the 20 natural amino acid side chains endows these polymers with some extraordinary functions (Alberts, 2002). Although lipids are much smaller compounds and do not polymerize in the same way as nucleic acids and amino acids, they form interesting macromolecular structures like membranes and micelles (small spherical aggregates) which can compartmentalize cellular reactions. Possibly one of the most important features of these biopolymers is their ability to recognize one another in a sequence-specific manner. The hydrogen bond donors and acceptors of the nucleotide bases – adenine, guanine, cytosine, thymine, and uracil – ensure that A pairs with T (or U in RNA) and G pairs with C (Figure 1). During replication, each strand of a DNA molecule is used as a template to synthesize a new DNA molecule, and high-fidelity proteins known as DNA polymerases incorporate the nucleotide with the correct hydrogen bonding characteristics. Researchers have shown how the physics of hydrogen bonding, base stacking, and steric hindrance give DNA polymerases a high degree of

"Possibly one of the most important features of these biopolymers is their ability to recognize one another in a sequence-specific manner."


Figure 4: The Miller-Urey experiment demonstrated that biological molecules could be formed from abiotic starting materials. Source: Wikimedia Commons

"One approach to simulating the appearance of the first self-replicating molecule has been to recreate conditions of the primordial Earth and observe the results in the lab."

accuracy during this reaction, demonstrating how the underlying physics of our microscopic universe can govern the chemical reactions that create life. The creation of proteins also relies on nucleotide hydrogen bonding, but between DNA and mRNA in the process of transcription and between mRNA and tRNA during translation. Finally, proteins can have an enormous amount of specificity based on their amino acid sequence. The specific substrate that fits into an enzyme, the protein that is phosphorylated by a kinase, and the DNA sequences recognized by a transcription factor are all examples of proteins recognizing their sequence-specific targets. Interestingly, the major groove of the DNA double-helix is just large enough to fit a protein alpha-helix, suggesting a possible coevolution of these biomolecules (Alberts, 2002). The complexity of modern cells (as well as the organisms they comprise) is the product of billions of years of chemical evolution and natural selection. One of NASA's specifications for extraterrestrial life is that the system be capable of Darwinian evolution (Benner & Hutter, 2002). This refers to the process of natural selection published by Charles Darwin and simultaneously theorized by Alfred Wallace in the late 1850s (Darwin, 1859; Costa, 2014). The theory states that variation occurs through sexual reproduction, and that geographic seclusion of a population and isolated mating can give rise to a new species better suited to that environment. ‘Survival of the fittest’ refers to the process by which only the organisms most equipped to thrive in their environments survive and reproduce. Although originally applied to animals (and humans), this concept has been widely extended to explain the process of chemical evolution. Consider a polymer made of two building blocks, X and Y. The polymer reproduces by copying X across from X and Y across from Y, but the incorporation of X is 100 times faster due to its chemical interactions.
Consider that an error is made during a round of replication where an X is erroneously incorporated across from a Y. The resulting polymer would have the ability to reproduce faster and would presumably consume more resources, outcompeting the original molecule. Understanding how molecules recognize one another, reproduce, mutate, and compete with one another for survival gives insight into how a simple self-replicating molecule can eventually give rise to the extraordinary biodiversity observed today, but it does not explain the appearance of the first biomolecule.
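The X/Y polymer thought experiment above can be sketched numerically. In this hypothetical toy model (the replication rates and starting fraction are illustrative assumptions, not figures from the article), a fast-copying mutant that begins as just 1% of the population comes to dominate after only a few rounds of replication:

```python
def fast_fraction(generations, r_fast=100.0, r_slow=1.0, f0=0.01):
    """Toy replicator model: two polymers compete for a shared pool of
    monomers. Each round, every polymer copies itself at its own rate,
    and the population is renormalized (finite resources). Returns the
    population fraction of the fast replicator after `generations` rounds."""
    f = f0
    for _ in range(generations):
        grown_fast = f * r_fast          # offspring of the fast-copying mutant
        grown_slow = (1.0 - f) * r_slow  # offspring of the original polymer
        f = grown_fast / (grown_fast + grown_slow)
    return f

# A mutant that copies 100x faster rises from 1% of the population
# to more than 99.9% within three rounds of replication.
for g in range(4):
    print(g, round(fast_fraction(g), 4))
```

The same arithmetic underlies 'survival of the fittest' at the chemical level: a small rate advantage, compounded over generations, quickly exhausts the shared resource pool for the slower replicator.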


One approach to simulating the appearance of the first self-replicating molecule has been to recreate conditions of the primordial Earth and observe the results in the lab. The famous Miller-Urey experiments combined water (H2O), hydrogen gas (H2), methane (CH4), and ammonia (NH3) in a sealed flask, boiled the solution, and applied electrodes across the flask to simulate lightning (Miller, 1956). The researchers found that several natural amino acids were formed, and recent analysis of the original specimens has shown that all 20 natural amino acids, as well as some unnatural amino acids, were produced in this experiment (McCollom, 2013). The Miller-Urey experiments show how the building blocks of biopolymers could form at the surface of Earth's oceans; however, it is thought that the first life forms actually arose in deep-sea volcanic vents (though this is highly debated). This first cell, known as the last universal common ancestor (or LUCA), is thought to have appeared 3.6 billion years ago in deep-sea vents when the Earth was only 560 million years old (De Guilo, 2003). The prevailing theory is that the biochemistry of the first cells was completely dependent on RNA (Totani, 2020). RNA is single-stranded and folds into three-dimensional structures with interesting functionality; notably, RNA has been shown to catalyze chemical reactions, even in modern cells. It is thought that the first cells were composed of a membrane containing only RNA that catalyzed the synthesis of new RNA. The ability of RNA to both store information and catalyze chemical reactions makes it an efficient system for early life to take hold. This theory of the origin of life is known as the ‘RNA world’ and has garnered a significant amount of attention from the scientific community (Leslie, 2004; Joyce & Szostak, 2018; Totani,


2020). It is important to note that proteins also have the ability to store information and catalyze reactions, and a newer computational model has shown that protein, not RNA, could potentially be the original biopolymer (Guseva et al., 2017).

When considering life in outer space, it is important to recognize examples on Earth where organisms in different chemical environments may employ different biochemistries. While biochemical reactions are typically limited to carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur, there is a possibility that other, chemically similar elements could be incorporated into biomolecules. One such element – silicon – is a chemical analogue of carbon, as it possesses the same number of valence electrons and thus exhibits similar reactivity. Silicon is expected to form silane “chains” analogous to the aliphatic carbon chains found in lipids and other biomolecules (Rampelotto, 2010). However, it is worth noting that silicon's reactivity necessitates a chemical environment different from Earth-like conditions. For instance, silicon burns spontaneously in oxygen and strips the oxygen from water by forming a silica “shell” (LeGrand, 1998). Given this increased reactivity, silicon-based life is only likely to be found in low-oxygen environments, where the primary solvent may be liquid methane or ethane. Additionally, as polysilanes are generally not stable under standard temperature and pressure conditions, it has been suggested that silicon-based life would be more likely to be found in low-temperature, high-pressure environments (Rampelotto, 2010). Carbon, however, still possesses some inherent advantages over silicon. For instance, carbon is more capable of forming double and triple bonds and bonds more readily to heteroatoms (Pace, 2001).

Similarly, arsenic has been observed to serve as an analogue for phosphorus. Although largely toxic to most living things, arsenic has been found to be used by the bacterium GFAJ-1 in Mono Lake, California. Researchers found that the bacterium, which was located in a lake with high arsenic concentrations, incorporated arsenate ions into the synthesis of nucleic acids at a rate comparable to that of normal phosphate ions (Wolfe-Simon et al., 2011). Given that polyarsenates are more reactive than phosphates due to weaker covalent bonds within the molecule, it has been suggested that arsenic biomolecules would require less sophisticated enzymes. However, it is believed that although life could thrive in arsenic-based environments, the greater abundance of phosphorus on Earth contributed to the natural selection of phosphorus-based compounds (Wolfe-Simon et al., 2009).

Other living organisms have been able to obtain the energy needed for biochemical reactions via the oxidation of inorganic compounds – a process called chemolithoautotrophy (Amils, 2011). Although normally conducted with molecules like ammonia and hydrogen sulfide, some microorganisms have been able to take advantage of more readily available transition metals. For example, a bacterial species called Candidatus Manganitrophus noduliformans was found to oxidize Mn2+ to Mn4+ species when exposed to aerobic conditions. The growth rate of the bacteria was also found to have a linear relationship with the amount of Mn2+ present in solution (Yu and Leadbetter, 2020). Similarly, Acidithiobacillus ferrooxidans has been found to derive its energy from the oxidation of Fe2+ to Fe3+ under extremely acidic conditions. It is currently believed that this pathway depends on the presence of a so-called rus operon, which encodes the two cytochromes and the rusticyanin required for this pathway (Quatrini et al., 2009).


"While biochemical reactions are typically limited to carbon, hydrogen, nitrogen, oxygen, phosphorus and sulfur, there is a possibility that other, chemically similar elements could be incorporated into biomolecules."

Another possibility for alternative biochemistries is the synthesis of biomolecules with non-standard stereochemistry. Almost all life on Earth relies on L-amino acids and D-carbohydrates as standard biomolecules, but some species on Earth have been able to take advantage of their respective enantiomers. For example, D-amino acids have been found incorporated into the peptidoglycans of cell walls, as well as into peptide antibiotics synthesized by some bacteria and fungi. However, the stereochemical configuration of D-amino acids hinders their broader use in life; for example, the rate of enzyme-catalyzed hydrolysis of peptide bonds between D-amino acids is significantly slower than for the L configuration (Friedman, 1999). While L-amino acids and D-sugars have no known inherent chemical advantages over their enantiomers, it is believed that the preferential evolution toward L-amino acids and D-sugars is the result of meteorite bombardment. Evidence found in the Murray and Murchison meteorites suggests that there may have been a small enantiomeric excess of these stereocenters (Bailey, 2002). However,


Figure 5: The polar ice caps and former bodies of water on Mars are of great astrobiological interest and are targets of upcoming exploration missions by international space agencies. Source: Wikimedia Commons

"One misconception that can constrain our collective understanding of life is that it can only be found on other planets. Yet growing evidence suggests that habitable zones can possibly be found on exoplanets, moons, asteroids, or other celestial objects besides planets."

the high enantiomeric excess of L-amino acids and D-sugars in nature is believed to have been amplified by the mechanisms of some catabolic and anabolic reactions. This follows from the theory of “mutual antagonism,” which suggests that some molecules may catalyze their own production while suppressing the synthesis of their enantiomers (Frank, 1953). There is some experimental evidence suggesting that autocatalytic mechanisms may be possible in the development of homochirality; for example, the alcohol product of the alkylation of pyrimidyl aldehydes has been found to autocatalyze synthesis of either the R or S enantiomer in high excess, even when the starting enantiomeric excess was low (Soai et al., 1995).

One of the most interesting aspects of life as we know it is the wide array of environments life has been found in. While most life generally thrives on land or near the surface of oceans and freshwater lakes, some life has been able to persist in conditions that may otherwise seem completely hostile. Though mainly no more complex than simple bacteria and fungi, these “extremophiles” have been found to persist in otherwise intolerable conditions. Some of the most well-known examples of extremophiles persist best in areas of extreme heat or cold and may live in environments similar to those of primordial life on Earth. In particular, thermophiles – organisms that can thrive in high-heat environments – have been found in hot springs and deep-sea hydrothermal vents. For example, Pyrolobus fumarii, found in a hydrothermal “black smoker” vent on the Mid-Atlantic Ridge, has been found to survive at temperatures up to 113˚C (Blöchl et al., 1997). These deep-sea hydrothermal vents may have been the location of the origin of the first protobacteria in an anaerobic environment, due to the abundance of organic matter and hydrogen sulfide. The vents also supply the energy required for the reduction of CO2 to various hydrocarbons (Colín-Garcia et al., 2016).
Other extremophiles have been able to thrive in extremely cold environments. For example, the lichen Xanthoria elegans has been able to photosynthesize at temperatures as low as -24˚C (Barták et al., 2007). Still other extremophiles have demonstrated the ability to survive in hypersaline environments and in environments with extreme pH. Although hypertonic solutions normally cause cells to shrivel as water diffuses out of the cell, some organisms have been able to adapt to high concentrations of salt


and remain isotonic relative to their surrounding environments. For example, Halothermothrix orenii, a “halophile,” is able to thrive at salt concentrations of 4-20% due to its ability to synthesize high concentrations of “compatible solutes” (such as amino acids) to relieve osmotic pressure (Cayol et al., 1994; Santos & Da Costa, 2002). Other organisms survive conditions of extremely high acidity or basicity. Although most proteins denature under these conditions, some bacteria maintain near-neutral pH in the cytoplasm through proton pumps. For example, the acidophile Bacillus acidocaldarius and the alkaliphile gram-negative Pseudomonas alcaliphila pump protons out of and into the cell, respectively (Michels and Bakker, 1985; Matsuno et al., 2018).

Life in Outer Space

Classifying life is an inherently subjective exercise, especially as we expand our knowledge of life to other celestial objects. One misconception that can constrain our collective understanding of life is that it can only be found on planets. Yet growing evidence suggests that habitable environments may exist not only on planets and exoplanets but also on moons, asteroids, and other celestial objects (Ramirez, 2018). With this possibility, biologists will have to consider how the diversity of life is contingent on the particular environment of each celestial body where life is found. It would be rather peculiar if we found pandas on Mars. On Earth, extremophiles are one branch of


Figure 6: This image shows a diagram of a star-planet system and portrays the Doppler redshift and blueshift of the star's light waves as it “wobbles.” This Doppler shift is detected in the radial velocity method.

organisms that may provide insight into how life can endure the “extreme” environmental conditions of other celestial objects. Some microbial extremophiles have been found to thrive in polar environments or can withstand high levels of UV radiation (Hoover and Pikuta, 2010). Since water is fundamental to life on Earth, it is key for astrobiologists to examine polar environments as analogs of celestial bodies, such as the moons of Jupiter or the ice caps found on Mars, that are covered in solid or liquid water (Hoover and Pikuta, 2010). Understanding extremophiles and the ecological dynamics of their environments can be the gateway to predicting the kind of life we may encounter on future exploration missions. However, if life is found on other celestial objects, the efficacy of the tools used to characterize and classify organisms on Earth comes into question. To clarify expectations, astrobiologists assume the likelihood of discovering extraterrestrial microbial life to be greater than that of other forms of life as observed on Earth (Hoover and Pikuta, 2010). Yet the definitions by which we characterize life on Earth exclude the environmental dynamics found on other celestial bodies that may also harbor life. Looking again to extremophiles: if a celestial body's environment is already extreme in comparison to the environmental conditions of Earth, any microbes found there would by default be characterized as extremophiles. It is therefore essential that astrobiologists consider normalizing to each celestial environment prior to characterizing its life.


“Where did we come from, and where shall we go?” is a guiding question across astrobiological studies. Searching for life-bearing celestial bodies where prospective contact could be made is most pragmatic if those bodies are relatively close to Earth. Some of the present candidates of highest astrobiological relevance within our solar system include Mars, Europa, and Titan. The latter two moons are classified by NASA as “icy worlds” where subsurface water, whether in liquid or solid form, is abundant (Hays et al., 2015). Researchers have also proposed the presence of hydrothermal vents at the bottom of Europa's ice-covered liquid water oceans (Hays et al., 2015). On Earth, hydrothermal vents have been observed to foster a trove of microbial and multicellular life, which indicates that the presence of these vents on Europa could establish the means to sustain chemosynthetic microbial life (Hays et al., 2015). On the other hand, analyses of the geography of Titan have suggested the presence of a salty subsurface sea covered by several kilometers of ice (Hays et al., 2015). Titan's atmosphere is also notably characterized by a haze deemed analogous to the prebiotic environmental conditions of early Earth, with evidence of photochemical activity in Titan's lower atmosphere (Hays et al., 2015). Mars, which has become a focal point of current surface-rover missions by space agencies, contains atmospheric methane; given the relatively short chemical lifespan of atmospheric methane, this has suggested to some astrobiologists that life processes were occurring relatively recently in Martian history (Hays et al., 2015).

"On Earth, hydrothermal vents have been observed to foster a trove of microbial and multicellular life, which indicates that the presence of these vents on Europa could establish the means to sustain chemosynthetic microbial life."

Current missions to explore life on Mars are underway through a collaboration between


multiple international space agencies (Figure 5). In late July of 2020, the Perseverance rover was launched by NASA for the purpose of collecting rock and soil samples on Mars, which researchers hope will contain biochemical evidence of microbial life (Williford et al., 2018). A follow-up mission headed by the European Space Agency will aid in bringing the collected samples back to Earth and is expected to launch in 2026 (Williford et al., 2018).

"Scientists estimate there are about 100 billion exoplanets in the Milky Way alone."

One matter of discussion is which geographical features hold the highest probability of containing microbial life. Research on the development of early life on Earth stems from studying stromatolites, sedimentary structures that excel at trapping microbial matter (Williford et al., 2018). Analogous sedimentary features, known as microbialites, have been identified in extinct Martian lakes and may be reservoirs of, or evidence for, Martian microbes (Rizzo, 2020). Other NASA missions to investigate celestial objects include the Europa Clipper, which will take detailed images of Jupiter's icy moon Europa (Howell and Pappalardo, 2020). This mission will enhance insights into whether Europa contains the environmental conditions to support life (Howell and Pappalardo, 2020). Scientists estimate there are about 100 billion exoplanets in the Milky Way alone. So far, a little over 4,000 of them have been confirmed. Although that might sound feeble, finding that many exoplanets is impressive, and there are two main techniques (among others) used to detect them (Haynes, 2020). One method scientists use to detect exoplanets is the radial velocity method. As an exoplanet revolves around its host star, the planet's gravitational pull causes the star to wobble, making the star appear to trace a mini orbit. During this “orbit,” the star periodically moves towards and away from Earth. By exploiting the Doppler effect, scientists can determine whether the star is wobbling. The Doppler effect is essentially the expansion or compression of waves emitted by a moving source. As a light-emitting object moves towards an observer, the light it emits becomes compressed, shifting it towards the blue end of the spectrum. When the object moves away, the opposite happens, and its light waves shift towards the red end of the spectrum. The Doppler effect applies to sound waves as well.
In fact, it is noticeable when a police car passes with its siren on: the siren sounds higher-pitched as the car approaches and lower-pitched as it moves away.
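The radial-velocity reasoning above can be made concrete with a short calculation. This is an illustrative sketch, not from the article: the 13 m/s wobble (roughly what a Jupiter-like planet induces in a Sun-like star) and the H-alpha reference line are assumed example values.

```python
# Illustrative radial-velocity calculation (values are assumed examples,
# not from the article): a planet tugs its star into a wobble of roughly
# 13 m/s, periodically shifting the star's spectral lines.
C_M_S = 299_792_458.0   # speed of light, m/s
REST_NM = 656.28        # H-alpha line, a common stellar reference wavelength

def observed_wavelength_nm(v_radial_m_s, rest_nm=REST_NM):
    """Non-relativistic Doppler shift; positive velocity = receding."""
    return rest_nm * (1.0 + v_radial_m_s / C_M_S)

redshifted = observed_wavelength_nm(+13.0)   # star moving away from Earth
blueshifted = observed_wavelength_nm(-13.0)  # star moving toward Earth
print(f"red-shifted:  {redshifted:.6f} nm")
print(f"blue-shifted: {blueshifted:.6f} nm")
```

Shifts this small, a few parts in ten million, are why radial-velocity surveys need extremely stable spectrographs.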


By observing whether the star periodically red-shifts and blue-shifts, scientists can tell that the star is wobbling, which likely means it hosts an exoplanet (NASA, 2020).

The second main way scientists find exoplanets is the transit method. Whenever an exoplanet orbits a star, it will periodically pass between the star and Earth, slightly dimming the star's light. If a camera is pointed at a star long enough and its brightness is tracked over time, a periodic dip in brightness indicates an exoplanet. The simplicity of this process makes it the most effective and common way of finding exoplanets (NASA, 2020). The ingenuity of this method not only allows us to spot exoplanets but also to determine which elements or molecules are in their atmospheres. Among the main features scientists look for on exoplanets are the basic biochemical elements: carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur. The main way to search for these elements is by examining the atmosphere of the object. Certain elements absorb certain wavelengths of light, so by using the transit method, scientists can learn what elements are present in the atmosphere. When the exoplanet passes in front of its host star, the star's light penetrates the exoplanet's atmosphere. By analyzing this light with a technique called spectral analysis, scientists can determine which wavelengths were absorbed by the planet, and thus which elements are present in the atmosphere and whether they include the biological elements needed to support life (Piskunov et al., n.d.).

The other major thing scientists look for on other planets is water. While water on Earth is abundant and easy to observe, searching for water on other planets is a challenge. Regular optical telescopes can sometimes give us a glimpse of water outside of Earth.
A bright region of a planet or moon could indicate reflections from frozen water, but carbon dioxide and other similar gases, if cold enough, also form reflective solids, which means optical telescopes alone cannot confirm the presence of water. To better verify the presence of water, a camera needs to be placed outside Earth's atmosphere to create high-resolution images. There are two main ways scientists do this. The first is orbiting spacecraft. Spacecraft such as the Hubble Space Telescope observe from above Earth's distorting atmosphere and can get much closer to a celestial object. Normally these spacecraft are giant telescopes with immense apertures


Figure 7: SETI radio telescope for parabolic research. Source: Google Images

that can capture images of very distant objects with great resolution. The second way is by using landers and rovers, which are carried to the object by spacecraft. Rovers can move around on an object's surface and transmit video back to humans on Earth. These landers and rovers can collect samples from the surface, which are then placed in an analysis chamber that determines the chemical composition of the sample, identifying clay minerals or other materials that likely formed in a liquid-water environment. Spacecraft can also collect such samples and return them to Earth for much more detailed analysis (Greenspon, 2019). The only problem with orbiting spacecraft and rovers is that scientists have only been able to use them on objects in our solar system. Utilizing them for exoplanets will take more innovation, which means that spectral analysis is currently the best option for determining the elements and molecules present on exoplanets.
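The transit method described above can be sketched numerically. A minimal example, assuming a Sun-sized star; the radii are standard published values, and the depth formula is the usual geometric approximation:

```python
# Transit depth: the fraction of starlight blocked when a planet crosses
# its star's disk is approximately (R_planet / R_star)^2.
R_SUN_KM = 695_700.0
R_EARTH_KM = 6_371.0
R_JUPITER_KM = 69_911.0

def transit_depth(r_planet_km, r_star_km=R_SUN_KM):
    """Fractional dimming during a central transit of a Sun-like star."""
    return (r_planet_km / r_star_km) ** 2

print(f"Earth-sized planet:   {transit_depth(R_EARTH_KM):.6%}")
print(f"Jupiter-sized planet: {transit_depth(R_JUPITER_KM):.4%}")
```

A Jupiter-sized planet dims a Sun-like star by about 1%, while an Earth-sized one dims it by less than 0.01%, which is why detecting small planets usually requires space-based photometry.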

Intelligent Life in Outer Space

Although scientists do not know for certain that there are intelligent civilizations out there, they believe it is worth attempting to find them. In 1984, scientists founded the SETI Institute to carry out the Search for Extraterrestrial Intelligence (SETI). SETI is a non-profit corporation that functions as an institution for research and educational projects relating to life in the universe. SETI is headquartered in Mountain View, CA, but has many locations throughout the world. The SETI Institute focuses on three things: astrobiology (efforts to understand the prevalence of life in general), education and outreach projects (efforts to inform the public about research and motivate young students to pursue


science), and SETI itself (experiments designed to detect radio waves and other light signals that could be the product of other-worldly sophisticated beings). SETI is best known for the third, and that is what attracts so many scientists to the organization (SETI, 2020). SETI recognizes that looking for "life" itself may prove to be a tedious process, as the universe is always expanding. Rather, the SETI Institute focuses on the idea of using technology as a proxy for intelligence (Taylor Redd, 2016). Astrobiologists at the SETI Institute believe that for an extraterrestrial civilization to be deemed intelligent, technology must have surfaced at one point or another. That technology will most likely emit signals or other measurable properties or effects that provide scientists on Earth the evidence needed to confirm past or present technology. These signatures are known as 'technosignatures' and are analogous to the biosignatures that signal the presence of life, whether intelligent or not (SETI Institute, 2020). Though these signatures may not be deliberate, they expose some technologically advanced activity occurring somewhere in our universe, which is enough to pique astrobiologists' interest. Specifically, SETI designs experiments to look for and detect electromagnetic radiation (EMR). These experiments cover all of the different wavelengths of EMR; using varying types of telescopes, SETI can search across the entire spectrum for indicators of advanced technology. However, SETI's primary focus lies in radio waves, for they are the prime indicator of purposeful technology (SETI, 2020). Almost all SETI experiments thus far have looked for what

"Astrobiologists at the SETI institute believe that for an extraterrestrial civilization to be deemed an intelligent civilization, technology must have surfaced at one point or another. That technology will most likely emit signals or other measurable properties or effects that provide scientists on Earth the evidence needed to confirm past or present technology."


Figure 8: What the Arecibo message will look like if decoded. Source: Wikimedia Commons

"High-power tv and radio telescopes are capable of detecting transmitters at a few light-years away while planetary radar systems on earth can detect and be detectable across the entire galaxy."

are called “narrow-band signals,” which are radio emissions that extend over only a small part of the radio wavelength spectrum (SETI, 2020). This same feature is employed in the radio industry to allow an everyday handheld radio to pick up over 200 channels (FCC, 2020). Though celestial bodies such as pulsars, quasars, and interstellar nebulae make radio signals, the static from them spreads across the entire radio dial (SETI, 2020). SETI calls these narrow-band signals “carriers,” as they pack immense energy into a small amount of spectral space and are therefore the easiest technosignature to look for in our galaxy. Current technology at SETI includes telescopes that detect signals over a wide range. High-power TV and radio telescopes are capable of detecting transmitters from a few light-years away, while planetary radar systems on Earth can detect, and be detected, across the entire galaxy (Berkeley SETI, 2020). The Berkeley SETI Research Center recently started Project Breakthrough Listen, known to SETI as the “Apollo of SETI” (Berkeley SETI, 2020). This will be the largest SETI project ever and will span over ten years of scanning for artificial signals across 100 million stars and 100 galaxies (Berkeley SETI, 2020). This EMR detection project collects over 2 petabytes of data each day, and SETI will eventually make the data public, ushering in a new era of analysis. Although SETI is an immensely innovative initiative, until recently it was mainly a passive effort, designed only to detect signals, not to send them. Humankind has been inadvertently transmitting signals into space for more than 50 years: primarily television, radio, and high-frequency radio (SETI, 2020). However, in 1974, the first deliberate broadcast was beamed into space from the Arecibo Radio Telescope in Puerto Rico. The broadcast was made to celebrate a major upgrade to the Arecibo Telescope.
The broadcast was sent to the globular star cluster M13, which is around 25,000 light-years away (meaning the signal will take about 25,000 years to arrive), and it consisted of a simple pictorial message. The broadcast was immensely powerful, as it utilized Arecibo’s megawatt transmitter attached to a 305-meter antenna. The antenna works by concentrating the energy of the broadcast into a very small patch of sky around M13. The emission was equivalent to a 20 trillion-watt omnidirectional broadcast, meaning it would be detectable anywhere in the galaxy by a receiving antenna similar in size to Arecibo’s (SETI, 2020).
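The power figures quoted for the Arecibo broadcast can be cross-checked with a quick calculation. The megawatt transmitter and 20 trillion-watt effective power are from the text; the gain arithmetic is the standard relationship between transmitter power and effective isotropic radiated power (EIRP).

```python
import math

# EIRP = transmitter power x antenna gain. The article gives both powers,
# so the gain implied for the 305 m dish follows directly.
P_TRANSMIT_W = 1e6   # Arecibo's megawatt transmitter (from the text)
EIRP_W = 20e12       # "20 trillion-watt broadcast" (from the text)

gain = EIRP_W / P_TRANSMIT_W
gain_db = 10.0 * math.log10(gain)
print(f"implied antenna gain: {gain:.1e}  (~{gain_db:.0f} dB)")
```

A gain of roughly 73 dB is consistent with a 305-meter dish operating at microwave frequencies, which is what makes such an enormous effective power possible from a one-megawatt transmitter.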


What is interesting about the broadcast, however, is what the pictorial message contains. The message consists of 1,679 bits of information, which were transmitted by frequency shifting at a rate of 10 bits per second. The message decodes into a graphic that contains, among other things, a human figure, DNA, and the solar system (SETI, 2020). Although it is extremely unlikely that this message will prompt a reply, it was useful in getting us to think about how we can reach out to other civilizations. This experiment also inspired many other messages to space, including the 2008 beaming of The Beatles' song "Across the Universe" toward the star Polaris, and the 2016 radio transmission, also to Polaris, called "A Simple Response to an Elemental Message" (Dunbar, 2017; Quast, 2020). Sending radio signals into space is not the only thing scientists have done to reach out to extraterrestrial civilizations. Launched in 1977, the spacecraft Voyager 1 and 2 between them explored all four giant planets and 48 of their moons (Nelson, 2020). Aboard each of these spacecraft was a 12-inch gold-plated copper phonograph record famously called "The Golden Record." Each record is encased in an aluminum jacket, along with a cartridge and needle. Additionally,


instructions, in symbolic language, explain the origin of the spacecraft and detail how the record is to be played. The contents of The Golden Record were chosen by a NASA committee led by Carl Sagan, a world-famous astronomer and science communicator. Sagan and his associates put together 115 images and a variety of sounds, including wind and thunder, birds, whales, and other animals. Along with this, they added spoken greetings in fifty-five different languages (Nelson, 2020). The Voyager 1 spacecraft is moving away from Earth at around 3.5 AU per year, and in about 38,200 years, it will come within 1.7 light-years of a star in the constellation Ursa Minor called AC+79 3888. Similarly, Voyager 2 is moving away from Earth at around 3.1 AU per year, and in about 40,000 years, it will come within 1.7 light-years of a small star called Ross 248 in the constellation Andromeda, giving any potential planet orbiting that star 40,000 years to develop intelligent life (Nelson, 2020). The discussion of what we should do to communicate with extraterrestrial intelligence has always been contested. The SETI Institute prides itself on listening for radio signals, not sending them (SETI, 2020). The institute works on improving receivers and incorporating larger radio dials to broadly search the universe for patterns; any broadcast sent by SETI is usually serendipitous (SETI, 2020). Individuals in the SETI community say the main reason for not transmitting signals is that people on Earth are not technologically committed to a long-term plan (David, 2014). That is, current technology may be able to send signals for a short 5- to 10-year period; however, in order to contact extraterrestrial intelligence, scientists need the determination to do so for a long time, over 10,000 years. The SETI community says that our Earth is not currently the technologically stable civilization it needs to be to execute that long-term plan.
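The Voyager speeds and travel times quoted above can be sanity-checked with a unit conversion. The AU-per-light-year constant is standard; the speeds and durations are from the text.

```python
# How far do the Voyagers travel before their closest stellar approaches?
AU_PER_LIGHT_YEAR = 63_241.1  # astronomical units per light-year

def traveled_light_years(au_per_year, years):
    return au_per_year * years / AU_PER_LIGHT_YEAR

v1 = traveled_light_years(3.5, 38_200)  # Voyager 1
v2 = traveled_light_years(3.1, 40_000)  # Voyager 2
print(f"Voyager 1 travels ~{v1:.1f} light-years")
print(f"Voyager 2 travels ~{v2:.1f} light-years")
```

Both craft cover only about two light-years over tens of millennia, which is why even their "close" stellar approaches are still 1.7 light-years away.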
Despite this, members of the SETI community still plan on making themselves ready for contact. Many scientists suggest that mathematical language via radio messages is still our best hope of contacting intelligence (David, 2014). Additionally, culture continues to persist as a driving force in any civilization; many suggest that Earth should be sending information about economics, or symbols that could be thought of as universal across cultures (David, 2020). Current technologies, specifically Elon Musk's Neuralink, suggest that we will be able to read electrical signals as thoughts, and it might be interesting to communicate in that sense


(Walter, 2019). The discovery of extraterrestrial life will pose fundamentally new questions in environmental ethics. Most contributors to the field of astrobiology agree that it would do us well both ethically and scientifically to support alien life as a commitment to enhancing the richness and diversity of life in the universe (McKay, 2011). If the true goal of astrobiology is to enhance the richness and diversity of life throughout the universe, then exploration and human actions with respect to life in the universe should take into account ethical, economic, and broad societal considerations (McKay, 2011). While human actions can enhance life, those same actions can be damaging if left unchecked. With the current and potential exploration of Mars, a crucial question is how we can explore Mars without contaminating it in the process. Currently, we take extreme measures to make sure astronauts who come back to Earth after being in space are quarantined and decontaminated to avoid extraterrestrial matter infecting Earth (David, 2014). However, the same idea applies to Earth's contamination of other planets. For places like Mars, we are uncertain of the planet's biological state. Astrobiologists examine at least three possible biological states: (1) there is life on Mars and it is vastly different from the life on Earth; (2) there is life on Mars and it is genetically related to life on Earth; and (3) there is no life on Mars (McKay, 2011). Each of these situations suggests a different way to best explore places like Mars; however, since we are unaware of the planet's biological state, we must explore now in a way that is biologically reversible and keeps all options open. Other scientists, such as Stephen Hawking, have cautioned that alien life forms may not be friendly cosmic neighbors, and that we should be careful about what signals we send and how we explore different celestial bodies (Moskowitz, 2010).

"If the true goal of astrobiology is to enhance the richness and diversity of life throughout the universe, then exploration and human actions with respect to life in the universe should take into account ethical, economic, and broad societal considerations."

Along with societal considerations, the discovery of life comes with inevitable scientific implications, primarily regarding the theories of panspermia and a second genesis. Astrobiologists argue that if we were to find extraterrestrial life, we would want to ask one essential question: is this life related to us (deGrasse Tyson, 2020)? If it is, then the theory of panspermia arises. The theory describes the concept of life traveling between planets as seeds (Kawaguchi, 2019). If this life were related to us, then it would imply that microbial spores


may have escaped from high altitudes of Earth's atmosphere and been sent off into space, eventually colonizing another solar system. The more interesting conclusions for astrobiologists arise when dealing with the topic of a second genesis. The moment we find a second genesis, regardless of whether the life has DNA or not, we know that life is universal; if life can arise in a place other than Earth, then it can arise anywhere that is habitable (Tarter, 2009). Within the second genesis theory, two possibilities are heavily discussed: DNA-based life and non-DNA-based life (deGrasse Tyson, 2020). If a second genesis of life is DNA-based, then it implies that DNA is an inevitable consequence of complex organic chemistry. However, if the life is not DNA-based, then we would have to redefine our entire biological definition of life (deGrasse Tyson, 2020). Astrobiologists argue that this is possible. If alternative life is indeed based on carbon and water, it may have a completely different biochemical system, as the number of macromolecules that can be constructed from carbon is enormous (McKay, 2011). Therefore, it is very possible that extraterrestrial life may use carbon-based molecules other than DNA and RNA for storing genetic information and performing structural functions. This theory of a second genesis is interesting to astrobiologists because it allows for the comparing and contrasting of two different biochemical systems, both capable of sustaining life, and would suggest even broader criteria for life to exist.

Conclusion "The moment we find a second genesis, regardless of whether life has DNA or not, we know that life is universal; if life can arise in a place other than Earth, then it can arise anywhere that is habitable."


With the ever-growing human population on Earth, astrobiologists have considered the possibility of expanding onto other planets in order to sustain life. However, as previously stated, this must be done in a manner that is biologically reversible so as not to affect possible undiscovered life. Specifically, the idea of colonizing and terraforming Mars has been a technological fantasy of many scientists (Steigerwald, 2018). Terraforming is the process of transforming a planet to resemble Earth such that it can support and sustain human life. The proposed mechanism for terraforming Mars is to release carbon dioxide into the atmosphere to thicken it so it acts as a blanket to warm the planet (Steigerwald, 2018). However, the current issue is that Mars does not retain enough carbon dioxide in its polar caps, minerals, and soil to warm the planet enough to reach anything near Earth's atmospheric pressure (Steigerwald, 2018). In addition, the current layer of carbon

dioxide in the atmosphere is too thin to support liquid water. Along with technological dilemmas, terraforming Mars comes with its own ethical issues, for there could exist life in various forms on Mars that we are currently not aware of. Although living on a different planet seems exciting, space is truly a dangerous and unfriendly place. Space missions impact the human mind and body in tremendous ways, and quite recently, NASA has been actively pursuing research on how a mission to Mars could affect a person's body. NASA has studied several risks relating to a three-year Mars mission, and there are three main risks (among others) that come with being on a new planet (Abadie et al., 2020). The first risk is gravity fields. There would be three gravity fields in a Mars mission: weightlessness during the six-month trek between Earth and Mars, one-third of Earth's gravity while on Mars' surface, and the re-adaptation to Earth's gravity upon returning. Transitioning between gravity fields can affect many aspects of the human body, including bone density, spatial orientation, and hand-eye coordination. NASA's primary solution to this problem is to consistently monitor the space traveler's body during the trip and advise them to take nutrients or exercise when needed (Abadie et al., 2020). The second risk is hostile environments. NASA has learned that the ecosystems inside a spacecraft and on a Mars base play a large role in a traveler's life. In space, microbes can have different characteristics, and microorganisms that naturally live on the human body can transfer from person to person more easily. This can lead to elevated stress hormones and an altered immune system, which could cause higher susceptibility to disease. To address this, NASA mainly focuses on rigorous testing. They would consistently monitor air quality, urine, blood, and the immune system.
Plus, the living quarters' environment is carefully planned to ensure a balance of comfort and efficiency (Abadie et al., 2020). The third risk is radiation. Space radiation is by far the most dangerous aspect of traveling to Mars; astronauts on the space station already receive ten times the radiation they would on Earth. This can lead to a higher risk of cancer and increased damage to the body's key biological systems. Additionally, Mars lacks Earth's strong magnetic field, leaving its surface with little protection against solar radiation. NASA's research on this risk is still in its infancy, but the optimization


of shielding seems most promising. Shielding is already implemented on the ISS, although it only partially blocks radiation (Abadie et al., 2020). In order to survive on a spacecraft, humans require some of the same basic needs—food, water, and oxygen—that are required on Earth. However, the process of meeting these requirements is far more complex (“Human Needs in Space,” n.d.; “Human Needs: Sustaining Life During Exploration,” 2007). Because space is largely a vacuum, with no breathable air, oxygen must be supplied by the spacecraft through the process of electrolysis. During this process, electricity obtained from the spacecraft’s solar panels is used to split water (H₂O) into hydrogen and oxygen. The oxygen gas can then be supplied to the crew from the spacecraft’s storage tanks. For food, meals must be pre-packaged, densely packed, and nutritious. Water is provided through fuel cells, which produce electricity using both hydrogen and oxygen; in this process, water is produced and can be used for drinking. Methods are constantly being updated with new and improved technology to sustain human survival on spacecraft (“Human Needs in Space,” n.d.; “Human Needs: Sustaining Life During Exploration,” 2007). There have been several advances in terraformation research over the past few years; these advances have helped scientists better understand the composition of exoplanets. In order to be able to terraform extra-solar celestial objects, an understanding of the variable atmospheric, geologic, and climatic features of the planetary body is required (Pazar, 2018). Generally speaking, the ability to terraform a celestial body depends on certain qualities, including the sea level, temperature, amount of oxygen, and atmospheric pressure on the terrestrial body (Pazar, 2018).
Pazar organizes the qualities for planet habitability into three groups – atmospheric, geologic, and astrophysical – which contain factors such as biosphere cycles, orbital eccentricity, and geomorphic processes, respectively (Pazar, 2018). To better understand the potential for terraformation and the specific environmental conditions required, scientists have used theoretical models. These include mathematical relationships for calculating growth rate and biomass capacity based on factors like planetary temperature and the presence of ecological resources (Pazar, 2018).


Other factors incorporated include topography, water elevation, and surface distribution. To date, there have been 4,301 exoplanets discovered among 3,176 planetary systems – these numbers are continuously updated as discoveries are made (Extrasolar Planets Encyclopaedia). There are various findings of potential habitability (such as liquid water and rocky composition) in “Earth-sized” bodies. Some of the notable bodies include the seven Earth-sized planets that transit TRAPPIST-1, 12 parsecs away (Pazar, 2018; Dittmann et al., 2017). Out of these planets, the Huanca exoplanet, with a “rocky” terrain (Pazar, 2018; Bourrier et al., 2017), seems to be the most similar to Earth. LHS 1140b, a planet orbiting the star LHS 1140, is also 12 parsecs away and lies in a habitable zone for liquid water (Pazar, 2018; Dittmann et al., 2017). Current research in the area of terraforming exoplanets indicates that there is much potential for terraformation beyond our solar system.
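A habitable zone like the one mentioned for LHS 1140b is often estimated, to first order, from a planet's equilibrium temperature. This formula is a standard astrophysics relation, not taken from the article; the Sun and Earth values below are real figures used purely as a sanity check.

```python
import math

# Equilibrium temperature, ignoring any greenhouse effect:
#   T_eq = T_star * sqrt(R_star / (2 * a)) * (1 - albedo)^(1/4)
def equilibrium_temp_k(t_star_k, r_star_m, orbit_m, albedo=0.3):
    return t_star_k * math.sqrt(r_star_m / (2.0 * orbit_m)) * (1.0 - albedo) ** 0.25

T_SUN_K = 5772.0    # solar effective temperature
R_SUN_M = 6.957e8   # solar radius
AU_M = 1.496e11     # Earth's orbital distance

t_earth = equilibrium_temp_k(T_SUN_K, R_SUN_M, AU_M)
print(f"Earth's equilibrium temperature: ~{t_earth:.0f} K")  # ~255 K
```

The roughly 33 K gap between this value and Earth's actual mean surface temperature is the greenhouse effect, the same mechanism the Mars terraforming proposals above try to exploit.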

"In order to be able to terraform extra-solar celestial objects, an understanding of the variable atmospheric, geologic, and climatic features of the planetary body is required."

Although the prospects of life in outer space, terraforming celestial objects, and traveling the cosmos garner much excitement from the general public, no location in the universe has been discovered where life has developed to the extent it has on Earth. The unique conditions on our pale blue dot have allowed life to prosper, and the notion that these conditions are easily reproducible elsewhere is misguided. Successfully inhabiting other planets would require immense labor and resource production. The reality is that if humans can find a way to live in space, then they can also fix the problems facing our planet today. The future of space exploration is exciting, but in looking to the stars, we may miss the beauty right before us.

References

5 Ways to Find a Planet | Explore. (n.d.). Exoplanet Exploration: Planets Beyond Our Solar System. Retrieved August 1, 2020, from https://exoplanets.nasa.gov/alien-worlds/ways-to-find-a-planet
A Short History of the Universe. (n.d.). Retrieved August 1, 2020, from http://www.sun.org/encyclopedia/a-short-history-of-the-universe
A Simple Response to an Elemental Message. (n.d.). ASCUS. Retrieved August 1, 2020, from https://www.ascus.org.uk/a-simple-response/
Administrator, N. C. (2017, June 1). NASA Beams Beatles’ “Across the Universe” Into Space. NASA; Brian Dunbar. http://www.nasa.gov/topics/universe/features/across_universe.html
Alberts, B. (Ed.). (2002). Molecular biology of the cell (4th ed.). Garland Science.
Arecibo Message | SETI Institute. (n.d.). Retrieved August 1, 2020, from https://www.seti.org/seti-institute/project/details/arecibo-message
Beck, M. L., Freihaut, B., Henry, R., Pierce, S., & Bayer, W. L. (1975). A serum haemagglutinating property dependent upon polycarboxyl groups. British Journal of Haematology, 29(1), 149–156. https://doi.org/10.1111/j.1365-2141.1975.tb01808.x
Benner, S. A., & Hutter, D. (2002). Phosphates, DNA, and the Search for Nonterrean Life: A Second Generation Model for Genetic Molecules. Bioorganic Chemistry, 30(1), 62–80. https://doi.org/10.1006/bioo.2001.1232
Broadcasting a Message | SETI Institute. (n.d.). Retrieved August 1, 2020, from https://www.seti.org/seti-institute/project/details/broadcasting-message
Burley, S. K., Berman, H. M., Bhikadiya, C., Bi, C., Chen, L., Costanzo, L. D., Christie, C., Duarte, J. M., Dutta, S., Feng, Z., Ghosh, S., Goodsell, D. S., Green, R. K., Guranovic, V., Guzenko, D., Hudson, B. P., Liang, Y., Lowe, R., … Ioannidis, Y. E. (2019). Protein Data Bank: The single global archive for 3D macromolecular structure data. Nucleic Acids Research, 47(D1), D520–D528. https://doi.org/10.1093/nar/gky949
Cleland, C. E., & Chyba, C. F. (2002). Origins of Life and Evolution of the Biosphere, 32(4), 387–393.
Costa, J. T. (2014). Wallace, Darwin, and the origin of species. Harvard University Press.
Darwin, C. (2008). The origin of species by means of natural selection, or, The preservation of favored races in the struggle for life. Bantam Classic.
Di Giulio, M. (2003). The Universal Ancestor and the Ancestor of Bacteria Were Hyperthermophiles. Journal of Molecular Evolution, 57(6), 721–730. https://doi.org/10.1007/s00239-003-2522-6
Dunbar, B. (2015, April 7). What is Astrobiology? [Text]. NASA. http://www.nasa.gov/feature/what-is-astrobiology
Eschenmoser, A. (2007). The search for the chemistry of life’s origin. Tetrahedron, 63(52), 12821–12844. https://doi.org/10.1016/j.tet.2007.10.012
Exoplanet atmospheres—Department of Physics and Astronomy—Uppsala University, Sweden. (n.d.). Retrieved August 1, 2020, from https://www.physics.uu.se/research/astronomy-and-space-physics/research/planets/exoplanet-atmospheres/
FAQ | SETI Institute. (n.d.). Retrieved August 1, 2020, from https://www.seti.org/faq
Guseva, E., Zuckermann, R. N., & Dill, K. A. (2017). Foldamer hypothesis for the growth and sequence differentiation of prebiotic polymers. Proceedings of the National Academy of Sciences, 114(36), E7460–E7468. https://doi.org/10.1073/pnas.1620179114
Hershey, A. D., & Chase, M. (1952). Independent functions of viral protein and nucleic acid in growth of bacteriophage. The Journal of General Physiology, 36(1), 39–56. https://doi.org/10.1085/jgp.36.1.39
How Many Exoplanets Have Been Discovered, and How Many Are Waiting to Be Found? (n.d.). Discover Magazine. Retrieved August 1, 2020, from https://www.discovermagazine.com/the-sciences/how-many-exoplanets-have-been-discovered-and-how-many-are-waiting-to-be
Joyce, G. F., & Szostak, J. W. (2018). Protocells and RNA Self-Replication. Cold Spring Harbor Perspectives in Biology, 10(9), a034801. https://doi.org/10.1101/cshperspect.a034801
Kitadai, N., & Maruyama, S. (2018). Origins of building blocks of life: A review. Geoscience Frontiers, 9(4), 1117–1153. https://doi.org/10.1016/j.gsf.2017.07.007
Kool, E. T. (2001). Hydrogen Bonding, Base Stacking, and Steric Effects in DNA Replication. Annual Review of Biophysics and Biomolecular Structure, 30(1), 1–22. https://doi.org/10.1146/annurev.biophys.30.1.1
Orgel, L. E. (2004). Prebiotic Chemistry and the Origin of the RNA World. Critical Reviews in Biochemistry and Molecular Biology, 39(2), 99–123. https://doi.org/10.1080/10409230490460765
Lewin, H. A., Robinson, G. E., Kress, W. J., Baker, W. J., Coddington, J., Crandall, K. A., Durbin, R., Edwards, S. V., Forest, F., Gilbert, M. T. P., Goldstein, M. M., Grigoriev, I. V., Hackett, K. J., Haussler, D., Jarvis, E. D., Johnson, W. E., Patrinos, A., Richards, S., Castilla-Rubio, J. C., … Zhang, G. (2018). Earth BioGenome Project: Sequencing life for the future of life. Proceedings of the National Academy of Sciences, 115(17), 4325–4333. https://doi.org/10.1073/pnas.1720115115
McCollom, T. M. (2013). Miller-Urey and Beyond: What Have We Learned About Prebiotic Organic Synthesis Reactions in the Past 60 Years? Annual Review of Earth and Planetary Sciences, 41(1), 207–229. https://doi.org/10.1146/annurev-earth-040610-133457
Miller, S. L. (1953). A Production of Amino Acids Under Possible Primitive Earth Conditions. Science, 117(3046), 528–529. https://doi.org/10.1126/science.117.3046.528
Nelson, D. L., Cox, M. M., & Lehninger, A. L. (2017). Lehninger principles of biochemistry (7th ed.). W.H. Freeman and Company; Macmillan Higher Education.
Perez, J. (2016, March 30). The Human Body in Space [Text]. NASA. http://www.nasa.gov/hrp/bodyinspace
Ram Prasad, B., & Warshel, A. (2011). Prechemistry versus preorganization in DNA replication fidelity. Proteins: Structure, Function, and Bioinformatics, 79(10), 2900–2919. https://doi.org/10.1002/prot.23128
Ruiz-Mirazo, K., Briones, C., & de la Escosura, A. (2014). Prebiotic Systems Chemistry: New Perspectives for the Origins of Life. Chemical Reviews, 114(1), 285–366. https://doi.org/10.1021/cr2004844
Schrödinger, E. (1992). What is life? The physical aspect of the living cell; with, Mind and matter; & Autobiographical sketches. Cambridge University Press.
Totani, T. (2020). Emergence of life in an inflationary universe. Scientific Reports, 10(1), 1671. https://doi.org/10.1038/s41598-020-58060-0
Voyager—Fast Facts. (n.d.). Retrieved August 1, 2020, from https://voyager.jpl.nasa.gov/frequently-asked-questions/fast-facts/
Voyager—Frequently Asked Questions. (n.d.). Retrieved August 1, 2020, from https://voyager.jpl.nasa.gov/frequently-asked-questions/
Voyager—What’s on the Golden Record. (n.d.). Retrieved August 1, 2020, from https://voyager.jpl.nasa.gov/golden-record/whats-on-the-record/
Water Beyond Earth: The search for the life-sustaining liquid. (2019, September 26). Science in the News. http://sitn.hms.harvard.edu/flash/2019/water-beyond-earth-the-search-for-the-life-sustaining-liquid/
Watson, J. D., & Crick, F. H. C. (1953). Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid. Nature, 171(4356), 737–738. https://doi.org/10.1038/171737a0
Zamenhof, S., Brawerman, G., & Chargaff, E. (1952). On the desoxypentose nucleic acids from several microorganisms. Biochimica et Biophysica Acta, 9, 402–405. https://doi.org/10.1016/0006-3002(52)90184-4



Capitalism and Conservation: A Critical Analysis of Eco-Capitalist Strategies

STAFF WRITERS: EVA LEGGE '22, JESS CHEN '21, TIMMY DAVENPORT (UNIVERSITY OF WISCONSIN JUNIOR), JAMES BELL '21, LEANDRO GIGLIO '23
BOARD WRITER: ANNA BRINKS '21

Cover image: Climate change and wide-scale environmental degradation are defining problems of the modern era. Eco-capitalism seeks to address these issues through market strategies that promote conservation and sustainability. Source: Pixabay

Introduction

1.1 Agrarian Origins of Capitalism

Capitalism has its roots in agriculture. During the 16th century, the social conditions of Medieval England were ripe for the birth of agrarian capitalism. In most pre-capitalist societies, peasants lived off access to common lands and used the crops they grew to feed their families (Wood, 1998). In England, however, common lands were minimal, and the majority of land was owned privately. To access farmland, peasants were required to pay rent to the landlord. This was a markedly different social structure: landlords controlled access to land, while tenants were property-less and could only access land through the sale of their labor (Comninel, 2000). To squeeze more rent from their tenants, landlords encouraged increased productivity. The more the tenants produced, the more they


could afford to pay in rent, which benefited the landlords. This created competition between tenants since they had to innovatively increase farming productivity to afford rent, or otherwise be evicted and replaced with someone who could produce more. Through this system, the forces of capitalism emerged: competition, accumulation, and profit maximization (Wood, 1998). From that point, land was increasingly privatized through enclosure; the more exclusive the land was and the more land that was accumulated, the more profitable it could be for those who owned it. Eventually, these forces of capitalism would drive colonialism by increasing the desire to gain more land, spreading this economic structure to the land that is now the United States. The defining characteristics of capitalism that emerged during this era have permeated American history: private property, a market driven by competition and profit maximization,

and a two-class system consisting of the capitalist class, who own the means of production (land, property), and the working class, who must sell their labor to access the means of production.

1.2 The Industrial Revolution and the Current Extinction Crisis

Although Medieval England was the starting point of capitalism, what boosted its significance worldwide was the so-called First Industrial Revolution. This period spanned from 1750 to 1840 and involved major innovations in the manufacturing process that transformed Europe and the United States. Improved technology and new inventions resulted in capital accumulation, increased agricultural productivity, and income growth. In addition to the major transformation of economic and social landscapes introduced during the First Industrial Revolution, this period also planted the seed for many of the environmental issues facing the world today. Fossil fuels, primarily coal, were used to generate electricity and steam, and greenhouse gas emissions from these energy sources caused gradual changes to the Earth's climate. Because of this, many scientists argue that since the Industrial Revolution, the Earth has entered a new geological epoch called the "Anthropocene," illustrating that anthropogenic (human-derived) actions are the major driver of global warming (Zalasiewicz, 2008; Clark and York, 2005). Global warming has threatened the existence of species across the globe. "It is generally agreed among biologists that another mass extinction is underway," wrote science reporter Elizabeth Kolbert in The New Yorker in 2009. "Though it's difficult to put a precise figure on the losses, it is estimated that, if current trends continue, by the end of the century as many as half of Earth's species will be gone" (Kolbert, 2009). In Earth's long history, there have been only five mass extinctions.
Each entailed the loss of at least three-quarters of all species on Earth within at most 2.8 million years, a short span on a geological timescale (Saltré & Bradshaw, 2019). Each of the "Big Five" mass extinctions, as they are called, was so devastating that it took millions of years for Earth's biosphere to recover. And when ecosystems were restored, they were completely restructured, looking entirely different from the ecosystems present before the extinction event (Kolbert, 2019).


A growing body of evidence suggests that we have entered the sixth extinction (Barnosky et al., 2011). Extinction rates of all species on Earth are (conservatively) 100-1,000 times higher than before the Anthropocene (Pimm et al., 2014). In the past half-century, vertebrate populations have declined, on average, by more than half (Living Planet Index, 2018). If these trends continue, the loss of species interactions at every level of the food web will cause ecosystems to collapse and greatly threaten human survival (Gray, 2019). Unlike most other extinctions, this one is being caused by the destructive actions of one species: humans (Pievani, 2014). In fact, some studies show that anthropogenic impacts on the environment, such as global climate change, altering the composition of the atmosphere, and degrading the landscape through resource extraction and pollution, have created the perfect storm for the next big extinction (Pievani, 2014). However, despite the common use of the word "Anthropocene," some claim that a more apt name would be the "Capitalocene," expressing that it is not humans per se but rather our economic system that is primarily responsible for the current ecological crisis (Moore, 2017). It is imperative for capitalist economies to grow: positive growth rates are necessary for firms to make profits and for individuals to prosper financially in a capitalist society. If the growth rate falls below a positive threshold, firms will incur losses, go out of business, and move the economy into a downward spiral. According to this model, capitalist economies can either grow at a sufficiently high rate or shrink if the growth rate falls; in the long run, a zero or negative average growth rate is not feasible (Binswanger, 2009). This poses a problem: a system based on constant growth is not sustainable within the context of our planet's finite resources.


1.3 Potential Solutions

Despite the seeming paradox of capitalism and conservation, there are many sustainability initiatives based within the capitalist system. This eco-capitalism movement hopes to harness the market, innovation, and productivity towards the construction of a sustainable future. Economic growth can increase income levels and standards of living, making individuals secure in the present and giving them the freedom to focus on the future. This stability can lead to increased demand for environmental protection, lower birth rates, and higher expectations for environmental quality

Figure 1. Marginal Social Cost vs. Marginal Private Cost. Source: Wikimedia Commons


(Reilly, 1990). To have a chance at stopping the next mass extinction, a multifaceted approach to species conservation must be taken. This includes direct measures, such as planting new vegetation and maintaining intact tropical and temperate forests, which hold much of the planet’s biodiversity (Gray, 2019). But there are also other, perhaps more unconventional, approaches that have become vital components of combatting species loss, such as taxes on carbon and plastics, big game hunting, ecotourism, payment for ecosystem services, and more. With scientists predicting grave consequences if current climate change trends continue, it is critical to analyze the feasibility of these eco-capitalist solutions in order to take appropriate action in combatting climate change.

Taxes, Trading, and Governmental Regulation

2.1 Carbon Cap-and-Trade Schemes and Carbon Taxes

Carbon emissions are undoubtedly the largest contributor to global climate change and the most crucial target for securing a renewable future. Presently, there are two main systems for managing emissions. The first is a cap-and-trade system, in which governments set a total quantity of allowable emissions and individual stakeholders trade units of emissions in a free market. The second is a carbon tax policy, in which firms are taxed on each unit of emissions. The ultimate goal of both systems is to limit the externality, the negative consequence of high emissions


created by each firm pursuing its own interests. A simplified economic model demonstrates that, with no regulation, the market equilibrium of carbon emissions will exceed the societal equilibrium. In Figure 1, it is clear that the Marginal Private Cost (MPC), the cost to an individual of emitting another unit of CO2 at the current production level, is much less than the Marginal Social Cost (MSC), the cost to society of that additional unit. If firms supply up to their MPC, the overall impact will be a negative externality for society: global warming, smog, preventable deaths from air pollution, and many other environmental problems. This stems from the fact that the negative externality does not factor into a firm's decision making but is included in the MSC. Cap-and-trade policies and carbon taxes realign individual incentives with societal incentives. Cap-and-trade policy reduces this negative externality by setting a limit on the total allowable quantity of CO2 emissions and allowing firms to purchase rights to a certain quantity. Carbon taxes seek to reduce the quantity of emissions by penalizing large CO2 emissions, shifting firms' production from MPC to MSC. Both policies have their advantages and disadvantages. One of the major benefits of a cap-and-trade system is that it allows the market to efficiently allocate resources. For example, traditional standards may force all firms to upgrade their air-conditioning systems, which would regulate the output of ozone-depleting


Figure 2: Accumulation of SUPBs in man-made landfills has degraded the natural ecosystem services that organisms rely upon for their wellbeing. Source: Flickr

chemicals from these systems. For smaller firms, this would be particularly costly: they bear the same fixed costs (i.e., purchasing and installing the same air-conditioning system) but cannot produce as much output as larger firms to cover this cost. As a result, a smaller firm may have to shut down completely, as it can no longer remain profitable. Under the cap-and-trade system, a smaller firm may choose not to install the new air-conditioning system, but rather to limit its emissions and sell the rest of its emissions rights to a larger firm, thus allocating resources efficiently. One benefit of both cap-and-trade and carbon taxes is the long-term incentive each system creates. Under traditional standards, whereby governments set uniform regulations, firms are encouraged to cut emissions just enough to meet each year's requirement rather than to invest in more efficient long-term technologies (Stewart et al., 1988). In contrast, cap-and-trade creates an incentive to lower future emissions, as the quota quantity is often set years in advance (e.g., firm X has the right to emit 25,000 metric tons of CO2 by 2025). Carbon taxes create this incentive as well by lowering the future costs associated with increasing CO2 emissions (Stavins, 2008). Another major benefit of cap-and-trade policy is that it ensures an absolute maximum on CO2 emissions by setting an exact quota. Knowing the maximum CO2 a firm can emit is very important for


effective climate policy and maintaining exact data on emissions. Unfortunately, this certainty comes with a major drawback to cap-and-trade. Certainty in supply means that supply is perfectly inelastic (meaning any change in price will have no effect on the quantity emitted). As a result, demand shocks can, and often do, have a massive impact on the cost of CO2 and the price of future emissions. In turn, this can lead to businesses shutting down and a far-reaching unintended negative externality (Fell et al., 2020).
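This price-volatility contrast can be made concrete with a stylized linear demand curve for emissions; every number below is hypothetical, chosen only to illustrate the mechanism. Under a cap, permit supply is vertical, so a demand shock shows up entirely in the permit price; under a tax, the price is pinned at the tax rate and the quantity adjusts instead.

```python
# Stylized comparison (all numbers hypothetical): how a demand shock affects
# a capped permit market versus a carbon tax. Marginal willingness to pay
# for emitting the Q-th unit is modeled as P = a - b * Q.

def permit_price(a, b, q_cap):
    """Under a cap, quantity is fixed, so the price absorbs the whole shock."""
    return a - b * q_cap

def taxed_quantity(a, b, tax):
    """Under a tax, the price is pinned at the tax, so quantity adjusts."""
    return (a - tax) / b

b, q_cap, tax = 0.5, 100.0, 30.0

for a in (80.0, 100.0):  # demand intercept jumps from 80 to 100
    p = permit_price(a, b, q_cap)
    q = taxed_quantity(a, b, tax)
    print(f"intercept a={a:.0f}: permit price under cap={p:.0f}, "
          f"quantity under tax={q:.0f}")
```

In this toy market, a 25% demand shock raises the capped permit price from 30 to 50 while emissions stay fixed at the cap; under the tax, the price never moves but emissions rise from 100 to 140 units.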


Alternatively, carbon taxes allow for the quantity of emissions to shift to a new market equilibrium, as there is no set quantity for each firm. As a result, there is much less price volatility, but given the incentive to create more efficient technology, taxes are often adjusted to meet increasing output. Another benefit of taxes is that at the most basic level, they affect firms of all sizes proportionally. Of course, the effects depend on the design of the taxes, but unlike cap-and-trade policy, the taxes do not allocate most or all emissions to the largest firms that can afford to buy these permits. Consequently, taxes create industries that are more competitive and do not hurt smaller businesses that would otherwise benefit from shutting down and selling their emission permits to bigger firms (The FASTER Principles, 2005).
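The efficient-allocation argument for permit trading can also be sketched numerically. The two firms, their quadratic abatement-cost curves, and every dollar figure below are hypothetical; the point is only that letting marginal abatement costs equalize through trade hits the same total abatement target at lower total cost than a uniform standard.

```python
# Hypothetical two-firm sketch of why tradable permits lower the total cost
# of meeting a fixed abatement target compared with a uniform standard.
# Assume quadratic abatement costs: cost_i(q) = 0.5 * c_i * q**2, so the
# marginal cost of the q-th abated unit is c_i * q.

def abatement_cost(c, q):
    """Total cost of abating q units when marginal cost rises at rate c."""
    return 0.5 * c * q ** 2

c_small, c_large = 4.0, 1.0  # the small firm abates at 4x the marginal cost
target = 100.0               # total abatement required by the regulator

# Uniform standard: each firm must abate half the target, regardless of cost.
uniform_cost = (abatement_cost(c_small, target / 2)
                + abatement_cost(c_large, target / 2))

# Trading: firms buy and sell permits until marginal costs equalize,
#   c_small * q_small = c_large * q_large  with  q_small + q_large = target.
q_small = target * c_large / (c_small + c_large)
q_large = target - q_small
trading_cost = (abatement_cost(c_small, q_small)
                + abatement_cost(c_large, q_large))

print(f"uniform standard: total cost = {uniform_cost:.0f}")  # 6250
print(f"permit trading:   total cost = {trading_cost:.0f}")  # 4000
```

The total abatement is identical (100 units either way), but trading shifts most of the work to the firm that can abate cheaply, cutting total cost from 6,250 to 4,000 in this example.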


Figure 3. The black rhino is native to eastern and southern Africa. Due largely to poaching, the black rhino population dropped by 98% between 1960 and 1995. The species remains critically endangered to this day, with its total population between 5,000 and 5,500. Source: Wikimedia Commons

2.2 Taxes on Plastic


A plastic bag is never truly free. In 2014 alone, the United States used over 100 billion single-use plastic bags (SUPBs) (Wagner, 2017). These bags decompose slowly and have accumulated in landfills and natural environments as litter (Wagner, 2017). As SUPBs degrade, the quality of shared ecosystem services is diminished: the damage is caused by the consumer of the SUPB but endured by the people and organisms that rely upon the littered environments (Figure 2). This true cost to the environment is not captured by consumer behavior when SUPB usage carries no monetary cost. To remedy this disconnect, local governments in the United States have started to impose per-use taxes or outright bans on SUPBs. The purpose of these policies is to alter consumer behavior and help individuals realize that unsustainable actions have a shared environmental cost. Without a per-use tax, consumers have no direct incentive to avoid overconsuming SUPBs while shopping; a per-use tax would, in theory, help mitigate this behavior (Wagner, 2017). Using the power of money to shape consumer behavior is not a novel concept. "Sin taxes" imposed on items deemed unhealthy, such as alcohol or nicotine-based products, are levied to counteract consumption (O'Donoghue and


Rabin, 2006). While SUPBs may not directly impose damages on the consumer, the environment benefits from reduced consumption, and individuals benefit from a cleaner environment. Long-term environmental degradation stemming from the actions of an individual consumer is difficult to grasp at a store checkout, but there is no doubt that SUPBs have a considerable impact on the environment. This is where the emerging use of taxation on SUPBs holds the potential to put a price on the externalities associated with SUPB consumption and make consumers consider the full cost of a plastic bag.
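As a rough illustration of how a small per-use fee can change behavior, consider a simple linear demand curve for bags. The household figures and "choke price" below are hypothetical, not estimates from the literature; they show only that once a bag has any price at all, consumption falls in proportion to the fee.

```python
# Back-of-envelope sketch of a per-bag fee's effect on single-use plastic
# bag (SUPB) demand. The linear demand curve and every number here are
# hypothetical, chosen only to illustrate the mechanism.

def bags_demanded(fee, q_free, choke_price):
    """Linear demand: q_free bags at a zero fee, zero bags at the choke price."""
    if fee >= choke_price:
        return 0.0
    return q_free * (1.0 - fee / choke_price)

q_free = 300.0      # bags per household per year when bags are free
choke_price = 0.25  # fee (dollars per bag) at which usage would stop entirely

for fee in (0.00, 0.05, 0.10):
    q = bags_demanded(fee, q_free, choke_price)
    print(f"fee ${fee:.2f}: {q:.0f} bags/year")
```

Under these assumptions, a 5-cent fee cuts a household's bag use from 300 to 240 per year, and a 10-cent fee cuts it to 180; the fee converts the shared environmental cost into a visible private one at checkout.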

Sustainable Monetization of Natural Resources

3.1 Big Game Hunting

In 2014, professional hunter Corey Knowlton attended a hunting convention in Dallas. He was not planning on attending the meeting, but upon a request from his friend to provide the lowest bid for an item, he found himself immersed in the chaos of a big game hunting auction. The "lowest bid" that he promised to provide was priced at $350,000. And the prize of this auction? A "conservation tag" allowing the highest bidder to kill a critically endangered black rhino in Namibia. Despite having the lowest bid, Knowlton won that license to kill. Soon after he won the chance to kill an endangered species, he became an endangered species in his own right — the subject of countless death threats from livid


conservationists, ones that threatened not only Knowlton's life but also the lives of his wife and children. As counterintuitive as it may seem, some conservationists see transactions such as these as vital components of Africa's conservation effort. In fact, some Namibian officials believe that Namibia is one of the gold standards of African wildlife conservation precisely because of these "conservation tags." In the eyes of the Namibian government, the benefit of this legal monetization of hunting is two-fold. First, this license to kill was not for just any black rhino; it was for elderly, aggressive male bulls. In fact, the bull that Knowlton ended up killing had already attacked and killed at least two other rhinos. Second, all of the profits from the trophy hunting bids funnel directly back into anti-poaching efforts. Since the Namibian government allowed people to buy, sell, and shoot wildlife on their land in the early 1980s, the wildlife population in Namibia has increased by 80 percent, and black rhino populations in Namibia have increased by 30 percent (Grobler, 2019; Adler, 2015). Mike Norton-Griffiths, an economist and conservationist, has long argued that the revenue from the trophy hunting industry is vital to the conservation of endangered species in Africa. Though this approach to conservation may be controversial, it is generally agreed upon among the conservation community that action must be taken to conserve Africa's most endangered species. For decades, many African countries (such as Namibia, Kenya, and South Africa) have been overwhelmed by poachers, largely driven by Chinese market demand for the ivory in elephant tusks and rhino horns (Nuwer, 2016). Between 1969 and 1973, market demand increased ivory prices tenfold, spurring sharp increases in poaching, and between 2006 and 2016, one in five elephants was slaughtered for its ivory (Nuwer, 2016).
Black rhinos remain critically endangered, with their global population hovering around 5,500 (Save the Rhino, 2020). Not all African countries support the Namibian model for animal conservation. Many academics are skeptical of monetizing ecosystem services, and many African countries do not follow this model (Temel, 2008). Kenya, for example, banned the utilization of wildlife for profit in 1977 (Norton-Griffiths, 2007). Kenyan government officials, along with many other conservation groups across


Africa, believe that the monetization of big game hunting sends the wrong message: that the only way to save endangered species is to create an industry around hunting them. Although Kenyan wildlife populations have decreased dramatically following the ban, their government officials have found other means of combating poaching. In 1989, Richard Leakey, the director of Kenya's wildlife program, was the steward of 12 tons of ivory and rhino tusks that had been intercepted from the illegal poaching trade. This was not an unusual occurrence; many African countries had stockpiles of intercepted ivory. However, the stockpiled ivory attracted criminals, putting the lives of security guards at risk, and the ivory often found its way onto the black market. Therefore, Leakey was presented with a choice: sell it, or destroy it. Even though the ivory would have brought him a windfall of three million dollars, Leakey decided to burn the ivory. This alternative conservation approach—to shame the buyers instead of trying to profit from them—worked. Demand for ivory dwindled and illegal poaching decreased by about 99 percent (Adler, 2015). This burning event also contributed significantly to the decision by the Convention on International Trade in Endangered Species of Wild Fauna and Flora to ban all international trade in ivory (Nuwer, 2016). In 2016, the burning was repeated in Kenya, and 105 tons of tusks and horns went up in flames (Warner, 2016).


But no matter how many tusks are burned or “conservation tags” sold, they are only temporary solutions to a much deeper problem. Even a million $350,000 bids will not stem the tide of climate change. According to Leakey, the biggest threat to African wildlife is not hunting and poaching, but the dwindling water availability due to climate change (Anderson, 2020). “The fact is,” said Leakey in a recent interview for The New Yorker, “the problems we all face now are far beyond the power of individual conservationists to cope with. The mean temperature is getting warmer, the rainfall is getting less, the snowmelt is increasing, the ice formations are less, oceans are rising. It’s a strangulation grip on the environment, and there’s nothing Kenya can do to arrest climate change globally” (Anderson, 2019, pp 8). Therefore, the conservation game and the carbon-dioxide game are inextricably linked. If we want to save our species, we must curb our CO2 emissions.


Figure 4. Ethical consumption depends on four main pillars: economics, ecology, politics, and culture. Source: Wikimedia Commons


3.2 Ecotourism

Ecotourism, or traveling to regions of natural beauty, has often been proposed as an effective model for sustainably monetizing some of the world's most endangered regions. Ecotourism originated in the 1980s as a way to channel tourism revenues into conservation efforts (Stronza et al., 2019). However, the effects of ecotourism become much more complex in practice. The International Ecotourism Society now defines ecotourism as "responsible travel to natural areas that conserves the environment, sustains the well-being of the local people, and involves interpretation and education" (International Ecotourism Society). Just as the monetization of hunting involved much more than the revenues from one "conservation tag," ecotourism is only successful if its net effect on the environment is positive. That said, ecotourism has a host of benefits if it is approached correctly. Ecotourism companies can essentially become "an independently financed partner to the conservation community" (Kirkby et al., 2011). A 2019 meta-analysis of studies on ecotourism showed that ecotourism had positive impacts on conservation efforts and provided a financial benefit for local families (Stronza et al., 2019). Buckley and colleagues (2016) found that in most instances, the conservation benefits of ecotourism resulted in increased survivorship of highly threatened species. These positive effects on conservation are not limited to individual species. Ecotourism in Peru, Tanzania, and the Galapagos has helped finance landscape conservation (Kirkby et al., 2010; Charnley, 2005; Stronza & Durham, 2008). In Costa Rica, ecotourism has led to a reduction in land degradation and an increase in reforestation (Stronza et al., 2019). However, these positive results of ecotourism are only possible if a specific set of criteria is met.
First, there must already exist a specific forest conservation mechanism, as well as a specific spatial mechanism delineating the boundaries of a "protected area." In other words, ecotourism companies are much more effective in conserving species when they are not the only regulatory entity defining and conserving the area. In addition, the definition of ecotourism requires that local communities "receive direct economic benefits" (Stronza et al., 2019). According to the "alternative income hypothesis," local residents who enter the ecotourism community will become less reliant on jobs in natural resources, thus reducing the degradation


of the landscape. It is also hypothesized that increasing local jobs in ecotourism can lead local communities to take action on their own to promote the preservation of their surrounding ecosystems: "ecotourism has been associated with communities setting aside tracts of land and vital habitats, with rules assigned to protect resources and species" (Stronza et al., 2019, p. 238). Despite these largely positive effects, more rigorous analyses are essential to ascertain the potentially harmful impacts of ecotourism (Stronza et al., 2019). Since the late 1980s, many conservation biologists have doubted the efficacy of ecotourism, some arguing that it is harmful to wildlife. Biologists are concerned that ecotourism habituates animals to humans and that it may increase prey vulnerability to predators (Stronza et al., 2016). According to Geoffry and colleagues (2016), "it is essential to identify all potential costs to properly evaluate the net benefits." Therefore, further studies must be conducted to ensure a sustainable future both for ecotourism companies and for the ecosystems they seek to protect.

3.3 Payment for Ecosystem Services

Ecosystems provide humans countless benefits, including water and nutrient cycling, pest control, climate regulation, and spiritual enrichment (Garbach et al., 2012). These "ecosystem services" (ES) can broadly be defined as any benefits people obtain from nature (Schomers et al., 2013). From an economic perspective, degradation occurs because


many ES exhibit the characteristics of public goods, resulting in externalities. Payments for ecosystem services (PES) are an innovative conservation approach that aims to internalize the value of the services provided by intact ecosystems (Schomers et al., 2013). PES bridge the private interests of landowners and the public benefits of conservation management by providing private landowners with financial incentives to implement conservation practices that preserve ES. Agriculture is an especially promising option, as farming landscapes have the potential both to provide ES and to sustain food production (Garbach et al., 2012). Beneficiaries of environmental services can pay land stewards to implement land use practices (such as rotating crops, reducing tillage, and adopting agroforestry practices) that maintain the environmental quality of the area and provide the contracted ES (Schomers et al., 2013). According to the Coase Theorem, given low or no transaction costs on exchanges of goods with clearly defined and enforceable property rights, governmental authority is unnecessary to overcome the problem of internalizing external incentives. This means that individuals will include social costs in their consideration of personal costs without governmental intervention, and private market negotiations can be used among firms to create the optimal allocation of resources (Schomers et al., 2013). An example of a purely Coasean PES scheme involves upstream-downstream watershed management, whereby downstream water users pay upstream landowners to increase water quality and quantity. This practice is used at the Paso de Caballos River Basin in Nicaragua, where upstream landowners are paid by private, downstream households for reforestation efforts (Schomers et al., 2013). With these financial incentives in place, upstream users now bear a tangible personal cost for failing to maintain the desired water quality and quantity.
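The Coasean logic behind a watershed deal of this kind can be sketched with two hypothetical dollar figures: if downstream households value the service at more than it costs upstream landowners to provide it, any voluntary payment between the two amounts leaves both sides better off, with no government intervention required.

```python
# Stylized Coasean bargaining sketch for a watershed PES arrangement like
# the upstream-downstream case described above. The dollar figures are
# hypothetical; only the structure of the argument matters.

def bargaining_range(provider_cost, beneficiary_value):
    """Return the (low, high) range of mutually beneficial annual payments,
    or None when no voluntary deal can form."""
    if beneficiary_value <= provider_cost:
        return None  # beneficiaries value the service below its cost
    return (provider_cost, beneficiary_value)

upstream_cost = 4000.0        # landowners' annual cost of reforestation
downstream_benefit = 10000.0  # households' annual value of cleaner water

deal = bargaining_range(upstream_cost, downstream_benefit)
if deal is not None:
    low, high = deal
    print(f"any annual payment between ${low:.0f} and ${high:.0f} "
          "makes both sides better off")
```

The Coase Theorem's caveat shows up directly in the code: the deal only forms when the beneficiaries' valuation exceeds the provider's cost, and in practice transaction costs would shrink or close this range.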
Conversely, the Pigouvian conceptualization involves governmental payment programs and is based on the Pigouvian philosophy of taxing negative or subsidizing positive externalities. In this case, the direct beneficiary of ES is not paying the service provider. Instead, the state acts as a third party on behalf of service buyers. While Coasean PES schemes often work at local scales and focus on ES that are characterized as club goods, Pigouvian PES schemes operate


on a larger scale and provision public goods (Schomers et al., 2013). Costa Rica’s national PES program, named Pagos por Servicios Ambientales, targets four ES: greenhouse gas mitigation, hydrological services, scenic beauty, and biodiversity. In the program, private forest landowners are paid for forest conservation or reforestation efforts. However, this scheme does not perfectly fit the PES definition as commitment is not voluntary due to Costa Rica’s legal restrictions on forest clearing; the payments simply reduce landowners’ opposition to the legal restrictions (Schomers et al., 2013). Despite its potential, there are some limitations to PES schemes. Not all ecosystem processes sustain and fulfill human life; for example, natural fires are often vital for ecosystem function and provide services to nonhumans. Focusing exclusively on services valuable to humans excludes nonhuman needs (Redford & Adams, 2009). Additionally, focusing on maximizing a single service could justify replacing native species with exotic ones that are more effective but would have other negative impacts on the surroundings. For example, zebra mussels are highly efficient at filtering particulates from water but have other negative impacts on the ecosystem, harming native organisms by starving out other filter feeders and attaching to turtles, crustaceans, and other large animals. Markets also only exist for a certain range of ES, and some services are difficult to value and therefore difficult to regulate financially (such as the fertilizing effect of atmospheric dust from the African Sahel carried across the Atlantic) (Redford & Adams, 2009). Additionally, conserving the functional attributes of ecosystems does not ensure that the full spectrum of biodiversity will be protected (Redford & Adams, 2009).


Poverty, biodiversity hotspots, and environmental degradation are also linked, such that impoverished people often suffer the brunt of ES loss. For example, existing biodiversity hotspots such as Madagascar, the Guinean forests of West Africa, and Brazil's Cerrado region are disproportionately concentrated in poorer countries in the Global South, as many developed countries have already decimated their natural resources. ES most directly impact the quality of life of local people, but it would be unfair to expect them to bear the brunt of the economic cost of preserving these lands while more indirect global benefits (like carbon sequestration) are


Figure 5: Global primary energy consumption by source. Primary energy is calculated using the 'substitution method,' which takes into account the inefficiencies in fossil fuel production by converting non-fossil energy into the energy inputs that would be required if it had the same conversion losses as fossil fuels. The image on the left is broken down by specific energy sources, while the image on the right shows broader categories: fossil fuels, renewables, and nuclear energy. Source: Our World in Data

enjoyed by all. Coasean PES approaches, where the most direct ES beneficiaries are held accountable for their provision, may burden poor locals in low-income countries (Schomers et al., 2013). Despite these limitations, PES is an attractive economy-driven strategy to incentivize environmental protection and to engage money-driven stakeholders who are not swayed by arguments about the intrinsic value of nature (the perspective that nature has value in its own right, independent of human uses) (Rea et al., 2017).
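Figure 5's caption refers to the "substitution method" for primary energy accounting. As a minimal sketch of how that accounting works — assuming, for illustration only, a representative fossil-plant conversion efficiency of about 40% (this specific value is not from the article):

```python
# Sketch of the 'substitution method' for primary energy accounting.
# Non-fossil electricity output is scaled up by the conversion losses
# a fossil plant would have incurred to produce the same electricity.
# The 40% efficiency below is an assumed illustrative value.

def substituted_primary_energy(electricity_twh, fossil_efficiency=0.40):
    """Convert non-fossil electricity output (TWh) into the primary
    energy (TWh) fossil plants would need to generate it."""
    return electricity_twh / fossil_efficiency

# Example: 100 TWh of hydro output counts as roughly 250 TWh
# of primary energy under these assumptions.
print(substituted_primary_energy(100))
```

Under this convention, renewables appear larger in primary-energy charts than their raw electricity output would suggest, which is why the method matters when comparing them against fossil fuels.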

Green Consumption and Production

The demand for products and services that respect the environment and society has become intense. Citizens are beginning to seek consumption alternatives that bring benefits to the community and environment, not only in the short term but also in the long term.

4.1 Ethical consumption

"A sustainable company is one that not only values the preservation of the environment but also has a management system that allows it to be profitable."

Responsible consumption, conscious consumption, or ethical consumption constitute a set of alternative consumer behaviors used to combat the ecological consequences of overconsumption. These modes of consumption have been around since the post-war period and include fair trade and organic or community-supported agriculture. The push toward growth in capitalistic societies has put a tremendous strain on natural resources; this development model is unsustainable, and the best way to transform it is through the practice of another type of consumption (Delistavrou, 2017). Ethical consumption is a set of habits and practices that reduce social inequality and negative environmental impacts. The model seeks to align production, distribution, and acquisition of products and services with positive environmental and social impacts (Delistavrou, 2017). It is worth mentioning that consumption has been classified in many ways, including conscious, sustainable, critical, ethical, and responsible. These different adjectives indicate different ways of understanding the problem of over-consumption. In general, "sustainable consumption" reflects a greater concern with environmental issues, but it does not necessarily consider social issues. On the other hand, "conscious consumption" is often used by large companies to promote corporate social responsibility without decreasing consumption. These modes of consumption differ from ethical consumption, as they fail to capture the full effects of consumer behavior (Eleni et al., 2012).


4.2 Corporate sustainability

Climate change has drawn the attention of business academics and corporations across a wide array of industries, including energy, oil and gas, finance, and pharmaceuticals. Corporations that implement sustainable initiatives in their business practices are engaging in “corporate sustainability.” According to Wilson (2013), “while corporate sustainability recognizes that corporate growth and profitability are important, it also requires the corporation to pursue societal goals, specifically those relating to sustainable development — environmental protection, social justice and equity, and economic development.” Corporations’ concern with the environment is growing due to pressure from society and investors (Eccles and Klimenko, 2019). More and more, organizations have adapted their

Figure 6: Renewable energy technologies, such as the solar panels and windmills pictured here, are a promising way to mitigate fossil fuel use and reduce carbon emissions. Source: Wikimedia Commons

operations to current sustainability standards. But what exactly is a sustainable company? A sustainable company is one that not only values the preservation of the environment but also has a management system that allows it to be profitable. A company that cares about the environment but adopts very expensive processes that compromise its profitability will certainly not survive in the long term. Corporate sustainability must focus on environmental, human, and financial factors alike. A well-known acronym used in the corporate world is ESG: Environmental, Social, and Governance. According to Eccles, Ioannou, and Serafeim (2014), “high sustainability companies are more likely to have established processes for stakeholder engagement, to be more long-term oriented, and to exhibit higher measurement and disclosure of nonfinancial information.” With a working definition of corporate sustainability in hand, it is necessary to evaluate whether a company fits this definition. From a financial point of view, the rational allocation of resources and prioritization of profitable sources of revenue will ensure the success of the company. From the environmental point of view, however, the company must evaluate not only its own processes but also those of its suppliers. Are there excessive emissions of pollutants? Is material disposal occurring in inadequate locations? The adoption of cleaner and more energy-efficient practices will help the company to achieve both


financial and environmental sustainability. At first, sustainable processes, such as the use of solar energy and the adoption of more efficient waste disposal methods, may seem more costly; solar installations in particular have historically carried high upfront costs (Koerth-Baker, 2010). However, in spite of the many challenges posed by sustainable initiatives, the company ends up achieving economic gains, such as energy savings and revenues from the sale of recyclable materials. Another important point is the intangible gain to the sustainable company's image in society, which can help a business thrive (Montiel and Delgado-Caballos, 2014). One way to meet environmental requirements is to form partnerships for the company's sustainability projects. A good example is using one company's waste as raw material for other industries, known as co-processing. Other examples involve the sharing of resources, such as freight and even physical spaces, thus reducing energy costs.

"Currently, fossil fuels supply approximately 80% of the nation’s energy demand."

4.3 Renewable Inputs: Sustainable Energy

Currently, fossil fuels supply approximately 80% of the nation’s energy demand (Desilver, 2020). Fossil fuels, including coal, crude oil, and natural gas, are “non-renewable,” meaning that they are not readily replaced, as they are formed from the fossilized remains of plants


Figure 7: A farmworker spreading pesticides on onion crops. Industrial farmworkers in the U.S., who are predominantly Latinx migrant workers and undocumented immigrants, are disproportionately exposed, through the heavy use of these chemicals in farming, to synthetic pesticides and herbicides that may be linked to cancer and other long-term health issues (Flocks, 2012).

and animals over millions of years. After the link between carbon emissions, fossil fuels, and climate change was recognized, several steps were taken to mitigate carbon dioxide emissions. These actions included the 2015 Paris Agreement, in which the global community agreed to limit emissions so as to keep the rise in average global temperatures as close as possible to preindustrial levels (Johnsson et al., 2018). Because the only two ways to reduce atmospheric carbon dioxide levels are carbon capture and leaving fossil fuels in the ground, there has been a large push towards replacing dependence on fossil fuels with renewable energy sources (Johnsson et al., 2018).

"Renewable energy sources include solar thermal, photovoltaics, bioenergy, hydro, tidal, wind, wave, and geothermal energy."

Renewable energy sources include solar thermal, photovoltaics, bioenergy, hydro, tidal, wind, wave, and geothermal energy (Boyle, 2004). Over the past decade, solar power has experienced the largest percentage growth of any U.S. energy source, from generating just over 2 billion kilowatt-hours of electricity in 2008 to more than 93 billion kilowatt-hours in 2018, an almost 46-fold increase. Despite this growth, however, solar accounted for only 1% of the nation’s total energy production in 2018; the largest renewable energy source remained hydropower at 2.8% of total production, followed by wind, wood, and biofuels (Desilver, 2018). So why, despite the clear need, does renewable energy still make up such a small percentage of total energy production? There are many social, economic, technological, and regulatory barriers across a variety of sectors that hinder the large-scale deployment of renewable


energy (Seetharaman et al., 2019). Social barriers include a lack of public awareness and uncertainty about the benefits of renewable energy and its financial feasibility. They also include “not in my backyard” syndrome, whereby people support renewable energy in principle but do not want large windmills or solar panel fields in their own area. Economic barriers include the lower economic returns of renewable energy compared to fossil fuels, which are generally cheaper. Additionally, renewable energy projects carry high initial capital costs and long horizons for return on investment, making it more difficult to find investors. Technological barriers include the limited availability of infrastructure and facilities for developing renewable energy compared to traditional energy sources, as well as challenges in “grid integration,” the connection of new power-production facilities to old infrastructure. This is especially problematic given the remote location of many renewable energy power plants. Finally, regulatory barriers include a gap between policy targets set by governments and actual implementation, inadequate fiscal incentives, and a lack of certifications to ensure that targets are appropriately met (Seetharaman et al., 2019).

To address these barriers, making procedures more user-friendly could simplify the complex bureaucratic process involved in the deployment of renewable energy. Cost savings are also critical, as the largest barrier to renewable energy investment is competition with fossil fuels, which are generally less expensive. This could potentially be addressed


by successful research and development ventures that increase the efficiency of renewable energy technologies, thereby making them more competitive with traditional energy sources (Seetharaman et al., 2019).
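As an aside, the solar growth figures quoted earlier in this section (roughly 2 billion kWh in 2008 to 93 billion kWh in 2018) can be sanity-checked with a few lines of arithmetic; the compound annual growth rate below is derived only from those two endpoints:

```python
# Back-of-envelope check of the solar growth figures cited above:
# just over 2 billion kWh generated in 2008, more than 93 billion in 2018.

start, end = 2e9, 93e9           # kWh generated in 2008 and 2018
years = 2018 - 2008

growth_factor = end / start                  # overall multiple over the decade
cagr = (end / start) ** (1 / years) - 1      # compound annual growth rate

print(round(growth_factor, 1))   # 46.5 -- i.e. "almost 46-fold"
print(round(cagr * 100, 1))      # ~46.8 (% per year)
```

The endpoints alone imply roughly 47% compound growth per year, which is consistent with the article's characterization of solar as the fastest-growing U.S. energy source over that decade.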

Despite the obstacles to renewable energy, capitalistic systems can help drive progress if properly incentivized. “Under capitalism, innovative activity—which in other types of economy is fortuitous and optional—becomes mandatory, a life-and-death matter for the firm. And the spread of new technology, which in other economies has proceeded at a stately pace, often requiring decades or even centuries, under capitalism is speeded up remarkably because, quite simply, time is money” (Baumol, 2004). Although the competitive nature of the market causes issues when renewable energy systems are pitted against fossil fuels, competition within the renewable energy industry itself can drive efficiency, creativity, and rapid advancements in research and technology. This competition between renewable energy sources will push the technology closer to becoming a feasible, widespread alternative to fossil fuels. Although capitalist systems can vary greatly across different nations, capitalism has driven great leaps in material advancement and technological innovation across the entire economy and particularly within the energy industry. If these advancements can be properly harnessed and driven towards a sustainable future, they could be a powerful force in combating climate change.

4.4 Carbon Capture

Some recent innovations in sustainable technologies have taken a unique approach to greenhouse gas emission reduction, including the implementation of carbon capture and storage. Carbon capture and storage (CCS) technologies aim to trap and contain atmospheric carbon dioxide as a way of removing greenhouse gases from the atmosphere. At present, CCS systems are appended to factories or other manufacturing plants. The systems filter carbon dioxide out of the air or from a direct pipeline, and the CO2 is then stored underground (Rubin et al., 2012). While in concept this technology offers a novel mechanism to curb atmospheric carbon dioxide levels, present CCS systems require significant energy to power, and there is little incentive to invest in CCS technologies where they are not required (Rubin et al., 2012). However, it is expected that CCS will become more accessible, affordable, and energy-efficient with the innovations being made in this sector of green technology (Rubin et al., 2012).

4.5 Industrial Agriculture and New Alternatives


Agriculture started to become industrialized with large-scale monoculture systems on European colonial plantations in the 16th and 17th centuries, in which acres of land were used to cultivate a single type of crop (Kremen et al., 2012). With new industrial inventions, agriculture was mechanized, so that by the late 1800s steam tractors instead of horses were used to plow fields. The advancements made possible by improving technology continued into the “Green Revolution” of the mid-20th century, which introduced "an integrated system of pesticides, chemical fertilizers, and genetically uniform and high-yielding crop varieties" (Kremen et al., 2012, p. 44). In the 1940s, Norman Borlaug, an American scientist now known as “the father of the Green Revolution,” developed new varieties of wheat and rice that were high-yielding, disease-resistant, and bred specifically to respond well to fertilizers (Rhodes, 2017). These crops were also heavily dependent on pesticides to increase their yield. Throughout the 1960s, these new seed and agrochemical technologies were promoted and adopted around the world, most prominently in Mexico and India, as a way to feed a rapidly growing population. While the global human population more than doubled from 1960 to 2010, the Green Revolution allowed the production of cereal crops to triple while land area cultivated for agriculture increased by only 30% (Pingali, 2012). Thus, the Green Revolution averted hunger for millions around the world.
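The arithmetic behind those yield figures can be made explicit; a minimal sketch using only the numbers quoted above:

```python
# Implication of the Green Revolution figures cited above: if cereal
# output tripled (1960-2010) while cultivated land grew only ~30%,
# then yield per unit of land multiplied by roughly 3 / 1.3.

production_growth = 3.0   # cereal output tripled
land_growth = 1.30        # cultivated area grew ~30%

yield_multiplier = production_growth / land_growth
print(round(yield_multiplier, 2))  # 2.31
```

In other words, land productivity roughly 2.3-folded, which is why the same figures are often cited as evidence that intensification spared land from conversion to agriculture.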

"Carbon capture and storage (CCS) technologies aim to trap and contain atmospheric carbon dioxide as a way of removing greenhouse gases from the atmosphere."

However, this revolution did not come without a cost. Throughout the Green Revolution, global nitrogen and phosphorus usage for fertilizer increased eightfold and threefold, respectively, while pesticide production increased elevenfold. Today, intensive usage of synthetic fertilizers is linked to air and water pollution as well as soil depletion. Excessive chemical fertilizer can make soil more acidic, destroying soil microorganisms that are essential for building soil health. Excess nutrients from fertilizers also leach into groundwater and waterways, running off into bodies of water and creating algal blooms, which produce


mass fish die-offs. Pesticides are poisonous to pests but can be harmful to humans as well. While research is still ongoing, some synthetic pesticides have been linked to cancer and other long-term health impacts (Horrigan et al., 2002). Humans, especially farmworkers, are directly exposed to pesticides while applying the chemicals, but anyone can come into contact with pesticides by spending time where the chemicals have been applied for landscaping, or even by ingesting produce that was sprayed with them.

"In response to the environmental concerns of industrial agriculture, new, more sustainable farming methods have emerged. These include organic farming, which prohibits the use of synthetic fertilizers and pesticides, and agroecology methods, which promote viewing farms as ecosystems that are best managed with holistic and integrated farming practices."


Industrial agriculture is characterized by intensive pesticide and fertilizer usage, monoculture cropping and diminished biodiversity, and heavy dependence on fossil-fuel-powered machinery. These features, along with many others, have allowed industrial agriculture to thrive under capitalistic forces of production and profit maximization. Large seed, fertilizer, pesticide, and machinery companies market their products directly to farmers, who benefit from efficient production, lower costs, larger yields, and increased profits. Agriculture is estimated to contribute 24% of global greenhouse gas emissions (US EPA, 2016) and accounts for about two-thirds of all water usage worldwide (US EPA, 2015). Ultimately, industrial agriculture practices have led to widespread soil degradation and consumption of water and fossil fuels at unsustainable rates. In response to the environmental concerns of industrial agriculture, new, more sustainable farming methods have emerged. These include organic farming, which prohibits the use of synthetic fertilizers and pesticides, and agroecology, which promotes viewing farms as ecosystems best managed with holistic and integrated farming practices (Kremen et al., 2012). This approach ensures that farming solutions are environmentally and ecologically sound by considering their effects on the wider ecosystem. For example, instead of applying a chemical that will have harmful effects elsewhere, farmers can diminish pest populations by planting crop varieties in different plots of land every year, a practice called crop rotation. This way, cucumber beetles that have laid their eggs in cucumber plots one year will not have cucumber plants to feed on the following year. Furthermore, crop rotation benefits soil fertility because it prevents one type of crop from depleting too much of a nutrient in one area. Other sustainable farming practices include using manure and

compost to fertilize soil, covering the ground with plants so that no bare soil is exposed to erosion, minimizing mechanical soil disturbance to protect soil structure and microbiology, and increasing plant diversity, among others. The regenerative agriculture movement promotes these sustainable farming methods and takes things a step further; “its guiding principle is not just to farm sustainably—that implies mere maintenance of what might, after all, be a degraded status quo—but to farm in such a way as to improve the land” (Velasquez-Manoff, 2018). The regenerative agriculture movement wants to shift farming from being a contributor to climate change to being a solution. The movement places large emphasis on rebuilding soil health and building up carbon in soil, a practice known as “carbon farming,” to counteract the negative effects of industrial agriculture. Through photosynthesis, plants naturally sequester carbon into the soil, an effect that is enhanced by sustainable methods like rebuilding soil health and eliminating agrochemicals (Velasquez-Manoff, 2018). There has been a movement among consumers to support small farmers who practice regenerative, organic farming by shopping at local farmers’ markets and buying directly from farmers. However, critics of regenerative agriculture argue that it is not feasible for widespread adoption because it goes against the forces of capitalism: regenerative, organic methods decrease the efficiency of production, require more labor, and, critics contend, will never produce enough yield to feed a growing population. Furthermore, the increased production costs would make food prices rise for consumers (Velasquez-Manoff, 2018). Vertical farming and hydroponics are newer innovations that are growing in popularity as alternatives to industrial agriculture.
Vertical farming is a form of urban farming done in large, controlled indoor environments, growing crops in stacked vertical layers to save land space. It entails growing fruits, vegetables, and grains using hydroponics (water and nutrient solutions) instead of soil as a growing medium (Al-Chalabi, 2015). Vertical farming is touted as highly productive, profitable, and efficient, with less impact on the environment. While the practice is still in its infancy, many vertical farms have


Figure 8: The Whanganui river in New Zealand was granted legal human rights in 2017, due largely to the conservation efforts of indigenous Māori activists. According to Māori ideology, the river is not a resource but an ancestor, worthy of both protection and respect. Source: Wikimedia Commons

been built in cities across the U.S., and they are seen by many as the future of urban agriculture. Vertical farming, however, lacks the potential of regenerative farming to restore soils and ecosystems. The academic community debates the best method for feeding a growing population. Many argue that innovative methods like vertical farming are needed to increase the production of food. Some social scientists, on the other hand, argue that the current global food system already produces enough food but is inequitable and wasteful; in the U.S., an estimated 30 to 40 percent of the food supply is wasted (U.S. FDA, 2020). These scientists call not for increased food production but for better distribution of and access to food for all.

Ethical Considerations, the Green New Deal, and Conservation Theories

5.1 Ethical Issues of Capitalism: Environmental Justice and the Dakota Pipeline

The clashes between capitalism and environmentalism can be best understood through the idea of environmental justice, which is often pursued through the very justice systems that capitalism has created. Environmental justice is “the fair treatment and meaningful involvement of all people regardless of race, color, national origin, or income, with respect to the development, implementation, and enforcement of


environmental laws, regulations, and policies” (EPA, 2020). When it comes to enacting purposeful action against environmental degradation, it is the identity of those voicing these grievances that can determine whether effective action is taken. In April of 2016, individuals led by the Standing Rock Sioux Tribe of North Dakota gathered in protest against the construction of the Dakota Access Pipeline (DAPL) (Whyte, 2017). The goal of DAPL was to expedite the transport of crude oil from oil fields in North Dakota to processing terminals in Illinois (Whyte, 2017). However, the pipeline was planned to run through lands and water sources of the Standing Rock Sioux Tribe, threatening water and soil quality and destroying culturally significant lands (Whyte, 2017). The Army Corps of Engineers (ACE) originally blocked the construction of DAPL by Energy Transfer Partners (ETP) but reversed its decision under the Trump Administration (Johnson, 2017).

"In April of 2016 individuals led by the Standing Rock Sioux Tribe of North Dakota gathered in protest against the construction of the Dakota Access Pipeline (DAPL)."

The consistent conflict between indigenous communities and large corporations exemplifies the dynamics of environmental justice (Proulx and Crane, 2019). While the government is obligated to uphold an adequate standard of living for all, its inaction in protecting the well-being of the Standing Rock Sioux Tribe demonstrates a hierarchy of urgency that works against minority communities. Members of these communities have raised their voices against the environmental injustices they endure but have been selectively silenced by those in power. This reflects broader themes


that have prevailed throughout the history of capitalism and colonialism.

5.2 The Green New Deal

"The human-nature divide (HND) examines how the human race perceives itself in coexistence with other organisms."

The Green New Deal (GND) is a nonbinding congressional resolution that lays out an ambitious plan for tackling climate change through economic policy. Proposed in 2019 by Representative Alexandria Ocasio-Cortez of New York and Senator Edward J. Markey of Massachusetts (both Democrats), the GND aimed to transition the United States away from fossil fuels towards clean, renewable energy while creating new jobs in the energy industry (Friedman, 2019). The proposal sets a goal of reaching net-zero emissions by 2050, with an intensive “10-year mobilization” plan to reduce carbon emissions in the United States; proposed actions include investment in renewable energy, upgrading the electricity grid, renovating buildings to be energy-efficient, and investing in electric vehicles and high-speed rail. There is also an emphasis on social justice, as those most affected by climate change are poor and marginalized communities. These communities are disproportionately exposed to pollution through tainted natural resources, even though they contribute the least to it. The bill therefore states that clean air, water, and energy are basic human rights and that the government must provide the training necessary to support fossil fuel industry workers in transitioning to renewable industries (H.Res.109 - 116th Congress, 2019). The Green New Deal has been polarizing, with many deeming it “radical” (Friedman, 2019). Ocasio-Cortez has acknowledged that the plan will be expensive, estimating that it will cost at least $10 trillion, but she has argued that it will pay for itself through economic growth in renewable energy (Relman, 2019). Right-leaning critics of the GND, however, have estimated that the plan would cost anywhere from $51 to $93 trillion (Natter, 2019). In March of 2019, the Senate ultimately rejected the proposal (Grandoni & Sonmez, 2019).
Still, the GND has massive support from groups like the Sunrise Movement, a national youth-led political movement that advocates for political action on climate change. The GND could continue to see a surge in support in the coming years, especially if conservation problems grow more severe. In July of 2020, Democratic presidential nominee Joe Biden announced his climate plan, “A Clean


Energy Revolution,” which draws largely from the principles of the Green New Deal, including the goal to “ensure the U.S. achieves a 100% clean energy economy and net-zero emissions no later than 2050” (Plan for Climate Change, n.d.). Also like the Green New Deal, Biden’s climate plan emphasizes economic and environmental justice. The plan outlines extensive ways to build a more resilient nation, rally the rest of the world to address climate change, “stand up to the abuse of power by polluters who disproportionately harm communities of color and low-income communities,” and support fossil fuel industry workers in the transition to clean energy jobs (Plan for Climate Change, n.d.). According to the Biden Campaign, “Biden’s climate and environmental justice proposal will make a federal investment of $1.7 trillion over the next ten years, leveraging additional private sector and state and local investments to total to more than $5 trillion” (Plan for Climate Change, n.d.). The plan would also incentivize the adoption of clean energy technology across the country. It was drafted by a task force that included Representative Alexandria Ocasio-Cortez as well as Varshini Prakash, co-founder and director of the Sunrise Movement (Prakash, 2020). Ultimately, the Green New Deal provides an influential framework that, at its core, ties climate change to transforming the economy, an idea that is becoming more mainstream.

5.3 Conservation Theories: The Human/Nature Divide

The human-nature divide (HND) examines how the human race perceives itself in coexistence with other organisms. Two primary schools of thought on the HND are anthropocentrism and ecocentrism, which interpret the divide in opposing ways. Anthropocentrism claims that humans are the supreme species among all living organisms on Earth (Kortenkamp and Moore, 2001). This self-perceived superiority stems from humans’ cognitive capabilities relative to other species.
These perceptions have led humans to assume the responsibility of saving species from problems that arise from humans’ own actions. This egotistical dogma has called into question the role of humans in deciding the fates of other species. On the other hand, ecocentrism assumes that humans are equal to all other species and is considered a more environmentalist outlook on the HND (Kortenkamp and Moore, 2001). In short, an anthropocentric understanding of nature would value the protection of nature due to the


ways in which nature affects humans, while an ecocentric ethic would seek to protect nature due to its shared value to all species, including humans (Kortenkamp and Moore, 2001). Differing perspectives on the HND may govern stances on global environmental problems such as anthropogenic climate change. Typically, sustainability initiatives rooted within the capitalist system rely upon an anthropocentric view of nature. The role of humans in protecting nature from the effects of climate change is shaped by an understanding of the HND, and ecocentric perspectives may demand different solutions than anthropocentric ones.

5.4 New Animism and Earth Jurisprudence

Earth Jurisprudence is a field of thought that focuses on the interconnectedness of humans and non-human beings. This way of thinking reasons that the natural world has rights, just like humans, and the ideology has legal precedent. The Ecuadorian constitution states that nature “has the right to exist, persist, maintain, and regenerate its vital cycles” (Turkewitz, 2017, pp. 19). The practice of Earth Jurisprudence has a rich history that long predates its characterization by Western academics. A river on the North Island of New Zealand that is traditionally used by the indigenous Māori people is now a ‘legal human,’ due largely to 140 years of conservation efforts by Māori activists (Roy, 2017). This legal protection does not just shield natural features from degradation but allows the natural world to fight back if these protections are violated. If the river is illegally polluted, the river itself, not environmental justice organizations, has the right to sue. To date, American attempts at this “new animism” have failed. In 2017, a far-left environmental group called Deep Green Resistance tried to give the Colorado River legal human rights.
Senator Steve Daines of Montana scathingly commented in The New York Times that “radical obstructionists who contort common sense with this sort of nonsense undercut credible conservationists” (Turkewitz, 2017, pp. 8). The idea of new animism, of imbuing the natural world with legal human rights, seems to undercut many conservationists’ objectives of “overcoming human primacy” (Ferrando, 2013, p. 29). To award legal human rights to non-human entities seems, on the surface, to directly feed


into the idea of human centrality. Even Mr. Flores-Williams, the lawyer fighting for the protection of the Colorado River, explicitly promotes a “human/nature dualism,” in which humans are separate from (and perhaps superior to) the natural world: “The ultimate disparity exists between humans that are using nature and nature itself” (Turkewitz, 2017, pp. 11). However, new animism has a clever way of dissolving this fissure: by integrating just enough anthropocentrism to get our attention. “It’s not pie in the sky,” writes Flores-Williams. “It’s pragmatic.” When a nonhuman entity is granted legal human rights, the status is conferred not because the entity is viewed as a person, but because it is viewed as inextricably connected to humans’ well-being; protecting its well-being is essentially protecting the well-being of humans. If the Colorado River, as a legal human, had won its case, Americans might have become more aware that the river supplies drinking water to millions. This strand of law may be not just the most pragmatic but also the most powerful tool we now have to conserve wildlands.

"This new animist idea of human entanglement with the ecosystem has long been present in many indigenous ideologies. The Māori peoples, for example, believe themselves to be part of the universe, not only equal to the mountains and rivers, but inextricably entangled with them."

This new animist idea of human entanglement with the ecosystem has long been present in many indigenous ideologies. The Māori peoples, for example, believe themselves to be part of the universe, not only equal to the mountains and rivers, but inextricably entangled with them. According to the lead Māori negotiator for the Whanganui river, giving the river legal human rights is simply an attempt “to find an approximation in law so that all others can understand that from our perspective treating the river as a living entity is the correct way to approach it, as an indivisible whole” (Roy, 2017, pp. 5). Therefore, to introduce the idea of Earth Jurisprudence and New Animism into Western ideology is not to present a new idea, but to pay due attention to a sustainable development model that has existed for centuries in indigenous thought. In political theorist Jane Bennett’s paper “The Force of Things: Steps Toward an Ecology of Matter,” she argues for “a renewed emphasis on our entanglement with things.” This, she believes, will allow us to “tread more lightly upon the earth, both because things are alive and have value and because things have the power to do us harm” (Bennett, 2004, pp. 375-376). In this sense, Earth Jurisprudence may be a powerful tool in the conservationist’s toolbox, “a suitable way of departure to think in relational and multi-layered ways, expanding the focus of the non-human realm” (Ferrando, 2013, p. 30). This is what new animism begs people to do – to re-introduce to Western belief the indigenous ideology that the natural world is not something separate, but something enmeshed with human life.

Conclusion

"Recent historical species losses have been up to 100 times higher than normal background rates, and there is concern that the current biodiversity crisis constitutes a sixth mass extinction comparable in magnitude and rate to previous events in the deep-time fossil record."

Climate change is the defining crisis of our time. From deep ocean trenches to the highest mountain tops, there is evidence of human impact. Nothing is impervious to the large-scale, far-reaching effects of climate change, which is occurring at an unprecedented speed. Rising temperatures are fueling environmental degradation, natural disasters, weather extremes, food and water insecurity, economic disruption, conflict, and terrorism (UN, 2020). Although extinction is a natural process, current rates of biodiversity loss are hugely elevated. Recent historical species losses have been up to 100 times higher than normal background rates, and there is concern that the current biodiversity crisis constitutes a sixth mass extinction comparable in magnitude and rate to previous events in the deep-time fossil record (Turvey et al., 2019). As the human population has exploded, economic systems have also grown in complexity and scale, evolving from simple, community-level trading schemes to the interconnected, global system that exists today. Although innovation in technology and subsequent improvements in quality of life have occurred within capitalistic systems, these benefits are certainly not distributed equitably to all, and economic inequality continues to grow. Contrary to popular anthropocentric views, humans are embedded in an intricate web of life, and the impacts of capitalism and the consumption it stimulates have reverberated across this entire web. It is clear that capitalism has transformed both human and nonhuman existence across the globe. Finding solutions that adequately address environmental degradation caused by anthropogenic climate change becomes challenging given the evolving understanding of who, where, and how climate change has affected global and local systems.
What is necessary moving forward is interdisciplinary collaboration to derive and implement pragmatic solutions to climate change that will be effective in the long term. This coordination must extend beyond the physical and life sciences. Delving into present market conditions and industry


trends using macro- and microeconomic theory allows individuals, firms, and policy-makers to understand the fiscal considerations of cutting carbon emissions. This is also observable at the consumer level: individuals began to realize the true cost of plastic bags only after taxation. Taxing actions that generate negative environmental externalities can be supplemented by subsidizing positive externalities, such as those generated through renewable energy production. Alternatively, emerging political actions, such as the Green New Deal, aim to revolutionize how the significance of climate change is managed and prioritized in the United States and around the world. Without a doubt, the future state of climate change depends on the actions taken by individuals and industry within this generation. If anything should be gathered from this article, it is the fact that climate change is a nexus of many different contributing factors. If we do not address each and every one, we will be faced with a crisis even more multifaceted than that which currently confronts us. However, this idea works both ways. The contributing factors to environmental degradation are plentiful, but so are the solutions. Some solutions, such as profiting from limited big-game hunting or promoting sustainable ecotourism efforts, are more straightforward and may be achieved to some extent in the next few years. Other solutions, such as carbon capture and drastic reductions in emissions, will take more time and money, along with further technological innovation, to successfully achieve. But no matter how overwhelming the solutions may be, each is an essential piece of the puzzle to combat the climate crisis. Our biosphere is a complex web of interconnected pushes and pulls between organisms, landscape, air, and water. So, it may be unsurprising that our solutions must be as multifaceted as that which we wish to conserve.

References

Adler, Simon.
“The Rhino Hunter.” Radiolab, WNYC Studios, 7 Sept. 2015, www.wnycstudios.org/podcasts/radiolab/articles/rhino-hunter. Al-Chalabi, M. (2015). Vertical farming: Skyscraper sustainability? Sustainable Cities and Society, 18, 74–77. https://doi.org/10.1016/j.scs.2015.06.003 Anderson, Jon. “Can the Wildlife of East Africa Be Saved? A Visit with Richard Leakey.” New Yorker, 20 Feb. 2020, www.newyorker.com/culture/culture-desk/can-the-wildlife-of-east-africa-be-saved-a-visit-with-richard-leakey.


Barnosky, A. D., Matzke, N., Tomiya, S., Wogan, G. O., Swartz, B., Quental, T. B., . . . Ferrer, E. A. (2011). Has the Earth’s sixth mass extinction already arrived? Nature, 471(7336), 51-57. doi:10.1038/nature09678 Baumol, W. J. (2004). The free-market innovation machine: Analyzing the growth miracle of capitalism (4. print., and 1. paperback print). Princeton Univ. Press. Bennett, J. (2004). The Force of Things. Political Theory, 32(3), 347-372. doi:10.1177/0090591703260853 Binswanger, M. (2009). Is there a growth imperative in capitalist economies? A circular flow perspective. Journal of Post Keynesian Economics, 31(4), 707–727. https://doi.org/10.2753/PKE0160-3477310410 Boyle, G., & Open University (Eds.). (2004). Renewable energy (2nd ed). Oxford University Press in association with the Open University. Buckley RC, Morrison C, Castley JG. 2016. Net effects of ecotourism on threatened species survival. PLOS ONE 11(2):e0147988 Center for Food Safety and Applied Nutrition. (2020a). Food Loss and Waste. FDA. https://www.fda.gov/food/consumers/food-loss-and-waste Charnley, S., 2005. From nature tourism to ecotourism? The case of the Ngorongoro Conservation Area, Tanzania. Hum. Organ. 64(1):75–88. Clark, B., & York, R. (2005). Carbon metabolism: Global capitalism, climate change, and the biospheric rift. Theory and Society, 34(4), 391–428. https://doi.org/10.1007/s11186-005-1993-4 Comninel, G. C. (2000). English feudalism and the origins of capitalism. The Journal of Peasant Studies, 27(4), 1–53. https://doi.org/10.1080/03066150008438748 Delistavrou, A. (2017). Understanding Ethical Consumption: Types and Antecedents. 1-23. Desilver, D. (2020). Renewable energy is growing fast in the U.S., but fossil fuels still dominate. Pew Research Center. Durham WH. 2008. The challenge ahead: reversing vicious cycles through ecotourism. In Ecotourism and Conservation in the Americas, ed. A Stronza, WH Durham, pp. 265–71. Wallingford, UK: CABI Dyllick, T., & Hockerts, K. (2002).
Beyond the business case for corporate sustainability. Business Strategy and the Environment, 11(2), 130-141. doi:10.1002/bse.323 Eccles, R. G., Ioannou, I., & Serafeim, G. (2014). The Impact of Corporate Sustainability on Organizational Processes and Performance. Management Science, 60(11), 2835–2857. https://doi.org/10.1287/mnsc.2014.1984 Eleni P., Paparoidamis N.G., Chumpitaz R. (2015) Understanding Ethical Consumers: A New Approach Towards Modeling Ethical Consumer Behaviours. In: Robinson, Jr. L. (eds) Marketing Dynamism & Sustainability: Things Change, Things Stay the Same…. Developments in Marketing Science: Proceedings of the Academy of Marketing Science. Springer, Cham. https://doi.org/10.1007/978-3-319-10912-1_72


EPA. (2020). Environmental Justice. United States Environmental Protection Agency. Fell, H., Burtraw, D., & Morgenstern, R. (2020). Climate Policy Design with Correlated Uncertainties in Offset Supply and Abatement Cost. 24. Ferrando, F. (2013). Posthumanism, Transhumanism, Antihumanism, Metahumanism, and New Materialisms. Existenz, 8(2), 26-32. Flocks, J. (2012). The Environmental and Social Injustice of Farmworker Pesticide Exposure. UF Law Faculty Publications. https://scholarship.law.ufl.edu/facultypub/268 Frédérik Saltré Research Fellow in Ecology & Associate Investigator for the ARC Centre of Excellence for Australian Biodiversity and Heritage, & Corey J. A. Bradshaw Matthew Flinders Fellow in Global Ecology and Models Theme Leader for the ARC Centre of Excellence for Australian Biodiversity and Heritage. (2020, July 09). What is a 'mass extinction' and are we in one now? Retrieved from https://theconversation. com/what-is-a-mass-extinction-and-are-we-in-onenow-122535 Friedman, L. (2019, February 21). What Is the Green New Deal? A Climate Proposal, Explained. The New York Times. https://www.nytimes.com/2019/02/21/climate/green-newdeal-questions-answers.html Garbach, K., Lubell, M., & DeClerck, F. A. J. (2012). Payment for Ecosystem Services: The roles of positive incentives and information sharing in stimulating adoption of silvopastoral conservation practices. Agriculture, Ecosystems & Environment, 156, 27–36. https://doi.org/10.1016/j. agee.2012.04.017 Geoffrey Wall (1997). "Is ecotourism sustainable?", Environmental Management, 21:4, 483-491. Goulder, L. H., & Schein, A. R. (2013). CARBON TAXES VERSUS CAP AND TRADE: A CRITICAL REVIEW. Climate Change Economics, 04(03), 1350010. https://doi.org/10.1142/ S2010007813500103 Grandoni, D., & Sonmez, F. (2019, March 26). Senate defeats Green New Deal, as Democrats call vote a ‘sham.’Washington Post. 
https://www.washingtonpost.com/powerpost/ green-new-deal-on-track-to-senate-defeat-as-democratscall-vote-a-sham/2019/03/26/834f3e5e-4fdd-11e9-a3f778b7525a8d5f_story.html Gray, R. (2019, March 4). Sixth mass extinction could destroy life as we know it– biodiversity expert. Retrieved from https:// horizon-magazine.eu/article/sixth-mass-extinction-coulddestroy-life-we-know-it-biodiversity-expert.html Horrigan, L., Lawrence, R. S., & Walker, P. (2002). How sustainable agriculture can address the environmental and human health harms of industrial agriculture. Environmental Health Perspectives, 110(5), 445–456. https://doi. org/10.1289/ehp.02110445 H.Res.109 - 116th Congress: Recognizing the duty of the Federal Government to create a Green New Deal. (2019/2020). (2019, February 12). [Webpage]. https://www. congress.gov/bill/116th-congress/house-resolution/109/text Ivan Montiel, J. (n.d.). Defining and Measuring Corporate Sustainability: Are We There Yet? - Ivan Montiel, Javier Delgado-Ceballos, 2014. Retrieved


September 07, 2020, from https://journals.sagepub.com/ doi/10.1177/1086026614526413


Johnson, T. (2019, January 25). The Dakota Access Pipeline and the Breakdown of Participatory Processes in Environmental Decision-Making. Environmental Communication. 13(3), pp. 335-352. https://doi.org/10.1080/17524032.2019.1569544.

Pingali, P. L. (2012). Green Revolution: Impacts, limits, and the path ahead. Proceedings of the National Academy of Sciences, 109(31), 12302–12308. https://doi.org/10.1073/ pnas.0912953109

Johnsson, F., Kjärstad, J., & Rootzén, J. (2019). The threat to climate change mitigation posed by the abundance of fossil fuels. Climate Policy, 19(2), 258–274. https://doi.org/10.1080/14 693062.2018.1483885

Plan for Climate Change and Environmental Justice, Joe Biden. (n.d.). Joe Biden for President: Official Campaign Website. Retrieved August 15, 2020, from https://joebiden. com/climate-plan/

Koerth-Baker, M. (2010, November 06). Shining Light on the Cost of Solar Energy. Retrieved September 07, 2020, from https://www.nationalgeographic.com/news/ energy/2010/11/101105-cost-of-solar-energy/

Prakash, V. (2020, May 13). I’m Joining the Sanders-Biden Taskforce on Climate. Here’s why. Medium. https://medium. com/sunrisemvmt/im-joining-the-sanders-biden-taskforceon-climate-here-s-why-90a3dd0ff546

Kolbert, E. (2009, May 25). The Sixth Extinction? Retrieved from https://www.newyorker.com/magazine/2009/05/25/the-sixthextinction

Proulx, G., & Crane, N. J. (2019, September 16). “To see things in an objective light”: the Dakota Access Pipeline and the ongoing construction of settler colonial landscapes. Journal of Cultural Geography. 37(1), pp. 46-66. https://doi.org/10.108 0/08873631.2019.1665856.

Kortenkamp, K. V., & Moore, C. F. (2001). ECOCENTRISM AND ANTHROPOCENTRISM: MORAL REASONING ABOUT ECOLOGICAL COMMONS DILEMMAS. Journal of Environmental Psychology, 21(3), 261–272. https://doi.org/10.1006/ jevp.2001.0205 Kirkby CA, Giudice R, Day B, Turner K, Silvera Soares-Filho B, et al. (2011). Closing the ecotourism- conservation loop in the Peruvian Amazon. Environ. Conserv. 38(01): 6–17 Kremen, C., Iles, A., & Bacon, C. (2012). Diversified Farming Systems: An Agroecological, Systems-based Alternative to Modern Industrial Agriculture. Ecology and Society, 17(4), art44. https://doi.org/10.5751/ES-05103-170444 Living Planet Report. (2018). Retrieved from https:// livingplanetindex.org/projects?main_page_ project=LivingPlanetReport&home_flag=1 Moore, J. W. (2017). The Capitalocene, Part I: On the nature and origins of our ecological crisis. The Journal of Peasant Studies, 44(3), 594–630. https://doi.org/10.1080/03066150.2016.1235 036

Redford, K., & Adams, W. (2009). Payment for Ecosystem Services and the Challenge of Saving Nature. Conservation Biology, 23(4), 785–787. https://doi.org/10.1111/j.1523-1739.2009.01271.x Reilly, W. K. (1990). The green thumb of capitalism. Policy Review, 54, 16. Business Source Ultimate. Relman, E. (2019, June 5). Alexandria Ocasio-Cortez says Green New Deal would cost $10 trillion—Business Insider. https://www.businessinsider.com/alexandria-ocasio-cortez-says-green-new-deal-cost-10-trillion-2019-6 “Rhino Populations: Rhino Facts: Save the Rhino International.” Save The Rhino, www.savetherhino.org/rhino-info/population-figures/.

Natter, A. (2019, February 25). Green New Deal Would Cost $93 Trillion, Ocasio-Cortez Critics Say. Fortune. https://fortune. com/2019/02/25/the-green-new-deal-ocasio-cortez/

Rhodes, C. J. (2017). The Imperative for Regenerative Agriculture. Science Progress, 100(1), 80–129. https://doi.org/ 10.3184/003685017X14876775256165

Norton-Griffiths, Mike. “Whose Wildlife Is It Anyway?” New Scientist, vol. 193, no. 2596, 2007, p. 24, doi:10.1016/s0262-4079(07)60723-4.

Richard B. Stewart, "Controlling Environmental Risks through Economic Incentives," Columbia Journal of Environmental Law 13, no. 2 (1988): 153-170

Nuwer, Rachel. “Kenya Sets Ablaze 105 Tons of Ivory.” National Geographic, 30 Apr. 2016, www.nationalgeographic.com/ news/2016/04/160430-kenya-record-breaking-ivory-burn

Robert G. Eccles and Svetlana Klimenko. (2019, April 26). Shareholders Are Getting Serious About Sustainability. Retrieved September 07, 2020, from https://hbr.org/2019/05/ the-investor-revolution

O’Donoghue T., & Rabin M. (2006, November 1). Optimal sin taxes. Journal of Public Economics, 90(10), 1825-1849. https://doi.org/10.1016/j.jpubeco.2006.03.001 Pievani, T. (2013). The sixth mass extinction: Anthropocene and the human impact on biodiversity. Rendiconti Lincei, 25(1), 85-93. doi:10.1007/s12210-013-0258-9 Pimm, S. L. et al. (2014). The biodiversity of species and their rates of extinction, distribution, and protection. Science, 344(6187).


Rea, A. W., & Munns, W. R. (2017). The value of nature: Economic, intrinsic, or both? Integrated Environmental Assessment and Management, 13(5), 953–955. https://doi. org/10.1002/ieam.1924

Roy, E. A. (2017, March 16). New Zealand river granted same legal rights as human being. Retrieved from https://www. theguardian.com/world/2017/mar/16/new-zealand-rivergranted-same-legal-rights-as-human-being Rubin E. S., Mantripragada H., Marks A., Versteeg P., & Kitchin J. (2012). The outlook for improved carbon capture technology. Progress in Energy and Combustion Science. 38(5), pp. 630671. https://doi.org/10.1016/j.pecs.2012.03.003. Schomers, S., & Matzdorf, B. (2013). Payments for ecosystem services: A review and comparison of developing and


industrialized countries. Ecosystem Services, 6, 16–30. https:// doi.org/10.1016/j.ecoser.2013.01.002 Seetharaman, null, Moorthy, K., Patwa, N., Saravanan, null, & Gupta, Y. (2019). Breaking barriers in deployment of renewable energy. Heliyon, 5(1), e01166. https://doi. org/10.1016/j.heliyon.2019.e01166 Stavins, R. N. (2008). A meaningful U.S. cap-and-trade system to address climate change. St. Louis: Federal Reserve Bank of St Louis. Retrieved from https://search.proquest.com/docvie w/1698040382?accountid=10422 Stronza, A., Hunt, C., & Fitzgerald, L. (2019). Ecotourism for Conservation? Annual Review of Environment and Resources, 44(229), 229-253. Temel, Julia, et al. 2008. “Limits of Monetization in Protecting Ecosystem Services.” Conservation Biology, vol. 32, no. 5, 2018, pp. 1048–1062., doi:10.1111/cobi.13153.

An International Journal of Indigenous Literature, Arts, & Humanities, Issue 19.1, Available at SSRN: https://ssrn.com/ abstract=2925513. Wilson, M. (2003). Corporate Sustainability: What Is It and Where Does It Come From? Ivey Business Journal. https://iveybusinessjournal.com/publication/corporatesustainability-what-is-it-and-where-does-it-come-from/ Wood, E. (1998). The Agrarian Origins of Capitalism. Monthly Review, 50(3), 14. https://doi.org/10.14452/MR-050-03-199807_2 Zalasiewicz, J., Williams, M., Smith, A., Barry, T. L., Coe, A. L., Bown, P. R., Brenchley, P., Cantrill, D., Gale, A., Gibbard, P., Gregory, F. J., Hounslow, M. W., Kerr, A. C., Pearson, P., Knox, R., Powell, J., Waters, C., Marshall, J., Oates, M., … Stone, P. (2008). Are we now living in the Anthropocene. GSA Today, 18(2), 4. https://doi.org/10.1130/GSAT01802A.1

“The FASTER Principles for Successful Carbon Pricing: An Approach Based on Initial Experience.”The Organization for Economic Cooperation and Development and The World Bank, 2015. Turkewitz, J. (2017, September 26). Corporations Have Rights. Why Shouldn’t Rivers? New York Times. Retrieved from https://www.nytimes.com/2017/09/26/us/does-thecolorado-river-have-rights-a-lawsuit-seeks-to-declare-it-aperson.html Turvey, S. T., & Crees, J. J. (2019). Extinction in the Anthropocene. Current Biology, 29(19), R982–R986. https:// doi.org/10.1016/j.cub.2019.07.040 UN. (2020). The Climate Crisis- A Race We Can Win. Shaping Our Future: UN75. US EPA, OAR. (2016, January 12). Global Greenhouse Gas Emissions Data [Overviews and Factsheets]. US EPA. https:// www.epa.gov/ghgemissions/global-greenhouse-gasemissions-data US EPA, OECA. (2015, August 17). Agriculture and Land Use [Overviews and Factsheets]. US EPA. https://www.epa.gov/ agriculture/agriculture-and-land-use U.S. Food and Drug Administration. (2020b). Food Loss and Waste. FDA. https://www.fda.gov/food/consumers/food-lossand-waste Velasquez-Manoff, M. (2018, April 18). Can Dirt Save the Earth? The New York Times. https://www.nytimes. com/2018/04/18/magazine/dirt-save-earth-carbon-farmingclimate-change.html Wagner, T. (2017, December 1). Reducing single-use plastic shopping bags in the USA. Waste Management, 70, 3-12. https://doi.org/10.1016/j.wasman.2017.09.003. Warner, Gregory. “Up In Flames: Kenya Burns More Than 100 Tons Of Ivory.” National Public Radio, 30 Apr. 2016, www.npr. org/sections/parallels/2016/04/30/476031765/up-in-flameskenya-burns-more-than-100-tons-of-ivory#:~:text=Kenya, which introduced the world,pyres in Nairobi's National Park. What Is Ecotourism? (n.d.). Retrieved from https://ecotourism. org/what-is-ecotourism/. Whyte, K. (2017, May 20). The Dakota Access Pipeline, Environmentalism Injustice, and U.S. Colonialism. Red Ink:



The Chemistry of Cosmetics STAFF WRITERS: ANNA KOLLN '22, ANAHITA KODALI '23, MADDIE BROWN '22 BOARD WRITER: NISHI JAIN '21 Cover Image: The cosmetics industry is an incredibly complex, nuanced, and powerful industry that generates about $60 billion a year. It varies by region, culture, and person – in this paper, we try to elucidate its chemical underpinnings in an attempt to understand the basic building blocks that make up this giant of an industry. Source: Wikimedia Commons


What does the cosmetics industry look like? The global cosmetics industry is one of the largest industries in the world. Before the coronavirus pandemic hit, it was expected to reach $429.8 billion in revenue within just two years, and it is rapidly growing (Rajput et al., 2019). The biggest consumer of cosmetics products in the world is the United States, while France is the largest exporter (Kumar et al., 2005). Over the past decades, the growth of the global economy, as well as increases in disposable incomes, has led to rising demand for cosmetics products. General market growth has shifted from the west to the east; however, western nations are currently experiencing increasing demand for herbal, natural, and organic products, which has contributed to rapid growth in the cosmetics industry and also offers potential areas for further growth in the coming years (Kumar et al., 2005; Rajput et al., 2019). Undoubtedly, the

global pandemic of 2020 has had a significant impact on the industry, as it is so heavily reliant on disposable income. Despite initial cuts to profit, beauty companies have reported significant growth in e-commerce. Additionally, experts believe that the industry will recover within the next 5 years, as the beauty industry is relatively more stable and secure than other consumer industries (Utroske et al., 2020). There are hundreds of brands of cosmetics products around the world. However, there are a few mega-companies that control most of the industry. In fact, together, L’Oreal, Estee Lauder, Procter & Gamble, Coty Inc., Shiseido, and Johnson & Johnson own 182 cosmetics brands (Willett). All of these companies but Johnson & Johnson are also in the top ten list of global cosmetics companies in 2020. The number one company is L’Oreal, with sales of $29.4 billion annually. The company owns several luxury brands and has acquired many

Figure 1: Benzoyl Peroxide Source: Wikimedia Commons

global beauty brands, including Valentino, Nyx Cosmetics, and Ralph Lauren, which are what currently fuel the company’s overall growth. The next top 9 companies, in order, are: Estee Lauder, Procter & Gamble, Coty Inc., Cosmax, Shiseido, Beiersdorf, Amore Pacific, Kao Corporation, and Intercose S.P.A. (Cosmetics ODM Market, 2019). Over the past decade, skincare has consistently represented the biggest percentage of products sold in the cosmetics market; in 2019, skincare represented about 40% of all products sold. The skincare industry has benefited significantly from movements towards natural products and has a much faster growth rate than the overall market. The most popular brand in the 2010s was Olay Regenerist, which sells a variety of lotions and creams and is owned by Procter & Gamble. The makeup industry is made up of several different products, with foundations making up the biggest proportion of the market share in the United States (Shahbandeh et al., 2019).

used over-the-counter treatments for acne and is regarded as safe and effective by the FDA. When applied to the skin, it enters the sebum-secreting pores (Dutil et al., 2010). There, it breaks down into free radicals, which oxidize and kill the acne-inducing bacterium Propionibacterium acnes, or P. acnes. Due to this mechanism of action, P. acnes is unable to develop resistance to benzoyl peroxide as it can to other antibiotic treatments (Tanghetti & Popp, 2009). The harmless byproduct benzoic acid is excreted through the urine (Dutil et al., 2010). Benzoyl peroxide is commercially available in many topical forms, gels being the most common, in concentrations ranging from 2.5% to 10%. Currently, there is no statistically significant proof that efficacy increases with concentration, although lower concentrations have been shown to cause fewer negative side effects such as skin irritation or dryness (Brandstetter & Maibach, 2013).

"Over the past decade, skincare has consistently represented the biggest percentage of products sold in the cosmetics market; in 2019, skincare represented about 40% of all products sold."

Another common topical treatment for acne, among other skin conditions such as warts,

Chemistry of Skincare

Cosmetics is a powerful, lucrative industry that can be broken down into a few different sectors: skincare (including acne medication, wrinkle creams, and skin pigment creams) and makeup. Within skincare, there is more limited FDA oversight as compared to more aggressive pharmaceutical products, leading to variability in product efficacy (something that is normally heavily regulated by the FDA). As a result, understanding the underlying chemistry can prove especially useful in assessing the relative effectiveness of these products. Benzoyl peroxide is one of the most widely


Figure 2: Salicylic Acid Source: Wikimedia Commons


Figure 3: Tazarotene Source: Wikimedia Commons

"Topical retinoids are widely considered some of the most effective treatments of acne, as they target multiple contributing factors of acne."

lesions and calluses, is salicylic acid. Although its chemical structure resembles that of a β-hydroxy acid, its aromaticity gives it unique properties, including lipophilicity, which allows it to dissolve in sebum. Once absorbed in the skin, salicylic acid is thought to disrupt intracellular connectors in the outer layer of the skin, resulting in an exfoliating effect while removing excess sebum. This mechanism can result in skin irritation and dryness (Arif et al., 2015). Toxic amounts of salicylic acid in the bloodstream can result in salicylism, which, while rare, has in extreme cases resulted in death (Madan & Levitt, 2014). As a result, over-the-counter acne treatments only contain concentrations ranging from 0.5% to 2%, although prescribed acne medications can be as high as 10% and treatments for other skin conditions can be as high as 40% (Akhadan & Bershad, 2003). Topical retinoids are widely considered some of the most effective treatments for acne, as they target multiple contributing factors of acne. Retinoids are derivatives of vitamin A. Within cells, retinoids bind to nuclear hormone receptors for retinoic acid, a metabolite of vitamin A, and regulate the expression of certain genes (Thielitz & Gollnick, 2008). One result is increased surface skin cell turnover, which clears the skin of clogged pores and impedes new clogs from forming. This effect also discourages the proliferation of P. acnes, which thrive in closed, anaerobic pores (Wolf et al., 2002). Additionally, retinoids exhibit an anti-inflammatory effect by inhibiting certain immune-response receptors and pathways. As with many acne treatments, irritation and dryness are common side effects (Thielitz & Gollnick, 2008). The retinoids currently approved by the FDA for topical use are tretinoin, adapalene, and tazarotene.
They are offered in concentrations ranging from 0.02% to 0.3%, depending on the retinoid (Akhadan & Bershad, 2003), and only adapalene is available over the counter. Clinical studies have


shown tazarotene to be the most effective against acne, and adapalene to have the fewest adverse effects on the skin (Thielitz & Gollnick, 2008). Despite their success as acne treatments, retinoids are known to have several more severe side effects. Orally prescribed retinoids are well-established teratogens due to their effects on cell growth. Although no link has been made between topical retinoids and birth defects, it is not advisable to use them during pregnancy due to the potential risk (Panchaud et al., 2012). Isotretinoin, an oral retinoid, is controversial for its possible association with depression and suicide. The drug’s ability to cross the blood-brain barrier suggests its potential to interfere with brain receptors. Despite its reputation, reviews have not found a statistically significant link between the medication and adverse mental health effects, although monitoring patients is recommended (Huang & Cheng, 2017). Other retinoids have not been associated with increased risk of depression. Due to the wide range of physical manifestations and severity of acne, there is no catch-all treatment. Acne is a multifaceted condition with several targets for medication. An overproduction of sebum and skin cells clogs skin pores, causing P. acnes to flourish, triggering an immune response and inflammation (Leyden et al., 2003). In mild to moderate cases, combination treatments of the aforementioned topical medications are often recommended to treat multiple aspects of acne formation. For example, benzoyl peroxide in combination with tretinoin has proven to be an effective treatment (Leyden et al., 2003). Tretinoin works to prevent initial pore clogging while benzoyl peroxide targets P. acnes, resulting in a multipronged prevention of acne. Clinical studies have shown that the increased risk of skin irritation from treatments like this can


Figure 4: Radicals that are present on the skin can be identified in an NMR machine and produce scans resembling this one Source: Wikimedia Commons

be averted by applying the topical treatments at different times of day (Leyden et al., 2003). Topical antibacterial medications such as clindamycin are often prescribed alone or in combination with other treatments to combat acne, as they target P. acnes proliferation. However, the development of bacterial resistance to these treatments is a concern. Combination treatment of antibiotics and benzoyl peroxide can combat this problem, as P. acnes cannot become resistant to benzoyl peroxide (Seidler & Kimball, 2010). Acne is generally very treatable if patients are able to match the correct available medications to their individual condition. Alpha lipoic acid, an antioxidant, was discovered and isolated in 1951 as part of the enzymatic complex involved in oxidative metabolism (Perricone et al., 2000; Sherif et al., 2014). When alpha lipoic acid is applied to the skin topically, the substance is reduced to dihydrolipoate, which is itself an effective reducing agent that can eliminate toxic superoxide, hydroxyl, and nitric oxide radicals (Matsugo et al., 2011). This reducing agent can also increase the production of antioxidants and prevent lipid peroxidation (Podda et al., 2001; Zhang et al., n.d.). It is a powerful agent that can act not only against UV light, due to its protection against radicals, but can also inhibit NFkB signaling, thereby giving it potent anti-inflammatory capabilities as well (Puizina-Ivić et al., 2010). Alpha hydroxy acids are a class of compounds


that consist of carboxylic acids substituted with hydroxyl groups on the alpha (adjacent) carbon (Babilas et al., 2012). These organic acids occur naturally in many fruits but can also be created synthetically, as they are in many skincare products. Alpha hydroxy acids are commonly used in skin-moisturizing serums and wrinkle-reduction creams due to their ability to increase water-holding capacity, thereby also increasing skin hydration and turgor (Edison et al., 2004; Green et al., 2009). AHAs also induce desquamation, plasticization, and normalization of epidermal differentiation (through interference with intercellular ionic bonding), which can reduce corneocyte cohesion and facilitate keratolysis (Kornhauser et al., 2012). Alpha hydroxy acids are used in home-use skin peels, and their common forms include lactic acid, citric acid, mandelic acid, glycolic acid, tartaric acid, ascorbic acid, and malic acid (Tung et al., 2000).
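The alpha-carbon substitution pattern described above can be written as a general formula; this is an illustrative sketch, with R denoting an arbitrary side chain (not named in the source):

```latex
% General alpha hydroxy acid: a carboxylic acid with a hydroxyl
% group on the alpha carbon (the carbon adjacent to -COOH)
\mathrm{R\text{--}CH(OH)\text{--}COOH}

% Two common examples from the list above:
% glycolic acid (R = H) and lactic acid (R = CH3)
\mathrm{HO\text{--}CH_2\text{--}COOH} \qquad \mathrm{CH_3\text{--}CH(OH)\text{--}COOH}
```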

"Copper peptides, an anti-aging component, the most common of which is glycyl-lhistidyl-l-lysine or GHK, stimulates blood vessel and nerve outgrowth, and supports the function of dermal fibroblasts."

Copper peptides are an anti-aging component, the most common of which is glycyl-L-histidyl-L-lysine, or GHK; they stimulate blood vessel and nerve outgrowth and support the function of dermal fibroblasts (Li et al., 2016; Pickart et al., 2008; Pickart et al., 2018). GHK additionally has potent anti-cancer and anti-inflammatory capabilities (through inhibition of NFkB signaling) (Pickart et al., 2015). Dermatologists have conducted multiple controlled studies on aged skin showing that GHK has potent effects in tightening skin, improving elasticity and firmness, and reducing fine lines, wrinkles, photodamage, and hyperpigmentation (Mazurowska et al., 2008). GHK complexes with copper activate



many remodeling processes, including those related to macrophages and mast cells, and also stimulate the synthesis of collagen, elastin, metalloproteinases, anti-proteases, vascular endothelial growth factor, fibroblast growth factor 2, nerve growth factor, neurotrophins 3 and 4, and erythropoietin (Pickart et al., 2015).

"...eye makeup has existed for thousands of years. In Ancient Egypt, men and women used kohl - a paint like substance containing lead, metal, and ash- to paint dark circles around their eyes to ward off disease."

Dimethylaminoethanol (DMAE), an agent commonly used in anti-wrinkle medications, is an analog of the B vitamin choline and a precursor of acetylcholine (Liu et al., 2014). DMAE is a potent anti-inflammatory agent that affects acetylcholine synthesis, storage, secretion, metabolism, and receptivity (Clares et al., 2010). When evaluated in a placebo-controlled trial, DMAE was shown to be efficacious (as well as safe) in mitigating forehead lines and periorbital fine lines, improving lip shape and fullness, and improving the overall appearance of aging skin (Tadini et al., 2009).

Figure 5: Alpha hydroxy acids are commonly used in wrinkle reduction skincare creams

Hydroquinone was used from the 1950s in over-the-counter skin-lightening serums, but its sale was halted in the early 2000s due to health concerns (Boyle et al., 1986), mainly over the presence of arbutin. Although many other products on the market contain arbutin, many of which are hair products, commercial availability for skin lightening was discontinued (Matsumoto et al., 2016; O'Donoghue et al., 2006). Its skin-lightening capability stems from its use as a polymerization inhibitor, which removes circulating melanin and lightens skin (Andersen et al., 2010; Schwartz et al., 2020).

Kojic acid is a naturally occurring fungal metabolite with the ability to inhibit catecholase and tyrosinase activity (Burnett et al., 2010). Kojic acid functions as an antioxidant in skin lightening that acts in a time-dependent fashion and, like hydroquinone, reduces the amount of circulating melanin (Cabanes et al., 1994). This time-dependence, which is unaltered by prior incubation of the enzyme with the inhibitor, is consistent with a first-order chemical reaction involving catecholase inhibition. In addition to skin lightening, kojic acid has been used in antioxidant, anti-proliferative, anti-inflammatory, and radioprotective capacities (Saeedi et al., 2019).

L-ascorbic acid is a water-soluble enantiomer of vitamin C with several proven functions within the skincare industry (Crisan et al., 2015). It is an effective antioxidant that destroys free radicals and strengthens


protection against UV light, and it removes discoloration and helps fight melasma, post-acne discoloration, and pigmentation (Dulińska-Molak et al., 2019). L-ascorbic acid also functions as an immunostimulant, strengthening the immunity of the skin, which is weakened under the influence of UV rays; in doing so, it also helps prevent carcinogenic changes to the skin (Al-Niaimi et al., 2017). However, the most prolific quality of this molecule is its anti-wrinkle action: it stimulates collagen synthesis, which otherwise decreases with age (Fitzpatrick et al., 2002). It additionally increases the density of skin, improves skin elasticity, and smooths minor surface wrinkles (Elmore et al., 2005). On a more biochemical level, it inhibits the activity of MMP-1, a metalloproteinase enzyme that degrades collagen and elastin (Telang et al., 2013).
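The time-dependent, first-order inhibition described for kojic acid above implies an exponential decay of the remaining enzyme activity. A minimal numerical sketch follows; the rate constant k is purely illustrative, not a measured value for kojic acid:

```python
import math

def remaining_activity(t_hours: float, k_per_hour: float) -> float:
    """Fraction of catecholase activity remaining under first-order
    inactivation: A(t)/A0 = exp(-k * t)."""
    return math.exp(-k_per_hour * t_hours)

# With an assumed k = 0.5 per hour:
frac_1h = remaining_activity(1.0, 0.5)   # about 61% of activity remains
half_life = math.log(2) / 0.5            # ~1.39 h, independent of starting activity
```

A hallmark of a first-order process, consistent with the observation cited above, is that the half-life depends only on k, not on how much active enzyme was present at the start.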

Chemistry of Makeup

Eye makeup comes in a variety of bright colors, everything from neutral browns to neon pinks and greens. Yet eye makeup is nothing new: it has existed for thousands of years. In Ancient Egypt, men and women used kohl, a paint-like substance containing lead, metal, and ash, to paint dark circles around their eyes to ward off disease (Long, 2017). Kohl is uncommon today because lead can be extremely toxic. Modern eye makeup, including eyeshadow and eyeliner, is made from a wide variety of ingredients that vary by brand. However, while the ingredient lists for these products can be extremely long, there are a series of similar


Figure 6: Copper peptide GHK Source: Wikimedia Commons

ingredients they have in common. A quick scan of a standard eyeshadow palette will most likely reveal that the top ingredient is either talc or mica. Talc is a naturally occurring mineral made from magnesium, silicon, and oxygen. It is the lowest mineral on the Mohs scale, making it one of the softest minerals in the world (King, Talc: The Softest Mineral, n.d.). Talc is added to powders and creams as a filler. In cosmetics, talc is ground into a fine powder that can be added to eye makeup to ensure the product slides on smoothly and to make colors more opaque (Goins, 2012). However, talc, while not inherently harmful, has the potential to carry a carcinogen. Johnson & Johnson was recently in the news due to lawsuits claiming that its talc baby powder resulted in ovarian cancer (Rabin, 2020). Talc, like all minerals, is harvested from deposits in the Earth, and talc deposits often run near or intersect with asbestos deposits. Asbestos is a group of minerals known to cause lung, throat, and ovarian cancer. If talc is not carefully inspected, asbestos can contaminate cosmetic products (Asbestos Exposure and Cancer Risk, 2017). Mica, on the other hand, is a metallic sheet mineral. Like talc, mica can be added to makeup as a filler and to help products apply smoothly. However, mica can also help add color to makeup, since it comes in a variety of natural colors (Goins, 2012). It


also does not risk the same contamination issues as talc. Another common ingredient in eye makeup is zinc stearate, a zinc salt of a fatty acid. Fatty acids are carboxylic acids that contain a long chain of carbons and hydrogens; in a salt, part of the fatty acid is negatively charged and associated with a positive ion such as zinc. Zinc derivatives are often added to eye makeup to act as adhesives as well as thickening agents (Zinc Stearate, Cosmetics Information). Some makeup may use magnesium derivatives instead of zinc, but the effects are the same. Cosmetic companies also usually add a “slip” to eye makeup to improve its texture. A common slip in eye

"A quick scan of a standard eyeshadow palette will most likely reveal the top ingredient is either talc or mica."

Figure 7: DMAE Source: Wikimedia Commons


Figure 8: Hydroquinone Source: Wikimedia Commons

"Although there has been a recent push to minimize the amount of preservatives in products, preservatives are vital to ensuring cosmetics are not contaminated with bacteria."

Figure 9: Kojic Acid Source: Wikimedia Commons

makeup is dimethicone, a man-made silicone polymer. Silicones are a family of polymers made from siloxane monomers whose backbone consists of non-carbon atoms. Silicones have many useful properties, including high heat resistance (Britannica, 2020); as such, they are found in everything from medicine to cookware. In the case of cosmetics, however, silicones are valued for their flexibility. The backbone of a silicone polymer consists of silicon atoms bound to oxygen atoms. The silicon-oxygen bond has a very low rotational barrier, meaning that the bond can rotate ‘freely’ in space (Polymer Properties Database). As a result, silicone products are often very flexible and smooth. This unique flexibility means that silicones can vastly improve the texture of eye makeup, allowing it to glide onto the eyelid with relative ease. As with much of the cosmetics industry, the exact slip varies by company, and there is no universal formula; some companies may elect to use silicone alternatives such as the fatty ester ethylhexyl palmitate.

In modern cosmetics, the bright colors in eye makeup come from color additives. The Federal Food, Drug, and Cosmetic Act of 1938 regulates the color additives that can be used in cosmetics, and only a subset of this list is approved for use on the eye. The list is extensive and includes everything from aluminum powder to FD&C Yellow No. 5 (Center for Food Safety and Applied Nutrition, 2017). However, some companies have found that the pigments approved by the FDA cannot give them all the colors they desire. Therefore, in recent years, there has been an increase in eye makeup marked “not safe for use around the eyes.” These pigments have been approved for use in cosmetics but are not approved for use around the eye due to increased risk of staining and allergic reactions

(Lebsack, 2019). The last components of eye makeup are preservatives. Although there has been a recent push to minimize the amount of preservatives in products, preservatives are vital to ensuring cosmetics are not contaminated with bacteria. A common family of preservatives in cosmetics is the parabens, with methylparaben and butylparaben being widely used. Parabens are currently approved for use by the FDA (and are often found in haircare and some skincare due to their effectiveness and low price), but concerns have been raised about correlations between parabens and cancer (Ross, 2019). In 2004, researchers found a concentrated number of parabens in breast cancer tissue, launching debates about whether parabens were promoting cancer growth (Harvey, 2004). Since parabens can act as hormone disruptors, some researchers are concerned about the potential effects of increased paraben levels in the body (Ross, 2019). However, recent human clinical studies have found no correlation between parabens and cancer, and the CDC has declared there is insufficient evidence to be concerned about paraben use (Ross, 2019). Nonetheless, parabens have begun to fall out of favor and are being replaced with other preservatives. Two popular alternatives are glycol, a water-soluble preservative that can also act as a moisturizer, and tocopherol, a vitamin also found in skincare (Seladi-Schulman, 2018).

Many of the ingredients of eye makeup carry over into face makeup. Face makeup comes in many different forms, and there is large variation in formulas. This section will focus on liquid face products, such as foundation or concealer. Much like eye makeup, face makeup begins with a base to help the ingredients stay together and apply smoothly on the skin. In the modern cosmetics market, most face makeup uses a water-silicone base. As in eye makeup,



dimethicone is most commonly used as the silicone component due to its ability to cover skin imperfections and improve the texture of the product (Kimbrough, 2013). However, dimethicone is a hydrophobic molecule, so a water-silicone base should begin to separate as silicone molecules repel the water molecules. To combat this, face makeup does not simply contain silicone and water mixed together in solution. Instead, silicone and water are bound through emulsifiers, preventing the product from breaking up (Kimbrough, 2013). A common emulsifier is dimethicone crosspolymer, in which dimethicone and water are linked through covalent bonds, preventing the components of the base from separating even if they repel one another (Dimethicone Crosspolymer, 2020). Dimethicone crosspolymer is specifically useful for face makeup because the crosslinked polymers form a film over the skin that keeps the active ingredients in contact with it (Dimethicone Crosspolymer, 2020). Face makeup has a much wider array of possible ingredients than eye makeup, and other emulsifiers such as polysilicone-11 may be used for similar effects.

Aside from texture, the pigment of face makeup is vital: consumers are searching for the perfect shade and won't buy face makeup that doesn't match their skin tone. The most common way for face products to get pigment is iron oxide and titanium dioxide. Iron oxide is the main colorant used in face makeup and naturally occurs in several colors, primarily red, yellow, and black (Iron Oxides, 2020). However, iron oxide is produced synthetically for cosmetics, allowing a much wider shade range. Synthetic iron oxides can be mixed in different color combinations or added to other colorants like titanium dioxide until the desired shade is reached (Iron Oxides, 2020).


The last major category of cosmetics is lip makeup. The first lip product most people think of is a traditional lipstick that spins up from a small compartment, but lip products also include lip gloss, lip balm, bullet lipsticks, and multiple other “specialized” lip products that companies market to consumers. For simplicity, the traditional lipstick will be examined here. Lipsticks can be broken down into three main ingredients: waxes, oils, and emollients (Freeman, 2009). The waxes are the foundation of any lipstick and allow it to be molded into the well-known cylindrical shape; the most common waxes for lipstick are beeswax, paraffin, and carnauba wax (Freeman, 2009). The next ingredients are oils, such as lanolin oil or cocoa butter, which allow lipsticks to deposit color onto the lips without crumbling and falling apart (Freeman, 2009). However, the real fun in lipstick is the color. The color in lipsticks can come from a variety of natural or synthetic ingredients similar or identical to the color additives found in eye makeup. Perhaps the most popular lipstick color is red. The red coloring in lipstick most commonly comes from a compound called carmine, a deep red colorant produced from carminic acid. Carmine is found not just in makeup but in food coloring as well (Yoquinto, 2013). However, some consumers have begun to avoid carminic acid because it is produced by crushing and soaking cochineal beetles in an acidic solution. Companies that want to be vegan or cruelty-free turn to other synthetic

Figure 10: L-ascorbic acid Source: Wikimedia Commons

"Lipsticks can be broken down into three main ingredients: waxes, oils, and emollients."

Figure 11: This is a siloxane monomer with the Si atom bound to an oxygen atom. The R groups represent different substituents, which vary based on silicone type. Source: Wikimedia Commons


red dyes. However, some synthetic dyes, such as Red No. 6, are derived from petroleum and carry problems of their own (Yoquinto, 2013).

Conclusion

"The cosmetics industry, which commands billions of dollars a year, has products that appear to target the inherent machinery of cells in order to achieve a result that is altered from the pre-treatment condition."

The cosmetics industry, which commands billions of dollars a year, has products that appear to target the inherent machinery of cells in order to achieve a result that is altered from the pre-treatment condition. Of this multi-billion-dollar market – much of which relies on heavy social media marketing through ads, celebrity endorsements, and other testimonials – 23% is skincare (second only to haircare) (Dobric, 2020). The skincare market is largely dominated by conglomerates that offer at-home treatments for conditions that range from acne to wrinkles to dark spots, among others. The beauty segment is not far behind, however, as it is the industry's most profitable branch, with makeup and eyeshadow contributing most to this trend (Dobric, 2020). While there has been a recent push toward eco-friendliness among products, there has been a more limited effort to understand just how the products are eco-friendly, and likewise a limited understanding of, and limited attempt to understand, the chemistry behind the cosmetics industry. Because of these limited attempts at understanding, and the limited FDA oversight of the topical treatments that make up the majority of the cosmetics industry, these massive makeup empires have arisen (Cosmetics ODM Market, 2019). Understanding the inherent cellular machinery and the medical manipulation of said machinery is key to ensuring the correct cosmetic investment – more widespread knowledge of the methods may shed light on their efficacy and may result in altered consumer choices.

References

Akhavan, A., & Bershad, S. (2003). Topical Acne Drugs. American Journal of Clinical Dermatology, 4(7), 473–492. https://doi.org/10.2165/00128071-200304070-00004

Arif, T. (2015). Salicylic acid as a peeling agent: A comprehensive review. Clinical, Cosmetic and Investigational Dermatology, 8, 455–461. https://doi.org/10.2147/CCID.S84765

Asbestos Exposure and Cancer Risk Fact Sheet. (2017).
Retrieved September 03, 2020, from https://www.cancer.gov/about-cancer/causes-prevention/risk/substances/asbestos/asbestos-fact-sheet

Bakkali, F., Averbeck, S., Averbeck, D., & Idaomar, M. (2008). Biological effects of essential oils – A review. Food and Chemical Toxicology, 46(2), 446–475. https://doi.org/10.1016/j.fct.2007.09.106


Brandstetter, A. J., & Maibach, H. I. (2013). Topical dose justification: Benzoyl peroxide concentrations. Journal of Dermatological Treatment, 24(4), 275–277. https://doi.org/10.3109/09546634.2011.641937

Castañeda-Ovando, A., Pacheco-Hernández, Ma. de L., Páez-Hernández, Ma. E., Rodríguez, J. A., & Galán-Vidal, C. A. (2009). Chemical studies of anthocyanins: A review. Food Chemistry, 113(4), 859–871. https://doi.org/10.1016/j.foodchem.2008.09.001

Center for Food Safety and Applied Nutrition. (2017). Summary of Color Additives for Use in the United States. Retrieved September 03, 2020, from https://www.fda.gov/industry/color-additive-inventories/summary-color-additives-use-united-states-foods-drugs-cosmetics-and-medical-devices

Clares, B., Ruíz, M. A., Morales, M. E., Tamayo, J. A., & Lara, V. G. (2010). Structural characterization and stability of dimethylaminoethanol and dimethylaminoethanol bitartrate for possible use in cosmetic firming. Journal of Cosmetic Science, 61(4), 269–278.

Cosmetics ODM Market: Information by Application (Skincare, Makeup, Haircare, Others)—Forecast Till 2026 (Rep. No. SR1404). (2019, December 2). Retrieved July 25, 2020, from Straits Research website: https://straitsresearch.com/report/cosmetics-odm-market

Del Valle, E. M. M. (2004). Cyclodextrins and their uses: A review. Process Biochemistry, 39(9), 1033–1046. https://doi.org/10.1016/S0032-9592(03)00258-9

Dimethicone Crosspolymer. (2020, January 10). Retrieved September 03, 2020, from https://thedermreview.com/dimethicone-crosspolymer/

Dutil, M. (2010). Benzoyl Peroxide: Enhancing Antibiotic Efficacy in Acne Management. Skin Therapy Letter, 15(10). https://www.skintherapyletter.com/acne/benzoyl-peroxide-antibiotic-efficacy/

Editors of Encyclopaedia Britannica. (2020, March 04). Silicone. Retrieved September 03, 2020, from https://www.britannica.com/science/silicone

Frith, D. K. T. (n.d.). Globalizing Beauty: A Cultural History of the Global Beauty Industry. 33.

Freeman, S. (2009, March 09). How Lipstick Works. Retrieved September 03, 2020, from https://health.howstuffworks.com/skin-care/beauty/skin-and-makeup/lipstick2.htm

Gardner, T. L., & Bée, C. (2020). The Cosmetic Industry. 13.

Goins, L. (2012, November 15). The Makeup of Makeup: Decoding Eye Shadow. Retrieved September 04, 2020, from https://www.webmd.com/beauty/features/decoding-eye-shadow

Green, B. A., Yu, R. J., & Van Scott, E. J. (2009). Clinical and cosmeceutical uses of hydroxyacids. Clinics in Dermatology, 27(5), 495–501. https://doi.org/10.1016/j.clindermatol.2009.06.023

Grimes, P. E., Green, B. A., Wildnauer, R. H., & Edison, B. L. (2004). The use of polyhydroxy acids (PHAs) in photoaged skin. Cutis, 73(2 Suppl), 3–13.

Harvey, P. W. (2004). Discussion of concentrations of parabens in human breast tumours. Journal of Applied Toxicology, 24(4), 307–310. https://doi.org/10.1002/jat.991

Huang, Y.-C., & Cheng, Y.-C. (2017). Isotretinoin treatment for acne and risk of depression: A systematic review and meta-analysis. Journal of the American Academy of Dermatology, 76(6), 1068–1076.e9. https://doi.org/10.1016/j.jaad.2016.12.028

Iron Oxides (CI 77491, CI 77492, CI 77499). (2020, February 24). Retrieved September 03, 2020, from https://thedermreview.com/iron-oxides-ci-77491-ci-77492-ci-77499/

Jain, N., & Chaudhri, S. (2009). History of cosmetics. Asian Journal of Pharmaceutics, 3(3), 164. https://doi.org/10.4103/0973-8398.56292

Kim, S.-H., Shum, H. C., Kim, J. W., Cho, J.-C., & Weitz, D. A. (2011). Multiple Polymersomes for Programmed Release of Multiple Components. Journal of the American Chemical Society, 133(38), 15165–15171. https://doi.org/10.1021/ja205687k

Kimbrough, S. (2013). Anatomy of a Beauty Product: Liquid Foundations. Retrieved September 03, 2020, from https://www.beautylish.com/a/vxais/anatomy-of-liquid-foundations

King, H. (n.d.). Talc: The Softest Mineral. Retrieved September 03, 2020, from https://geology.com/minerals/talc.shtml

Krakowski, A. C., Stendardo, S., & Eichenfield, L. F. (2008). Practical Considerations in Acne Treatment and the Clinical Impact of Topical Combination Therapy. Pediatric Dermatology, 25(s1), 1–14. https://doi.org/10.1111/j.1525-1470.2008.00667.x

Kumar, S. (2005). Exploratory analysis of global cosmetic industry: Major players, technology and market trends. Technovation, 25(11), 1263–1272. https://doi.org/10.1016/j.technovation.2004.07.003

Lebsack, L. (2019). Neon Eyeshadow Has Never Been More Popular, But Is It Safe? Retrieved September 03, 2020, from https://www.refinery29.com/en-us/2019/07/238234/neon-eyeshadow-makeup-palette-pigment-safety

Leyden, J. J. (2003). A review of the use of combination therapies for the treatment of acne vulgaris. Journal of the American Academy of Dermatology, 49(3, Supplement), S200–S210. https://doi.org/10.1067/S0190-9622(03)01154-X

Lummiss, J. A. M., Oliveira, K. C., Pranckevicius, A. M. T., Santos, A. G., dos Santos, E. N., & Fogg, D. E. (2012). Chemical Plants: High-Value Molecules from Essential Oils. Journal of the American Chemical Society, 134(46), 18889–18891. https://doi.org/10.1021/ja310054d

Madan, R. K., & Levitt, J. (2014). A review of toxicity from topical salicylic acid preparations. Journal of the American Academy of Dermatology, 70(4), 788–792. https://doi.org/10.1016/j.jaad.2013.12.005

Mazurowska, L., & Mojski, M. (2008). Biological activities of selected peptides: Skin penetration ability of copper complexes with peptides. Journal of Cosmetic Science, 59(1), 59–69.

O’Donoghue, J. L. (2006). Hydroquinone and its analogues in dermatology—A risk-benefit viewpoint. Journal of Cosmetic Dermatology, 5(3), 196–203. https://doi.org/10.1111/j.1473-2165.2006.00253.x


Panchaud, A., Csajka, C., Merlob, P., Schaefer, C., Berlin, M., Santis, M. D., Vial, T., Ieri, A., Malm, H., Eleftheriou, G., Stahl, B., Rousso, P., Winterfeld, U., Rothuizen, L. E., & Buclin, T. (2012). Pregnancy Outcome Following Exposure to Topical Retinoids: A Multicenter Prospective Study. The Journal of Clinical Pharmacology, 52(12), 1844–1851. https://doi.org/10.1177/0091270011429566

Pickart, L., & Margolina, A. (2018). Regenerative and Protective Actions of the GHK-Cu Peptide in the Light of the New Gene Data. International Journal of Molecular Sciences, 19(7). https://doi.org/10.3390/ijms19071987

Podda, M., Zollner, T. M., Grundmann-Kollmann, M., Thiele, J. J., Packer, L., & Kaufmann, R. (2001). Activity of alpha-lipoic acid in the protection against oxidative stress in skin. Current Problems in Dermatology, 29, 43–51. https://doi.org/10.1159/000060652

Polymer Properties Database. Retrieved September 03, 2020, from https://polymerdatabase.com/polymer classes/Silicone type.html

Polysaccharide Applications: Cosmetics and Pharmaceuticals. ACS Symposium Series 737. Edited by Magda A. El-Nokaly and Helena A. Soini (The Procter and Gamble Company). American Chemical Society: Washington, DC (Distributed by Oxford University Press). 1999. xvi + 348 pp. $135. ISBN 0-8412-3641-0. (2000). Journal of the American Chemical Society, 122(50), 12614. https://doi.org/10.1021/ja004825k

Rabin, R. (2020, June 23). Women With Cancer Awarded Billions in Baby Powder Suit. Retrieved September 03, 2020, from https://www.nytimes.com/2020/06/23/health/baby-powder-cancer.htm

Rajput, N. (2019). Cosmetics Market by Category (Skin & Sun Care Products, Hair Care Products, Deodorants, Makeup & Color Cosmetics, Fragrances) and by Distribution Channel (General departmental store, Supermarkets, Drug stores, Brand outlets) – Global Opportunity Analysis and Industry Forecast, 2014–2022 (pp. 1–137, Rep.). Portland, OR: Allied Market Research.

Ross, R. (2019, February 26). What Are Parabens? Retrieved September 03, 2020, from https://www.livescience.com/64862-what-are-parabens.html

Saeedi, M., Eslamifar, M., & Khezri, K. (2019). Kojic acid applications in cosmetic and pharmaceutical preparations. Biomedicine & Pharmacotherapy, 110, 582–593. https://doi.org/10.1016/j.biopha.2018.12.006

Seidler, E. M., & Kimball, A. B. (2010). Meta-analysis comparing efficacy of benzoyl peroxide, clindamycin, benzoyl peroxide with salicylic acid, and combination benzoyl peroxide/clindamycin in acne. Journal of the American Academy of Dermatology, 63(1), 52–62. https://doi.org/10.1016/j.jaad.2009.07.052

Seladi-Schulman, J. (2018, September 29). Tocopheryl Acetate: Uses, Benefits, and Risks. Retrieved September 03, 2020, from https://www.healthline.com/health/tocopheryl-a

Shahbandeh, M. (2019, October 24). Cosmetics Industry Statistics and Facts. Retrieved July 25, 2020, from Statista website: https://www.statista.com/topics/3137/cosmetics-industry/


Tanghetti, E., & Popp, K. (2009). A Current Review of Topical Benzoyl Peroxide: New Perspectives on Formulation and Utilization. Dermatologic Clinics, 27(1), 17–24. https://doi.org/10.1016/j.det.2008.07.001

Telang, P. S. (2013). Vitamin C in dermatology. Indian Dermatology Online Journal, 4(2), 143–146. https://doi.org/10.4103/2229-5178.110593

Thielitz, A., & Gollnick, H. (2008). Topical Retinoids in Acne Vulgaris. American Journal of Clinical Dermatology, 9(6), 369–381. https://doi.org/10.2165/0128071-200809060-00003

Toxic Potential of Materials at the Nanolevel | Science. (n.d.). Retrieved July 7, 2020, from https://science.sciencemag.org/content/311/5761/622.abstract

Utroske, D. (2020, June 1). Coronavirus Impact: What the market research says. Retrieved July 25, 2020, from https://www.cosmeticsdesign.com/Article/2020/04/20/Coronavirus-Impact-what-market-research-says-about-beauty

Willett, M. (2017, July 29). These 7 companies control almost every single beauty product you buy. Retrieved July 25, 2020, from https://www.businessinsider.com/companies-beauty-brands-connected-2017-7

Wolf, J. E. (2002). Potential anti-inflammatory effects of topical retinoids and retinoid analogues. Advances in Therapy, 19(3), 109–118. https://doi.org/10.1007/BF02850266

Worret, W.-I., & Fluhr, J. W. (2006). Acne therapy with topical benzoyl peroxide, antibiotics and azelaic acid. JDDG: Journal der Deutschen Dermatologischen Gesellschaft, 4(4), 293–300. https://doi.org/10.1111/j.1610-0387.2006.05931.x

Yoquinto, L. (2013, May 30). The Truth About Red Food Dye Made from Bugs. Retrieved September 03, 2020, from https://www.livescience.com/36292-red-food-dye-bugs-cochineal-carmine.html

Zinc Stearate. (2016). Retrieved September 04, 2020, from https://cosmeticsinfo.org/ingredient/zinc-stearate






BOARD WRITERS: ANNA BRINKS '21

Cover image: Vaccines have revolutionized modern medicine. From Edward Jenner’s smallpox vaccine to the current race for a COVID-19 vaccine, some of the brightest minds in science have investigated this remarkable technology. Source: Pixabay

Introduction

1.1 Impact of Vaccines on Modern Medicine and Global Health

An injection of attenuated or killed microorganisms (bacteria or viruses) is all it takes to awaken the immune system, allowing it to produce and recruit antibodies – proteins that patrol the body via the blood, recognize foreign substances, and annihilate them. Even after exposure to foreign substances, antibodies continue to circulate, providing protection against future exposure to pathogens, which include the causative agents of catastrophic diseases such as polio and measles. This process of administering a vaccine to initiate immunity against a disease has made an enormous contribution to global health, with both humans and other animals benefiting, especially in the developing world. Mortality from smallpox and measles was


massive in the pre-vaccination period, and an epidemic could wipe out up to half of an affected population (Greenwood, 2014). Fortunately, through vaccination, smallpox was completely eradicated in 1979, making it the only human infection eradicated through vaccination to date. Local transmission of measles, another potential candidate for eradication, is being disrupted in the Americas by intensive surveillance campaigns and rapid responses following detection of cases. The eradication of the rinderpest virus in 2011 represented another major milestone in the control of infectious diseases and the continued contribution of vaccination to global health (Greenwood, 2014). Rinderpest, closely related to measles, can cause high mortality in cattle, impoverishing families in developing countries dependent upon these animals and leaving them susceptible to malnutrition and various infectious diseases. Recent close interactions between research groups developing

Figure 1: Global number of child deaths per year, by cause of death. The graph depicts the number of children younger than 5 years old who died each year. The height of each bar shows the total number of deaths, with colored sections showing the number of children who died of diseases that are wholly or partially preventable by vaccines. The number of child deaths from causes for which vaccines are available declined from 5.5 million in 1990 to 1.8 million 27 years later. Graphic: Our World in Data; Data Source: Samantha Vanderslott and Bernadeta Dadonaite, 2013
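As a quick arithmetic check on the decline quoted in the caption above (the death counts are taken from the text; the percentages are simply derived from them):

```python
# Vaccine-preventable child deaths, per the Our World in Data figures
# cited in the caption: 5.5 million in 1990, 1.8 million 27 years later.
deaths_1990 = 5.5e6
deaths_2017 = 1.8e6

absolute_drop = deaths_1990 - deaths_2017           # 3.7 million fewer deaths
percent_drop = 100 * absolute_drop / deaths_1990    # roughly a two-thirds decline
```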

human and veterinary vaccines, facilitated by organizations such as the Jenner Vaccine Institute, have prompted further positive developments. For example, researchers have found that the tuberculosis vaccine could be used for both humans and their domestic animals (Greenwood, 2014). Overall, vaccines have revolutionized global health and the practice of medicine, preventing an estimated 2 to 3 million deaths each year (WHO, 2020). In fact, between 1990 and 2017, the number of child deaths from diseases for which vaccines are available dropped from a staggering 5.5 million to 1.8 million, as shown in Figure 1. 1.2 Basic Overview of the Immune System Humans and other mammals live in a world that is heavily populated by both pathogenic and non-pathogenic microbes, which harbor an array of toxins that can potentially threaten human health (Chaplin, 2010). To ensure that bodily function is maintained, the immune system, which consists of two arms––a nonspecific, innate arm and a more specific, acquired arm––holds these microbes in check by supporting normal tissue and organ function using a complex array of protective mechanisms. These mechanisms control and


eliminate pathological microbes and toxic or allergenic proteins while avoiding responses that produce excessive damage to the body’s tissues or that might eliminate beneficial microbes (Chaplin, 2010). Essentially, the immune system utilizes an exquisite feature that relies on detecting structural characteristics of the pathogen or toxin that mark it as distinct from host cells. This host-pathogen or host-toxin discrimination is essential to permit the host to eliminate the threat without damaging its own tissues (Chaplin, 2010).


Both the innate and adaptive immune systems exhibit self and non-self discrimination (Gonzalez et al., 2011). On the one hand, the innate immune system is characterized by hardwired responses that are encoded by genes in the host’s germ line that recognize molecular patterns shared by many microbes and toxins that are not present in the mammalian host. The innate immune system consists of physical barriers, such as epithelial cell layers bound up by tight cell-cell contacts, the secreted mucus layer that overlays the epithelium in the respiratory, gastrointestinal, and genitourinary tracts, and the epithelial cilia that sweep away this mucus layer and permit it to be constantly refreshed after it has been contaminated with inhaled or ingested particles (Chaplin, 2010).

The innate response also includes soluble proteins and bioactive small molecules that are either constantly present in biological fluids or that are released from cells as they are activated by foreign molecules (Chaplin, 2010). Finally, the innate immune system includes membrane-bound receptors and cytoplasmic proteins that bind molecules distinctly expressed on the surfaces of invading microbes.


On the other hand, the adaptive immune system is characterized by responses that are encoded by gene elements that somatically rearrange to assemble antigen-binding molecules with finely tuned specificity for unique foreign structures (Chaplin, 2010). The adaptive immune system produces long-lived cells that persist in an apparently dormant state with the potential to re-express effector functions swiftly after another encounter with their specific antigen, thereby permitting a more effective host response against specific pathogens or toxins when they are encountered a second time, even decades after the initial sensitizing encounter (Chaplin, 2010). This ability to re-express effector functions is the basis of immune memory–– a feature that vaccination relies on to trigger protection against a disease. Since the adaptive immune system consists of a small number of cells with specificity for any individual pathogen, toxin, or allergen, the responding cells must proliferate after encountering the antigen in order to attain sufficient numbers to mount an effective response against the microbe or the toxin. Therefore, the adaptive response generally expresses itself temporally after the innate response in host defense (Chaplin, 2010).

1.3 Vaccines: Mechanism of Action

The surfaces of pathogens contain antigens––proteins or polysaccharides on the outer surface of the pathogen. An antigen is a molecule that binds to antibody proteins in the body and initiates an immune response. Any given pathogen may contain many different antigens on its surface. For instance, viruses can contain anywhere from three to more than 100 unique antigens, whereas protozoa, fungi, and bacteria, which are larger, more complex organisms, contain hundreds to thousands of different antigens.

Upon exposure, the immune system produces antibodies that can bind to particular antigens on pathogens, leading to the activation of white blood cells, the most important of which are macrophages, B-lymphocytes, and T-lymphocytes. Macrophages swallow up and digest the pathogens in addition to dead or dying cells, leaving behind antigens. B-lymphocytes then detect the exposed antigens and assemble antibodies, which bind to the antigens. Finally, T-lymphocytes attack cells in the body that have already been infected by the pathogen (CDC, 2018). Antibodies attack antigens by specifically binding to a part of an antigen called the epitope, or antigenic site. These antibodies flag pathogens, directing the immune system to destroy them. Prior to pathogen invasion, antibody concentration in the body is relatively low; however, greater quantities are produced and recruited following immune activation. Like pathogen invasion, vaccination also triggers an upsurge in antibody concentration (CDC, 2018). A vaccine may be either live or dead. Live bacterial or viral vaccines are typically attenuated: incapable of causing disease but capable of triggering an immune response. When a vaccine is administered into the body, the immune system recognizes it as foreign, thereby initiating an immune response and increasing the production of antibodies that attack the administered vaccine. Subsequent doses of the vaccine act to boost this response, resulting in the production of long-lived antibodies and memory cells. Thus, vaccines act to prime the body, so that when it is exposed to the live, unattenuated disease-causing organism, the immune system is able to respond rapidly at a high level of activity. This process destroys the pathogen before it causes disease and reduces the risk of it spreading to other people (CDC, 2018).

Vaccines vary in how they stimulate the immune system, as some provide a broader response than others. Vaccines influence the immune response through the nature of the antigens they contain, including their number and various other characteristics, or through the route of administration, such as oral, intramuscular, or subcutaneous injection. The use of adjuvants (immune response boosters) in vaccines helps to determine the type, duration, and intensity of the immune response and the characteristics of the resulting antigen-specific memory (CDC, 2018).

For most vaccines, more than one dose may be required to provide sustained protection. Why? Firstly, for some vaccines (primarily attenuated vaccines), the first dose provides insufficient immunity, so more than one dose is needed to build more complete immunity. The vaccine that protects against the bacterium Hib, which causes meningitis, is a good example of this principle (CDC, 2018). Secondly, for other vaccines, immunity begins to wear off after a while. At that point, a “booster” dose is needed to bring immunity levels back up. This booster dose usually comes several years after the initial series of vaccine doses is given. For example, in the case of the DTaP vaccine, which protects against diphtheria, tetanus, and pertussis, the initial series of four shots that children receive as part of their infant immunizations helps build immunity. But a first booster dose is needed at four to six years old and a second booster is needed at 11 or 12 years of age. This booster for older children, teens, and adults is called Tdap (CDC, 2018). Thirdly, for some vaccines (primarily live vaccines), studies have shown that more than one dose is needed for everyone to develop the best immune response. For example, after one dose of the MMR vaccine, some people may not develop enough antibodies to fight off infection. The second dose helps maximize coverage across a population (CDC, 2018). Finally, in the case of flu vaccines, adults and children (six months and older) need to get a dose every year. An annual flu vaccine is needed because the flu viruses causing disease differ from season to season. Every year, flu vaccines are made to protect against the viruses that research suggests will be most common. Furthermore, the immunity a child gets from a flu vaccine wears off over time. Getting a flu vaccine every year helps keep a child protected, even if the vaccine viruses don’t change from one season to the next.
Children six months through eight years old who have never gotten a flu vaccine in the past or have only gotten one dose in past years need two doses the first year they are vaccinated (CDC, 2018). In addition to individual protection, vaccination also leads to herd immunity: immunization of large portions of the population to protect the unvaccinated, immunocompromised, and immunologically naive by reducing the number of susceptible hosts to a level less than the threshold needed for transmission. For example, immunization of greater than 80% of the global population against smallpox virus reduced transmission rates to uninfected subjects to a point low enough to achieve eradication of the virus (Mallory et al., 2018). Herd immunity has proved to be extremely


effective, especially in developing countries where vaccination resources are scarce or populations outstrip the available resources.
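The herd-immunity threshold described above follows from a standard piece of epidemiological arithmetic: if each case infects R0 others in a fully susceptible population, transmission can no longer sustain itself once more than 1 − 1/R0 of the population is immune. The short sketch below illustrates the calculation; the function name and the illustrative R0 values are our own choices for demonstration, not figures drawn from the sources cited in this article.

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of a population that must be immune so that each case
    infects, on average, fewer than one susceptible person (1 - 1/R0)."""
    if r0 <= 1:
        return 0.0  # an outbreak with R0 <= 1 cannot sustain itself anyway
    return 1 - 1 / r0

# Illustrative basic reproduction numbers (commonly cited rough estimates):
# smallpox ~ 5, measles ~ 15.
print(f"smallpox: {herd_immunity_threshold(5):.0%}")   # ~80%, consistent with
                                                       # the coverage cited above
print(f"measles:  {herd_immunity_threshold(15):.0%}")  # ~93%
```

Under these assumed R0 values, the formula reproduces the roughly 80% smallpox coverage mentioned above, and shows why highly contagious diseases such as measles demand far higher vaccination rates.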

History of Vaccine Development 2.1 Smallpox and Inoculation Smallpox is a devastating disease that ravaged humanity for centuries. It is caused by the variola virus, and it becomes contagious once the first sores appear in the mouth and throat. It can be spread through droplets from the nose or mouth that travel through the air when people cough or sneeze. Contact with the scabs, fluid within the sores, or contaminated bedding and clothing materials can also spread the virus. The virus remains contagious until the last smallpox scabs fall off (CDC, 2016). Initial symptoms include high fever followed by the appearance of a rash that begins as small red spots on the tongue and in the mouth that develop into sores. The rash will then appear on the face and spread outwards to the arms and legs and finally the hands and feet. The rash changes into sores and then finally pustules, which eventually form a crust, scab over, and fall off. These symptoms birthed smallpox’s alternative moniker the “speckled monster” which was commonly used in 18th-century England (Riedel, 2005). Smallpox results in death for approximately 3 out of 10 of those affected, and historically the case-fatality rate in infants was even higher, approaching 80% in London and 98% in Berlin during the late 1800s (CDC, 2016; Riedel, 2005). Survivors can suffer from permanent scars over large areas of their body and may even be left blind (CDC, 2016).


Smallpox arose sometime around 10,000 B.C.E., at the time of the first agricultural settlements in northeastern Africa. The devastation of smallpox is woven into history: early evidence of smallpox exists on the faces of mummies from the 18th and 20th Egyptian Dynasties (1570-1085 B.C.E.) as well as in ancient Sanskrit texts of India (Riedel, 2005). It was introduced to Europe between the fifth and seventh centuries and frequently caused epidemics during the Middle Ages. The beginnings of the decline of the Roman Empire in 108 C.E. coincided with a particularly large epidemic which caused the deaths of almost 7 million people. The Arab expansion, the Crusades, and the discovery of the West Indies all carried the deadly disease across the globe (Riedel, 2005). Infamously, smallpox was used during the French-Indian War in one of the first incidences of biological warfare by British commander


Figure 2: Smallpox lesions Source: Wikimedia Commons


Sir Jeffrey Amherst against Native Americans (Riedel, 2005).

Early treatments included largely ineffective herbal remedies and cold treatments. Inoculation, or variolation, was the most successful weapon against smallpox before the use of vaccines (Riedel, 2005). Inoculation involves taking a sample, often in the form of a scab or pus, from a sick patient and administering the sample to a healthy individual. Samples were commonly administered into small cuts, inhaled through the nose, or simply rubbed on the skin. Inoculation was first widely practiced with smallpox. Those who were inoculated would typically experience a mild form of the disease which had a significantly reduced death rate compared to a natural smallpox infection—one British ambassador who witnessed the use of inoculation in Northern Africa reported that mortality rates for natural smallpox were about 30%, while inoculation death rates were estimated to be 2% (Boylston, 2012).

Since the patient received a small dose of virus into the skin instead of inhaling a large dose, the mortality rates for inoculation were far below that of smallpox (Smith, 2011). However, inoculation was not without risks: 2-3% of inoculated patients died from the disease, became the source of a new epidemic, or suffered from other diseases such as syphilis that could be transmitted during the inoculation process (Riedel, 2005).

There is no historical consensus regarding where smallpox inoculation began, but it was certainly not, as is popularly believed, in Britain. 16th-century accounts place the invention of inoculation in India in 1580 and in China as early as 1000 (Boylston, 2012). These accounts were from people who estimated how far in the past inoculation had begun—the practices were already widespread and trusted across several parts of Asia. Archaeologists have only found definitive documentation of inoculation from the mid-1500s forward, which has left the origin of inoculation a mystery. The words used to describe the practice are similar across languages, leading historians to believe that there was a single origin for the practice, and that the name and practice spread together (Boylston, 2012). Before reaching Britain, inoculation was practiced in areas of North Africa, Asia, and Europe, specifically in the Ottoman Empire.


2.2 Edward Jenner and the Smallpox Vaccine Edward Jenner was one of the most influential figures in the history of immunization, and his work on eradicating smallpox is widely regarded as the origin of immunology (Smith, 2011). Jenner was born on May 17, 1749, the eighth child of the Reverend Stephen Jenner (Dunn, 1996). Orphaned at a young age, he was apprenticed at age 13 to a country surgeon. Jenner was an avid scientist, and he studied a variety of subjects, including cuckoo hatchlings and the hibernation of hedgehogs. In 1796, after hearing that dairy maids were protected from smallpox after suffering from the milder affliction of cowpox, Jenner decided to carry out an experiment. He used material from the fresh cowpox lesions of a young dairymaid named Sarah Nelms to inoculate a young boy. He then inoculated the boy with smallpox and observed that he was unaffected. Although it took several decades, vaccinations eventually became widely recognized, officially replacing inoculation in 1840. While Jenner was not the first to discover vaccination, his meticulous research and persistent advocacy for the practice allowed it to become widespread (Riedel, 2005).


In 1953, the first proposal to undertake a global smallpox eradication campaign was made by the WHO. The proposal was deemed unrealistic, and it would not be until 1966 that a plan to eradicate the disease in 10 years would be approved, calling for the WHO to contribute $2.4 million per year with additional cooperation from countries around the world. The largest obstacle was producing a heat-stable, fully potent vaccine; initially, less than 10% of the vaccine batches met these standards, but after improved production methods, more than 80% of the vaccine needed was produced in developing countries. The invention of the bifurcated needle, which increased the percentage of successful vaccinations, also enhanced vaccination efforts. On May 8, 1980, the WHO officially announced the successful eradication of smallpox (Henderson, 2011). 2.3 Louis Pasteur – Development of Cholera, Anthrax, and Rabies Vaccines French scientist Louis Pasteur succeeded in creating vaccines against fowl cholera, anthrax, and rabies. In the 19th century, fowl cholera was killing thousands of chickens across France. Pasteur cultured the bacterium that caused the disease, Pasteurella multocida, and noticed that when he injected the cultures into a chicken, the chicken would develop cholera. However, old cultures no longer had that effect on chickens. When a chicken was injected with the old cultures, it could be exposed to a virulent strain and survive. Pasteur called this process “vaccination”, and presented his results to other scientists in the Académie des Sciences (Berche, 2012). Subsequently, Pasteur focused his efforts on preventing anthrax, which is caused by Bacillus anthracis. At the time, anthrax was widely infecting livestock across France. Pasteur began culturing Bacillus anthracis and found that these cultures lost their virulence as time went on. He vaccinated animals with an attenuated culture, and then with a virulent culture 12 days later.
He was asked to perform his experiments for the public, so he inoculated 31 animals with the same procedure. When these animals and a control group were exposed to a highly virulent strain two weeks later, all of the control livestock died or became very ill, while all the inoculated animals survived and stayed healthy. Word spread throughout France of this success, and in 1894, 3.4 million cattle were vaccinated against anthrax (Berche, 2012).


Pasteur is also credited with the invention of the vaccination against rabies. He attenuated the virus by inserting it into a rabbit spinal cord and then removing the spinal cord and hanging it in a glass flask for 15 days. Pasteur had the opportunity to test whether or not this vaccine worked when a nine-year-old boy who had been bitten by a rabid dog was brought to him. Pasteur was reluctant to test the vaccine on the child but knew the boy would most likely die from rabies if nothing was done. The boy received 12 injections using attenuated virus from the rabbit spinal cords and survived (Berche, 2012). 2.4 Typhoid In 19th-century Britain, pathologist Almroth Wright decided to use killed microbes for vaccines, instead of the method of using attenuated live microbes that Pasteur had favored. Wright believed that killed preparations were less risky and just as effective (Chakrabarti, 2010). He focused his efforts on making a vaccine for typhoid fever, caused by Salmonella enterica serovar Typhi (S. Typhi). The bacterium typically enters the body after an individual eats or drinks contaminated food or water, and the mortality rate ranges from 5-30% if the patient is not treated. Wright, along with Richard Pfeiffer and Wilhelm Kolle, developed the typhoid vaccine in 1896 using heat-killed, phenol-preserved, and acetone-killed bacteria. The vaccine was used in England and Germany but is no longer used due to the side effects it causes (Sahastrabuddhe and Saluja, 2019). These side effects include inflammation, pain, and fever in 9-34% of those who receive the vaccine (Marathe et al., 2012). Currently, there are improved typhoid vaccines in use, including the live attenuated Ty21a vaccine (administered orally) and the Vi-polysaccharide vaccine (administered subcutaneously or intramuscularly) (Syed et al., 2020).


2.5 Bubonic Plague Waldemar Haffkine was a bacteriologist who worked with Louis Pasteur at the Pasteur Institute in Paris. He was working in India when there was an outbreak of the bubonic plague, otherwise known as the “Black Death”. He was sent by the government of India to Bombay in 1896 to study the illness and find treatments for it. He discovered that heat-killed plague bacillus protected rabbits from succumbing to the plague. Haffkine injected himself with heat-killed plague bacillus in 1897 to prove the safety of the vaccine, and then went on to vaccinate prisoners in a jail in Bombay (Bannerman, 1904). More than 20 million people were vaccinated against the plague with Haffkine’s vaccine in the years following, but the vaccine fell out of favor due to side effects such as fever (Butler, 2014). Today, modern sanitation and other public health practices have largely mitigated the impact of the disease, and antibiotics are available to treat it. 2.6 Diphtheria



In the late 19th century, scientists began developing serum therapies (therapies that use the serum of animals that have been immunized) to provide immunity against diphtheria and tetanus. At the time, these two diseases were very dangerous. During the American Civil War, there was a 90% mortality rate among soldiers infected with tetanus (Kaufmann, 2017). In 1892, 50,000 German children died of diphtheria (Winau and Winau, 2002). German physician Emil von Behring and Japanese physician Baron Kitasato Shibasaburō pioneered the development of serum therapy for the treatment of diphtheria, a disease caused by Corynebacterium diphtheriae bacteria, and tetanus, caused by Clostridium tetani. They performed experiments in which they injected serum from mice that had recovered from tetanus infections into mice that had not yet been exposed. When the second group of mice was subsequently exposed to the bacterium, they did not become infected (Kaufmann, 2017). Soon after, Behring performed a similar experiment on guinea pigs. He infected healthy guinea pigs with the diphtheria bacteria and then injected them with the serum of guinea pigs that had survived diphtheria and were now immune. The recovery rate with this therapy was high, leading Behring to conclude that the sera of animals that were immune to the disease could induce disease resistance in other animals as well (Winau and Winau, 2002). Soon, this technique was developed for use in humans. In the 1890s, Paul Ehrlich developed a method to produce large quantities of antidiphtheria serum from horses (Bosch and Rosich, 2008; Kaufmann, 2017). The horses were made immune to diphtheria through controlled exposure to the bacteria. At the beginning, they were administered a small dose, but the doses increased in size as the horses built up a tolerance and ultimately became immune (Winau and Winau, 2002). Serum would then be harvested for use in humans and other horses.

When serum therapy was used in children with diphtheria within two days of their diagnosis, the recovery rate was near 100% (Kaufmann, 2017). 2.7 Pertussis Whooping cough, the illness caused by Bordetella pertussis, was a major cause of childhood death at the time Dr. Pearl Kendrick and Dr. Grace Eldering began their work on developing a vaccine. They began by conducting research on pertussis patients in the town of Grand Rapids, Michigan during an outbreak in 1932 (Shapiro-Shapin, 2010). They collected cough plates from infected people in the town to analyze for their research. They then designed a better cough plate growth medium than what was currently in use to make the bacteria grow faster and allow for more rapid diagnosis of those with the disease (Shapiro-Shapin, 2010). This new method of rapid testing also allowed for the determination of safe quarantine lengths (Shapiro-Shapin, 2010). At the time, there was no established protocol for conducting clinical trials, and most tests involving human subjects used orphans or institutionalized patients treated against their will (Shapiro-Shapin, 2010). Kendrick and Eldering instead relied on the trust of doctors and parents who volunteered to have their children vaccinated. The original vaccine was a whole-cell vaccine made of the inactivated Bordetella pertussis administered in four doses of increasing bacteria content (Kendrick, 1942; Shapiro-Shapin, 2010). The results of the trial showed that the vaccinated group presented significantly lower rates of infection compared to the control group. As a consequence, Kendrick and Eldering’s pertussis vaccine was in regular use throughout the country by the 1940s (Kendrick, 1942; Shapiro-Shapin, 2010). 2.8 Polio Poliomyelitis, commonly known as polio, had been endemic to the United States for some time before an uptick in cases starting in the 1940s.
Scholars attribute the increase in polio cases to the onset of denser living conditions within the United States, as well as the hygiene hypothesis (Mnookin, 2012). The hypothesis states that as hygiene standards increased in the country, children were exposed to fewer diseases when they were young and protected by IgA antibodies delivered through their mothers’ breast milk; this lack of exposure to disease in infancy ultimately led to more cases


Figure 3: A UNICEF officer administering the oral polio vaccine in Hawassa, Ethiopia in 2010. Source: Flickr

of serious illness, such as polio, later in life (Colt, 2009). Jonas Salk, an American researcher at the University of Pittsburgh, was the first to create a polio vaccine, allowing the United States to effectively eliminate the disease by 1979. While the vaccine was widely considered a success, the case of the Salk vaccine exemplifies a number of key issues in the history of vaccination. Perhaps most pertinent is the story of how a disastrous failure of a vaccine manufacturer to deactivate the virus in a batch of polio vaccines led to the beginning of modern vaccine regulation. With the American public yearning for a reprieve from the often-paralytic polio virus, Salk’s research was heavily funded by the government despite criticism of his methods by other scientists (Offit, 2007). While the vaccine was found to be safe and effective when properly prepared, there was an incident where a batch of vaccine that contained live poliovirus was administered to over 200,000 American children. The incident, which caused 51 people to be paralyzed and left ten dead, led to the formation of the Division of Biological Standards within the Food and Drug Administration (Offit, 2007). The division required larger portions of vaccine batches to be quality tested before administration and continues to outline procedures for safe vaccine testing and distribution today.


The Salk vaccine also has a counterpart: the Sabin polio vaccine. While the Salk vaccine includes an inactivated version of a highly virulent strain of poliovirus, the Sabin vaccine includes a live, attenuated form of the virus. Both of these vaccines are still administered today, as both have distinct advantages and disadvantages. A dead virus vaccine is considered less dangerous, since the dead virus has no potential to cause disease. In rare cases, live, attenuated viruses may mutate to become virulent again and cause the disease they were meant to prevent. That being said, live vaccines have the benefit of eliciting a more robust immune response as the virus is alive and replicating in the body. The Salk vaccine only elicits systemic immunity (IgG antibodies), while the Sabin vaccine elicits both systemic and mucosal immunity (both IgG and IgA antibodies) (Baicus, 2012). With the dead virus vaccine, multiple doses are needed to generate protective antibody titers, and those titers decrease over a patient’s lifetime. The live Sabin vaccine was the vaccine of choice for the World Health Organization (WHO) when they resolved to eliminate polio globally in 1988. It is cheap, administered orally, and has the additional benefit of a herd effect (Baicus, 2012). The attenuated virus, like the virulent virus, spreads through the fecal-oral route, meaning that vaccinated members of a community shed the attenuated virus in their stool, and anyone who comes in contact with it could be effectively immunized (Altamirano et



al., 2018). WHO has since switched to the dead Salk vaccine, however, due to concerns about the live vaccine’s capacity to mutate. While effective, this dead vaccine is more costly and must be administered through injection into the muscle (Baicus, 2012). Polio has been eliminated in nearly every country across the globe, yet the disease remains endemic to Afghanistan, Nigeria, and Pakistan. The push for the global elimination of polio is ongoing with a current goal of elimination by 2023 (KFF, 2020).

Current Innovations in Vaccine Therapies 3.1 Administration Route


Choosing an effective administration route for vaccination is critical for initiating the desired immune response. The most well-known and conventional method of vaccine delivery is injection via hypodermic needle. In this route, a liquid-based vaccine is typically injected intramuscularly with a syringe. However, despite its widespread use, this method has many shortcomings. For instance, many children and adults suffer from trypanophobia, or the fear of needles, making vaccination a stressful ordeal (Mitragotri, 2005). In addition, needle-based vaccinations pose a risk for healthcare workers worldwide: an estimated 5% of injections result in accidental needle-stick injuries (Mitragotri, 2005). Moreover, in developing countries, the high cost of hypodermic needles encourages the reuse of syringes – a dangerous practice that promotes the spread of diseases (Mitragotri, 2005). These challenges have encouraged the development of alternative administration routes that do not require needles, including cutaneous and mucosal methods.

Cutaneous administration routes include the liquid-jet, ballistic, and topical application methods. In the liquid-jet method, a needleless injector generates a high-velocity liquid vaccination jet to penetrate the skin. This enables the delivery of a vaccine to the intradermal or intramuscular regions without a needle (Mitragotri, 2005). Besides avoiding the use of a hypodermic needle, an additional benefit of this method is that it spreads the vaccine over a larger region than a standard intramuscular injection. Moreover, the liquid jet targets the skin, which is highly involved in the immune response (Mitragotri, 2005). Thus, a lower dose is needed than for needle injections to generate adequate immunity. However, the liquid-jet method has its own drawbacks – the vaccination is often more painful than needle-based injections, and blood can contaminate the nozzle, enabling the spread of diseases between patients if the nozzle is reused (Mitragotri, 2005).

In the ballistic method, also known as epidermal powder immunization (EPI), powdered vaccines are accelerated to penetrate the stratum corneum – the outermost layer of the skin (Mitragotri, 2005). The stratum corneum is enriched with Langerhans cells, which promote the immune response. Additionally, powdered vaccines are easier to ship and store than liquid-based vaccines, which could make them especially appealing in developing countries or remote areas (Mitragotri, 2005). Though this is a relatively new method mainly used in animals, it is being studied for more widespread use in humans.

There are numerous topical application methods, including adjuvant patches, colloidal carriers, ultrasound techniques, and microneedles. These methods have been widely studied for general drug delivery but are new to vaccine administration. Overall, while topical application methods are easily deliverable and avoid painful needle injections, they often do not yield an effective enough immune response by themselves, since the stratum corneum is difficult to permeate (Mitragotri, 2005). As such, supplementary techniques to increase the permeability of the stratum corneum are needed. One such method is tape stripping, which involves using commercially available tape or rubbing the skin with abrasive emery paper to peel layers from the stratum corneum before vaccinating (Mitragotri, 2005). Researchers are still investigating topical vaccination routes, since such vaccines would be easily administered and would avoid many issues encountered with other routes, provided that they are effective in generating an immune response.

Beyond cutaneous administration methods, vaccines are also delivered via mucosal routes. These include delivery via the oral, nasal, ocular, pulmonary, vaginal, and rectal mucosal membranes.
The most common mucosal vaccination routes are oral and nasal; for instance, FluMist is delivered as a nasal spray and there are widely used oral vaccines for polio, typhoid fever, cholera, and rotavirus (Mitragotri, 2005). These vaccines are easily deliverable and avoid cross-contamination by bypassing the need for a needle or nozzle. However, a drawback of oral administration routes is that these vaccines encounter


Figure 4: Researchers at the Texas Center for Cancer Nanomedicine (TCCN) are working on the development of nano-vaccines for cancer therapy. In this research, bone marrow cells were stimulated with cytokines (signaling molecules used extensively for intercellular communication) to favor differentiation into antigen presenting cells, known as dendritic cells. These dendritic cells are then presented with the nano-vaccines (as shown in this image), which are porous silicon particle discs loaded with immune-stimulating molecules and tumor antigens. These now activated cells are then injected back into the host to stimulate an anti-tumor response. Creator: Brenda Melendez and Rita Serda, Ph.D., Source: NCI Visuals Online, public domain

regions with high enzymatic activity and harsh chemical environments, such as the highly acidic gastrointestinal tract (Mitragotri, 2005). Thus, the oral delivery of non-living vaccines is difficult, since DNA and proteins denature and break down in such environments. As a result, most orally-delivered vaccines are live, attenuated pathogens (Mitragotri, 2005). However, researchers are investigating the use of mediums to shield antigens from these harsh environments until they reach their target, such as polymer microspheres and bacterial ghosts (Mitragotri, 2005).

3.2 Target: Beyond Viruses

Although vaccines have traditionally been used to combat viruses, recent research has explored their potential to be used against other targets such as cancer, allergies, and addiction. Therapeutic cancer vaccines, unlike normal vaccines, which are administered to healthy individuals, are used to strengthen cancer patients' own immune responses in order to help them better attack cancer cells. Vaccines are also important as a preventative measure in cancer. For example, the human papillomavirus accounts for about 70% of cervical cancers, and the hepatitis B virus can cause liver cancer (Guo et al., 2013). Vaccines against these viruses


can therefore reduce the prevalence of their associated cancers. However, other vaccines can be used to directly target cancer itself. Examples of cancer vaccines include tumor cell vaccines, which may be prepared using irradiated patient-derived tumor cells or two or three established human tumor cell lines (Guo et al., 2013). Dendritic cell (DC) based vaccines are another option: DCs are the most effective antigen-presenting cell (APC) responsible for sensitizing naive T cells to specific antigens and therefore are appealing vehicles for antitumor vaccines (Cintolo et al., 2012). Figure 4 shows the development of a cancer nano-vaccine that relies on dendritic cells. Peptide based cancer vaccines can be used to target specific tumor associated antigens or stimulate the immune system with immunostimulatory adjuvants (Guo et al., 2013). Recent developments include a new cancer vaccine developed by the Mater Research team that has the potential to treat a variety of blood cancers including myeloid leukemia, non-Hodgkin's lymphoma, multiple myeloma, and pediatric leukemias, plus solid malignancies including breast, lung, renal, ovarian, and pancreatic cancers, and glioblastoma. The vaccine is made of human antibodies linked to a tumor-specific protein

"Although vaccines have traditionally been used to combat viruses, recent research has explored their potential to be used against other targets such as cancer, allergies, and addiction."


(Pearson et al., 2020). Allergies are typically caused by a hyper-immune response in which immunoglobulin E (IgE) is produced against harmless environmental antigens (Linhart et al., 2012). Antibodies can be either beneficial or detrimental, depending on their epitope specificity. Allergen-specific immunotherapy (SIT) aims to induce antibodies that block, rather than enhance, the allergic reaction. IgG antibodies may block IgE binding to allergens, interfering with allergen-specific IgE responses and blocking the anaphylactic reaction (Knittelfelder, 2009). Natural allergen extracts have traditionally been used to prepare these vaccines, and multiple applications of increasing allergen doses are required for them to become therapeutically effective. Recently, improved understanding of allergen structures and epitopes has promoted the engineering of new vaccines that are safer and more effective (Linhart et al., 2012). Mimotopes, peptides that mimic protein, carbohydrate, or lipid epitopes, are also being investigated as a way to achieve immunogenicity and induce epitope-specific antibody responses upon vaccination (Knittelfelder, 2009).

"Recombinant DNA vaccines take advantage of genetic engineering techniques to target the production of desired antigens while simultaneously removing potential co-contaminants."

Vaccines can also be used to combat addiction: anti-addiction vaccines can produce antibodies that block the effects of drugs on the brain. An estimated 149 to 272 million people, or 3.3–6.1% of the population aged 15–64, have used illicit substances at least once in the previous year. Addiction poses a significant social and medical problem, and current treatments have limited success (Shen et al., 2011). Drugs of abuse are generally small molecules that can readily cross the blood-brain barrier, while antibodies are larger molecules that cannot get into the brain. Therefore, to be an effective form of therapy, the antibodies must bind to illicit drugs and prevent them from entering the brain. Drugs do not usually provoke an immune response on their own, so to induce the production of antibodies, the drug must be chemically linked to a carrier such as a deactivated toxin (Kosten, 2005). Alternatively, passive immunotherapy uses monoclonal antibodies that are generated in a laboratory and administered intravenously (Kosten, 2005). In 1994, a cocaine vaccine was produced by attaching cocaine to the surface of an antigenic carrier protein, which for this first-generation vaccine was deactivated cholera toxin B subunit protein combined with the FDA-approved human adjuvant alum. The vaccine has continued to be refined and has


undergone several clinical trials. There are also vaccines in development that target nicotine, opiates, and methamphetamines. While further research is required to make these vaccines sufficiently strong and long-lasting, if successful, they have enormous potential to ameliorate the morbidity and mortality associated with illicit drug use (Shen et al., 2011). Antibodies can be used to treat drug overdose, reduce the incidence of relapse, or protect at-risk populations from becoming drug dependent (Kosten, 2005).

3.3 Types of Vaccines

Recent developments in vaccine technology have produced many different types of targeted vaccines. Although traditional vaccination methods such as Pasteur's attenuated viruses and Wright's use of inactivated viruses are still used, modern biotechnology techniques have allowed researchers to take advantage of newer and more effective routes to vaccine creation. Examples of these new types of vaccines include vaccines that use recombinant DNA, modified toxins (“toxoids”), and RNA.

Recombinant DNA vaccines take advantage of genetic engineering techniques to target the production of desired antigens while simultaneously removing potential co-contaminants. Produced from the isolation of desirable DNA fragments via restriction enzymes, recombinant DNA is typically propagated through the insertion of plasmids into sample cells - a process called transformation (Griffiths et al., 2000). The success of recombinant DNA vaccines was first demonstrated in the 1980s with the production of vaccinia virus recombinants for Hepatitis B and Herpes: genetically engineered poxviruses encoding the Hepatitis B surface antigen and the Herpes virus Glycoprotein B raised the survival rate of mice infected with these viruses to 100% (Paoletti et al., 1984). Recombinant vaccines have also been demonstrated to boost long-term immunity.
For example, a recombinant DNA vaccine for Hantavirus using material from the Gn and LAMP-1 Hantavirus vaccines produced an antibody titer of 102,000 after 28 weeks in mice; the inactivated virus produced a titer of only 6,400. Histological tissue analysis also demonstrated no significant toxic impacts when compared to healthy mice (Jiang et al., 2017).
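Titers like these come from serial-dilution assays: the serum sample is repeatedly diluted, and the titer is reported as the reciprocal of the greatest dilution that still gives a detectable signal. The readout logic can be sketched as follows; the function name, dilution series, and optical-density (OD) values here are invented for illustration, not taken from the Jiang et al. study:

```python
# Toy endpoint-titer calculation: the titer is the reciprocal of the
# highest dilution whose assay signal still exceeds a chosen cutoff.
# Dilution series and OD readings below are invented for illustration.

def endpoint_titer(dilutions, od_readings, cutoff):
    """Return the reciprocal of the last dilution with OD above cutoff."""
    titer = 0
    for dilution, od in zip(dilutions, od_readings):
        if od > cutoff:
            titer = round(1 / dilution)
    return titer

dilutions = [1/100, 1/400, 1/1600, 1/6400, 1/25600]  # serial 4-fold series
od_readings = [2.1, 1.6, 0.9, 0.4, 0.1]              # hypothetical OD values
print(endpoint_titer(dilutions, od_readings, cutoff=0.3))  # -> 6400
```

With these invented readings, the last dilution above the cutoff is 1/6,400, so the reported titer would be 6,400, which is why a titer of 102,000 indicates a far stronger antibody response.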


Another modern class of vaccines is the toxoid vaccine, produced by deactivating toxins secreted by certain kinds of bacteria. Toxoids are produced through the purification and denaturation of toxic proteins, either by high temperatures or by the addition of formaldehyde. This allows the toxic particle to provoke an immune response without causing damage (Yadav et al., 2014). For example, the toxoids for tetanus and diphtheria, first developed in 1927, are capable of provoking cellular immune responses in up to 90% of all patients (Blencowe et al., 2010). Additionally, unlike attenuated vaccines, toxoid vaccines generally last longer and are incapable of causing symptoms of the disease (Baxter, 2007). However, similar to inactivated vaccines, toxoid vaccines require multiple doses; the tetanus vaccine, for example, typically requires at least 2 doses of toxoid (Blencowe et al., 2010).

Finally, RNA-based vaccines have also shown potential as a novel alternative to traditional vaccination techniques. RNA-based vaccines depend upon the delivery of mRNA molecules that encode desirable antigens. These vaccines can be delivered ex vivo - through the injection of dendritic cells infused with mRNA - or in vivo - typically by packaging the mRNA in lipid nanoparticles (Verbeke et al., 2019). Although there are no mRNA vaccines currently approved for human usage, candidates have been developed for several types of viruses, including Moderna's COVID-19 vaccine, which is currently in phase 1 of clinical trials (Garde, 2020). mRNA vaccines have also been suggested as a potential immunotherapy treatment for some types of cancer (McNamara et al., 2015). These vaccines are believed to possess several advantages over their standard vaccine counterparts. For example, they avoid the risk of genomic integration (where viral DNA becomes integrated into the host's DNA) and are capable of encoding any desired protein.
Furthermore, because mRNA-based vaccines do not depend on viral growth, vaccines can be produced en masse without requiring extensive containment protocols (Armbruster et al., 2019).
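The core principle - that delivered mRNA is read codon by codon by the host ribosome to build an antigen protein - can be illustrated with a toy translation routine. The sequence and the heavily truncated codon table below are invented for illustration and do not represent any real vaccine construct:

```python
# Toy illustration of the idea behind mRNA vaccines: delivered mRNA is
# translated codon by codon into a protein. Real codon tables cover all
# 64 codons; this truncated table is for illustration only.

CODON_TABLE = {
    "AUG": "M",                               # start codon, methionine
    "UUU": "F", "GGC": "G", "AAA": "K", "GAA": "E",
    "UAA": "*", "UAG": "*", "UGA": "*",       # stop codons
}

def translate(mrna):
    """Translate an mRNA string into a one-letter protein sequence,
    stopping at the first stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):      # step through codons
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "*":                 # stop codon ends translation
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate("AUGUUUGGCAAAGAAUAA"))  # -> MFGKE
```

An mRNA vaccine effectively hands the cell such a message; whatever protein the sequence encodes - here a five-residue toy peptide - is what the immune system is then trained against.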

Modern Development of Vaccines

Vaccine development is a long and complex process, often involving 10-15 years of private and public involvement, with extensive oversight from the Center for Biologics Evaluation and Research (CBER) within the FDA. The federal government has been overseeing the approval of vaccines since 1902, after 13 children were


killed by contaminated diphtheria antitoxin. This tragic incident led Congress to pass the Biologics Control Act, which mandated facility inspections and other certification guidelines (Marshall and Baylor, 2011). The current system of vaccine development derives from two 20th-century laws: the U.S. Public Health Service Act of 1944 and the Food, Drug, and Cosmetic Act of 1938. These acts defined biologics as products “intended for diagnosis, cure, mitigation, treatment, or prevention of disease” (Gruber and Marshall, 2018). This definition makes vaccines a unique class of pharmaceuticals, falling under both drugs and biological products and subjecting them to rigorous tests and trials.

4.1 Exploratory Stage

Development begins with the exploratory stage, sometimes referred to as the pre-Investigational New Drug (pre-IND) phase. During this phase, scientists spend two to four years assessing and developing a procedure for vaccine development based on the disease in question (FDA, 2019). Proper planning at this phase directs scientists toward identifying or developing the correct immunogens or synthetic antigens. Once the antigens that might help prevent or treat the disease of interest have been identified, private companies begin developing the candidate vaccine to be used in the preclinical stage.

"The preclinical stage lasts approximately one to two years; however, despite being one of the shorter phases, most vaccines never progress beyond this stage, for they fail to produce the desired immune response."

4.2 Pre-clinical Stage

The preclinical stage lasts approximately one to two years; however, despite being one of the shorter phases, most vaccines never progress beyond this stage, for they fail to produce the desired immune response. The pre-clinical stage employs animal testing or testing in live non-human cell-culture systems to assess the safety of the vaccine along with its immunogenicity (ability to provoke an immune response) (Gruber and Marshall, 2018). The vaccine is also tested for its ability to provoke an immune response at various dosages and under different methods of administration (Gruber and Marshall, 2018). If the response is unsatisfactory and unable to substantiate the initial exploratory work, scientists will use the information gained and return to the exploratory stage for additional research on a candidate vaccine. However, if the response goes as intended, the results will be summarized in a report to a sponsor and


Figure 5: Traditional timeline of the vaccine development stages with an accelerated timeline for the current COVID-19 pandemic. Source: Flickr

development will progress towards the clinical phase.

"Before beginning any official phases of the vaccine development process, the private company in question must submit an Investigational New Drug (IND) application to the FDA and be approved."


4.3 Clinical Development

Before beginning any official phases of the vaccine development process, the private company in question must submit an Investigational New Drug (IND) application to the FDA and be approved (FDA, 2019). The company will often have a sponsor approach the FDA and carry out the IND application process. The FDA encourages the sponsor to request a meeting with its review board before applying to discuss any pre-clinical developments, study designs, data requirements, and potential issues that may arise during the trials (Gruber and Marshall, 2018). This pre-IND meeting essentially serves as an oral IND application in front of the review board and often catches small concerns that can be addressed before submitting the IND. When applying for an IND license, the sponsor must provide three specific descriptions: (i) a description of the composition and method of manufacture of the vaccine, along with the methods of testing for its safety, purity, and potency; (ii) a summary of the preclinical and exploratory experiments and results with the candidate vaccine; and (iii) a proposal of the

clinical study and the names and qualifications of each investigator in the private company (Gruber and Marshall, 2018).

4.4 Phases

Clinical development comprises three separate phases of clinical evaluation of the candidate vaccine; however, there is often overlap between the phases. Moreover, testing is often highly iterative: the first couple of phases may be repeated continuously as more data and research accumulate behind the vaccine (Gruber and Marshall, 2018). Clinical trials start with Phase I, which is mainly employed to get a preliminary evaluation of the safety and immunogenicity of the candidate vaccine. Phase I can be thought of as a second preclinical trial, but on humans, requiring a much more strenuous and careful process (Gruber and Marshall, 2018). Phase I trials generally involve 20-80 individuals; if the target group of the vaccine is children, trials will start with adults and gradually decrease in age until the target group is reached. Phase I trials have no blinding component, meaning that both the researchers and subjects may know whether a vaccine or a placebo is used (Gruber and Marshall,


2018). It is important to note that any positive response expressed by individuals, though it may indicate satisfaction with the vaccine, should not be considered scientific proof of its efficacy; only later do larger trials determine whether the candidate vaccine truly protects against the disease of interest.

The goal of Phase II is to substantiate the findings of Phase I with a larger group of people. Companies will ask hundreds of volunteers to participate in clinical trials, some of whom may be selected because they are at risk of acquiring the disease (National Vaccine Advisory Committee, 1997). These trials are randomized, well-controlled, include placebo arms, and are often double-blinded (neither the participants nor the experimenters know who is receiving a particular treatment) (Gruber and Marshall, 2018). In addition to confirming the safety and immunogenicity of the candidate vaccine from Phase I, Phase II also tests proposed dosages, schedules of immunization, and methods of delivery. Phase II will often fall back into Phase I with new findings about dosage amounts and delivery methods.

If results from these trials are promising, some companies will also conduct human challenge trials on a small number of people. Human challenge trials are trials in which volunteers, regardless of immunization status, are deliberately exposed to an infectious disease. The challenge pathogen is subject to attenuation or kept as close to wild-type and pathogenic as possible (WHO, 2016). To minimize risk, however, most clinical trials genetically modify the pathogen in a manner that takes the volunteers' health into consideration. These challenge trials may provide preliminary data on a vaccine's activity against infectious diseases but must be conducted within an ethical framework in which truly informed consent is given (WHO, 2016).
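Readouts from such trials are conventionally summarized as vaccine efficacy: one minus the ratio of the attack rate (infections per participant) in the vaccinated arm to that in the placebo arm. A minimal sketch of this arithmetic, with invented trial counts rather than data from any real study:

```python
# Vaccine efficacy from a two-arm trial: VE = 1 - (ARV / ARU), where
# ARV and ARU are the attack rates (cases / participants) in the
# vaccinated and unvaccinated (placebo) arms. Counts below are invented.

def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    arv = cases_vax / n_vax            # attack rate, vaccinated arm
    aru = cases_placebo / n_placebo    # attack rate, placebo arm
    return 1 - arv / aru

# Hypothetical numbers: 10 cases among 10,000 vaccinated participants,
# 100 cases among 10,000 placebo recipients.
ve = vaccine_efficacy(10, 10_000, 100, 10_000)
print(f"{ve:.0%}")  # -> 90%
```

Under these invented counts, vaccination reduced the attack rate tenfold, so the reported efficacy would be 90%; real trials additionally report confidence intervals around this point estimate.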

Phase III trials serve as the true test of whether a vaccine can protect against an infection or disease. The scientific method and experimental procedures are heavily stressed during these trials, as Phase III evaluates an experimental vaccine by comparing the rate of infection in individuals given the experimental vaccine with that in a group given a placebo. These groups range from thousands to tens of thousands of people in order to test safety at scale (Gruber and Marshall, 2018), since certain side-effects of the candidate vaccine may only surface when a large group is tested (FDA, 2019). These trials also test the efficacy of the vaccine, asking questions about whether the vaccine prevents the disease, whether it leads to the production of antibodies or other immune responses, and/or whether it will prevent infection (Gruber and Marshall, 2018). Human challenge trials may sometimes occur here too, providing additional corroboration of the vaccine's safety and efficacy. Only when a vaccine shows satisfactory results in Phase III can it move on to the post-marketing phase, where it becomes available to the general population.

4.5 Regulatory Review and Approval

The licensing stage occurs once the clinical trials are completed. A biologics license application (BLA) must be submitted to the CBER Office of Vaccines Research and Review and must include data proving safety and efficacy, along with a process to manufacture the vaccine in a consistent way. A multidisciplinary team of clinicians, statisticians, pharmacologists, and other scientists reviews the application. The product may be licensed, or the reviewers may ask for additional studies to be performed. There are accelerated review processes for vaccines which may prevent life-threatening diseases. Some vaccines, such as vaccines against bioterrorism threats, may be approved with only animal trial data if a human trial would be unethical (Marshall and Baylor, 2011).

After regulatory approval, the vaccine is further monitored in the post-approval phase. The manufacturing process is closely monitored to ensure that contaminants are not being introduced into the vaccines. Those who receive the vaccines are also monitored for adverse effects, and those who manufacture the vaccines are required to continue to monitor and report safety data (Marshall and Baylor, 2011).

"A biologics license application (BLA) must be submitted to the CBER Office of Vaccines Research and Review and must include data proving safety and efficacy, along with a process to manufacture the vaccine in a consistent way."

4.6 Manufacturing

Once vaccines have been approved for distribution, the next phase of development is mass manufacture. The first step in this stage is the production of the desirable antigen - either from growth and inactivation of the pathogen or from the production of a desirable recombinant DNA fragment. Vaccines are typically grown in a variety of different media; influenza is grown in chicken eggs, while Hepatitis A is propagated in diploid cells (Gomez et al., 2013). The viral particles or toxoids produced are then collected, purified, and deactivated, typically through extensive heating or the addition of formaldehyde.


Figure 6: A computer rendering of SARS-CoV-2, the virus that causes COVID-19. Source: Pxhere

"Operation Warp Speed aims to deliver 300 million doses of a safe, effective COVID-19 vaccine by January 2021."


The formaldehyde is later removed through extensive purification to eliminate potential harm to humans (WHO, 2020). Next, for some pathogens that cannot generate a sufficient immune response, adjuvants are added to stimulate the response. Examples of these adjuvants include aluminum salts for the pertussis vaccine and squalene for influenza (Di Pasquale et al., 2015). Finally, some vaccines may have stabilizers and preservatives added to maintain their effectiveness while in storage. Such agents include sorbitol for the yellow fever vaccine, potassium glutamate for the rabies vaccine, and 2-phenoxyethanol for the polio vaccine (Gomez et al., 2013).

4.7 Quality Control

Finally, up until vaccines are distributed, they are extensively evaluated for quality control purposes. As many vaccines are composed of multiple ingredients, several different types of assays are typically required, depending on the type of vaccine in question. One of the most important factors considered is vaccine efficacy, as measured through the quantity and quality of antigens. Techniques like mass spectrometry and the enzyme-linked immunosorbent assay (ELISA) work to quantify antibodies and identify any defects in the structure of antigen proteins (Metz et al., 2009; Engvall and Perlmann, 1972). Other techniques, like isoelectric focusing and reversed-phase chromatography, serve to ensure the purity of vaccine material (Metz et al., 2009). To ensure that product quality is always up to standard, national regulatory agencies follow an “independent lot release” system. Under this system, individual “lots” of manufactured vaccines are evaluated independently from the manufacturer to ensure quality control (WHO, 2013). Ultimately, after distribution has begun, national regulators are empowered to engage in “post-market surveillance.” As a part of this process, a national regulator will collect data on the distribution of a vaccine, noting if there is an unexpected increase in adverse reactions (Raj et al., 2019).

COVID-19 Vaccine Case Study: Operation Warp Speed

5.1 Infrastructure and Current Research

Operation Warp Speed aims to deliver 300 million doses of a safe, effective COVID-19 vaccine by January 2021. This initiative is part of a broader strategy to accelerate the development, manufacturing, and distribution of COVID-19 vaccines, therapeutics, and countermeasures (HHS, 2020). Operation Warp Speed is a comprehensive effort involving partnerships among components of the Department of Health and Human Services (HHS), including the Centers for Disease Control and Prevention (CDC), the Food and Drug Administration (FDA), the National Institutes of Health (NIH), and the Biomedical Advanced Research and Development Authority (BARDA), as well as the Department of Defense (DoD). There is also collaboration with private firms and other federal agencies. To expedite the vaccine development process, rather than


Figure 7: A chart displaying the percentage of children 12-23 months of age who have received a diphtheria, pertussis, and tetanus vaccine (DPT). The size of the circle denotes the country's population. Multiple DPT vaccines are required to induce immunity, so the vaccination rate for DPT is considered a good indicator of the strength of a country's vaccination program. Image source: Our World in Data (Vanderslott et al., 2013)