
Note from the Editorial Board

Dear Readers,

We are proud to present another print issue of the Dartmouth Undergraduate Journal of Science. Contained within this issue are a variety of article types, including exposés of important scientific phenomena, developing techniques and therapies, and student research. All of these articles describe different forms of “flow” in various areas of science. This broad theme allows writers to describe their varying areas of interest in ways that play together and connect.

In this issue, Paul Harary and Sam Neff describe new therapeutic methods that could be used to treat various diseases, primarily cancers. Harary depicts up-and-coming cancer therapeutics that ultimately trigger apoptosis, accomplished by targeting PD-1 inhibitors to tumors using antigens specific to cancer cells. Neff describes the use of CRISPR/Cas9 in gene therapies, focusing on a particularly interesting delivery mechanism for this tool: nanoparticles, such as liposomes.

This issue also highlights student research coming out of the Laidre lab. The particular project featured here, a thesis by Leah Valdes, was conducted at the Shoals lab in the Gulf of Maine. Her investigation of how marine animals locate sparse resources is very much worth the read. In a very different investigation of the natural environment, Sarah Chong describes the factors that affect the movement of glaciers.

This Winter issue represents the culmination of our entire staff’s hard work. Thank you for reading, and we hope you enjoy.

Sincerely,
Sam Reed
President

The Dartmouth Undergraduate Journal of Science aims to increase scientific awareness within the Dartmouth community by providing an interdisciplinary forum for sharing undergraduate research and enriching scientific knowledge.

EDITORIAL BOARD
Editor-in-Chief: Kevin Kang ’18
President: Sumita Strander ’18, Sam Reed ’19
Chief Copy Editor: Anirudh Udutha ’18
Managing Editors: Anirudh Udutha ’18, Logan Collins ’19, Don Joo ’19
Assistant Editors: Nan Hu ’18, Peter Vo ’18, Cara van Uden ’19
Layout & Design Editor: Gunjan Gaur ’20
Webmaster and Web Development: Arvind Suresh ’19

STAFF WRITERS
Dylan Cahill ’18, Krishan Canzius ’18, Nan Hu ’18, Kevin Kang ’18, Saba Nejab ’18, Chiemeka Njoku ’18, Chelsea Lim ’18, Sumita Strander ’18, Peter Vo ’18, Kevin Chao ’19, Paul Harary ’18, Hjungdon Joo ’19, Zach Panton ’19, Sam Reed ’19, Arvind Suresh ’19, Cara Van Uden ’19, John Buete ’20, Sarah LeHan ’20, Anders Limstrom ’20, James Park ’20, Zachary Wang ’20, Nishi Jain ’21, Sophia Koval ’21

ADVISORY BOARD
Alex Barnett – Mathematics
David Bucci – Neuroscience
Marcelo Gleiser – Physics/Astronomy
David Glueck – Chemistry
Carey Heckman – Philosophy
David Kotz – Computer Science
Richard Kremer – History
William Lotko – Engineering
Jane Quigley – Kresge Physical Sciences Library
Roger Sloboda – Biological Sciences
Leslie Sonder – Earth Sciences

SPECIAL THANKS DUJS Hinman Box 6225 Dartmouth College Hanover, NH 03755 (603) 646-8714 http://dujs.dartmouth.edu dujs@dartmouth.edu Copyright © 2017 The Trustees of Dartmouth College

Dean of Faculty Associate Dean of Sciences Thayer School of Engineering Office of the Provost Office of the President Undergraduate Admissions R.C. Brayshaw & Company

Table of Contents


Restoring the Cancer Immunity Cycle: Immunotherapy and the PD-1/PD-L1 Checkpoint Pathway Paul Harary ’18


The Game of Life and the Economy of Nature: Relating Darwin's Evolution to Adam Smith's Invisible Hand Nishi Jain ’21


Fixing A Faulty Genome: Mechanisms for the Delivery of Gene Therapy Sam Neff '21


Developments in Flow Cytometry Sam Reed '19

Effect of Impurities on Glacial Ice Sarah Chong '21



All About Arsenic Poisoning Jenny Chen '21


Understanding Uncle Mo Peter Vo '18


Shoals: An Island Where Dartmouth Can Study Marine Life Leah Valdes '18, Mark Laidre


2017 International Science Essay Competition: Second and Third Place Winners


Socially Assistive Robots: An Emerging Technology for Treating Autism Sei Chang

Finally a Cure to the Ticking Time Bomb Prakruti Dholiya







A More Realistic Spatial Evolutionary Game Theory Model Jeffrey Qiao

The Symptoms of Child Physical Abuse by Frequency and Specificity Yuri Lee, Garam Noh, Alexandra A. Barber, Katherine Ginna, Dennis Cullinane



Restoring the Cancer Immunity Cycle: Immunotherapy and the PD-1/PD-L1 Checkpoint Pathway BY PAUL HARARY '18

The Cancer-Immunity Cycle

The immune system has received a great deal of attention in the last few decades for its potential in the treatment of cancer. Immunotherapies aim to enhance the immune response to detect and eliminate cancer, and many efforts have been made to selectively activate these natural defense mechanisms to target tumor cells. These approaches typically exploit the fact that tumor cells present certain molecules on their cell surface in greater quantities than normal cells, allowing the immune system to recognize and attack them. These cell-surface molecules are known as tumor-associated antigens (TAAs), and many immunotherapies function by directing adaptive immune cells to target particular TAAs. One of the mechanisms that the immune system uses to eliminate cancerous cells is a multi-step process referred to as the Cancer-Immunity Cycle (Kim, DS). This system involves T cells, a type of lymphocyte responsible for

cell-mediated immunity, recognizing TAAs and triggering a response. More specifically, the Cancer-Immunity Cycle is characterized by the following steps: 1) release of antigens by cancer cells; 2) uptake of antigens by antigen-presenting cells (APCs) such as dendritic cells (DCs), which present them via MHC I and MHC II molecules on their cell membranes; 3) priming of naïve T cells and activation of effector T cells against the cancer-specific antigen; 4) trafficking of T cells to tumors; 5) infiltration of T cells into tumors; 6) recognition of cancer cells by T cells; and 7) killing of cancer cells via release of cytotoxins (which induce apoptosis) or perforins (which create perforations in the cell). The immune system has several checkpoints in place to prevent attacks upon normal, healthy cells, which would lead to autoimmunity. Immune checkpoints can be either stimulatory, giving positive feedback to immune cells to attack certain targets, or inhibitory, providing negative feedback so that T cells know not to

Figure 1: Nivolumab, a human IgG4 anti-PD-1 monoclonal antibody discovered at Medarex, developed by Medarex and Ono Pharmaceutical, and brought to market by Bristol-Myers Squibb. Source: Wikimedia Commons


engage with self-cells. Although these safeguards are essential to maintaining self-tolerance and preventing the immune system from acting indiscriminately, they also provide a mechanism by which cancer cells can evade immune regulation. Mutations that result in a cancer cell presenting inhibitory checkpoint markers on its cell surface allow it to escape attack by T-cells via the Cancer-Immunity Cycle.

“Mutations that result in a cancer cell presenting inhibitory checkpoint markers on its cell surface allow it to escape attack by T-cells via the Cancer-Immunity Cycle.”

Checkpoint Inhibitors

A class of immunotherapeutic drugs known as immune checkpoint inhibitors aims to restore the cancer-immunity cycle by blockading the negative feedback signals that cancer cells present. They use highly specific antibodies (large blood proteins produced by the immune system to bind and neutralize particular pathogens) to bind and block inhibitory checkpoint receptors. By doing so, these drugs prevent T-cells from being deactivated, which has the effect of stimulating the immune system against tumor cells. The best-characterized of these inhibitory immune checkpoints are CTLA-4 and PD-1, both of which function to down-regulate the immune response and suppress T-cell inflammatory activity.

The PD-1 Pathway

“All of this points to the importance of finding a way to block PD-1/PD-L1 and restore normal immune function.”


PD-1, which stands for Programmed cell death protein 1, is a cell-surface receptor that is engaged by its ligand PD-L1. PD-1 is expressed on both major sub-populations of T-cells, CD8+ and CD4+ T-cells, following activation by APCs. It delivers inhibitory signals via an immunoreceptor tyrosine-based inhibitory motif (ITIM) when stimulated by PD-L1. This terminates responses of T-cells, typically to self-peptides, to prevent autoimmunity, and in the case of chronic infections, to prevent T-cell exhaustion. PD-1 knockout mice, for example, were shown to develop glomerulonephritis similar to that associated with lupus (Nishimura, 1999). In another study, PD-1 knockouts experienced dilated cardiomyopathy and sudden death by congestive heart failure. Furthermore, the mice exhibited high-titer circulating IgG autoantibodies reactive to a 33-kilodalton protein unique to cardiomyocytes (Nishimura, 2001). Both of these results suggest that PD-1 plays an important role in the prevention of autoimmune diseases. Structurally, the PD-1 membrane protein is a 268-amino-acid member of the extended CD28/CTLA-4 family of T-cell regulators, which are involved in activation and contraction of the T-cell response. The protein contains an intracellular tail with two phosphorylation sites within the ITIM, indicating that it works as an “off switch” via negative regulation of T-cell

Receptor (TCR) signals. In addition to activated T-cells, PD-1 is also expressed on B-cells (the key players of humoral immunity that are responsible for directing antibody production) and macrophages (one of the major types of APCs) (Agata, 1996). This indicates that PD-1 has a wider impact than CTLA-4, which is unique to T-cells. Tumor cells have been observed to upregulate the PD-1 ligand, PD-L1, thereby inhibiting anti-tumor activity by engaging PD-1 on regulatory and conventional effector T-cells. As a result, high levels of PD-L1 expression have been correlated with low survival rates in pancreatic, esophageal, and prostate cancers (Syn, 2017). Upregulation of PD-L1 on cancer cells has been shown to help them evade apoptosis via several mechanisms. Most obviously, binding of PD-1 inactivates cytotoxic T-lymphocytes (known as CTLs or CD8+ T-cells) and prevents them from inducing apoptosis in cancer cells. However, PD-L1 can also engage PD-1 on other immune cells in order to upregulate production of interleukin-10, a cytokine (an immune-system messenger molecule) that inhibits the function of another sub-class of T-cells, CD4+ or “Helper” T-cells (Said, 2010). CD4+ T-cell dysfunction in turn inhibits expansion of the T-cell population as a whole and has a significant immunosuppressive effect.

PD-1 and PD-L1 Inhibitors

All of this points to the importance of finding a way to block PD-1/PD-L1 and restore normal immune function. Pharmaceutical companies quickly realized the potential of this form of immunotherapy and began developing drugs to target the pathway. In 2008, the first clinical trials of PD-L1-blocking drugs were conducted on patients with advanced blood cancers. PD-L1 was chosen as a target, rather than the PD-1 receptor, since it was shown that PD-L1 is expressed on 40-50% of melanomas with otherwise limited expression in visceral tissues. Later, in 2014, the first anti-PD-1 medication received FDA approval for the treatment of melanoma. Nivolumab, marketed as Opdivo, is a humanized IgG4 monoclonal antibody to the PD-1 receptor that binds with high affinity, thereby removing the negative regulator activity and reversing immune suppression (Johnson, 2015). In 2017, further developments were made in PD-1-based immunotherapies with Pembrolizumab (trade name Keytruda), another anti-PD-1 antibody. It became the first cancer drug to be approved based on tumor genetics instead of tumor location or tissue type. It is prescribed to treat tumors that display genetic mutations that impair DNA mismatch repair,

which tends to result in many surface TAAs. Pembrolizumab has been shown to be effective in modulating the immune system to clear cells presenting these TAAs, which is especially useful given that such mutations have been shown to be associated with unresectable and metastatic solid tumors (Syn, 2017). PD-1/PD-L1 therapies are not always effective in treating patients. However, when they are beneficial, the positive effects tend to last for several years. In fact, studies have shown that the benefits can last indefinitely in certain cases (Johnson, 2015; McCullar, 2017). The durability of these agents increases even further when they are combined with other forms of treatment, such as radiation therapy, chemotherapy agents, cancer vaccines, and certain targeted therapies.

Immune-Combination Therapy

Although anti-PD-1/PD-L1 antibodies have made tremendous progress in the past decade, they remain ineffective in a large number of patients. However, when combined with another form of therapy, the spectrum of responsive patients increases significantly relative to monotherapy alone (Ott, 2017). Such regimens can include combinations within drug classes or across classes. A striking example of the latter was recorded by a 2017 study, which showed pembrolizumab to be particularly effective when administered in combination with another class of immune checkpoint inhibitor, an anti-CTLA-4 antibody therapy called ipilimumab (trade name Yervoy). Combining a standard dose of pembrolizumab with four smaller doses of ipilimumab resulted in robust anti-tumor activity in patients with advanced melanoma, while maintaining a relatively low toxicity profile (Long, 2017). Another benefit of immune-combination therapy is that it allows for modulation of responses against tumors that are not already recognized by the immune system. For example, many immune checkpoint blockade agents are thought to be effective only in mounting responses to cancer cells against which there are pre-existing CD8+ T-cells. In other words, they are only able to enhance previously formed immune responses, not generate them de novo. However, certain combination strategies pair PD-1 therapies with cancer vaccines, targeted therapies that stimulate co-stimulatory molecules, or adoptive T-cell therapy in order to achieve a response against non-T-cell-inflamed tumors. Recently introduced, chimeric antigen receptor T-cell therapy (CAR-T) bears great potential for combination with existing PD-1 agents. CAR-T, a process that involves genetically engineering T-cells

to display specific TCRs in order to engage tumor cells, allows for the generation of strong anti-cancer responses in patients who lack immune-competence (such as those with acute lymphoblastic leukemia). If CAR-Ts can be used to induce an inflamed tumor microenvironment, then PD-1 inhibitors may be able to enhance and broaden the immune response.

Figure 2: Antigen presentation by an Antigen Presenting Cell (APC) and activation of T-cells. Source: Wikimedia Commons

Risks Associated with Checkpoint Inhibitors

Despite the many important clinical benefits that have been demonstrated by checkpoint inhibitors, they have also been associated with a unique range of side effects. Therapies that act as checkpoint blockades “remove the brakes” from the immune response, allowing for greater activation of anti-tumor T-cell effector functions. However, the danger of “removing the brakes” is a higher risk of autoimmune reactions and destruction of the host’s own tissues as well. For example, a 2017 meta-analysis found that PD-1 inhibitors increased the incidence and risk of pneumonitis in cancer patients relative to those being treated with traditional chemotherapies alone, in a dose-independent manner (Wu, 2017). This increase in risk differed greatly between patients with various tumor types. However, this toxic side effect was rare and may be managed by using lower doses of checkpoint blockades in combination with conventional therapies. Moreover, anti-PD-1 antibodies have been shown to have far fewer and less frequent side effects than CTLA-4-targeted drugs such as ipilimumab. It has been suggested that this occurs because PD-L1 upregulation is more closely associated with tumor cells than CTLA-4, which is present on all regulatory T-cells (a sub-population of CD4+ cells) and activated conventional T-cells.

“With over 50 drugs currently in development, checkpoint inhibitors and combination therapy may prove to be the future of unresectable cancers and other diseases once believed to have poor immunogenic potential.”

Figure 3: Tumor immune escape via the PD-1/PD-L1 interaction. Source: Wikimedia Commons


Checkpoint inhibitors have substantially improved the prognosis for patients with advanced melanomas and other advanced diseases, often succeeding when other therapies have been ineffective. The remarkable rates of response, the durability of such responses, and the lack of significant drug-drug interactions all point to the great potential for checkpoint inhibitors. With over 50 drugs currently in development, checkpoint inhibitors and combination therapy may prove to be the future of unresectable cancers and other diseases once believed to have poor immunogenic potential.

CONTACT PAUL HARARY AT PAUL.M.HARARY.18@DARTMOUTH.EDU

References
1. Agata, Y., Kawasaki, A., Nishimura, H., Ishida, Y., Tsubat, T., Yagita, H., & Honjo, T. (1996). Expression of the PD-1 antigen on the surface of stimulated mouse T and B lymphocytes. International immunology, 8(5), 765-772.
2. Chen, D. S., & Mellman, I. (2013). Oncology meets immunology: the cancer-immunity cycle. Immunity, 39(1), 1-10.
3. Jin, H. T., Ahmed, R., & Okazaki, T. (2010). Role of PD-1 in regulating T-cell immunity. In Negative Co-Receptors and Ligands (pp. 17-37). Springer, Berlin, Heidelberg.
4. Johnson, D. B., Peng, C., & Sosman, J. A. (2015). Nivolumab in melanoma: latest evidence and clinical potential. Therapeutic advances in medical oncology, 7(2), 97-106.
5. Kim, J. M., & Chen, D. S. (2016). Immune escape to PD-L1/PD-1 blockade: seven steps to success (or failure). Annals of Oncology, 27(8), 1492-1504.
6. Long, G. V., Atkinson, V., Cebon, J. S., Jameson, M. B., Fitzharris, B. M., McNeil, C. M., ... & Hwu, W. J. (2017). Standard-dose pembrolizumab in combination with reduced-dose ipilimumab for patients with advanced

melanoma (KEYNOTE-029): an open-label, phase 1b trial. The Lancet Oncology, 18(9), 1202-1210.
7. McCullar, B., & Taylor Alloway, M. M. (2017). Durable complete response to nivolumab in a patient with HIV and metastatic non-small cell lung cancer. Journal of thoracic disease, 9(6), E540.
8. Nishimura, H., Nose, M., Hiai, H., Minato, N., & Honjo, T. (1999). Development of lupus-like autoimmune diseases by disruption of the PD-1 gene encoding an ITIM motif-carrying immunoreceptor. Immunity, 11(2), 141-151.
9. Nishimura, H., Okazaki, T., Tanaka, Y., Nakatani, K., Hara, M., Matsumori, A., ... & Honjo, T. (2001). Autoimmune dilated cardiomyopathy in PD-1 receptor-deficient mice. Science, 291(5502), 319-322.
10. Ott, P. A., Hodi, F. S., Kaufman, H. L., Wigginton, J. M., & Wolchok, J. D. (2017). Combination immunotherapy: a road map. Journal for immunotherapy of cancer, 5(1), 16.
11. Said, E. A., Dupuy, F. P., Trautmann, L., Zhang, Y., Shi, Y., El-Far, M., ... & Fonseca, S. G. (2010). Programmed death-1-induced interleukin-10 production by monocytes impairs CD4+ T cell activation during HIV infection. Nature medicine, 16(4), 452.
12. Syn, N. L., Teng, M. W., Mok, T. S., & Soo, R. A. (2017). De-novo and acquired resistance to immune checkpoint targeting. The Lancet Oncology, 18(12), e731-e741.
13. Thompson, R. H., Webster, W. S., Cheville, J. C., Lohse, C. M., Dong, H., Leibovich, B. C., ... & Blute, M. L. (2005). B7-H1 glycoprotein blockade: a novel strategy to enhance immunotherapy in patients with renal cell carcinoma. Urology, 66(5), 10-14.
14. Thompson, R. H., Gillett, M. D., Cheville, J. C., Lohse, C. M., Dong, H., Webster, W. S., ... & Zincke, H. (2004). Costimulatory B7-H1 in renal cell carcinoma patients: Indicator of tumor aggressiveness and potential therapeutic target. Proceedings of the National Academy of Sciences of the United States of America, 101(49), 17174-17179.
15. Topalian, S. L., Drake, C. G., & Pardoll, D. M. (2012).
Targeting the PD-1/B7-H1 (PD-L1) pathway to activate anti-tumor immunity. Current opinion in immunology, 24(2), 207-212.
16. Wu, J., Hong, D., Zhang, X., Lu, X., & Miao, J. (2017). PD-1 inhibitors increase the incidence and risk of pneumonitis in cancer patients in a dose-independent manner: a meta-analysis. Scientific reports, 7, 44173.


ECONOMICS, ECOLOGY

The Game of Life and Economy of Nature: Relating Darwin's Evolution to Adam Smith's Invisible Hand BY NISHI JAIN '21

Introduction

“Every individual intends on his own security, only his own gain. And he is in this led by an invisible hand to promote an end which was no part of his intention. By pursuing his own interest, he frequently promotes that of society more effectually than when he really intends to promote it.”

Years before Darwin was born, Adam Smith, a Scottish political economist, was a key driver of the Scottish Enlightenment, a period in Scottish history characterized by a massive academic shift away from political philosophy inherited uncritically from one government to the next, toward an emphasis on political reasoning, free enterprise, personal liberty, limited government, and the scientific method (Irons 1899). With roots traceable to the American Constitution, the American Revolution, and the Bill of Rights, it was arguably one of the most important intellectual revolutions to shape the future of democracy (Broadie 2017). Held to be the father of modern economics, Smith exerted his most pronounced influence in developing the theory of the “invisible hand,” which essentially comprises the idea of societal good that is fortuitously caused by different individuals who are simply

absentmindedly pursuing their own self-interests (Montanye 2011). A common conservative argument, it pushes the community toward a free-market economy in which, arguably, community good will arise from the uncoordinated actions of all the different individuals, regardless of external intervention. This theory, however political, has biological underpinnings as well. It is commonly believed to be a precursor to Charles Darwin’s theory of evolution, which is primarily laid out in his magnum opus, On the Origin of Species. Instead of arguing about the structure of the economy, Darwin argues about the structure of nature, or rather the “economy of nature,” as Darwin himself puts it in his work (Oleinikov et al. 2004). His theory of evolution, the belief that life on earth descends from generation to generation without specific guidance, shaped instead by the characteristics and actions of individual organisms, can be thought of as a near parallel to Smith’s economic model, with the individual actors in Smith’s theory being not biological organisms in a biological community but simply people in a social and economic community.

Figure 1: Adam Smith, father of modern economics and economic theory, whose most celebrated contribution to the field of economic theory was the concept of the “invisible hand.” Source: Wikimedia Commons


Figure 2: John Nash, the mathematician who made significant strides in the field of game theory through the development of the Nash equilibrium, or the point at which we can predict behavior in noncooperative game theory iterations. Source: Wikimedia Commons

Game Theory and the Prisoner's Dilemma: An Overview

“[The Nash equilibrium] presents the idea that even when a nonbinding and noncooperative game is played, there is still a way to predict the behavior of the parties over a longer time horizon.”


The ties between Darwin’s evolution and Smith’s economics can first and foremost be modeled using game theory, an idea used in a multitude of fields including political science, psychology, and logic, as well as economics and biology. First conceived as a game of conflict and cooperation between two interdependent parties, game theory now finds applications ranging from firms in a market sector to individuals in an ecosystem. First theorized in 1928 by John von Neumann as the “theory of parlor games,” it was furthered immensely by the mathematician John Nash, who developed the model of noncooperative game theory (“Game Theory” 1971). Game theory has traditionally diverged at the point of cooperation: cooperative game theory encompasses the idea that the individual parties are contractually bound, while noncooperative game theory describes situations in which individuals are free to act of their own accord. The connection between Smith and Darwin lies in the noncooperative game, which describes both people and other biological organisms as agents of their own free will in economies and environments, respectively (Ehud 2013). The basis of game theory takes two different parties and puts them into a situation in which there is a potential for partnership and cooperation, and a potential for anarchy and defection. There are four possible ways that the “game” can play out for Party A and Party B: both parties can cooperate, Party A can cooperate and Party B defect, Party B can cooperate and Party A defect, or both

parties can defect. However, there is a payoff associated with each of these options. If there are resources available to both parties totaling four units, then in the situation that both parties cooperate according to a previously agreed-upon contract, they are each eligible to attain two units. In the case that Party A cooperates with the contract and Party B does not, there is a resulting gain for Party A of zero units and a gain of three units for Party B, because Party B does not adhere to the rules and steals all the units from Party A, but expends one unit of energy in the predation process. Likewise, when Party B cooperates with the contract and Party A does not, there is a resulting gain for Party B of zero units and a gain of three units for Party A, who also loses one unit in predation. In the situation that neither party cooperates, both expend one unit each in defense and predation, and thus are left with only one unit each. John Nash contributed a unique take on this subject, arguing that if the game were played an infinite number of times (referred to as the Iterated Prisoner’s Dilemma), there would eventually be an equilibrium of decisions made for each respective choice. This idea, later termed the Nash equilibrium after Nash himself, presents the idea that even when a nonbinding and noncooperative game is played, there is still a way to predict the behavior of the parties over a longer time horizon (Vanderschraaf 1999). The situation modeled by noncooperative game theory here is also called the Prisoner’s Dilemma and is traditionally applied in economics through the interaction of multiple firms in the creation and implementation of a contract between them.
Applied in a biological context, however, it represents competition among species, resulting in a logical decision of whether to engage in a partnership or to engage in a fight, with one of four outcomes as previously described.
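The payoff scheme described above can be sketched in a few lines of Python (a minimal illustration, not part of the original article; the 2/3/0/1 unit payoffs are taken directly from the text). Checking each of the four outcomes for profitable unilateral deviations shows why mutual defection is the game's only Nash equilibrium, even though mutual cooperation pays both parties more:

```python
from itertools import product

C, D = "cooperate", "defect"

# payoff[(A's move, B's move)] = (units to Party A, units to Party B),
# using the article's numbers: split 2/2, lone defector nets 3 (steals
# all 4 units but spends 1 on predation), mutual defection leaves 1/1.
payoff = {
    (C, C): (2, 2),
    (C, D): (0, 3),
    (D, C): (3, 0),
    (D, D): (1, 1),
}

def is_nash(a_move, b_move):
    """True if neither party can gain by unilaterally switching moves."""
    a_pay, b_pay = payoff[(a_move, b_move)]
    a_alt = payoff[(D if a_move == C else C, b_move)][0]
    b_alt = payoff[(a_move, D if b_move == C else C)][1]
    return a_pay >= a_alt and b_pay >= b_alt

equilibria = [moves for moves in product([C, D], repeat=2) if is_nash(*moves)]
print(equilibria)  # mutual defection is the only stable outcome
```

Running the check confirms the dilemma: each party is individually better off defecting no matter what the other does, so the only self-enforcing outcome is the mutually worse one.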

Game Theory and the Prisoner's Dilemma: Application to Biology

The application of game theory to biological settings is most famously modeled in the hawk and dove game introduced in 1973 by John Maynard Smith and George R. Price as a biological application and equilibrium refinement of the Nash equilibrium. This refinement, also called their solution concept, was named the Evolutionarily Stable Strategy (ESS) and was essentially a formal rule for predicting how an interaction, or game, among animals was to be played (Haigh 1975). The main difference between the application of game theory to economics and its application to biology is that the payoff in biology is less related to goods attained and is instead determined by fitness, the ability to produce offspring. Another key difference is that the biological structure is shaped by evolutionary forces rather than rationality, as in the economic structure. Evolution fits the economic archetype of the Prisoner’s Dilemma through its application to hawks and doves (Chastain et al. 2014). The idea behind this game starts with a fundamental fight over resources between hawks and doves: hawks are characterized as aggressive animals that always fight for a given resource, while a dove never fights for a resource and instead displays its feathers in conflict; if it is attacked for any reason at all, it immediately draws back before it suffers any permanent injury. Between these two animals there is again a four-part outcome possibility, involving Hawk A, Hawk B, Dove A, and Dove B. There is the possibility that Hawk A and Hawk B interact, that Dove A and Hawk


B interact, that Dove B and Hawk A interact, and finally that Dove A and Dove B interact. The payoff matrix for the interactions is detailed in Figure 4. If V denotes the value of the contested resource and C is the cost of a fight, and if the assumption is made that V > C, then there are four possible outcomes. In the case of a fight between two hawks, both aggressive animals, there will be a net gain for both of them of the value of the resource (V) minus the cost of the fight (C), split evenly between the two of them. In the case of competition over resources between a dove and a hawk (either Dove A with Hawk B or Dove B with Hawk A), the dove will immediately display and retreat in order to avoid injury, leaving the resource to the hawk. In the case that two doves interact with one another, they will both display, and with no conflict they will each take half of the resources. The implications for fitness are interesting: hawks are willing to risk their future fitness (the cost of

Figure 3: Charles Darwin, whose theory of evolution changed the face of science, and whose fundamental ideas parallel those written decades earlier by Adam Smith. Source: Wikimedia Commons


Figure 4: A complete derivation of the hawk and dove game as was theorized by John Maynard Smith. Source: Wikimedia Commons


the fight) in order to attain the resource, while the doves find it best to remain neutral in encounters with hawks in order to invest in their own future fitness and not risk injury or their future ability to produce offspring. Just as the prisoner’s dilemma can be applied to the hawk and dove, the element of choice enters many of the possible conflicts that animals face; however, instead of trying to maximize their goods, they are trying to maximize their fitness (Jäger 2007). In this sense, Adam Smith’s “invisible hand” concept overlaps with the animals’ intention: while trying to ensure their own survival and that of their offspring, each animal inevitably maintains the society’s diversity of life and thus inadvertently promotes the good of the community and the ecosystem. The choices made in the hawk and dove iteration of the prisoner’s dilemma eventually contribute to the development of evolution, because the degree to which each individual animal safeguards its fitness contributes to its eventual opportunity to produce offspring. This awareness manifests itself in the form of the investment and risk of fitness and their efforts

to attain the resources necessary for their own survival, which would in turn affect whether or not they can produce offspring in the future (Hansen 1986).
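The payoff structure above can be sketched in a few lines of code. This is a minimal illustration, not from the article; the values of V and C are arbitrary, chosen so that V > C as the text assumes:

```python
# Minimal sketch of the hawk-dove payoff matrix described above.
# V (resource value) and C (fight cost) are illustrative numbers only;
# the assumption V > C matches the text.

def payoff(strategy_a, strategy_b, V=10.0, C=6.0):
    """Return the expected payoff to player A against player B."""
    if strategy_a == "hawk" and strategy_b == "hawk":
        return (V - C) / 2          # fight: value minus cost, split evenly
    if strategy_a == "hawk" and strategy_b == "dove":
        return V                    # dove retreats, hawk takes everything
    if strategy_a == "dove" and strategy_b == "hawk":
        return 0.0                  # dove displays, then retreats unharmed
    return V / 2                    # two doves share the resource

for a in ("hawk", "dove"):
    for b in ("hawk", "dove"):
        print(f"{a} vs {b}: {payoff(a, b)}")
```

With these numbers, playing hawk strictly dominates playing dove (2.0 versus 0.0 against a hawk, 10.0 versus 5.0 against a dove), which is why the V > C assumption matters to the analysis.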

The Evolution of Cooperation Eventually, a certain predictability emerges and results in a pattern of cooperation (Jervis, 1988). Cooperation has been theorized to occur among kin, as an investment in shared fitness, as well as among non-kin, as in the case of the doves and the hawks, where individuals may sacrifice fitness for material benefit in the form of resources. W. D. Hamilton, a prominent evolutionary biologist, first advanced the theory that kin will sometimes cooperate with subsequent generations to the extent that they forego reproduction entirely in order to invest in the reproduction of future generations (Gardner et al., 2014). Robert Axelrod, an American political scientist, further applied the concept of the iterated Prisoner’s Dilemma: once an animal plays multiple iterations of the scenario, there are certain

strategies that come into play that allow us to predict the outcomes of individual animal behavior (Ostrom, 2007). The most prominent of these strategies is known as “Tit for Tat”: the animals cooperate on the first encounter, and as long as the other animal keeps cooperating, the partnership continues. If one defects, however, the other will immediately defect in turn, returning both to the state of anarchy that existed before cooperation began. This kind of strategy lends itself to the development of reciprocal altruism, in which the prevailing mentality is “if you cooperate with me, I will cooperate with you,” with punishment for those who fail to reciprocate. Ultimately, this can become the bond that holds biological group structures together (To, 2008). The choice to join a group begins with a simple decision over the variables involved: if the fitness of being in the group is greater than the fitness of being solitary, it is reasonable to join the group; if it is less, it is unreasonable to join (Gardner et al., 2014). The fitness of being in the group derives from the ability to determine whether the other individuals in the system will cooperate, and from anticipating the result of reciprocal altruism.
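The “Tit for Tat” dynamic described above can be simulated directly. The sketch below uses conventional textbook payoffs (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for unilateral defection), which are illustrative rather than taken from the article:

```python
# A toy run of "Tit for Tat" in the iterated prisoner's dilemma.
# Payoff numbers are conventional textbook values, not from the article.

PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=6):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)   # each strategy sees the other's past
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two Tit-for-Tat players lock into mutual cooperation:
print(play(tit_for_tat, tit_for_tat))      # (18, 18) over 6 rounds
# Against a constant defector, Tit for Tat defects from round two on:
print(play(tit_for_tat, always_defect))    # (5, 10)
```

Note that Tit for Tat never beats its opponent in a single match; it succeeds in tournaments because it sustains the high-payoff cooperative outcome with other conditional cooperators.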

In Conclusion Starting with the initial game theory and prisoner’s dilemma ideas derived from John von Neumann, the applications to biology, and eventually to evolutionary biology, are immense. Thinking about evolution as it pertains to the development of fitness-promoting or fitness-demoting behaviors in partnership situations applies the economic concepts of risk management and risk-taking to the development and survival of species. As time passes, Adam Smith’s invisible hand additionally comes into play, as organisms that promote their own self-interest of survival ultimately also promote the diversification of life. Eventually, organisms adapt to the formation of partnerships and find the equilibrium of their survival, all in the economy of nature. D

CONTACT NISHI JAIN AT NISHI.JAIN.21@DARTMOUTH.EDU

References
1. Borch, K. (1967). Economics and Game Theory. The Swedish Journal of Economics, 69(4), 215-228. doi:10.2307/3439376
2. Broadie, A. (2017, September 22). Scottish Philosophy in the 18th Century. Retrieved from https://plato.stanford.edu/entries/scottish-18th/
3. Chastain, E., Livnat, A., Papadimitriou, C., & Vazirani, U. (2014). Algorithms, games, and evolution. Proceedings of the National Academy of Sciences of the United States of America, 111(29), 10620-10623. Retrieved from http://www.jstor.org/stable/23803686
4. Kalai, E. (2013). Foreword: The High Priest of Game Theory. The American Mathematical Monthly, 120(5), 384-385. doi:10.4169/amer.math.monthly.120.05.384
5. Game Theory. (1971). Econometrica, 39(4), 96-99. Retrieved from http://www.jstor.org/stable/1912396
6. Gardner, A., & West, S. (2014). Introduction: Inclusive fitness: 50 years on. Philosophical Transactions: Biological Sciences, 369(1642), 1-3. Retrieved from http://www.jstor.org/stable/24499046
7. Haigh, J. (1975). Game Theory and Evolution. Advances in Applied Probability, 7(1), 8-11. doi:10.2307/1425844
8. Hansen, A. (1986). Fighting Behavior in Bald Eagles: A Test of Game Theory. Ecology, 67(3), 787-797. doi:10.2307/1937701
9. Irons, D. (1899). The Philosophical Review, 8(4), 420-424. doi:10.2307/2176202
10. Jäger, G. (2007). Evolutionary Game Theory and Typology: A Case Study. Language, 83(1), 74-109. Retrieved from http://www.jstor.org/stable/4490338
11. Jervis, R. (1988). Realism, Game Theory, and Cooperation. World Politics, 40(3), 317-349. doi:10.2307/2010216
12. Montanye, J. (2011). The Independent Review, 16(1), 135-137. Retrieved from http://www.jstor.org/stable/24563231
13. Mr. Charles Darwin. (1879). The British Medical Journal, 2(966), 15. Retrieved from http://www.jstor.org/stable/25251449
14. Oleinikov, N., & Ostashevsky, E. (2004). Charles Darwin. Columbia: A Journal of Literature and Art, (39), 110. Retrieved from http://www.jstor.org/stable/41808713
15. Ostrom, E. (2007). Biography of Robert Axelrod. PS: Political Science and Politics, 40(1), 171-174. Retrieved from http://www.jstor.org/stable/20451916
16. Smith, A. (1776). The Wealth of Nations.
17. Smith, J. M. (1979). Game Theory and the Evolution of Behaviour. Proceedings of the Royal Society of London. Series B, Biological Sciences, 205(1161), 475-488. Retrieved from http://www.jstor.org/stable/77441
18. To, T. (2008). More Realism in the Prisoner's Dilemma. The Journal of Conflict Resolution, 32(2), 402-408. Retrieved from http://www.jstor.org/stable/174052
19. Vanderschraaf, P. (1999). Game Theory, Evolution, and Justice. Philosophy & Public Affairs, 28(4), 325-358. Retrieved from http://www.jstor.org/stable/2672876



Fixing a Faulty Genome: Mechanisms for the Delivery of Gene Therapy BY SAM NEFF '21 Figure 1: A Cluster of Nanoparticles. Source: Wikimedia Commons

What is Gene Therapy? Gene therapy is the manipulation of one’s DNA to introduce some form of beneficial genetic change (16). The impetus for this sort of action is the presence of an unhealthy mutation in a person’s genes, either present at birth (like the mutations that cause genetic diseases) or generated within one’s lifetime (like the mutations that lead to cancerous growth). Two approaches are generally followed: (a) to knock out the mutated gene and prevent production of mutated proteins, and (b) to replace the faulty gene with a functional genetic sequence, or to add a functional gene alongside the faulty one (2, 6, 15). This is accomplished through a variety of means. In order to understand different gene therapy mechanisms, it is useful to have some background knowledge of DNA and accompanying cellular processes.

A Brief Introduction to DNA Double-stranded molecules of DNA are coiled up within the nucleus of each human cell. A DNA molecule can be thought of as a twisted ladder, where the backbone is composed of deoxyribose sugar and phosphate molecules in an alternating pattern. The rungs are composed of four nitrogenous bases (or nucleotides): adenine, thymine, guanine, and cytosine. This is how the message of DNA is written: a string of nucleotides in specific combination that provides a blueprint for protein creation. The proteins, in turn, hold tissues together, transport oxygen throughout the body, perform

catalytic operations that allow conversion of food to energy, and are even responsible for the maintenance of DNA, among many other things. To be more specific, the conversion of DNA to protein follows a three-step process. First, in a process called transcription, a section of the DNA code is copied onto a similar biological molecule called RNA, whose primary difference is that it is composed of only one strand. The RNA is composed of the same nucleotides (except thymine is replaced with the nucleotide uracil), and the backbone is roughly the same. Only half of the DNA code must be copied to transmit the message because the opposing half is complementary (for every adenine there is a thymine, for every guanine there is a cytosine). Then, in the process of translation, the RNA leaves the nucleus and complexes with structures called ribosomes within the cytoplasm of the cell. Here the code is deciphered in triplets, each triplet of nucleotides corresponding to a molecule called an amino acid. At the ribosome site, a string of amino acids is assembled. Ultimately, when the entire section of RNA is translated, the string detaches from the ribosome. After a process of folding, the string of amino acids is considered a functional protein. The collection of DNA within the human cell is extraordinarily long. It is composed of about three billion bases! Thus, it is not surprising that sometimes the DNA code is incorrect, coding for a protein that does not possess the proper structure. This is the basis for genetic disease, and even the smallest of imperfections can

have disastrous consequences. One misshaped protein can leave an individual with a life extensively compromised by medical difficulty. The section of DNA that is identified with a particular protein (or in some cases, other types of biological molecules) is called a gene. Hence the name gene therapy for the treatment that attempts to mend mutations within these units of heredity.
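The transcription and translation steps described above map naturally onto a toy program. The sketch below is a deliberate simplification: it treats the coding strand directly (real transcription reads the complementary template strand), and it uses only four of the 64 real codons:

```python
# A toy illustration of transcription and translation, using a deliberately
# tiny codon table (the real genetic code has 64 entries).

CODON_TABLE = {  # RNA triplet -> amino acid (one-letter code); subset only
    "AUG": "M", "UUU": "F", "GGC": "G", "UAA": "STOP",
}

def transcribe(dna):
    """Copy the coding strand into RNA: thymine (T) becomes uracil (U)."""
    return dna.replace("T", "U")

def translate(rna):
    """Read the RNA in triplets, building a chain of amino acids."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino = CODON_TABLE[rna[i:i + 3]]
        if amino == "STOP":
            break
        protein.append(amino)
    return "".join(protein)

rna = transcribe("ATGTTTGGCTAA")
print(rna)             # AUGUUUGGCUAA
print(translate(rna))  # MFG
```

A single wrong base in the input string would change one codon and hence one amino acid, which is exactly the kind of small imperfection with disastrous consequences that the text describes.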

Gene Therapy Mechanisms The simplest gene therapy mechanism is the introduction of “naked” DNA molecules into cells (7). This DNA is unaccompanied by any proteins or other molecular machinery. Its purpose is to reach the cellular nucleus, where it is transcribed and functional proteins are produced. This “naked” DNA is not replicated as the cell divides (7), so treatment must be recurrent. Functional DNA can also be introduced to cells by viruses, which invade a cell’s genome by inserting their own viral DNA (7,11). This does not entail fixing the dysfunctional gene, but introducing a functional gene alongside it, so that a functional protein is expressed. This integration can be long-term; the viral DNA is replicated and passed on to both daughter cells in cell division (11). There are a few different devices that carry out gene therapy by creating breaks in DNA (TALENs, zinc-finger nucleases, meganucleases, CRISPR/Cas9) (12). Then, either a mutation occurs at the gene to knock out protein function (as in cancer cases), or functional DNA is introduced (as in some genetic disease cases). The CRISPR/Cas9 system is of particular interest to scientists because of its elegance and simplicity. CRISPR/Cas9 is a natural feature of certain bacterial immune systems. When a virus attempts to invade a bacterium, certain proteins attack the virus and chop its genetic material (DNA or RNA) into little pieces. These pieces are integrated into the bacterial genome, separated by short, repeating DNA sequences (Clustered Regularly Interspaced Short

Palindromic Repeats, or CRISPR). The integration of viral DNA into the bacterial genome allows future recognition and attack of the virus. A molecule called a guide RNA (gRNA) is built that matches a viral sequence in the bacterial genome. This complexes with a protein called Cas9. The guide RNA allows targeting of a virus, and the Cas9 protein destroys it (2,14). It is possible to use the guide-RNA-Cas9 complex for gene therapy in humans. It can be engineered such that the guide RNA matches a particular sequence of human DNA that needs to be repaired or deactivated. The gRNA would locate the DNA sequence, and the Cas9 protein would create a break in the DNA (14). After the strand break occurs, gene therapy relies on two natural processes of DNA repair: non-homologous end joining (NHEJ) and homology-dependent repair (HDR). In NHEJ repair, DNA ends are simply stitched back together by an assemblage of proteins. However, this repair mechanism is prone to error, as nucleotide base pairs are often randomly inserted or deleted (indel mutations) (9). This can cause a gene to no longer code for a working protein. The error-prone nature of NHEJ is actually a positive quality in the eyes of a gene therapist, as it allows the “knock out” of genes that code for dysfunctional proteins, or that are known to cause cancer. Cas9, or some other gene therapy device, will continue to cut a sequence of DNA until a mutation occurs and the gRNA can no longer recognize it (9). In HDR, there must be a molecule of DNA that is homologous, or similar in sequence, to the DNA around the break. Each of the DNA ends is partially degraded by a protein so that on each side of the break, there is a long (several-hundred base pair) chain of single-stranded DNA. One of the single-stranded ends associates with the homologous DNA and uses it as a template to increase its own length.
The length of the single strand increases until its end is complementary with the DNA nucleotides of the other single strand (on the opposing side of the break). Then, the single strands connect,


Figure 2, left: DNA Molecule Structure. Figure 3, right: Cleavage of DNA Source: Wikimedia Commons



Figure 4: Non-Homologous End Joining and Homology Directed Repair. Source: Wikimedia Commons

and nucleotides are added by proteins to make the original DNA molecule double-stranded again (8,15). Within the homologous DNA, it is possible to place a DNA sequence that corresponds to a functional protein (15). The gene that was originally mutated, created a dysfunctional protein, and caused disease would then have the right string of nucleotides and would cease to cause disease. Any sequence of DNA can be inserted within the homologous DNA, so long as the DNA surrounding the inserted sequence is homologous to the original DNA molecule.
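As a rough illustration of how a Cas9 target might be chosen, the sketch below scans a made-up DNA sequence for 20-nucleotide protospacers followed by the “NGG” PAM motif required by the commonly used S. pyogenes Cas9. Real design tools also scan the reverse strand and score off-target risk, so this is only the first step of the idea:

```python
# A sketch of guide-RNA target selection for S. pyogenes Cas9, which cuts
# 20-nucleotide "protospacer" sites followed immediately by an "NGG" PAM.
# The DNA sequence below is invented for illustration.

import re

def find_cas9_sites(dna, spacer_len=20):
    """Return (protospacer, position) pairs followed by an NGG PAM."""
    sites = []
    # Lookahead so overlapping PAM candidates are all found.
    for m in re.finditer(r"(?=([ACGT]GG))", dna):
        pam_start = m.start(1)
        if pam_start >= spacer_len:  # need a full spacer upstream of the PAM
            sites.append((dna[pam_start - spacer_len:pam_start],
                          pam_start - spacer_len))
    return sites

dna = "TTACGGATCTGACCTGAAGTCAGCATAGGCTTACG"
for spacer, pos in find_cas9_sites(dna):
    print(f"protospacer {spacer} at position {pos}")
```

A guide RNA built against a reported protospacer would direct Cas9 to cut there; the repair then proceeds by the NHEJ or HDR pathways described above.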

Delivery of Gene Therapy The focus of this article lies in the delivery of gene therapy to human cells. One of the greatest obstacles to the effectiveness of gene therapy is the difficult navigation of gene therapy drugs through the body’s passages (airways, blood vessels, etc.). A major goal is to develop efficient transport systems for gene therapy so that the drugs can flow smoothly to their targets.

Aerosol Gene Therapy





Aerosol gene therapy involves the inhalation of a gene therapy drug into the lungs. This type of delivery is attractive because it is noninvasive and covers a large surface area (perhaps most of the lungs, depending on the degree of airway blockage) (1,3). It would be of particular use to those suffering from lung cancer, cystic fibrosis (CF), or some other lung-related genetic disease. A variety of parameters, such as drug particle size and inhalation speed, must be considered. If a particle encapsulating the gene therapy is too large, or moving too fast, the particle may deposit in the throat instead of in the lungs as desired (1). The former can be controlled by the scientists engineering the medicine; the latter by the patient adopting a proper inhalation technique (slow, deep breaths) (1). Gene therapy devices, like the Cas9 complex, must be delivered in some protective casing. Otherwise, the DNA and guide RNA would be ripped apart by the forces propelling the medicine through the airways. One such capsule is the cationic liposome. Because it is made of the same biological molecules as the cell’s outer membrane, liposomes can fuse with the membrane and release their contents into the cell. Because these

Figure 5, left: Liposome Structure; Positively-Charged Liposomes Can Complex with DNA. Figure 6, right: Virus Structure; Viruses can be engineered to inject DNA into certain parts of the genome without causing illness. Source: Wikimedia Commons

liposomes are cationic (positively-charged), they can interact with DNA or RNA (which are negatively-charged) and form complexes (1).

Nanoparticles for Cancer Therapy The use of different nanoparticles for gene therapy cancer treatment is an area of interest. Nanoparticles, like the aforementioned cationic liposome, allow encapsulation of the gene therapy drug. They can also be engineered to target cancer cells. There are two ways to target cancer cells: passively or actively. Passive targeting involves taking advantage of cancer’s effect on the body’s architecture. Tumor sites are marked by leaky blood vessels. This allows nanoparticles to exit the bloodstream and aggregate around the cancer cells. The passage of nanoparticles through fenestrations (holes) in the blood vessel walls can be enhanced by manipulating nanoparticle size and material (5). Furthermore, adding functional groups can extend a nanoparticle’s lifetime in the bloodstream. Polyethylene glycol (PEG), for example, attracts water molecules. A film of water is generated around the nanoparticle, preventing it from being engulfed by immune cells (5). Active targeting involves placing biological molecules on the nanoparticle surface that target receptors on the outside of cancer cells. After binding to the cancer cells, the nanoparticles are taken in by the cells and deliver the gene therapy (5). Both types of targeting, active and passive, can be used in conjunction to maximize the effectiveness of gene therapy delivery.

Gene Therapy with Viral Vectors The use of viruses for gene therapy is an ongoing area of research. Viruses are very effective at integrating functional DNA into the host genome, as invading host genomes is their natural purpose. Other methods are either effective only in the short term (“naked” DNA) or highly prone to error (systems like CRISPR/Cas9 that rely on NHEJ or HDR). However, viruses must be manipulated carefully to eliminate their ability to multiply, destroy cells, and cause illness. When viruses

invade cells under natural circumstances, they invade the genome so they can accomplish the aforementioned objectives, allowing creation of viral proteins by hijacking cellular machinery (7). If not engineered properly, viral gene therapy can make a patient sicker than they were in the first place. Viruses can target cells effectively because they possess surface features that recognize extracellular receptor sites (11). However, these same surface features make viruses an easy target for the immune system. Thus, the virus must be robust enough to evade immune defenders, but incapable of causing illness after it infiltrates cells. There are different types of vectors with varying DNA-holding capacity, ability to integrate into the host genome, and potential to cause a violent immune response (7,11). The most effective gene therapy viral vector will strike the most positive balance among these factors.


A Gene Therapy Prognosis Gene therapy was first attempted on a human patient in 1990. Yet this initial success was followed by setbacks that put gene therapy under stricter scrutiny. In 1999, an American patient died following a gene therapy experiment. In 2002, French experiments led to the development of leukemia in some patients (10). The problem with gene therapy is that if administration is not wholly accurate, gene therapy devices may edit or knock out the wrong portion of a gene, or a different gene entirely. This can lead to the triggering of cancerous growth, or to a fatal genetic defect (2). Much stricter regulation followed these failures, and scientists focused on developing gene therapy that was more accurate and efficient. One reason the development of CRISPR/Cas9 technology, first used for genome editing in 2013 (4), is such a heralded achievement is its higher accuracy compared to previous methods. Even as gene therapy trials continue to


increase in frequency, however, there are still significant barriers to surmount. Most of the successful trials thus far have been limited to smaller regions of the body that are easily infiltrated (i.e. gene therapy for vision disorders, gene therapy that is administered to bone marrow by extracting it and then reinjecting it into the body). Many genetic diseases, like Cystic Fibrosis (the most common fatal genetic disorder in the United States), involve body regions of larger surface area (like the lungs) which necessitate a treatment that is minimally invasive (13). Development of gene therapy delivery methods is key to further progress. A system of transport for gene therapy drugs that is both durable and minimizes immune response must be achieved. Once this progress is made, and risk of treatment failure is minimized, companies can justify placing gene therapy treatments on the market. It is difficult to say when viable gene therapy treatment will be available for individuals of average economic means. However, scientists are closer now than ever before to providing safe and effective treatment options. Progress seems to be accelerating now, as trial successes are leading to increased investment in gene therapy research, and competition among biotech firms is sparking further innovation. Perhaps in the next quarter century, or even within the next decade, gene therapy will become a standard practice in the medical community. D


References
1. Ajay Gautam, J. Clifford Waldrep, Charles L. Densmore. (2003). “Aerosol Gene Therapy.” Molecular Biotechnology. Humana Press. Retrieved from <https://link.springer.com/article/10.1385/MB:23:1:51>
2. Alex Reis, Breton Hornblower, Brett Robb, George Tzertzinis. (2014). “CRISPR/Cas9 and Targeted Genome Editing: A New Era in Molecular Biology.” New England Biolabs. Retrieved from <https://www.neb.com/tools-and-resources/feature-articles/crispr-cas9-and-targeted-genome-editing-a-new-era-in-molecular-biology>
3. Beth L. Laube. (13 January 2014). “The expanding role of aerosols in systemic drug delivery, gene therapy, and vaccination: an update.” Translational Respiratory Medicine. Retrieved from <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4215822/>
4. “CRISPR Timeline.” Broad Institute. Retrieved from <https://www.broadinstitute.org/what-broad/areas-focus/project-spotlight/crispr-timeline>
5. Hung-Yen Lee, Kamal A. Mohammed, Najmunnisa Nasreen. (1 May 2016). “Nanoparticle-based targeted gene therapy for lung cancer.” American Journal of Cancer Research. Retrieved from <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4889725/>
6. Le Cong, F. Ann Ran, David Cox, Shuailiang Lin, Robert Barretto, Naomi Habib, Patrick D. Hsu, Xuebing Wu, Wenyan Jiang, Luciano A. Marraffini, Feng Zhang. (15 February 2013). “Multiplex Genome Engineering Using CRISPR/Cas Systems.” Science. Retrieved from <http://science.sciencemag.org/content/339/6121/819>
7. Nouri Nayerossadat, Talebi Maedeh, Palizban Abas Ali. (6 July 2012). “Viral and nonviral delivery systems for gene delivery.” Advanced Biomedical Research. Retrieved from <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3507026/>
8. Oxford Academic (Oxford University Press). (12 August 2014). “Homology-dependent double strand break repair.” YouTube. Retrieved from <https://www.youtube.com/watch?v=86JCMM5kb2A>
9. Oxford Academic (Oxford University Press). (11 April 2014). “Non-homologous end joining.” YouTube. Retrieved from <https://www.youtube.com/watch?v=31stiofJjYw>
10. Reuters Staff. (27 April 2015). “TIMELINE – Milestones in gene therapy.” Reuters. Retrieved from <https://www.reuters.com/article/health-genetherapy-timeline/timeline-milestones-in-gene-therapy-idUSL5N0XK41J20150427>
11. Shrikant Mali. (January–March 2013). “Delivery systems for gene therapy.” Indian Journal of Human Genetics. Retrieved from <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3722627/>
12. Thomas Gaj, Charles A. Gersbach, Carlos F. Barbas III. (9 May 2013). “ZFN, TALEN, and CRISPR/Cas-based methods for genome engineering.” Trends in Biotechnology. Retrieved from <https://www.sciencedirect.com/science/article/pii/S0167779913000875>
13. (27 December 2013). “Learning About Cystic Fibrosis.” National Human Genome Research Institute. Retrieved from <https://www.genome.gov/10001213/learning-about-cystic-fibrosis/>
14. (2014). “CRISPR Mechanism.” CRISPR/Cas9. Tufts University. Retrieved from <http://sites.tufts.edu/crispr/crispr-mechanism/>
15. (2014). “Homology-Directed Repair.” CRISPR/Cas9. Tufts University. Retrieved from <http://sites.tufts.edu/crispr/genome-editing/homology-directed-repair/>
16. (30 January 2018). “What are genome editing and CRISPR-Cas9?” Genetics Home Reference. U.S. National Library of Medicine. Retrieved from <https://ghr.nlm.nih.gov/primer/genomicresearch/genomeediting>



Developments in Flow Cytometry BY SAM REED '19

Flow cytometry (FCM) has drastically changed the capabilities of today’s scientists. Due to its high throughput and automated nature, it allows large numbers of cells or other biomarker-containing bodies to be imaged, separated into different populations, and further analyzed. As a result, FCM is vital to many biological and medical studies. Even though this technique is widely used and highly effective, there is always room for improvement. In addition, new assays that make use of FCM are constantly being developed. This article delves into the history of FCM, recent breakthroughs in its usage, and how it may be used in the future.

Introduction The first steps from static microscopy to imaging a flowing system were made in the mid-twentieth century. In 1934, Andrew Moldavan published an article in Science describing an apparatus that would image red blood cells and red-stained yeast cells as they moved through a capillary on a microscope stage, with a photodetector registering them as they passed. Louis Kamentsky was the first to actually build a microscope based on this design. In 1967, he built a microscope that recorded the ultraviolet absorption of 500 cells per second. The system would sort out the high photo-intensity cells with a syringe (Givan, 2013). However, optical measurements weren’t the only ones used in FCM. In 1950, a similar apparatus was developed which measured the electrical resistance created by cells as they passed through an isotonic solution (Givan, 2013). FCM has significantly advanced from these

early methods. Typically, cells are stained with antibodies covalently bound to fluorescent particles. The antibodies bind to antigens on the cell surface, or to intracellular molecules (Aghaeepour et al., 2013). The emitted light is then analyzed by computers as the cells pass under a microscope. The latest flow cytometers can simultaneously analyze as many as 20 different emission spectra, allowing them to analyze the interactions of many different proteins. Even more characteristics can be observed by using a mass spectrometry-based cytometer (Aghaeepour et al., 2013). Unlike fluorescence-based methods, which are limited by the overlap of fluorescent signals, mass cytometry utilizes antibodies that are bound to distinguishable elemental tags instead of fluorescent tags. These tags are then ionized and analyzed using time-of-flight mass spectrometry, in which the time required for an ion to reach a detector at a set distance is used as a measure of mass-to-charge ratio (Bandura et al., 2009).
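The time-of-flight measurement works because an ion of mass m and charge q accelerated through a potential U gains kinetic energy qU = (1/2)mv^2, so its transit time over a fixed drift length L is t = L·sqrt(m / (2qU)): heavier ions arrive later. The sketch below uses invented instrument parameters to show the principle:

```python
# Illustration of the time-of-flight principle: an ion of mass m and
# charge q accelerated through potential U satisfies qU = (1/2) m v^2,
# so the flight time over drift length L is t = L * sqrt(m / (2 q U)).
# The drift length and voltage here are invented, not real instrument specs.

import math

E_CHARGE = 1.602176634e-19      # elementary charge, C
AMU = 1.66053906660e-27         # atomic mass unit, kg

def flight_time(mass_amu, charge=1, drift_m=1.0, accel_volts=10_000):
    """Flight time (seconds) of an ion down a field-free drift tube."""
    m = mass_amu * AMU
    q = charge * E_CHARGE
    return drift_m * math.sqrt(m / (2 * q * accel_volts))

# Heavier elemental tags arrive measurably later, which is what lets the
# detector tell the antibody tags apart:
for tag_amu in (139, 159, 175):   # masses in the lanthanide range
    print(f"{tag_amu} amu -> {flight_time(tag_amu) * 1e6:.2f} microseconds")
```

Because arrival time grows with the square root of mass, tags a few mass units apart remain cleanly separable, with none of the spectral overlap that limits fluorescence channels.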

Figure 1: A T-Cell and an antigen-presenting cell form a complex that is indicative of future T-cell activity. This complex is known as an immunological synapse. Source: Wikipedia [Kim J, Shapiro M, Bamidele A, Gurel P, Thapa P, Higgs H, Hedin K, Shapiro V, Billadeau D]

New Assays Using FCM Due to FCM’s high throughput and ability to analyze multiple characteristics, its use has been applied and improved in many different biological techniques. Fluorescence in situ hybridization (FISH) is one FCM-based technique that has been continuously advancing. Using this method, samples are fixed with formaldehyde and permeabilized. Afterward, they are stained with fluorescently tagged oligonucleotides, which bind to target RNA or DNA strands. The cells are then


Figure 2: Cells with different fluorescence are characterized and separated in a flow cytometer. Cells are electrically charged after characterization based on their identity, which is used for their sorting. Source: Wikimedia Commons

imaged using flow cytometry, where they are sorted by fluorescence (Amann & Fuchs, 2008). One of the key uses of this technique is to characterize microbial populations. In this case, the oligonucleotides are used to target microbial ribosomal RNA (rRNA). rRNA is targeted because its sequences are more conserved than those of protein-coding RNAs (Amann & Fuchs, 2008). Using this assay, different microbes and microbial populations can be distinguished from each other based on their evolutionary pathways. Sensitivity can be a major problem when using this method, as some bacteria are smaller and only contain a few hundred ribosomes. However, a few solutions to this problem have been developed. One is to simply use a greater number of different probes, all targeting the same rRNA sequence, to create a brighter signal. An increasingly popular method, catalyzed reporter deposition (CARD-FISH), uses oligonucleotide probes that are bound to peroxidase enzymes. The microbes are also exposed to fluorescently labeled tyramides, which are turned into free radicals by the peroxidase and subsequently bind to cellular components. Multiple occurrences of this reaction result in well-labeled samples (Amann & Fuchs, 2008). Another method, used to recognize individual genes (RING-FISH) in a highly sensitive manner, has also become popular. Very small probes with multiple tags are used, and over the course of an extended hybridization period, networks of probes form in and around bacteria containing the target gene sequence, resulting in a visible ‘halo’ of light around those bacteria. This method makes it possible to detect very specific differences, such as drug resistance, between bacteria (Amann &

Fuchs, 2008). FISH can also be used to gain quantitative data about the DNA of different cells. In order for FISH to become quantitative, however, the oligonucleotides must consistently bind to all complementary DNA. One method used to achieve this involves peptide nucleic acid (PNA) probes, in which DNA bases are attached to a peptide-like backbone. Because the repulsion between the negative phosphates of DNA backbones is no longer an issue, these probes bind DNA with great consistency. In one study, PNA probes were used in conjunction with flow cytometry to accurately determine the number of telomeric repeats in a large number of Chinese Hamster Ovary (CHO) cells (Brind’Amour & Lansdorp, 2011). But FISH isn’t the only advancing assay that takes advantage of FCM. One assay, methyl-BEAMing, allows for the quantification of methylated DNA in patient samples via BEAMing and flow cytometry (Li et al., 2009). BEAMing (beads, emulsion, amplification, and magnetics) is a technology that is used to create beads covered in thousands of cloned copies of a template. DNA is isolated from plasma and fecal samples, the target sequence is amplified via PCR, and the product is then added to an oil emulsion filled with magnetic beads. After a process of shaking and heat cycling, the cloned DNA becomes attached to the beads, with thousands of copies on each bead. To examine DNA methylation, prior to this process the DNA undergoes a bisulfite treatment, which converts cytosine to uracil while leaving methylated cytosine intact. Once on the beads, the DNA is hybridized with fluorescent oligonucleotide probes derived from either methylated or unmethylated sequences of the target gene, with a different fluorophore for each. The beads are then passed through a flow cytometer. Using this assay, one methylated strand out of 5,000 unmethylated strands can be detected (Li et al., 2009). This technique is useful because the methylation of certain DNA sequences is a biomarker for cancer.
Using methyl-BEAMing, Johns Hopkins researchers were able to detect 41% of cancer cases from plasma samples, and 41% from stool samples (Li et al., 2009).

Optimizing Computing Technology In addition to finding new and improved ways to use FCM, there is the issue of optimizing FCM itself. Opportunities to optimize FCM lie mainly in the area of computer technology. One of the most important parts of FCM is the sorting of cells into different groups based on their measured characteristics. Generally, this is done by manually setting thresholds based on past data. However, there are problems


with this method, as it can be time-consuming and the data can be difficult to interpret. Thus, one of the current trends in advancement is to create functional, computational methods for separating cell populations (Aghaeepour et al., 2013). A major effort toward attaining this goal is FlowCAP, a joint effort of the computer science and technology communities. FlowCAP supports competitions that assess FCM computing algorithms in four areas: complete automation, manual tuning, known number of populations, and “supervised approaches trained using human-provided gates” (Aghaeepour et al., 2013). One finding of a FlowCAP competition was that, among the programs that were run both completely automated and manually tuned, only half were significantly more accurate as a result of the manual support. Meanwhile, providing the number of cell populations was found to greatly increase accuracy (Aghaeepour et al., 2013).
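To make the idea of automated population separation concrete, here is a toy sketch of "gating" by clustering: cells are grouped by their measured fluorescence intensities rather than by hand-drawn thresholds. This is a minimal 1-D k-means illustration, not one of the actual FlowCAP competition algorithms, and the intensity values are invented for the example.

```python
def kmeans_1d(values, k=2, iterations=20):
    """Cluster scalar measurements (e.g. fluorescence intensity) into k groups."""
    values = sorted(values)
    # Spread the initial cluster centers across the observed range.
    centers = [values[int(i * (len(values) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iterations):
        # Assign each measurement to its nearest center...
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        # ...then move each center to the mean of its group.
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers, groups

# Two simulated cell populations: dim (~100 a.u.) and bright (~500 a.u.).
dim = [90, 95, 100, 105, 110]
bright = [480, 490, 500, 510, 520]
centers, groups = kmeans_1d(dim + bright, k=2)
print([round(c) for c in centers])  # → [100, 500]
```

Real FCM data is multidimensional and far messier, which is why the competition algorithms are considerably more sophisticated, but the underlying goal — replacing manual thresholds with computed cluster boundaries — is the same.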

Future Uses Recently, attempts have been made to use FCM to determine T-cell reactivity to donor antigens in transplant patients. T-cell reactivity is used to measure immune response to donor tissues, which is important for determining the amount of immunosuppression patients with donor organs will need. However, there are not yet any methods for accurately measuring reactivity (Juvet et al., 2017). One proposed answer to this problem uses FCM to measure the frequency with which T-cells form complexes with antigen-presenting cells. This method takes advantage of the fact that, in these complexes, the actin cytoskeleton of the T-cells is rearranged and concentrated at the site of contact. Fluorescent phalloidin can be used to make these actin structures visible, and many cells can be imaged with FCM to determine the frequency of complex formation (Juvet et al., 2017). More must be done to develop this technique before it is put to clinical use. One of the largest barriers is the need for tedious manual gating, and efforts are being made to create an automated FCM algorithm (Juvet et al., 2017).

Conclusions FCM is a useful, high-throughput technique that is used in a wide range of biological and biomedical areas, such as microbial characterization, cancer detection, and organ transplantation. While the computerization of FCM and the use of mass spectrometry have greatly increased the ability of FCM to characterize cellular events, there is still much room for improvement. Most sought after are accurate, automated algorithms for separating cells into different populations. That said, these

limitations have not stopped scientists from utilizing FCM in new assays, like FISH and methyl-BEAMing. Going forward, it is hoped that the usage of FCM in different diagnostics will become more accurate and less work intensive. D CONTACT SAM REED AT SAMUEL.R.REED.19@DARTMOUTH.EDU References 1. Aghaeepour, N., Finak, G., Hoos, H., Mosmann, T. R., Brinkman, R., Gottardo, R., ... & DREAM Consortium. (2013). Critical assessment of automated flow cytometry data analysis techniques. Nature Methods, 10(3), 228. 2. Amann, R., & Fuchs, B. M. (2008). Single-cell identification in microbial communities by improved fluorescence in situ hybridization techniques. Nature Reviews Microbiology, 6(5), 339. 3. Arrigucci, R., Bushkin, Y., Radford, F., Lakehal, K., Vir, P., Pine, R., ... & Tyagi, S. (2017). FISH-Flow, a protocol for the concurrent detection of mRNA and protein in single cells using fluorescence in situ hybridization and flow cytometry. Nature Protocols, 12(6), 1245. 4. Bandura, D. R., Baranov, V. I., Ornatsky, O. I., Antonov, A., Kinach, R., Lou, X., ... & Tanner, S. D. (2009). Mass cytometry: technique for real time single cell multitarget immunoassay based on inductively coupled plasma time-of-flight mass spectrometry. Analytical Chemistry, 81(16), 6813-6822. 5. Brind'Amour, J., & Lansdorp, P. M. (2011). Analysis of repetitive DNA in chromosomes by flow cytometry. Nature Methods, 8(6), 484. 6. Givan, A. L. (2013). Flow cytometry: first principles. John Wiley & Sons. 7. Juvet, S. C., Moshkelgosha, S., Sanderson, S., Hester, J., Wood, K. J., & Bushell, A. (2017). Measurement of T Cell Alloreactivity Using Imaging Flow Cytometry. Journal of Visualized Experiments: JoVE, (122). 8. Li, M., Chen, W. D., Papadopoulos, N., Goodman, S. N., Bjerregaard, N. C., Laurberg, S., ... & Durkee, K. (2009). Sensitive digital quantification of DNA methylation in clinical samples. Nature Biotechnology, 27(9), 858.

Figure 3: Fluorescent probes targeting bacterial rRNA are used to test for specific microbe populations. Peptide nucleic acid probes can be used to make this test more quantitative. Source: Wikipedia

“While the computerization of FCM and the use of mass spectrometry have greatly increased the ability of FCM to characterize cellular events, there is still much room for improvement.”



Effects of Impurities on Glacial Ice BY SARAH CHONG ‘21 Figure 1: Mendenhall Glacier in Juneau, Alaska. Source: Wikimedia Commons

Introduction Rising sea levels have become the poster example of the threat climate change poses to humankind. Because media coverage emphasizes rising temperatures, most people assume that glacial melt is driven by warming alone. However, impurities in the air, produced by the same pollution responsible for the greenhouse effect, are also a significant cause of glacial melt.

Glaciers Glaciers are defined as “fallen snow that, over many years, compresses into large thickened ice masses” by the National Snow and Ice Data Center. Snow that persists in one location accumulates more snow every year; the increasing mass above builds pressure on the older snow, which ultimately transforms it into ice. The snow re-crystallizes under this compressional force, and the grains gradually grow larger as the air pockets shrink, increasing the density of the

ice. The intermediate state between snow and glacier ice is called firn, in which the ice has two-thirds the density of water. (1) When firn reaches 91.7% of the density of water, it has become glacier ice. (2) Normally this happens once the ice reaches a thickness of about 50 meters. (4) There are two kinds of glaciers. Alpine glaciers move downward after being formed on mountainsides, often deepening existing valleys or even creating new ones. Every continent except Australia has an alpine glacier. They are also called valley glaciers or mountain glaciers. (4) The second kind is an ice sheet, which can form anywhere. Ice sheets are broad domes that spread out, covering the surrounding terrain, including plains and even entire mountains and valleys, with a thick layer of ice. If large enough, they are called continental glaciers, such as the ones that cover most of Antarctica and Greenland. (4) Another defining aspect of glaciers is their mobility. A body of ice must be capable of

moving in order to be called a glacier. (6) In the process called compression melting, the glacier begins to move due to its immense mass and gravity. Its behavior is similar to that of a liquid, sliding over uneven surfaces. (4) Glaciers are extremely powerful because they relentlessly grind away at the soil underneath. This erosion happens due to the debris they carry within the ice at the base, material such as soil, clay, and boulders. These materials are called moraine, which forms in lines along, within, or on top of the glacier. This is where pollution plays a role. (4) The characteristic cracks, or crevasses, in a glacier form due to the difference in the speed of flow among different depths in the ice. The top and middle layers move faster than the base due to frictional differences, causing a buildup of tension. To release this tension, the brittle top layer fractures. (4) The force that forms in response to the driving stresses of weight and gravity causes deformations in ice. This force is called ‘strain.’ The combination of gravitational driving stress and the strain of deformation controls the glacier’s velocity. (3) Another cause of the difference in speed is moulins, vertical shafts cut through the glacier by falling meltwater. (4)

Science Behind Flow Velocity: Stress and Strain We will use Newton’s three laws to explain the movement of ice. 1. Every object in a state of uniform motion tends to remain in that state of motion unless an external force is applied to it. 2. The relationship between an object’s mass m, its acceleration a, and the applied force F is F = ma. 3. For every action, there is an equal and opposite reaction.

“Stress is a measure of how hard a material is pushed or pulled as a result of external forces” (3). In glaciers, the force affecting the motion of the glacier mass is gravity. Depending on the area over which the force is applied, the amount of stress can change. Stress is measured in pascals (Pa). There are many different kinds of stress that add up to create the overall movement of a glacier. Mainly, glaciers flow because ice deforms as a result of basal shear stress. This stress acts across the bed of a glacier; areas over rough beds have higher stresses while smooth surfaces have lower stresses. The typical amount of stress necessary for movement is 10^5 pascals. Ice deforms through non-linear viscous flow, meaning it has an effective viscosity which allows it to support only up to 10^5 pascals. The ice will thicken until it reaches this amount of stress; anything higher and the glacier will thin and flow faster. The other kinds of stress, longitudinal and transverse, affect this basal shear stress. Longitudinal stress is the pushing or pulling effect on ice, and the reason for crevasses. Horizontal transverse stresses come from lateral drag against a valley wall or slower-moving ice. “Strain, on the other hand, measures the amount of deformation that occurs as a result of the stress” (3). Even under the same stress, different materials respond differently. Strain can be elastic or permanent. A strain is elastic if the change in the material is temporary and it reverts to its original shape. This change can store strain energy, and at certain amounts this energy is released, which leads to failure or permanent deformation. The level of stress at which permanent deformation happens is called yield stress. The combination of deformation and basal sliding is equal to glacier velocity. The flow is a

“There are many different kinds of stress that add up to create the overall movement of a glacier. Mainly, glaciers flow because ice deforms as a result of basal shear stress.”

Figure 2: The forces acting on glacial ice. Source: Wikipedia



“The combination of deformation and basal sliding is equal to glacier velocity. The flow is a combination of deformation in the ice and bed, and the sliding of ice over its bed.”

“Glaciers are a vital source of freshwater for human beings and decreasing levels are distressing immediate communities and effects will start to spread globally.”

combination of deformation in the ice and bed, and the sliding of ice over its bed. (3) When permanent deformation forms from strain in response to stress, glaciers flow. Resistance to strain depends on ice temperature, crystal structure, bed roughness, debris content, and water pressure, among other factors. Deformation specific to ice crystals is called creep. Movement can occur between or within ice crystals. Basal sliding happens when a glacier reaches the pressure melting point at its base. This is what helps ice streams form and causes fast ice flow. During regelation, pressure mounts behind obstacles. The pressure melts ice, lubricating the glacier bed; this results in melting upstream and re-freezing downstream. Meltwater enhances sliding by lowering friction and making the bed smoother by submerging minor obstacles.
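The ~10^5 pascal figure cited above can be sanity-checked with the standard textbook driving-stress formula, tau = rho × g × H × sin(alpha), for an ice slab of thickness H on a bed of slope alpha. This is a back-of-envelope sketch; the thickness and slope values below are illustrative, not taken from the article.

```python
import math

RHO_ICE = 917.0   # density of glacier ice, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2

def driving_stress(thickness_m, slope_deg):
    """Gravitational driving stress (Pa) of an ice slab on an inclined bed."""
    return RHO_ICE * G * thickness_m * math.sin(math.radians(slope_deg))

# A 200 m thick glacier on a gentle 3-degree slope:
tau = driving_stress(200, 3)
print(f"{tau:.0f} Pa")  # on the order of 10^5 Pa
```

A few hundred meters of ice on a slope of only a few degrees is already enough to reach the stress at which ice yields and flows, which is why even gently sloping ice sheets move.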

Effects of Impurities When surface melt increases, the loss of ice mass also increases. The Greenland ice sheet, the largest ice mass in the Northern Hemisphere, has been darkening since 1996 by around 2% every decade. (5) This darkening ice absorbs more energy, accelerating surface melting and speeding up the rise of sea levels. The darkest particles are called “black carbon,” or BC, which comes from the incomplete burning of biomass and fossil fuel. These particles are small and light, enabling them to travel thousands of kilometers. Because of their intense darkness and stark contrast against white ice, even a small amount significantly affects the energy absorbed by the ice. Other particles result from microbiological activity or are transported trapped inside moving ice. These trapped particles can travel long distances over thousands of years before being released as the ice melts. Thus, even a small amount of impurities can have a drastic impact. (5) Curiously, there is a difference, and a sort of transition, between snow and ice: particles on snow become covered by new snow, temporarily obscuring them, while on ice they can stay embedded for decades before being washed away. The long residence time of particles in ice leads to accumulation, amplifying the rate of melting. And when the ice melts, it releases more particles, causing even faster melting, “accelerating the feedback loop.” (5)

Effects of Ablation

Ablation is the process by which glaciers lose mass. Ablation can occur anywhere on a glacier, but the ablation zone is where the net loss of glacier mass happens. Surface melt, surface meltwater runoff, sublimation, avalanches, and windblown snow all cause surface ablation. Melting ice’s effect is not only the physical conversion from ice to water: water has a higher capacity for heat, so once meltwater has pooled, these bodies of water are capable of absorbing more heat and accelerating melting. Calving, subaqueous melting, and melting at the ice bed are also types of ablation. Beyond raising sea levels, the addition of freshwater threatens species in the area that need salt water. (3) To analyze the gain and loss of mass in glaciers, scientists use a modeling technique called glacier mass balance. It is based on measurements taken at the zone of accumulation and the zone of ablation, which correspond to the higher and lower areas of the glacier, respectively. The firn limit, or snow line, separates the two zones. The difference between the amount of mass accumulated and lost also dictates whether or not the glacier is moving. (6)

Conclusion Glaciers are a vital source of freshwater for human beings; decreasing levels are distressing immediate communities, and the effects will start to spread globally. High levels of melting have already caused sea levels to rise by 2.6 inches between 1993 and 2014. Rising sea levels not only mean flood threats along coastlines, but also that destructive storm surges are able to push further inland. This nuisance flooding has increased to around three to nine times the average rate from fifty years ago. (7) Although thermal expansion is also a factor in rising sea levels, glacial meltwater is a huge contributor. Overall, rising temperatures must be curbed, and pollution from particles is exacerbating climate change. D CONTACT SARAH CHONG AT SARAH.W.CHONG.21@DARTMOUTH.EDU

References 1. https://nsidc.org/cryosphere/glaciers/questions/formed.html 2. http://www.iceandclimate.nbi.ku.dk/research/flowofice/densification/ 3. http://www.antarcticglaciers.org/modern-glaciers/glacier-flow-2/glacier-flow-ii-stress-and-strain/ 4. https://www.nationalgeographic.org/encyclopedia/glacier/ 5. https://www.unis.no/impurities-glacier-ice/ 6. http://www.physicalgeography.net/fundamentals/10ae.html 7. https://oceanservice.noaa.gov/facts/sealevel.html




All About Arsenic Poisoning BY JENNY CHEN ’21

Introduction Arsenic contamination of groundwater is a form of pollution, one that can cause irreparable harm to the millions of people who rely on groundwater as their only source of drinking water. Arsenic-contaminated water posed a serious risk for the people of Bangladesh and India in the 1970s, and a similar crisis is arising in Pakistan. One reason why arsenic poisoning is so dangerous is that it has been linked with many severe health consequences, including increased risk for lung, bladder, and kidney cancer even 40 years after exposure.

What is Arsenic Poisoning? In many countries, inorganic arsenic in the form of both arsenite (III) and arsenate (V) is naturally present at high levels in groundwater. One way that arsenic can seep into groundwater is through rain releasing arsenic from rocks and sediments into water wells. Arsenic is extremely hard to detect in water because it has neither a distinct smell nor taste. Arsenic levels are often higher in drinking water from ground sources, such as wells, as opposed

to drinking water from surface sources like lakes or reservoirs (American Cancer Society, 2014). Because of this, arsenic poisoning did not present itself as a problem until people began tapping into ground sources for water, which they did to avoid the pathogens often found in surface waters. In doing so, they avoided diseases such as diarrhea, typhoid, and cholera, but became vulnerable to arsenic poisoning. Most of the people affected by arsenic contaminated water live in southern Asia, including Bangladesh, Cambodia, India, Nepal, and Vietnam (World Health Organization, 2014). In the 1990s, after the incident in Bangladesh, scientists discovered that the reason why countries in Asia are disproportionately affected is that the Himalayan mountain range acts as a place where arsenic-rich rocks and sediments can collect. However, even in the United States, many drinking wells exceed the concentration of arsenic deemed safe to drink. The World Health Organization (WHO) has acknowledged this issue as an international one, citing arsenic as one of their ten chemicals of major public health concern. Arsenic has also been classified as a carcinogen by the International Agency for

Figure 1: Arsenic poisoning: early signs. Source: Flickr


Research on Cancer, the National Toxicology Program, and the U.S. Environmental Protection Agency.

The History Behind Arsenic Contamination “[In Bangladesh,] it has been estimated that 1 in 5 of the 8 million wells are contaminated with arsenic... Approximately 35 to 77 million people in Bangladesh were drinking water with unsafe arsenic. ”

“Approximately 2.1 million Americans use water wells with high levels of arsenic, a number much higher than previously predicted.”


The first record of arsenic contamination is from Northern England in 1900, when arsenic-contaminated beer resulted in 6,000 poisonings and 71 deaths (Agency for Toxic Substances & Disease Registry, 2011). However, the most well-known incidence of arsenic poisoning occurred in Bangladesh and India in the 1970s. It is considered the largest mass poisoning in history, and it would take 20 years for anyone to realize that people were indeed drinking arsenic-contaminated water. The situation began due to Bangladesh’s high infant mortality rate. In an effort to reduce that rate, aid agencies such as UNICEF and the World Bank proposed tapping into groundwater wells. The infant mortality rate was cut in half, but it has been estimated that 1 in 5 of the 8 million wells are contaminated with arsenic, which has caused problems of its own. Approximately 35 to 77 million people in Bangladesh were drinking water with unsafe levels of arsenic (World Health Organization, 2014). In one highly affected area, 21.4% of all deaths were attributed to arsenic levels above 10 micrograms per liter, the WHO provisional guideline. Thousands of hair, nail, and urine samples from people living in arsenic-affected villages in Bangladesh and West Bengal were analyzed, and 93% and 77% of the samples, respectively, contained arsenic levels above normal. Although the government has begun regularly testing water wells and marking some as unsafe to drink, Bangladesh is not completely rid of the problem. Currently, more local farmers want to draw high volumes of water from irrigation wells for their crops. This poses a risk, as it will create flow conditions that bring arsenic-contaminated water from above ground into the deep aquifers below. It has been shown that uncontaminated domestic wells that are more than 500 feet deep can remain arsenic-free for at least 1,000 years (Michael and Voss, 2008). 
That length of time could be drastically cut if people begin to draw water from aquifers for irrigation and end up contaminating the aquifers. From a public health standpoint, the deep aquifers should solely be for the purpose of obtaining drinking water. However, from an economic and/or agricultural one, that might not be the case. With agriculture as Bangladesh’s largest sector of the economy, it is understandable that some people want to tap into these wells to pull out more water

for irrigation. However, it is important to keep in mind not only the health effects of arsenic poisoning, but also the fact that said effects play a role decades after exposure. Because of this, alternative sources of water for irrigation should be strongly considered.

Current Events in the US and Pakistan Although arsenic-contaminated water is a more significant problem for other countries, it should not be completely disregarded here in the U.S. Approximately 2.1 million Americans use water wells with high levels of arsenic, a number much higher than previously predicted (Ayotte et al., 2017). Previously, officials only tracked arsenic levels in wells that were already determined to exceed the limit of 10 micrograms per liter, and state records were kept infrequently. Recently, the U.S. Geological Survey created a model which estimates the probability of finding arsenic in places where data has never been collected. States such as Maine, New Hampshire, and Massachusetts were estimated to have high levels of arsenic, in part because private wells there are drilled into crystalline bedrock with fractures; when water moves through fractures, it can bring arsenic into the supply. Overall, this model revealed that arsenic contamination is also an issue for the U.S., and enforcing national policies can help ensure wells are arsenic-free. In Pakistan, a more dire situation awaits: an estimated 50 to 60 million people use water that contains more than 50 micrograms per liter of arsenic, five times the WHO guideline. Similar to the U.S. Geological Survey, scientists used statistics to develop a hazard map and predict the number of people affected. In addition, 1,200 groundwater pumps were tested, and ⅔ of the samples exceeded 10 micrograms per liter of arsenic (Podgorski et al., 2017). Most of the people who are affected live in the Indus Valley. Also, similar to the situation in Bangladesh, people have been increasingly drawing their water from underground aquifers, and human and animal waste contamination in water worsens the problem. One solution that has been proposed is to use wells with older and deeper aquifers, where most of the arsenic has disappeared. 
The younger sediment is often the culprit, where arsenic is still present at unsafe levels.
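The screening arithmetic behind surveys like these is simple: each well reading is compared against the WHO provisional guideline of 10 micrograms per liter. The sketch below illustrates that comparison; the sample readings are hypothetical, not data from the studies cited.

```python
# WHO provisional guideline for arsenic in drinking water, in micrograms per liter.
WHO_GUIDELINE_UG_L = 10.0

def classify_wells(samples_ug_l):
    """Split arsenic readings (µg/L) into safe and unsafe lists per the WHO guideline."""
    unsafe = [s for s in samples_ug_l if s > WHO_GUIDELINE_UG_L]
    safe = [s for s in samples_ug_l if s <= WHO_GUIDELINE_UG_L]
    return safe, unsafe

# Hypothetical well readings in µg/L — note 52.0 is more than five times the guideline.
readings = [2.5, 8.0, 15.0, 52.0, 9.9, 120.0]
safe, unsafe = classify_wells(readings)
print(f"{len(unsafe)} of {len(readings)} wells exceed the guideline")  # prints "3 of 6 wells exceed the guideline"
```

The hard part of such surveys is not this comparison but collecting the samples at all, which is why the statistical hazard maps described above are used to predict contamination where no measurements exist.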

Health Implications There have been many scientific studies on the effects of arsenic on the human body, the most harmful effect being cancer. There is substantial evidence of increases in lung, bladder, and kidney cancer even 40 years after

the exposure to arsenic (Smith et al., 2017). Cancer mortality rates began to increase 10 years after exposure and remained high for up to 40 years after. The delay between the time of exposure and the incidence of cancer is extremely long, which emphasizes the danger of arsenic contamination, as it can affect one’s body decades later. In addition to increased risks for cancer, long-term exposure to arsenic has been found to result in an irregular heartbeat, liver and kidney damage, a shortage of white and red blood cells, and changes in the appearance of skin. The first symptoms can be seen on the skin, with the appearance of dark and light spots and hardening of the palms and soles. The most common neurologic effect, sensory-predominant peripheral neuropathy, also affects the hands and feet. Symptoms include a decrease in sensation, described as feeling like one is wearing gloves and stockings. Although scientists are not certain why arsenic poisoning may lead to neuropathy, they speculate that it may be similar to thiamine deficiency, in the sense that arsenic inhibits the conversion of pyruvate to acetyl coenzyme A and blocks the Krebs cycle. The proposed mechanism of arsenic toxicity centers on its inhibition of cellular respiration in mitochondria. The liver and kidney were found to have the highest concentrations of arsenic in an experiment where samples of organs were ground and dried, and the total arsenic concentration was then measured by electrothermal atomic absorption spectrometry (Benramdane et al., 1999). This is most likely due to the role of the


liver and kidney in detoxification, as inorganic arsenic becomes methylated in the liver and the kidneys remove wastes from blood. After arsenic is absorbed, it can be partly methylated into its organic counterparts: monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA). MMA and DMA easily leave the body through bile, blood, and urine because they have a weak affinity for tissues. The impact of arsenic on the liver was also seen in hospital-based studies on 248 cases of arsenic poisoning, in which 190 patients had hepatomegaly, or an enlarged liver (Mazumder, 2005). Mice were given water contaminated with arsenic at 3.2 mg/L for 15 months. Liver histology revealed hepatic fibrosis, a condition in which excessive connective tissue builds up in the liver, at 15 months. Furthermore, arsenic uptake into the liver may be increased in people with poor nutrition, as their intrinsic detoxification mechanisms, including methylation, are not fully functional. Animal studies supported this link between nutrition and methylation. A diet low in protein, choline, or methionine showed reduced rates of arsenic excretion among mice, implying a lower rate of methylation. However, what complicates the diagnosis and treatment of arsenic poisoning is that symptoms vary between geographic areas, populations, and individuals (World Health Organization, 2014). Arsenic poisoning can affect the body in many different ways and it is difficult to recognize the symptoms as the result of arsenic poisoning, especially in rural areas where health services are limited.

“However, what complicates the diagnosis and treatment of arsenic poisoning is that symptoms vary between geographic areas, populations, and individuals.”

Figure 2: Arsenic risk from water wells. Source: Flickr


Potential Solutions and Global Health Implications

“Nonetheless, in order for these solutions to be put and kept in place, the governments of these countries must take action. They must enforce disease screening, reduce co-exposures, implement treatment and health services, and increase public awareness about the danger of arsenic poisoning.”

Arsenic poisoning is just one example of a disease that predominantly affects third-world countries. In this case, it is because arsenic contamination occurs where water quality is poor and water filters are absent. In addition, people with diets lacking certain important nutrients show reduced rates of methylation, one way the body gets rid of arsenic. Furthermore, although there are laboratory tests that can measure the amount of arsenic in one’s blood, urine, hair, and fingernails, they can only be conducted at special laboratories. In hospitals where doctors and patients cannot afford to send out these samples, arsenic poisoning can go undetected and untreated. One solution to this crisis is filtering arsenic out of water through common water filters, which Cambodia has started to implement. However, even common water filters may prove too expensive for some rural parts of the world. Bangladesh decided to use deeper, arsenic-free aquifers for drinking water, but the majority of affected countries only have aquifers that reach 300 feet down. Instead, some suggest that people simply blend low-arsenic water with higher-arsenic water to achieve an acceptable arsenic concentration. Nonetheless, in order for these solutions to be put and kept in place, the governments of these countries must take action. They must enforce disease screening, reduce co-exposures, implement treatment and health services, and increase public awareness about the danger of arsenic poisoning. High-risk populations should be regularly monitored for early signs of arsenic poisoning, especially skin problems. In the U.S., states should implement regulations on the amount of arsenic in their water wells. A national screening program in the form of individual tests at domestic wells would ensure that arsenic levels are regularly monitored. 
Overall, arsenic contamination is a complex issue that goes beyond the biological and chemical effect of arsenic on our cells, one that requires effective policies from governments to partially, and ideally completely, remedy. D

References 1. Agency for Toxic Substances and Disease Registry. (2010, January 15). What are the Physiologic Effects of Arsenic Exposure? Retrieved January 05, 2018, from https://www.atsdr.cdc.gov/csem/csem.asp?csem=1&po=11 2. Smith, A. H., Marshall, G., Roh, T., Ferreccio, C., Liaw, J., & Steinmaus, C. (2017). Lung, Bladder, and Kidney Cancer Mortality 40 Years After Arsenic Exposure Reduction. JNCI: Journal of the National Cancer Institute, djx201. https://doi.org/10.1093/jnci/djx201 3. American Cancer Society. (2014, July 18). Arsenic and Cancer Risk. Retrieved January 05, 2018, from https://www.cancer.org/cancer/cancer-causes/arsenic.html 4. Ayotte, J. D., Medalie, L., Qi, S. L., Backer, L. C., & Nolan, B. T. (2017). Estimating the High-Arsenic Domestic-Well Population in the Conterminous United States. Environmental Science & Technology, 51(21), 12443-12454. doi:10.1021/acs.est.7b02881 5. Benramdane, L., Accominotti, M., Fanton, L., Malicier, D., & Vallon, J. (1999). Arsenic Speciation in Human Organs following Fatal Arsenic Trioxide Poisoning - A Case Report. Clinical Chemistry, 45(2), 301-306. 6. Mazumder, D. G. (2005). Effect of chronic intake of arsenic-contaminated water on liver. Toxicology and Applied Pharmacology, 206(2), 169-175. doi:10.1016/j.taap.2004.08.025 7. Michael, H. A., & Voss, C. I. (2008). Evaluation of the sustainability of deep groundwater as an arsenic-safe resource in the Bengal Basin. Proceedings of the National Academy of Sciences, 105(25), 8531-8536. doi:10.1073/pnas.0710477105 8. Podgorski, J. E., Eqani, S., Khanam, T., Ullah, R., Shen, H., & Berg, M. (2017). Extensive arsenic contamination in high-pH unconfined aquifers in the Indus Valley. Science Advances, 3(8). doi:10.1126/sciadv.1700935 9. World Health Organization. (2014, November). Arsenic. Retrieved January 05, 2018, from http://www.who.int/mediacentre/factsheets/fs372/en/




Understanding Uncle Mo BY PETER VO '18

One Play at a Time The date was January 8th, 2018. The stage was the College Football National Championship: the up-and-coming Georgia Bulldogs challenging the dynasty of the Alabama Crimson Tide. By halftime, the Crimson Tide were down 13-0. Quarterback Jalen Hurts, the 2016 offensive player of the year in the Southeastern Conference, was held to only 21 yards passing and 47 yards rushing. The Alabama offense was stagnant, and kicker Andy Pappanastos had already missed an easy 40-yard field goal. Nothing was going well for Alabama. Then, in the second half, head coach Nick Saban made the call of the year, pulling his 24-2 starting quarterback Jalen Hurts for a true freshman quarterback named Tua Tagovailoa. Suddenly, a touchdown pass made the score 13-7. Georgia scored quickly, extending the lead to 20-7. However, the initial touchdown pass from Tagovailoa was all that was needed to inspire the Alabama defense. The Tide defense forced an interception that led to another Tagovailoa touchdown. The score was 20-14. The Alabama sideline was suddenly energetic. Hope was in their eyes. The defense tightened and held the Bulldogs scoreless. Two Tide field goals and a crazy touchdown pass on 4th down tied the game at 20-20. Overtime came. Georgia kicked a field goal to make the score 23-20. Tagovailoa took a 16-yard sack. When all hope seemed lost, on 2nd and 26, the true freshman uncorked a beautiful deep ball for a touchdown. Game. Set. Match. Alabama won, 26-23.

Figure 1: Tom Brady, often described as playing with a "hot hand." Source: Flickr

Momentum is on Their Side During sporting events like football, basketball, or soccer games, broadcasters repeatedly mention momentum. They are not referring to the product of an object's mass and velocity, but to the phenomenon of how repeated positive or negative actions can affect the performance of an entire team and change the flow of a game. Jokingly referred to as "Uncle Mo," this psychological momentum has supposedly appeared in multiple games that seemed decided early on. Recent examples include the New England Patriots overcoming a 28-3 deficit in Super Bowl LI in 2017, the Cleveland Cavaliers winning the 2016 NBA Finals despite trailing three games to one in the series, and the LSU Tigers football team rallying from a 20-0 deficit against Auburn to win 27-23 in 2017. Psychological momentum (PM) is defined as the positive or negative change in cognition, affect, physiology, and behavior caused by an event or series of events that affects either the perceptions of the competitors, the quality of performance, or the outcome of the competition (1). Therefore, PM is not confined to a specific play or moment in a game. It could be a win streak a team is experiencing or simply a "hot hand" a player is feeling.

All in Their Heads

Figure 2: NBA Championship. Source: Flickr


A 1988 study titled "Psychological Momentum and Performance Inferences: A Preliminary Test of the Antecedents-Consequences Psychological Momentum Model" by Vallerand et al. of the University of Quebec at Montreal attempted to construct a psychological momentum model incorporating the variables that shape the causes and effects of momentum in sporting events. In the proposed model, the psychological momentum experienced by the players is the perception and feeling the player has about the game (2). These perceptions can be about the game as a whole or about significant moments during the game. These situational variables are referred to as antecedents. Common antecedents include the number of consecutive wins prior to the contest, a crucial turnover, or a critical scoring play (2). However, the impact of these antecedents relies on the context of the game. For example, a football team snagging an interception while leading by 30 points will not give that team any extra motivation. On the flip side, snagging that interception while down by a single point with little time remaining will instill a sense of confidence in the players. Another antecedent is the player's need to control the situation (2). These players are colloquially known to have a lot of passion for the game. They have vast knowledge of the game and incredible attention to detail. As such, they are more susceptible to antecedents and to how they shift the momentum of a game. The player's sensitivity to antecedents also supports what is known as the "hot hand" (2). Whenever players appear to play well continually during a game, broadcasters label them "hot," "on fire," or "in rhythm." Such cases occur when a quarterback continues to throw completions or a basketball player continues to make baskets. In those situations, the players psychologically feel like they cannot make a mistake and play with an air of confidence. However, once a bad play occurs, such as an interception or a missed shot, their confidence drops slightly. They might become timid and avoid plays they had tried previously. This phenomenon is known as going cold. Some well-known "passionate players" include basketball's Michael Jordan and Kobe Bryant and football's Tom Brady and Peyton Manning. Other situational factors that affect antecedent impact include the importance of the game and the crowd. There is a heightened sense of pressure in bigger games such as playoff or championship games. Thus, even the smallest antecedent, such as making a first down or forcing a defensive stop, can have a massive impact (2). The study claims the crowd plays a role as well. A supportive and enthusiastic crowd boosts the confidence of players, while a hostile and aggressive crowd discourages them entirely (2). Once the antecedents have taken place, whether positive or negative, the player will feel a heightened sense of either confidence and motivation or discouragement and loss of synchronization. This leads to the consequences of the model. Quite simply, the consequences are the player's subsequent performance and the eventual outcome of the game (2).

Is It in the Numbers? The Antecedent-Consequence model seems to provide a good explanation of the effects of psychological momentum; however, there does not appear to be much supporting data. A statistical analysis by researchers at Cornell and Stanford in 1985 looked at the shooting percentages of the Philadelphia 76ers during the 1980-1981 season (3). Specifically, the study examined the players' shooting percentages after making one, two, or three shots and after missing one, two, or three shots. Across the board, there appeared to be no correlation between making or missing a shot and the result of the following shot (3). However, many sports experts as well as players themselves criticize using a quantitative study to observe psychological momentum (4). The claim is that psychological momentum is subjective to each specific player. As the Antecedent-Consequence model suggests, psychological momentum is strictly a player's perception. As such, there is no viable way to quantify how a person perceives his or her situation in a game (4). Questionnaires may be used to assess how a player is feeling and who he or she believes has the momentum. However, even then, there does not seem to be empirical evidence of the effect of psychological momentum. The earlier Vallerand study attempted to observe psychological momentum using tennis players (2). The study had players play sets of six games. After each game, the players would evaluate their performance and determine who had the momentum. From each questionnaire, a winner for the next game was predicted. In the results of the study, the supposed owner of the momentum showed no correlation with the winner of the next game.
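The conditional-percentage test used in that 76ers analysis can be sketched in a few lines of Python. This is an illustrative simulation with made-up data, not the study's actual dataset or code: for a shooter whose attempts are independent, the percentage after a make and after a miss both converge to the base rate.

```python
import random

def conditional_pcts(shots):
    """Given a list of 1 (make) / 0 (miss), return the shooting
    percentage on attempts following a make and following a miss."""
    after_make = [s for prev, s in zip(shots, shots[1:]) if prev == 1]
    after_miss = [s for prev, s in zip(shots, shots[1:]) if prev == 0]
    pct = lambda xs: sum(xs) / len(xs)
    return pct(after_make), pct(after_miss)

random.seed(0)
# A 50% shooter whose attempts are independent -- i.e., no hot hand.
shots = [1 if random.random() < 0.5 else 0 for _ in range(100_000)]
p_after_make, p_after_miss = conditional_pcts(shots)
# Both conditional percentages land near 0.50. Finding similarly close
# post-make and post-miss percentages in the real 76ers data was the
# study's evidence against a hot-hand effect.
```

If a hot hand existed, the percentage after a make would be noticeably higher than after a miss; the study found no such gap.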

You Play to Win the Game Despite the lack of concrete evidence, the "Uncle Mo" phenomenon has grown to be a large part of sports culture. Coaches continuously preach that one play can change the flow of the game. One play is all that's needed. Experts include analysis of how teams are feeling psychologically and how a play or a win streak is affecting the players' confidence and perception. ESPN's website includes a graph depicting a team's win probability over the course of a game. Every pregame show pulls out trends from previous games and assumes the trend will continue into the upcoming game. It is a phenomenon so pervasive in sports culture that it has become the reason people use to explain the inexplicable. How else would you explain the Atlanta Falcons giving up a 25-point lead to the New England Patriots with just over 16 minutes left in the Super Bowl? How else would you explain Alabama suddenly being energized and winning a thrilling National Championship? There may not be a quantifiable scientific explanation for psychological momentum beyond the short bursts of courage a player gets. It does, however, make the game that much more interesting. D

CONTACT PETER VO AT PETER.D.VO.18@DARTMOUTH.EDU

References
1. Peterson, D. (2008, October 06). The Reality of Momentum in Sports. Retrieved February 1, 2018, from https://www.livescience.com/5120-reality-momentum-sports.html
2. Vallerand, R. J., Colavecchio, P. G., & Pelletier, L. G. (1988). Psychological Momentum and Performance Inferences: A Preliminary Test of the Antecedents-Consequences Psychological Momentum Model. Journal of Sport and Exercise Psychology, 10(1), 92-108. doi:10.1123/jsep.10.1.92
3. Gilovich, T., Vallone, R., & Tversky, A. (1985). The Hot Hand in Basketball: On the Misperception of Random Sequences. Heuristics and Biases, 601-616. doi:10.1017/cbo9780511808098.035
4. Crust, L., & Nesti, M. (2006). A Review of Psychological Momentum in Sports: Why qualitative research is needed. Athletic Insight: The Online Journal of Sports Psychology, 8(1). Retrieved February 1, 2018.



Shoals: An Island Where Dartmouth Can Study Marine Life BY LEAH VALDES '18 AND MARK LAIDRE

Figure 1: Shoals Island. Source: Leah Valdes, Mark Laidre


Oceans cover over 70 percent of Earth's surface, and yet our scientific understanding of the behavior, ecology, and evolution of marine organisms is far less than that of their terrestrial counterparts. Much therefore remains to be discovered about the complexities of marine life. And not just with a biological lens: interdisciplinary tools and perspectives from chemistry, physics, mathematics, and engineering are invaluable if we are to explore the full depths of life in the ocean. While Dartmouth has on-campus opportunities to study marine systems in the classroom, there is nothing like conducting field work in marine ecosystems. Here we provide a short narrative about marine research we conducted during summer 2017 at Shoals Marine Laboratory. Located in the Gulf of Maine, 6 miles off the coast, Shoals can be easily reached with just a 2-hour drive from Hanover to Portsmouth followed by a 1-hour ferry ride to Appledore Island in the Isles of Shoals. While readily accessible, the pristine surroundings of

Shoals feel a world away from the bustle of the mainland (Figure 1). Indeed, the location of this field site is ideally suited for marine research, boasting well-equipped laboratories, teaching classrooms, helpful research staff, dorms, and a kitchen whose cooks ensure all who stay on the island are well-fed. The island, which is jointly overseen by Cornell University and the University of New Hampshire, has been a dedicated field station for over 50 years, with countless long-term research projects as well as a wide selection of undergraduate classes offered during the summers. Only recently, though, has Dartmouth established an earnest foothold on Shoals. After we completed our first summer of research on Shoals, a Dartmouth alum (Bill Kneisel '69) generously gave Dartmouth a donation, which funds continued research on Shoals by Dartmouth students and faculty for the next 5 years. An important part of our motivation in writing this piece is therefore to highlight the amazing opportunity that Shoals presents for

those with an interest in studying marine life in the field. Our own interest in Shoals centers on animal behavior. For one of us (ML) this interest was also, in part, an act of repentance: ML had never visited Shoals when he was a Cornell undergrad, despite many Shoals research opportunities being advertised across campus. A decade after graduating, though, ML atoned for this foolish oversight and set foot on Shoals in September 2016, just as he was starting a faculty appointment at Dartmouth. After a few days of pilot observations on Shoals he was sold on this island as an ideal local research site. We formally launched a Dartmouth project on Shoals from May to September 2017, collecting data for LV's undergraduate thesis. The key question of this research was: how do animals find resources that are rare in space and time? Many species must solve this dilemma across a variety of landscapes; particularly in complex environments, such as beneath the ocean, there exists substantial spatio-temporal uncertainty in where and when new resources will become available. The organisms in which we sought an answer to this question were subtidal hermit crabs, Pagurus acadianus (Figure 2), ideal creatures for experimental study because they are critically reliant on resources, specifically empty shells, which are exceedingly rare and hard to come by (Laidre 2011). To answer our question, we donned snorkels and flippers, dodged some aggressive gulls and poison ivy, and headed into the ocean to see for ourselves how hermit crabs were

solving this problem. We began our research by systematically sampling the subtidal zone to quantify shell availability in space and time (see 'Dartmouth Shoals' videos on YouTube). Empty shells were indeed rare spatially. Furthermore, powerful tidal forces moved shells, or even buried them altogether, thereby increasing temporal uncertainty. How then can hermit crabs locate their critical resources? One potential solution is to exploit, as information, the cues incidentally produced when empty shells are created. Over hundreds of millions of years, many predatory species have evolved specialized abilities to prey upon snails. While some of these predators crush the shell to access the snail's flesh, others, like starfish (Figure 3a) and large predatory gastropods (Figure 3b), avoid having to break the shell: instead they secrete specialized enzymes, which degrade the snail's flesh, allowing the predator to digest the snail externally while leaving the shell intact. Such non-destructive predation should generate chemical cues as byproducts, cues which, in theory, could serve as information to hermit crabs about the imminent availability of new shells. We 'fleshed out' this hypothesis by simulating non-destructive predation events, first treating mollusk flesh with natural enzymes and then conducting controlled field experiments in which we tested the attractiveness of the chemical products to hermit crabs. In our experiments, hermit crabs flocked most to the sites containing enzyme-treated flesh. Our results (Valdes and Laidre 2018, in review) thus suggest that animals can resolve the problem of spatio-temporal uncertainty by


Figure 2: Subtidal hermit crabs. Source: Leah Valdes, Mark Laidre



Figure 3a, left: Starfish. Figure 3b, right: Large predatory gastropod. Source: Leah Valdes, Mark Laidre



exploiting long-distance chemical cues, within a complex web of ecological relationships involving predators and prey. While these results have answered our original question, many fascinating questions remain. We look forward to exploring this topic and study system further during continued research on Shoals in summer 2018. Some of our follow-up questions include: What precise molecules act as information beacons to individuals searching for shells? How and why does geographical variability in the 'biological market'—the supply and demand of shells—impact individuals' sensitivity to this information? And might anthropogenic noise or global change be fundamentally altering the underlying ecological and evolutionary relationships? These are just a few of countless scientific questions that can be asked at Shoals, and most specifically on hermit crabs. Admittedly, it would be a shell-shock to us to learn that not every scientist is absorbed by hermit crabs. Nevertheless, it should be pointed out that a wealth of additional marine life is present on Shoals, from deep-sea sharks and intertidal barnacles to nesting colonies of seabirds and harbor seals. LV is excited to conduct further research on Shoals in summer 2018 and imagines many other undergrads would likewise find the experience intellectually rewarding. If you have an interest in marine life and think that Shoals might be a place you would like to conduct research or take classes, then consider reading more about the site: https://www.shoalsmarinelaboratory.org/ Whatever you decide, don't be like one of us (ML) and take a decade before finally exploring Shoals. Note: A formal application process for summer research opportunities is presently being created by a committee within the Department of Biological Sciences. This application process will be advertised in future years to encourage a diverse group of researchers from Dartmouth to study on Shoals.
We hope other Dartmouth faculty and students will take advantage of this opportunity.


References
1. Laidre, M.E. 2011. Ecological relations between hermit crabs and their shell-supplying gastropods: constrained consumers. Journal of Experimental Marine Biology and Ecology 397: 65-70.
2. Valdes, L. and Laidre, M.E. 2018 (in press). Resolving spatio-temporal uncertainty in rare resource acquisition: smell the shell. Evolutionary Ecology. https://link.springer.com/article/10.1007/s10682-018-9937-4


ISEC 2017 - SECOND PLACE WINNER

Socially Assistive Robots: An Emerging Technology for Treating Autism BY SEI CHANG, KOREA INTERNATIONAL SCHOOL

Autism is a behavioral disorder that causes deficits in social reciprocity and affects a person's ability to understand others and their emotions. As Autism Spectrum Disorder (ASD) causes impairments in social interaction and communication, it is challenging for affected individuals to make sense of the social world and the daily interactions that define each person's experiences (White, Keonig, & Scahill, 2007). In general, people have many daily social interactions, which involve developing relationships, connecting with others, and communicating thoughts and feelings. When diagnosis and treatment are not made early on, it can become increasingly difficult for individuals with Autism to make up for these skills later in life (Dautenhahn, 2003). While it is certainly possible that some individuals with Autism will lead independent lives in their respective professions, they may accomplish this by applying definitive rules to overcome social barriers. In order to develop empathy and understand social norms, they must constantly practice behaviors that are often difficult for them to develop (Dautenhahn, 2003).

Symptoms and Treatment

Autism falls on a spectrum, meaning that the disorder varies from individual to individual in terms of when it begins and how severe the symptoms are. Social impairments caused by Autism mainly involve speech and interpersonal interactions, with symptoms including difficulties in understanding or expressing emotions, interpreting language that is not intended to be literal, and changing verbal tone during communication (Lord, Cook, Leventhal, & Amaral, 2000). These difficulties in engaging socially with others ultimately lead to fewer opportunities for "positive peer interactions" and inclusion in regular classrooms, impacting the learning environment for those with Autism (White, Keonig, & Scahill, 2007). Social Skills Training (SST), a type of intervention that has been used for treatment, allows children to learn social skills that encourage interaction with other children (White, Keonig, & Scahill, 2007). Different types of SST include group sessions that utilize different intervention strategies in relatively natural environments. These interventions may aim to help children overcome the awkwardness of social behaviors with continuous practice and exposure. An example of this intervention strategy is the use of "social scripts," cards that adolescents may use as a reference for different interactions (Tse, Strulovitch, Tagalakis, Meng, & Fombonne, 2007). Intervention techniques have also been developed to target the child's ability to understand others' emotions and perspectives through role-playing games (Tse, Strulovitch, Tagalakis, Meng, & Fombonne, 2007).

Social Robots as a Supplemental Tool

As technology continues to advance, a number of tools could be used to supplement Social Skills Training in diagnosing and treating individuals with Autism. Social robots are developed and designed to assist people through social assistance and interaction (Feil-Seifer & Mataric, 2009). Social robots are designed to carry out repetitive tasks, allowing the robots to be predictable, non-threatening, and patient (Scassellati, 2007). These behaviors are especially important when interacting with individuals with Autism. While interactive software and computer-mediated therapy models have also been developed for social assistance, social robots perform much better in several areas. For example, the physical presence of a robot impacts how individuals view and interact with robots as a social actor. Various studies have shown that individuals are more compliant (Bainbridge, Hart, Kim, & Scassellati, 2011) and have greater cognitive learning gains (Leyzberg, Spaulding, Toneva, & Scassellati, 2012) when interacting with a physically present robot than when interacting with a video-displayed agent. Social robots act as a physical subject for communication; they are able to teach skills to children with Autism in a non-threatening way, eliciting certain key behavioral responses. Social robots often act as peers rather than authority figures to prevent intimidation and encourage closer interaction between the child and the robot. They can either observe the child passively and gather data or interact with the child in a dynamic and social way (Dautenhahn, 1999).

As Autism cannot be simply diagnosed with a genetic or blood test, constant social skills practice is needed to overcome deficiencies in this area. Social robots are equipped with sensors in order to measure behaviors that are typically an indication of Autism and help elicit social behaviors in the child (Scassellati, 2014). In order to accurately gather data to evaluate social behavior, social robots use techniques such as motion tracking, gaze detection, and verbal word counting. Embedded technologies provide social robots with a perceptual system, which is necessary as a tool in diagnosis and treatment. For example, Microsoft's KINECT can be used to understand and measure engagement by tracking the child's motion and distance from the sensor (Leite, McCoy, Lohani, Ullman, Salomons, Stokes, Rivers, & Scassellati, 2015). Leaning forward can be an indication of engagement, while leaning back can be a sign that the individual is disengaged and needs motivation to re-engage (Leite et al., 2015). Through gaze detection, the robots can detect eye movement and record how often the child is maintaining eye contact. A lack of eye contact is a symptom of Autism, and maintaining eye contact is key to developing social relationships with others (Scassellati, 2005). Another way to supplement diagnosis is to measure the number of words the child speaks. In fact, 25% of Autistic children do not develop fully functioning language capabilities (Kuhl, Corina, Padden, & Dawson, 2005). By tracking words spoken, a child's progress can be accurately tracked throughout Social Skills Training.
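The lean-based engagement measure described above can be sketched in a few lines of Python. This is a hypothetical illustration: the function, threshold, and distance values are assumptions chosen for demonstration, not parameters from the cited studies or from the actual KINECT software.

```python
# Hypothetical sketch of classifying engagement from a depth sensor's
# torso-distance readings. All values are illustrative assumptions.

def engagement_state(baseline_mm, current_mm, lean_threshold_mm=150):
    """Compare the child's current torso distance to a calibrated
    baseline. Leaning toward the sensor (closer than baseline)
    suggests engagement; leaning away suggests disengagement."""
    delta = baseline_mm - current_mm  # positive => leaning toward sensor
    if delta > lean_threshold_mm:
        return "engaged"
    if delta < -lean_threshold_mm:
        return "disengaged"
    return "neutral"

# Baseline calibrated at 1200 mm; three later readings:
states = [engagement_state(1200, d) for d in (1000, 1210, 1400)]
# leaning in -> "engaged"; near baseline -> "neutral";
# leaning back -> "disengaged"
```

In a real system the baseline would be recalibrated per session, and the lean signal would be combined with gaze and word-count data before any judgment about engagement is made.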

In addition to quantifying movement and expressions, social robots are designed to elicit specific social behaviors through interactions with the child. One of the main goals of interacting with social robots is to encourage certain social behaviors (Scassellati, 2007). A key reason robots are a useful addition to Social Skills Training is that they are novel and entertaining, which can be a good motivator for children. As individuals with Autism are not skilled at distinguishing subtle facial expressions, the simple and exaggerated expressions of social robots are much easier to understand than those of humans (Scassellati, 2007). Socially assistive robots should not be so human-like that the child feels threatened, nor so new or interesting that the child is instead focused on the robot's physical appearance. Social robots can also generate social cues and express human-like emotion through sounds or movement. The robots can seek desired behaviors from the child and then respond to the child's behavior using simple expressions or sounds as a reward. In the case of the Dragon Bot, the robot's wings gradually spread wider as a reward for the child performing a task perfectly (Grigore & Scassellati, 2013). The Pleo dinosaur robot is also customized for children with Autism to "express happiness, disappointment, agreement, or disagreement" through different movements and sounds that represent different emotions (Kim, Berkovits, Bernier, Leyzberg, Shic, Paul, & Scassellati, 2013).

Present Day Obstacles While socially assistive robots hold immense treatment benefits, several obstacles hinder their potential to become more widely used in households. Although social robots are not designed to act as surveillance machines, they require a lot of data in order to collect sufficient information about the children they interact with, including personal data about the child and visual data that might contain images of the entire house or other family members. Because huge amounts of data are required, it may be difficult to receive consent from families to use social robots outside of the clinic due to privacy concerns. As large amounts of information are entrusted to social robots, cyber security is also increasingly important in preventing data from being stolen by third parties (Denning, Matuszek, Koscher, Smith, & Kohno, 2009). To partially alleviate the issue of privacy, simple interactive toys can be used to collect less data than fully capable robots. While these simple toys are not as sophisticated, they can still increase the quantity of the data. As technology continues to advance and becomes more present in our daily lives, many people fear that robots may take jobs from people as they become more efficient in certain areas. Some individuals may see a risk of socially assistive robots completely replacing human workers, such as therapists or social workers. While the robots may aid children in developing the social behaviors necessary to interact with people, the robots' purpose is not to substitute for a human partner. Instead, social robots are a means to get children to the point where they can better interact with other people, though not all individuals will view them this way, especially if they are unfamiliar with robots.

Social Robots for Social Skills Training

Autism is a restrictive disorder that begins early in life and, without continued treatment and practice, can greatly hinder social interactions and the ability to develop meaningful relationships. Social robots can be included as an additional tool to treat children with Autism during Social Skills Training, as robots have the ability to perform repetitive behaviors in a non-threatening and engaging way. Embedded sensors give social robots a way to track and understand children's behavior and personal preferences, and to continue engaging with them in a meaningful way. There is promising Autism research with social robots, and as technology continues to advance, the ways we engage with social robots will continue to evolve and improve. D

References
1. Bainbridge, W. A., Hart, J. W., Kim, E. S., & Scassellati, B. (2011). The benefits of interactions with physically present robots over video-displayed agents. International Journal of Social Robotics, 3(1), 41-52.
2. Dautenhahn, K. (1999). Robots as social actors: Aurora and the case of autism. In Proc. CT99, The Third International Cognitive Technology Conference, August, San Francisco (Vol. 359, p. 374).
3. Dautenhahn, K. (2003). Roles and functions of robots in human society: implications from research in autism therapy. Robotica, 21(4), 443-452.
4. Denning, T., Matuszek, C., Koscher, K., Smith, J. R., & Kohno, T. (2009). A spotlight on security and privacy risks with future household robots: attacks and lessons. In Proceedings of the 11th international conference on Ubiquitous computing (pp. 105-114). ACM.
5. Grigore, E. C., & Scassellati, B. (2013). Proceedings of the International Workshop on Developmental Social Robotics (DevSor): Reasoning about Human, Perspective, Affordances and Effort for Socially Situated Robots at the 26th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2013.
6. Kim, E. S., Berkovits, L. D., Bernier, E. P., Leyzberg, D., Shic, F., Paul, R., & Scassellati, B. (2013). Social robots as embedded reinforcers of social behavior in children with autism. Journal of Autism and Developmental Disorders, 43(5), 1038-1049.
7. Kuhl, P. K., Coffey-Corina, S., Padden, D., & Dawson, G. (2005). Links between social and linguistic processing of speech in preschool children with autism: behavioral and electrophysiological measures. Developmental Science, 8(1).
8. Leite, I., McCoy, M., Lohani, M., Ullman, D., Salomons, N., Stokes, C., Rivers, S. E., & Scassellati, B. (2015, March). Emotional storytelling in the classroom: Individual versus group interaction between children and robots. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (pp. 75-82). ACM.
9. Leyzberg, D., Spaulding, S., Toneva, M., & Scassellati, B. (2012, January). The physical presence of a robot tutor increases cognitive learning gains. In Proceedings of the Cognitive Science Society (Vol. 34, No. 34).
10. Lord, C., Cook, E. H., Leventhal, B. L., & Amaral, D. G. (2000). Autism spectrum disorders. Neuron, 28(2), 355-363.
11. Scassellati, B. (2005, August). Quantitative metrics of social response for autism diagnosis. In Robot and Human Interactive Communication, 2005. ROMAN 2005. IEEE International Workshop on (pp. 585-590). IEEE.
12. Scassellati, B. (2007). How social robots will help us to diagnose, treat, and understand autism. Robotics Research, 552-563.
13. Tse, J., Strulovitch, J., Tagalakis, V., Meng, L., & Fombonne, E. (2007). Social skills training for adolescents with Asperger syndrome and high-functioning autism. Journal of Autism and Developmental Disorders, 37(10), 1960-1968.
14. White, S. W., Keonig, K., & Scahill, L. (2007). Social skills development in children with autism spectrum disorders: A review of the intervention research. Journal of Autism and Developmental Disorders, 37(10), 1858-1868.


ISEC 2017 - THIRD PLACE WINNER

Finally a Cure to the Ticking Time Bomb: Targeting HIV - CRISPR-Cas9 Genome Editing System BY PRAKRUTI DHOLIYA, BERGEN COUNTY ACADEMIES

HIV, or Human Immunodeficiency Virus, is a disease that plagues over 1.1 million people in the US and many more throughout the world (1). As many know, this disease causes the immune system to stop functioning properly and prevents the body from defending itself against viruses and diseases. In recent times, HIV has been discovered to be resistant to most prior forms of treatment, mainly drugs. This drug-resistant HIV has no current medical treatment, and the number of people and countries harboring drug-resistant strains is rising rapidly. However, there is still hope. A new genome editing system has been created on the basis of a natural bacterial system that prevents viruses from attacking bacteria and spreading. It was first discovered by accident in the 1980s, but little was known about it at that time. The genome editing system is known as CRISPR, or Clustered Regularly Interspaced Short Palindromic Repeats, and was first found within bacteria. When a synthetic CRISPR system is paired with the Cas9 (CRISPR-associated system) protein and a gRNA (synthetic guide RNA), it has the ability to cut into the genome of a cell at specific locations and alter it, showing that this system has the potential to cure HIV (2) - far more than merely preventing or treating it, as in the past. HIV has been regarded as a ticking time bomb due to the many drug-resistant strains that have recently been discovered (3). If a new cure or treatment is not found soon, the human race will be defenseless in the face of HIV and its attacks.

The CRISPR/Cas system is a prokaryotic immune system that functions within a bacterial cell. There are many CRISPR/Cas systems; however, Cas9 is the easiest to modify and manipulate for the purpose of genome editing. The original system consists of prokaryotic DNA arranged in short, repetitive, palindromic bases with spacer DNA in between. It then reacts with a nearby cluster of Cas proteins, creating two types of RNA: crRNA (CRISPR RNA) and tracrRNA (trans-activating CRISPR RNA) (4). The crRNA combines with the Cas complex and targets the viral DNA that is attacking the bacterium. This renders the viral DNA inactive, so it can no longer harm the bacterial cell. While this takes place, the Cas1 and Cas2 proteins take hold of the viral DNA and create a spacer to be inserted into the CRISPR array (5; 6). Through this system, the cell fights off the virus during an attack and continues to function properly.

The system described above has been modified in certain ways to create a new, synthetic version that can be used to edit and cut DNA. This new technology is based on the original CRISPR-Cas9 complex, but rather than forming two strands of RNA (the crRNA and the tracrRNA), researchers have combined the two into one synthetic RNA, known as guide RNA, or gRNA. The original Cas9 system consisted of four main components, but the engineered system has been simplified to only two major components: the gRNA and the Cas9 enzyme. In this system, the guide RNA genes can be altered, which is how the enzyme is directed to where it should cut the DNA within the cell (7; 2). Other studies show that different sites can be targeted by changing the crRNA sequence. CRISPR systems are being tested on various virus-attacked bacteria since they are easier to design than previous DNA cutters. Former DNA editing systems required creating an entire protein or specific enzyme, but in the CRISPR-Cas systems the only requirement is to create a new RNA pattern for the gRNA, depending on where and what type of cut is required (4; 8).

Since the CRISPR-Cas systems are easy to build and can edit and cut DNA, researchers see their potential to cure HIV. In the past, HIV could only be treated, not cured. The HIV virus is rapidly becoming more resistant to drug treatment and common methods. As stated earlier, HIV can mutate in the presence of antiretroviral drugs, a phenomenon known as HIVDR, or HIV Drug Resistance. According to a World Health Organization survey on HIV drug resistance, 82% of the people taking antiretroviral drugs had cases of HIVDR as of December 2016 (9). Being a virus, HIV is the kind of target the CRISPR-Cas9 system is known to fight off. With the CRISPR-Cas9 system, a new RNA can be coded and sent into the infected Helper T-Cells (10; 7; 11). This RNA would be designed by researchers depending on where the cut is needed to take out the DNA inserted by HIV.

HIV works in a very simple, effective, and powerful way. It uses the receptors specific to Helper T-Cells - the CD4 receptor and the CCR5 co-receptor, known as the primary receptors for HIV - to first attach and then bind itself to the cell. Afterwards, it uses its own envelope proteins to pull itself into the cell and merge both the outer cover of the cell and its own envelope. Thereafter, it enters the cell, and its outer proteins are digested. This leaves a few essential proteins and two strands of viral RNA inside the cell. The first strand of RNA uses the protein reverse transcriptase, paired with host nucleotides, to form first a single DNA strand and then a double strand of DNA. This strand of DNA is brought into the nucleus of the cell and added into the host chromosome using the protein integrase, which carries the DNA strand through a nuclear pore. Afterwards, this strand of viral DNA forms more proteins and envelope enzymes through RNA polymerase and mRNA translation. The mRNA of the viral DNA, which sits inside the host chromosome in the nucleus of the cell, also forms more proteins through the use of ribosomes; these proteins are brought to the envelope enzymes and detach from the diseased cell along with the envelope and its enzymes. They are transported from the rough endoplasmic reticulum to the outer membrane of the cell. The virus now needs to mature, as in, break the individual proteins off from the chain that was created by the ribosomes. This is done through the usage of the enzyme protease, and the proteins can then form the matrix within which the two RNA strands and enzymes are enclosed. Thereafter, the new virion is mature and ready to go on and infect other Helper T-Cells.

Through the infection of just one Helper T-Cell, the HIV virus can grow at exponential levels within days. During this period, the HIV strain goes through many mutations due to the many errors made by reverse transcriptase. Reverse transcriptase is what creates the double-stranded DNA from one of the RNA strands upon entering the cell, and it is the main cause of the different mutations that arise within HIV. Due to all of these mutations and the complexity of HIV, it is very hard to find a cure. Once HIV inserts its own DNA into the DNA of the Helper T-Cell, it is nearly impossible to "take it out," since it is not even detectable at that specific moment. It is not until much later that symptoms begin to occur and lead the infected person to conclude that they may have contracted HIV (10; 12). As scary and hard as finding a cure for HIV seems - with the number of infected cells and viruses rising rapidly in the bloodstream once contracted - CRISPR still has the potential to cure it and almost completely eradicate HIV from the bloodstream. Through the use of a coded gRNA, the CRISPR-Cas9 system can target the LTR (long terminal repeat) sequences. The LTR is a sequence that the viral DNA uses to integrate itself into the DNA of the host chromosome. The CRISPR-Cas9 system would break the DNA strands at specific locations based on the encoded guide RNA and its genes. More specifically, a break would occur at the T6 and T5 regions of the LTR strands.

WINTER 2018
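The targeting step described above can be made concrete with a toy sketch. The rule encoded below - a 20-nt protospacer immediately followed by an NGG PAM - is the standard SpCas9 targeting rule, but the function itself is our illustration, not part of the study; a real guide-design tool would also scan the reverse complement and score off-target risk.

```python
def find_cas9_targets(dna, protospacer_len=20):
    """Return (position, protospacer) pairs for every NGG PAM site.

    Toy model of SpCas9 target selection: a cut is possible wherever
    a 20-nt protospacer is immediately followed by an N-G-G motif.
    Only the given strand is scanned.
    """
    targets = []
    for i in range(protospacer_len, len(dna) - 2):
        if dna[i + 1:i + 3] == "GG":  # PAM = N, G, G starting at position i
            targets.append((i - protospacer_len, dna[i - protospacer_len:i]))
    return targets
```

A guide RNA built from one of the returned protospacers would direct Cas9 to cut a few bases upstream of the PAM, which is how a coded gRNA can be aimed at a chosen region of viral DNA.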
To test whether the CRISPR-Cas9 system would be successful, an experiment was done in which a similar type of cell was targeted at its own LTR sites. In this cell, the GFP (green fluorescent protein) was the target. If the system was successful in targeting the encoded area, then GFP levels would decrease. The data showed a 64.5 ± 0.5% reduction in GFP levels, showing that the system was successful (4). This was then done with HIV strains, and after three transfections of CRISPR-Cas9 targeting, around 31% of cells had had the provirus excised from within. The results show that the CRISPR-Cas9 system has the potential to excise viruses, even ones similar to HIV (13; 14). Through these results, there is new hope for a cure for HIV. It has become even more important in the recent past to find a cure for HIV, since strains and mutations are now becoming very resistant to antiretroviral drugs, which are the most common method of controlling HIV activity within a patient. With the CRISPR-Cas9 system, along with other CRISPR systems, there is the potential to cure and fix almost any disease that uses DNA alteration as the method to spread itself. The CRISPR-Cas9 system may have its limits, though: it cannot fight off bacteria - only viruses - it cannot recognize diseased cells and destroy them, and it still does not have a high success rate for curing HIV, even though it has proven its ability and usefulness in this field (15). CRISPR and the CRISPR-Cas9 system were named the 2015 Breakthrough of the Year, and further research with CRISPR continues. In the near future, there is a very high chance of being able to stop the HIV epidemic, which would aid in saving the lives of millions through the use of CRISPR. D


References
1. World Health Organization (2016). HIV/AIDS. World Health Organization. http://www.who.int/features/qa/71/en/
2. Barrangou, R. (2015). Diversity of CRISPR-Cas immune systems and molecular machines. Genome Biology. https://www.semanticscholar.org/paper/Diversity-of-CRISPR-Cas-immune-systems-and-molecul-Barrangou/040e6dd556fd0cf082ad31b247ece945f86dd6f5
3. The Well Project. (2016). HIV Drugs and the HIV Life Cycle. The Well Project. http://www.thewellproject.org/hiv-information/hiv-drugs-and-hiv-lifecycle
4. Ebina, H., Misawa, N., Kanemura, Y., & Koyanagi, Y. (2013). Harnessing the CRISPR/Cas9 system to disrupt latent HIV-1 provirus. Scientific Reports, 3, Article 2510. https://www.omicsonline.org/references/harnessing-the-crisprcas9-system-todisrupt-latent-hiv1-provirus-447150.html
5. Horvath, P., & Barrangou, R. (2010). CRISPR/Cas, the immune system of bacteria and archaea. Science.
6. Bolotin, A., Quinquis, B., Sorokin, A., & Ehrlich, S. D. (2005). Clustered regularly interspaced short palindrome repeats (CRISPRs) have spacers of extrachromosomal origin. Microbiology, 151: 2551-2561.
7. Jinek, M., Chylinski, K., Fonfara, I., Hauer, M., Doudna, J. A., & Charpentier, E. (2012). A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity. Science, 337(6096): 816-821.
8. Grush, L. (2014). Editing the genes of superbugs to turn off antibiotic resistance. Popular Science. http://www.popsci.com/article/science/editing-genes-superbugs-turn-antibiotic-resistance#page-2
9. World Health Organization. (2017). HIV Drug Resistance Report. World Health Organization. http://www.portal.pmnch.org/hiv/pub/drugresistance/hivdrreport-2017/en/
10. World Health Organization. (2017). HIV/AIDS: HIV Drug Resistance. World Health Organization. http://www.who.int/hiv/topics/drugresistance/en/
11. Ossola, A. (2015). Scientists tweak T cells using CRISPR. Popular Science. http://www.popsci.com/scientists-modify-t-cells-using-crispr
12. Betton, M., & Bailey, C. (2013). HIV 101: The Stages of Infection. Elizabeth Glaser Pediatric AIDS Foundation. http://www.pedaids.org/blog/entry/hiv-101-the-stages-of-hiv-infection
13. Li, C., Griffin, G. E., Liu, Y., Jin, W., Shattock, R. J., Wang, P., ... & Hu, Q. (2015). Inhibition of HIV-1 infection of primary CD4 T-cells by gene editing of CCR5 using adenovirus-delivered CRISPR/Cas9. Journal of General Virology, 96(8): 2381-2393.
14. Ossola, A. (2016). "Genetic scissors" could completely eliminate HIV from cells. Popular Science. http://www.popsci.com/enzyme-can-snip-hiv-out-cell-dna
15. Zhang, F., Wen, Y., & Guo, X. (2014). CRISPR/Cas9 for genome editing: progress, implications and challenges. Human Molecular Genetics, 23: R40-R46.



A More Realistic Spatial Evolutionary Game Theory Model BY JEFFREY QIAO '20

Abstract
We considered a generalization of spatial evolutionary game theory in which individuals can choose between two different strategies and are located over a discrete set of sites. In this model, every individual interacts with all other individuals in a proportion dependent on their mutual distance, and each individual updates its strategy depending on its performance relative to its neighbors. We compared the results of our model to those of the existing literature on a simplified prisoner's dilemma. Cooperation is significantly inhibited under our model; our model is much more prone to random chance and allows for a far greater variety of final configurations during invasions by cooperators or defectors. Since our model bridges the divide between spatial evolutionary game theory and standard evolutionary game theory, differs so greatly from the pure spatial game, and provides a way to adjust the significance of location, it will be productive to use this model to analyze more situations.

1. Introduction
Evolutionary game theory is a field that broadly encapsulates the application of game theory to biology and evolution. In general, an evolutionary game consists of multiple individuals, each playing against all of the others. Each individual follows its own strategy and receives a payoff depending on a preset payoff matrix and the strategies that other individuals follow. Recently, a new subfield of evolutionary game theory, called spatial evolutionary game theory, has arisen. Spatial evolutionary game theory keeps many of the rules of standard evolutionary game theory with one caveat: individuals play against their neighbors instead of the general population as a whole. There are many variations on the rules of spatial evolutionary game theory, but in general they are as follows:
1. Players are placed into some sort of spatial structure.
2. Every player plays its own strategy against each of its neighbors in a two-person game.
3. Each player's total payoff is calculated.
4. Each player copies the strategy of its most successful neighbor.
5. Repeat steps 1-4.
Of the many results of spatial evolutionary game theory, the most prominent is that cooperation may thrive - even in games like the prisoner's dilemma, where defection always results in a higher payoff than cooperation. Unfortunately, traditional spatial evolutionary game theory usually stipulates that individuals interact with only their immediate neighbors. This approach has one main shortfall: people interact with more than just their neighbors. For example, consider disease dynamics. A sick person is around some people (family, close friends, etc.) very often. These are approximately the immediate neighbors in the standard model. But that same sick person also interacts with other people (acquaintances, people met walking

down the street, etc.) too! Because of the deficiencies of the standard model, a new model is needed. To address these deficiencies, the new model must:
1. Make sure every player interacts with every other player.
2. Allow for variation in the weight of interactions; interactions with some players should be weighted more than interactions with others.

2. A Better Spatial Evolutionary Game Theory Model
2.1 Calculating the Distance Between Two Individuals
In creating our model, we first decided to use a square grid. The benefits of a square grid are that it provides an easy way to visualize every player's neighbors and that it allows the weight of interactions between players to be based on the distance between them. With this model, it was very important to find a suitable method of calculating the distance between two individuals. We chose the Manhattan distance, which is the sum of the horizontal and vertical distances:

Manhattan Distance = |x1 - x2| + |y1 - y2|

We must also choose which cells are to be considered another cell's "neighborhood." There are two important neighborhoods used in spatial evolutionary game theory: the Von Neumann neighborhood and the Moore neighborhood. In a grid of square cells, the Von Neumann neighborhood of a cell consists of only the neighboring cells that share a side with the original cell, while the Moore neighborhood consists of the Von Neumann neighborhood

Figure 1: Illustration of the Manhattan distance. Distances from surrounding cells to the central green square are shown.

Figure 2: Difference between the Von Neumann neighborhood and the Moore neighborhood. The Von Neumann neighborhood (left) consists of only the four cells that are immediately adjacent. The Moore neighborhood (right) consists of the Von Neumann neighborhood plus the four diagonal cells.

plus the four cells that share a corner with the original cell. Since we are using the Manhattan distance, we chose to use the Von Neumann neighborhood to ensure that all of a player's neighbors are equidistant.

2.2 Converting Distance Into a Weight
After selecting a suitable way to calculate the distance between two players, it is necessary to come up with a formula to turn a distance into a weight. For simplicity's sake, we chose:

Weight = 1 / Distance²

2.3 Our Final Model
With a way to convert distance into weight, we now have everything we need to create an improved spatial evolutionary game theory model. The final rules for our model are as follows:
1. We create a 25 by 25 square grid and place a player in each cell. The boundaries are periodic, which eliminates any edge or corner effects. Self-interactions are excluded.
2. Every player plays against every other player. Every player's final payoff is equal to the sum of the payoffs from its weighted interactions with every other player.
3. Every player then copies the strategy of its most successful neighbor for the next round. If a player did better than all of its neighbors, the player keeps its own strategy for the next round.
4. Steps 2-3 are then repeated for as long as desired.

3. Comparing Our Model With Existing Literature
In this paper, we compared the results of our model with the results that Martin Nowak published in his 2006 book, Evolutionary Dynamics. In his book, Nowak used the Moore neighborhood and applied the standard spatial evolutionary game theory model to the simplified prisoner's dilemma game. In the simplified prisoner's dilemma, two cooperators each earn a payoff of 1, a defector earns the temptation payoff b > 1 against a cooperator, and all other payoffs are 0.

Nowak goes on to analyze the behavior of his model as he varies b, the temptation of defection, in three different scenarios: random initial strategies, cooperators invading defectors, and defectors invading cooperators. We applied our model to these same scenarios and then compared the two sets of results.

3.1 Random Initial Strategies
In this investigation, both Nowak and we randomly assigned half of the players to initially defect and the other half to initially cooperate.

3.1.1 Theoretical Analysis
Nowak was also able to conduct a significant amount of theoretical analysis on his model. In particular, he noted how one player's payoff depends solely on its eight neighbors. Consequently, he was able to easily identify "transition points" at which his model's behavior would change. He identified b = 1.1428, b = 1.166, b = 1.2, b = 1.25, b = 1.333, b = 1.4, b = 1.5, b = 1.6, b = 1.666, b = 1.75, and b = 1.8 as particularly important transition points. Under our model, identifying transition points is extremely difficult. Since every player interacts with 624 other players, each in a different proportion, it would be a foolhardy endeavor to even attempt to perform all the necessary calculations. Consequently, it is easiest to compare our model to Nowak's using empirical results.

3.1.2 Empirical Results
Nowak claims that under the purely spatial game:
1. For various values of b such that 1 ≤ b ≤ 1.55, cooperators dominate.
(a) b=1.10 leads to a few isolated, static defectors.
(b) b=1.15 leads to small sections of defectors pulsating at the ends.
(c) b=1.24 leads to lines of defectors being connected, but still little pulsation.
(d) b=1.35 leads to a pulsating network of defectors whose width oscillates between one and three.
(e) b=1.55 leads to an irregular but static network of defectors.
2. For b=1.65, there is an extremely dynamic defector majority where pockets of defection and cooperation constantly expand, collide into each other, break into pieces, and disappear.
3. For b=1.70, there are static islands of cooperation in a defector-dominated world.
The results under our model could not be more different from those of Nowak. First of all, for b ≥ 1.08, our model usually resulted in everybody defecting after three or so rounds. There were no pockets of cooperation, not even small and static ones; every player eventually defected.
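For concreteness, the model of Sections 2.1-2.3 can be written out in code. This is our illustrative reconstruction of the rules, not the authors' implementation; the payoff values follow the simplified prisoner's dilemma as presented in Nowak's Evolutionary Dynamics (mutual cooperation pays 1, a defector meeting a cooperator pays b, all other pairings pay 0), and should be treated as an assumption here since the payoff matrix itself did not reproduce in this text.

```python
import numpy as np

b = 1.35  # temptation to defect: the single parameter varied in Section 3

# Simplified prisoner's dilemma. Rows: focal player's strategy,
# columns: opponent's strategy (0 = cooperate, 1 = defect).
payoff = np.array([[1.0, 0.0],   # C vs C -> 1,  C vs D -> 0
                   [b,   0.0]])  # D vs C -> b,  D vs D -> 0

def step(strategies, payoff):
    """One synchronous round of the weighted spatial game (rules 2-3).

    strategies: an N x N integer array of 0s (cooperate) and 1s (defect)
    on a periodic grid. Every player plays every other player, with each
    interaction weighted by 1 / (toroidal Manhattan distance)**2; each
    player then copies its highest-scoring Von Neumann neighbor, keeping
    its own strategy if no neighbor strictly beats it.
    """
    n = strategies.shape[0]
    cells = [(r, c) for r in range(n) for c in range(n)]

    def dist(p, q):  # Manhattan distance with wrap-around boundaries
        dx = abs(p[0] - q[0]); dy = abs(p[1] - q[1])
        return min(dx, n - dx) + min(dy, n - dy)

    # Rule 2: weighted total payoff, self-interaction excluded.
    totals = np.zeros((n, n))
    for p in cells:
        for q in cells:
            if p != q:
                totals[p] += payoff[strategies[p], strategies[q]] / dist(p, q) ** 2

    # Rule 3: imitate the best Von Neumann neighbor (ties keep self).
    nxt = strategies.copy()
    for r, c in cells:
        best_s, best_t = strategies[r, c], totals[r, c]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = (r + dr) % n, (c + dc) % n
            if totals[nr, nc] > best_t:
                best_s, best_t = strategies[nr, nc], totals[nr, nc]
        nxt[r, c] = best_s
    return nxt
```

Note the scan over all ordered pairs of players: with N² players each interacting with all others, one update costs O(N⁴) operations, consistent with the computational burden discussed in the Weaknesses section.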
For extremely small values of b (b ≤ 1.03), the system results in isolated defectors living in a sea of cooperators. These defectors pulsate into blocks of nine defectors, but then shrink back down to a single defector. For slightly larger values of b, the defectors are no longer isolated. Instead, they coalesce into small, static groups. As b gets larger, the groups get more and more elongated. Eventually, when b hits around 1.08, this pattern breaks down completely and results in a world full of defectors. One important observation is that our model favors defection much more than Nowak's does: while Nowak reported that cooperators existed for values of b as high as 1.70, cooperators virtually disappeared under our model once b exceeded 1.08. Another notable difference is that Nowak's model is able to achieve a healthy balance of cooperators and defectors at b=1.65, where groups of cooperators and defectors constantly formed, expanded into each other, and broke apart. Ours cannot achieve such a balance: either the cooperators dominate or everybody defects; there is no in between. Given the randomness of the initial strategies in our model, cooperators cannot even form static groups as they could under Nowak's model. Furthermore, our model produces much less pulsation than Nowak's; in general, neither cooperators nor defectors pulsate under our model.

DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE

Lastly, our model is much more susceptible to random chance. The initial placement of defectors and cooperators has a significant impact on how the system will behave, even if b is held constant. Differences due to random chance have led to bizarre results. For example, in one of approximately eight simulations for b=1.15, cooperators actually managed to form a compact cluster within the first few rounds and then proceeded to dominate the whole system. Alternatively, a value of b as low as 1.06 has led to everybody defecting.

4. Discussion
4.1 Conclusions
4.1.1 From Random Initial Strategies
In general, defection is far more favored under our model than under Nowak's. While cooperation was common under Nowak's model for values of b as high as 1.65, cooperation was almost entirely annihilated under our model for any value of b higher than 1.08. Random placement of initial strategies has a significant effect on the outcome of our model.
4.1.2 From Invasions
While analyzing the theoretical results of invasions is difficult under our model, our empirical results reveal that our model allows for a far wider variety of configurations than Nowak's does. For example, when we had cooperators invade defectors, we found an extremely odd pattern of uniform expansion, followed by expansion at the sides only, followed by expansion at the diagonals only. As another example, our model results in stable fortresses of defectors when defectors invade cooperators - a feat

unheard of under Nowak's model. In general, invasions by cooperators are much more difficult under our model, while invasions by defectors are much easier. This effect further illustrates how defection is more favored under our model.

4.2 Strengths of Our Model
The central strength of our model is that it bridges the gap between standard spatial evolutionary game theory and mainstream evolutionary game theory. By adjusting the formula used to convert distances into weights, we can toggle the "spatialness," or how much location matters, in an evolutionary game theory model. For example, by making the weight drop off very quickly as distance increases, our model becomes equivalent to the standard spatial evolutionary game theory model of interacting with neighbors only. On the other hand, keeping the weight constant regardless of distance leads to the standard evolutionary game theory model, in which every player's payoff depends only on its strategy and the aggregate strategy of all other players. By picking a function in between those extremes, we can create a model that more closely reflects reality. Below is a chart that illustrates some examples of functions that can be used to convert distance into weight, as well as where they fall on the spatial-nonspatial spectrum. In the chart, "Dist" stands for the distance between two players.
Part of this central strength is reflected in how different the results of our model are from the results of Nowak's model. Nowak's model represents one extreme, and by eliminating the necessity of assuming that extreme, we obtained a wide variety of interesting results.
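The chart of distance-to-weight functions referred to above did not reproduce in this text. The functions below are illustrative stand-ins - only 1/Dist² is actually used in the paper - showing how the choice of function moves a model along the spatial-nonspatial spectrum:

```python
# Candidate distance-to-weight functions, ordered from fully spatial to
# fully non-spatial. "d" is the Manhattan distance between two players.
# Only 1/d**2 is used in this paper; the rest are illustrative examples.

def neighbors_only(d):   # standard spatial model: weight 1 at distance 1, else 0
    return 1.0 if d == 1 else 0.0

def steep(d):            # location matters a great deal
    return 1.0 / d ** 4

def this_paper(d):       # the weighting adopted in Section 2.2
    return 1.0 / d ** 2

def shallow(d):          # location matters only mildly
    return 1.0 / d

def uniform(d):          # standard (non-spatial) evolutionary game theory
    return 1.0
```

Swapping one of these functions into the model changes nothing else about the update rules, which is what makes spatialness an adjustable variable.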

Figure 3, left: Typical player behavior under our model for selected values of b. For extremely small values of b, the system eventually stabilizes into a cooperator-dominated world with isolated defectors. These isolated defectors pulsate into larger squares of defectors before shrinking again. However, when b gets just a little bit larger (to 1.04 or 1.05), the end result is a stable, non-pulsating system dominated by cooperators. Red cells represent defectors, while blue cells represent cooperators.
Figure 4, right: As b gets larger, the model's development from start to finish gets more interesting. Initially, cooperators quickly formed compact clusters and pushed into the defectors. Soon, the defectors were pushed into a long, irregular chain. After a long time, cooperators managed to break the chain of defectors into smaller chunks. This effect is shown for b=1.07. When b becomes larger than 1.08, everybody defects. Once again, blue represents cooperation and red represents defection.


4.2.1 Weaknesses of Our Model
In this paper, we used only a 25 by 25 grid of cells. 25 is not extremely large, so the simulations presented in this paper may suffer from insufficient grid size. This stems from our model's major weakness: computational cost. Because every player interacts with every other player and the weights of these interactions vary, it is necessary to perform O(N⁴) calculations at each timestep, where N is the number of players in each row of a square grid. This heavy computational load was why a 25 by 25 grid was used in the simulations; using a larger grid strained the capabilities of the computer used to perform them.

5. Opportunities for Further Research The model presented in this paper is a new model, and we applied it to only one particular game. Thus, there are two main paths of inquiry that will lead to productive further research. The first is breadth. Since we have only analyzed the simplified prisoner’s dilemma game, other games and scenarios have not been explored yet. The second is depth. We only used one particular formula to convert distances into weights, so varying the formula may lead to other interesting results. Improving the breadth of research is essential. There are so many other games in existence. There’s the snowdrift game, the hawk-dove game, the stag hunt game, the tragedy of the commons, and many

many more that this paper does not investigate. Applying our model to games where defection isn't always the optimal choice, such as the snowdrift game and the hawk-dove game, will be particularly useful, because defection was always the optimal choice in the game studied here. Moreover, the game we simulated in this paper had only two possible strategies. We have not yet analyzed games with more than two possible strategies, such as rock-paper-scissors. Furthermore, this paper does not consider stochastic updating rules. Since random chance significantly affects the results of our simulations, it would be interesting to see whether stochasticity further alters the results. Lastly, this paper only analyzed games where any given player must play the same strategy against every other player. For example, Player A can't cooperate with Player B but defect against Player C at the same time. If we allowed players to play one strategy against certain players and a different strategy against others, the results might differ significantly. Improving the depth of research is just as important, if not more important, than improving its breadth. The central strength of our model is that it relaxes one of the assumptions of the standard spatial evolutionary game theory model - the presumption that individuals interact with only their neighbors - and instead allows researchers to adjust the spatialness as a new variable. Evolutionary game theorists have expended much effort explaining the persistence of cooperation in animal species. One explanation (as evidenced by Nowak in Evolutionary Dynamics) is the spatial game, where interactions are localized. With the model in this paper, it is possible to investigate exactly how localized the interactions must be to allow cooperation to form and develop. D CONTACT JEFFREY QIAO AT JEFFREY.QIAO.20@DARTMOUTH.EDU

References
1. Nowak, Martin A. Evolutionary Dynamics: Exploring the Equations of Life. Belknap Press of Harvard Univ. Press, 2006.

Figure 5: A red cell represents a defector, while a blue cell represents a cooperator. Initially, at T=1, defectors dominate. But the cooperators formed a compact square by T=2 and were able to expand and fill nearly the whole grid as time went on.



The Symptoms of Child Physical Abuse by Frequency and Specificity BY YURI LEE, GARAM NOH, ALEXANDRA BARBER, KATHERINE GINNA, AND DENNIS CULLINANE

Abstract
The modern medical history of child physical abuse dates back to 1946 with John Caffey's publication, Multiple Fractures in the Long Bones of Infants Suffering from Chronic Subdural Hematoma. This now-classic paper was the first modern clinical recognition of child physical abuse and laid the cornerstone for all future clinical diagnoses, as well as prevention and prosecution legislation at the state and federal levels. Today, the primary literature is replete with descriptions and analyses of the symptoms of child abuse, but the typical focus is on individual symptoms, their frequencies, and how to diagnose them. Despite the obvious clinical and legal advantages, a quantitatively derived global set of child abuse symptoms based upon both frequency and specificity, and the constellations that result, has rarely been addressed or applied. The authors present a quantitative synthesis of the primary literature on child physical abuse, characterizing and ranking symptoms by both frequency and specificity, in hopes that it will serve as a useful tool for future diagnoses and interventions.

Introduction For children in the United States between the ages of one and four, the three leading causes of death in 2014, in descending order, were unintentional injuries, congenital anomalies, and homicide (Centers for Disease Control [CDC], 2014). Similarly, homicide was the second leading cause of death for children under one year old (CDC, 2014). Clearly, child abuse is perennially a leading cause of morbidity and mortality in young children and infants (Pierce & Bertocci, 2008), with more than four children dying in the United States every day as a consequence (NCANDS, 2014). Aside from physical trauma, child abuse also results in significant psychological and sociological impacts (Johnson et al., 1999), as well as personal and civic costs (Fang et al., 2012). Fang et al. estimated the lifetime cost per victim of nonfatal child maltreatment at $210,012 (2010 dollars), with a total lifetime economic burden resulting from all new cases of fatal and nonfatal child maltreatment of approximately $124 billion. Clearly, child abuse is a significant social problem, manifested in different forms - neglect, sexual, emotional, and physical - and yet its clinical recognition is relatively recent. Caffey's study garnered a great deal of attention in the medical community because of its revolutionary effort to recognize how patterns of injury, medical history, and parent behavior are indicative of child abuse. In doing so, he was the first to question the sanctity of the parent-child relationship. He was also the first to identify an abuse constellation: long bone fracture and subdural hematoma. A constellation arises when two or more symptoms of abuse are manifested in a potential victim and their collective suspicion index rises above the sum of the individual symptoms. In this way, Caffey presciently defined the landscape for all future abuse studies.
Despite Caffey's findings, the prevalence of child abuse was not widely acknowledged by the general population until Kempe et al. (1962) published their research on the battered-child syndrome,
inciting the first national movement towards addressing child abuse. This movement culminated in the passage of the 1974 Child Abuse Prevention and Treatment Act. Despite these advances, it was estimated that 1,580 children in the United States died from physical abuse in 2014 (NCANDS, 2014), and failure to recognize child maltreatment results in chronic exposure to high-risk environments where re-injury or death may occur, with traumatic brain injury (95%) and bruising (90%) the most common injuries (Pierce et al., 2017). Consequently, accurate diagnoses and timely interventions by healthcare professionals in cases of child abuse are essential. Further, childcare professionals such as daycare providers, school nurses, and even gym teachers can serve as the frontline of detection, and thus their recognition of symptoms would be critical in the early characterization of abuse. Thus, the characterization of child abuse symptoms by numerical frequency and specificity as they are referenced in the primary literature would be a useful tool for clinicians, as well as non-medical childcare personnel.

Frequency and Specificity For the purposes of this study, frequency is a simple quantitative representation of the number of appearances of a symptom in relation to non-accidental trauma (NAT). Symptoms of high specificity, in contrast, may not be as frequent in NAT, but are significantly rarer in cases of accidental trauma (AT); they are highly characteristic of NAT precisely because they rarely occur in AT. Highly frequent NAT symptoms, on the other hand, may also be frequent in AT. Scapular fractures, for example, are relatively rare in AT but highly specific for NAT, because biomechanically they are much less likely to occur in typical childhood accidents. In contrast, long bone fractures are frequent in NAT but not highly specific, as they occur frequently in AT as well. Although NAT occurs in children of all ages, it is typically those younger than three who are victims of abuse (Maguire, 2010). Younger victims of child abuse are less capable of resisting, escaping, or voicing accusations against their abusers, and have more gracile and mechanically susceptible skeletons (Cullinane and Einhorn, 2002). Thus, emergency department (ED) personnel need to be able to identify strongly suggestive constellations, or groupings of symptoms that form a clearly recognizable pattern of child abuse (Kempe et al., 1962; Klatt, 2009).

Constellations of Abuse Like constellations of stars in the night sky that together tell a story to the stargazer, symptoms of abuse can form clusters, or constellations, that tell a story to a caregiver. The concept of a constellation of child abuse incorporates multiple symptoms whose individual frequencies combine in a manner greater than the sum of the individual symptoms. For example, a highly indicative constellation for child abuse would include subdural hematoma, an undocumented healed fracture, defensive bruises on the forearms, and posterior rib fractures. Each of these injuries is itself a symptom
of child abuse, but when found in a collective constellation, their individual suspicion indices combine into a non-linear, highly positive likelihood of abuse. Additionally, barring extremely rare incidences, this constellation of symptoms does not suggest a single accidental traumatic event, such as tripping and falling forward onto the ground while running. In contrast, a non-indicative constellation would more likely include bilateral abrasions on the knees, a distal radial fracture, and bilateral abrasions on the palms of the hands. The latter example suggests a single traumatic event and thus AT symptoms (a fall forward to the ground). These two intentionally illustrative examples clearly reflect NAT and AT, respectively; the reality, however, is typically not as clear. Thus, in order to characterize child abuse constellations and evaluate abuse likelihoods, professionals need a definitive ordering of symptoms of non-accidental trauma in terms of both frequency and specificity. If medical professionals evaluate potential victims of child abuse with an understanding of symptom frequency, specificity, and the resulting constellations, they will likely have more confidence in those evaluations, and their diagnoses will be more accurate. Ultimately, young patients can then receive the medical care, legal support, and social care they need, even in the face of uncooperative guardians, who often try to deny the possibility of abuse out of fear or guilt, particularly when they are the perpetrators (Klatt, 2009). The objective of this global analysis of child abuse symptoms is to mine the primary literature of child abuse in order to generate hierarchical tables of abuse symptom frequency and specificity. It is hoped these tables might then be available to medical and other childcare personnel to aid in identifying abuse symptoms and constellations. Ultimately, trained medical clinicians are responsible for the diagnosis of child abuse or maltreatment.
However, equipping them and frontline childcare providers with more powerful diagnostic tools will only serve to increase child safety. Of course, protocols for the reporting of these symptoms on all levels need to be in place so that ultimately, cases end up in the hands of trained medical and child advocacy professionals.

Materials and Methods The focus of this analysis, congruent with analytical trends in the primary literature, is on children up to 5 years of age. Symptoms of NAT were assessed in terms of both their frequency and specificity. A total of 111 studies of physical symptoms of child abuse were examined for references to the frequency and/or specificity of any injuries associated with physical abuse. Studies were deemed appropriate for use based on three basic criteria: injuries of child abuse had to be the central focus of the article; authors had to make clear indications of trauma as either non-accidental (NAT) or accidental (AT); and there had to be at least one reference to the frequency (percentage of cases exhibiting a symptom) and/or specificity (how closely tied a symptom is to abuse alone) of a symptom of child abuse. If these three requirements were not met, the study was excluded. Each statistic regarding child abuse symptoms (e.g., retinal hemorrhage) was recorded on a Microsoft Excel® spreadsheet, with the physical symptoms listed by author(s) and publication. Two tables were created: one dedicated to symptom frequency and the other to symptom specificity. After the frequency data were compiled, a mean frequency was calculated for each injury associated with child abuse and an ordering was created, from most frequent to least frequent. For studies providing a percentage range for frequency (e.g., 10-25%), the midpoint of the range (17.5%) was used to represent that data. The final frequency value for a symptom was
calculated by averaging the frequency statistics provided by all authors into a final mean value of frequency for that symptom. It is important to note that individual studies were not weighted for study size (see Discussion). The data for the specificity chart were collected in a two-step process: all references to a symptom being "specific" for child abuse were recorded in the Microsoft Excel® spreadsheet on a numerical scale from 0 to 4 (lowest to highest specificity, respectively), and all language used by authors to qualify the specificity (mentions of high or low specificity of symptoms) was recorded next to the numerical values in the table. For example, a score of 4 was given to the term "highest specificity", whereas a 1 was awarded to a symptom regarded as "low specificity". The number of studies referring to a symptom as "specific" was taken to reflect the degree of consensus within the scientific and medical community. Average specificity values were calculated for each symptom based on this scale (Table 1).
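The tabulation described above can be sketched in a few lines. The snippet below is a hypothetical reconstruction, not the authors' spreadsheet; the example entries and the exact qualifier-to-score mapping are assumptions based on the examples given in the text:

```python
# Hypothetical reconstruction of the tabulation described above (the
# authors used a Microsoft Excel spreadsheet; the numbers and the
# qualifier-to-score mapping here are illustrative).

# Authors' qualitative language mapped onto the 0-4 specificity scale.
SPECIFICITY_SCALE = {
    "low specificity": 1,
    "specific": 2,
    "highly specific": 3,
    "highest specificity": 4,
}

def frequency_value(entry):
    """A study reports either a single percentage or a (low, high)
    range; ranges are represented by their midpoint."""
    if isinstance(entry, tuple):
        low, high = entry
        return (low + high) / 2
    return float(entry)

def mean_frequency(entries):
    """Final frequency value for a symptom: the unweighted mean of
    all studies' reported frequencies (studies are not weighted by
    size; see Discussion)."""
    values = [frequency_value(e) for e in entries]
    return sum(values) / len(values)

# e.g. three hypothetical studies reporting 10-25%, 20%, and 15%:
mean_frequency([(10, 25), 20, 15])   # (17.5 + 20 + 15) / 3 = 17.5
```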

For example, metaphyseal lesions were mentioned as being specific to child abuse in eleven different articles. One of these articles described them as a specific injury (2 points), with no further qualifier, while the other ten articles stated that they were a highly specific injury (3 points). As a result, metaphyseal lesions were ascribed a total of 32 points from these eleven articles, leading to an average specificity value of 2.9. This numerical specificity value was then compared to the values for other symptoms in order to formulate a hierarchy of specificity. Furthermore, if two injuries received the same average specificity value, the symptom supported by more studies was given priority over the other in the ordering. Regardless, they received appropriately identical scores.
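The averaging above is simple enough to verify directly. The snippet below (illustrative only; the authors worked in Excel) reproduces the metaphyseal-lesion calculation:

```python
# Reproducing the worked example above: eleven articles scored
# metaphyseal lesions, one as "specific" (2 points) and ten as
# "highly specific" (3 points).
scores = [2] + [3] * 10

total = sum(scores)                        # 32 points
average = round(total / len(scores), 1)    # 32 / 11 rounds to 2.9
```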

Results The data derived from all 111 studies reviewed were compiled into frequency and specificity tables, each divided into four levels of severity based on natural breaks in the data. The frequency dataset was divided into Levels 1-4 based on the percentage of abused children who suffered the injury; the specificity dataset was likewise divided into Levels 1-4 based on the specificity values calculated as shown in Table 1. The four levels of each table are designated by colors: blue represents Level 1 (the least frequent or least specific injuries), with intensity increasing through yellow and orange to red, representing increasing levels of frequency or specificity, respectively.
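The level boundaries above were identified from natural breaks in the data. As an illustrative sketch only (the authors do not specify an algorithm), a simple automatic analogue splits a sorted list of values at its three largest gaps:

```python
# Illustrative only: a simple "natural breaks" analogue that divides
# a list of distinct values into four levels (1 = low, 4 = high) by
# cutting the sorted list at its three largest gaps.

def four_levels(values):
    """Return a dict mapping each value to a level 1-4."""
    s = sorted(values)
    # (gap size, index of the value before the gap) for each gap
    gaps = [(s[i + 1] - s[i], i) for i in range(len(s) - 1)]
    # cut after the three widest gaps, in ascending index order
    cut_idx = sorted(i for _, i in sorted(gaps, reverse=True)[:3])
    levels, level, k = {}, 1, 0
    for i, v in enumerate(s):
        levels[v] = level
        if k < len(cut_idx) and i == cut_idx[k]:
            level += 1
            k += 1
    return levels
```

For example, `four_levels([5, 6, 20, 22, 50, 51, 74])` places 5 and 6 in Level 1, 20 and 22 in Level 2, 50 and 51 in Level 3, and 74 in Level 4, since the widest gaps fall between those clusters.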

Discussion Certain symptoms were found in an overwhelming proportion of abused children, including soft tissue injuries (74%), subdural hematoma (55.4%), multiple fractures (51.3%), transverse fractures (50.25%), and fractures in various stages of healing (50%). Interestingly, the highest
specificity score went to absent or evolving trauma (3.5), clearly indicating the suspicious nature of an injury without a corresponding trauma event, an injury whose stated cause changes over time, or an injury that continues to evolve over time. Kellogg (2007) and Kemp et al. (2008) confirm that these circumstances can be consistent with, or even highly indicative of, intentional trauma. This symptom is unique in focusing more on the interview and interaction with the patient or guardian, but it encompasses the concept of physical trauma, and so was included. Following this were retinal hemorrhages (3.3), scapular fractures (3.0), and sternal fractures (3.0), all less likely to result from common accidental trauma mechanics. A general note must be made about redundant symptoms, such as the various forms of single fracture. When possible, redundancies were incorporated into a common pool, with scores averaged in proportion to the number of times they were referenced. That said, because the terminology in the primary literature was sometimes very precise, there were occasionally symptoms that seemingly overlapped a great deal, such as single fracture and transverse fracture; these were treated separately because the more detailed term (transverse, for example) implies a specific mechanical environment. Some symptoms were highly ranked on one list but scored relatively low on the other. For example, a single fracture is highly frequent (1 in 2 abused children), putting it at Level 4 for frequency, but is unspecific to child abuse (a score of 1, at the bottom of Level 2). Transverse fractures are also very common (evident in just over 1 in 2 abused children), but relatively unspecific (Level 2 specificity). In contrast, there were also symptoms found to be relatively infrequent but highly specific. Although these injuries may not be encountered often in evaluating children for child abuse, when they do appear in a patient, they should raise suspicion.
For example, spinal injury is experienced by just over 6% of abused children, but rated Level 4 for specificity. Most importantly, however, some symptoms were ranked relatively highly in both frequency and specificity. These include soft tissue injuries (74% and a specificity score of 3), some form of subdural hematoma (55.4% and a specificity score of 3), and fractures (~50% and a specificity score of 3). The magnitude of the importance of soft tissue injuries in terms of specificity is highlighted by the work of McMahon et al. (1995), who reported more than 3 million cases of child abuse in a single year that included cutaneous symptoms, making them the most recognizable symptom of abuse. Jackson et al.
(2015) highlight the contribution of clinical inattention to soft tissue findings to delays in abuse diagnoses. Accordingly, soft tissue bruising, the result of capillary failure via blunt trauma (Huang et al., 2012; Tang et al., 2013; Grosse et al., 2014), in any of the locations indicated by the Frequency or Specificity Tables, should also be considered a significant finding. When considering the symptoms of child abuse, it must be recognized that some mechanical interaction necessarily occurred between a victim and a perpetrator, or some object utilized by the perpetrator. If multiple symptoms exist but do not share an anatomical location or even a body region (e.g., posterior ribs and a metaphyseal lesion), it is more likely that multiple trauma events occurred (as stated above, multiple events of the same trauma can also occur). This is the heart of the multiplicative power of a constellation of abuse: multiple injuries of even mid- or low-level suspicion, when combined, will increase the index of suspicion by a factor greater than their individual values, because they can be attributed to multiple traumas and an environment of continuing abuse. Thus, multiple mechanical insults create multiple symptoms, and they are generated because the maltreatment environments that cause them tend to involve chronic physical abuse. It is also important to note that weighting for study size was considered but not utilized: ultimately, the choice to study any one particular symptom is essentially a random process, and weighting larger studies would put those particular symptoms in the fore with no regard to symptom importance. This
is especially so with regard to specificity, but even when considering frequency, according any weight to larger studies necessarily biases the reader's sense of inter-symptom importance simply because an author or set of authors chose to investigate a particular symptom. Accordingly, the authors felt that the relationship between specificity and frequency, and the sensitivity of the two tables, was better served without weighting. Finally, the frequency and specificity tables generated in this study are meant to be useful diagnostic tools, and beyond those, the suggestive power of derived symptom constellations cannot be overstated. The combination of multiple symptoms in a highly indicative constellation can be more strongly suggestive of child abuse than the presence of any single, more noteworthy symptom, whether ranked by frequency or specificity. The ordering of child physical abuse symptoms by frequency and specificity can provide guidelines for medical personnel to more assuredly distinguish accidental from non-accidental trauma in children, and to construct diagnostic constellations. These results suggest that symptoms (and especially multiple symptoms) should be followed up with further investigation, the vigor of which should be proportional to their frequencies and specificities. Additionally, as indicated in the specificity scale, absent or evolving trauma is highly indicative of abuse, as is any symptom that is not biomechanically likely, motor-skills-appropriate, or temporally consistent with the stated etiology of injury presented by the child or caretaker. Finally, and as stated earlier, in 2014 homicide was the third leading cause of death for children ages 1-4, and for children under one year old, combined homicide numbers similarly rank it as the second most common cause of injury death (CDC, 2014). Considering the potential for missed diagnoses, these numbers may be even larger.
Today, 54 years since Kempe's battered-child syndrome and 70 years since Caffey's groundbreaking treatise, a clear public health crisis of child abuse still exists. Sadly, diagnostic errors (Anderst et al., 2016), systematic under-identification (Hoft and Haddad, 2017), and underreporting of cases of abuse (Flaherty et al., 2008A; Leeb and Fluke, 2015) remain prevalent. Alarmingly, this is true even when the level of suspicion is high and the decision maker is a trained medical professional (Flaherty et al., 2008B). Clearly, there exists an unnecessary gap in the tools available to medical, paramedical, and childcare professionals, with a need for more navigable and "user-friendly" screening tools (Hoft and Haddad, 2017). Thus, more work needs to be done in educating and equipping healthcare and childcare professionals so that they are better prepared and more confident in their roles characterizing, raising suspicion of, and committing to reporting abuse.

Acknowledgements The authors would like to thank Dr. Shevaun Doyle of the Department of Orthopaedic Surgery, Hospital for Special Surgery and the Weill Cornell Medical Center, and Dr. John Tis, Department of Orthopaedic Surgery, Johns Hopkins University School of Medicine, for their feedback and advice on early versions of this manuscript.

Funding This work was supported by the Jonathan H. Grenzke and Dr. Elizabeth A. Kensinger endowment. D



References
1. Centers for Disease Control and Prevention (2014). www.cdc.gov/injury/wisqars/leadingcauses.html; 2014. Accessed 03.01.17.
2. Pierce MC, Bertocci G (2008). Injury biomechanics and child abuse. Ann. Rev. Biomed. Eng. 2008;10:85-106.
3. National Child Abuse and Neglect Data System (NCANDS) (2014). Child Maltreatment. www.acf.hhs.gov/sites/default/files/cb/cm2014.pdf; 2014. Accessed 02.09.16.
4. Pierce MC, Kaczor K, Acker D, Webb T, Brenzel A, Lorenz DJ, Young A, Thompson R. History, injury, and psychosocial risk factor commonalities among cases of fatal and near-fatal physical child abuse. Child Abuse and Neglect. 2017;69:263-277.
5. Johnson JG, Cohen P, Brown J, Smailes EM, Bernstein DP. Childhood maltreatment increases risk for personality disorders during early adulthood. Arch. Gen. Psychiatry. 1999;56(7):600-6.
6. Fang X, Brown DS, Florence CS, Mercy JA. The economic burden of child maltreatment in the United States and implications for prevention. Child Abuse & Neglect. 2012;36(2):156-165.
7. Kempe CH, Silverman FN, Steele BF, Droegemueller W, Silver HK. The battered-child syndrome. Journal of the American Medical Association. 1962;181:17-24.
8. Every Child Matters Education Fund. 2014. www.acf.hhs.gov/sites/default/files/cb/cm2014.pdf.
9. Maguire S. Which injuries may indicate child abuse? Archives of Disease in Childhood: Education and Practice Edition. 2010;95(6):170-177.
10. Cullinane DM, Einhorn TA (2002). The biomechanics of bone. In: Bilezikian JP, Raisz LG, Rodan GA, eds. Principles of Bone Biology, 2nd ed. Academic Press; 2002:17-33.
11. Klatt J. Non-Accidental Trauma (NAT) in Pediatric Patients. ota.org/media/34582/P02_Abuse-Revision.ppt; 2009. Accessed 17.10.16.
12. Kellogg ND. Evaluation of suspected child physical abuse. Pediatrics. 2007;119:1232-1241.
13. Kemp AM, Dunstan F, Harrison S, Morris S, Mann M, Rolfe K, Datta S, Thomas DP, Siebert JR, Maguire S. Patterns of skeletal fractures in child abuse: a systematic review. BMJ (Clinical research ed.). 2008;337:a1518.
14. McMahon P, Grossman W, Gaffney M, Stanitski C. Soft tissue injury as an indication of child abuse. J. Bone Joint Surg. 1995;77(8):1179-83.
15. Jackson AM, Deye KP, Halley T, Hinds T, Rosenthal E, Shalaby-Rana E, Goldman EF. Curiosity and critical thinking: identifying child abuse before it is too late. Clin. Ped. 2015;54(1):54-61.
16. Huang L, Bakker N, Kim J, Marston J, Grosse I, Tis J, Cullinane DM. A multi-scale finite element model of bruising in soft tissues. Journal of Forensic Biomechanics. 2012;3:1-5.
17. Tang K, Sharpe W, Schulz A, Tam E, Grosse I, Tis J, Cullinane DM. Determining bruise etiology in muscle tissue using finite element analysis. Journal of Forensic Sciences. 2013;59(2):371-374.
18. Grosse IR, Huang L, Davis J, Cullinane DM. A multi-level hierarchical finite element model for capillary failure in soft tissue. Journal of Biomechanical Engineering. 2014;136(8).
19. Anderst J, Nielsen-Parker M, Moffatt M, Frazier T, Kennedy C. Using simulation to identify sources of medical diagnostic error in child physical abuse. Child Abuse Negl. 2016;52:62-9.
20. Hoft M, Haddad L. Screening children for abuse and neglect: a review of the literature. J. Forensic Nursing. 2017;13(1):26-33.
21. Flaherty EG, Sege RD, Hurley TP. Translating child abuse research into action. Pediatrics. 2008A;122(Suppl. 1):S1-5.
22. Leeb RT, Fluke JD. Child maltreatment surveillance: enumeration, monitoring, evaluation and insight. Health Promot. Chronic Dis. Prev. Can. 2015;35(8-9):138-40.
23. Flaherty EG, Sege RD, Griffith J, Price LL, Wasserman R, Slora E, Dhepyasuwan N, Harris D, Norton D, Angelilli ML, Abney D, Binns HJ. From suspicion of physical child abuse to reporting: primary care clinician decision-making. Pediatrics. 2008B;122(3):611-19.


Dartmouth Undergraduate Journal of Science Hinman Box 6225 Dartmouth College Hanover, NH 03755 dujs@dartmouth.edu

ARTICLE SUBMISSION FORM* Please scan and email this form with your research article to dujs@dartmouth.edu

Undergraduate Student: Name:_______________________________

Graduation Year: _________________

School _______________________________

Department _____________________

Research Article Title: ______________________________________________________________________________ ______________________________________________________________________________ Program which funded/supported the research ______________________________ I agree to give the Dartmouth Undergraduate Journal of Science the exclusive right to print this article: Signature: ____________________________________

Faculty Advisor: Name: ___________________________

Department _________________________

Please email dujs@dartmouth.edu with comments on the quality of the research presented and the quality of the product, as well as whether you endorse the student's article for publication. I permit this article to be published in the Dartmouth Undergraduate Journal of Science: Signature: ___________________________________

*The Dartmouth Undergraduate Journal of Science is copyrighted, and articles cannot be reproduced without the permission of the journal.

Visit our website at dujs.dartmouth.edu for more information




DUJS 18W  

The Winter 2018 Issue of the Dartmouth Undergraduate Journal of Science.
