

Dartmouth Undergraduate Journal of Science
FALL 2018
VOL. XXI, NO. 1

IDENTITY Scientific Inquiry and Discovery

The Genetics of Aging

Isle Royale’s Predators & Prey

A History of Neuroimaging

Learning From Mutations in C. elegans

Three-Species Dynamics in Nature

Visualizing the Brain: Then & Now


Note from the Editorial Board

Dear Readers,

The notion of identity is essential to science, technology, engineering, and math, whether one is using identities in linear algebra or considering the personality effects of neurosurgery. This issue examines current examples of the theme of identity through a variety of articles, including reviews of recent scientific research, exposés about healthcare developments, descriptions of important medical advances, and original scientific research articles.

Senior staff writer Samuel Reed begins this issue with a review article discussing health literacy research methods and identifying promising developments to improve health literacy in community settings. Anna Brinks follows with an exploration of advancements in neurosurgeries for mental disorders, their impacts on identity, and emerging treatments. Sahaj Shah considers the identity of neuroscience, in particular tracing the history of neuroimaging and its recent innovations. Liam Locke investigates theories of aging as well as specific genes associated with aging. Next, Megan Zhou discusses the importance of the mathematical identity and its applications in everyday life. Then, Samuel Neff examines the development of organoids for both research and personalized disease treatments. In an original research submission, Armin Tavakkoli and Jessica Kobsa explore prediction models in psychology, specifically using machine learning methods to test the replicability of previous studies on personality traits and a variety of outcomes. In another research paper, Anuraag Bukkuri considers an ecological model for three species on Isle Royale, using the model to explore population dynamics and the effect of climate on populations.

We would like to thank our writers, editors, staff members, and faculty advisors for making this issue of DUJS possible. Through the support of the Dartmouth community, we are able to maintain our success as an outstanding scientific outlet.

Sincerely,
Josephina Lin

The Dartmouth Undergraduate Journal of Science aims to increase scientific awareness within the Dartmouth community by providing an interdisciplinary forum for sharing undergraduate research and enriching scientific knowledge.

EDITORIAL BOARD
Editor-in-Chief: Josephina Lin ’19
President: Sam Reed ’19
Chief Copy Editor: Shivesh Shah ’18
Managing Editors: Paul Harary ’19, Kevin Chao ’19
Assistant Editors: Nishi Jain ’21, John Kerin ’20, Anders Limstrom ’20, Ted Northup ’21
Layout & Design Editor: Gunjan Gaur ’20
Webmaster and Web Development: Arvind Suresh ’19

STAFF WRITERS
Anna Brinks ’21, Ed Buckser ’21, Anuraag Bukkuri ’21, Hunter Gallant ’21, Chengzi Guo ’22, Paul Harary ’19, Nishi Jain ’21, Ryan Kilgallon ’21, Liam Locke ’21, Brenda Miao ’19, Samuel Neff ’21, Josephine Nguyen ’22, Armando Ortiz ’19, Sahaj Shah ’21, Sanjena Venkatesh ’21, Kristal Wong ’22, Megan Zhou ’21, Raniyan Zaman ’22

ADVISORY BOARD


Alex Barnett – Mathematics
David Bucci – Neuroscience
Marcelo Gleiser – Physics/Astronomy
David Glueck – Chemistry
Carey Heckman – Philosophy
David Kotz – Computer Science
Richard Kremer – History
William Lotko – Engineering
Jane Quigley – Kresge Physical Sciences Library
Roger Sloboda – Biological Sciences
Leslie Sonder – Earth Sciences


DUJS
Hinman Box 6225
Dartmouth College
Hanover, NH 03755
(603) 646-8714
http://dujs.dartmouth.edu
dujs@dartmouth.edu
Copyright © 2017 The Trustees of Dartmouth College

Dean of Faculty Associate Dean of Sciences Thayer School of Engineering Office of the Provost Office of the President Undergraduate Admissions R.C. Brayshaw & Company

Table of Contents

Neurosurgeries for Mental Disorders
Anna Brinks ’21

The Genetics of Aging: Lessons from Life-Extending Mutations in C. elegans
Liam Locke ’21

Regrowing Our Organs: The Development of Organoids for Medical Research and Disease Treatment
Sam Neff ’21

Analysis of the Three-Species Predator-Prey Dynamics with Focus on Isle Royale
Anuraag Bukkuri ’21

Epithelial Stem Cell Polarity and Connection to Tumorigenesis
Nishi Jain ’21

Identifying and Addressing Health Literacy Issues
Samuel Reed ’19

Neuroimaging: A Brief History
Sahaj Shah ’21

Using the Mathematical Identity
Megan Zhou ’21

ORIGINAL RESEARCH SUBMISSION
A Predictive Approach to Social Psychology: Using Machine Learning to Predict the Five Factor Personality Traits
Armin Tavakkoli ’20 and Jessica Kobsa ’20




Neurosurgeries for Mental Disorders
BY ANNA BRINKS ‘21

Introduction

Today, surgeons can build a new knee, fix poor vision in minutes with a laser, and crack open a chest to replace a diseased heart. Tangible ailments are routinely tackled: putting broken bones back together, targeting malignant tumors, and killing harmful bacteria. Some diseases, however, are not so obvious to the naked eye or even a high-resolution microscope; mental disorders such as depression, obsessive compulsive disorder, schizophrenia, and many others require a different approach and diagnosis. Made up of a network of billions of nerve cells, the brain is the most complex organ in the human body. It is critical for normal bodily function, for reasoning and solving problems, and for personality and identity. Advancements in understanding the brain have rendered previously intangible mental disorders increasingly concrete as they are linked to abnormal functioning of brain circuitry and atypical levels of important neurotransmitters. With these advancements, new surgical

techniques for treating mental disorders have emerged, moving surgery from its traditional applications to a new frontier. According to the World Health Organization, one in four people will be affected by a mental disorder at some point in their lives. Mental disorders are a leading health problem, and they currently affect around 450 million people worldwide. Depressive disorders are the fourth leading cause of the global disease burden, which measures the impact of a health problem based on a variety of factors such as mortality, influence on quality of life, and financial burden. By 2020, they are predicted to rise to second place behind ischemic heart disease (World Health Organization, 2001). Treatments for anxiety and depression in the United States cost an estimated 43.7 billion dollars in 1999, and this expense has only increased over time with growing populations and higher rates of diagnosis (Christmas et al. 2004). Due to the serious and widespread effects of mental disorders, it is imperative to search for effective treatments.

Figure 1: The Kennedy family. Rosemary Kennedy, victim of Walter Freeman's failed surgery, is on the far right. Source: Wikimedia Commons



“About 50,000 people received lobotomies in the United States between 1949 and 1952, and the famed surgeon Walter Freeman performed 3,500 lobotomies over the course of his career.”

The history of mental disorder treatment is fraught with tragedy and misunderstanding. For the past several hundred years, patients were locked in prisons or mental hospitals and subjected to inhumane, degrading conditions that frequently worsened their symptoms. Invasive surgical procedures were indiscriminately performed with often injurious results. The frontal lobotomy, a procedure that severs the nerve pathways connecting the frontal lobes of the brain, was used to treat schizophrenia, depression, insomnia, and a wide array of other mental disorders (Encyclopedia Britannica, 2018). About 50,000 people received lobotomies in the United States between 1949 and 1952, and the famed surgeon Walter Freeman performed 3,500 lobotomies over the course of his career. Freeman pioneered the transition from prefrontal lobotomies, which involved drilling holes into the patient’s head to access the frontal lobes, to the streamlined transorbital lobotomy which accessed the brain through the eye sockets, left no scars, and could be performed in under ten minutes (NPR, 2005). Side effects such as apathy, passivity, lack of initiative, worsened ability to concentrate, and decreased depth and intensity of emotional responses were not widely reported at the time. His most infamous failed surgery was the lobotomy he performed on Rosemary Kennedy, the oldest sister of President Kennedy, who was

left inert and unable to speak more than a few words. Beginning in the mid-1950s, the popularity of lobotomies decreased as effective antipsychotics and antidepressants became viable treatment options (Encyclopedia Britannica, 2018). This was coupled with backlash during the height of the antipsychiatry movement in the 1960s and 1970s. Public opinion about the “miracle cure” of lobotomies changed as popular novels such as One Flew Over the Cuckoo’s Nest and A Clockwork Orange portrayed psychosurgery as a punitive method of social control (Christmas et al. 2004). Adverse effects of the crude procedure became more widely known and no lobotomies have been performed in the United States since Freeman’s last transorbital lobotomy in 1967 (which ended in the patient’s death). Although progress has been made, people suffering from mental disorders continue to face challenges today. Luckily, many conditions can be successfully treated with a combination of drugs, therapy, and social support (World Health Organization, 2001). While limited access to these treatments and stigma around mental disorders still pose challenges to many people, they nevertheless provide a more viable and dependable option than the invasive surgeries of the past. Over 80 percent of schizophrenia patients are free of relapses after a year of antipsychotic drugs combined with family intervention, and up to 60 percent

Figure 2: Sagittal section through human brain, showing an anterior cingulotomy lesion (cross-hatched area). Source: Christmas, David



of people with depression can recover with antidepressant drugs and psychotherapy (World Health Organization, 2001). However, there is still a proportion of patients with severe symptoms that continue unabated despite different drug treatments and therapeutic support. An estimated 48 percent of patients will still suffer from “clinically relevant” obsessive-compulsive disorder after 30 years (Christmas et al. 2004). Obsessive-compulsive disorder is characterized by a combination of unwanted and intrusive thoughts with repetitive behaviors or “rituals” that are enormously disruptive to daily life (Balcioglu et al. 2018). Approximately 12 percent of patients suffering from anxiety or depression will experience chronic (lasting more than 24 months) and unremitting symptoms despite treatment (Christmas et al. 2004). Such treatment-resistant depression is a life-threatening disorder. There is an extremely high suicide risk among these patients, with 30 percent of them attempting suicide at least once in their lifetime: double the rate of patients with non-resistant depression and 15 times the rate of the general population (Bergfeld et al. 2018). For patients with chronic and severe mental disorders that resist all other treatments, modern neurosurgeries may provide the only option.

Neurosurgical Procedures

Modern advancements in surgical techniques have allowed for smaller anatomical targets and the placement of precise lesions within the brain. Four irreversible procedures are commonly employed for treatment-refractory psychiatric conditions, each of which places lesions in both hemispheres of the brain (Ovsiew and Frim, 1997). Each procedure is differentiated by its own unique target location. For example, an anterior cingulotomy has a target site 20-25 millimeters behind the anterior horn of the lateral ventricles, 7 millimeters from the midline of the brain. This focuses on the fibers of the cingulum bundle as it travels through the anterior cingulate gyrus (Christmas et al. 2004). The cingulum bundle consists of fibers that connect different parts of the limbic system (which is involved in motivation, emotion, learning, and memory). The cingulate gyrus processes emotions and helps regulate behavior. The other three common procedures are the anterior capsulotomy, the stereotactic subcaudate tractotomy, and the limbic leucotomy. These lesions can be created using several different techniques. One technique is MRI-guided stereotactic thermocoagulation, which involves the use of magnetic resonance imaging and stereotactic guidance (allowing for the precise targeting of a specific site in three-dimensional space) of a monopolar needle

followed by current at a high frequency that heats the tissue at the electrode tip (Ovsiew and Frim, 1997). Radiofrequency lesioning and gamma knife irradiations are also possible techniques, as well as using radioactive yttrium rods (Christmas et al. 2004). Radiofrequency lesioning uses heat to target nerves, while gamma knife irradiation and yttrium rods both use radiation to damage tissue. All of these techniques damage a highly localized area of tissue in an extremely controlled manner.

Results

For many reasons, assessing the outcomes of neurosurgeries for mental disorders poses methodological challenges. Only a small sample of procedures has been performed, and it is very difficult to conduct control procedures. Additionally, the lack of uniform classification systems makes it difficult to empirically measure improvement and compare results among cases (Christmas et al. 2004). While additional investigation would strengthen their case, neurosurgeries for mental disorders have still shown promise in effectively ameliorating conditions that have resisted all other methods of treatment. In reports on anterior cingulotomy from the Massachusetts General Hospital, about a third of patients, mostly with obsessive-compulsive disorder and often with concurrent major depression, were considered improved, and another quarter possibly improved (Ovsiew and Frim, 1997). Anterior capsulotomy showed an approximate 50% success rate with both obsessive-compulsive disorder and major affective disorder (Christmas et al. 2004). Additionally, about a third of patients with depression or anxiety disorders, including obsessive-compulsive disorder, were found to benefit from stereotactic subcaudate tractotomy (Ovsiew and Frim, 1997). Limbic leucotomy showed a 78% success rate with major affective disorder and a 61% success rate with obsessive-compulsive disorder (Christmas et al. 2004). While further research is needed on the optimal placement of lesions for each disease, overall, these procedures are a promising advancement in the treatment of mental disorders.

Consequences: Loss of Identity

“Modern advancements in surgical techniques have allowed for smaller anatomical targets and the placement of precise lesions within the brain.”

“While additional investigations would strengthen their case, neurosurgeries for mental disorders have still shown promise in effectively ameliorating conditions that have resisted all other methods of treatment.”

Headache, nausea, confusion, and incontinence are all possible consequences of these procedures, although they are usually short-term. Insomnia, apathy, seizures, and weight gain are also possible side effects. Changes in memory retention and concentration problems are additional concerns, although these were much more prevalent in the older, more aggressive surgeries (Christmas et al. 2004). More complicated are the possible long-term personality changes that neurosurgery might cause.

Figure 3: The metal poles shown above are DBS electrodes placed into the most common target structure for Parkinson's Disease, the subthalamic nucleus (orange). They connect to a pacemaker that provides the electrical current. Source: Wikimedia Commons

Freeman admitted that with his prefrontal lobotomies, “every patient probably loses something by this operation, some spontaneity, some sparkle, some flavor of the personality” (Ovsiew and Frim, 1997). While the refined nature of modern surgeries reduces this risk, it remains a concern. This “sparkle” can be extremely difficult to quantify. Changes may occur from the return of premorbid personality function and the alleviation of the symptoms of the mental disorder; it is challenging to differentiate these changes from personality changes caused by the procedure itself. Ultimately, these risks must be carefully weighed against the extent to which the mental disorder impairs the patient’s life.

Emerging Advancements: Reversible Deep Brain Stimulation

The main disadvantage of modern ablative treatments is that they are irreversible. Deep brain stimulation (DBS), however, is a reversible treatment option that can target a region of the brain as small as half a centimeter (Skandalakis et al. 2018). DBS uses chronic, high-frequency

electrical stimulation of a specific network in the brain (Foley et al. 2018). It is most commonly used for motor disorders such as Parkinson’s disease (a progressive nervous system disorder), essential tremor, and dystonia (which is characterized by muscle contractions causing abnormal, often repetitive, movements or postures) (Mandarelli et al. 2018). However, its applications for mental disorders such as OCD, major depressive disorder, and schizophrenia are increasingly being explored. For OCD, DBS has shown a response rate of up to 60 percent (Mandarelli et al. 2018). Auditory hallucinations (AHs) are one of the most debilitating symptoms of schizophrenia and are therefore an important target for DBS. AHs are associated with destructive behaviors such as assault, homicide, and suicide (Taylor et al. 2017). Transcranial magnetic stimulation, a non-invasive form of brain stimulation, can be used to determine whether DBS, an invasive procedure, will be effective in a specific patient. DBS can also be individualized with specific stimulation parameters. Closed-loop stimulation would be the next step forward in DBS treatment; in the case of AHs, it would recognize resting brain activity in the prefrontal

cortex that portended AH perception and subsequently trigger electrical stimulation before the AH could begin (Taylor et al. 2017). The specific targets of DBS are also constantly being refined, and new sites are being investigated. Different areas of the brain are targeted for specific conditions and symptoms. For example, the viability of targeting the habenula in treatment-refractory depression was explored in a recent study. The habenula is a small structure in the pineal region that has abundant connections to the prefrontal cortex, basal forebrain, limbic system, and other areas. It is activated in the stress response and is connected to regions of the brain that exert direct effects on emotion and behavior. Furthermore, it is involved in reward pathways and in learning, motivation, error detection, and the perception of pain. Habenular function also correlates with the levels of important neurotransmitters such as dopamine and serotonin (Skandalakis et al. 2018). Although DBS of the habenula has not been performed on a large number of patients, two patients showed significant improvement after the procedure. One was a 64-year-old woman who had suffered from treatment-refractory major depressive disorder since the age of 18 and experienced complete remission after four months of high-frequency stimulation.

Conclusion

Each surgical treatment for a mental disorder can be tweaked and customized in dozens of different ways, and additional research is necessary to optimize outcomes and provide a personalized approach for each unique case. Neurosurgery has progressed far beyond the crude lobotomies of the 1940s, and it continues to improve as procedures are refined. The immense complexity of neural networks and of the mental disorders themselves poses a challenge as well as an opportunity for treatment options. Curing major depressive disorder or obsessive-compulsive disorder using a lesion or electrical stimulation is a revolutionary method that, if effective, could significantly improve the quality of life and functionality of patients. While the time of daring and indiscriminate surgery has passed, judiciousness and scientific rigor in exploring neurosurgeries for mental disorders may offer new hope for otherwise untreatable patients. D


CONTACT ANNA BRINKS AT ANNA.L.BRINKS.21@DARTMOUTH.EDU

References

1. Balcioglu, Yasin Hasan, and Fatih Oncu. "Psychosurgery and Other Invasive Approaches in Treatment-Refractory Obsessive-Compulsive Disorder: A Brief Overview through a Case." Dusunen Adam: Journal of Psychiatry & Neurological Sciences, vol. 31, no. 2, June 2018, pp. 225-227. EBSCOhost, doi:10.5350/DAJPN2018310213.
2. Bergfeld, Isidoor O., et al. "Treatment-Resistant Depression and Suicidality." Journal of Affective Disorders, vol. 235, Aug. 2018, pp. 362-367. EBSCOhost, doi:10.1016/j.jad.2018.04.016.
3. Christmas, David, et al. "Neurosurgery for Mental Disorder." Advances in Psychiatric Treatment, vol. 10, no. 3, 2004, pp. 189-199, doi:10.1192/apt.10.3.189.
4. Foley, Jennifer A., et al. "Standardised Neuropsychological Assessment for the Selection of Patients Undergoing DBS for Parkinson’s Disease." Parkinson's Disease (2042-0080), 3 June 2018, pp. 1-13. EBSCOhost, doi:10.1155/2018/4328371.
5. "Frequently Asked Questions About Lobotomies." NPR, 16 Nov. 2005, www.npr.org/templates/story/story.php?storyId=5014565.
6. Mandarelli, Gabriele, et al. "Informed Consent Decision-Making in Deep Brain Stimulation." Brain Sciences (2076-3425), vol. 8, no. 5, May 2018. EBSCOhost, doi:10.3390/brainsci8050084.
7. "Mental Disorders Affect One in Four People." World Health Report, World Health Organization, 2001, www.who.int/whr/2001/media_centre/press_release/en/.
8. Ovsiew, Fred, and David M. Frim. "Neurosurgery for Psychiatric Disorders." Journal of Neurology, Neurosurgery & Psychiatry, vol. 63, 1997, pp. 701-705.
9. Skandalakis, Georgios P., et al. "The Habenula in Neurosurgery for Depression: A Convergence of Functional Neuroanatomy, Psychiatry and Imaging." Brain Research, vol. 1694, Sept. 2018, pp. 13-18. EBSCOhost, doi:10.1016/j.brainres.2018.04.041.
10. Taylor, Joseph J., et al. "Targeted Neural Network Interventions for Auditory Hallucinations: Can TMS Inform DBS?" Schizophrenia Research, vol. 195, May 2018, pp. 455-462. EBSCOhost, doi:10.1016/j.schres.2017.09.020.
11. The Editors of Encyclopædia Britannica. "Lobotomy." Encyclopædia Britannica, Encyclopædia Britannica, Inc., 6 Apr. 2018, www.britannica.com/science/lobotomy.

“While the refined nature of modern surgeries reduces this risk, [personality changes] still remain a concern... Ultimately, these risks must be carefully weighed against the extent to which the mental disorder impairs the patient's life.”



The Genetics of Aging: Lessons from Life-Extending Mutations in C. elegans
BY LIAM LOCKE '21

Figure 1: Cathepsin B, a lysosomal cysteine protease known to be upregulated in many cancer cells. Source: Wikimedia Commons (Credit: Jawahar Swaminathan and MSD Staff, European Bioinformatics Institute)

Introduction

The term aging refers to the deterioration of an organism’s cellular function over its lifetime. Scientists working with the nematode Caenorhabditis elegans (C. elegans) have generated mutations that allow the worm to live two to five times longer than the wild-type strain. The first of these mutations to be discovered was in the gene daf-2, which codes for the insulin-like growth factor-1 receptor protein (IGF-1R). Mutations in daf-2 cause a two-fold increase in lifespan. C. elegans double mutants in daf-2 and rsks-1 (ribosomal protein S6 kinase beta) experience a five-fold increase in lifespan. These discoveries suggest that aging may be controlled by specific genetic programs that may be conserved across species.

Theories of Aging

The genetic material of an organism is damaged by numerous environmental factors, but it seems that one of the greatest sources of damage comes from within cells. Byproducts of cellular respiration known as reactive oxygen species (ROS) cause significant damage to both mitochondrial and chromosomal DNA. Complexes I and III of the electron transport chain produce superoxide radicals (O2−), which cause downstream oxidation of nucleic acids. As a

consequence, the rate of ROS production and the concentration of antioxidants in cells are strongly linked with the longevity of an organism (1). In addition to the production of metabolic waste, longevity is tightly coupled with the efficiency with which cells repair damage to DNA. A family of genetic diseases known as progerias is characterized by loss-of-function mutations in DNA helicases, DNA repair proteins, and cell cycle regulators. Individuals with progerias experience early incidence of wrinkles, alopecia (hair loss), muscular atrophy, cardiovascular disease, organ failure, and cancer (2). Other theories propose that aging is controlled by a specific genetic program. Some evidence to support this claim is the correlation between telomere length and longevity. Telomeres are DNA-protein structures that act as a protective cap at the ends of chromosomes. Enzymes called telomerases act to repair and maintain telomeres, but telomeres are still shortened after repeated mitotic divisions. Telomeres eventually lose their protective qualities, after which damaging agents begin to chew away at chromosomal DNA. It seems that telomeres may act as biological clocks, in which the length of telomeres and the activity of telomerases determine how long an organism remains resistant to damage (1).

Figure 2: Complexes I and III of the electron transport chain produce the reactive oxygen species O2− and H2O2. These species can damage mtDNA at their site of production or migrate to the nucleus and damage nuclear DNA. Source: Wikimedia Commons (Credit: Xinyuan Li)

Although telomere length varies with longevity in humans and most mammals, telomere length does not affect the lifespan of the nematode C. elegans (3). Therefore, telomere length does not constitute an evolutionarily conserved mechanism for the control of longevity in all organisms. A promising candidate for an evolutionarily conserved “longevity module” is the Insulin/IGF-1 signaling pathway. Mutations that reduce the activity of the insulin-like growth factor receptor DAF-2 result in a two-fold extension of the lifespan of C. elegans. The Insulin/IGF-1 pathway modulates cellular respiration and growth, so the life-extending phenotype of daf-2 mutants may be due to reduced production of ROS. However, the daf-2 phenotype is reversed by mutations in the transcription factor gene daf-16, a downstream target of Insulin/IGF-1 signaling (4). This implies that the life-extending effects of daf-2 mutations must act at least in part by regulating transcription of genes bound by the DAF-16 transcription factor.

C. elegans as a Model Organism for Aging

C. elegans has been used as a model organism for aging since as early as 1977. Preliminary studies found that the worm’s lifespan could be modulated by changes in temperature and food supply (5). The most common wild-type strain, known as N2, was isolated from mushroom compost near Bristol, England in 1966 by W.L. Nicholas and subsequently given to the geneticist Sydney Brenner for experimentation. C. elegans is either a hermaphrodite (XX) or a male (XO). Males occur at very low frequency due to non-disjunction of the X chromosome during gamete production and are useful in performing crosses between two

genetic populations. The hermaphrodite worm is most useful in maintaining particular genetic lines and analyzing specific mutations. C. elegans hermaphrodites have an average lifespan of 14 days at 25°C and an average brood size of 300 progeny. All eggs are laid during the first 3-4 days of reproductive maturity, after which the worm is composed entirely of somatic cells. In harsh environmental conditions such as caloric restriction, young C. elegans enter a state of diapause known as the dauer formation. After preferable conditions are restored, the worm develops into an adult. The time spent in the dauer formation is independent of the worm’s adult lifespan, so caloric restriction during development effectively extends the total lifespan of C. elegans (6).

There are several features of C. elegans that make it an ideal model organism for studying aging. C. elegans has a mechanism similar to plasmid transcription in yeast, in which circular, double-stranded DNA structures called arrays can be transcribed in the cytoplasm. Researchers grow arrays in populations of E. coli cells (the primary food source of C. elegans) and seed nutrient growth media with the transfected

“Although telomere length varies with longevity in humans and most mammals, telomere length does not affect the lifespan of C. elegans. Therefore, telomere length does not constitute an evolutionarily conserved mechanism for the control of longevity in all organisms.”

Figure 3: Telomeres shorten after repeated mitotic divisions making chromosomes more susceptible to damage. Source: Wikimedia Commons (Credit: Azmistowski17)



“The life-extending effects of daf-2 are mediated by the downstream targets age-1 and daf-16. Mutations in age-1 extend the lifespan of C. elegans by increasing the activity of the DAF-16 transcription factor.”


bacteria. After ingestion of these cells, C. elegans are screened for specific markers that are transcribed from the array. Another advantage of C. elegans as a model organism is that microinjection of an array template with a site-specific CRISPR/Cas9 can result in dozens of progeny carrying the array or an integrated transgene. Because C. elegans is a hermaphrodite, the transgene is maintained in a population after the transgenic animal is isolated. Finally, the genes of C. elegans have homologues in other animals, including humans, so understanding aging in a model organism like C. elegans can help researchers understand and possibly treat age-related phenotypes in humans.

Mutations in daf-2 Affect the Insulin/IGF-1 Signaling Pathway

The first life-extending mutation in daf-2 was discovered by Cynthia Kenyon in 1993. Daf-2 was known to be an essential gene in the hormonal initiation of the dauer stage in C. elegans, making it a promising target for age-related studies. Kenyon named daf-2 the “grim reaper gene” because its normal function promotes cell senescence (7). The roles of daf-2 in modulating dauer formation, reproduction, and aging are independent of one another, and these effects are divided among different stages of development. In the absence of growth hormone (GH) during the first larval stage (L1), daf-2 is responsible for signaling dauer formation. During the fourth larval stage (L4), daf-2 signaling initiates the transition to sexual maturity. Once C. elegans reaches adulthood, daf-2 acts exclusively on the aging process. The insulin-like growth factor receptor DAF-2/IGF-1 derives its name from the 35% amino acid identity it shares with the human insulin receptor (IR) and from its affinity for GH as a ligand during development. In the adult worm, IGF-1 is bound by insulin-like peptides. When ligands bind IGF-1, they activate a phosphoinositide 3-kinase (PI3K) coded by age-1, which in turn leads to phosphorylation of a FOXO transcription factor coded by daf-16 (Figure 5). The DAF-16/FOXO transcription factor is a member of the hepatocyte nuclear factor 3 (HNF-3)/forkhead family, which has many roles in embryogenesis, differentiation, and tumorigenesis (8). Other HNF-3/forkhead transcription factors activated by insulin have been discovered in humans, making this family of transcription factors an important target for future age-related research in humans.

The life-extending effects of daf-2 mutations are mediated by the downstream targets age-1 and daf-16. Mutations in age-1 extend the lifespan of C. elegans by increasing the activity of the DAF-16 transcription factor. Mutating age-1 in daf-2 mutants does not produce any additional life-extending effects, which is strong evidence that these genes act in the same signaling pathway. Life extension in either daf-2 or age-1 mutants is dependent on the activity of daf-16 and is reversed by daf-16 mutations (4).

Target Genes of the daf-16 Transcription Factor

Daf-16 codes for the FOXO transcription factor, a member of the HNF-3/forkhead family characterized by a winged-helix structure that mediates DNA binding (8). Specific serine/threonine residues of the FOXO transcription factor are phosphorylated downstream of PI3K in response to ligand binding at the IGF-1 receptor. This phosphorylation excludes FOXO from the nucleus and prevents transcription of its target sequences (9). Mutations in daf-2 reduce ligand-receptor binding at IGF-1, which reduces PI3K activity and allows FOXO to accumulate in the nucleus, where it activates its targets. This mechanism allows daf-16 to control a wide variety of targets in response to changing environments.

DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE

A study conducted in 2003 used RNA interference (RNAi) and DNA microarray analysis to identify genes that varied significantly between wild-type and daf-2 mutant populations and were therefore targets of the FOXO transcription factor coded by daf-16. Two noteworthy genes were sod-2 and mtl-1, both of which code for proteins involved in the oxidative stress response. Sod-2 codes for mitochondrial superoxide dismutase, an enzyme that catalyzes the conversion of the superoxide radical (O2−) produced in cellular respiration into hydrogen peroxide (H2O2) and molecular oxygen (O2). Mtl-1 codes for metallothionein-1, which contains a

cysteine-rich region that binds heavy metals but also captures superoxide and hydroxyl radicals (•OH). The upregulation of genes responsible for neutralizing reactive oxygen species (ROS) in daf-2 mutants supports the theory that aging is a consequence of ROS produced during cellular respiration. Other gene families identified included antimicrobial genes and heat shock factors (10). Another study, conducted in 2005, analyzed gene families that varied between wild-type worms and daf-2 mutants using a method known as serial analysis of gene expression (SAGE). Many of the genes identified are involved in lipid, protein, and energy metabolism, the oxidative


FALL 2018


stress response, and cell structure. The study also characterized the health-span of daf-2 mutants (the physical condition of long-lived animals), finding that a 10-day-old daf-2 mutant is the same biological age as a 6-day-old wild-type worm. This study also demonstrated that mutations in daf-2 result in hypometabolism, which reduces the production of ROS (11).
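The reaction catalyzed by the mitochondrial superoxide dismutase encoded by sod-2, discussed above, is the standard dismutation of superoxide:

```latex
% Dismutation of superoxide catalyzed by superoxide dismutase (SOD):
% two superoxide radicals and two protons yield hydrogen peroxide and oxygen.
2\,\mathrm{O_2^{\bullet-}} \;+\; 2\,\mathrm{H^+}
  \;\xrightarrow{\text{SOD}}\;
  \mathrm{H_2O_2} \;+\; \mathrm{O_2}
```

The hydrogen peroxide produced is itself reactive, and in the cell it is subsequently broken down by catalase or peroxidases into water and oxygen.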

“There must be a synergistic effect that heightens the life-extending qualities of both mutations which means that [daf-2 and rsks-1] do not act in parallel to modulate lifespan, but in tandem.”

“Regulation of lifespan by environmental cues is likely an adaptive response which favors growth and reproduction under favorable conditions and survival under harsh conditions.”

Daf-2/Rsks-1 Double Mutants Experience a Five-Fold Increase in Lifespan

The gene rsks-1 codes for an S6 kinase that acts in the target of rapamycin (TOR) pathway. The TOR pathway modulates the activity of the SKN-1 transcription factor, which transcribes genes responsible for reproduction and for autophagy, the natural regeneration of cells that reduces susceptibility to disease. The TOR pathway has also been suggested as a potential longevity module. Mutations in rsks-1 have been shown to increase lifespan by approximately 20%. The five-fold increase in lifespan observed in daf-2/rsks-1 double mutants greatly surpasses the combined effects of daf-2 and rsks-1 single mutants. There must be a synergistic effect that heightens the life-extending qualities of both mutations which means that these genes do not act in parallel to modulate lifespan, but in tandem. Interestingly, past studies have used RNAi to reduce the activity of rsks-1 in adult daf-2 mutants and observed only an additional 24% increase in lifespan. It is therefore likely that the synergy in the double mutant results from a mechanism acting during development. The synergistic effects of daf-2/rsks-1 double mutants were disrupted by RNAi of aak-2, which codes for an adenosine monophosphate-activated protein kinase (AMPK). AMPK has key roles in metabolism: under starved conditions, it promotes catabolism and ATP production. AMPK acts in a cell non-autonomous manner, suggesting that the synergy between these two mutations is mediated by endocrine signaling. The life-extending phenotype was not reversed by RNAi of skn-1, but it was reversed by RNAi of daf-16. Additionally, daf-2/rsks-1 double mutants showed reduced fertility, suggesting that hypometabolism negatively regulates reproduction (12).
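To see why the double-mutant effect is called synergistic rather than additive, it helps to compare the observed extension with a rough back-of-the-envelope expectation built from the figures cited above (assuming the single-mutant effects would combine multiplicatively if the genes acted independently):

```latex
% daf-2 single mutants live roughly twice as long as wild type (7);
% rsks-1 single mutants live roughly 20% longer.
\text{expected fold-change} \;\approx\; 2.0 \times 1.2 \;=\; 2.4
\qquad
\text{observed fold-change} \;\approx\; 5.0 \;\gg\; 2.4
```

The observed five-fold extension is roughly double even this multiplicative expectation, which is why the interaction is attributed to a shared mechanism acting during development rather than to two parallel pathways.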

Caffeine and Metformin Extend Lifespan in C. elegans

If aging is in fact controlled by a longevity module, as the insulin/IGF-1 and TOR pathways suggest, then drugs that act on these pathways should be able to impact lifespan.

Studies of caffeine and metformin have both demonstrated life-prolonging effects in C. elegans that are dependent on these pathways. Caffeine (1,3,7-trimethylxanthine) mimics caloric restriction by inhibiting the activity of IGF-1 and localizing daf-16/FOXO to the nucleus. The maximum extension in lifespan is achieved with low concentrations (10 µg/ml) of caffeine and is dependent on the presence of wild-type daf-2. There is evidence that caffeine might act directly on age-1/PI3K to localize daf-16 to the nucleus. Additionally, the particular form of caffeine determines its life-extending potential: in C. elegans, 1-methyl caffeine produced a 14.75% increase in lifespan, whereas 3-methyl caffeine produced only a 5.54% increase (13). Metformin is the most widely prescribed drug for the treatment of type II diabetes. It acts as a dietary restriction mimetic and reduces energy-consuming processes in part by activating AMPK, which acts in the TOR pathway. Additionally, metformin's life-extending qualities are dependent on the activities of aak-1 and skn-1, which promote resistance to biguanide toxicity (14).

An Evolutionary Explanation for Longevity Modules

The survival of a species depends on controlled mechanisms of reproduction and aging. Evolutionary theories propose that longevity mutations are associated with reproductive tradeoffs. This is partly corroborated by the reduced fertility seen in daf-2 single mutants and daf-2/rsks-1 double mutants. However, this does not seem to hold across phyla, where large variations in lifespan and widely variable reproductive cycles are observed. Future studies may focus on the viability of this claim when applied to individuals of a particular species. Regulation of lifespan by environmental cues is likely an adaptive response which favors growth and reproduction under favorable conditions and survival under harsh conditions. This adaptive response is clearly evident in dauer formation in C. elegans: if food is scarce, it is beneficial to enter a state of reduced energy consumption and delay reproduction until conditions become favorable. Mutations that increase lifespan may initiate such a survival program even in favorable conditions (15). Aging may also be an evolutionary mechanism that increases genetic diversity by promoting reproduction and early death during times when there is less competition for resources. The discovery of longevity modules in humans will be an important area of research for medicine moving forward. Understanding the aging process in model organisms gives researchers a platform to begin investigating the

reasons for our own deterioration, but science remains far from a five-fold extension in human lifespan.

CONTACT LIAM LOCKE AT LIAM.N.LOCKE.21@DARTMOUTH.EDU

References
1. Sergiev, P. V., Dontsova, O. A., & Berezkin, G. V. (2015). Theories of Aging: An Ever-Evolving Field. Acta Naturae, 7(1(24)).
2. Burtner, C. R., & Kennedy, B. K. (2010). Progeria syndromes and ageing: what is the connection? Nature Reviews Molecular Cell Biology, 11(8), 567.
3. Raices, M., Maruyama, H., Dillin, A., & Karlseder, J. (2005). Uncoupling of Longevity and Telomere Length in C. elegans. PLoS Genetics, 1(3), e30.
4. Kenyon, C. (2005). The plasticity of aging: insights from long-lived mutants. Cell, 120(4), 449-460.
5. Klass, M. (1977). Aging in the nematode Caenorhabditis elegans: major biological and environmental factors influencing life span. Mechanisms of Ageing and Development, 6, 413–429.
6. Tissenbaum, H. A. (2015). Using C. elegans for aging research. Invertebrate Reproduction & Development, 59(sup1), 59–63.
7. Kenyon, C., Chang, J., Gensch, E., Rudner, A., & Tabtiang, R. (1993). A C. elegans mutant that lives twice as long as wild type. Nature, 366(6454), 461.
8. Lin, K., Dorman, J. B., Rodan, A., & Kenyon, C. (1997).


daf-16: An HNF-3/forkhead family member that can function to double the life-span of Caenorhabditis elegans. Science, 278(5341), 1319-1322.
9. Henderson, S. T., & Johnson, T. E. (2001). daf-16 integrates developmental and environmental inputs to mediate aging in the nematode Caenorhabditis elegans. Current Biology, 11(24), 1975-1980.
10. Murphy, C. T., McCarroll, S. A., Bargmann, C. I., Fraser, A., Kamath, R. S., Ahringer, J., ... & Kenyon, C. (2003). Genes that act downstream of DAF-16 to influence the lifespan of Caenorhabditis elegans. Nature, 424(6946), 277.
11. Halaschek-Wiener, J., Khattra, J. S., McKay, S., Pouzyrev, A., Stott, J. M., Yang, G. S., ... & Riddle, D. L. (2005). Analysis of long-lived C. elegans daf-2 mutants using serial analysis of gene expression. Genome Research, 15(5), 603-615.
12. Chen, D., Li, P. W.-L., Goldstein, B. A., Cai, W., Thomas, E. L., Chen, F., ... & Kapahi, P. (2013). Germline Signaling Mediates the Synergistically Prolonged Longevity Produced by Double Mutations in daf-2 and rsks-1 in C. elegans. Cell Reports, 5(6), 1600–1610. http://doi.org/10.1016/j.celrep.2013.11.018
13. Du, X., Guan, Y., Huang, Q., Lv, M., He, X., Fang, C., ... & Sheng, J. (2018). Low Concentrations of Caffeine and Its Analogues Extend the Lifespan of Caenorhabditis elegans by Modulating IGF-1-like Pathway. Frontiers in Aging Neuroscience, 10, 211.
14. Cabreiro, F., Au, C., Leung, K. Y., Vergara-Irigaray, N., Cochemé, H. M., Noori, T., ... & Gems, D. (2013). Metformin retards aging in C. elegans by altering microbial folate and methionine metabolism. Cell, 153(1), 228-239.
15. Rodríguez-Rodero, S., Fernández-Morera, J. L., Menéndez-Torre, E., Calvanese, V., Fernández, A. F., & Fraga, M. F. (2011). Aging Genetics and Aging. Aging and Disease, 2(3), 186–195.



Regrowing Our Organs: The Development of Organoids for Medical Research and Disease Treatment

BY SAM NEFF '21

Figure 1: This is an intestinal organoid, derived from stem cells and grown in a laboratory. It is not structurally identical to the human intestine, but it can accurately model human intestinal function. Source: Wikimedia Commons (Credit: Meritxell Huch)


Addressing Our Medical Priorities

As of April 2018, more than 114,000 people in the United States were in line to receive an organ transplant (Organ Donor Statistics, n.d.). Due to the scarcity of available organs, many on this list will likely never receive the life-saving treatment they need. It is surely a public health priority to pursue technology that will reduce this organ shortage. On a related note, it is estimated that upwards of 600,000 people will die of cancer this year in the United States (Cancer Stat Facts, n.d.). Cancer is the second leading cause of death, behind heart disease, and it must continue to be addressed (Leading Causes of Death, n.d.). Cancer and organ transplant, alongside the treatment of genetic disease, infectious disease control, and the study of neurological diseases, are medical research priorities. With such a broad slate of problems to tackle, it is notable that a single technology has the potential to help the medical community advance along all of these fronts simultaneously: the organoid, which promises a new age of personalized medicine in which one's own body is used in its defense.

What Are Organoids?

Organoids are structures that model the function and appearance of organs in the human body. They are typically derived from patient stem cells, which grants them great specificity in modeling the human body: an organoid grown in the laboratory models the distinct organ structure of an individual patient. Of course, the process of building an organoid is quite complex, but a special property of stem cells makes it possible in a lab. Stem cells are capable of dividing and differentiating into multiple specialized cell types, creating complex biological structures through self-assembly so long as they are subjected to the right physical and chemical signals (Lancaster & Knoblich, 2014) [Figure 2]. This process of stem cell differentiation, with the appropriate biological cues to provide instructions, makes possible the development from an embryo to a fully functional human being. To grow organoids in a lab, certain conditions must be met. The stem cells derived from a patient must be subjected to very specific growth conditions. First, growth must occur

within a scaffold that mimics the human body's internal architecture. Often, these scaffolds are built from proteins like collagen that compose the body's extracellular matrix (the space between the body's cells, containing structural molecules that support tissue growth) [Figure 3]. In addition, the developing organoid must be subjected to particular chemicals, some of which are called growth factors. Different chemicals must be precisely applied to the organoid at different locations and at different times during its development (Gjorevski, Ranga, & Lutolf, 2014). Furthermore, as the organoid grows, its cells need to be supplied with essential nutrients. A network of blood vessels must be created that comes close enough to all of the organoid's cells, even those farthest toward the interior of the structure; otherwise the cells will die and the organoid will fail (Blitterswijk & de Boer, 2015). As scientists get better at meeting the complex requirements for organoid growth, these structures will come closer to an exact representation of human organs in vivo (in the body). More versatile and physiologically accurate than prior biological models, organoids have potential uses as tools for examining disease progression, subjects for drug testing, and even objects for organ transplant.

Examining Disease Progression

Beating Cancer

The treatment of cancer has been a public health priority since the 1970s, when President Nixon inaugurated the “war on cancer” by signing the National Cancer Act of 1971 and establishing a national cancer program (National Cancer Act of 1971, n.d.). Progress since then has been significant. From its peak in 1991, the cancer death rate had fallen 26% as of 2015. In particular, lung cancer death rates among men declined by 45%, prostate cancer death rates by 52%, and breast cancer death rates among women by 39% over the same time frame (Simon, 2018). This can be attributed to a multitude of factors, including the accumulation of scientific research and a changing culture. Despite these promising numbers, there is still much work to be done. The death rates for certain cancers are still exceptionally high. Take pancreatic cancer, for example: as of 2017, its 5-year survival rate was still under 7% (Baker, Tiriac, Clevers, & Tuveson, 2016). This suggests that our scientific understanding of pancreatic cancer is very incomplete. One reason is our limited understanding of early-stage cancer progression. Patients with the most common form of pancreatic cancer are often diagnosed too late, such that less than 15% of them are even eligible for surgery (Baker et al., 2016). Thus far, scientists have approached the

study of pancreatic cancer in several ways. Cancerous cells can be isolated and grown in a laboratory, dividing and multiplying into a lineage of cells known as a cell line. Yet this approach, for pancreatic cancer, can only be applied to the 15% of patients who are eligible for surgery, and therefore does not adequately represent the diversity of tumor development across pancreatic cancer cases. These cells are also not representative of early-stage tumor development. Furthermore, these cell lines are grown in a single layer, so they do not model a tumor's 3D structure [Figure 4]. A different approach, in which mice are genetically engineered to develop pancreatic tumors, solves the latter two problems, yet the development of tumors in mice is not representative of tumor development in humans (Baker et al., 2016). A new approach, in which organoids are used to model cancer development, could potentially correct all three problems of the earlier approaches. First, tumor organoids do not need to be grown from cancerous cells: multiple stem cells from a particular patient can be grown into pancreatic organoids, so long as the proper physical and chemical signals are applied to direct stem cell differentiation. Second, in the early stage of development, cancerous mutations that enhance tumor growth can be introduced into pancreatic cells with gene-editing technology (Baker et al., 2016; Neal & Kuo, 2016). If several different organoids are grown, with one cancerous pancreas and one healthy pancreas as a control (and likely a few sets of backup organoids as well, in case some fail to grow), scientists can examine the progression of pancreatic cancer from the earliest stages of tumor development. Tumor growth can be examined for a particular patient, in three dimensions, such that the diversity and complexity of tumor growth in cancer patients

Figure 2: Stem Cell Differentiation. Stem cells are capable of developing into a variety of cell types depending on the molecular signals they are subjected to. In this example, mesenchymal stem cells can differentiate into two different cell types: adipocytes (fat cells) and osteoblasts (bone cells). Source: Wikimedia Commons (Credit: Mystner)

“If several different organoids are grown, with one cancerous pancreas and one healthy pancreas as a control [...], scientists can examine the progression of pancreatic cancer from the earliest stages of tumor development.”


Figure 3: The extracellular matrix, the space surrounding a cell, contains a variety of proteins and protein complexes like collagen and proteoglycans. These structural protein elements are attached to cells via linking proteins like integrins. These connections between cell and extracellular matrix guide cellular growth like a mold shapes the formation of molten metal. Source: Wikimedia Commons (Credit: Kassidy Veasaw)


is accurately depicted in the laboratory (Drost & Clevers, 2018). In addition to building a better understanding of cancer development, scientists can gain insight into the mutations that drive cancerous growth. Scientists have studied cancerous tumors and know of the many genetic mutations that contribute to their growth. However, cancer growth is often initiated by a small number of key mutations, whose effects lead to the accumulation of many other mutations. With the tools of genetic engineering, scientists can take the so-called “bottom-up approach” to cancer research (as opposed to the top-down approach, where cancerous mutations are introduced into cancer cell lines that already have many existing mutations). Known cancerous mutations are edited into the cells of organoids with no cancerous mutations, and the accumulation of additional mutations is monitored (Neal & Kuo, 2016). This process could be used in a high-throughput approach to cancer research, where many similar organoids (derived from the stem cells of a single patient and all subjected to the same growth conditions) are simultaneously subjected to different genetic mutations. One could pick out the organoids whose single (or multiple) mutations ultimately led to cancerous growth and point to these mutations as a cause of cancer. Furthermore, if these driving mutations are known, and gene-editing techniques reach the point of clinical efficacy, the mutations could be corrected within the patient's cells to halt cancer progression.

Understanding Neurological Disease

Of any organ in the human body, our understanding of the brain is perhaps the most limited. As a consequence, our knowledge of neurological diseases lags behind that of most other disorders. The human brain is incredibly complex and difficult to model, partly due to its high degree of cerebral folding, its massive prefrontal cortex (the brain region implicated in higher-level reasoning and considered very important to communication), and its large total size relative to other species of similar body mass (Striedter, 2005) [Figure 5]. This largely rules out animal models for human brain pathology. However, stem-cell-derived organoids may provide a better tool. Of course, the physical and chemical signals required to facilitate brain development are complex and not fully understood. However, some early features of brain development have been recreated, lending insight into neuropsychiatric disorders like autism and schizophrenia. For example, a recent study found that abnormal function at synapses, the points where two neurons meet, is associated with schizophrenia (Quadrato, Brown, & Arlotta, 2016). Discoveries such as these may point to the underlying biological causes of neurological diseases, including fatal disorders such as ALS and Alzheimer's, and open the way for broader medical treatment.

Testing Drugs

Fighting Genetic Disease

Now that the human genome has been

sequenced, thanks to the efforts of those involved in the Human Genome Project, the treatment of genetic disease can be tackled from a molecular perspective. This newfound knowledge of human genetics, combined with a growing understanding of molecular biology and the inner workings of cells, has allowed staggering advances in the past two decades. Scientists have reached a point where, once a disease becomes associated with a particular gene, or set of genes, therapy can be devised that fixes or compensates for the associated defective proteins, or even attempts to correct the defects in the gene itself. As scientists have advanced treatment for genetic diseases, our picture of genetic disease itself has become more complex. Within a single gene, there are many different mutations that can cause a particular genetic disease, and these differences in mutation have implications for protein structure and disease progression (Understanding Genetics, n.d.). In addition, although most genetic diseases are associated with a single gene, one gene doesn’t determine clinical outcome. An individual with a genetic variant that promotes the immune response, enhances metabolism, or provides any other positive health benefit, could see slower disease progression than an individual without these traits. Likewise, one’s environment has a significant impact on health (Wachter, Thomas, Wanyama, Seneca, & Malfroot, 2017). These considerations apply to individuals with cystic fibrosis (CF), the most common fatal genetic disease in the United States. Some mutations associated with the disease are more common and occur repeatedly within the CF

community. A particular mutation, known as delta F508, is found in at least one allele of 90% of CF patients (Overview of CFTR Mutations, n.d.). However, other mutations are much rarer, with some found only in a small subset of the community, or even in a single person. For these patients, it is notoriously difficult to prescribe drugs, as it is unknown whether a drug will have the same effect on them that it has on people with more common mutations, on whom the drug has already been tested (Wachter, Thomas, Wanyama, Seneca, & Malfroot, 2017). But even for two individuals with the same CF genotype, the outcome of drug treatment can be markedly different. This is a product of differences in other genes that affect disease progression, the environment of the individual, and other factors. Fortunately, these issues with CF treatment, and with the treatment of genetic disease at large, can be addressed with organoid technology. To address the drastic variation in disease progression for individuals with CF, and those with other genetic diseases, it is possible to generate organoids from their stem cells and test drugs on those organoids. As proof of this principle, recent experiments have been conducted on mice, in which two types of intestinal organoids [Figure 1] were generated from their stem cells: normal ones and ones genetically engineered to possess the mutant CFTR gene (the gene responsible for the defective protein in CF) (Dekkers et al., 2013; Bartfeld & Clevers, 2017). It has been shown that when the chemical forskolin, which has the effect of activating the CFTR protein, is applied to the normal organoid, the organoid swells in size. This occurs because

“Within a single gene, there are many different mutations that can cause a particular genetic disease, and these differences in mutation have implications for protein structure and disease progression.”

Figure 4: This is a pancreatic tumor, with certain proteins stained for greater visibility. The tumor has a complex, distinct 3D structure, difficult to model with traditional 2D cancer cell lines, or tumors from other animals. Source: Wikimedia Commons (Credit: Fkot1290)



the CFTR protein is an ion channel that regulates the transport of chloride and other ions across the cell membrane. The exchange of chloride ions between cells and the lumen is accompanied by the movement of water, which flows in the same direction as the chloride ions (Osmosis, Tonicity, and Hydrostatic Pressure, n.d.). When the CFTR protein is working properly, it pumps chloride ions out of the cell, and water follows, filling the lumen and causing organoid swelling. In contrast, a mutated CFTR protein can't pump chloride ions, so water will not flow out of the cells (Dekkers et al., 2013; Bartfeld & Clevers, 2017). In short, an organoid that swells when forskolin is applied possesses a functional CFTR protein, and an organoid that doesn't swell possesses the dysfunctional version. This means that the forskolin assay can be used to evaluate CFTR function and assess the effectiveness of drugs for particular CF patients. If a drug applied to CF organoids is effective, the forskolin assay should reveal a swelling response, at least to a greater extent than in CF organoids not treated with the drug.

This approach to drug testing addresses a growing problem with the traditional format of clinical trials for genetic disease. Because certain disease genotypes are unique to a small group or even an individual patient, it is impossible to test a drug on a large group of patients. Testing a drug on a larger population is essential to ensure that the success of the drug is repeatable and not simply a fluke for a particular patient. But when there is not a large population available for testing, this problem can be circumvented by taking an individual patient's stem cells, generating many different organoids, and testing them for drug response. In fact, the large-scale clinical trial for genetic disease patients may soon become a relic of the past, as it is theoretically possible to generate large “biobanks” that contain organoids representing all genotypes associated with a particular disease. A drug can be tested on all of these organoids simultaneously and then applied to the patients who would benefit from it (Bartfeld & Clevers, 2017). This approach reduces concern over the potential side effects of new drugs while saving large amounts of money and time.

“The forskolin assay can be used to evaluate CFTR function and assess the effectiveness of drugs for particular CF patients.”

Figure 5: The human brain is quite complex, with a larger relative size than that of any other species and a very high degree of gyrification (folding). Even the ape brain is very different from the human brain, making animal testing an unhelpful way to model human neurological function. Source: Wikimedia Commons (Credit: J. Arthur Thomson, The Outline of Science)

Organ Transplant

Overcoming Organ Rejection

The ultimate application of organoid technology is the creation of organs viable for transplant into the human body. This would eliminate the problem of organ rejection, in which the patient's immune system attacks the donor organ. Even when the donor organ is not rejected outright, the immune response necessitates treatment with immunosuppressant drugs, which make organ recipients more susceptible to disease. The long-term prognosis for organ transplant patients is not rosy. This novel approach to organ transplant would also eliminate the issue of waiting lists: one would not have to wait for a donor, only for organoids to grow from one's own stem cells. It is even possible that relatively healthy individuals could use organoid therapy as a preventative measure to stall the decline of organ function with age. An individual might pay for a full or partial transplant, receiving an organ that functions as well as it did in one's youth. This would not only provide a great economic opportunity for the ambitious entrepreneur, but might extend the average lifespan, and maybe even the natural lifespan, of the human population. But the technology is not yet at the level where such an enterprise could be seriously considered. The organoid structure would have to be incredibly precise, and the scaffolding and

growth signals would have to recapitulate in vivo conditions almost exactly for the transplant to be clinically viable. In a recent study, scientists attempted transplantation of liver organoids into mice. The result was an immune response, and death of the implanted liver cells began within an hour after the surgery (Zhou, Lolas, & Chang, 2017). This result is troubling for the prospect of human organoid transplantation. Until our understanding of organoid development improves further, drug testing and disease modeling, not organ transplant, will have to be the focus.

CONTACT SAM NEFF AT SAMUEL.L.NEFF.21@DARTMOUTH.EDU

References
1. Baker, L.A., Tiriac, H., Clevers, H., & Tuveson, D.A. (April 2016). Modeling Pancreatic Cancer with Organoids. Trends in Cancer, 2(4): 176-190. Retrieved from https://www.sciencedirect.com/science/article/pii/S2405803316000431?via%3Dihub
2. Bartfeld, S., & Clevers, H. (July 2017). Stem Cell-Derived Organoids and Their Application for Medical Research and Patient Treatment. Journal of Molecular Medicine, 95(7): 729-738. Retrieved from https://link.springer.com/article/10.1007%2Fs00109-017-1531-7
3. Cancer Stat Facts: Common Cancer Sites. National Cancer Institute. Retrieved from https://seer.cancer.gov/statfacts/html/common.html
4. Dekkers, J.F., Wiegerinck, C.L., de Jonge, H.R., Bronsveld, I., Janssens, H.M., de Winter-de Groot, K.M., Brandsma, A.M., de Jong, N.W.M., Bijvelds, M.J.C., Scholte, B.J., Nieuwenhuis, E.E.S., van den Brink, S., Clevers, H., van der Ent, C.K., Middendorp, S., & Beekman, J.M. (2 June 2013). A Functional CFTR Assay Using Primary Cystic Fibrosis Intestinal Organoids. Nature Medicine, 19: 939-945. Retrieved from https://www.nature.com/articles/nm.3201
5. Drost, J., & Clevers, H. (24 April 2018). Organoids in Cancer Research. Nature Reviews Cancer, 18: 407-418. Retrieved from https://www.nature.com/articles/s41568-018-0007-6
6.
Dye, B.R., Dedhia, P.H., Miller, A.J., Nagy, M.S., White, E.S., Shea, L.D., & Spence, J.R. (28 September 2016). A Bioengineered Niche Promotes In Vivo Engraftment and Maturation of Pluripotent Stem Cell Derived Human Lung Organoids. eLife. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5089859/
7. Gjorevski, N., Ranga, A., & Lutolf, M.P. (2014). Bioengineering Approaches to Guide Stem Cell-Based Organogenesis. The Company of Biologists, 141: 1794-1804. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/24757002
8. Iakobachvili, N., & Peters, P.J. (5 December 2017). Humans in a Dish: The Potential of Organoids in Modeling Immunity and Infectious Diseases. Frontiers in Microbiology. Retrieved from https://www.frontiersin.org/articles/10.3389/fmicb.2017.02402/full
9. Lancaster, M.A., & Knoblich, J.A. (18 July 2014). Organogenesis in a Dish: Modeling Development and Disease Using Organoid Technologies. Science, 345(6194). Retrieved


from http://science.sciencemag.org/content/345/6194/1247125 10. Leading Causes of Death. Centers for Disease Control and Prevention. Retrieved from https://www.cdc.gov/nchs/fastats/leading-causes-of-death.htm 11. National Cancer Act of 1971. National Cancer Institute. Retrieved from https://www.cancer.gov/about-nci/legislative/history/national-cancer-act-1971 12. Neal, J.T., & Kuo, C.J. (22 February 2016). Organoids as Models for Neoplastic Transformation. Annual Review of Pathology: Mechanisms of Disease, 11: 199-220. Retrieved from https://www.annualreviews.org/doi/10.1146/annurev-pathol-012615-044249 13. Organ Donor Statistics. U.S. Department of Health and Human Services. Retrieved from https://www.organdonor.gov/statistics-stories/statistics.html 14. Osmosis, Tonicity, and Hydrostatic Pressure. VIVO Pathophysiology. Retrieved from http://www.vivo.colostate.edu/hbooks/pathphys/topics/osmosis.html 15. Overview of Common CFTR Mutations. CFTR.info. Retrieved from http://www.cftr.info/about-cf/cftr-mutations/cftr-epidemiology/ 16. Quadrato, G., Brown, J., & Arlotta, P. (26 October 2016). The Promises and Challenges of Human Brain Organoids as Models of Neuropsychiatric Diseases. Nature Medicine, 22: 1220-1228. Retrieved from https://www.nature.com/articles/nm.4214 17. Schwank, G., Bon-Kyoung, K., Sasselli, V., Dekkers, J.F., Heo, I., Demircan, T., Sasaki, N., Boymans, S., Cuppen, E., van der Ent, C.K., Nieuwenhuis, E.E.S., Beekman, J.M., & Clevers, H. (5 December 2013). Functional Repair of CFTR by CRISPR/Cas9 in Intestinal Stem Cell Organoids of Cystic Fibrosis Patients. Cell Stem Cell, 13(6): 653-658. Retrieved from https://www.sciencedirect.com/science/article/pii/S1934590913004931?via%3Dihub 18. Simon, S. (4 January 2018). Facts & Figures 2018: Rate of Deaths from Cancer Continues Decline. American Cancer Society. Retrieved from https://www.cancer.org/latest-news/facts-and-figures-2018-rate-of-deaths-from-cancer-continues-decline.html 19. Striedter, G.F. 
(2005). Principles of Brain Evolution. Sunderland, MA: Sinauer Associates. 20. Sun, Y., & Ding, Q. (May 2017). Genome Engineering of Stem Cell Organoids for Disease Modeling. Protein and Cell, 8(5): 315-327. Retrieved from https://link.springer.com/article/10.1007%2Fs13238-016-0368-0 21. Understanding Genetics: A District of Columbia Guide for Patients and Professionals. Appendix G: Single-Gene Disorders. NCBI. Retrieved from https://www.ncbi.nlm.nih.gov/books/NBK132154/ 22. Van Blitterswijk, C.A., & de Boer, J. (2015). Tissue Engineering. London, UK: Elsevier. 23. Yin, X., Mead, B.E., Safaee, H., Langer, R., Karp, J.M., & Levy, O. (7 January 2016). Engineering Stem Cell Organoids. Cell Stem Cell, 18(1): 25-38. Retrieved from https://www.sciencedirect.com/science/article/pii/S1934590915005500?via%3Dihub 24. Wachter, E.D., Thomas, M., Wanyama, S.S., Seneca, S., & Malfroot, A. (22 August 2017). What Can the CF Registry Tell Us About Rare CFTR-Mutations? A Belgian Study. Orphanet Journal of Rare Diseases, 12(142). Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5567473/ 25. Zhou, V.X., Lolas, M., & Chang, T.T. (May 2017). Direct Orthotopic Implantation of Hepatic Organoids. Journal of Surgical Research, 211: 251-260. Retrieved from https://www.sciencedirect.com/science/article/pii/S0022480416305789

“Until our understanding of organoid development improves further, drug testing and disease modeling, not organ transplant, will have to be the focus.”



Analysis of Three-Species Predator-Prey Dynamics with a Focus on Isle Royale BY ANURAAG BUKKURI '21 Figure 1: This image depicts one of many neural networks. Source: Wikimedia Commons (Credit: Hermann Cuntz)

Abstract
This paper considers extensions of the two-species Lotka-Volterra model to three species, with applications to the ecological modeling of Isle Royale. The dynamics among the wolf, moose, tick, and vegetation populations were investigated, along with the effect that climate has on the population dynamics of the wolves and moose of Isle Royale. After developing theoretical models and running regression analyses with existing data, the long-term behavior of these systems was predicted and compared with observed trends.

Introduction
In 1926, Italian mathematician Vito Volterra proposed a system of differential equations to model and explain the observed increase in predatory fish in the Adriatic Sea during WWI. Around the same time, in the US, physical chemist Alfred Lotka derived the same set of equations to describe a hypothetical chemical reaction in which various chemical concentrations oscillate. These equations, now known as the Lotka-Volterra equations, are perhaps the simplest model of predator-prey interactions and serve as the foundation upon which this paper builds. The Lotka-Volterra model, shown below, is based on the linear per capita growth rates of the prey population, x(t), and the predator population, y(t) (1):

dx/dt = ax − bxy
dy/dt = −cy + dxy
for a, b, c, d > 0, in which:
• dx/dt and dy/dt: growth rates of the prey and predator populations, respectively,
• a: growth rate of the prey population in the absence of predators,
• b: impact of predation on the prey,
• c: death rate of the predator population in the absence of prey,
• d: growth rate of the predator population in response to the size of the prey population.
There are many advantages to using this model: it requires data only on the predator and prey populations—something that is often beneficial in ecological situations where precise data are difficult to obtain. For this reason, the Lotka-Volterra model was used as the starting point for our modeling of Isle Royale. First, yearly moose and wolf population data for Isle Royale since 1988 were obtained from the annual reports posted on the official Isle Royale website. For a given year t, we calculated the growth rates of the moose and wolf populations by averaging the slopes between years t−1 and t and between t and t+1. Then, using these growth rates and population data, we ran regression analyses with the Wessa multiple regression tool. First, we analyzed the moose population, as shown below.
Moose Population Multiple Linear Regression - Estimated Regression Equation: dx/dt = 155 + 0.206x − 0.0227xy
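The growth-rate averaging and regression step described above can be sketched in Python. The population numbers below are made up for illustration (the real data come from the Isle Royale annual reports), and the design matrix mirrors the form of the estimated regression equation, dx/dt = c0 + c1·x + c2·xy.

```python
# Illustrative sketch of the paper's procedure: centered growth rates,
# then ordinary least squares on Lotka-Volterra-style terms.
# Population numbers are hypothetical, not the Isle Royale data.
import numpy as np

moose = np.array([900.0, 1050.0, 1200.0, 1100.0, 950.0, 1000.0])
wolves = np.array([20.0, 22.0, 25.0, 24.0, 21.0, 19.0])

def centered_growth_rates(pop):
    """Average of the backward and forward yearly slopes:
    ((p[t]-p[t-1]) + (p[t+1]-p[t]))/2 = (p[t+1]-p[t-1])/2."""
    return (pop[2:] - pop[:-2]) / 2.0

dxdt = centered_growth_rates(moose)  # defined for interior years only
x = moose[1:-1]
y = wolves[1:-1]

# Fit dx/dt = c0 + c1*x + c2*x*y, the form of the estimated equation above.
design = np.column_stack([np.ones_like(x), x, x * y])
coeffs, *_ = np.linalg.lstsq(design, dxdt, rcond=None)
print(coeffs)  # [intercept, growth term, predation term]
```

The same procedure, with the wolf growth rates as the response and columns for y and xy, gives the wolf regression.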

Actuals and Interpolation for the moose population are shown in the top graph to the right. Though our model roughly matches the general trends of the actual data, some interpolations are far from the actual data points, especially near year nine of the data sample. This contributed to a poor R² value, as discussed later in this paper. Below is the regression analysis for the wolf population:
Wolf Population Multiple Linear Regression - Estimated Regression Equation: dy/dt = −4.42 + 0.000239xy − 0.0068y
Actuals and Interpolation for the wolf population are shown in the bottom graph to the right. Again, the general shape of our model resembles that of the data points. However, unlike the moose regression, which had a clear “outlier” in the model interpolation near year nine, the wolf regression fluctuated more severely than the corresponding data. This also led to a poor R² value for the wolf regression. From the above analysis, although the Lotka-Volterra model may be simple and seemingly all-encompassing, it does not fare well in the case of Isle Royale. In fact, the regressions yielded R² values of only 0.1894 for the growth rate of the moose and 0.2532 for the growth rate of the wolves. These rather poor R² values can be attributed to external factors omitted by the Lotka-Volterra model of population dynamics on Isle Royale, such as climate, ticks, and vegetation. In this paper, we incorporate these factors independently into our models to gain more accurate and diverse perspectives on the population dynamics of Isle Royale. First, theoretical models for these cases are developed.
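The R² values quoted above follow the standard definition R² = 1 − SS_res/SS_tot. A minimal Python check, with made-up observed and interpolated growth rates rather than the paper's data:

```python
# R^2 = 1 - (residual sum of squares) / (total sum of squares).
# Observed and predicted growth rates below are illustrative only.
observed = [150.0, 25.0, -125.0, -50.0]
predicted = [120.0, 40.0, -90.0, -70.0]

mean_obs = sum(observed) / len(observed)
ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
ss_tot = sum((o - mean_obs) ** 2 for o in observed)
r_squared = 1.0 - ss_res / ss_tot
print(round(r_squared, 4))  # 0.9333
```

A value near 1 indicates a close fit; the low values reported in this paper (0.1894 and 0.2532) indicate that the model explains little of the variation in the growth rates.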

Figure 2, above: Actuals and Interpolation of moose population (top) and wolf population (bottom). Source: Anuraag Bukkuri

Figure 3, below: NAO index values from 1980 to 2016. Source: NAO.

Climate
Since weather affects both wolves and moose, but not vice versa, we decided to incorporate weather into the coefficients describing predation and kill rates. To do this, we needed a way to quantify weather, and we chose the NAO (North Atlantic Oscillation) index. A positive NAO index reflects below-normal heights and pressures across the higher latitudes of the North Atlantic and above-normal heights and pressures over the central North Atlantic, the eastern United States, and western Europe; negative values indicate the opposite pattern of heights and pressures over these regions (2). Strong positive phases are associated with milder winter temperatures in the eastern US and across northern Europe, while the opposite patterns of temperature and precipitation apply during strong negative phases. Figure 3 shows NAO index values from 1980 to 2016; however, since we only use population data from 1988 to 2013, these are the only NAO index values we consider. Taking NAO values below -2 to represent severe winters on Isle Royale, we determined from the graph that the following winters were among the most severe: 1993, 1998, 2005, 2006, 2008, 2010, 2011, and 2012 (3). Then, to determine which coefficient multiplier to use in our model, we ran multiple regression analyses testing different coefficient


values for severe winters and manipulated our existing data by multiplying the corresponding predation and kill rate data by this factor to determine which multiplier yielded results that most accurately matched the existing data. The results are as follows:
Moose Analysis

“To model vegetation, we used a 'hierarchical model' in which a top predator preys on a middle-level species, which in turn preys on a bottom-level prey.”

From the table above, we notice that the multiplier that yielded results closest to the actual data was 1.6. Using this multiplier, we noted a significant improvement in the R² value—from 0.1894 to 0.3817. Below, we conduct the same analysis for the wolf population of Isle Royale:
Wolf Analysis

From the above regressions, we notice that the optimal coefficient multipliers were 1.6 and 1/1.6 for the predation and kill rates, respectively. Also note that the R² values changed much less for the wolves than for the moose, suggesting that wolves are less affected by the weather than moose are. After analyzing the weather based on the NAO index, we next looked at local snowfall, in hopes of obtaining more accurate and local data. Snowfall plays a key role in wolf predation strategies on Isle Royale. During winters with high snowfall, moose aggregate along lakeshores, where there is less snow. Because the moose are clustered together and have nowhere to run, the wolves hunt together and kill more moose. During low-snowfall winters, however, the moose are more dispersed throughout the island. As a result, the wolves do not hunt in packs and become more confined to local territories, killing fewer moose overall (4).
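The multiplier scheme can be sketched as follows. The severe-winter years come from the NAO analysis above, and 1.6 and 1/1.6 are the paper's optimal multipliers; the baseline coefficient values (taken from the estimated regression equations) and the assignment of b to predation and d to the kill rate are illustrative assumptions.

```python
# Sketch of the severe-winter coefficient adjustment described above.
# Severe years come from the NAO analysis; 1.6 and 1/1.6 are the optimal
# multipliers found by the regressions. Mapping b to predation and d to
# the kill rate is an illustrative assumption.
SEVERE_WINTERS = {1993, 1998, 2005, 2006, 2008, 2010, 2011, 2012}

def adjusted_coefficients(year, b=0.0227, d=0.000239, multiplier=1.6):
    """Return (predation, kill-rate) coefficients, scaled in severe winters."""
    if year in SEVERE_WINTERS:
        return b * multiplier, d / multiplier
    return b, d

print(adjusted_coefficients(2010))  # scaled: severe winter
print(adjusted_coefficients(2000))  # baseline: normal winter
```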

The average winter (December to February) precipitation on Isle Royale is 46.54 inches (5). To decide what would deem a winter severe, we needed a threshold that was neither too exclusive nor too inclusive; we chose to consider approximately 30% of the winters on Isle Royale severe. Accordingly, we considered any winter with more than 120% of the average snowfall (55.85 inches) to be severe. Under these conditions, we determined that the following winters were among the most severe on Isle Royale since 1988: 1988, 1991, 1996, 2001, 2010, 2012, and 2013. Of these years, only two were also considered severe by our NAO index analysis: 2010 and 2012. Following the same procedure as for the NAO index and manipulating the predation and kill rate coefficients, we found that the best R² values for the moose and wolf populations were much worse than when we used the NAO index (0.1940 vs. 0.3817 for the moose population). From this, we concluded that the NAO index is a much better indicator of climate's impact on population dynamics on Isle Royale than local snowfall data.
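The snowfall threshold above is simple arithmetic, checked here in Python; the per-year snowfall figures are hypothetical:

```python
# Worked check of the severe-winter snowfall rule: a winter is severe if
# its snowfall exceeds 120% of the 46.54-inch average. The yearly
# snowfall figures below are made up for illustration.
average_snowfall = 46.54  # inches, December-February average
threshold = 1.2 * average_snowfall
print(round(threshold, 2))  # 55.85, matching the threshold in the text

snowfall_by_year = {1988: 60.1, 1989: 40.2, 1991: 58.7, 1992: 46.0}
severe = [yr for yr, s in sorted(snowfall_by_year.items()) if s > threshold]
print(severe)  # [1988, 1991]
```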

Vegetation
We decided to consider ways to include vegetation in our population model as well, since it could be a limiting factor for moose population growth. To model this, we used a “hierarchical model” in which a top predator preys on a middle-level species, which in turn preys on a bottom-level prey. This model, as outlined by Chauvet, Paullet, and Previte, is described below (7):
dx/dt = ax − bxy
dy/dt = −cy + dxy − eyz
dz/dt = −fz + gyz
for a, b, c, d, e, f, g > 0, in which:
• dx/dt, dy/dt, dz/dt: growth rates of the bottom prey, middle species, and top predator populations, respectively,
• a, b, c, and d: as in the Lotka-Volterra equations described above,
• e: effect of predation on species y by species z,
• f: natural death rate of predator z in the absence of prey,
• g: growth rate of the predator population z in response to the size of prey population y.
According to the study conducted by McLaren and Peterson, approximately 59% of the diet of Isle Royale moose is composed of Abies balsamea (balsam fir) (8). Because of the importance of the balsam fir to the moose population, not only as food but also as cover, we decided

to measure available vegetation on Isle Royale simply by considering the balsam fir population. Accurate data on the balsam fir population were quite difficult to find; the only data available to us were the graph to the right, obtained from the 2014 annual Isle Royale report (9). Since we could not obtain the original data used to create this graph, we were forced to estimate the balsam fir population for every year between 1988 and 2013 from the graph alone. Then, we ran regression analyses to determine whether our model of moose and wolf population dynamics had become more accurate with the introduction of vegetation. Because we are only modeling the population dynamics of moose and wolves, we need only compare these species' regressions with the ones performed without the inclusion of vegetation in the introduction of this paper. In other words, we are not taking into consideration the accuracy of the fir population, as it is not essential to our central analysis. We also notice that, in both the Lotka-Volterra and hierarchical models, the wolf population depends only on the wolf and moose populations; therefore, the regression for the wolves remains the same. The results of the regression analysis on the moose population are shown below.
Moose Population Multiple Linear Regression - Estimated Regression Equation: dx/dt = 18.5 + 0.566y + 0.00342xy − 0.0246yz
Actuals and Interpolation are shown on the graph below. The regression analysis above yielded an R² value of 0.2504—an improvement over the

FALL 2018

0.1894 R² value obtained when vegetation was not considered. Although the improvement was not as great as when we considered climate, vegetation clearly plays a role in the population dynamics of Isle Royale—directly impacting the moose population, and indirectly impacting the wolf population. It is important to note that, although we have considered climate and vegetation only as independent factors affecting the moose and wolf populations, these factors may be closely intertwined.
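The hierarchical model described above can be integrated numerically. Below is a minimal forward-Euler sketch, with illustrative coefficients chosen near a coexistence equilibrium rather than the paper's fitted values:

```python
# Minimal forward-Euler sketch of the three-species food-chain model.
# All coefficients and initial populations are illustrative, NOT the
# paper's fitted values.
def hierarchical_step(x, y, z, a, b, c, d, e, f, g, dt=0.01):
    """One Euler step of the hierarchical model."""
    dx = a * x - b * x * y                # bottom prey (e.g. balsam fir)
    dy = -c * y + d * x * y - e * y * z   # middle species (e.g. moose)
    dz = -f * z + g * y * z               # top predator (e.g. wolves)
    return x + dx * dt, y + dy * dt, z + dz * dt

x, y, z = 210.0, 20.0, 10.0  # start slightly off equilibrium
for _ in range(500):
    x, y, z = hierarchical_step(x, y, z,
                                a=0.4, b=0.02, c=0.3, d=0.002,
                                e=0.01, f=0.2, g=0.01)
print(x, y, z)  # populations after 5 time units
```

A production analysis would use an adaptive ODE solver rather than fixed-step Euler, which can drift over long horizons.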

Figure 4, above: Balsam fir population data. Source: 2014 annual Isle Royale report. Figure 5, below: Actuals and Interpolation for moose population with vegetation consideration. Source: Anuraag Bukkuri



“We found that, although inclusion of vegetation and climate each improved the accuracy of our models, incorporation of the NAO index influenced the accuracy the most.”

Figure 6: Moose and wolf populations on Isle Royale from 1959 to 2009. Source: Isle Royale official website.


Conclusion
In this paper, we have considered several ways to more accurately model the population dynamics of the moose and wolves of Isle Royale by incorporating climate, both in terms of the NAO index and snowfall, and vegetation, in terms of the balsam fir population. We found that, although the inclusion of vegetation and climate each improved the accuracy of our models, incorporation of the NAO index improved the accuracy the most. Our findings also suggest that the model of the moose population is more vulnerable to large fluctuations than the model of the wolf population. However, if we look at the graph below, provided by the official Isle Royale website, of moose and wolf populations on Isle Royale from 1959 to 2009, we notice different trends (13). There, we see that the wolf population, like the moose population, also fluctuates. There are many possible explanations; one is the small population size of the wolves. As can be seen in the plot, the largest wolf population ever recorded was only 50, meaning that any small change in numbers could have drastic effects on the overall population size. In fact, in recent years, the small population size of the wolves has led to a multitude of other problems. For example, it has led to inbreeding within the Isle Royale wolf population, which in turn has produced various genetic defects. Smaller population sizes have also hindered the ability of the wolves to hunt in packs, a critical element of the wolves' predation strategy. D

CONTACT ANURAAG BUKKURI AT ANURAAG.BUKKURI.21@DARTMOUTH.EDU References 1. Hoppensteadt, F. (2006, October 16). Predator-prey model. Retrieved March 06, 2016, from http://www.scholarpedia.org/article/Predator-prey_model 2. North Atlantic Oscillation (NAO). (n.d.). Retrieved March 06, 2016, from https://www.ncdc.noaa.gov/teleconnections/nao/ 3. CPC - Teleconnections: NAO. (n.d.). Retrieved March 06, 2016, from http://www.cpc.ncep.noaa.gov/products/precip/CWlink/pna/month_nao_index.shtml 4. McRoberts, R. E., Mech, L. D., & Peterson, R. O. (1995). The Cumulative Effect of Consecutive Winters' Snow Depth on Moose and Deer Populations: A Defence. The Journal of Animal Ecology, 64(1), 131. 5. Intellicast - Isle Royale Natl Park Historic Weather Averages in Michigan. (n.d.). Retrieved March 06, 2016, from http://www.intellicast.com/Local/History.aspx?location=USMI0431 6. Theodor, J. M. (2001). Artiodactyla (Even-Toed Ungulates Including Sheep and Camels). Encyclopedia of Life Sciences. 7. Chauvet, E., Paullet, J. E., Previte, J. P., & Walls, Z. (2002). A Lotka-Volterra Three-Species Food Chain. Mathematics Magazine, 75(4), 243. 8. McLaren, B. E., & Peterson, R. O. (1994). Wolves, Moose, and Tree Rings on Isle Royale. Science, 266(5190), 1555-1558. 9. Ecological Study of Wolves on Isle Royale: 2013-2014. (2014). Retrieved March 6, 2016, from http://www.isleroyalewolf.org/sites/default/files/annual-report-pdf/wolf moose annual report 2014 - color for web.pdf 10. Small Creature, Big Influence. (n.d.). Retrieved March 06, 2016, from http://www.isleroyalewolf.org/node/44 11. Freedman, H., & Waltman, P. (1984). Persistence in models of three interacting predator-prey populations. Mathematical Biosciences, 68(2), 213-231. 12. Smith, H. L. (1982). The Interaction of Steady State and Hopf Bifurcations in a Two-Predator–One-Prey Competition Model. SIAM Journal on Applied Mathematics, 42(1), 27-43. 13. The Population Biology of Isle Royale Wolves and Moose: An Overview. (n.d.). Retrieved March 06, 2016, from http://www.isleroyalewolf.org/data/data/home.html



Epithelial Stem Cell Polarity and Connection to Tumorigenesis BY NISHI JAIN ’21

Introduction
The development of tumors has long been attributed to the massive proliferation of cells into a solid structure that then develops invasive migratory behavior. Such movement can come in the form of metastasis, in which cancerous cells migrate through the bloodstream or the lymphatic system to other areas of the body; this is a characteristic of more advanced cancers (1, 2). Past studies have investigated the potential causes of the disease and have settled on a few answers, the most prominent being DNA mutations that disrupt the normal cell cycle (3-5). However, scientists have proposed another cause: the loss of epithelial stem cell polarity. While historically this cause has been connected only to more advanced stages of tumorigenesis, recent evidence suggests that cell polarity may also be connected to much earlier stages of tumor formation than previously thought (5).

Epithelial Stem Cell Polarity: An Overview
Epithelial cell polarity can be thought of as

“asymmetry” within epithelial cells and epithelial tissue, referring to the differential positioning of the membrane and organelles along the apical-basolateral axis (called apical-basolateral polarity) (6-10). This polarity is characterized by the existence of separate apical and basolateral membrane domains, and also by planar cell polarity, the positioning of cells in the plane of the epithelial tissue (11). The mechanisms that control this polarity have been shown to be more closely connected to cancer than cell polarity's other characteristics (12-16). Within the apical and basolateral plasma membrane domains, both simple and stratified epithelia contain a layer of dividing cells—‘simple’ epithelia are characterized by the existence of primary cilia and microvilli, while ‘stratified’ epithelia are characterized by a barrier formed by the uppermost surface layer (13). Combined, apical-basolateral polarity contributes to cell shape and directional transport; it allows different cells to produce a spatiotemporal response to a disruption in the microenvironment (17-19). The current implication for tumorigenic development is that when the polarity is

Figure 1: An adult stem cell imaged through a transmission electron micrograph. The image reveals the cell's ultrastructure, visible at this additional magnification thanks to advanced imaging techniques. Source: Wikimedia Commons.


“Studies have shown that cell polarity complexes have been highly evolutionarily conserved, especially the three major ones that have been identified by biologists: the PAR polarity complex, the CRB complex, and the LLGL-DLG complex.”

disrupted, cells can become unresponsive to the microenvironmental cues that signal that action is necessary (20). A disruption in the cell cycle could be one such microenvironmental cue that the cell must respond to; with disrupted polarity, however, the cell may not act to regulate the disruption, potentially leading to cellular proliferation and tumorigenesis.

The Biology Behind Epithelial Stem Cell Polarity
Studies have shown that cell polarity complexes are highly evolutionarily conserved, especially the three major complexes identified by biologists: the PAR polarity complex, the CRB complex, and the LLGL-DLG complex. The PAR polarity complex—composed of PAR3, PAR6, atypical protein kinase C (aPKC), and cell division control protein 42 (CDC42)—functions to establish the apical-basal membrane border (21-24). The second complex, the CRB complex—composed of the transmembrane protein CRB and the cytoplasmic proteins PALS1 and the tight

junction protein PATJ—works to establish the apical membrane itself. Finally, the LLGL-DLG complex—composed of the scribble homologue (SCRIB), the lethal (2) giant larvae homologue (LLGL), and the discs large homologue (DLG)—creates the basolateral plasma domain (22-24). Epithelial stem cells orient their basal surface by first connecting integrin receptors to the extracellular matrix and then pushing out membrane filopodia to establish contact with neighboring cells, creating a cohesive layer of cells (25-26). This process starts with nectin-afadin adhesion complexes associating with PAR3. Next, E-cadherin and Junction Adhesion Molecule A (JAMA) travel to the cell cortex, creating clusters of puncta, or adhesion sites (27-29). Then, through the anchoring of the actin cytoskeleton combined with RHO-GTPase activity, the tight junction proteins extend the adhesion interface farther along the basolateral domain (28). Afterwards, the adherens junction proteins separate from the tight junction proteins, and the latter form the mature tight junctions. Next, the SCRIB-LLGL-

Figure 2: This brain tumor, depicted in the right cerebral hemisphere, is a result of proliferation of cells into a solid structure. This particular tumor was the result of metastasis from lung cancer, where the lung tumor developed in invasive migratory patterns that resulted in this second tumor in the brain. Source: Wikimedia Commons



DLG complex localizes the adherens junctions and then controls the expansion of the apical domain (29). LGL phosphorylation by aPKC and the phosphorylation of PAR3 and PAR1 prevent this interaction, however, and instead promote the separation of the lateral and subapical domains (29). Apical membranes contain microvilli, which in turn contain actin and spectrin filaments connected to the plasma membrane by radixin, ezrin, and moesin proteins (28). The tumor suppressor merlin (NF2) shares ancestry with the radixin, ezrin, and moesin proteins, and has also been shown to be involved in the creation of adherens junctions, the development of the epidermal base layer, and junctional stability (30-31). This means that detrimental changes to the normal state of the aforementioned proteins could lead to a loss of differentiation among cells, leading to enhanced and uncontrolled epithelial cell proliferation as well as the beginning of aggressive growth and movement of cells (32-34). The associated depolarization could lead to intracellular adhesion rupturing as well as hyperproliferation, through overexpression, downregulation, deletion, alternative splicing, and mislocalization (33-34).

Implications with Tumorigenesis
Studies show that many of the proteins that regulate apical-basolateral polarity are also tumor suppressors and proto-oncoproteins that crosstalk with signaling pathways known to control cell growth and proliferation (35). These proteins localize in close association with the cytoskeleton. As a result, the loss of such proteins is associated with irreversible changes in epithelial function and tissue architecture, and has been known to lead to genomic instability (36). Cell polarity mechanisms have also been implicated in the control of the orientation of cell divisions in epithelial stem cells, as the maintenance of most adult epithelial tissue relies on the presence of polarized stem cells, which self-renew through symmetric cell divisions (37). The differentiation of polarized stem cells occurs when they change the orientation of their mitotic spindles. This leads them to divide asymmetrically, creating the specialized cells that drive epithelial function and homeostasis. Interestingly, the genes that control epithelial cell polarity also control spindle orientation and the symmetry of divisions in stem cells (38). The SCRIB complex is well associated with tumor suppressors and has recently been named a potential tumor suppressor itself. However, it is often mislocalized because of the loss of the E-cadherin protein, which is needed for placing

the SCRIB complex at the apical-basolateral membrane. In the D. melanogaster model organism, mislocalization of the SCRIB complex homologue leads to the formation of a malignant, aggressively metastasizing tumor driven by the RAS oncoprotein (39). This process activates JUN N-terminal signaling, which exhibits membrane deterioration, loss of E-cadherin expression, tumor movement, and tumor metastasis. The JUN N-terminal signal then, instead of its general behavior of promoting apoptosis among aggressively proliferating cells, adopts the opposite behavior and starts to promote uncontrolled cell growth and tumorigenesis (39-40). Another path to stimulating JUN N-terminal signaling through the SCRIB complex comes through interaction with the beta-PIX–GIT1 complex, a guanine nucleotide exchange factor (GEF) for RAC1. This interaction activates GTPases through the release of guanosine diphosphate (GDP), which then facilitates the binding of guanosine triphosphate (GTP) (40-41). RAC1, activated through the GEF, then activates JUN N-terminal signals, whose function in this instance is positive, working as a mediator of cell death. In this case, the lack of expression of the SCRIB complex actually reduces the chances of promoting apoptosis and

Figure 3: A depiction of adherens junction proteins that are characteristic to the function of epithelial stem cells. Beginning with the separation from the tight junction proteins, these adherens junction proteins then help control the expansion of the apical domain. Source: Wikimedia Commons


thus further stimulates apoptosis (41). Two other components of the SCRIB complex, LGL and DLG, are also proven tumor suppressors that enhance invasiveness when they are altered (42). For instance, when the EMT transcription factor zinc finger E-box-binding homeobox (ZEB1) suppresses polarity protein expression in either LGL or DLG, there is a decrease in cell polarity and an induction of metastasis in many forms of cancer. LGL loss of function has also been linked to alternative splicing; the resulting alternative transcripts—different forms of LLGL1—have been strongly associated with hepatocellular carcinoma (41-42).

Figure 4: Diagram of a tight junction protein, which works with the adherens junction proteins in epithelial cell function. Source: Wikimedia Commons

Conclusion
While the SCRIB complex has been well connected to multiple stages of tumorigenesis, it is not the only protein complex of its kind to be associated with cancerous tumor growth. The CRB complex has been associated with cell polarity as well as with early tumorigenesis, along with the PAR complexes. Not only have cell polarity proteins been linked with tumor growth; historically cancer-associated mechanisms have also been linked with cell polarity. For instance, the Hippo pathway has historically comprised important tumor suppressors, and yet it is still connected to cell polarity through the promotion of tissue growth regulation, organ size limits, and, most importantly, tumor regeneration. Similarly, the LKB1 tumor suppressor has been linked with cell polarity in both early and later stages of tumorigenesis. In proving that the connection between tumorigenesis and cell polarity is undeniable, scientists have opened another line of investigation into the most notorious disease known to man—a problem proven to be not only a thousand miles wide, but a million miles deep. D

CONTACT NISHI JAIN AT NISHI.JAIN.21@DARTMOUTH.EDU References


Figure 5: The SCRIB complex is closely associated with tumor suppression and has recently been named a potential tumor suppressor. Most importantly, its mislocalization can lead to the loss of the E-cadherin protein, which then causes the formation of malignant and aggressively metastasizing tumors. Source: Wikimedia Commons



Identifying and Addressing Health Literacy Issues

BY SAMUEL REED ’19

Figure 1: At its base level, health literacy affects the quality of interactions between a patient and their doctor, and the quality of the patient’s care in general. However, evolving definitions of health literacy encompass much more. Source: Health.mil


Achieving health equity is a major concern in today’s health-related fields. While equity is the ideal, it has been hard to attain in populations with the lowest healthcare access. While poor financing and infrastructure are well-known barriers to providing healthcare to those populations, health literacy is often overlooked. Health literacy, a term used in healthcare research for the past 30 years, is a measurable outcome of health education and reflects the ability of doctors and patients to communicate (1). In the past, health literacy was narrowly defined as the capacity to “apply literacy skills to health-related materials,” such as doctor’s notes and prescriptions (2). However, this definition is evolving to encompass different levels of patient empowerment and understanding, beyond basic literacy abilities (2). While health literacy deficiencies are increasingly targeted to improve care in disadvantaged groups, greater efforts to reach those without access are being incorporated into all types of interventions. In particular, research methods have been developed that allow communities to have input into the interventions that target them. This article presents an overview of health literacy research methods and community-based research methods, discusses the development of those fields, and summarizes future plans to improve these research areas.

Methods of Evaluating Health Literacy

Many studies have demonstrated the value of advanced health literacy and have linked low health literacy to consequences including poor utilization of available health resources, heightened mortality rates, and difficulties with medical prescriptions and instructions (3). Despite these negative outcomes, 36% of adults in the United States have only ‘basic’ or ‘below basic’ levels of health literacy (4). In-depth research around health literacy is equally stunted, as health literacy is still a minor topic within the realm of health communications research. In order for this area of research to grow, consistent, accurate measurement tools are required. There are currently three well-tested, highly prevalent measurement tools used to assess health literacy in populations. One measure, the Rapid Estimate of Adult Literacy in Medicine (REALM), is a 66-item questionnaire that mainly tests comprehension of terminology as well as phonetic reading skills (5). Similarly, the Test of Functional Health Literacy in Adults (TOFHLA) tests reading comprehension by having participants fill in missing words on a medical form (5). The Newest Vital Sign (NVS) test, a briefer measurement tool in which participants dissect the contents of a nutrition

label, demands “reading, numeracy, and navigational ability” (5). Although the NVS consists of only 6 brief questions, it has been shown to be just as accurate as the REALM and the TOFHLA (5). A study conducted by Halverson et al. of the University of Wisconsin-Madison and the Medical College of Wisconsin used similar measurement tools to assess health literacy among cancer patients, and linked this information to health-related quality of life (HRQL) (3). The study used data from the Assessment of Cancer Care and Satisfaction (ACCESS) survey, a randomly sampled, cross-sectional survey conducted between 2006 and 2007 (3). It utilized the Functional Assessment of Cancer Therapy-General (FACT-G), a test designed to measure health-related quality of life specifically in cancer patients. This method has been effective cross-culturally, providing accurate results in populations ranging from the elderly to rural communities (3). ACCESS also drew four questions targeting health literacy from the REALM and from other tools. Ultimately, Halverson et al. found that low health literacy was highly correlated with poor health-related quality of life in cancer patients, and this relationship held across the physical, functional, emotional, and social subscales of FACT-G (3). This research highlights the importance of health literacy interventions and could hopefully lead to real change.
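Part of the NVS’s appeal is how mechanically simple its scoring is: the score is just the count of correct answers out of six, with commonly cited cutoffs of 0-1 (high likelihood of limited literacy), 2-3 (possible limited literacy), and 4-6 (adequate literacy). A minimal sketch, assuming those cutoffs (the function name and band labels below are illustrative, not part of the published instrument):

```python
def score_nvs(answers_correct):
    """Score the Newest Vital Sign: a simple count of correct answers.

    The NVS has exactly 6 questions; the interpretation bands below
    follow the commonly cited cutoffs (0-1, 2-3, 4-6).
    """
    if len(answers_correct) != 6:
        raise ValueError("The NVS has exactly 6 questions")
    score = sum(answers_correct)
    if score <= 1:
        band = "high likelihood of limited literacy"
    elif score <= 3:
        band = "possible limited literacy"
    else:
        band = "adequate literacy"
    return score, band
```

The brevity of this scoring is exactly why the NVS can be administered in a few minutes in a clinical waiting room.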

Reshaping Health Literacy

The effectiveness and accuracy of health literacy studies hinge not only on the tools used, but also on the way that health literacy is defined; the definition of health literacy shapes both its measurement and its interventions. Many early health literacy interventions, which treated health literacy as only the transmission of information, “failed to achieve substantial and sustainable results” because they did not account for social factors (2). Thus, in order for health literacy to be accurately assessed and properly addressed, it needs to be defined in a way that is inclusive of those social factors. A major effort to improve the definition, initiated by Nutbeam in 2000, regards health literacy in terms of what it “enables us to do” rather than just in terms of reading and writing ability (2). Nutbeam divides health literacy into three categories: basic literacy, communicative/interactive literacy, and critical literacy (2). Basic literacy constitutes the skills needed to operate in everyday health scenarios (2). At the other end of the scale, critical literacy is a more advanced way of thinking that allows people to utilize health information to gain more control over their lives (2). Interactive literacy is an intermediate stage concerned with deriving information

from health materials (2). Advances in health literacy research have already arisen in response to Nutbeam’s interpretation. In a 2015 study by Guzys et al., community-level characteristics of critical health literacy were evaluated using evolutionary concept analysis (6). Concept analyses are techniques often used in nursing science to narrow the ideas surrounding a concept down to an operational definition (7). Evolutionary concept analysis, more specifically, is a literature research methodology that emphasizes the ability of concepts to develop differently over time in different contexts (7). By involving policy makers and practitioners in the review process of community-level critical health literacy, Guzys et al. determined the factors that define this concept. Ultimately, these factors included not only health knowledge and skills, but also effective interactions between providers and different communities, as well as community-level political action and empowerment (6). In addition to changes in definitions, the tools used to measure health literacy have needed adjustments to measure critical health literacy. The aforementioned measures (the REALM, TOFHLA, and NVS) are quick and effective in measuring basic health literacy but do little to measure critical literacy or empowerment (6). Recently, a new assessment was developed to test health literacy more holistically, such that critical literacy and social determinants are accurately measured (6). To create such a measure, 91 items across 11 scales were generated to reflect a wide range of potential healthcare experiences (7). Cognitive testing either confirmed that these questions were received as intended or allowed researchers to reword them (7). The items were then tested against a calibration sample drawn from a range of populations (N = 634), and items found to be uninformative were discarded (7).
After testing the remaining questions in a replication sample (N = 405), a final set of 9 scales consisting of 44 items was created (7). The resulting measure, called the Health Literacy Questionnaire (HLQ), has been shown to measure critical and basic health literacy both for individuals and at the population level. The HLQ can also be used to assess the efficacy of a range of public health and clinical interventions aimed at health literacy (6).
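The winnowing step described above, in which uninformative items are discarded after a calibration sample, can be illustrated with a much simpler stand-in criterion than the psychometric models actually used in the HLQ’s development: keep an item only if its corrected item-total correlation clears a threshold. The function, threshold, and synthetic data below are purely illustrative:

```python
import numpy as np

def drop_uninformative_items(responses, min_item_rest_corr=0.3):
    """Keep item indices whose corrected item-total (item-rest)
    correlation meets a threshold.

    responses: array of shape (n_respondents, n_items).
    """
    kept = []
    for j in range(responses.shape[1]):
        item = responses[:, j]
        rest = responses.sum(axis=1) - item  # total score excluding this item
        r = np.corrcoef(item, rest)[0, 1]
        if r >= min_item_rest_corr:
            kept.append(j)
    return kept
```

On synthetic data where four items track a shared latent trait and a fifth is pure noise, this filter retains the four informative items and discards the noise item.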

“In order for health literacy to be accurately assessed and properly addressed, it needs to be defined in a way that is inclusive of social factors.”

“In addition to changes in definitions, the tools used to measure health literacy have needed adjustments to measure critical health literacy.”

A Tangent: Community-Based Research Methods

New health interventions and translational research programs are frequently developed today, many of them aimed at improving health literacy. However, ensuring their successful implementation and reach in a range

Figure 2: Participatory Rural Appraisal is an intensive process that fully utilizes community knowledge and effort to form the best action plan while simultaneously empowering the community. Source: Wikimedia Commons (Credit: APB-CMX)


of communities is no small feat. Often, disadvantaged communities suffer not only from low health literacy but also from low access to care and research programs (8). Community-Based Participatory Research (CBPR) offers one way of obtaining evidence grounded in the culture and practices of a community. CBPR is conducted with the mindset of engaging with community partners rather than studying the community as if it were an object (8). It also aims to avoid confusing language, or language that signals institutional dominance; this helps researchers avoid reawakening historical trauma from past abuses by powerful institutions (8). Finally, CBPR places emphasis on capacity building within partner communities in order to ensure that interventions are sustainable (8). In the last 20 years, community-based research has been increasingly grounded in Participatory Rural Appraisal (PRA) research methods. While CBPR originated from public health efforts, PRA stemmed from international development research (9). However, it has been incorporated into many studies in the United States (9). Like CBPR, PRA emphasizes a dynamic in which the locals are the ‘teachers’

and the researchers are the ‘learners’ (9). It also focuses on eliminating biases against individuals less likely to participate in the study and, crucially, empowers communities by giving all active roles in data collection and analysis to community members (9). This involves training community members to serve as the researchers themselves (9). The structured process of PRA is most easily understood when seen in action. In a 2016 study, PRA was used in an indigenous Mexican community to create a Community Action Plan (CAP) that used local resources to bolster environmental education (10). This was done to help locals respond to impending attempts at fracking and natural gas extraction in their region (10). The first step was to hold community workshops that mapped the region’s infrastructure, environmental resources, and the factors causing environmental harm, as seen through the eyes of the community. Locals made these maps with little to no input from the facilitator (10). In another session, locals constructed a sociogram, which charted the relationships between the groups identified in the first workshops. Again, this


was done with little help from facilitators (10). In further sessions, the community developed a “problem tree” of their main issues and helped propose solutions to those problems (10). The end result of this PRA application was a CAP which was successfully implemented (10).

Going Forward

Aside from creating more holistic scales for health literacy and using CBPR to bring research and interventions to communities in need, there are additional areas where health literacy work can be expanded and improved. The development of health literacy theories is particularly lacking (5). Most research so far has focused on measures of health literacy, and very little has put forth models of how health literacy is created (5). This may be attributed to the fact that the field of health literacy research is still quite young (5). The best tools for creating health literacy are also still debated (5). Further research is needed to determine which medium (a blog, social media, or a website) is most effective in improving health literacy (5).

Conclusion

There is an urgent need for research and new interventions around health literacy, as deficiencies in health literacy lead to negative health outcomes. However, these fields are still developing, and researchers must further investigate the best methods for measuring and improving health literacy. The movement from the metric of basic health literacy to the metric of critical health literacy is an important leap in this growth process. The rise of CBPR and PRA is another crucial development, because these methods improve the effectiveness not only of health literacy interventions but also of related interventions. At the same time, however, there are still improvements to be made. Due to the youth of the health literacy field, little theory has been developed around the formation of health literacy. Given the pace at which this research area has developed and grown within the field of health communications, progress is hopefully around the corner. D


CONTACT SAMUEL REED AT SAMUEL.R.REED.19@DARTMOUTH.EDU

References

1. Nutbeam, D., McGill, B., & Premkumar, P. (2017). Improving health literacy in community populations: a review of progress. Health Promotion International.
2. Nutbeam, D. (2000). Health literacy as a public health goal: a challenge for contemporary health education and communication strategies into the 21st century. Health Promotion International, 15(3), 259-267.
3. Halverson, J. L., Martinez-Donate, A. P., Palta, M., Leal, T., Lubner, S., Walsh, M. C., ... & Trentham-Dietz, A. (2015). Health literacy and health-related quality of life among a population-based sample of cancer patients. Journal of Health Communication, 20(11), 1320-1329.
4. Logan, R. A., & Siegel, E. R. (Eds.). (2017). Health Literacy: New Directions in Research, Theory and Practice (Vol. 240). IOS Press.
5. Aldoory, L. (2017). The status of health literacy research in health communication and opportunities for future scholarship. Health Communication, 32(2), 211-218.
6. Guzys, D., Kenny, A., Dickson-Swift, V., & Threlkeld, G. (2015). A critical review of population health literacy assessment. BMC Public Health, 15(1), 215.
7. Tofthagen, R., & Fagerstrøm, L. M. (2010). Rodgers’ evolutionary concept analysis: a valid method for developing knowledge in nursing science. Scandinavian Journal of Caring Sciences, 24, 21-31.
8. Wallerstein, N., & Duran, B. (2010). Community-based participatory research contributions to intervention research: the intersection of science and practice to improve health equity. American Journal of Public Health, 100(S1), S40-S46.
9. Williams, K. J., Gail Bray, P., Shapiro-Mendoza, C. K., Reisz, I., & Peranteau, J. (2009). Modeling the principles of community-based participatory research in a community health assessment conducted by a health foundation. Health Promotion Practice, 10(1), 67-75.
10. Solano, C., & Concepción, M. (2017). Participatory rural appraisal as an educational tool to empower sustainable community processes. Journal of Cleaner Production.

“Given the pace at which this research area has developed and grown within the field of health communications, progress is hopefully around the corner.”



Neuroimaging: A Brief History

BY SAHAJ SHAH ’21

Figure 1: Drawings of neural structure by Santiago Ramón y Cajal. Source: "The Beautiful Brain" by Cajal.


Introduction

Scientific progress is measured by innovations. Some continue to etch the surface of the unknown, one step at a time; others take a giant leap into the abyss, shedding light on information that revolutionizes the way we look at science and continues to inform research decades later. Santiago Ramón y Cajal’s work proved to be one of the latter, forever shaping the identity of neuroscience. Born in 1852 in Petilla de Aragón, a tiny village in Spain, Cajal is regarded by many as the father of modern neuroscience, the study of the structure and function of the brain (1). Driven by a passion for scribbling and observation, Cajal produced over twenty-nine hundred drawings of the human brain that transformed neuroscience into what we know and study today. His drawings concern different aspects of the human brain: its structure and anatomy, and how it communicates with the rest of the body. One of his most important contributions was the neuron doctrine, which holds that neurons, the building blocks that make up the brain, communicate with each other without touching (2). They communicate across gaps between neurons, called synaptic clefts. Cajal established this by examining thin slices of brain under a microscope using the newly devised Golgi stain, which could stain individual cells deep black. This finding was revolutionary in many ways. Mainly, it

challenged the prevailing notion that portrayed the brain as a single, continuous network, work for which he was awarded the Nobel Prize in Physiology or Medicine in 1906. This discovery shed new light on the identity of neurons as having unique structure and function. Cajal’s research proved to be a conflation of two tools: a powerful microscope and his “irresistible mania for scribbling” (3). Today, his precise and detailed drawings continue to describe and support newer concepts and findings, paving the way for what is now known as neuroimaging: experimental techniques that portray the brain, allowing us to study its structure and inner workings.

Development of X-Ray

Before the 1800s, physicians were extremely limited in their ability to gather information about an injury or illness, relying on the traditional methods of touch, sight, and sound to diagnose patients. However, with the turn of the twentieth century, several medical innovations made it possible for physicians to pinpoint the area of an injury and understand internal activity. It all started with the mysterious discovery of X-rays. Having assumed a position at the University of Würzburg after receiving his PhD, Wilhelm Roentgen was particularly interested in the study of light and the effects of passing electrical currents through a vacuum tube devoid of air. Recent discoveries had shown the presence of cathode rays, a beam

of electrons emitted from the negatively charged electrode, in so-called Crookes tubes: glass bulbs with oppositely charged electrodes at the two ends. When a high voltage was passed between the two electrodes, a fluorescent light was emitted. In 1895, Roentgen discovered that upon covering the Crookes tube with cardboard, a nearby fluorescent screen still glowed. He hypothesized that some unknown rays had the ability to pass through the cardboard and reach the distant screen, dubbing them X-rays. In further experiments, Roentgen confirmed a mysterious property: while the rays were absorbed by some solid materials, they could pass through others, creating a “shadow” on the other side. This proved true for bones, as evident in the X-ray image of his wife’s hand. “I did not think. I investigated... The conviction grew gradually that experiment is the strongest and most reliable lever with which we can wrest Nature’s secrets,” Roentgen later said (4,5). The discovery of X-rays sent ripples across several disciplines. Physicians began taking X-ray images of body parts to understand the human body and localize points of injury without having to operate, chemists began running experiments to understand properties of atoms and molecules, and physicists began to make sense of electromagnetic radiation and nuclear physics. The original X-ray tube was crude, but the discovery had made it possible to see what was previously invisible to the human eye. Neuroscientists began taking pictures of the brain to understand brain structure and innovating techniques to improve the quality of their imaging. One such scientist was Arthur Schuller, often referred to as the father of neuroradiology for his important contributions to skull radiography and the study of intracranial diseases, made with an X-ray apparatus at the back of his consulting room. One such technique, pneumoencephalography (PEG), was introduced in 1919.
The idea was to deliberately puncture the ventricles of the brain and inject them with air. The presence of gas proved to be an effective contrast agent, showing the underlying bones and tissues more clearly in X-ray images. “For the first time ever, we have a means of diagnosing internal hydrocephalus (buildup of fluid inside the brain) in the early stages...,” noted Dr. Walter E. Dandy, then a 32-year-old resident at Johns Hopkins Hospital (6,7). The technique was further refined, allowing cerebrospinal fluid evacuated by a syringe to be precisely replaced with equal amounts of air, paving the way for radiographic diagnoses of intracranial brain tumors. This important advance, however, came with several flaws. The process of puncturing holes was excruciatingly

Figure 2: One of the earliest photographic plates from Roentgen's experiments was a film of his wife Bertha's hand with a ring, produced on Friday, November 8, 1895. Source: Wikimedia Commons.

painful, and spinal and ventricular punctures came with a high risk of hemorrhage, often resulting in serious injury or even death. Slowly but surely, the building blocks of brain imaging were coming together. Crude techniques were constantly refined, blurry images were replaced by ones with better resolution, and diagnostic methods became increasingly effective. Brain imaging soon became the basis for diagnosing injuries and illnesses.

Development of Cross-Sectional Imaging

The invasive and indirect imaging afforded by X-rays, and the inability to look at softer brain tissues, remained major sources of frustration for the medical community. In 1961, William Oldendorf developed a prototype that built on the concept of X-rays by passing the rays through the head at different angles (8). Two years later, Alan Cormack, a physicist at Tufts University, independently developed a mathematical algorithm for the tomographic reconstruction of cross-sectional images from penetrating waves. A decade later, Godfrey Hounsfield, a computer engineer by profession, also developed a technique to reconstruct images of internal structures within the body from multiple X-ray transmissions taken at various angles. These three inventions culminated in what we now know as the computed tomography scan, commonly known as the CAT or CT scan (8). The CT scan became an important contribution in the coming years because it was able to image the softer tissues within the

Figure 3: Early X-ray images of the brain. Source: Wikipedia

body with great contrast. The cross-sectional, computer-processed images are a compilation of several X-ray images taken from different angles. A curved detector collects the X-ray beams passing through the head, and a “back projection” algorithm reconstructs a 2-D image from them. Hounsfield and Cormack were awarded the Nobel Prize in Physiology or Medicine for this important discovery. Soon, physicians were able to diagnose tumors within the brain with non-invasive techniques. While CT scans allowed scientists and physicians to examine internal images in great detail, they came with high doses of radiation and proved to be expensive. A single chest CT scan delivered radiation equivalent to about 400 posteroanterior chest films (9).

Figure 4: A CT scanner. Source: Teresa Winslow
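The “back projection” step just described can be sketched directly: each 1-D projection is smeared back across the image plane along the angle at which it was acquired, and the smears are summed. This is a deliberately minimal, unfiltered sketch (clinical CT reconstruction uses filtered back projection and interpolation; the function names, grid size, and nearest-bin rounding here are illustrative):

```python
import numpy as np

def forward_project(image, theta):
    """Sum image values into detector bins along direction theta (a toy Radon transform)."""
    size = image.shape[0]
    center = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size] - center
    t = xs * np.cos(theta) + ys * np.sin(theta) + center  # detector coordinate per pixel
    idx = np.clip(np.round(t).astype(int), 0, size - 1)
    proj = np.zeros(size)
    np.add.at(proj, idx, image)
    return proj

def back_project(projections, angles, size):
    """Unfiltered back projection: smear each projection across the image and accumulate."""
    recon = np.zeros((size, size))
    center = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size] - center
    for proj, theta in zip(projections, angles):
        t = xs * np.cos(theta) + ys * np.sin(theta) + center
        idx = np.clip(np.round(t).astype(int), 0, size - 1)
        recon += proj[idx]
    return recon / len(angles)
```

Projecting a phantom with a single bright point at many angles and then back-projecting recovers a blurred image whose peak sits at the original point, which is exactly why the unfiltered result needs a sharpening filter in practice.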

Development of MRI

The next important technique came with the invention of MRI (Magnetic Resonance Imaging). This contribution traces to the physicist I. I. Rabi, who extensively studied the magnetic properties of atomic nuclei. It builds on the principle that particles possess an intrinsic angular momentum, or spin, an idea that grew out of Bohr’s model of the atom, in which electrons orbit the nucleus in fixed circular orbits and can take on only fixed, discrete values of angular momentum. The famous Stern-Gerlach experiment demonstrated that when a thin beam of silver atoms was passed through a nonuniform magnetic field, it split into two beams. It was postulated that electrons possess an intrinsic magnetic moment, a vector quantity that determines the torque exerted on them by a magnetic field. Fascinated by the idea of the dipole moment of atoms, Rabi followed the experiment closely. His research culminated in a magnetic resonance method that gathered highly accurate values of the nuclear spins of atoms. This research translated into what we now know as Magnetic Resonance Imaging (MRI). The majority of our body is composed of water molecules. Each hydrogen nucleus in that water can be thought of as a bar magnet with a north and south pole, aligned randomly by default. Under a strong magnetic field (0.23 T), these nuclei tend to align themselves with or against the field, creating a net magnetic vector along the axis of the MRI scanner. In addition to the magnetic field, radio waves are emitted to disturb the field, causing the hydrogen nuclei to “flip” their spins. When the radio pulse is turned off, the nuclei “flip” back to equilibrium and, in the process, emit radio waves of their own. These radio waves vary with the density of the tissue. The resulting radio waves are detected by a receiver, which converts those signals into a detailed image of the human body. The invention was revolutionary and allowed us to look inside bones, ligaments, and tendons to pinpoint injuries.
This technique was further improved to give us functional MRI, or fMRI, which allowed neuroscientists to closely examine the structure of the brain, blood circulation, and internal brain activity with minimal risk (10).
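The “flip” and realignment described above happen at a sharply defined resonance (Larmor) frequency, f = (γ/2π)·B, where γ is the gyromagnetic ratio of the nucleus and B the field strength; for hydrogen, γ/2π ≈ 42.58 MHz/T. A quick sketch (the field strengths below are just examples):

```python
# Larmor resonance frequency for hydrogen nuclei: f = (gamma / 2*pi) * B
GAMMA_OVER_2PI_MHZ_PER_TESLA = 42.58  # gyromagnetic ratio of 1H, divided by 2*pi

def larmor_frequency_mhz(field_tesla):
    """Resonance frequency (MHz) of hydrogen nuclei at the given field strength."""
    return GAMMA_OVER_2PI_MHZ_PER_TESLA * field_tesla

# A 0.23 T magnet resonates protons near 9.8 MHz; a 1.5 T scanner near
# 63.9 MHz, which is why the excitation pulse is a radio wave.
```

This is why the excitation and the emitted signal fall squarely in the radio band, as the passage describes.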

New Innovations

In the time between the invention of X-rays and that of fMRI, thousands of scientific papers have been published across numerous disciplines, allowing us to better understand the structure and function of the human body, and specifically of the human brain, an area that was once fairly nebulous. Cajal’s discovery of the neuron as the functional unit of the brain gave the field of neuroscience a unique identity of its own with far-reaching capabilities. This identity was further evolved by imaging techniques that have extended our knowledge about human


diseases, helping us diagnose tumors, cancers, and neurodegenerative diseases. In recent years, modifications to existing techniques and the development of new ones have continually shaped the identity of neuroscience and allowed us to look inside the human brain in greater detail. A research article published in Science Translational Medicine in 2016 tracked changes in brain function responsible for the early progression of Alzheimer’s disease using tau and Aβ imaging techniques, bringing us closer to its cause (11). While a number of processes and functions still remain unclear, the constantly evolving nature of neuroimaging holds significant promise for the scientific field, furthering research and opening new avenues for diagnosis and treatment. D

CONTACT SAHAJ SHAH AT SAHAJ.S.SHAH.21@DARTMOUTH.EDU

References

1. Cajal, S. R., Newman, E. A., Araque, A., Dubinsky, J. M., Swanson, L. W., King, L., & Himmel, E. (2017). The beautiful brain: The drawings of Santiago Ramón y Cajal. New York, NY: Abrams.

2. Neuron doctrine. (n.d.). Retrieved from https://www.sciencedirect.com/topics/neuroscience/neuron-doctrine
3. Smith, R. (2018, January 18). A Deep Dive Into the Brain, Hand-Drawn by the Father of Neuroscience. Retrieved from https://www.nytimes.com/2018/01/18/arts/design/brain-neuroscience-santiago-ramon-y-cajal-grey-gallery.html
4. American Physical Society. (2018). November 8, 1895: Roentgen's Discovery of X-Rays. [online]
5. Thomas, A., & Banerjee, A. K. (2013). The history of radiology. Oxford: Oxford University Press.
6. Pneumoencephalography. Wikipedia. https://en.wikipedia.org/wiki/Pneumoencephalography. Published June 17, 2018. Accessed September 21, 2018.
7. Tondreau, R. L. (1985). Ventriculography and pneumoencephalography: Contributions of Dr. Walter E. Dandy. RadioGraphics, 5(4), 553-555. doi:10.1148/radiographics.5.4.553
8. Mishra, S. K., & Singh, P. (2010). History of Neuroimaging: The Legacy of William Oldendorf. Journal of Child Neurology, 25(4), 508-517. doi:10.1177/0883073809359083
9. Limitations and pitfalls of computed tomography in the evaluation of craniocerebral injury. (1979). Journal of Computed Tomography, 3(3), 242. doi:10.1016/s0149-936x(79)80022-3
10. Berger, A. (2002). Magnetic resonance imaging. BMJ: British Medical Journal, 324(7328), 35. (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1121941/)
11. Underwood, Emily. "Tau Protein, Not Amyloid, May Be Key Driver of Alzheimer's Symptoms." Science | AAAS, American Association for the Advancement of Science, www.sciencemag.org/news/2016/05/tau-protein-not-amyloid-may-be-key-driver-alzheimer-s-symptoms.

Figure 5: Tau and Aβ images of the brain showing the progression of Alzheimer's disease. Source: Science




Using the Mathematical Identity

BY MEGAN ZHOU '21

Figure 1: The discrete cosine transform, visualized. This 8×8 matrix pushes high-intensity values to the upper left so that the insignificant data can be revalued at zero to save space in the JPEG algorithm. Source: Wikimedia Commons

“As math that is prevalent in most sciences and engineering, linear algebra is used most for modeling and computing natural phenomena as well as approximations for better understanding complicated functions.”


The Mathematical Identity

1 + 1 = 2. This equality is perhaps one of the first ideas that comes to mind when considering basic math. But this equality has a special relation to the general mathematical identity A = B, where A and B define the same functions (1). This all-encompassing idea includes the classic algebraic identities – the additive (A + 0 = A) and the multiplicative (A * 1 = A) – but also trigonometric, exponential, and logarithmic identities. Identities are used within many smaller mathematical disciplines; linear algebra, in particular, relies on the identity matrix to solve many of its problems.
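What distinguishes an identity from an ordinary equation is that it holds for every input, which is easy to spot-check numerically. A quick sketch (the test values are arbitrary):

```python
import math

# Spot-check a few identities A = B: each holds for every value tested.
for a in [-2.5, 0.0, 1.0, 3.7]:
    assert a + 0 == a   # additive identity
    assert a * 1 == a   # multiplicative identity
    # Pythagorean trigonometric identity: sin^2(x) + cos^2(x) = 1
    assert math.isclose(math.sin(a) ** 2 + math.cos(a) ** 2, 1.0)

print("all identities hold")
```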

Linear Algebra and Real Life

Linear algebra refutes the common complaint that everyday life does not use math. As math that is prevalent in most sciences and engineering, linear algebra is used for modeling and computing natural phenomena as well as for approximating complicated functions to better understand them. When considering everyday life, two key examples stand out: JPEG compression and the Google PageRank algorithm. In short, the JPEG compression algorithm minimizes the amount of data it takes to store a photo. Without compression, any picture uploaded to Instagram or Pinterest would take up a large amount of storage on any device. Storage, as the massively sized first-generation computers made evident, is precious. Minimizing storage is fundamental for consumers to achieve the fast and powerful service that is now expected from the latest technologies. Essentially, given the problem that photos need to maintain high quality without taking up too much space, linear algebra

is necessary for JPEG image compression. Beyond image compression, another easily accessible example appeals even more to daily life. The Google PageRank algorithm is run monthly to calculate the importance of any given webpage, which is used to generate the list of links on a user's screen after a Google search. Daily consumption of the internet can be viewed quite simply as gears working behind one's Google searches. For instance, consider a search for "Dartmouth College." The query generates over thirty-eight million results in under one second. Linear algebra fuels this rapid response through an identity matrix-based solution to a billion-dollar eigenvalue problem known around the world, quite simply, as "the internet." This article does not intend to completely explain how JPEG compression and the Google PageRank algorithm work. The purpose is to give a basic snapshot of the ways linear algebra can be applied in real life, evident in these complex processes that are so fundamental to our everyday lives.

JPEG Compression: A Lossy Algorithm

To minimize the space used by programs to display graphics, lossy compression algorithms exploit the fact that human sight is limited (2). Specifically, since the human eye cannot detect certain small decreases in quality, algorithms like JPEG compression allow slight modifications, and thus loss of information, that do not actually affect the way someone views an image. The procedure of compression and then decompression by JPEG in particular can shrink images with a continuous tone to under 10% of their original size (2)! Without visible degradation of image quality, this algorithm allows us to save precious disk storage.

JPEG Compression: Orthogonal Matrices and the Identity Matrix

Prior to compression and the lossy aspect of JPEG compression, a change of basis is needed to write the given image M (a matrix of size n×n) as a linear combination (3). A linear combination is the result of combining vectors (organized as the vertical columns of the matrix) using scalar multiplication with weights and vector addition. The set of all vectors possible from such summations forms the span of the basis, and the change of basis allows for easier analysis of the given image M. This applies to storing images because it creates an easier point of comparison than the way photos are initially organized. This is important because the change of basis allows setting many coefficients to zero if they are deemed small enough (3). The change of basis for JPEG is the discrete cosine transform (DCT), and this is the key step. To switch out of the standard basis and into any other basis, the program must compute an inverse. Inverses can be found quickly for orthonormal bases, but must otherwise be solved for with the help of the identity matrix (for instance, by row-reducing the matrix against it). Visually, the identity matrix is just a square matrix with the number 1 on the main diagonal and the number 0 elsewhere. By definition, an identity matrix has the property that multiplying it by any n×m matrix A simply results in the matrix A. In other words, the identity matrix I of size n×n satisfies I_n A = A; similarly, the identity matrix of size m×m makes A I_m = A a true statement.
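The two defining properties, I_n A = A and A I_m = A, together with the shortcut for orthonormal bases (the inverse is just the transpose), can be checked directly with NumPy. This is an illustrative sketch; the matrices are randomly generated, not taken from any image:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # arbitrary 3x4 matrix

I3, I4 = np.eye(3), np.eye(4)     # identity matrices
assert np.allclose(I3 @ A, A)     # I_n A = A
assert np.allclose(A @ I4, A)     # A I_m = A

# For a matrix Q with orthonormal columns, the inverse is just the
# transpose: Q.T @ Q = I, so no costly inversion is needed.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
assert np.allclose(Q.T @ Q, np.eye(4))
assert np.allclose(np.linalg.inv(Q), Q.T)

print("identity-matrix properties verified")
```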

JPEG Compression: General Process

Preprocessing. Before the DCT can be used, preprocessing simplifies the later steps. Specifically, images are divided into 8×8 matrices, each containing 64 pixels. Each pixel's intensity is stored as a single byte, with values ranging from 0 to 255. As such, 127 is subtracted from each pixel intensity, which centers the intensities around zero (3).

Transformation. The DCT determines the parts of an image that can be discarded without cost to its quality. At this point there is no loss of information and no compression achieved; this is crucial preparation for the lossy coefficient quantization stage. The discrete cosine transform is based on U, the 8×8 DCT matrix, which computes the transformation. It works by pushing most of the high-intensity information (larger values) to the upper left of the matrix while the rest become small values (4).
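The 8×8 DCT matrix U can be written down explicitly, and its usefulness hinges on the identity-matrix property discussed earlier: U is orthogonal, so U times its transpose is the identity and the change of basis is trivially invertible. A sketch (the constant test block is a made-up example, not real image data):

```python
import numpy as np

N = 8
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
# Orthonormal DCT-II matrix: row 0 is constant, later rows are cosines
# of increasing frequency.
U = np.sqrt(2.0 / N) * np.cos((2 * k + 1) * j * np.pi / (2 * N))
U[0, :] = np.sqrt(1.0 / N)

# U is orthogonal, so its inverse is its transpose: U @ U.T = I.
assert np.allclose(U @ U.T, np.eye(N))

# Applying the 2-D DCT to a flat (constant) block concentrates all the
# energy in the top-left "DC" coefficient, as described above.
block = np.full((N, N), 52.0) - 127     # centered pixel intensities
coeffs = U @ block @ U.T
print(round(float(coeffs[0, 0]), 1))    # -600.0; every other entry is ~0
```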

Quantization. Quantization uses the information determined by the DCT to reduce certain areas of the matrix to zero (3). Essentially, the effect of high-frequency areas on the decompressed image is eliminated: the insignificant data is discarded (revalued at zero), and thus the image information is compressed. The zeroed values were the elements that contributed least to the graphical image, and because the human eye's function is limited, high precision in their values is not necessary (3). This is the lossy aspect of the algorithm, as converting small values to zero and rounding all quantized values are not reversible steps; after this step, the original image can no longer be recovered (4). Lastly, reversing the process inverts the DCT and effectively reconstructs the image from the significant data. Enough entries were kept that the new image looks essentially like the original to the human eye, despite taking up much less space with far fewer entries.
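A toy version of the quantization step might look like the following. The divisor matrix here is hypothetical; real JPEG encoders use standardized quantization tables, but any table that grows toward the high-frequency corner shows the same effect of zeroing small coefficients:

```python
import numpy as np

# Hypothetical quantization table: divisors grow toward the
# bottom-right, penalizing high-frequency coefficients more.
Q = 10 * (1 + np.add.outer(np.arange(8), np.arange(8)))  # values 10..150

coeffs = np.zeros((8, 8))                     # made-up DCT coefficients
coeffs[0, 0], coeffs[0, 1], coeffs[7, 7] = -415.0, 22.0, 3.0

quantized = np.round(coeffs / Q)              # the lossy, irreversible step
print(int(np.count_nonzero(quantized)))       # 2: only large low-freq terms survive

# Decompression multiplies back, but the rounded-away detail is gone:
restored = quantized * Q
```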

“The change of basis for JPEG is the discrete cosine transform, and this is the key step. In order to switch out of the standard basis and into any other basis, the program must compute the inverse.”

Introduction to the Google PageRank Algorithm

Just as the public trusts JPEG images to be the 'right' photo, the general public also trusts Google to find what they are looking for. Whether directly searching for a "chicken noodle soup recipe" or some vague description of a strange thing seen somewhere, the general expectation is that Google will come up with what one is looking for. This behind-the-scenes process has become so credible that modern generations have made it a verb in colloquial speech, often telling others to 'just google it.' As a search engine, Google must index web pages, match search criteria, and rank the importance of pages. Yet, according to Google, it indexes about 130 trillion pages, so finding the actually relevant ones to display first gets extremely complicated (5). The ranking of page importance includes over two hundred factors that look at "the freshness of the results, quality of the website, age of the domain, safety and appropriateness of the content, and user context like location, prior searches, Google+ history and connections, and much more" (5). It is all the more impressive considering Google's algorithms for spelling, autocompletion, and more, which automatically begin their work as you type into the search box. Clearly, Google's PageRank algorithm is not only secretive but also incredibly intricate. The following is a basic introduction to how directed graphs can be used to create importance vectors and ultimately rank the pages of the World Wide Web.

“The ranking of page importance includes over two hundred factors that look at 'the freshness of the results, quality of the website, age of the domain, safety and appropriateness of the content, and user context like location, prior searches, Google+ history and connections, and much more.'”


Figure 2: Markov chain depicting the mood of a person. This is an example of transferring between states given certain probabilities.

In general, solving for eigenvalues means finding the values of λ that satisfy the characteristic equation det(P - λI) = 0, where I is the identity matrix. Simply, the matrix P - λI is just P with λ subtracted from each entry on the main diagonal. The rest of this linear algebra problem reduces to simplifying a polynomial which, when set to zero, yields the eigenvalues and, from them, the eigenvectors.
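For a small matrix this computation can be carried out by hand and checked against a numerical solver. In this sketch the 2×2 stochastic matrix is an arbitrary example, not one drawn from the article:

```python
import numpy as np

# For P = [[0.7, 0.2], [0.3, 0.8]], the characteristic equation
# det(P - x*I) = (0.7 - x)(0.8 - x) - (0.2)(0.3)
#              = x**2 - 1.5x + 0.5 = (x - 1)(x - 0.5)
# gives eigenvalues 1 and 0.5; a stochastic matrix always has 1.
P = np.array([[0.7, 0.2],
              [0.3, 0.8]])

vals = sorted(round(float(v), 10) for v in np.linalg.eigvals(P))
print(vals)  # [0.5, 1.0]
```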

Source: mathcs.emory.edu

Mathematical Model for Importance: Markov Chains and Directed Graphs

“While one may not need to always think about how Google can output our results at such high speeds every time we use such useful technology, it is important to remember the mathematical foundations every once in a while.”

Probability vectors have non-negative entries that add up to one, and a square matrix whose columns are probability vectors is a stochastic matrix (6). Markov chains describe a sequence of experiments: a sequence of probability vectors x_0, x_1, x_2, ... together with a stochastic matrix P such that x_{k+1} = P x_k for k = 0, 1, 2, 3, and so on. Markov chains are visualized as directed graphs, whose vertices represent the states and whose arrows point from one state to another. Essentially, a Markov chain describes the probability of "hopping" from one state to any other state (7). An example could show the emotional states of a person (see Figure 2). Here, when a person is sad, there is a 0.70 probability of staying sad and a 0.30 probability of transitioning to a so-so feeling, with no probability of becoming cheerful directly. In contrast, if one is cheerful, there is a chance of becoming sad (0.20 probability), with only a 0.60 probability of staying cheerful. One can create a stochastic matrix corresponding to this Markov chain, and solving for its steady-state vector becomes an eigenvalue problem (6).
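The hopping process can be simulated directly. In the sketch below, the sad and cheerful transition probabilities follow the figure, while the so-so column is a hypothetical fill-in, since those probabilities are not quoted in the text:

```python
import numpy as np

# Stochastic matrix for the mood chain. Columns are the "from" states
# (sad, so-so, cheerful); each column sums to 1. The so-so column is
# an assumed example, not taken from Figure 2.
P = np.array([[0.7, 0.2, 0.2],   # -> sad
              [0.3, 0.5, 0.2],   # -> so-so
              [0.0, 0.3, 0.6]])  # -> cheerful

x = np.array([1.0, 0.0, 0.0])    # start certainly sad
for _ in range(200):             # iterate x_{k+1} = P x_k
    x = P @ x

# The chain settles into a steady-state vector q with P q = q:
assert np.allclose(P @ x, x)
print(np.round(x, 3))
```

No matter which mood the chain starts in, repeated application of P converges to the same steady-state distribution.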

Eigenvectors and Eigenvalues

Figure 3: Social media icons. Of the famous social media platforms pictured above, only Twitter maintains the esteemed PageRank value of 10; Facebook and YouTube come close behind, valued at 9. Source: Wikimedia Commons


In order to find the steady-state vector q of a stochastic matrix P, one solves P q = q. Clearly, q must be an eigenvector of P with an eigenvalue of one (λ = 1), and every stochastic matrix has such a vector q (6). An eigenvector of a linear transformation is a non-zero vector that changes only by a scalar factor when that transformation is applied to it; that scalar factor is the eigenvalue. The eigenvalue problem is rooted in linear algebra and utilizes the identity matrix to solve this special case. In general, one must find the values of λ that satisfy the characteristic equation det(P - λI) = 0, described above.

Now, the importance of a website depends on the number of pages that link to it and on the importance of those pages; the matrix involved is again a stochastic matrix, and finding the importance vector relies on this same idea of solving for an eigenvector from a corresponding eigenvalue. The stochastic hyperlink matrix H has entries H_ij equal to 0 if page p_j does not link to page p_i, and 1/n_j if it does, where n_j is the number of links going out from p_j. Given so many webpages, this matrix is notably huge and consists mostly of zeros, making it a sparse matrix. The importance vector I is found by solving for the eigenvector corresponding to the eigenvalue λ = 1, since H I = I. Additionally, the Google matrix G needs to guarantee that all web pages have a positive ranking in order to ensure that the importance vector is unique and that G is regular (6). With G regular, the sequence converges to the importance vector even if the iteration starts from any vector I_0. Google's PageRank ranges between 0 and 10.
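Putting the pieces together, a miniature PageRank on a hypothetical four-page web might look like this (the link structure and the 0.85 damping factor are illustrative assumptions, not Google's actual data):

```python
import numpy as np

# Hypothetical link structure: page -> pages it links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = 4

# Column j of H holds 1/n_j for each page that page j links to.
H = np.zeros((n, n))
for j, targets in links.items():
    for i in targets:
        H[i, j] = 1.0 / len(targets)

# The Google matrix G mixes in a small uniform jump so every page gets
# a positive rank and the iteration converges from any starting vector.
d = 0.85
G = d * H + (1 - d) / n * np.ones((n, n))

I_vec = np.full(n, 1.0 / n)       # start from any probability vector
for _ in range(500):              # power iteration: I <- G I
    I_vec = G @ I_vec

assert np.allclose(G @ I_vec, I_vec)   # steady importance vector
print(np.argsort(-I_vec))              # pages from most to least important
```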

Conclusion: Utilizing the Identity

Every day, the mathematical identity underlies many activities that the public generally takes for granted. While one need not think about how Google outputs results at such high speed every time one uses such technology, it is important to remember the mathematical foundations every once in a while. The mathematical identity, and the identity matrix in particular, can be implemented to solve a myriad of problems, and the laws of mathematics truly govern the universe. D

CONTACT MEGAN ZHOU AT MEGAN.ZHOU.21@DARTMOUTH.EDU

References
1. https://www.wikiwand.com/en/Identity_(mathematics)
2. https://cs.stanford.edu/people/eroberts/courses/soco/projects/data-compression/lossy/jpeg/index.htm
3. Trout, Jody. (2016). JPEG Image Compression [PowerPoint slides].
4. http://www.whydomath.org/node/wavlets/basicjpg.html
5. https://venturebeat.com/2013/03/01/how-google-searches-30-trillion-web-pages-100-billion-times-a-month/
6. Trout, Jody. (2018). Markov Chains and Google's PageRank Algorithm or The $25,000,000,000.00 Eigenvector [PowerPoint slides].
7. http://setosa.io/ev/markov-chains/



A Predictive Approach to Social Psychology: Using Machine Learning to Predict the Five Factor Personality Traits BY ARMIN TAVAKKOLI '20 AND JESSICA KOBSA '20

Introduction

In a 2017 paper, Yarkoni and Westfall propose that the prevailing paradigm in psychology focuses on finding statistical models that best explain behavior, under the assumption that the model that explains the behavior best will also predict it best. This underlying assumption has informed a wide range of literature aiming to uncover causal determinants of behavior, mediating variables, and moderating variables. However, previous research has demonstrated that the most correctly specified model is not always the most successful at prediction (Shmueli, 2010; Hagerty & Srinivasan, 1990; Wu, Harris, & Mcauley, 2007). Yarkoni and Westfall point out that overfitting is a phenomenon that can cause a model to show high explanatory power but low predictive power. Overfitting is especially relevant to psychology because it is more likely to occur with low sample size, low effect size, or a high number of predictors, all of which are common characteristics of many psychological studies (Yarkoni & Westfall, 2017). Yarkoni and Westfall propose that there is value

in adopting a paradigm that helps psychology become a more predictive science rather than a purely explanatory one. Today, this goal is very much within reach using machine learning techniques, in which the value of a predictive model lies in its ability to predict unobserved data (Yarkoni & Westfall, 2017). In this study, we aimed to explore Yarkoni and Westfall's claim by applying a predictive, machine learning approach to a well-studied psychological construct. An extensive literature exists describing associations between the "Big Five" personality traits of the Five Factor Model (FFM) and a wide variety of outcomes. The FFM was an ideal psychological construct to evaluate in this way for two reasons. First, the conditions in which overfitting is common (low sample size, low effect size, and a large number of variables) are typical of studies evaluating associations between Big Five personality traits and various outcome variables. Second, the vast majority of studies in the relevant literature employ designs making use of correlations, multiple regression, mediation, moderation, and measures of

Tables 1-6, from top to bottom, left column then right: Respectively, they show samples of findings on correlates of extraversion, agreeableness, conscientiousness, openness to experience, neuroticism, and the Big Five personality traits overall. Source for all tables and figures: Armin Tavakkoli, Jessica Kobsa.


model fit, all of which are vulnerable to overfitting. Therefore, the objectives of this study were (1) to test an approach focused on prediction using machine learning techniques, following Yarkoni and Westfall's suggestion, and (2) to test the replicability of previous studies on the associations between Big Five personality traits and various outcomes.

Previous FFM Research

The FFM of personality includes extraversion, conscientiousness, agreeableness, openness to experience, and neuroticism, together called the Big Five traits. Since the latter half of the twentieth century, an extensive literature has emerged using these Big Five personality traits to predict a wide variety of outcomes. A sampling of such studies and their findings is presented in the tables on the previous page. In order to accomplish the second objective of testing the replicability of previous studies, a new dataset was collected that included measures of the variables described above along with measures of the Big Five personality traits.

Methods

Data Collection. A Qualtrics survey was distributed through Amazon Mechanical Turk to 142 participants (70 female, 71 male, 1 other; 15-69 years old) who completed an assessment of their personality based on the Big Five criteria. Amazon Mechanical Turk is a marketplace for tasks requiring human intelligence and has been shown to be a valuable tool for rapid data collection in psychological studies (Buhrmester et al., 2011; Crump et al., 2013). Following the personality assessment, participants answered 17 additional questions about their demographics. This data was recorded, cleaned, and analyzed. A detailed account of each step of data collection is provided below.

Amazon Mechanical Turk Participants. One Human Intelligence Task (HIT) was created with a protected link to a Qualtrics survey. The task appeared to workers as "a survey about yourself" and informed them that they would be asked 137 short questions about themselves, taking approximately 15 minutes to complete. The HIT was made available only to workers in the United States, to control for cross-cultural variation. Additionally, to ensure high-quality data, the HIT was available only to "Masters," a qualification given to high-performing workers. After accepting the HIT, workers were brought to a page where they were assured their responses would remain confidential and anonymous. Workers who consented were directed to begin the task by entering a password provided on this page and clicking a link to the Qualtrics survey.

Qualtrics Survey Data Collection. The survey itself contained 18 questions. The first question asked participants to navigate to an external link containing a 120-question version of the IPIP-NEO developed by Johnson et al. (2014).
This short version has been shown to perform comparably to the original 300-item IPIP-NEO developed by Goldberg, and its shorter length makes it more feasible to deploy for data collection. After completing this set of questions, the IPIP-NEO automatically provided workers with a set of percentages describing their unique combination of the Big Five personality traits: extraversion, agreeableness, conscientiousness, openness to experience, and neuroticism. Workers were then asked to enter these percentages into prepopulated response fields

in the Qualtrics survey. Next, workers entered the second part of the survey, consisting of 17 questions presented in randomized order. These questions asked for the participants' demographics, including sex, age, relationship status, employment status, salary, age at first marriage, and hours of work per week, as well as some personal information, such as the number of jobs held in the past year, number of past sexual partners, number of close friends, and number of medical visits in the past year. Participants were also asked to rate their overall job satisfaction and their stress level in the past week, each on a scale from 0 (considering quitting / not stressed) to 100 (completely satisfied / extremely stressed). As a second measure to ensure the quality of our data, two checks for response quality, or "sanity checks," were also included among the questions. The last question asked for workers' Amazon Mechanical Turk ID to catch repeat responses. No personally identifying information was collected, and all responses were collected anonymously.

Compensation for Participation. A first batch of this HIT with 300 assignments was loaded onto Amazon Mechanical Turk, to which 30 unique workers responded, each receiving $0.50 in payment. Due to a low response rate, this batch was cancelled and a second batch of an identical HIT, with payment increased to $1.00, was loaded onto Amazon Mechanical Turk with 170 assignments. At the time of data collection, 112 unique workers had responded to this HIT, and no repeat responses were recorded. All respondents answered the same 137 questions, and the average time to complete the survey was 18.5 minutes. Thus, a total of 142 participants were recorded.

Data Analysis Methodology. In total, 142 responses were collected. After removal of responses from participants who did not follow survey directions correctly or failed one or both sanity checks, data analysis was conducted on 134 observations.
The data was uploaded to a Python notebook, where the analysis was done. Given the large magnitude of the salary values, we standardized them into z-scores to avoid masking other significant coefficients. To take a predictive perspective, we employed a cross-validation methodology based on a machine learning approach: we develop a model within the subset of our data that composes the training set, and then test the out-of-sample accuracy of that model on a test set to assure the construct validity of our methodology. Hence, the data was split into a training set consisting of 70% of the data and a test set consisting of 30% of the data. The following analyses were conducted on the training set. For each of the five personality traits, a full multiple linear regression model was run with the social, demographic, and psychological variables collected in the second part of the survey as inputs, and the personality trait of interest as the output. The results of each of these "full OLS models" identified outcome variables that the multiple regression found to significantly predict each personality trait. Next, each model was cross-validated using a LASSO approach that selected variables from the full OLS model. Hence, for each of the personality traits, a "LASSO-based" or "reduced" multiple regression model was developed that included only the outcome variables selected by LASSO. It is important to note that the "reduced" nature of the LASSO-based model reflects the fact that LASSO selects a subset of all the variables; the variables it selects do not necessarily coincide with those the full OLS model found significant.
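The two-stage procedure, LASSO for variable selection followed by a reduced OLS refit, can be sketched as follows. The data here is synthetic (the survey responses are not reproduced), and the LASSO is implemented with plain iterative soft-thresholding rather than the notebook's original library calls:

```python
import numpy as np

# Synthetic stand-in for the survey data: 8 candidate predictors, of
# which only the first two actually drive the outcome.
rng = np.random.default_rng(42)
n, p = 140, 8
X = rng.standard_normal((n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.standard_normal(n)

split = int(0.7 * n)                              # 70/30 train/test split
Xtr, ytr, Xte, yte = X[:split], y[:split], X[split:], y[split:]

def lasso_ista(X, y, lam, steps=3000):
    """Minimize ||y - Xb||^2 / (2m) + lam * ||b||_1 by soft-thresholding."""
    m = len(y)
    step = 1.0 / np.linalg.eigvalsh(X.T @ X / m).max()
    b = np.zeros(X.shape[1])
    for _ in range(steps):
        g = b - step * X.T @ (X @ b - y) / m      # gradient step
        b = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return b

b_lasso = lasso_ista(Xtr, ytr, lam=0.5)
selected = np.flatnonzero(b_lasso)                # variables the LASSO kept

# "Reduced" OLS model refit on only the selected variables, scored on
# the held-out 30% to get an out-of-sample error:
b_ols, *_ = np.linalg.lstsq(Xtr[:, selected], ytr, rcond=None)
oos_error = np.mean((yte - Xte[:, selected] @ b_ols) ** 2)
print(sorted(selected.tolist()))                  # indices LASSO retained
```

The penalty strength (here lam = 0.5) trades sparsity against fit; the paper's actual tuning choice is not described here.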

Results

Since the aim of our paper is to evaluate the replicability of previous literature on correlates of each personality trait, as well as to take a predictive machine learning approach to determine the accuracy of each model, it is appropriate to present the analysis of each of the five personality traits individually.

1. Extraversion: As can be seen in Table 1, the full OLS model for extraversion found that three variables significantly correlated with extraversion: stress level, t(14) = -2.683, p < 0.05; number of jobs in the past year, t(14) = 2.185, p < 0.05; and number of friends, t(14) = 2.547, p < 0.05. In comparison, LASSO selected six variables that were included in the reduced OLS model: age, t(6) = 0.280, p > 0.05; hours of work, t(6) = 0.968, p > 0.05; job satisfaction, t(6) = 2.005, p < 0.05; number of sexual partners, t(6) = 2.052, p < 0.05; stress level, t(6) = -2.881, p < 0.05; and number of friends, t(6) = 3.353, p < 0.05. An ANOVA between the two models shows no significant difference, F(86, 78) = 1.087, p > 0.05. Among the variables selected by LASSO, higher extraversion correlates with older age, more hours of work, higher job satisfaction, more sexual partners, and more close friends; it also correlates with a lower stress level. Representative examples of these relationships are presented in Figure 1. Lastly, as can be seen in Table 2, the reduced LASSO-based OLS model showed an out-of-sample error of 28.13, lower than the 34.76 out-of-sample error of the full OLS model. Additionally, the reduced model has an in-sample error of 34.75, higher than the 21.78 error of the full OLS model.

2. Agreeableness: As can be seen in Table 3, the full OLS model for agreeableness found that two variables significantly correlated with agreeableness: employment status, t(14) = -2.261, p < 0.05, and relationship status, t(14) = -2.413, p < 0.05.
In comparison, LASSO selected three variables that were included in the reduced OLS model: age, t(3) = 0.159, p > 0.05; hours of work, t(3) = 0.381, p > 0.05; and job satisfaction, t(3) = 1.392, p > 0.05. An ANOVA between the two models shows a significant difference, F(89, 78) = 2.034, p < 0.05. Among the variables selected by LASSO, higher agreeableness correlates with older age, more hours of work, and higher job satisfaction. Among the variables the full OLS model found significant, higher agreeableness correlates with being unemployed and single. Representative examples of these relationships are presented in Figure 2. Lastly, as can be seen in Table 4, the reduced LASSO-based OLS model showed an out-of-sample error of 31.02, higher than the 29.58 out-of-sample error of the full OLS model. Additionally, the reduced model has an in-sample error of 29.58, higher than the 27.413 error of the full OLS model.

Figure 2: Representative scatterplots of job satisfaction and age against agreeableness.

Table 1 (top): Extraversion OLS Models. The full OLS model included all outcome variables and found stress level, number of jobs held in the past year, and number of close friends to be significant at a significance level of p < 0.05. The LASSO-based OLS model included only age, hours of work, job satisfaction, number of sexual partners, stress level, and number of close friends, and found only job satisfaction, number of sexual partners, stress level, and number of friends to be significant.

Figure 1 (middle): Representative scatterplots of stress level and job satisfaction against extraversion.

Table 2 (bottom): Comparison of Extraversion Prediction Models. The LASSO-based OLS model had lower error and thus performed better than the full OLS model on out-of-sample data.

3. Conscientiousness: As can be seen in Table 5, the full OLS model for conscientiousness found that two variables significantly correlated with conscientiousness: stress level, t(14) = -2.175, p < 0.05, and medical visits in the past year, t(14) = -2.280, p < 0.05. In comparison, LASSO selected seven variables that were included in the reduced OLS model: age, t(7) = 0.173, p > 0.05; hours of work, t(7) = 2.051, p < 0.05; age at first marriage, t(7) = -0.794, p > 0.05; job satisfaction, t(7) = 1.797, p > 0.05; number of sexual partners, t(7) = 0.171, p > 0.05; medical visits in the past year, t(7) = -2.225, p < 0.05; and stress level, t(7) = -2.027, p < 0.05. An ANOVA between the two models shows no significant difference, F(85, 78) = 1.094, p > 0.05. Among the variables selected by LASSO, higher conscientiousness correlates with older age, more hours of work, higher job satisfaction, and more sexual partners; it also correlates with a lower age at first marriage, fewer medical visits, and a lower stress level. Representative examples of these relationships are presented in

Table 3 (top): Agreeableness OLS Models. The full OLS model included all outcome variables and found employment status and relationship status to be significant at a significance level of p < 0.05. The LASSO-based OLS model included only age, hours of work, and job satisfaction and found none of these variables to be significant.

Table 4 (bottom): Comparison of Agreeableness Prediction Models. The full OLS model had lower error and thus performed better than the LASSO-based OLS model on out-of-sample data.

Figure 3. Lastly, as can be seen in Table 6, the reduced LASSO-based OLS model showed an out-of-sample error of 43.6, lower than the 44.9 out-of-sample error of the full OLS model. Additionally, the reduced model has an in-sample error of 44.98, higher than the 25.86 error of the full OLS model.

Table 5 (top): Conscientiousness OLS Models. The full OLS model included all outcome variables and found number of medical visits and stress level to be significant at a significance level of p < 0.05. The LASSO-based OLS model included only age, hours of work, age of first marriage, job satisfaction, number of sexual partners, number of medical visits, and stress level, and found only hours of work, number of medical visits, and stress level to be significant.

Figure 3 (middle): Representative scatterplots of stress level and medical visits against conscientiousness.

Table 6 (bottom): Comparison of Conscientiousness Prediction Models. The LASSO-based OLS model had lower error and thus performed better than the full OLS model on out-of-sample data.

4. Openness to experience: As can be seen in Table 7, the full OLS model for openness to experience found no variables that significantly correlated with openness to experience. In comparison, LASSO selected six variables that were included in the reduced OLS model: age t(6) = 1.124, p > 0.05, hours of work t(6) = -0.657, p > 0.05, age of first marriage t(6) = -1.817, p > 0.05, job satisfaction t(6) = -0.009, p > 0.05, number of sexual partners t(6) = 1.776, p > 0.05, and stress level t(6) = -2.137, p < 0.05. An ANOVA between the two models shows no significant difference between them, F(86,78) = 1.844, p > 0.05. Among the variables selected by LASSO, higher openness to experience correlates with older age and more sexual partners. Additionally, higher openness correlates with fewer hours of work, lower age of first marriage, lower job satisfaction, and lower stress level. Representative examples of these relationships are presented in Figure 4. Lastly, as can be seen in Table 8, the reduced LASSO-based OLS model showed an out-of-sample error of 34.55, which is lower than the 36.26 out-of-sample error of the full OLS model. Additionally, the reduced model has an in-sample error of 36.26, which is higher than the 27.55 error of the full OLS model.

Table 7 (top): Openness to Experience OLS Models. The full OLS model included all outcome variables and found no variables to be significant at the p < 0.05 significance level. The LASSO-based OLS model included only age, hours of work, age of first marriage, job satisfaction, number of sexual partners, and stress level and found only stress level to be significant. Figure 4 (middle): Representative scatterplots of stress level and medical visits against openness to experience. Table 8 (bottom): Comparison of Openness to Experience Prediction Models. The LASSO-based OLS model had lower error and thus performed better than the full OLS model on out-of-sample data. FALL 2018

5. Neuroticism: As can be seen in Table 9, the full OLS model for neuroticism found one variable that significantly correlated with neuroticism: stress level t(14) = 7.115, p < 0.05. In comparison, LASSO selected two variables that were included in the reduced OLS model: age t(2) = -1.656, p > 0.05, and stress level t(2) = 8.770, p < 0.05. An ANOVA between the two models shows no significant difference, F(90,78) = 1.717, p > 0.05. Among the variables selected by LASSO, higher neuroticism correlates with younger age and higher stress level. Representative examples of these relationships are presented in Figure 5. Lastly, as can be seen in Table 10, the reduced LASSO-based OLS model showed an out-of-sample error of 30.62, which is lower than the 37.61 out-of-sample error of the full OLS model. Additionally, the reduced model has an in-sample error of 37.61, which is higher than the 19.94 error of the full OLS model. For each trait, the model with the lower out-of-sample error is considered the better model. The variables that the better-performing model used to predict each trait, along with their directions of association, are summarized in Table 11.
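The procedure used for each trait above (fitting a full OLS model on all predictors, letting cross-validated LASSO select a subset, refitting OLS on that subset, and comparing in-sample and out-of-sample error) can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' code; the predictor names, effect sizes, and sample size are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 200 respondents, 14 outcome variables,
# only two of which (say, stress and medical visits) truly relate to the trait.
n, p = 200, 14
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[[0, 5]] = [2.0, -1.5]
y = X @ true_beta + rng.normal(scale=2.0, size=n)

# 70%/30% training/test split, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Full OLS model: all predictors.
full = LinearRegression().fit(X_tr, y_tr)

# Cross-validated LASSO shrinks some coefficients exactly to zero...
lasso = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)
selected = np.flatnonzero(lasso.coef_)

# ...and a reduced OLS model is refit on the selected predictors only.
reduced = LinearRegression().fit(X_tr[:, selected], y_tr)

in_full = mean_squared_error(y_tr, full.predict(X_tr))
out_full = mean_squared_error(y_te, full.predict(X_te))
in_red = mean_squared_error(y_tr, reduced.predict(X_tr[:, selected]))
out_red = mean_squared_error(y_te, reduced.predict(X_te[:, selected]))

print(f"full OLS    in-sample {in_full:.2f}  out-of-sample {out_full:.2f}")
print(f"reduced OLS in-sample {in_red:.2f}  out-of-sample {out_red:.2f}")
```

Because the full model minimizes training error over a superset of the reduced model's predictors, its in-sample error is always at least as low; the comparison of interest is which model generalizes better to the held-out test set.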

Table 9: Neuroticism OLS Models. The full OLS model included all outcome variables and found only stress level to be significant at the p < 0.05 significance level. The LASSO-based OLS model included only age and stress level and found only stress level to be significant.

Figure 5 (top): Representative scatterplots of stress level and age against neuroticism. Table 10 (bottom): Comparison of Neuroticism Prediction Models. The LASSO-based OLS model had lower error and thus performed better than the full OLS model on out-of-sample data.

Discussion
This study had two goals. The first was methodological: to explore Yarkoni and Westfall's proposal that psychological data analysis adopt a statistical approach that uses machine learning techniques and focuses on prediction rather than explanation. Our results show that for all five personality traits, the full OLS model had lower in-sample error than the LASSO-based OLS model, indicating that the full OLS model more accurately predicted data within the sample on which it had been trained. However, for four of the Big Five personality traits (extraversion, conscientiousness, openness to experience, and neuroticism), the LASSO-based OLS model predicted out-of-sample data with less error than did the full OLS model. This suggests that the lower in-sample error achieved by the full OLS model was, at least for these four traits, the result of overfitting: because the full OLS model was overfit to the data on which it had been trained, it predicted out-of-sample data with less accuracy than did the LASSO-based OLS model. LASSO cross-validation penalizes a model for including a large number of variables and in this way controls for overfitting, and reducing overfitting improves a model's generalizability to new data. The finding that the LASSO-based model more accurately predicted out-of-sample data supports Yarkoni and Westfall's claim that models developed using machine learning techniques predict unobserved data better than traditional multiple regression models do, and it suggests that the cross-validation method LASSO employs is more effective at identifying variables that accurately predict the outcome in out-of-sample data. Importantly, this result illustrates that a shift in statistical methods for data analysis can help psychology grow as a predictive field. As a predictive field, psychology could eventually offer scores of models and theories that predict future behavior.

There are a number of possible explanations for the finding that the LASSO-based OLS model predicted agreeableness with more error than did the full OLS model. First, none of the three variables selected by LASSO were found to be significant in the LASSO-based OLS model, which may suggest a significant interaction between two of the variables. Collinearity among multiple variables may also have impacted the LASSO-based OLS model. It is also possible that these variables had very small effects that were not detected in the test data set due to its small size.

The second aim of this study was to test the replicability of previous studies' findings on the associations between the Big Five personality traits and the selected outcomes. The associations reported in the reviewed literature, and whether each result was replicated in the present study, are reported in Table 12.

There are a number of limitations that may have impacted the results of this study. First, transforming salary into z-scores may have masked some true effects. The decision to transform salary was made to reduce type I error, since the large magnitude of the raw salary values would have made detecting a spurious effect more likely. However, after this transformation, no effect of salary was detected in any of the models, even though several previous studies have reported effects of salary. It is therefore possible that transforming salary into z-scores in fact increased type II error. It is also possible that salary was collinear with several other variables related to employment (e.g., employment status, hours of work, and job satisfaction), which may likewise have masked any true effects of salary. Additionally, due to low response rates and time restrictions, the sample size was smaller than expected, and we chose a 70%/30% training/test data set ratio to allow sufficient data for model development in the training set. However, machine learning techniques are optimally employed on large data sets, and a large sample size is especially valuable for the test data set in order to obtain the most accurate estimates of model error. The small sample size, and the size of the test data set in particular, may have impacted the estimates of model error and the resulting model selection.

Table 11: Associations Found Based on Most Accurate Model. A (+) indicates a positive association, and a (-) indicates a negative association.

Table 12: Replication of Previously Claimed Associations. Each association reported in the reviewed literature is presented along with whether the result was replicated in the present study. A (+) indicates a positive correlation, a (-) indicates a negative correlation, and a (0) indicates no association found.

Overall, the present study demonstrates a prediction-focused approach to modeling various outcomes from the Big Five personality traits. The present study successfully replicated some previously found associations but did not replicate others. Consistent with Yarkoni and Westfall's claim, machine learning techniques can be used on psychological data to develop statistical models that predict out-of-sample data with improved accuracy over traditional multiple regression models. Methods that produce improved generalizability can help psychology predict future behavior in new data and new applications.

CONTACT ARMIN TAVAKKOLI AT ARMIN.TAVAKKOLI.20@DARTMOUTH.EDU AND JESSICA KOBSA AT JESSICA.E.KOBSA.20@DARTMOUTH.EDU

References
1. Amazon's Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data? - Michael Buhrmester, Tracy Kwang, Samuel D. Gosling, 2011. (n.d.). Retrieved March 4, 2018, from http://journals.sagepub.com/doi/abs/10.1177/1745691610393980
2. Bozionelos, N. (2004). The big five of personality and work involvement. Journal of Managerial Psychology, 19(1), 69–81. https://doi.org/10.1108/02683940410520664
3. Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon's Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data? Perspectives on Psychological Science, 6(1), 3–5. https://doi.org/10.1177/1745691610393980
4. Crump, M. J. C., McDonnell, J. V., & Gureckis, T. M. (2013). Evaluating Amazon's Mechanical Turk as a Tool for Experimental Behavioral Research. PLOS ONE, 8(3), e57410. https://doi.org/10.1371/journal.pone.0057410
5. Ebstrup, J. F., Eplov, L. F., Pisinger, C., & Jørgensen, T. (2011). Association between the Five Factor personality traits and perceived stress: is the effect mediated by general self-efficacy? Anxiety, Stress, & Coping, 24(4), 407–419. https://doi.org/10.1080/10615806.2010.540012
6. Evaluating Amazon's Mechanical Turk as a Tool for Experimental Behavioral Research. (n.d.). Retrieved March 4, 2018, from http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0057410
7. Five-Factor Model of Personality - Psychology - Oxford Bibliographies - obo. (n.d.). Retrieved February 28, 2018, from http://www.oxfordbibliographies.com/view/document/obo-9780199828340/obo-9780199828340-0120.xml
8. Friedman, H. S., Tucker, J. S., Schwartz, J. E., Martin, L. R., Tomlinson-Keasey, C., Wingard, D. L., & Criqui, M. H. (1995). Childhood conscientiousness and longevity: health behaviors and cause of death. Journal of Personality and Social Psychology, 68(4), 696–703.
9. Furnham, A., & Zacherl, M. (1986). Personality and job satisfaction. Personality and Individual Differences, 7(4), 453–459. https://doi.org/10.1016/0191-8869(86)90123-6
10. Gray, E. K., & Watson, D. (2002). General and Specific Traits of Personality and Their Relation to Sleep and Academic Performance. Journal of Personality, 70(2), 177–206. https://doi.org/10.1111/1467-6494.05002
11. Hagerty, M. R., & Srinivasan, V. (1991). Comparing the predictive powers of alternative multiple regression models. Psychometrika, 56(1), 77–85. https://doi.org/10.1007/BF02294587
12. Heslin, P. A., Keating, L. A., & Minbashian, A. (2018). How Situational Cues and Mindset Dynamics Shape Personality Effects on Career Outcomes. Journal of Management. https://doi.org/10.1177/0149206318755302
13. Jensen-Campbell, L. A., Adams, R., Perry, D. G., Workman, K. A., Furdella, J. Q., & Egan, S. K. (2002). Agreeableness, Extraversion, and Peer Relations in Early Adolescence: Winning Friends and Deflecting Aggression. Journal of Research in Personality, 36(3), 224–251. https://doi.org/10.1006/jrpe.2002.2348
14. Jerram, K. L., & Coleman, P. G. (1999). The big five personality traits and reporting of health problems and health behaviour in old age. British Journal of Health Psychology, 4(2), 181–192. https://doi.org/10.1348/135910799168560
15. Johnson, J. A. (2014). Measuring thirty facets of the Five Factor Model with a 120-item public domain inventory: Development of the IPIP-NEO-120. Journal of Research in Personality, 51, 78–89. https://doi.org/10.1016/j.jrp.2014.05.003
16. Kirkpatrick, L. A., & Davis, K. E. (1994). Attachment style, gender, and relationship stability: a longitudinal analysis. Journal of Personality and Social Psychology, 66(3), 502–512.
17. Schmitt, D. P. (2004). The Big Five related to risky sexual behaviour across 10 world regions: differential personality associations of sexual promiscuity and relationship infidelity. European Journal of Personality, 18(4), 301–319. https://doi.org/10.1002/per.520
18. Seibert, S. E., & Kraimer, M. L. (2001). The Five-Factor Model of Personality and Career Success. Journal of Vocational Behavior, 58(1), 1–21. https://doi.org/10.1006/jvbe.2000.1757
19. Shaver, P. R., & Brennan, K. A. (1992). Attachment Styles and the "Big Five" Personality Traits: Their Connections with Each Other and with Romantic Relationship Outcomes. Personality and Social Psychology Bulletin, 18(5), 536–545. https://doi.org/10.1177/0146167292185003
20. Shmueli, G. (2010). To Explain or to Predict? Statistical Science, 25(3), 289–310.
21. Wanberg, C. R., Watt, J. D., & Rumsey, D. J. (1996). Individuals without jobs: An empirical study of job-seeking behavior and reemployment. Journal of Applied Psychology, 81, 76–87.
22. Wu, S., Harris, T. J., & Mcauley, K. B. (2007). The Use of Simplified or Misspecified Models: Linear Case. The Canadian Journal of Chemical Engineering, 85(4), 386–398. https://doi.org/10.1002/cjce.5450850401
23. Yarkoni, T., & Westfall, J. (2017). Choosing Prediction Over Explanation in Psychology: Lessons From Machine Learning. Perspectives on Psychological Science, 12(6), 1100–1122. https://doi.org/10.1177/1745691617693393



Dartmouth Undergraduate Journal of Science ESTABLISHED 1998

ARTICLE SUBMISSION

What are we looking for? The DUJS is open to all types of submissions. We examine each article to see what it potentially contributes to the Journal and our goals. Our aim is to attract an audience diverse in both its scientific background and interest. To this end, articles generally fall into one of the following categories:

Research
This type of article parallels those found in professional journals. An abstract is expected in addition to clearly defined sections of problem statement, experiment, data analysis, and concluding remarks. The intended audience can be expected to have interest and general knowledge of that particular discipline.

Review
A review article is typically geared towards a more general audience, and explores an area of scientific study (e.g. methods of cloning sheep, a summary of options for the Grand Unified Theory). It does not require any sort of personal experimentation by the author. A good example could be a research paper written for class.

Features (Reflection/Letter/Essay/Editorial)
Such an article may resemble a popular science article or an editorial, examining the interplay between science and society. These articles are aimed at a general audience and should include explanations of concepts that a basic science background may not provide.

Guidelines:
1. The length of the article should be under 3,000 words.
2. If it is a review or a research paper, the article must be validated by a member of the faculty. This statement can be sent via email to the DUJS account.
3. Any co-authors of the paper must approve of submission to the DUJS. It is your responsibility to contact the co-authors.
4. Any references and citations used must follow the Science Magazine format.
5. If you have chemical structures in your article, please take note of the American Chemical Society (ACS)'s specifications on the diagrams.
For more examples of these details and specifications, please see our website: http://dujs.dartmouth.edu For information on citing and references, please see: http://dujs.dartmouth.edu/dujs-styleguide

Dartmouth Undergraduate Journal of Science Hinman Box 6225 Dartmouth College Hanover, NH 03755 dujs@dartmouth.edu

ARTICLE SUBMISSION FORM* Please scan and email this form with your research article to dujs@dartmouth.edu

Undergraduate Student: Name:_______________________________

Graduation Year: _________________

School _______________________________

Department _____________________

Research Article Title: ______________________________________________________________________________ ______________________________________________________________________________ Program which funded/supported the research ______________________________ I agree to give the Dartmouth Undergraduate Journal of Science the exclusive right to print this article: Signature: ____________________________________

Faculty Advisor: Name: ___________________________

Department _________________________

Please email dujs@dartmouth.edu with comments on the quality of the research presented and the quality of the product, as well as whether you endorse the student's article for publication. I permit this article to be published in the Dartmouth Undergraduate Journal of Science: Signature: ___________________________________

*The Dartmouth Undergraduate Journal of Science is copyrighted, and articles cannot be reproduced without the permission of the journal.

Visit our website at dujs.dartmouth.edu for more information



DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE Hinman Box 6225 Dartmouth College Hanover, NH 03755 USA http://dujs.dartmouth.edu dujs@dartmouth.edu


DUJS Fall 2018 Print Journal  

The Fall 2018 Issue of the Dartmouth Undergraduate Journal of Science.
