PennScience Spring 2015 Issue: The Science of Aging


Contents

FEATURES
05 Move It or Lose It: How You Can Hack the Biological Clock
   Anti-Aging Caloric Restriction Diet
   Electrical Circuits
   Exploring the Parallels between Spaceflight and Aging
   Epigenetics in Twins

RESEARCH
16 Investigation of the Morphology of the Heart of Zebrafish Embryos Exposed to Different Concentrations of Methylmercury
   Exploring Genomes: The need for an accessible map
29 Dynamic Reassessment of Awaited Outcomes



Editorial Staff

WRITING


Ritwik Bhatia Jane Chuprin Sebastian de Armas Richard Diurba Andy Guo Anisha Reddy

Vivek Nimgaonkar Donald Zhang

DESIGN EDITORS Courtney Connolly Carolyn Lye

WRITING MANAGERS Grace Ragi Samip Sheth

EDITING MANAGERS Kartik Bhamidipati Karanbir Pahil



Angela Chang Zoe Daniels Joseph Gao Jenna Harowitz Michal Hirschorn Yuichiro Iwamoto Ila Kumar Kevin Sun Kejia Wang Edward Zhao


Emily Chen Chigoziri Konkwo Allison Weiss

BUSINESS Anand Desai Tina Huang

FACULTY ADVISORS Dr. M. Krimo Bokreta Dr. Jorge Santiago-Aviles

About PennScience
PennScience is a peer-reviewed journal of undergraduate research published by the Science and Technology Wing at the University of Pennsylvania and advised by a board of faculty members. PennScience presents relevant science features, interviews, and research articles from many disciplines, including the biological sciences, chemistry, physics, mathematics, geological sciences, and computer science. PennScience is a SAC-funded organization. For additional information about the journal, including submission guidelines, visit our website or email us.

SPRING 2015 | PENNSCIENCE JOURNAL


Letter from the Editors

Dear Readers,

We are very excited to bring you the second issue of the 13th volume of PennScience. The theme of this issue is senescence and the process of aging, an area that is receiving increasing attention in both scientific and sociocultural contexts. For the spring semester, we had the amazing support of a tremendous staff on our editing, design, and business teams, who have been instrumental in making this publication a reality.

Writers, both new and returning, helped to develop a series of feature articles related to the topic of aging. Ritwik Bhatia examines some of the striking relationships between space travel and the process of aging. Anisha Reddy explores the importance of epigenetics in various disease phenotypes associated with old age. In an article focused on healthier outcomes in aging, Jane Chuprin writes about the health benefits of exercise and some of the biological mechanisms underlying those benefits. Andy Guo's article offers a look at interesting recent work illustrating the potential value of dietary approaches like caloric restriction in improving health at older ages. Richard Diurba moves the discussion of aging beyond the world of biology, considering the process of aging in circuits.

The three research submissions published in this issue deal with a diverse array of disciplines. Josh Tycko's paper on genome browsers reviews some of the ways that browsers are utilized to facilitate biomedical research. Tasneem Mahmood presents a physiological study describing the morphological effects of mercury exposure in zebrafish. Darby Breslow has given us an interesting paper on decision-making, conducted through behavioral tests.

Beyond our issue this semester, we were proud to launch a series of initiatives aimed at complementing our work in print.
With a mission of expanding and fostering scientific discourse at Penn, we launched a series of coffee chats with professors as well as a journal club event this spring. The coffee chats offer undergraduates an opportunity to meet faculty members and learn not only about their research, but also about their insights into navigating careers in academic science. We were very grateful to be joined by Professor David Christianson, Professor Brian Keith, and Professor Lawrence Brass. Professor Christianson shared the story of how he came to do the research that led him to the discovery of arginase inhibition. Professor Keith led a discussion about the current frontiers of cancer research and ways to approach scientific writing. Professor Brass provided an overview of the physician-scientist profession, shedding light on ways that clinical experience can inform lab research and vice versa.

Our first journal club featured a discussion of the paper "14-Step Synthesis of (+)-Ingenol from (+)-3-Carene," published in Science in 2013. We were very lucky to have Bruno Melillo lead our discussion of the paper and highlight some of the exceptional elements of Phil Baran's total synthesis approach.

Overall, it has been a pleasure to serve PennScience as editors-in-chief, and it is our hope that we have served the journal well. The new editors-in-chief for the upcoming year will be Carolyn Lye and Claudia Cheung, whose combined experience across different wings of PennScience will allow them to lead the journal forward. We look forward to the new directions to be taken by Carolyn, Claudia, and the rest of the PennScience staff.

Sincerely,
Vivek Nimgaonkar and Donald Zhang
Co-Editors-in-Chief




Move It or Lose It: How You Can Hack the Biological Clock
By Jane Chuprin


Ponce de León never found the Fountain of Youth, but perhaps he was not far from it. Today, scientists have uncovered a "Fountain of Youth," but it isn't a product of modern medicine. It's simply exercise. While aerobic exercise may not exactly smooth away wrinkles, studies have shown that moderate exercise promotes better cardiovascular, respiratory, and cognitive function. Older adults who exercised regularly were shown to be not only physically but also cognitively healthier. Those who exercised were at a significantly lower risk for dementia and Alzheimer's disease; unfortunately, fewer than 10% of adults engage in the appropriate level of exercise (1). But how exactly does exercise lead to stronger cardiovascular and neurological health? Only now are we



beginning to understand the molecular mechanisms that underlie the anti-aging effects of exercise.

The importance of oxygen at a molecular level is often overlooked; cells require oxygen as the final acceptor of electrons in oxidative phosphorylation, the process that provides energy for all cells. Moderate exercise can improve cardiovascular health, which strengthens the body's ability to take in and distribute more oxygen to all cells in the body. Scientists assess aerobic fitness by measuring maximal oxygen consumption (VO2). By middle adulthood, arteries become narrower and more rigid, and the lungs take up less oxygen. Exercise has been demonstrated to increase VO2 via improved arterial compliance, and in this way, exercise can counter the effects of aging upon the cardiovascular system (2). Exercise is also responsible for an increase in capillary growth and neurogenesis. Improved blood and oxygen flow to the brain can allow for more cell growth and strengthen brain functions such as reasoning and memory.

During aging, the frontal lobe, involved in working memory, executive control, and exclusion of task-irrelevant information, experiences the greatest decline in the brain. Other parts of the brain, such as the hippocampus, which is involved in memory, also see extensive declines. Memory loss and dementia, a process of neurodegeneration, are large concerns as we age. More seriously, some forms of dementia can progress into Alzheimer's disease, which currently has no cure and is a debilitating disease characterized by a loss of mental capacity. A study conducted by Erickson et al. in 2011 found a significant relationship between exercise and anterior hippocampal volume (3). The anterior hippocampus is the part of the brain involved in spatial memory and memory acquisition.
Participants in their late seventies were placed into either a stretching (control) group or a group that performed moderate exercise three times per week for a year. The control group showed a decline in anterior hippocampal volume of about 1.4% in each hemisphere of the brain, whereas the exercise group actually increased in anterior hippocampal volume by about 2% in each hemisphere. The exercise group also showed additional changes in the brain: an increase in the dentate gyrus (an area of the brain where cell proliferation occurs), an increase in gray and white matter in the prefrontal cortex, and higher levels of BDNF (a growth factor that mediates neurogenesis and dendritic expansion) in the anterior hippocampus. Aside from the brain, the exercise group showed about an 8% improvement in VO2, versus the stretching group, which rose by 1.11%.

A different study, led by Colcombe and colleagues, predicted that cardiovascular fitness could offset the natural decline in cognitive function that occurs with age (4). They performed this study with adults in their late sixties. After just six months of exercising three


times a week, the team could already show that those who were aerobically more fit performed better in tasks that assessed attention and task switching. Using the Flanker test, which measures reaction speed and accuracy, aerobic participants had an error rate of 1.6-1.9%, whereas the control group's error rate was 18-26%; in other words, those who exercised were much better at catching errors. Given that Erickson et al. had found growth in the anterior hippocampus of exercisers, and that this region helps activate the part of the brain that recognizes error, these results are consistent with exercise acting on the anterior hippocampus. Not surprisingly, the study also found an increase in neurotrophic factors, which increase neuronal survival and promote neurogenesis, and an increase in VO2.

Another exercise study suggests that the benefits of exercise in older age can be reaped even after exercising in the short term (5). After only one month, the stretching control group had actually shown a decrease in VO2, while the walking exercise group had already improved. After six months, participants in the fitness group showed improvement in cognitive tasks that specifically involved

[Figure: schematic relating aging, long-term exercise, cellular metabolic machinery, regional capillary density, and the cardiovascular system. Source: B. Anderson, S. Greenwood, D. McCloskey, Exercise as an intervention for the age-related decline in neural metabolic support. Frontiers in Aging Neuroscience. 2, 30 (2010).]

the frontal lobe. Exercise, in addition to demonstrating positive effects on normally aging older adults, has also shown promise in protecting against mental disease. A recently published study followed 716 cognitively normal older adults for about four years. At the end, the participants' physical abilities were measured and correlated with the development of mild cognitive impairment or even Alzheimer's disease. It was found that those in the top 10% of older adults with respect to physical fitness had a 50% lower risk of developing Alzheimer's disease later in life (7).

While we cannot prevent the process of aging, we can certainly attenuate its negative effects. Exercise improves cardiovascular health, which in turn increases the amount of oxygen in the body and the brain, promoting cognitive function and memory. As a society, we have developed a preconceived notion that older age means being frail, using walkers with neon green tennis balls, and retiring to a very quiet and sedentary life. And if 90% of older adults are not actually getting the right amount of exercise, it is easy to understand how this outlook on aging has developed. Living actively as an older adult is more than possible. Patricia "Paddy" Jones, 79 years young, wowed audiences on the reality TV show Britain's Got Talent when she stepped on stage and performed a variety of acrobatic salsa tricks. She demonstrates the accessibility of exercise to people of all ages and challenges our notion of sedentary elder

years. It's time to change what the standard of aging looks like.

References:
1. G. Einstein, M. A. McDaniel, Memory Fitness: A Guide for Successful Aging (Yale University Press, New Haven, July 2004).
2. H. Tanaka, et al., Aging, habitual exercise, and dynamic arterial compliance. Circulation. 102, 1270-1275 (2000).
3. K. I. Erickson, et al., Exercise training increases size of hippocampus and improves memory. Proceedings of the National Academy of Sciences. 108, 3017-3022 (2011).
4. S. J. Colcombe, et al., Cardiovascular fitness, cortical plasticity, and aging. Proceedings of the National Academy of Sciences. 101, 3316-3321 (2004).
5. S. J. Colcombe, et al., Fitness Effects on the Cognitive Function of Older Adults: A Meta-Analytic Study. Psychological Science. 14, 125-130 (2003).
6. B. Anderson, S. Greenwood, D. McCloskey, Exercise as an intervention for the age-related decline in neural metabolic support. Frontiers in Aging Neuroscience. 2, 30 (2010).
7. D. Kostrzewa-Nowak, et al., Effect of 12-week-long aerobic training programme on body composition, aerobic capacity, complete blood count and blood lipid profile among young women. Biochemia Medica. 25, 103 (2015).







Anti-Aging Caloric Restriction Diet
By Andy Guo

In an age of advancing medical technology and treatment, Americans are striving to live longer and healthier lives. Discoveries from dedicated research make it possible to decipher life's genetic code and synthesize life-saving drugs. Unfortunately, many of the advances in treatment have not yielded healthier lifestyles. More than two-thirds of American adults are overweight or obese (1). Research also suggests that America will continue to grow heavier in the next decade, indicating that Americans are not maximizing their life spans (2). It is the hope of the medical community that Americans take steps in their daily lives to maximize their health as they age.

In recent years, caloric restriction (CR) has emerged as a potential strategy both to decrease healthcare expenditure on obese patients and to increase the longevity of the healthy population. CR involves decreasing calorie intake by at least 20% in order to maintain a healthier lifestyle. It is believed that CR diets promote anti-aging qualities, and the healthcare community has shifted attention to the regulation of food intake.




Since it is difficult to conduct longitudinal or invasive CR studies on humans, many of the conclusions that have been drawn about extending life span are extrapolated from studies of other mammals and short-lived species. CR has been shown to extend healthy, average, and maximum life span in many short-lived species as well as primates (3). In primates specifically, CR significantly improved age-related survival in monkeys placed on a long-term 30% restricted diet from young adulthood. In mice, research has shown that life-long calorie restriction significantly improves the overall structure of the gut microbiota (4). Though CR might not have such dramatic impacts on human life, it does provide numerous benefits, such as lowered risk for degenerative conditions of aging and improved measures of health in non-obese humans (5). In the long term, CR extends longevity by preventing chronic diseases and by preserving a more youthful metabolic state (6). Through these limited studies of CR in humans, some scientists have suggested that CR could extend healthy human life span by 5-10 years. For example, moderate CR with adequate nutrition has protective effects against the development of obesity, type 2 diabetes, and atherosclerosis (7).

Even though studies of the effects of CR on longevity in humans are in their infancy, researchers at Tufts University, Pennington Biomedical Research Center, Washington University, and Duke University have developed a research program called CALERIE (Comprehensive Assessment of Long-term Effects of Reducing Intake of Energy) to systematically analyze the effects of CR on fat content and health in humans. CALERIE, which has been running trials since 2007, is the first study to investigate the effect of prolonged calorie restriction on human health (8).
The program's goal is to better understand the effects of prolonged caloric restriction on aging and to test how practical a 25% calorie-restricted diet is for normal-weight individuals. As a result of CALERIE, a diet called Calorie Restriction with Optimum Nutrition (CRON) has been developed. To properly start CRON dieting, a person's caloric needs are determined from the Basal Metabolic Rate (the energy needed for normal metabolic activities) plus physical activity. CR diets favor nutrient-dense, low-calorie foods in order to provide necessary protein and essential fats while reducing carbohydrates and saturated fats (9). Typically, the foods involved include oats, seeds, lean meats, and sources of omega-3 fatty acids. For example, a typical breakfast may consist of oatmeal, yogurt, or blueberries, followed by fruits and vegetables at lunch. Dinner could consist of a high-protein meat such as turkey, chicken, or steak along with potatoes. It is key in a CRON diet to consume vitamin D, calcium, and phosphorus to maintain healthy bones.

A well-maintained CRON diet greatly reduces risk factors that hinder a healthy lifespan. Studies on the diet have demonstrated that just a 22-30% decrease in caloric intake from normal levels promotes heart function, reduces markers of inflammation, and decreases the risk for certain cancers (10). Collectively, the existing body of literature suggests that CR can have highly desirable impacts upon human health. Through research programs such as CALERIE, it will hopefully be possible to further elucidate the effects of CR on human health and life expectancy. Additionally, research on CR drug mimetics has emerged in an attempt to synthetically produce the same effects as an actual CR diet (11). There have been promising results in initial studies regarding physiological responses that resemble those observed in CR. For now, it has become clear that choosing the right eating habits could profoundly improve quality of life for future generations.

References:
1. C. L. Ogden, M. D. Carroll, B. K. Kit, K. M. Flegal, Prevalence of childhood and adult obesity in the United States, 2011-2012. Journal of the American Medical Association. 311, 806-814 (2014).
2. M. A. Beydoun, Y. Wang, Gender-ethnic Disparity in BMI and Waist Circumference Distribution Shifts in US Adults. Obesity. 17, 169-176 (2009).
3. R. J. Colman, et al., Caloric restriction reduces age-related and all-cause mortality in rhesus monkeys. Nature Communications. 5, Article Number 3557 (2014).
4. C. Zhang, et al., Structural modulation of gut microbiota in life-long calorie-restricted mice. Nature Communications. 4, Article Number 2163 (2013).
5. L. K. Heilbronn, E. Ravussin, Calorie restriction and aging: review of the literature and implications for studies in humans. The American Journal of Clinical Nutrition. 78, 361-369 (2003).
6. E. Cava, L. Fontana, Will calorie restriction work in humans? Aging (Albany NY). 5, 507 (2013).
7. J. O. Holloszy, L. Fontana, Caloric restriction in humans. Experimental Gerontology. 42, 709-712 (2007).
8. T. M. Stewart, et al., Comprehensive Assessment of Long-term Effects of Reducing Intake of Energy Phase 2 (CALERIE Phase 2) screening and recruitment: methods and results. Contemporary Clinical Trials. 34, 10-20.
9. C. Turner, The Calorie Restriction Dieters. The Telegraph. July 25, 2010. Accessed at news/health/7898775/The-Calorie-Restriction-dieters.html.
10. R. L. Walford, D. Mock, R. Verdery, T. MacCallum, Calorie restriction in Biosphere 2: alterations in physiologic, hematologic, hormonal, and biochemical parameters in humans restricted for a 2-year period. Journals of Gerontology Series A: Biological Sciences and Medical Sciences. 57, B211-224 (2002).
11. D. K. Ingram, et al., Calorie restriction mimetics: an emerging research field. Aging Cell. 5, 97-108 (2006).
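The BMR-based calorie targeting that underlies CRON can be sketched in code. The article does not name a specific BMR equation, so this hypothetical sketch uses the widely cited Mifflin-St Jeor formula; the activity factor is illustrative, and the default 25% restriction mirrors the CALERIE target:

```python
def bmr_mifflin_st_jeor(weight_kg, height_cm, age_yr, male=True):
    """Estimate Basal Metabolic Rate in kcal/day (Mifflin-St Jeor equation)."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age_yr
    return base + 5 if male else base - 161


def cron_calorie_target(weight_kg, height_cm, age_yr, male=True,
                        activity_factor=1.4, restriction=0.25):
    """Daily calorie target after applying a fractional restriction
    (0.25 = the 25% restriction tested by CALERIE) to estimated daily
    energy expenditure (BMR times an illustrative activity factor)."""
    expenditure = bmr_mifflin_st_jeor(weight_kg, height_cm, age_yr, male) * activity_factor
    return expenditure * (1 - restriction)
```

For example, a 30-year-old, 70 kg, 175 cm man has an estimated BMR of 1648.75 kcal/day; a 25% restriction on a lightly active expenditure then yields a target of roughly 1,730 kcal/day.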



Electrical Circuits
By Richard Diurba

While human aging is considered an area of research in the life sciences, the aging of electrical circuits serves as its counterpart in the physical sciences. As electrical circuits age, their efficiency decreases. While the aging of electrical circuits may seem like a relatively simple field of research, Jaroslav Va'vra of the Stanford Linear Accelerator has stated that "it is difficult to understand any aging measurements quantitatively" due to an unknown "relationship between microscopic and macroscopic variables" (1). Understanding the mechanisms of aging is necessary for determining the best possible conditions for the care and maintenance of an electrical circuit. The loss of efficiency due to circuit aging has clear consequences for the rate of natural resource consumption; thus, a better understanding of aging in circuit dynamics could be valuable for the conservation of natural resources. And beyond energy conservation, a clearer picture of how circuits age could deliver new insights into the function of the circuitry of the brain.

A simple electrical circuit consisting of a battery, a wire, and a resistor serves as an effective model for examining aging in electrical circuits. The reasoning behind this simplification is that all elements of electrical circuits, including semiconductors and capacitors, share the same goal as a simple circuit: to carry and process charge. Current research shows that the most intriguing element is the aging of the battery, but this article will discuss the aging of wires and resistors as well.

The wire and the resistor are the fundamental building blocks of electrical circuits. These two elements, which carry charge back to the battery, lose their effectiveness through very similar circumstances. In the case of wiring, the ability to carry charge is lost through degradation of the insulation.
The insulation

of the wire is heavily impacted by "the applied voltage magnitude and temperature" (2). Through this aging mechanism, the wire loses its ability to contain its moving charge. The key indicator of insulation erosion is the emission of liquid or gaseous effluents (3).

The resistor deforms in ways similar to the wire. In this analysis, a light-emitting diode (LED) will act as the resistor. While the wire loses its ability to hold charge, the light-emitting diode experiences "an increase in nonradiative recombination processes in the active layer" (4). Nonradiative recombination is an intricate process in which the diode's anode and cathode undergo chemical recombination of metals into a less efficient configuration. Essentially, the resistor changes the way it processes charge: the recombination changes the efficiency with which the resistor converts voltage into power. This reduces the resistor's efficiency, which increases the natural resources consumed to produce power. The resistor, like the wire, ages through pulses of electrical current and high temperatures. As temperature and the frequency of pulses increase, the efficiency of the circuit decreases through a reduction in usable voltage. Typically, the shift increases the forward current of the diode while decreasing light intensity. The light therefore experiences both a quantitative loss of power and a qualitative loss of emission (4).

The final area of analysis, the battery, remains a mystery to the electrical engineering and physics community. For the sake of simplicity, the battery used for this analysis will be a lithium-ion battery. Despite the complexity, it can reasonably be inferred that the battery experiences the same aging as the wire and the resistor. The inference arises from the battery's need to carry

charge within itself, a process akin to the wire's movement of charge.
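The efficiency loss described above for wires and resistors can be illustrated with a toy series-circuit model: as aging adds parasitic resistance, the fraction of source power reaching the load falls. This sketch and its function name are ours, not from the article:

```python
def delivered_fraction(r_load, r_parasitic):
    """Fraction of the source's power dissipated in the load of a series
    circuit (battery + wire + resistor). As aging degrades insulation and
    contacts, parasitic series resistance rises and this efficiency falls."""
    return r_load / (r_load + r_parasitic)
```

With a 10-ohm load, a pristine circuit (no parasitic resistance) delivers all of the source power to the load, while an aged circuit with 10 ohms of parasitic resistance delivers only half.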

Chemically, the battery involves the traveling of ions to the positive end of the battery. The circuit loses its ability to function through the buildup of solid-electrolyte interfaces (SEI) (5). These are small layers that coat the anodes and cathodes of the battery, thereby hindering the transportation of charge. SEI layers forestall the charge lithium carries in creating the potential of the circuit. The battery loses its ability to carry charge in a process similar to a capacitor (6). The loss of capacitance follows a solvent diffusion model, which represents exactly what it states: by looking at the basic rates of diffusion, electrochemical aging can be predicted for the individual battery. The monolayer SEI forms more readily at high surrounding temperatures (5). Temperature, then, not only impacts the internal, basic processes of a battery, but also blocks the capacitance of the charge by increasing the reactions engendering SEI layers.

The human brain also harnesses the power of electrical circuits. The transportation of ions through axons and synapses creates a voltage throughout the neurons, which can be analogized to the wire of the brain's circuitry. When the circuitry degenerates and loses its ability to hold charge, the brain loses the capability to function normally. The most well-known disease tied to continual wear and tear of the brain's circuitry is dementia. In that case, activity within the brain may reach inefficient ends in a manner similar to the continual usage of electrical circuits. In this analogy, the synapse mirrors the battery, the axon serves as the wire, and the internal resistance of the neuron creates the resistance. These biological interests extend far beyond neurology: the creation of resistance, voltage, and current is an important subject in the field of microbiology. For example, the mitochondrion's membrane creates a potential difference that allows for the creation of energy from phosphorylation (7). These microbiological processes remain an important subject of research in understanding the biochemistry of life. While the term "electrical circuit" may only appear in the index of a physics or engineering textbook, all fields of science must analyze the efficiency and aging of objects that create electric potential.

References:
1. J. Va'vra, Physics and chemistry of aging - early developments. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. 515, 1-14 (2003).
2. S. Grzybowski, E. A. Feilat, P. Knight, Accelerated aging tests on magnet wires under high frequency pulsating voltage and high temperature. In Electrical Insulation and Dielectric Phenomena, 1999 Annual Report Conference on, vol. 2, 555-558 (1999).
3. W. Yost, K. Elliott Cramer, D. F. Perey, "Characterization of effluents given off by wiring insulation" (NASA, Hampton, VA, 2003).
4. O. Pursiainen, N. Linder, A. Jaeger, R. Oberschmid, K. Streubel, Identification of aging mechanisms in the optical and electrical characteristics of light-emitting diodes. Applied Physics Letters. 79, 2895-2897 (2001).
5. M. Safari, M. Morcrette, A. Teyssot, C. Delacourt, Multimodal physics-based aging model for life prediction of Li-ion batteries. Journal of The Electrochemical Society. 156, A145-153 (2009).
6. H. J. Ploehn, P. Ramadass, R. E. White, Solvent diffusion model for aging of lithium-ion battery cells. Journal of The Electrochemical Society. 151, A456-A462 (2004).
7. D. Tewari, et al., Modulation of the mitochondrial voltage dependent anion channel (VDAC) by curcumin. Biochimica et Biophysica Acta (BBA) - Biomembranes. 1848, 151-158 (2015).
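A solvent diffusion model of SEI growth, as in Ploehn et al. (6), predicts that cumulative capacity loss grows with the square root of elapsed time. The following sketch illustrates that scaling; the rate constant k is purely illustrative and not taken from the cited papers:

```python
import math


def sei_capacity_fade(t_days, k=0.0015):
    """Fractional capacity lost to SEI growth after t_days. A solvent
    diffusion model predicts fade proportional to sqrt(time)."""
    return k * math.sqrt(t_days)


def remaining_capacity(q0_amp_hours, t_days, k=0.0015):
    """Usable battery capacity after t_days under the sqrt-time fade model."""
    return q0_amp_hours * (1.0 - sei_capacity_fade(t_days, k))
```

One consequence of the square-root law: quadrupling the elapsed time only doubles the cumulative fade, which is why SEI-driven aging is fastest early in a battery's life.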




Exploring the Parallels between Spaceflight and Aging By Ritwik Bhatia


The National Aeronautics and Space Administration, or NASA, was established in 1958, and within the following decade it began to conduct manned spaceflight programs to Earth's orbit and the Moon. Around that same time, researchers began to notice that astronauts, after space travel, suffered from symptoms similar to those suffered by the elderly population on Earth (1). Recent studies looking into the biological and physiological effects of spaceflight have shed light on the relationship between spaceflight and aging. Collectively, they can be used to further improve our knowledge of human aging.

During spaceflight, due primarily to the absence of gravity, astronauts face symptoms such as muscle atrophy, disturbed sleep, and anemia. In addition, they face certain physiological changes that parallel aging, including cardiovascular deconditioning, deterioration of bones, and overall weakening of the immune system (2). These symptoms are also commonly found in people who are bed-ridden and live sedentary lifestyles. Interestingly enough, both astronauts and bed-ridden subjects see their aging symptoms reversed upon their return to normal activity. The fleeting nature of these symptoms can be attributed to the fact that the change in gravity during spaceflight, and the change in the orientation of gravitational pull at rest in a bed, are only temporary. A simple lack of gravity, with no force acting down upon one's body, closely resembles a sedentary lifestyle. In fact, bed rest studies over the past few decades have demonstrated that gravity plays a fundamental role in the adverse health effects of space travel and aging (3). Thus, an active lifestyle, through which one works against the pull of gravity, is key to healthy aging and the reversal of such symptoms.

Not only does spaceflight affect the muscles of the body, but it also affects the immune system in a seemingly deleterious way. As people age, the immune system loses its ability to distinguish the body's own cells from foreign particles, resulting in increased susceptibility to autoimmune disorders. In addition, older immune systems produce lower levels of immune proteins than younger immune systems do, which can be detrimental in defending against bacterial infections.
Recently, a collaboration between NASA's Integrated Immune and Clinical Nutrition Assessment flight programs studied the presence of cytokines within the bodies of crew members before, during, and after spaceflight (4). Cytokines are proteins that facilitate cell-to-cell communication and signal immune cells to fight an infection in the process of inflammation. The data indicated that fluctuations in blood cytokine concentrations occurred throughout spaceflight, causing 'confusion' within, and weakening of, the immune system. Subsequently, it has been critical to determine a relationship connecting aging, spaceflight, and changes in the immune system so that steps can be taken to improve outcomes.

This past year, researchers imitated spaceflight by subjecting mice to simulated low-gravity conditions (2). They used a method called hind limb unloading (HU), which leads to changes in bone microstructure, to model spaceflight. They then

specifically examined B lymphocytes, a type of white blood cell that produces antibodies to fight infections and releases cytokines to signal and regulate an immune response, in young, old, and spaceflight-exposed mice. After three weeks, analysis of the mice's bone marrow showed that B lymphocyte levels in the 'traveling' mice were more comparable to lymphocyte levels in older mice than to those in younger mice. The researchers noted that a lack of expression of certain transcription factors led to the lower lymphocyte levels, and they concluded that these conditions caused early aging of the immune system. They hypothesized that inducing certain adaptations of the musculoskeletal system may prove vital in combating these effects. Studies such as this, utilizing the ground-based model of HU, can effectively strengthen our understanding of immune responses. In the future, they may help in the development of compounds to improve such responses in a variety of populations, including "astronauts and in elderly, or bed-ridden populations" (2).

Ultimately, the next step is to explore whether these symptoms and processes can be reversed. These studies may prove invaluable as we enter a phase in which we aim to embark on missions to Mars and other distant destinations that may last many years. Continued research regarding aging and spaceflight will not only shed light on how to protect space crews, but can also lead to further understanding of the immune system. Such findings will benefit all inhabitants of the Earth, both young and elderly. Rather than just establishing a correlation between spaceflight and aging, further studies that delve deeper into the biological aspects of these processes will help us determine the exact causes of accelerated aging during spaceflight. Such research may pave the way for revolutionary drugs that not only slow the human aging process, but also aid in our quest against autoimmune diseases.
References
1. J. Vernikos, Synergistic Research. NASA. November 1, 1998. archives/sts-95/aging.html#experiments
2. C. Lescale, et al., Hind limb unloading, a model of spaceflight conditions, leads to decreased B lymphopoiesis similar to aging. The FASEB Journal. 29, 455-463 (2014).
3. A. Pavy-Le Traon, et al., From space to Earth: advances in human physiology from 20 years of bed rest studies (1986–2006). European Journal of Applied Physiology. 101, 143-194 (2007).
4. B. E. Crucian, et al., Plasma Cytokine Concentrations Indicate That In Vivo Hormonal Regulation of Immunity is Altered During Long-Duration Spaceflight. Journal of Interferon & Cytokine Research. 34, 778-786 (2014).



Epigenetics in Twins

By Anisha Reddy




Linda Lewis and Leora Eisen, age 54, are identical twins; as the twins aged, Leora remained in good health while her sister, Linda, struggled with leukemia (6). In a documentary made by Leora Eisen chronicling the differences between the two, Tim Spector, a professor of genetic epidemiology at King's College London, says that twins' health development is no longer just a question of nature versus nurture (6). Rather, genes interact with life experiences to shape future health. This field is known as epigenetics: the study of phenotypic variation arising from changes in gene expression rather than changes in DNA sequence. In recent years, many studies of monozygotic twins have been conducted to identify the sources of phenotypic discordance in twins as they age. Monozygotic twins share the same genotype, yet as they age, identical twins frequently develop different diseases. With the same DNA and genetic makeup, twins would be expected to develop in the same manner; recent studies suggest that many of the phenotypic differences can be attributed to epigenetic differences between twins. DNA methylation and histone modification are epigenetic mechanisms that alter the expression of a gene without affecting its nucleotide sequence. These mechanisms appear to act on the processes that pack and unwind DNA into chromatin. For example, DNA methylation is a chemical process through which a methyl group is added to DNA. The insertion of a methyl group alters the appearance and structure of DNA, disrupting DNA interactions with the cellular machinery necessary for transcription. In this way, DNA methylation and histone modifications act as chemical tags, indicating what, where, and when genes should be "turned on," or expressed. Conversely, epigenetic silencing can result in differential expression between twins.
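Methylation marks of this kind are laid down at CpG dinucleotides, a cytosine immediately followed by a guanine along one DNA strand. As a minimal illustration (the helper function and the toy sequence are hypothetical, not taken from any study cited here), a few lines of code can locate the candidate methylation sites in a sequence:

```python
def cpg_sites(seq):
    """Return the 0-based positions of CpG dinucleotides (a cytosine
    immediately followed by a guanine) in a DNA sequence string."""
    seq = seq.upper()
    return [i for i in range(len(seq) - 1) if seq[i:i + 2] == "CG"]

# Toy sequence, purely for illustration (not a real gene):
positions = cpg_sites("ATCGGCGTACG")
print(positions)  # -> [2, 5, 9]
```

Comparing such site lists between the methylomes of twin pairs is, at heart, what studies of epigenetic discordance quantify at genome scale.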
Epigenetic differences arise during the lifetime of identical twins due to environmental factors such as diet, chemicals in the environment, smoking, drugs, and medicines. Although identical twins share the same genetic code and initial environment in utero, their environments diverge as they age, and epigenetic changes accumulate according to lifestyle and environmental factors. For example, one study by Fraga et al. found that the variation in epigenetic patterns in older twins was four times greater than the differences observed in younger pairs (1). In many cases, environmentally induced changes in gene expression occur because of altered DNA methylation or histone modifications. Although many of the precise mechanisms by which environmental factors drive these epigenetic changes are unknown, current research is beginning to explore the molecular basis for environmentally induced alterations in DNA methylation and histone modifications (2). Identical twins may develop different diseases as they age. When both twins get the same disease, researchers can often point to a genetic element shared by the twins. However, when twins develop distinct diseases, epigenetics can play a crucial role. A good illustration of disease

variability can be drawn from the study by Fraga et al. cited above. In observing 50-year-old twins, the researchers identified one sibling as having lower DNA methylation in repeated DNA sequences, and they found that this sibling manifested substantial disease phenotypes, including impaired cardiac function (1). Some of the earliest studies of epigenetics and disease characterized distinctive epigenetic patterns in cancer (3). In 1983, Andy Feinberg and Bert Vogelstein famously published a paper demonstrating through Southern blotting that cancer cells feature reduced DNA methylation at cytosine-guanine dinucleotide pairs (CpG sites) in the DNA (4). Since then, the importance of hypermethylation has also been recognized: hypermethylation can shut off tumor suppressor genes and trigger tumor formation. In fact, these types of epigenetic changes are potentially more common in human cancer than point mutations in DNA sequences. A large proportion of the gene loci that cause familial or inherited forms of cancer have also been associated with methylation-associated silencing (5). Beyond cancer, diseases such as rheumatoid arthritis, stroke, and Crohn's disease show a stronger epigenetic influence tied to environmental factors, while reading disability, autism, and Alzheimer's disease show a stronger genetic influence. The once straightforward link between genotype and phenotype has shifted with the discovery of epigenetics. To the surprise of many, the environment plays a crucial role in altering gene expression over time, and researchers have successfully used twins to demonstrate the relevance of epigenetic mechanisms in altering individuals' phenotypes as they age. Ultimately, researchers agree that lifestyle and environmental factors influence disease phenotype.
Moving forward, one area that is certain to continue to evolve is the use of information about epigenetics in disease for the development of drugs and therapies. In the evolving world of personalized medicine, our understanding of the epigenetics of disease could become an integral part of new therapeutic approaches for complex disease phenotypes.

References:
1. M. F. Fraga, et al., Epigenetic differences arise during the lifetime of monozygotic twins. Proceedings of the National Academy of Sciences of the United States of America. 102, 10604-10609 (2005).
2. R. Feil, M. F. Fraga, Epigenetics and the environment: emerging patterns and implications. Nature Reviews Genetics. 13, 97-109 (2012).
3. A. P. Feinberg, B. Tycko, The history of cancer epigenetics. Nature Reviews Cancer. 4, 143-153 (2004).
4. A. P. Feinberg, B. Vogelstein, Hypomethylation distinguishes genes of some human cancers from their normal counterparts. Nature. 301, 89–92 (1983).
5. P. A. Jones, S. B. Baylin, The fundamental role of epigenetic events in cancer. Nature Reviews Genetics. 3, 415-428 (2002).
6. L. Eisen, Two of a kind. Canadian Broadcasting Corporation. November 27, 2014.



Investigation of the Morphology of the Heart of Zebrafish Embryos Exposed to Different Concentrations of Methylmercury

Tasneem Mahmood
Texas A&M University
Research Advisor: Dr. Louise C. Abbott, Department of Veterinary Integrative Biosciences
May 2014

Mercury is a well-known neurotoxicant. In its elemental form, mercury is easily distributed into the atmosphere due to its relatively low boiling point. Once elemental mercury becomes airborne, it can travel long distances before being deposited into soil and all types of bodies of water, including streams, lakes, rivers, and oceans, where it is converted to methylmercury by bacteria. Methylmercury can reach high concentrations in predatory or long-lived fish such as swordfish and tuna, which are prime food sources for humans. Consumption of contaminated fish or marine mammals is the major route by which humans are exposed to methylmercury. We examined the effect of different levels of methylmercury exposure on heart development of wild type zebrafish embryos (ZFEs). ZFEs were exposed to one of two different concentrations of methylmercury (10 ppb (parts per billion) or 50 ppb), or to 0 ppb methylmercury, using 24-well flat-bottom plates. A minimum of 24 ZFEs were tested with each dose of methylmercury. The 24-well plates were incubated for up to 72 h at 28.5 °C. After 24 hours of exposure to each concentration of methylmercury, all surviving embryos were transferred to fresh embryo medium without methylmercury (0 ppb). Images of stained sections of ZFEs exposed to the three different concentrations of methylmercury (0, 10 and 50 μg/l) and fixed at 72 hpf were captured, and NIH Image J was used for measurement. Each ZFE heart was assessed for normal morphological development. No significant differences were observed between any of the groups assessed.

Introduction
Methylmercury is a form of mercury that can be harmful to the developing brains of unborn babies and young children, affecting cognitive, motor, and sensory functions. The more methylmercury that accumulates in an individual's bloodstream, the longer the exposure time, and the younger the person consuming the fish, the more severe the effects may be. Elemental mercury is primarily transformed into methylmercury by sulfate-reducing bacteria that can be found in both soil and water (1-2). Methylmercury is the form of mercury that becomes bioconcentrated because it is better retained by organisms at all levels of the food chain due to its lipid solubility (3). Methylmercury can reach high concentrations in predatory or long-lived fish, including swordfish, tuna, king mackerel, and shark, which are prime food sources for humans and other mammals, especially marine mammals (4-9). Zebrafish embryos may not only be ecotoxicologically relevant models, but may also aid in the elucidation of the molecular mechanisms underlying the effects of low-level methylmercury exposure in humans (10). There are many advantages to using zebrafish embryos (ZFEs) in toxicity studies, including: rapid ex utero development; the transparency of the chorion, the protective covering found over the early ZFE; the transparency of the ZFEs themselves during early embryonic development; and the ability to provide direct, accurate chemical delivery to the ZFEs at any time during development (11-13). Additional beneficial factors for using ZFEs in toxicity studies include: a single pair of breeding zebrafish can generate up to 200 ZFEs in a single week; zebrafish reach sexual maturity by approximately 3 months of age; and a great deal of information is already known concerning zebrafish genetics and developmental biology (10, 14). The fact that use of

ZFEs affords economic simplicity due to their small size and rapid development, the ability to easily expose developing ZFEs to different toxicants, and the ability to draw on the well-characterized genetics and developmental biology of this animal model makes ZFEs an excellent model system with which to examine the effects of possible toxicants on the developing cardiovascular system (15). The effects of methylmercury on development of the ZFE neural tube reflect the well-known neural toxicity of methylmercury: all doses tested in the current study have previously been reported to cause decreased staining with proliferating cell nuclear antigen (PCNA), suggesting decreased cell proliferation (16). Perry et al. reported a decrease in mitotic index and an increased percentage of abnormal mitoses in methylmercury-treated killifish embryos (17). Smith et al. observed that adult zebrafish telencephalon cell body density was significantly decreased at all developmental methylmercury exposures greater than 0.01 μM (18). These observations are similar to those reported by Yang et al., in which exposure to methylmercury significantly impaired development of the zebrafish fin fold and the tail fin primordium (10). Furthermore, Yang et al. reported that methylmercury concentrations as low as 6 ppb (6 μg per liter), and exposure times as short as 6 h, caused defects in tail fin development. Cuello et al. reported that zebrafish larvae exposed to 5 to 25 ppb methylmercury developed a bent body axis, accumulated blood in their hearts, and presented an irregular heartbeat (19). The zebrafish heart begins development in a fashion that is similar in all vertebrates: two thin-walled primordial cardiac tubes fuse together to form the definitive heart tube by 24 hours after fertilization, and it is also around this time that contraction of the heart is initiated (20). The fish heart has two chambers, atrium

and ventricle, and blood flows first into the atrium from the sinus venosus, which receives blood from the venous end of the circulatory system. Blood then flows from the atrium into the ventricle, and blood is returned to the bulbus arteriosus, which connects to the branchial arteries and the ventral aorta to return blood to the rest of the circulatory system (21). We entered this investigation with the hypothesis that early exposure to methylmercury would have an adverse effect on heart development in zebrafish embryos. We anticipated that the hearts of ZFEs exposed to low concentrations of methylmercury for hours 5 through 30 of post-fertilization development would show significantly reduced overall morphological growth and complexity when compared to those that had not been exposed to methylmercury during the same developmental time period.

Methods Zebrafish Embryos

Adult wild-type zebrafish of the AB strain were raised in the Department of Biology at Texas A&M University and maintained under standard laboratory conditions at an ambient temperature of approximately 28.0 °C (14). Male and female adult zebrafish were paired in the evening (4-6 p.m.) and fertilized embryos were obtained at approximately 9-10 a.m. the following morning. All ZFEs were held in an incubator at a constant temperature of 28.5 °C after transfer from the Biology Department. Embryo medium, consisting of ultrapure water containing low concentrations of specific ions and adjusted to pH 7.2, was used to maintain the developing ZFEs and was freshly prepared for each experiment (14). All ZFEs were staged and fixed at specific hours post fertilization (48 and 72 hpf) as described by Kimmel et al. (22). Both adult zebrafish and ZFEs were maintained according to protocols carried out in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals (NIH Publication No. 85-23, revised 1996), and animal use was approved by the Texas A&M University Institutional Committee to Approve Animal Use Protocols.

Figure 1. Lateral view of 72 hour normal (control) zebrafish embryo.

Methylmercury Preparation and Exposure

Methylmercuric chloride (95% purity) was obtained from Alfa Aesar (Ward Hill, MA, USA). Methylmercuric chloride (methylmercury) was initially dissolved in sterile, deionized water to a concentration of 0.1 mg ml-1 and further diluted with embryo medium for ZFE exposure. All methylmercury stock solutions were stored at 4 °C until used. ZFEs were exposed to one of two different concentrations of methylmercury (10 ppb (parts per billion) or 50 ppb), or to 0 ppb methylmercury, using 24-well flat-bottom plates with low evaporation lids (BD Biosciences, San Jose, CA, USA). The total volume of embryo medium contained in each well of the 24-well plates was 2.0 ml. In each experiment, some ZFEs were placed in embryo medium without methylmercury (0 ppb) to serve as negative controls. Two or three ZFEs were added to each well, and the 24-well plates were prepared in triplicate. A minimum of 24 ZFEs were tested at each dose of methylmercury. The 24-well plates were covered with low evaporation lids and incubated for up to 72 h at 28.5 °C (Thelco Laboratory Incubator; Cole-Parmer Instrument, Vernon Hills, IL, USA). After 24 hours of exposure to each concentration of methylmercury, all surviving embryos were transferred to fresh embryo medium that did not contain any methylmercury (0 ppb) (16).

Morphological Analysis

Three groups of ZFEs were evaluated: control, exposure to 10 ppb, and exposure to 50 ppb. ZFEs from each group were anesthetized by exposure to MS-222 (a fish anesthetic) followed by rapid chilling on ice. The ZFEs were fixed in either 10% NBF or 1% paraformaldehyde:1% glutaraldehyde. Once the ZFEs were fixed, 10 hatched ZFEs per group were embedded in paraffin, sectioned sagittally at a thickness of 5 µm, and stained with hematoxylin and eosin. The slides were coded so that data collection was carried out with the investigator blinded with respect to the individual group. Heart area, pericardial area, and heart wall thickness were measured. Images of stained sections of ZFEs exposed to the three different concentrations of methylmercury and fixed at 72 hpf were captured using an Eclipse E400 microscope equipped with a 40X objective, a Nikon DXM1200 digital camera, and ACT-1 imaging software. NIH Image J was used for measurement (23).

Figure 2. Five micron thick sections of control zebrafish embryos stained with H&E. A = low magnification; B = higher magnification of heart of zebrafish embryo seen in A.
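The exposure concentrations follow from the standard dilution relation C1·V1 = C2·V2. The sketch below (the helper function is hypothetical; only the 0.1 mg/ml stock, the 2.0 ml well volume, and the 10 and 50 ppb targets come from the methods) illustrates why an intermediate dilution with embryo medium is needed: pipetting directly from the stock would require sub-microliter volumes.

```python
def stock_volume_ul(stock_mg_per_ml, target_ppb, final_volume_ml):
    """Volume of stock solution (in microliters) needed to reach a
    target concentration, via C1 * V1 = C2 * V2.
    For dilute aqueous solutions, 1 ppb = 1 ug/L = 0.001 ug/ml."""
    stock_ug_per_ml = stock_mg_per_ml * 1000.0   # mg/ml -> ug/ml
    target_ug_per_ml = target_ppb / 1000.0       # ppb -> ug/ml
    volume_ml = target_ug_per_ml * final_volume_ml / stock_ug_per_ml
    return volume_ml * 1000.0                    # ml -> ul

# 10 ppb and 50 ppb in a 2.0 ml well, starting from the 0.1 mg/ml stock:
v10 = stock_volume_ul(0.1, 10, 2.0)   # about 0.2 ul
v50 = stock_volume_ul(0.1, 50, 2.0)   # about 1.0 ul
```

Volumes this small are impractical to pipette accurately, which is consistent with the methods' note that the stock was "further diluted with embryo medium" before dosing.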

Statistical Analysis

One-way ANOVA was performed to assess differences among groups, expressed as means ± standard error of the mean (SEM). Post hoc analysis was not carried out because no data set reached statistical significance based on the ANOVA.
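One-way ANOVA compares the variance between group means to the variance within groups. A minimal sketch of the F statistic computation is below; the measurement values are invented for illustration and are not the study's data.

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: the ratio of the between-group
    mean square to the within-group mean square."""
    all_values = [x for g in groups for x in g]
    grand_mean = mean(all_values)
    k = len(groups)                      # number of groups
    n = len(all_values)                  # total observations
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical heart-volume measurements (arbitrary units) for the
# three exposure groups; values are illustrative only.
control = [4.1, 3.9, 4.3, 4.0, 4.2]
mehg_10 = [3.8, 3.7, 4.0, 3.9, 3.6]
mehg_50 = [4.0, 4.2, 3.9, 4.1, 4.3]

f_stat = one_way_anova_f([control, mehg_10, mehg_50])
```

To decide significance, the F statistic is compared against the F distribution with (k-1, n-k) degrees of freedom (e.g. via scipy.stats.f.sf); identical groups give F = 0, and larger F indicates greater between-group separation.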

Results
The zebrafish heart, like all fish hearts, consists of a single tube that has two major chambers, an atrium and a ventricle, which are located inside a body cavity called the pericardium (Figures 1 and 2). Blood flows from the body to the atrium, then to the thicker-walled ventricle, and out through the bulbus arteriosus and ventral aorta back to the body. After 24 hours of exposure (hours 6-30 after fertilization) to one of two concentrations of methylmercury, zebrafish embryo (ZFE) hearts were examined at 72 hours after fertilization. Measurements of various parameters of the heart were taken from a minimum of seven ZFEs from each group included in this study. All scoring was done with the investigator blind to the group to which each embryo belonged. Heart volume (including the atrium and ventricle), ventricular wall thickness, and pericardial cavity volume were measured.

Heart volume

Figure 3. Assessment of heart volume in 72 hour old zebrafish embryos. One-Way ANOVA indicates no significant differences between groups.

Figure 4. Assessment of pericardial cavity volume in 72 hour old zebrafish embryos. One-Way ANOVA indicates no significant differences between groups.

Heart volume data are shown in Figure 3. Heart volume from control ZFEs was compared to heart volumes from ZFEs exposed for 24 hours to either 10 ppb or 50 ppb methylmercury. We expected to observe no difference when control ZFE heart volume was compared to ZFEs exposed to 10 ppb methylmercury, but we expected the hearts of ZFEs exposed to 50 ppb methylmercury to be significantly larger than control ZFE hearts. We observed no statistical difference between any of the groups analyzed, based on analysis by one-way ANOVA.

Pericardial cavity volume

Pericardial cavity volume data are shown in Figure 4. Pericardial cavity volume from control ZFEs was compared to pericardial cavity volumes from ZFEs exposed for 24 hours to either 10 ppb or 50 ppb methylmercury. We expected to observe no difference when control pericardial cavity volume was compared to that of ZFEs exposed to 10 ppb methylmercury, but we expected the pericardial cavities of ZFEs exposed to 50 ppb methylmercury to be significantly larger than the volumes observed in control ZFEs. We observed no statistical difference between any of the groups analyzed, based on analysis by one-way ANOVA.

Ventricle thickness

The thickness of the ventricles is shown in Figure 5. When we examined ventricle thickness, we expected to observe either no difference in the thickness of the ventricular walls between any of the groups, or that the ventricles in the hearts of ZFEs exposed to 50 ppb would be thinner than control ZFE heart ventricles. We observed no statistical difference between any of the groups analyzed, based on analysis by one-way ANOVA.

Figure 5. Assessment of ventricular wall thickness in 72 hour old zebrafish embryos. One-Way ANOVA indicates no significant differences between groups.

Conclusions
In this study, we compared the morphological characteristics of the heart among three experimental groups of zebrafish embryos: control, 24 hours of exposure to 10 ppb methylmercury, and 24 hours of exposure to 50 ppb methylmercury. While no statistically significant differences were observed, several trends emerged. The lack of statistically significant results precludes any direct conclusions about the impact of methylmercury on heart morphology in zebrafish embryos. Nevertheless, based on microscopic examination of the methylmercury-exposed embryos, ZFEs exposed to the lower concentration of methylmercury (10 ppb) showed a strong tendency toward smaller heart volume and pericardial sac volume in comparison to the controls. There was no difference in the thickness of the ventricle wall among any of the groups examined. We hypothesized that this trend towards smaller volumes could indicate a developmental delay of the heart in the ZFEs exposed to 10 ppb methylmercury. A potential, and very likely, explanation for the lack of statistical significance is the small sample size; if the experiment were carried out again with a larger sample size, statistically significant results could follow. Despite the absence of statistically significant results, we believe that there is scope for further experimentation. It was interesting to note that, while only trends were observed, the heart and pericardial sac volumes of ZFEs exposed to the higher concentration of methylmercury (50 ppb) tended to be higher than those of ZFEs exposed to 10 ppb methylmercury. This trend could suggest that, while overall development of the heart may be delayed by exposure to methylmercury, exposure to the higher concentration could cause structural damage or functional deficits in the heart, which often result in cardiac edema that can cause enlargement of the heart and/or pericardial sac (24). Although the available data are promising, they are not sufficient to draw a comprehensive picture. Future investigation may focus on the long-term effects on the structure of the heart and provide a path toward understanding effects on cardiovascular functions such as heart rate. Disruption of heart development in zebrafish embryos exposed to methylmercury in the ppb range is of particular importance in light of the growing body of evidence demonstrating that the cardiovascular system is also a key target of methylmercury toxicity in humans. While zebrafish studies are only beginning to have an impact in the area of cardiovascular toxicology, the value of the model is unquestionable.
Zebrafish research is continuing to make a substantial impact on the study of heart development.

References:
1. J. R. D. Guimaraes, J. Ikingura, H. Akagi, Methyl mercury production and distribution in river water–sediment systems investigated through radiochemical techniques. Water Air Soil Pollut. 124, 113–124 (2000).
2. R. Sparling, Biogeochemistry: mercury methylation made easy. Nature Geosci. 2, 123–126 (2009).
3. F. M. M. Morel, A. M. L. Kraepiel, M. Amyot, The chemical cycle and bioaccumulation of mercury. Annu. Rev. Ecol. Systemat. 29, 543–566 (1998).
4. F. M. Al‐Ardhi, M. R. Al‐Ani, Maternal fish consumption and prenatal methylmercury exposure: a review. Nutr. Health. 19, 289–397 (2008).
5. R. Dietz, et al., Comparison of contaminants from different trophic levels and ecosystems. Sci. Total Environ. 245, 221–223 (2000).
6. C. C. Gilmour, G. S. Riedel, A survey of size‐specific mercury concentrations in game fish from Maryland fresh and estuarine waters. Arch. Environ. Contam. Toxicol. 39, 53–59 (2000).

7. R. P. Mason, J. R. Reinfelder, F. M. M. Morel, Bioaccumulation of mercury and methylmercury. Water Air Soil Pollut. 80, 915–921 (1995).
8. G. J. Myers, P. W. Davidson, J. J. Strain, Nutrient and methyl mercury exposure from consuming fish. J. Nutr. 137, 2805–2808 (2007).
9. US EPA, Fish Advisories: What You Need to Know about Mercury in Fish and Shellfish. Last updated on 18 November 2008. Available from: (accessed 3 April 2014).
10. L. Yang, et al., Transcriptional profiling reveals barcode‐like toxicogenomic responses in the zebrafish embryo. Genome Biol. 8, R227 (2007). Available from: http:// 2007/8/10/R227.
11. A. Hill, C. V. Howard, U. Strahle, A. Cossins, Neurodevelopmental defects in zebrafish (Danio rerio) at environmentally relevant dioxin (TCDD) concentrations. Toxicol. Sci. 76, 392–399 (2003).
12. P. Gonzalez, Y. Dominique, J. C. Massabuau, A. Boudou, J. P. Bourdineaud, Comparative effects of dietary methylmercury on gene expression in liver, skeletal muscle, and brain of the zebrafish (Danio rerio). Environ. Sci. Technol. 39, 3972–3980 (2005).
13. R. L. Tanguay, M. J. Reimers, Analysis of ethanol developmental toxicity in zebrafish. Methods Mol. Biol. 447, 63–74 (2008).
14. M. Westerfield, The Zebrafish Book: A Guide for the Laboratory Use of Zebrafish (Danio rerio) (University of Oregon Press, Eugene, OR, ed. 4, 2000).
15. W. Heideman, D. Antkiewicz, S. A. Carney, R. E. Peterson, Zebrafish and Cardiac Toxicology. Cardiovascular Toxicology. 5, 203-214 (2005).
16. S. A. Hassan, E. A. Moussa, L. C. Abbott, The Effect of Methylmercury Exposure on Early Central Nervous System Development in the Zebrafish (Danio Rerio) Embryo. J. Applied Toxicology. 70, 7-13 (2012).
17. D. Perry, J. S. Weis, P. Weis, Cytogenetic effects of methylmercury in embryos of the killifish, Fundulus heteroclitus. Arch. Environ. Contam. Toxicol. 17, 569–574 (1988).
18. L. E. Smith, et al., Developmental selenomethionine and methylmercury exposures affect zebrafish learning. Neurotoxicol.
Teratol. 32, 246–255 (2010).
19. S. Cuello, Analysis of Protein Expression in Developmental Toxicity Induced by MeHg in Zebrafish. Analyst. 137, 5302-5311 (2012).
20. D. Y. R. Stainier, R. K. Lee, M. C. Fishman, Cardiovascular development in the zebrafish I. Myocardial fate map and heart tube formation. Development. 119, 31-40 (1993).
21. N. Hu, H. J. Yost, E. B. Clark, Cardiac morphology and blood pressure in the adult zebrafish. The Anatomical Record. 264, 1-12 (2001).
22. C. B. Kimmel, W. W. Ballard, S. R. Kimmel, B. Ullmann, T. F. Schilling, Stages of embryonic development of the zebrafish. Dev. Dyn. 203, 253–310 (1995).
23. M. D. Abramoff, P. J. Magelhaes, S. J. Ram, Image processing with Image J. Biophot. Int. 11, 36–42 (2004).
24. X. Xu, et al., Cardiomyopathy in zebrafish due to mutation in an alternatively spliced exon of titin. Nat. Genetics. 30, 205-209 (2002).



Exploring Genomes: The need for an accessible map

Josh Tycko
University of Pennsylvania
Research Advisor: Dr. Brian Gregory
December 2014

It's been nearly 14 years since President Clinton stood with Dr. Craig Venter and Dr. Francis Collins and compared the achievements of the Human Genome Project to those of Lewis and Clark. "We are here to celebrate the completion of the first survey of the entire human genome. Without a doubt, this is the most important, most wondrous map ever produced by humankind. Today we are learning the language in which God created life" (1). The massive undertaking was motivated largely by the perceived promise that the genetic sequence would hold the key to treating innumerable diseases. Genomics has led to an enormous amount of great science, but it would be a stretch to say that the medical promise has come to fruition. Fewer than 60 genetic variants have been proven to be worth using in clinical care (2). Indeed, the biggest genomics headline of last year was the somewhat gloomy FDA crackdown on the direct-to-consumer genetic diagnostics company 23andMe, which is no longer permitted to sell its medical reports (3, 28). Successfully implementing genomic medicine will require the parallel, progressive action of several key players, including health-care providers, clinical molecular geneticists, genetic counselors, policy-makers, researchers, translational specialists, electronic medical record (EMR) and other software vendors, and patients.

Here, we will focus on the role of bioinformaticians. According to the director of genomic medicine at Duke, their key next step in making genomic medicine a mainstream reality is the "development of curated genomics databases and means to query them to guide clinical decisions" (4). In light of this assessment, one could argue that the field has neglected Clinton's exaggerated, but insightful, metaphor. The human genome could be a wonderful map, but today's genome browsers – the interface through which we visualize the map – are difficult to navigate and give no insight as to where to look next for interesting genomic features. This lack of an approachable interface has fostered a sense among the public, and also among scientists who aren't trained in computational biology, that genomes are massively confusing and inaccessible (5). There is a demonstrated need for next-generation genome browsers that make "-omic" data accessible to non-computational biologists, including other life scientists, clinicians, and students.

Part One: Next-Generation Sequencing and the Browser Ecosystem
A well-designed browser's first objective is to rapidly synthesize massive data sets into a more easily digestible visual form; the accelerating growth of these data sets drives the need for next-generation genome browsers. The Human Genome Project (HGP) was a $2.7 billion, international effort to piece together a blended genome of a few individuals (5). At peak capacity, it was sequencing 1000 bases/second (1). According to ScienceNews (see Figure 1), that number is now closer to 6 million bases/second; moreover, the CIO of Illumina recently said the biggest expense of sequencing a human genome today is storing the data (6, 29).

Indeed, since the invention of next-generation sequencing, reading DNA has been growing cheaper faster than storing information in silicon (Figure 2) (6). Recent estimates put the cost of sequencing a human genome at around $5000, and under $1000 at the few institutions with access to the new Illumina HiSeq X Ten sequencer (7). Lower costs mean that a greater number of people will have access to raw genomic data, and thus a greater demand for interactive visualizations from which medical meaning can be extracted without computational biology or programming experience. According to recent news, "Saudi Arabia, the United Kingdom, and the United States have all launched projects that in total will sequence about 100,000 individuals. The clinical-sequencing market has been estimated at more than US$2 billion" (4). Datasets of that size could drive new research directions for clinical geneticists without computational expertise – if they had an accessible tool to visualize the genomic maps and generate new hypotheses. Moreover, as the number of people with sequenced genomes moves past six figures, there will be increasing demand from non-specialists to inspect their own genomic maps. Recent news from Nature claimed, "although the $1,000 goal is within striking distance, it has not yet enabled the depth of understanding needed to make full medical or biological use of the knowledge derived from ever more genomes. Attacking that problem is the next challenge of genomics" (8). More than ever, genomics needs accessible, automated data interfaces to enable more minds, trained in different disciplines, to pore over the DNA map. Today, the UCSC Genome Browser is the most powerful and frequently used browser, and it shares the space with a few other browsers that are similarly tailored to


Figure 1: The cost and speed of genome sequencing (Science News). The cost of human genome sequencing is now even lower than shown for institutions with access to the Illumina HiSeq X Ten sequencer.

expert users. Fourteen years ago, as a graduate student, Jim Kent of UCSC wrote the 10,000 lines of code that assembled the HGP's data (9). The program was released shortly thereafter (in September 2000) as the UCSC Genome Browser and has remained the dominant browser, followed by a few others such as Ensembl and the Integrative Genomics Viewer (IGV) (10). The UCSC Genome Browser serves up to 8 million page requests per week to 185,000 unique IP addresses per month (10). There are some key differences among them: UCSC and Ensembl use a client-server model that offers access to very large databases of genomes, while IGV is installed locally on the user's machine, making it much faster than the web-based browsers but cutting it off from their third-party data (10). However, these prominent browsers are similarly interested only in expert users. In the developers' words, "the basic paradigm of the UCSC Genome Browser is to show as much high quality, whole-genome annotation data as possible and enable researchers to use their expertise to interpret data themselves" (10). One newer, but less popular, browser is JBrowse, which "helps preserve the user's sense of location by avoiding discontinuous transitions, instead offering smoothly animated panning, zooming, navigation, and track selection" (11). This open-source project stands apart from the dominant browsers by offering an ease of navigation closer to the standards set by tech products like Google Maps (Table 1); browsers like UCSC display portions of the genome as static images that must be reloaded to move (11). Unfortunately, JBrowse lacks the large reference databases that make UCSC the gold standard.

Above all, the largest problem with these browsers (in terms of enabling widespread genomic medicine) is that they are targeted at computational biologists who can code their own tools or use their expertise to select from the nearly 100,000 bioinformatics analysis tools available in order to make sense of their data (12). This paradigm makes sense for bioinformatics researchers who need to work carefully through novel and messy datasets, but it does not give the non-computational scientist access to the genomic map. In the words of one cancer genomics expert, speaking as genome browsers were moving into cloud computing, "The reversal of the advantage that Moore's Law has had over sequencing costs will have long-term consequences for the field of genome informatics. In my opinion the most likely outcome is to turn the current genome analysis paradigm on its head and force the software to come to the data rather than the other way around" (6).

Part Two: Genome browsing for clinicians and non-computational researchers

Genomic data is largely seen as confusing and inaccessible, even to life scientists without computational skills, let alone clinicians and the public. Some find genomics so intractable that they doubt it has anything of value to offer. In 2010, the major direct-to-consumer genomic testing companies, including 23andMe, were brought before a congressional hearing. Congress's investigative wing, the Government Accountability Office, provocatively wrote in its report, "the most accurate way for these



Figure 2: Decreasing costs of hard disk storage and genome sequencing. With the invention of next-generation sequencing (NGS), the cost of sequencing is falling faster than the cost of hard disk storage (Stein 2010).

companies to predict disease risks would be for them to charge consumers $500 for DNA and family medical history information, throw out the DNA, and then make predictions based solely on the family history information" (13). Needless to say, that's not quite true; FDA drug labels already include pharmacogenetic variants for 105 drugs, that is, drugs that would be dosed differently or swapped based on the patient's genetic information (14). However, no commercial electronic medical record integrates pharmacogenetic information systematically, and barely any clinicians are trained to use the current genome browsers themselves to make sense of the data (15). The Electronic Medical Records and Genomics (eMERGE) Network (which includes CHOP) has been developing tools and best practices for integrating EMRs with genomic data, which could fill this apparent gap (15). As it currently stands, the effort of genome sequencing is vastly overshadowed by the effort required to extract medical meaning from the reads. Ricki Lewis, a genetics-focused author, has publicly stated that the hurdles of interpretation are the reason she is choosing not to get her genome sequenced, at least for now. She noted, "when Stephen Quake, a Stanford University engineer and co-inventor of a DNA sequencing device, laid his genetic self bare in the pages of The Lancet in 2010, interpretation required 32 physicians" (16, 17). Notably, Dr. Lewis does not mention any consideration of the possibility that she would inspect her genomic map herself, probably because the current interfaces would be unapproachable even with her Ph.D. in genetics. There are some actors in this space; Personalis is one company developing a full platform from sequencing to medical interpretation (18). With best practices outlined by eMERGE and other researchers, automated analytic platforms could be developed that make genomic data usable by clinicians, in the form of genomics-integrated EMRs. But the development of tools like browsers that make the genomic maps themselves more accessible may be farther off. Outside of hospitals, improved genomic interfaces could accelerate biomedical research and lead to new or improved therapies. A key step here would be better integration of genomic data with epigenomic, proteomic, and transcriptomic databases, as many researchers are more interested in the expression and functions of the cell than in its genetic code alone. "Although the generation of some of these [integrated] catalogues has already begun, major advances in technologies and data analysis methods are needed to generate, for example, truly comprehensive proteomic data sets and resources" (19). If this were achieved, one could imagine every life science experiment starting with an inspection of the genomic map, just as a vacation to an unexplored place starts with a look at Google Maps. In my experience in

various life science laboratories (specializing in gene therapy, synthetic biology, and epigenetics), this is not at all the case today. For example, a typical gene therapy project starts with a thorough reading of the literature to understand the target gene's function before designing a therapeutic vector to deliver the gene to the target cell. What the researcher tries to extract about the target genetic pathway from individual papers could be greatly supplemented by interacting with a next-generation genome browser that displays the target gene in its full context, including its epigenetic marks and related functional enhancers and ncRNAs. The therapeutic vector could then, hypothetically, be better designed so that the delivered gene would interact with the target cell's transcriptional machinery more as it normally would in a human without disease. A next-generation browser would better meet the needs of non-computational life scientists if it were accessible through natural English-language queries. It could apply machine learning (training on users' interactions, or integrating knowledge from publications) to generate potential hypotheses of interest without the non-computational user necessarily knowing which data tracks would be most revealing of their target gene's genomic context. For example, the browser could detect a binding site for a well-characterized microRNA upstream of the gene of interest and suggest that the user inspect the ncRNA-seq track for their target gene. This sort of strategy is floating around the bioinformatics literature, particularly since the wealth of data collected by the ENCODE project: Additional insights will come from combining the information from different catalogues. For example, analyzing genetic variation within functional elements will be particularly important for identifying such elements in non-coding regions of the genome.
To this end, the GTEx (Genotype-Tissue Expression) project (http:// ) has been established to map all sites in the human genome where sequence variation quantitatively affects gene expression (19).

Success and failure in translational research are often determined by the scientists' knowledge of, and assumptions about, the disease's underlying biology; integrating genomic data into the research workflow seems likely to accelerate the development of successful therapeutics. Synthetic biology, a loosely defined field focused on engineering biology, often with genetic parts, could also greatly benefit from browsers accessible to its non-computational researchers. George Church, a synthetic biology scientist and founder of several genomics companies, believes genome sequencing has a purpose far beyond finding disease risk factors. In his book Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves, he states without hesitation: The real point in reading [human genomes] would be to compare them against each other and to mine biological widgets from them – genetic sequences

that perform specific, known, and useful functions. Discovering such sequences would extend our ability to change ourselves and the world because, essentially, we could copy the relevant genes and paste them into our own genomes, thereby acquiring those same useful functions and capacities (20). Most synthetic biologists do not expect to re-engineer humanity's DNA outside of disease contexts in which patients lack safer alternative treatments. However, they are very interested in mining biological widgets for use in the production of drugs or biofuels, for example. An accessible genome browser would enhance the workflow of synthetic biologists by enabling rapid comparison and comprehension of interesting genetic widgets across organisms. It could also impact synthetic biology research while training a new generation of genomic scientists by integrating with iGEM (International Genetically Engineered Machines), a student synthetic biology research competition. The program is now in its 10th year and annually includes over 3,000 participants in America, Europe, and Asia, with estimated annual research expenditures of up to $10 million (21). A 10-year study observed that the finalists in the competition are frequently teams that find new genes in the literature to incorporate into engineered genetic circuits; in my experience, these decisions are often made on the basis of a few papers that describe a peculiar phenotype in a bacterial cell, which can then be re-engineered for some useful purpose (21). An accessible genome browser integrated with other "-omic" datasets and automated analysis could enable teams to mine for larger biological modules, like a negative feedback loop, as opposed to a single widget, like the inducible promoter that led a German team to victory in 2012.
iGEM has been very successful at training and motivating students to pursue life science research, and in return the students have produced useful biological tools (such as non-fluorescent reporter genes) and software. Integrating genomics into this energetic cycle of education and innovation only requires making the data interfaces more accessible to the non-expert user, and could inspire the next generation to explore the genome map.

Conclusion

Given the accelerating output of genomic data, ever more people could benefit from interacting with genomic maps. However, the current field of genome browsers predominantly caters to expert users (computational biologists) and is unapproachable even to other trained life scientists, let alone clinicians and students. To remedy this problem, accessible genome browsers should be developed with ease of interface and interpretability of the data as key design factors; they should accept natural-language queries, automatically generate hypotheses of interest by learning from user preferences and published workflows, and automate the standard analysis pipelines for use without programming skills.

Acknowledgements: I gratefully thank Dr. Brian Gregory, Assistant Professor of Biology at UPenn, for his insights, edits, and support.

References
1. F. S. Collins, The Language of Life: DNA and the Revolution in Personalized Medicine (Harper, New York, 2010), p. 304.
2. R. C. Green, et al., ACMG recommendations for reporting of incidental findings in clinical exome and genome sequencing. Genetics in Medicine. 15, 565-574 (2013).
3. 23andMe, 23andMe - Genetic kit for ancestry. (2015).
4. G. Ginsburg, Medical genomics: Gather and use genetic data in health care. Nature. 508, 451 (2014).
5. G. Church, Improving genome understanding. Nature News. 502, 143 (2013).
6. L. D. Stein, The case for cloud computing in genome informatics. Genome Biol. 11, 207 (2010).
7. E. Check Hayden, Technology: The $1,000 genome. Nature News. 507, 294 (2014).
8. Nature News, How to get ahead. Nature News. 504, 273 (2014).
9. UCSC, Timeline: UCSC leadership in genomics. (2015).
10. R. M. Kuhn, D. Haussler, W. J. Kent, The UCSC genome browser and associated tools. Brief Bioinformatics. 14, 144-161 (2013).
11. M. E. Skinner, A. V. Uzilov, L. D. Stein, C. J. Mungall, I. H. Holmes, JBrowse: a next-generation genome browser. Genome Res. 19, 1630-1638 (2009).
12. J. Mesirov, Approaches to Genomic Medicine. University of Pennsylvania, Philadelphia. 1 May 2014. Lecture.
13. M. Wohlsen, Biopunk: DIY Scientists Hack the Software of Life. (Current, New York, 2011). Print.
14. FDA, Table of pharmacogenomic biomarkers in drug labeling. (2015). scienceresearch/researchareas/pharmacogenetics/ucm083378.htm.
15. O. Gottesman, et al., The Electronic Medical Records and Genomics (eMERGE) network: past, present, and future. Genetics in Medicine. 15, 761-771 (2013).
16. R. Lewis, Why I don't want to know my genome sequence. (2012). htm?post=882201.
17. E. A. Ashley, et al., Clinical assessment incorporating a personal genome. Lancet. 375, 1525-1535 (2010).
18. Personalis, Advanced Genome Services: Research Overview. (2015).
19. E. D. Green, M. S. Guyer, Charting a course for genomic medicine from base pairs to bedside. Nature. 470, 204-213 (2011).
20. G. M. Church, E. Regis, Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves. (Basic, New York, 2012). Print.
21. C. Vilanova, M. Porcar, iGEM 2.0 - refoundations for engineering biology. Nat Biotechnol. 32, 420-424 (2014).
22. D. Karolchik, et al., The UCSC Genome Browser database: 2014 update. Nucleic Acids Res. 42, D764-770 (2014).
23. P. A. Fujita, et al., The UCSC Genome Browser database: update 2011. Nucleic Acids Res. 39, D876-882 (2011).
24. W. Burke, M. J. Khoury, A. Stewart, R. Zimmern, The path from genome-based research to population health: development of an international public health genomics network. Genet Med. 8, 451-458 (2006).
25. T. A. Manolio, et al., Implementing genomic medicine in the clinic: the future is here. Genet Med. 15, 258-267 (2013).
26. D. G. MacArthur, et al., Guidelines for investigating causality of sequence variants in human disease. Nature. 508, 469-476 (2014).
27. C. Farr, Gene startup 23andMe casts eyes abroad after US regulatory hurdle. Reuters. May 6, 2014.
28. C. Seife, 23andMe is terrifying, but not for the reasons the FDA thinks. Scientific American. November 27, 2013.
29. B. Mole, The gene sequencing future is here. Science News. February 6, 2014.


Call for Submissions

Looking for a chance to publish your research? PennScience is accepting submissions for our upcoming Fall 2014 issue! Submit your independent study projects, Senior Design projects, reviews, and other original research articles to share your work with fellow undergraduates at Penn and beyond. Email submissions and any questions to

Research in any scientific field will be considered, including but not limited to:

Biochemistry | Biological Sciences | Biotechnology | Chemistry | Computer Science | Engineering | Geology | Mathematics | Medicine | Physics | Psychology




Dynamic Reassessment of Awaited Outcomes
Darby Breslow
University of Pennsylvania

The subjective value of a future reward has been implicated in determining a subject's willingness to wait. Previous work suggests an awaited reward's subjective value can change as a function of elapsed time, and the theoretical pattern of change depends on the timing environment. The present study used a behavioral task to evaluate the specific subjective value of a future reward for an individual at given points in time. In Experiment 1, participants played a decision-making game and could be offered a lesser-value trade to move on to a new trial, providing an estimate of what the reward was worth to the subject at that moment. Indifference values for each time probe were calculated and showed a significant difference between early and late trade times and a significant difference in late trade times between two different temporal conditions. These results indicate that subjects value a reward more after they have already waited, regardless of the time distribution, and that subjects value a reward more in a uniform time distribution. Experiment 2 was designed to decipher why participants' subjective values were higher than ideal in certain temporal distributions. Its goal was to determine whether subjects were unable to understand the temporal landscape of the task, which would explain why they acted sub-ideally in Experiment 1. A survival analysis was completed to estimate participants' giving-up times, which were significantly higher than ideal regardless of the presence of counterfactual feedback information. These results show that subjects did not have trouble understanding the temporal environment of the task, but it is possible they did not know how to implement this knowledge in an ideal game strategy.

Introduction

In this paper, we discuss experiments on the subjective value of a reward at a given time during a delayed-gratification study. Decision-making in delayed-gratification situations in humans is often explained through ideas of self-control, discounting and reward evaluation, and temporal uncertainty. Self-control, or the ability to resist temptation, is an important factor in the decision-making process. Many instances show an individual abandoning a later reward when they do not possess self-control, as shown by Mischel and colleagues in the well-known "marshmallow test" (1-3). In this study, young children were given a less desired immediate reward and were told that if they waited for the experimenter to return, they would be given a more preferred reward. Most of the children started to wait for the greater reward but eventually gave up and accepted the lesser one. This delay-of-gratification failure paradigm can be explained in two particular ways. Mischel and colleagues showed that sometimes people do not wait for a later reward because of weak self-control. Another possibility, however, is that people assign a subjective value to a future reward, called discounting, that leads an individual to wait for a reward or give up depending on a number of cost and benefit factors that evolve as time goes by. In the brain, this assignment of subjective value to different rewards during decision-making is seen in the ventral striatum, medial prefrontal cortex, and posterior cingulate cortex (4). These regions assign subjective values to different monetary rewards to determine an individual's willingness to wait. A greater and sooner reward produces increased activity in these brain regions, showing that there is a subjective scaling of different delayed monetary rewards.
Dopamine release from the ventral tegmental area of the brain signals the reward circuitry, and studies show that all three of these regions have increased activity during or right before reward delivery (5). In addition to self-control and discounting, the magnitude of temporal uncertainty a decision-maker faces can also be an influence.

Individuals may be built to interpret temporal uncertainty and determine subjective value based on optimal statistical inference (6). In that experiment, participants were asked to predict durations such as life spans, movie lengths, and poem lengths. Their estimates fell in line with a Bayesian model of statistical inference in many cases, suggesting that cognition and higher-level thinking may be grounded in statistical distributions that exist in the world. Temporal discounting, or the tendency to discount a reward depending on an individual's perception of the expected delay, together with self-control, makes up the decision-making landscape for an individual. In many cases, an individual must decide between a smaller, more immediate reward and a greater reward in the future. Decision-making in this delayed-gratification paradigm is often extremely difficult, especially when the immediate reward can be just as satisfying. Often, it is not a lack of self-control but rather temporal discounting that influences decision-making. Temporal discounting is influenced by the uncertainty of time: estimating when a reward might arrive influences the subjective value of the reward and how willing an individual is to wait (7-8). In an experiment performed by McGuire and Kable (2012), temporal uncertainty was manipulated to determine when participants decided to stop waiting for a delayed reward. Participants were given the option to take a smaller reward immediately (a 1-cent coin) or wait for an unknown amount of time to receive a 15-cent coin. Each participant had seven minutes to earn as much money as possible. Participants had to decide when it was best to wait and when it was best to move on after a few seconds, depending on the delay distribution of the experiment. In the uniform distribution trials, it was best to wait, as all coins gained value after 12 seconds.
In the Pareto distribution, participants would do better by giving up after a few seconds, as most coins gain value quickly while the rest may take 20 seconds or longer. This study showed that participants were quick to formulate their strategy

when placed under different temporal delays, suggesting that instead of a self-control failure, participants are more likely to alter their willingness to wait based on temporal and subjective-value factors. To determine the temporal landscape around them, people explore many possible outcomes rather than quickly choosing one pathway. This exploration is necessary to learn about the surrounding environment and to ensure the exploitation of the most strategic pathway in the future. This theory is often called "exploration versus exploitation" in the current literature, which argues that exploration is a thought-out strategy rather than a random determination of willingness to wait. Nishimura (1999) completed an early study in this field examining the giving-up time of a sit-and-wait forager in a stochastic environment. Animals entering a new environment searching for prey do not know whether a given patch contains prey, so exploring the environment rather than simply giving up and entering a new environment can be a positive strategy (9). Altering one's giving-up time in favor of a mixed strategy can ultimately lead to a more ideal final strategy. Daw and colleagues (2006) used a gambling task to show that participants' decisions reflected a computational strategy of guiding exploration by expected value, addressing the tension between exploration and exploitation. The decision to explore the environment and accept a suboptimal reward seems to be determined probabilistically based on the potential future reward (10). Using functional magnetic resonance imaging, Daw and colleagues (2006) saw that the frontopolar cortex and the intraparietal sulcus were active during exploration, while the striatum and the ventromedial prefrontal cortex were active during exploitation decision-making.
Under conditions of uncertainty, subjects appear to switch between exploration and exploitation, monitored by different brain regions and fit to a value-sensitive model (10). Cohen, McClure, and Yu (2007) reviewed the current literature on exploration versus exploitation and found that many studies suggest an interaction between the acetylcholine and norepinephrine neuromodulatory systems and the dopamine-mediated reinforcement learning mechanisms in regulating this behavior. These neuromodulatory systems seem to take the uncertainty of the environment into account to mediate the switch between exploration and exploitation (11).
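The intuition behind the two timing environments discussed above, where waiting pays off under a uniform delay distribution but quitting early pays off under a heavy-tailed one, can be illustrated with a small simulation. This sketch is ours, not the authors' code: the 1-cent and 15-cent rewards follow McGuire and Kable (2012), but the distribution parameters and the fixed per-trial overhead are invented for illustration.

```python
import random

def reward_rate(delay_sampler, give_up, n=200_000, seed=1,
                large=15, small=1, handling=2.0):
    """Monte Carlo estimate of earnings (cents) per second for a policy
    that waits at most `give_up` seconds for the large reward, then
    settles for the small one. `handling` is an assumed fixed per-trial
    overhead (not a parameter reported in the study)."""
    rng = random.Random(seed)
    cents = seconds = 0.0
    for _ in range(n):
        delay = delay_sampler(rng)
        if delay <= give_up:
            cents += large               # waited long enough: 15-cent coin
            seconds += delay + handling
        else:
            cents += small               # gave up: 1-cent coin
            seconds += give_up + handling
    return cents / seconds

# Illustrative delay distributions (parameters are our own guesses):
uniform = lambda rng: rng.uniform(0, 20)             # any delay up to 20 s equally likely
heavy = lambda rng: min(rng.paretovariate(1.0), 90)  # mostly short, occasionally very long

print("uniform: wait it out ->", round(reward_rate(uniform, 20), 2),
      "| quit at 3 s ->", round(reward_rate(uniform, 3), 2))
print("heavy-tailed: wait it out ->", round(reward_rate(heavy, 90), 2),
      "| quit at 3 s ->", round(reward_rate(heavy, 3), 2))
```

With these illustrative parameters, the patient policy earns the higher rate in the uniform environment, while the quit-early policy earns the higher rate in the heavy-tailed environment, which is the pattern the ideal decision-maker in the text is expected to track.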

Experiment 1 Overview

Our study is a continuation of the previously described experiment by McGuire and Kable (2012). As of now, we understand when participants quit under different unknown temporal distributions. McGuire and Kable (2012) hypothesized, but did not directly demonstrate, that decision makers continuously reevaluate the awaited reward as time elapses. In Experiment 1, we set out to determine what subjective value participants placed on the awaited rewards, and how that value depends on temporal uncertainty. To do this, participants played a game similar to the experiment outlined above, but without a continuous opportunity to quit. Instead, discrete probes appeared on the screen prompting the individual to quit or keep playing the round. As before, different temporal distributions framed when it was optimal to take a lower payoff or wait for the higher reward. This will allow us to

more precisely estimate what value individuals place on a reward at specific points in time. We predicted that participants would be more willing to wait when rewards are equally likely to arrive at any moment, which would result in the subjective value of the reward increasing as time goes on. Participants should be less willing to wait if the reward is likely to take a long time to arrive, which would result in the subjective value of the reward decreasing as time goes on.

Figure 1


The task was programmed using the Psychophysics Toolbox extensions for Matlab; Figure 1 shows the interface. The coin on the screen had no value, but would mature to 10 points after a given amount of time. Participants could be probed with opportunities to trade in their coin for a lower payoff at certain time intervals. Probes could appear after one or eight seconds, with values of -2, 0, 2, 4, or 6. The probability of a trade appearing at a probe time was 0.5, and each of the trade values was equally likely. Participants had two seconds to click and accept the trade, or to ignore it and continue the trial. The task duration was 40 minutes, and the screen continuously displayed the time remaining and the total points. The probability of coin maturation was equally distributed across distinct intervals in a given distribution, which depended on the condition assigned to each subject. The uniform distribution can be seen in Figure 2, and the heavy-tailed distribution in Figure 3. Intervals for the uniform distribution were equally spread across a 20-second time span for each trial. In the heavy-tailed distribution, intervals were closely condensed in the first five seconds of the trial, then continued to spread out until the last interval arrived at 30 seconds. In previous studies, the uniform distribution condition made participants persistent: they were willing to wait longer for the coin to mature. Participants in the heavy-tailed distribution, however, were not willing to wait long for the coin to mature and gave up quickly (7). Participants were recruited from around the Penn community (n = 32, 19 female) through the website Experiments at Penn. Subjects aged 18-33 (mean = 22.375) with 12-20 years of education (mean = 15.375) signed up online to participate in the experiment.
Each participant was randomly assigned to either the uniform distribution or the heavy-tailed condition (n = 16 each) unbeknown to the experimenter and the subjects. Participants received identical instructions and were told they could make an additional $5-10 according to performance on the task, but they were not informed about the possible distributions or probe arrival times. Instructions included six demonstrations: two showSPRING 2015 | PENNSCIENCE JOURNAL



Figure 3

analyzing the data, indifference values for each probe in each condition was calculated, shown in Table 2. The difference between the indifference values within conditions is also outlined in Table 2. Within conditions, there was a significant difference between the indifference values at each probe, shown in the slopes of the lines in Figure 5. As hypothesized, the factor of time accounted for a difference in subjective value of the reward. For the uniform distribution condition, the statistical values were t(15)= 6.58, p= <0.001, showing that the subjective value is significantly increasing as the trial goes on. For heavy-tailed distribution condition, the statistical values t(15)= 3.20, p= 0.006, showing that there is a significant difference in the subjective value of the reward at different time points in the trial. Between conditions, there was a significant difference between the indifference values of the eightsecond probe (t(30)= 2.28, p= 0.0298) and trends towards a difference between the one-second probes (t(30)= 1.70, p= 0.1003). The difference between conditions for the differences of the one and eight second probes had the values t(30)= 0.53, p= 0.6. Figure 4

ing coin maturation, two prompting participants to take the trade, and two prompting participants to ignore the trade. Trials that included a probe were used to get the best understanding of the subjective value of the coin at that moment. Data analyses using Excel calculated how often each trade value was taken at both the one and eight-second probes to obtain overall percentages for each value. The indifference value was calculated using Matlab. Specifically, we fit a logistic function that saturates at zero and one to subjects’ probe acceptance rates and used the midpoint of the logistic curve as the indifference point. This value represents when individuals were equally likely to take or ignore a trade. An indifference value was calculated at the one and eight-second probes for each participant. An example of a participant’s indifference value can be seen in Figure 4. Indifference values were averaged across participants to find the average indifference value for each probe for both distributions. The difference between the probe values within one timing condition was calculated for significance using a paired T-test. The difference between the two timing conditions for one probe value was calculated for significance using an independent-samples T-test. From this information, we predicted that the indifference values would increase between the one and eight-second probes in the uniform condition, as the reward gets closer and more subjectively valuable. We expect the indifference values would decrease between the probes in the heavy-tailed distribution, as the likelihood of a fast reward decreases as time goes on in this condition.


First, only probe events were analyzed. The mean number of trials for the one- and eight-second probes is outlined in Table 1. Rewards were more likely to arrive before eight seconds than before one second, accounting for the difference in trial counts between the two probes. Also, if the subject took a probe at one second, the trial was complete, so the eight-second probe would not be reached.

Figure 5

Discussion of Experiment 1

Findings from the data show a significant difference between indifference values within conditions and a significant difference between the indifference values of the eight-second probes across conditions; a trend towards a difference in the one-second probes was also seen. For the uniform distribution condition, these results fall in line with our predictions. Expanding on McGuire and Kable (2012), people were more willing to wait in the uniform distribution, as the probability of the reward arriving approached one as time passed. Subjects should be willing to accept a lower trade after one second, as they have not yet invested much time in the trial. As they continue waiting, however, subjects should hypothetically want a higher reward to compensate for time lost and for the fact that the coin is approaching maturation. This theory is represented in the data collected. The heavy-tailed distribution condition deviated from our original expectations. McGuire and Kable (2012) showed that subjects tended to give up waiting after a few seconds in the heavy-tailed distribution, suggesting they deciphered the distribution to form the best waiting strategy.

Table 1

But in this experiment,

Table 2

participants demanded a higher trade value after eight seconds compared to one second. This is shown in the slope of the line for the heavy-tailed distribution in Figure 5, which should be negative according to our expectations combined with the findings of McGuire and Kable (2012). In the heavy-tailed distribution, if the reward does not arrive within the first few seconds, it has the highest probability of arriving only after an extended period of time. Optimally, giving up after the first few seconds to move on to a new trial would offer the greatest monetary value, so we expected the subjective value at the eight-second probe to be lower than at the one-second probe in the heavy-tailed distribution. However, the indifference value of the eight-second probe was significantly higher than that of the one-second probe, as shown in the slope of the line for the heavy-tailed distribution in Figure 5. There are a few possible reasons why behavior in the heavy-tailed distribution condition might have differed from our original predictions. One possibility is that subjects could not decipher the heavy-tailed distribution, hindering them from making accurate decisions to maximize profits. If they did not learn how the time intervals were distributed, it makes sense that participants would demand a higher trade value at the eight-second probe. To determine the timing distribution, subjects could have been prioritizing sampling the environment in order to better decipher the temporal landscape. Purposely letting trials run all the way through could be a strategy to figure out the average coin maturation time. This is an example of the exploration-versus-exploitation trade-off described previously, where allowing trials to run all the way through is a way of exploring the temporal landscape of the task before choosing, or exploiting, one strategy. This prioritization of exploration over exploitation could explain why subjects waited longer than ideal in the heavy-tailed distribution.
To test this possible explanation for the less-than-ideal strategy of subjects in the heavy-tailed distribution, we decided to change the experimental design for Experiment 2 to eliminate the incentive to wait for the sake of exploring. Therefore, we can test if participants could not understand the temporal distribution of the task, and if they were previously prioritizing exploration.

Experiment 2 Overview

With the previous setup of the experiment, subjects had no choice but to wait for the trial to finish if they rejected the eight-second trade. To better test the effect of exploration and strategy implementation on participants' willingness to wait, we will no longer use the trade setup. Rather, participants will be able to quit at any time, and the task period will be only 15 minutes. This is a return to the original experimental design used in McGuire and Kable (2012). We will focus only on the heavy-tailed distribution, as this was where the data deviated from our predictions in Experiment 1; behavior was also furthest from optimal in the heavy-tailed distribution in the McGuire and Kable (2012) study. Experiment 2 will have two conditions to test how exploration and counterfactual feedback information affect a subject's behavior. Condition 1 is a control condition in which participants are given no information about the temporal distribution of the trial. The task will have a coin at the top of the screen that starts off with no value; after a certain amount of time it will mature and be worth 10 cents. Participants can then sell the coin by pressing the space bar to move on to the next trial. If the coin is taking too long to mature and they want to move on to a new trial before the reward arrives, participants can press the space bar at any time. Ideally, participants should wait 2.2 seconds, and if the coin has not matured by then, they should give up and move on to the next trial. This playing strategy would yield the highest frequency of rewards and the largest overall payoff. Condition 2 will give participants counterfactual feedback information to eliminate the exploration-versus-exploitation conundrum. If a subject gives up on a trial before the reward arrives, a small mark will appear on the progress bar indicating when the reward would have arrived.
In this way, the experimental layout eliminates the need to wait longer for the sake of exploration, because the information is available to the participants at all times. In this case, players would ideally give up right away on a number of trials at the beginning of the game to discover the temporal layout, and then once they have understood the distribution they would play accordingly. If, in fact, subjects could not decode the timing distribution of the experiment, we predict that given the opportunity to get a better idea of the temporal distribution of the trial, participants will play in a manner closer to the ideal strategy. We predict this will occur because participants will get more information about the task overall, and the motive to explore by waiting will be reversed because of the counterfactual feedback information. Our hypothesis for Experiment 2 states that if subjects are more directly shown the temporal distribution through counterfactual feedback information, they will implement the ideal game strategy in the heavy-tailed distribution. However, if subjects did understand the distribution but did not know how to implement this information, this manipulation would tell us that over-persistence in the heavy-tailed distribution is not driven by strategic exploration.
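To see why an early giving-up time is reward-maximizing under a heavy-tailed distribution, one can simulate the expected reward rate for different quit times. The distribution below is an assumed stand-in (a mixture of fast and very slow maturation times), not the task's actual schedule, so it illustrates the shape of the argument rather than reproducing the 2.2 s optimum.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed heavy-tailed maturation times: mostly fast, occasionally very
# slow. (The task's actual distribution is not specified here.)
n = 100_000
fast = rng.uniform(0.0, 2.0, n)
slow = rng.uniform(20.0, 40.0, n)
delays = np.where(rng.random(n) < 0.5, fast, slow)

REWARD = 10.0  # cents per matured coin
ITI = 1.0      # assumed inter-trial interval, seconds

def reward_rate(quit_time):
    """Average cents per second earned if we give up after `quit_time` s."""
    won = delays <= quit_time            # trials where the coin matured in time
    time_spent = np.minimum(delays, quit_time) + ITI
    return (REWARD * won).sum() / time_spent.sum()

rates = {t: reward_rate(t) for t in (1, 2, 4, 8, 16, 40)}
best = max(rates, key=rates.get)
# With this stand-in distribution, quitting early maximizes earnings:
# waiting out the long tail costs far more time than the reward is worth.
```

Because the long delays dominate the expected waiting time, always waiting to the end earns far fewer cents per second than giving up just after the fast rewards would have arrived.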


The task was programmed using the Psychophysics Toolbox extensions for Matlab; Figure 6 shows the interface with the counterfactual feedback information present. Participants could wait for the coin on the screen to mature or press the spacebar at any time to sell it; a blue mark would appear showing the scheduled maturation time in condition 2 only. The task duration was 15 minutes, and the screen continuously displayed the time remaining and total points. The same heavy-tailed distribution from Experiment 1 was used for both conditions. Participants were recruited in the same way as in Experiment 1 (n = 40, 24 female); subjects had an age range of 18-34 (mean = 21.48) with 12-21 years of education (mean = 14.76). Each participant was randomly assigned to either the control condition or the counterfactual feedback information condition (n = 20 each), unbeknownst to the experimenter and the subjects. Participants were told they could make an additional $9-12 according to performance on the task, but they were not informed about the temporal distribution of the task. Instructions differed slightly between the two conditions, but both included four demonstrations: two showing coin maturation, and two prompting participants to give up before coin maturation. Subjects in condition 2, where counterfactual feedback information was given, were told that a blue mark would briefly appear to signal the scheduled maturation time of the coin in each trial. Data analyses in RStudio estimated subjects' willingness to wait using survival analysis. Quit trials offer a direct estimate of how long subjects were willing to wait for a reward to arrive, but when a reward is delivered, we can only infer that participants were willing to wait at least the length of the trial. By using survival analysis, we can use the results from both trial types to best estimate a subject's willingness to wait. Survival analyses approximate how long a trial would "survive" before a subject gave up and moved on to a new trial.
Reward trials were "right-censored," analogous to patients who drop out of a clinical study and provide information on survival only up to their drop-out time. Participants' probabilities of waiting were fit to a Kaplan-Meier survival curve, a nonparametric estimator of the survival function (12). The area under the survival curve (AUC) gave the average amount of time a participant was willing to wait on each trial. The difference in AUC between conditions was tested with a two-tailed nonparametric Wilcoxon rank-sum test, and the difference between AUC and the ideal wait time within conditions was tested with a two-tailed nonparametric Wilcoxon signed-rank test.
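The survival-analysis step can be sketched directly: treat quit times as events and reward deliveries as right-censored observations, build the Kaplan-Meier curve, and integrate it to get the average willingness to wait. The trial data below are invented, and the estimator is implemented by hand rather than via the RStudio packages the authors used.

```python
import numpy as np

def km_auc(times, observed, horizon):
    """Kaplan-Meier estimate of willingness to wait.

    times:    time of quitting (event) or reward delivery (censoring)
    observed: True where the subject quit; False where the trial was
              right-censored by a reward arriving first
    horizon:  longest time considered (area is computed up to here)
    Returns the area under the survival curve: the average time the
    subject was willing to wait per trial.
    """
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    order = np.argsort(times)
    times, observed = times[order], observed[order]

    surv, t_prev, auc = 1.0, 0.0, 0.0
    at_risk = len(times)
    for t, quit_ in zip(times, observed):
        auc += surv * (t - t_prev)       # rectangle under the step curve
        if quit_:
            surv *= 1.0 - 1.0 / at_risk  # KM step down at each quit event
        at_risk -= 1                     # risk set shrinks at every time
        t_prev = t
    auc += surv * (horizon - t_prev)     # tail out to the horizon
    return auc

# Invented trials: quit times (observed) and reward times (censored)
times = [1.5, 2.0, 3.0, 4.0, 6.0, 8.0]
observed = [True, False, True, True, False, True]
auc = km_auc(times, observed, horizon=10.0)
```

Censored (reward) trials do not step the curve down; they only shrink the risk set, which is how the method uses the fact that the subject was willing to wait at least that long.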


Figure 7 shows an example of a survival curve with an AUC value of 2.96 s, the closest to ideal we observed in this experiment. The median AUC value for condition 1 was 7.37 s, with first and third quartile values of 5.49 and 9.94 s, respectively. The median AUC value for condition 2 was 8.35 s, with first and third quartile values of 6.08 and 10.34 s, respectively. Comparing AUC values in the two groups shows no significant difference between having access to counterfactual feedback information or not (Wilcoxon rank sum W = 180, n = 20 per condition, p = 0.6017). Subjects in both conditions waited significantly longer than the reward-maximizing point of 2.2 s (condition 1 signed-rank V = 210, p < 0.001; condition 2 signed-rank V = 207, p < 0.001).

Figure 7

Discussion of Experiment 2

Findings from Experiment 2 showed no significant difference in performance when counterfactual feedback was given throughout the task, and participants in both conditions waited significantly longer than the ideal 2.2 s. Nishimura (1999) showed that exploring an environment is sometimes preferred to exploiting one reward pathway (9), and by adding counterfactual feedback information we expected to remove the need to wait in order to explore the temporal environment of the task. Our results imply that people do not learn the timing distribution more easily when given counterfactual feedback information. With the counterfactual feedback in condition 2, we expected subjects to "explore" the temporal environment much more quickly by giving up on a number of trials at the beginning rather than waiting for extended periods. Participants did not adopt this approach, and waited as long as participants without the added feedback. Therefore, strategic exploration was not the key factor driving people's excessive persistence in the heavy-tailed distribution. Anecdotally, many participants told us after the task that they understood coins matured either early or late in the trial. Most likely, then, participants did not have trouble learning the temporal distribution of the task, which explains why the added counterfactual feedback did not improve performance relative to the control condition. This falls in line with our prediction that counterfactual feedback information would help participants learn the temporal environment of the task but would not improve performance if understanding that environment was not the problem.
Perhaps the reason for excessive persistence is not exploration, but rather the inability to create a clear game strategy from the information obtained about the temporal landscape of the task.

General Discussion

The current research suggests that individuals constantly reevaluate the temporal landscape around them to estimate the subjective value of a reward, shaping how they make decisions. As the field of decision-making research continues to expand, understanding exactly how self-control, discounting, temporal uncertainty, and subjective value work together in decisions will be key (1, 4, 6, 7). Literature to date has failed to connect these decision-making factors, but we now have a clearer picture of how discounting and subjective value, in addition to temporal uncertainty, modify decisions. Rather than lacking self-control, which was one of the first hypotheses of decision-making, people seem to continuously evaluate the temporal world around them, and the future reward itself, to make statistically sound decisions (1, 6, 7, 8). Studies have shown that exploration can be preferred as a suboptimal action that furthers understanding of the surrounding environment (9, 10, 11), but in our experiment removing the need for exploration still did not push people to the ideal exploitation strategy. In future research, it would be interesting to decipher why participants diverged from our expectations in the heavy-tailed distribution. From Experiment 2, we can conclude that participants did not perform better when counterfactual feedback information was added. It is possible that participants understood the idea of the heavy-tailed distribution, namely that rewards came either early or late, but were not sure how to implement this knowledge in a game strategy. Understanding the basic distribution of reward arrival times does not necessarily translate into understanding the task at hand. Because of the trade setup of the task in Experiment 1, it might have been difficult for participants to create a concrete playing strategy.
If no trade popped up during a trial, participants had no option but to wait, and if they refused a trade at first, they had to wait for the trial to complete as well. Participants might not have recognized that, because rewards may come late, it is best to take the lower trade option if it arises. After removing the trade setup of the task in Experiment 2, subjects still did not perform more ideally, even when counterfactual feedback information was added. Participants may have been unwilling to accept a lower reward after already investing time by waiting, even if they understood that rewards came early or late in the trial. This is known as a sunk cost: they have already invested a cost, in the form of time, that they cannot get back. Although the probability of coin maturation decreases as the trial goes on, eventually the reward will come, and subjects may prefer to wait rather than completely waste their time. Participants may not have understood that sacrificing the full reward is worth moving on to a new trial where the full reward may come sooner. These game strategies are not clearly apparent just from deciphering the temporal landscape of the task. To test the idea that subjects did not understand how to use their knowledge of the temporal distribution to create a playing strategy, future experiments could use an altered response interface that encourages players to think about their quitting strategy. Participants could mark a point on the progress bar that reflects their giving-up time, the moment when they will move on to a new trial if the reward has not arrived. This would force subjects to explicitly pick a quitting time in advance, rather than make an impulsive decision halfway through a trial once they have decided they have waited too long. This task setup could help subjects think actively about their quitting time, which in turn could lead to a playing strategy closer to the statistical ideal in the heavy-tailed distribution. Our present findings suggest that people place a higher subjective value on a reward after they have invested time in waiting, and that this subjective value is higher under a uniform distribution condition. This supports our general hypothesis that people's reward evaluation mechanisms are sensitive to their timing environment; however, this form of sensitivity was not what we expected in the heavy-tailed distribution condition. People seem able to adjust their persistence in an adaptive and context-sensitive way, but there appear to be limitations on how successfully they can conform to a statistically ideal strategy.

References:
1. W. Mischel, E. B. Ebbesen, Attention in delay of gratification. Journal of Personality and Social Psychology 16, 329–337 (1970).
2. W. Mischel, E. B. Ebbesen, A. R. Zeiss, Cognitive and attentional mechanisms in delay of gratification. Journal of Personality and Social Psychology 21, 204–218 (1972).
3. W. Mischel, Y. Shoda, M. L. Rodriguez, Delay of gratification in children. Science 244, 933–938 (1989).
4. J. W. Kable, P. W. Glimcher, The neural correlates of subjective value during intertemporal choice. Nature Neuroscience 10, 1625–1633 (2007).
5. H. C. Cromwell, O. K. Hassani, W. Schultz, Relative reward processing in primate striatum. Experimental Brain Research 162, 520–525 (2005).
6. T. L. Griffiths, J. B. Tenenbaum, Optimal predictions in everyday cognition. Psychological Science 17, 767–773 (2006).
7. J. T. McGuire, J. W. Kable, Decision makers calibrate behavioral persistence on the basis of time-interval experience. Cognition 124, 216–226 (2012).
8. J. T. McGuire, J. W. Kable, Rational temporal predictions can underlie apparent failures to delay gratification. Psychological Review 120, 395–410 (2013).
9. K. Nishimura, Exploration of optimal giving-up time in uncertain environment: a sit-and-wait forager. J. Theor. Biol. 199, 321–327 (1999).
10. N. D. Daw, J. P. O'Doherty, P. Dayan, B. Seymour, R. J. Dolan, Cortical substrates for exploratory decisions in humans. Nature 441, 876–879 (2006).
11. J. D. Cohen, S. M. McClure, A. J. Yu, Should I stay or should I go? How the human brain manages the trade-off between exploitation and exploration. Phil. Trans. R. Soc. B 362, 933–942 (2007).
12. E. L. Kaplan, P. Meier, Nonparametric estimation from incomplete observations. Journal of the American Statistical Association 53, 457–481 (1958).

PennScience is sponsored by the Science and Technology Wing at the University of Pennsylvania.