Volume 5 Spring 2015
Young Investigators Review Staff 2014 - 2015 Editor-in-chief Julia Joseph ’15
Head of Media Preston Kung ’15
President Surya Chalil ’15
Managing Editors Sherry Bermeo ’15 Ashwin Kelkar ’16
Layout Editors Joseph Jacob ’15 Xin Lin ’17 Samuel Lederer ’17 Sarima Subzwari ’18
Vice President Alec Guberman ’17
Associate Editors Matthew Alsaloum ’15 Hanne Paine ’15 Photographer Shoshanna Jadoonanan ’17 Sarima Subzwari ’18 Anirudh Chandrashekar ’16 Webmaster Copy Editors William Long ’17 Eman Kazi ’15 Amanda Ng ’18 Advisor Dr. Robert Haltiwanger
Secretary Khadijah Patterson ’16 Treasurer Dana Espine ’18 Public Relations Officers Eleanor Castracane ’16 Mark Mangarin ’14 Advisor Dr. Peter Gergen
Andalus Ayaz ’16 Marwa Berouti ’15 Michael Cashin ’17 Marianna Catege ’16 Surya Chalil ’15 Anirudh Chandrashekar ’16 Megan Chang ’17 Drew Ciampa ’15 Zuri Dawkins ’15 Marc Emos ’15 Plinio Guzman ’15 Tasfinul Haque ’15 Olivia Joseph ’18 Julia Joseph ’15 Eman Kazi ’15 Ashwin Kelkar ’16 Preston Kung ’15 Amanda Ng ’18 Amanda Zigomalas ’17 Stony Brook iGEM Team
Letter From the Staff Dear Reader, Originally founded in 2008, the Young Investigators Review (YIR) began as an outlet for undergraduate researchers to publish their scientific work. After a brief hiatus from 2012-2013, the organization now has a much more ambitious goal: to provide a forum for students interested in science writing, to make science accessible to the general public, and to highlight the incredible work being done at Stony Brook University by both faculty and undergraduates. Along with successfully publishing four issues, YIR has also invited notable speakers to give lectures open to the Stony Brook community, such as Nobel Laureate Dr. Martin Chalfie and prominent cancer researcher Dr. Robert Weinberg. This year, we wanted to invite a speaker who is not only recognized as a pioneering leader in their scientific discipline, but is also a role model for underrepresented minorities in science. As a result, we are fortunate to host Dr. Mildred Dresselhaus of MIT, an award-winning researcher known as the “Queen of Carbon Science.” In this fifth issue of YIR, we hope to encompass the most pressing and relevant scientific topics of today. We begin with a general research news section that highlights some of the groundbreaking research accomplishments at Stony Brook over the past year. We have also included research reviews, which provide in-depth discussions of topics ranging from the creation of antibiotic-resistant “superbugs” to the use of video games as a source of neural therapy. We further showcase interviews with Stony Brook faculty, such as Dr. Gary Halada, who provides insights into the integration of nanotechnology into Stony Brook’s engineering curriculum. Finally, contributed original research articles display the depth and scope of undergraduate research taking place on the Stony Brook campus.
This issue and the success of our colloquium would not have been possible without our dedicated staff and writers, our generous donors, and all those who provided us with insight and advice throughout the year. We hope to continue to advance and expand the presence of YIR with the enthusiastic involvement of dedicated undergraduate students, and encourage you to visit our website at sbyir.com to learn more. It is our sincere desire that while reading this journal, you not only learn, but become fascinated with the revolutionary scientific advances of today that will shape the science of tomorrow. We hope you enjoy reading!
Table of Contents Interviews The Importance of Nanotechnology: An Interview with Dr. Gary Halada..........8 Amanda Zigomalas ’17
Exploring Stem Cells with Dr. Benjamin L. Martin...................................................10 Megan Chang ’17
Research Profile Diving into Marine Ecology with Dr. Bradley J. Peterson......................................13 Michael Cashin ’17
Reviews Ebola Outbreak: Fighting an Epidemic........................................................................16 Marc Emos ’15
The Epigenome: Redefining Hereditary Disease.......................................................19
Marianna Catege ’16
Video Games as a Source for Neural Therapy..............................................................22 Tasfinul Haque ’15
Confronting the Rise of Superbugs in an Increasingly Drug-Resistant Era......25 Julia Joseph ’15
More than a Structural Component: The Vast Biological Functions of Sphingolipids...........................................................................................................................28
Ashwin Kelkar ’16
Primary Research Articles Using Finite Element Analysis to Design & Optimize a Composite Bicycle Frame................................................................................................................................31
Plinio Guzman ’15
The Search for Massive Stars in Nearby Galaxy M83................................................35 Drew Ciampa ’15
RESEARCH NEWS The Immuno-Matrix Skin Patch: A Needleless Approach to Vaccination
Cerebral Blood Flow Imaging Technique Can Be Applied to Disease Diagnosis
Retrieved From http://sb.cc.stonybrook.edu/news/general/2014-11-21-kasia-sawicka.php?=marquee5
By Marianna Catege ’16
Katarzyna M. Sawicka at the Collegiate Inventors Competition, where she took first place in November 2014.
By Preston Kung ’15 Katarzyna (Kasia) M. Sawicka, a postdoctoral research associate in the Department of Dermatology, won the national Collegiate Inventors Competition for her “Immuno-Matrix” in November 2014. The Immuno-matrix is a skin patch held together by nanofibers that delivers a vaccine through skin absorption; it’s a needleless vaccination that’s as simple and painless as putting on a Band-Aid. The Immuno-matrix has the potential to play a major role in the fight against infectious diseases and their eradication. Not only is this new technique lean, as it produces no biohazardous waste, but the very concept is groundbreaking. Previously, scientists believed that a compound’s molecular weight must be under 500 Daltons for skin absorption to work; other technologies therefore relied on some form of mechanical disruption of the skin. Sawicka’s Immuno-matrix instead works by exchanging moisture with the topmost layers of the skin, and it has been shown to deliver molecules 250 times larger than the previously expected maximum size. By taking advantage of the skin delivery system, the Immuno-matrix is able to deliver only the most immunogenic part of the vaccine, which is usually a protein. Yet the Immuno-matrix seems to be just as effective in conferring immunity as injections. Sawicka and her team have successfully delivered the whooping cough antigen in vivo, and both influenza and anthrax antigens in vitro, using the Immuno-matrix. Another benefit of a skin delivery system is the skin’s extensive lymphatic network, which makes finding an immunocompetent cell in the skin extremely easy. Sawicka, who placed first among seven graduate student teams from across the United States, hopes to see the Immuno-matrix commercialized and her invention used in hospitals and clinics throughout the world. “Bringing the concept of infectious disease immunization without the use of needles to this stage is a great advancement to our field,” said President Samuel L. Stanley. “We are excited to see the outcome of the next phase of Kasia’s work on ImmunoMatrix” (1). References 1. Almonte, Alida. “Needleless Vaccination Developed at Stony Brook Takes 1st Place at Inventors Competition.” Stony Brook Research. Stony Brook University, 21 Nov. 2014. Web. 08 Mar. 2015.
Dr. Yingtian Pan, Professor in the Department of Biomedical Engineering at Stony Brook, and his team have developed a new imaging technique that provides a clearer picture of the direction, speed, and quantity of cerebral blood flow. The technique expands upon, and provides an ultrahigh-resolution picture for, a method Stony Brook Medicine scientists recently developed to measure how cocaine interrupts blood flow in mouse brains. As described in the Optical Society’s open-access journal Biomedical Optics Express, the new technique, called “ultrahigh-resolution optical coherence Doppler tomography,” uses a Ti:Sapphire laser to target the cortex of the mouse brain; the reflected light is then analyzed. The new technique addresses flaws in other imaging methods: some miss critical parts of the blood flow disruption, while others are not sensitive enough to detect the disruption early on, when it occurs on a small scale. According to
Retrieved From http://sb.cc.stonybrook.edu/news/medical/140915bloodflow.php
Professor Yingtian Pan, right, and his lab associates, Ki Park and Jiang You, left, working in his lab on imaging techniques.
Dr. Pan, this new imaging technique allows scientists to “visualize, in animal models, the micro and regional ischemic effects to the cerebral microvascular networks” (2). The method also paves the way for new information that can aid in disease prognosis, diagnosis, and the monitoring of the success or failure of treatments. It can also be applied to the study of drug abuse, alcohol abuse, and wound repair. Additionally, it can be particularly helpful in providing an accurate picture of blood flow rate and quantity in tumors, greatly accelerating cancer research. References 1. Pan, Y. et al. 2014. Optical coherence Doppler tomography for quantitative cerebral blood flow imaging. Biomed. Opt. Express. 5:9.
Understanding The Fear Circuits in The Brain By Ashwin Kelkar ’16 Fear and fear memory have long been subjects of study for both scientists and philosophers. Understanding fear, and how it can exert control to the point of phobia, is imperative to eventually finding its underlying cause. Researchers around the world have tapped into the brain to try to elucidate this enigma.
In a groundbreaking study conducted by a team of scientists from Stony Brook University and Cold Spring Harbor Laboratory, we have come one step closer to understanding how fear manifests itself in mammals. Dr. Bo Li and postdoctoral fellow Mario Penzo of Cold Spring Harbor Laboratory, as well as Jason Tucciarone of Stony Brook University, have discovered a circuit in the brain showing that mice subjected to danger exhibit high activity in the posterior paraventricular nucleus of the thalamus (pPVT). The study was conducted using a mouse model, in which mice were genetically altered to better study the area in question, the thalamus. The researchers chose the pPVT due to its established role in regulating responses to stressors, whether physical or psychological. To simulate a dangerous scenario, the mice were subjected to mild foot shocks. By utilizing suppressors that blocked neural communication, the researchers discovered that the pPVT was pivotal in determining whether mice could be conditioned to fear specific situations by communicating with the central amygdala (CeL), where fear memories are stored.
Mice that lacked brain-derived neurotrophic factor did not respond to danger the same way a normal mouse would. This breakthrough could lead to mapping fear circuits in the brain.
By genetically modifying mice that could not produce brain-derived neurotrophic factor (BDNF), and other mice that could not produce its receptor, the scientists determined that BDNF was the molecule transmitting the fear message: mice without BDNF or its receptor could not respond to dangerous situations in the same way that normal mice could. Conversely, when the scientists injected BDNF into the brains of normal mice, these mice showed a larger response to danger. From these experiments, the scientists concluded that BDNF produced in the pPVT and sent to the CeL is necessary for the creation of fear memories and for proper responses to fear. The results of this study allow us to further understand the organ that regulates our emotions. By expanding our knowledge of what our brains can do, we can slowly decipher how fear responses work and eventually treat patients suffering from psychological trauma. References 1. G. Filiano, Researchers discover how brain recognizes danger. Stony Brook Newsroom. (2015).
Infectious Diseases May Cause Major Depressive Disorder By Marianna Catege ’16 Depression may actually be infectious, according to Dr. Turhan Canli, Associate Professor of Psychology and Radiology at Stony Brook University. His claim puts Major Depressive Disorder (MDD) in a new light, suggesting it can be caused by parasitic, bacterial, or viral infections. MDD is highly prevalent, with roughly 7 percent of the U.S. population developing the disorder, and its symptoms, such as fatigue, are accompanied by inflammatory biomarkers that suggest an infectious origin (1). This may finally provide an explanation for MDD, a disease which previously had no clearly defined cause. Canli’s findings were published in Biology of Mood and Anxiety Disorders in November 2014. Dr. Canli explains that patients affected by parasites, bacteria, and viruses often display changes in their emotional behavior similar to those caused by MDD. He further states that
the human body is an ecosystem for these kinds of microorganisms, which we know are capable of affecting human gene expression. Thus, it is highly probable that microorganisms can cause MDD, a disease also associated with interference in various genetic factors. Based on the correlations Dr. Canli has observed between MDD and infectious agents, more research on the topic is warranted. As he states, “Future research should conduct a concerted effort search of parasites, bacteria, or viruses that may play a causal role in the etiology of MDD” (2). References 1. Kessler RC, et al. Prevalence, severity, and comorbidity of 12-month DSM-IV disorders in the National Comorbidity Survey Replication. Arch Gen Psychiatry. 62. 617-627. (2005). 2. Could Depression Actually Be a Form of Infectious Disease? 2014. Stony Brook Newsroom. http://sb.cc.stonybrook.edu/news/general/141113depression.php 3. Lohoff, F. 2011. Overview of the Genetics of Major Depressive Disorder. PMC.
Pioneering X-Ray Techniques Used to Design More Efficient Batteries By Ashwin Kelkar ‘16 Have you ever wondered why your iPhone battery lasts only 3 hours instead of the projected 8? That is frustrating for many people around the world, but it can be far worse when the battery is powering a pacemaker, a
tool that allows people’s hearts to continue pumping blood. Imagine what would happen if such a battery suddenly died and left someone completely vulnerable to their malfunctioning heart.
Retrieved from: http://upload.wikimedia.org/wikipedia/commons/7/75/Aerial_View_of_Brookhaven_National_Laboratory.jpg
An aerial view of the National Synchrotron Light Source at Brookhaven National Laboratory. The source shines x-ray beams on samples to create diffraction patterns.
Dr. Esther Takeuchi and colleagues, working in conjunction with Brookhaven National Laboratory, had struggled with this dilemma until recently, when they decided to use x-ray techniques to better understand lithium-ion batteries. Using this novel method, the scientists can better understand how silver matrix formations enhance conductivity. Within the battery core, silver can transform into a metallic matrix that allows the material to handle electrons much more readily; however, the mechanism of this transformation had eluded scientists until now. With the x-ray technology, Dr. Takeuchi and colleagues revealed how the silver changes its atomic structure to allow the flow of electricity, and linked this transformation to how quickly the battery loses charge.
The team used the National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory to determine how silver is displaced by lithium ions and subsequently forms matrices that allow conductivity. The x-ray diffraction patterns produced by the battery’s changing structure allowed the team of scientists to interpret exactly what was going on at the atomic level. The data collected in this study could lead to better battery life, an improvement that could be life-saving. References 1. G. Filiano, Mapping of silver matrix formation in batteries will enhance efficiency. Stony Brook Newsroom. (2015).
Groundbreaking Fossil Transforms Current Evolution Views
Retrieved From http://sb.cc.stonybrook.edu/news/general/141005creature.php#prettyPhoto/3/
The fossil of Vintana sertichi was discovered by David Krause and his research team; the discovery fundamentally challenged current views of the mammalian evolutionary tree.
By Surya Chalil ’15 Gondwanatheria is an extinct group of mammals previously known only from a few isolated teeth and fragmented jaw pieces. As a result, the clade largely remained a mystery, and its placement on the evolutionary tree was uncertain and debated. However, a research team led by Stony Brook University paleontologist David Krause, Ph.D., discovered an almost complete cranium of a new fossil animal belonging to Gondwanatheria. The fossil, named Vintana sertichi, is only the third mammalian skull recovered from the Cretaceous period in the southern hemisphere. Vintana means “luck,” and the species name honors Joseph Sertich, a former graduate student of Dr. Krause who fortuitously found a block of sandstone filled with fish fossils. A CT scan of the block revealed the gigantic mammalian skull. Vintana is estimated to have been about two to three times heavier than an adult groundhog. This is remarkable, since other mammals of its era were usually mouse-sized. Using micro-computed tomography and scanning electron microscopy, Dr. Krause and his team were able to learn about aspects of early mammalian anatomy that were previously unknown. Features of the teeth, eye sockets, nasal cavity, braincase, and inner
ear illustrate that Vintana was a nimble herbivore with sharp hearing and vision. “We knew next to nothing about early mammalian evolution on the southern continents,” stated Dr. Krause in a news release. “The discovery of Vintana will likely stir up the pot.” The findings, published in the journal Nature in November 2014, entirely altered current views of the mammalian evolutionary tree. In addition, news coverage of the unprecedented discovery went global, with an estimated advertising value close to $38 million. The next question that needs to be answered is how such a mammal developed during its time. The current theory rests on the fact that the fossil was discovered in Madagascar, an island that had been isolated for over 20 million years before Vintana existed. The continued isolation of such species in various parts of the southern continents allowed an extremely odd mix of features to develop over time.
Researchers Create Antireflective Surface to Improve Solar Cells By Ashwin Kelkar ’16 The advent of the solar panel has allowed us to harness the energy of the sun, much as plants do. Unlike plants, however, solar panels run into a predicament: though we can utilize some of the sun’s rays, most of the light ends up being reflected off the panel’s surface. This reflected sunlight is a major source of inefficiency, even in today’s solar panels. Scientists at Stony Brook University and Brookhaven National Laboratory have now created a surface coating that dramatically reduces glare, trapping sunlight and making solar power that much more efficient. The study was inspired by how moths’ eyes function. Moths’ eyes have antireflective surfaces that resemble tightly packed fence posts, each smaller than the wavelength of light, ensuring the least amount of reflection. To imitate this surface, Dr. Matthew Eisaman and colleagues used a “block copolymer” that, when coated on a surface, organizes into similar posts only nanometers across. While
normal solar panels utilize coatings with intermediate indices of refraction to gradually reduce the amount of light reflected by layers of varying material, this copolymer coating successfully prevents light from being reflected. To increase the antireflective properties even further, the scientists used gaseous silicon oxide to fill the crevices the polymer forms as it organizes. The team confirmed the reduced reflection using several methodologies, including electron microscopy. The results of this experiment extend far and wide. Solar panels reduce electrical costs for many people around the world, so increasing their efficiency would further reduce living expenses. This breakthrough study could also facilitate the creation of more efficient solar panels, reducing our carbon footprint and preventing further damage to the ecosystem. References 1. Research team uses nanostructure surface textures to improve solar cells. Stony Brook Newsroom. (2015).
References 1. Newly Discovered Fossil is a Clue to Early Mammalian Evolution. 2014. Stony Brook Newsroom. http://sb.cc.stonybrook.edu/news/general/141005creature.php 2. Krause, D. W. et al., 2014. First cranial remains of a gondwanatherian mammal reveal remarkable mosaicism. Nature 515:512–517.
Retrieved From: http://www.bnl.gov/bnlweb/pubaf/pr/photos/2014/12/d2300414-chuckblack-hr.jpg
Dr. Matthew Eisaman displaying the non-reflective properties of his antireflective surface (black square). This antireflective surface can be instrumental in the engineering of more efficient solar panels.
The Importance of Nanotechnology: An Interview with Dr. Gary Halada
Photo courtesy of Sarima Subzwari ’18
By Amanda Zigomalas ’17 Like the far reaches of space, nanotechnology represents a new scientific frontier, but in the opposite direction. Utilizing the empirical physical, chemical, and biological laws of nature, nanotechnology succeeds in creating components that can be precisely manipulated at the nanometer scale. To put this into perspective, a nanometer is 100,000 times smaller than the diameter of a strand of hair. This precise manipulation gives nanotechnology the power to revolutionize current practices, ranging from drug therapy and treatment to environmental cleanup. The surging excitement surrounding this new field has encouraged Dr. Gary Halada to pioneer the implementation of nanotechnology education in the university’s curriculum. Dr. Halada spent his childhood intrigued by the world around him. This fascination led him to obtain a Bachelor of Science degree in Physics here at Stony Brook University. His interests later evolved to encompass environmental science, which led him to pursue a Ph.D. in Materials Science at Stony Brook. Upon obtaining his doctorate, Dr. Halada joined the Stony Brook faculty to continue his research on environmental awareness and groundwater cleanup. Recognizing the growing appeal of nanotechnology both in his field of research and in other scientific domains, Dr. Halada currently focuses his efforts on teaching and promoting nanotechnology education. He serves as co-director of the minor in Nanotechnology Studies and as undergraduate program director for the Engineering Science major and the Energy Science, Technology, and Policy minor. Dr. Halada took the time to speak with the Young Investigators Review and to encourage Stony Brook students to always be aware of their surroundings and to pursue
their interests, while utilizing these interests to benefit society. Dr. Halada’s philosophy is that teaching and research are interconnected. His goal is not only to teach students, but also to immerse them in real-life examples in an effort to demonstrate the power of nanotechnology. How long have you been at Stony Brook? Forever... well, I started at Stony Brook as an undergraduate and that was back in 1981. [I majored in] physics. I [actually] started out [in] engineering when I first came, but that didn’t last. I was more interested in physics. I was interested in how things worked. Initially, I was interested in every theoretical aspect of physics, like accelerators and high energy physics. With time, [though], I got interested in the applied end [of physics], like laser technology. When I went into the graduate program here, I went into materials science engineering. When did you realize that you were interested in nanotechnology? Nanotechnology wasn’t really an area of study, even though it was known about, up until sometime in the mid-1990s or later. At that time, I was doing other things, like how to clean up pollutants in the environment. I was also doing work in corrosion [for] my Ph.D. thesis. [I was trying to] understand it and make alloys that were less susceptible to corrosion. A lot of these processes, whether you’re talking about creating technology to clean a toxic material in the environment or creating a thin film to make coatings on surfaces to prevent corrosion, have nanotechnological aspects. So I guess I always was involved in nanotechnology or nanomaterials to some extent. And then I got more formally involved around 2000-2001. [It was at this] time [that] I was also getting
grants from the National Science Foundation (NSF) to create teaching programs in nanotechnology. Is that why you became involved in teaching the EST 213: Introduction to Nanotechnology course here? Yes, [EST 213] was created as a result of a National Science Foundation (NSF) grant. [NSF] had a program called NUE: Nanotechnology Undergraduate Education. I think that the program no longer exists as of this year, although it was part of NSF for many years. [NSF] realized that this technology had the power to be transformative—to change how we solve problems and how we look at things. So NSF and the government, rightly so, [thought] that it was important to experiment with ways that students could learn about nanotechnology. They funded people to create courses, programs, laboratories, and teaching tools. Are you currently involved in research? As a result of my class, I started to do some work with just nanotechnology. It’s not very big research right now (no pun intended). I’ve had some small grants that basically deal with how you can build and create nanoparticles using various techniques. I did some work with people at Brookhaven and in the sensor program at Stony Brook. We worked with different nanomaterials, whether it was nanoparticles or nanotubes, and seeing what you can do with them. So it’s really a few small grants in that area, nothing big. But this has led me to become interested in other aspects of materials science, which is related to my growing interest in additive manufacturing. I’m trying to get more involved with [this] now in terms of both the research and the education. Most people think of it as just 3D printing, but there are a lot of other aspects to it. It is a way that we can process materials.
Usually you can just make an object that you want to have, but if you want that object to have functionality—to do something—you can either make parts that come together or you can make a material that itself can do something. This area of functional materials is very interesting to me. This [is an] idea [in which] you can have a material that not only provides structure for support, but in itself can be a catalyst or sensor. One of the ways to do this is by engineering the nanostructure of the material so that when you make something out of it, it can actually do something.
Do you feel that this is something students should get involved with, or at the very least, understand? [I think] they should have a basic idea of what’s going on and why it’s important. And that extends beyond nanotechnology. Students should hear what the latest thing in genetic engineering is, or artificial intelligence, or whatever area that [is] becoming “big.” And that’s important from a business sense and a societal sense. But if someone feels driven after hearing about it, they should try to get involved. And it may not work out. A student may try to get involved in something that they think is going to be great and then find out that they really don’t like doing that. That’s okay [and] there’s nothing wrong with that [because] you’ve still learned something. How do you think students who are interested in nanotechnology should learn about it? Find out what’s going on— that means going to labs, looking at websites, and talking to people like graduate students. People who do that and take advantage of those opportunities are the ones who will go far. It’s not a place to sit back and wait for something to be handed to you. There are a lot of faculty who are doing nano-related research. Most of the people in my department [actually] have done nano research whether they know it or not, [especially in regards to] polymer research or surface coating research. And the same can be said of the chemistry department. There are many people doing nanoresearch in different departments, so you just have to go talk to them.
References 1. Halada, Gary. Personal Interview. 11 Nov 2014.
Figure 1: A cryo-transmission electron microscopy image shows nanoparticles in suspension. While the particles are in suspension, total control of interparticle forces must be taken into account.
So when you teach EST 213, do you incorporate what you’re working on? Yes, I try to incorporate what I’ve done, like making nanoparticles and dye-based solar cells. This all adds to the teaching, which is one of the ways that research can enhance teaching. In all my courses, I try to change things and keep it current. [I’m] either getting a different text or inviting new lecturers to come speak. [It is] especially [important] with nanotechnology [because] things change pretty rapidly.
Retrieved from: http://www.lrcomolli.com/ImageGallery/Gallery4/Cryo-TEM_image_of_nanoparticles_in_suspension_image_1.7_um_by_side.jpg
Exploring Stem Cells With Dr. Benjamin L. Martin
Photo courtesy of Sarima Subzwari ’18
By Megan Chang ’17 A life-long fascination with one question can sometimes define your entire career. Such is the case for Stony Brook Biochemistry faculty member Dr. Benjamin L. Martin. After earning his B.S. from Bowdoin College in Maine, Dr. Martin received his Ph.D. in Developmental Biology from the University of California, Berkeley, before conducting research on developmental processes and pathways as a postdoctoral fellow at the University of Washington. Now back on the East Coast as an assistant professor, Dr. Martin continues to work towards furthering our understanding of stem cells. Stem cells are cells that, upon receiving specific signals, differentiate and become various other cells, tissues, and organs. They have been identified as possible therapies for several diseases and conditions, but were once an extremely controversial topic. The main argument against stem cell research centered on ethical issues, as research was once performed on fertilized embryos. However, with new technology, the opposition has largely disappeared, overtaken by a growing interest in understanding the root causes of cell differentiation and maturation. Dr. Martin has endeavored to understand how stem cells function by conducting his
research in zebrafish, which have become a prominent developmental model due to their external fertilization, rapid development, and transparent embryos. To further discuss his fascinating research, as well as demonstrate how curiosity and passion have driven him to where he is today, Dr. Martin sat down with Young Investigators Review. Why were you originally drawn to stem cell research? I think everyone has a thing they enjoy thinking about, even if it makes their brain hurt or does not entirely make sense, like infinity or space. As a kid, it amazed me that humans were supposed to be so smart, but we had no idea how to actually make ourselves from a single cell. How would we go about doing that? That was completely unknown and it became something that I really wanted to know. I was confused as to why we could not figure it out, so it was this main question I wanted to pursue, even from a young age. My primary interest has always been developmental biology; stem cells and developmental biology go hand-in-hand. They are really strongly integrated with each other because when you form an embryo, you start out as a single stem cell, [which] then proliferates and eventually differentiates. So the first big thing that got me interested was an undergraduate developmental biology class that I took.
Did you always envision yourself becoming a research scientist?

It has always been in my head, but it was never like, “Oh, this is definitely what I’m going to do.” For a long time, I wanted to be an air force pilot. That was my kid dream, which I’m sure a lot of kids my age had after watching Top Gun. Doing something with sports was also something I thought about. I’ve always been a big sports guy, [which] partly defined my life early on. Then in college, I considered medical school as well, so I spent time trying to decide between being a research scientist and a physician. But certainly having two parents in the field persuaded me. My dad is a Drosophila geneticist. My mom also did Drosophila research and later became a high school science teacher.

You went to graduate school at UC Berkeley and did your postdoctoral research at the University of Washington in Seattle. How does the West Coast compare to the East Coast?

It was interesting when I first moved to Berkeley, because Berkeley is a pretty weird place. It’s kind of a Mecca for eccentric people, which I really grew to love. But when I first moved there, I thought it was very strange. I had a little bit of a hard time dealing with it, and at the time, I kind of imagined the whole West Coast to be like Berkeley. But when I went to Seattle, I really loved it. When I came back here, I actually thought, “Maybe I actually am a West Coast person.” But there are certainly many things I love about the East Coast, and I am happy to be back.

You joined the Stony Brook faculty in 2012, so you are fairly new to the campus. How are you enjoying it so far?

It’s great. It’s kind of a roller coaster ride – you start out as an assistant professor, and at Stony Brook, between your fifth and sixth year, you get reviewed for tenure. There are very high expectations for you to get tenure, so it can be a pretty stressful time in your life.
But it is awesome to run my own lab and to be the sole person making the decisions about research and what direction to go in.

Retrieved from: http://www.biocision.com/blog/wp-content/uploads/2014/03/imscn021510_02_04.jpg

When you
first start, you get to buy new equipment for your lab, and that is a fun and exciting experience. At the same time, you know the clock is ticking and you have to accomplish a certain amount. But as for the university itself, I really enjoy being here. I teach a few lectures in the graduate school and also BIO 310, which is cell biology. I have to say that I’m impressed by the students. They’re exceeding my expectations in terms of their abilities, so that has been a really nice thing. And, especially in our department, it’s a very collegial and friendly atmosphere. It’s a good place to work.

What does a normal day look like for you?

I wake up at around 7 AM, when Calvin, my son, starts yelling. I usually get up with him and drink coffee and “veg out” for a half hour, because I can’t do anything until I’ve had a couple cups of coffee. Then it’s a race to eat breakfast and get him dressed and out the door. I drop him off at Stony Brook Daycare on campus and then come into work. Daily life here can be pretty variable depending on what’s going on. If there’s a grant due, then the door is shut and I’m working on writing and revising. The thing I like doing most is working in the lab, so whenever I get the chance, I’m doing experiments. Then I pick Calvin up at 6 PM, go home and eat dinner, hang out with him until he goes to bed at 9 PM, and then I start on work again, usually at home, but sometimes I come back. I live about 5 minutes from here, so that can be convenient.

Could you give me an overview of the projects you are working on at the moment?

There are two big ones going on right now. We mostly study how stem cells decide which type of differentiated cell to become. We do this in zebrafish because it is a great model for studying these decisions in their in vivo environment, as opposed to most stem cell research, which is conducted on cells cultured in a dish. So we are interested in how cells make fate decisions.
One is a neural fate decision, where we study the WNT signaling pathway. As a post doc, I discovered in a particular set of stem cells that the WNT pathway can tell cells whether to become spinal cord neurons or skeletal muscle. If you deactivate that pathway, the cells will all become spinal cord neurons. If you activate it, they will always become muscle. Now that we know that pathway controls that process, we are trying to understand mechanistically how it’s doing this from a molecular perspective, like what genes are making this decision. Then there’s a second fate decision that we are studying. The cells that go on to become muscle can also become blood vessel tissue. So we are currently looking at how different signaling pathways can affect that decision to form blood vessels versus muscle. The blood vessel one is very far along and we are about to submit it to a journal. That is actually one of the well-developed projects in the lab, and I feel that we have a very good mechanistic understanding of how this is occurring. The neural fate decision is not as far along, but it is going well also.

Figure 1 A microscopic image of stem cells. These cells have the ability to become any number of cells based upon varying signals that decide their fate.

Figure 2 A 3D image of a stem cell captures it in the midst of undergoing morphological changes, which can lead to cell differentiation. Retrieved from http://p1.pichost.me/640/64/1887430.jpg

Have you found any of your research particularly surprising or unexpected?

We had worked out a mechanism for how these cells decide which fate to adopt through our previous work on the decision to become blood vessels or muscle. The vast majority of the cells that give rise to both of those fates will become muscle, while a small percentage become blood vessels. The surprising thing is that the blood vessels appear to be the default fate. So if you get rid of all of the inducing signals, the cells will all just become blood vessels. One might expect that if the cell lost all the inducing signals, it wouldn’t know what to become and would just die or do something weird. Also, there are intermediate fates of different types of mesoderm, and it has never really been clear how these pathways interact to make specific tissue types. I think we’ve made a big step in figuring it out.

What do you consider your greatest accomplishment?

Scientifically, it would be the work that I did as a post doc describing how the WNT signaling pathway controls cell fate decisions between the neural and the mesodermal. That publication came out in 2012 and I am most proud of this piece of work. Outside of science, from my younger days, it would probably be when I got fifth place in the state wrestling tournament as a senior in high school. It was something I was always very proud of, especially because wrestling was such a tough sport. There were many times when I was like, “I should stop doing this.” It was just a lot of work, and you go out there on the mat alone and sometimes feel really terrible afterwards. But in the end, having persevered and come away with something, I felt good, and it taught me an important lesson: you should really stick with things and not give up too easily. Nowadays, it would be my family. My wife and child and I have a good time together, and it’s what I look forward to outside of science.
Who would you say your inspirations are — both scientific and not?

My parents are a big inspiration, though I am sure a lot of people would say the same. Outside of that, I used to read a lot of E.O. Wilson, a Harvard biologist who does a lot of research in evolutionary theory. He was somebody I looked up to, and he wrote a lot of popular books about evolution. I was also a huge fan of an old baseball player named Cal Ripken. He was a favorite of mine. Not only was he great at baseball, but he was also known for having the longest streak of consecutive games played of any player. I played baseball, soccer, and wrestling, so that was something I really appreciated and looked up to, in terms of mental and physical robustness.

How would you advise a student who wants to get into biochemistry research?

When I was taking that undergraduate developmental biology course that I really liked in sophomore year, I took it with one of my best friends. We were both really into it, so we went and talked to the professor, and he eventually agreed to let us start working in his lab our junior year. When we first approached him, I think it was evident that we both had a lot of passion for the subject material. Professors can tell who is actually passionate. From the emails I receive, I can really gauge who is just trying to check a box for their application and who is actually interested in the research. I’d say it also helps to take a course taught by the researcher. They get a better understanding of you as a student, and you get a better understanding of what they are like and what kind of research they are doing.
References
1. Martin, Benjamin L. Personal Interview. 14 Nov. 2014.
Diving into Marine Ecology with Dr. Bradley J. Peterson

By Michael Cashin ’17

When J.R.R. Tolkien created the fictitious land of Middle Earth, he said, “All have their worth and each contributes to the worth of the others” (1). He implies that the world is interdependent, much like the cogs of a clock that work together to track time. The study of marine ecology embodies this idea entirely: if one were to subdivide the world into smaller communities, each could be observed through the lens of a marine ecologist. Each area of marine ecology is like an individual flavor of ice cream in a sundae, the accumulation of which results in a very broad field of research that depends upon the function of each “flavor.” Dr. Bradley J. Peterson, a member of the School of Marine and Atmospheric Sciences at Stony Brook University, enjoys the “flavor” of understanding inter-species interaction and its effect on the marine community. Dr. Peterson’s work ranges from the south shore of Long Island to the established reef systems of Jamaica. Throughout his notable ecological career, Dr. Peterson has authored or been credited as a coauthor on more than twenty scientific publications aiming to highlight the significance of marine ecological research.

Introduction

Dr. Peterson was born in Omaha, Nebraska and eventually found his way to the warmer state of Florida, where he studied benthic ecology as an undergraduate at the Florida Institute of Technology. After receiving his Master’s degree in zoology from the University of Rhode Island, Dr. Peterson moved on to the Dauphin Island Sea Lab in Alabama, where he earned his Ph.D. in marine science while focusing on nutrient availability in marine ecosystems. His journey to Stony Brook University began when he accepted a position at the university’s Southampton campus.
His lab addresses the various roles that organisms play in an ecosystem when food sources vary, and how these changes affect the health and structure of their environment. The lab works on projects ranging from habitat type in relation to predator-prey interactions, to seagrass ecology and restoration. Dr. Peterson’s involvement with the New York State Governor’s Seagrass Task Force in 2010 was pivotal in passing the first-ever legislation to protect seagrass ecosystems in New York State. Although his track record is very impressive, Dr. Peterson considers his greatest accomplishment to be his students. As he said, “I’d say that my biggest accomplishment from Stony Brook has been the successful education and graduation of certain key Ph.D. students that have gone on to other places” (2).
Retrieved from: https://petersonseagrasslab.files.wordpress.com/2011/10/petersongear-redo.jpg
Overview of the Marine Ecology Field, Seagrass Habitats, and their Ecological Role

Marine ecology studies the interactions that organisms have with their environment, and how environmental factors affect the biological stability of an ecosystem (3). Theories and methods concerning marine ecosystems and fish population dynamics have been developing since the early 1950s, when advancements in these fields were made by Dr. Milner B. Schaefer, who developed the Schaefer method for modeling population fluctuations in heavily fished commercial species (4). Dr. Peterson gained his own interest in marine ecology and seagrass habitat from these predecessors and built on their work. Speaking about recent shifts in the field, he stated, “The flow of money is not toward general kinds of interests, but is now dominated by real world problems and trying to address those problems…I think that the focus of marine ecology has gone from being primarily theoretically based to being a lot more applied now” (2). The applications that Dr. Peterson describes are reflected in the global recognition of the ecological and economic importance of seagrass communities. Seagrass habitats provide necessary stability and energy to marine ecosystems by acting as a food source and habitat for many different types of fauna. Seagrass habitats also serve as juvenile nurseries for
Figure 1 The loss of seagrass in the marine ecosystem could be detrimental to the organisms that rely on the seagrass for food and shelter.
commercially important invertebrates and bony fish, which is much of the reason why Dr. Peterson has put so much effort into their research and restoration. Various anthropogenic and environmental factors have placed seagrass habitats in an ecological danger zone, leading to disastrous amounts of seagrass habitat loss and, in turn, greater concern for seagrass communities and their inhabitants. As Dr. Peterson stated, “An acre of seagrass bottom is valued more highly than any other marine habitat, more than coral reefs and more than salt marshes” (2). During his tutelage under Dr. Ken Heck at the Dauphin Island Sea Lab in Alabama, Dr. Peterson realized the significance of suspension-feeding bivalves to seagrass habitats. The connection between these two organisms is called facultative mutualism, a relationship in which both species benefit from each other but neither depends on the other to survive. Specifically, Dr. Peterson wrote his dissertation on the facultative mutualism between the bivalve Modiolus americanus, known as the tulip mussel, and Thalassia testudinum, a tropical species of seagrass known as turtle grass. Dr. Peterson found that increases in seagrass growth were larger in areas with a higher abundance of bivalves due to a few factors. First, M. americanus enriched the sediment surrounding the patches of turtle grass by elevating the total levels of usable nitrogen and phosphorus, allowing for larger leaf width and increased productivity of T. testudinum. The density experiment also showed a decreased amount of epiphytes, which are organisms, such as algae, that reside on seagrass blades (5). Decreasing the amount of epiphytes on a blade of seagrass increases the amount of available sunlight to facilitate seagrass growth. M. americanus also benefited from this relationship, as shown by a conjoined predation experiment conducted by Dr. Peterson.
The benefit granted to the tulip mussel in its relationship with turtle grass was found to be protection and shelter from predators (6). Dr. Peterson’s work with seagrass and bivalves serves as a demonstration of the symbiotic relationship
Retrieved from: http://www.westcoast.fisheries.noaa.gov/images/sw_to_wc-pics/seagrasseshaspc.jpg
between these two species. Relationship studies such as this can be used to combat the various anthropogenic and environmental factors limiting seagrass health today.

A Closer Look into Dr. Peterson’s Work

Dr. Peterson’s work is all about connections: specifically, how certain events in a marine community are linked, and what effect these events have on other organisms in the environment. Dr. Peterson utilizes these connections and incorporates them into practical methods of restoration. He has worked with ecological researchers from Stony Brook University to examine the role of suspension feeders in seagrass habitat health. Organisms that use “suspension feeding” filter water through their body cavities and collect any food particles that float in the water column. Bivalves, a subset of suspension feeders, use a two-way pump system to move water through their bodies for both feeding and movement. A well-known example of a suspension-feeding bivalve inhabiting the waters of Long Island is the hard clam, Mercenaria mercenaria (7). Dr. Peterson, in conjunction with Dr. Charles C. Wall of Stony Brook University, took advantage of the feeding methods of Mercenaria mercenaria and other bivalves, relating them to the growth and productivity of seagrass beds in Shinnecock Bay, Long Island. Due to the high level of organic biomass that bivalves transfer from the water column to the sediment, Dr. Peterson believed that these organisms could initiate higher yields of seagrass growth (7). Experiments were conducted to highlight the ability of suspension feeders to facilitate higher growth rates in Zostera marina, also known as eelgrass, an abundant species of seagrass found all over the estuaries of Long Island. Limiting factors to Z.
marina productivity include eutrophication, an overload of nutrients in the water column that poses a threat to seagrass beds, and increased algal leaf cover, which allows only low amounts of sunlight to reach seagrass leaves. To ensure accurate results, Dr. Peterson utilized controlled environmental setups known as mesocosms to simulate the environment of Shinnecock Bay. Five separate mesocosm experiments were conducted at the Stony Brook Southampton Marine Science Center to test the hypothesis (7). As a result of increased filtration pressure from suspension feeders, Dr. Peterson and his colleagues recorded a considerable decrease in algal concentration, which directly led to higher levels of sunlight reaching the seagrass shoots. The growth of Z. marina was measured using area calculations of seagrass leaves in mesocosms containing dense bivalve populations and in mesocosms without bivalves. They recorded greater leaf area and increased seagrass productivity in the mesocosms with higher bivalve density compared to those without bivalves. Dr. Peterson has also conducted further experiments testing the beneficial partnership that can exist between seagrass and suspension-feeding bivalves (7). Studies such as this one represent the initiative that Dr. Peterson has taken to better understand marine ecosystems. Seagrass communities play very important ecological roles that the majority of people would not suspect. For this reason, it is crucial to recognize the progress in marine ecological research as well as the importance of seagrass habitats.
Constraints on Seagrass Health

Like most marine habitats, seagrass communities are threatened by human activity, but seagrass is especially at risk due to its proximity to the coast. As a result of the ever-increasing human population, our shorelines are becoming packed with industrial complexes and luxury condominiums. Coastal developments such as these contribute to ocean pollution and sediment runoff that are harmful to nearby seagrass habitats and other marine ecosystems. Dr. Peterson commented on this issue, stating, “Human population is going to continue to increase and we’re going to have to come up with ways on how we can mitigate those negative consequences on the coastal environment” (2). Seagrass ecosystems represent a pivotal alarm system in the marine environment; Dr. Peterson describes them as “the canary in the coal mine” (2). Specifically, since seagrasses require higher light levels than algae, they are the first organisms to be hit by changes in the watershed, which is why this alarm system is so sensitive. Dr. Peterson is also concerned about the anthropogenic factors that are negatively affecting the genotype pool and biodiversity in seagrass communities. Scientists are now asking whether our continued presence in seagrass environments is deleterious to the genetic diversity of the organisms that live there. In another experiment, Dr. Peterson worked with Dr. John Carroll to determine factors other than human activity that negatively affect seagrass habitats, such as increased levels of suspended sediment and phytoplankton biomass that can block sunlight from seagrass leaves and prevent growth. Dr.
Peterson addressed the issue of decreased sunlight in seagrass communities, stating, “The problem is that light is a moving target, so if this grass is growing in high organic mud, then the requirement for sunlight is significantly more than if it were growing in an environment with a much lower organic sediment” (2). Dr. Carroll and Dr. Peterson’s study on this issue recorded decreased seagrass yields in areas where sunlight penetration was lower. This occurred due to higher concentrations of floating algae in the water column, which shaded out the seagrass patches (8). These elevated levels of algae in the water column are known as algal blooms, and they can become very severe if left unchecked. Algal blooms are hypothesized to occur due to increased nutrient runoff from farmland fertilizers and other anthropogenic factors (9).

The Future of Marine Ecology and Seagrass Restoration Efforts

As more research is conducted on the factors constraining seagrass habitats, public awareness of seagrass restoration is also increasing. Dr. Peterson has been a crucial member of the well-known Shinnecock Restoration Program, which uses hard clams to transfer nitrogen from the water column into the ocean floor to promote seagrass productivity. Dr. Peterson is also involved in restoration efforts outside of seagrass communities, such as the bay scallop restoration project currently taking place on Long Island. In conjunction with professors from Cornell and LIU, Dr. Peterson’s efforts towards restoring the bay scallop populations of Long Island have been “wildly successful” (2).

Figure 2 Eelgrass (Zostera marina) is studied due to its integral role in Long Island ecosystems. Factors that limit its exposure to sunlight, such as algal blooms, can harshly impact eelgrass biomass. Retrieved from: http://boatingtimesli.com/NY/wp-content/uploads/2010/05/Eelgrass-by-Pickerell.jpg

Amber Stubler, one of Dr. Peterson’s Ph.D. students, has also been working on other restoration topics by studying ocean acidification and species interaction, and how this changing environment is going to affect coastal systems. Dr. Peterson believes that this particular work of Stubler’s is where his efforts will focus in the future. Other future research prospects for the Peterson Marine Ecology Lab include climate change and the role of seagrass habitats in protecting juvenile shellfish against ocean acidification (2). After all of the research regarding suspension feeders, light limitations, and ocean acidification on seagrass communities, Dr. Peterson wants to remind everyone that “people care about fish.” He wants the audience to realize that the fish we love to eat are directly tied to the health of seagrass in our marine ecosystems. Dr. Peterson also urges people who are interested in marine ecology to have a “broad perspective of the world,” as well as to become involved in a variety of experiences. According to Dr. Peterson, the collaboration of experienced people and researchers alike can be enough to combat the major marine ecological and restoration problems of our future (2).

References
1. Quotes about ecology. Good Reads (2015).
2. Cashin, Michael J. Interview with Dr. Bradley J. Peterson. Rec. 16 Feb. 2015. MP3.
3. Marine ecology. MarineBio Conservation Society (2015).
4. D. Day, Milner Baily Schaefer biography. Scripps Institution of Oceanography Archives (1997).
5. South Florida aquatic environments glossary. Florida Museum of Natural History.
6. B.J. Peterson, K.L. Heck, Positive interactions between suspension-feeding bivalves and seagrass - a facultative mutualism. Marine Ecology Progress Series 213, 143-155 (2001), doi: 10.3354/meps213143.
7. C.C. Wall, B.J. Peterson, C.J.
Gobler, Facilitation of seagrass Zostera marina productivity by suspension-feeding bivalves. Marine Ecology Progress Series 357, 165-174 (2008), doi: 10.3354/meps07289.
8. J. Carroll, C.J. Gobler, B.J. Peterson, Resource-restricted growth of eelgrass in New York estuaries: light limitation, and alleviation of nutrient stress by hard clams. Marine Ecology Progress Series 369, 51-62 (2008), doi: 10.3354/meps07593.
9. Why do harmful algal blooms occur? National Oceanic and Atmospheric Administration (2014).
10. M. Ruse, Edward O. Wilson: American biologist. Encyclopedia Britannica Online (2014).
Ebola Outbreak: Fighting an Epidemic

Retrieved from: http://www.accessexcellence.org/WN/NM/murphy_EMs.php
By Marc Emos ’15

When a divergent strain of Ebola virus (EBOV) first emerged in Guinea in early 2014, the virus rapidly spread, ultimately affecting Sierra Leone, Liberia, Nigeria, Senegal, and neighboring West African countries. EBOV infection causes Ebola Virus Disease (EVD), which manifests as a highly lethal hemorrhagic fever with a fatality rate ranging from 30% to 90%, depending on the virus species (1). Healthcare disparities in West Africa and high infection rates make treatment and isolation difficult. Current experimental treatments and therapies include antisense phosphorodiamidate morpholino oligomers, recombinant vesicular stomatitis viruses, and antibody treatments. Despite the difficulties, procedures such as the timely identification of the infected, contact tracing, quarantine, and the use of supportive treatments have proven effective in treating infected individuals. Ultimately, an acute understanding of viral structure, pathogenesis, and origin can improve the efficacy of future remedies and prevent subsequent outbreaks.

Viral Structure and Genome

EBOV belongs to the family Filoviridae, a family of negative-sense RNA viruses; the genus Ebolavirus comprises five species. As a negative-sense RNA virus, EBOV depends on RNA-dependent polymerases to synthesize the coding RNA strand, which then serves as a template for the production of viral proteins. EBOV particles are long filamentous rods with an RNA genome that codes for nucleoprotein, virion protein (VP) 35, VP30, VP24, VP40, glycoprotein, and RNA-dependent RNA polymerase (2). The genome is encapsulated by nucleoprotein, which is associated with VP35, VP30, VP24, and RNA polymerase. Virion proteins function concurrently to regulate the life cycle of EBOV and increase infectivity. VP40 regulates the morphogenesis, packaging, and budding of EBOV. VP35, VP30, and VP24 are necessary for the assembly of the nucleoprotein complex, which protects the genome (3).
VP24 and VP35 also act as inhibitors of the interferon response in infected hosts. Interferons act as signaling ligands that trigger antiviral immune responses in targeted cells. Inhibition of this signaling pathway increases the infectivity of the Ebola virus, making these proteins important targets for inhibition. Glycoprotein is a transmembrane protein on the outer membrane and is responsible for attachment to host cells and catalysis of membrane fusion. It also appears to play a crucial role in infection and seems to bind to and target a wide range of cells in humans (4).

Ebola Virus Disease Infection and Pathogenesis

Ebola infection occurs through exposure to bodily fluids containing the virus. The virus often enters through mucosal surfaces, abrasions in the skin, and parenteral routes, such as intravenous injections. The incubation period for Ebola typically ranges from 2 to 21 days. Once infection has started, Ebola targets monocytes, macrophages, dendritic cells, endothelial cells, fibroblasts, hepatocytes, adrenal cortical cells, and other epithelial cells. Glycoproteins on the viral coat bind to host cells and trigger
Retrieved from http://www.rcsb.org/pdb/101/motm.do?momID=178
Figure 1 A representation of selected ZEBOV proteins.
receptor-mediated endocytosis. Some cell surface proteins targeted by glycoproteins include tyrosine kinase 3, β-integrin, and macrophage galactose lectin (5). Once infection has initiated, viral proteins inhibit specific immune pathways, such as interferon responses and coagulation cascades. This leads to a large loss of lymphocytes and induced apoptosis in many cells around the body. Early signs of infection are chills, muscle pain, and nausea. Patients often suffer impaired coagulation and necrosis of the kidneys, liver, and lymphoid organs. To detect the infection, laboratory diagnoses are performed through RT-PCR and ELISA screenings for EBOV particles (2). Rapid diagnosis and treatment are crucial for the survival of patients.

Evolution

The origin of Zaire Ebola virus (ZEBOV), the cause of the current outbreak, can be traced using phylogenetic methods and by assuming a constant rate of mutation in the EBOV genome. ZEBOV has been responsible for many outbreaks
Figure 2 A map showing the distribution of the 2014 Ebola outbreak in Africa.
in Central Africa. Through sequence comparisons of strains from different outbreak sites, it was found that all ZEBOV strains evolved from the Ebola virus that caused the earliest known outbreak in Yambuku, Democratic Republic of the Congo, in 1976 (6). In a genomic surveillance of the current EBOV strain, 81 genome sequences were obtained from Sierra Leone and compared with 20 genome sequences from past outbreaks. Genetic relatedness between the strains indicates that EBOV has slowly spread from Central Africa into West Africa since the initial outbreak (7). Genome analysis of ZEBOV from Sierra Leone and Guinea indicates a high degree of genetic similarity. This implies that the current outbreak was caused by a single transmission event from the host species, fruit bats, to humans (8). Multiple introductions would show significant degrees of polymorphism between the Sierra Leonean and Guinean viruses. The activities of the 12 initial EBOV-infected patients were traced as well. They were all found to have attended the funeral of
an individual who died of Ebola virus disease, and it is inferred that the chain of human transmission stemmed from this funeral (7). Patterns of evolution since the beginning of the outbreak show a nucleotide substitution rate that is twice as high as in previous outbreaks. In addition, non-synonymous mutations are more frequent than in other outbreaks. Non-synonymous substitutions change the amino acid sequences of the seven ZEBOV gene products, increasing the number of polymorphisms within ZEBOV populations. Increased variety in the populations would then increase the chances that a ZEBOV particle would be resistant to treatments. The increased mutation rate could prove detrimental to vaccine research, as ZEBOV has an elevated rate of adaptation due to increased RNA sequence variation (7). The development of multiple routes of treatment would show the best results against a constantly evolving population of ZEBOV.

Vaccines and Treatments

In the wake of the initial Ebola outbreaks in 1976 and the early 2000s, the usefulness of an Ebola vaccine was disputed due to the rarity and small scale of outbreaks. However, this argument has weakened due to the increasing range of outbreaks in the past decade and the potential use of Ebola as a bioweapon. Efforts are now underway to produce effective human vaccines and treatments against EBOV. Many vaccines utilize non-replicating recombinant viruses that express EBOV glycoproteins (2). One of the first effective treatments of EBOV in non-human primates was a recombinant adenovirus expressing Zaire Ebola virus glycoprotein. Other potential vaccines focus on the production of Ebola virus-like particles using non-replicating viruses. Potential treatments targeting the virus focus on RNA interference, which can disrupt the life cycle of EBOV by preventing production of vital viral proteins (9).
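The antisense principle underlying RNA-based therapies like those above can be illustrated with a short sketch: an antisense oligomer is simply the reverse complement of its target mRNA, so the two strands base-pair and block translation. The sequence below is invented for illustration only; it is not the actual VP24 mRNA.

```python
# Base-pairing rules for RNA (A pairs with U, G pairs with C).
PAIRS = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(mrna: str) -> str:
    """Return the antisense (reverse-complement) RNA for a target mRNA.

    Reading the complement in reverse gives the strand that would
    base-pair with the target in antiparallel orientation.
    """
    return "".join(PAIRS[base] for base in reversed(mrna))

target = "AUGGCUAAGGCU"   # hypothetical fragment of a viral mRNA
oligo = antisense(target)  # "AGCCUUAGCCAU"
```

In an actual phosphorodiamidate morpholino oligomer, the sugar-phosphate backbone is replaced by a morpholine-based analog, but the base-pairing logic that gives the drug its specificity is the same.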
Currently, the most promising vaccines and treatments for Zaire Ebola virus are AVI 7537, ZMAPP, and rVSV-ZEBOV-GP.

AVI 7537

AVI 7537 is an antisense RNA that binds to and prevents the translation of VP24 viral mRNA in the Ebola virus. This RNA-based therapy depends on antisense phosphorodiamidate morpholino oligomers (PMOs), nucleic acid analogs that improve the stability and function of antisense complexes. The PMO was conjugated with cell-penetrating peptides, such as arginine-rich peptides, to improve its delivery into infected cells. A positively charged amine group was also conjugated to the phosphate linkage bridging adjacent nucleic acids, which improved the binding dynamics of AVI 7537. This engineered antisense RNA binds to the sense RNA strand that codes for VP24. VP24 mRNA was chosen as a target because its translation product is responsible for the inhibition of type 1 interferon responses in infected hosts; VP24 targets these signaling pathways to prevent immune responses against the virus. By blocking the inhibitory effect of VP24, the interferon pathway can trigger immune responses against EBOV infection. The efficacy of AVI 7537 was tested in mouse, guinea pig, and non-human primate models.

Figure 3 A representation of the ZEBOV genome and its encoded proteins (Expasy Viral Zone).
Retrieved from http://viralzone.expasy.org/all_by_species/207.html
A promising survival benefit is observed in non-human primates, in which the dosage of AVI 7537 is positively correlated with survival. Currently, AVI 7537 and other related antisense PMOs appear to be optimal therapeutic candidates for treatment of Zaire Ebola virus infection (10).

ZMAPP

ZMAPP is a cocktail of three humanized monoclonal antibodies (mAbs) that confer passive immunity in Ebola-infected individuals. Each mAb was designed to target a unique region on EBOV glycoproteins. The goal of ZMAPP was to create a cocktail that elicited a higher survival rate from Ebola infection. Experiments treating rhesus monkeys and guinea pigs observed the reversal of disease symptoms and the reduction of viral loads within two weeks of disease onset. It was concluded that ZMAPP was capable of reversing severe EBOV infection and managing viral levels within infected rhesus monkeys (11). The immunity conferred by ZMAPP has not been proven to be fully adaptive, so it is unknown whether populations treated with ZMAPP would remain susceptible to reinfection. T-cell responses can be detected in individuals re-exposed to the Ebola virus, and Ebola glycoprotein antibodies are detectable in individuals treated with ZMAPP several weeks after exposure. Monoclonal antibodies are known to have low rates of adverse effects and are capable of eliciting specific, rapid immunity in treated populations (12). ZMAPP was reportedly administered to a handful of EVD patients who subsequently made full recoveries. Despite these promising results, the conditions of care and the other treatments received by these patients were variable, leaving the efficacy of ZMAPP in question (13).
Therefore, the next stage of drug development involves studies on the safety and side effects of ZMAPP, leading to clinical studies of ZMAPP in human populations. ZMAPP is also one of many plant-made antibodies. Initial techniques to produce these antibodies were riddled with long production times and undesired modification of target genes. However, a new procedure utilizing high-yield transient plant expression systems allows for drug production in five to eight days (14). As production increases and clinical trials are performed, ZMAPP may soon be released for distribution.

rVSV-ZEBOV-GP

The rVSV-ZEBOV-GP vaccine is a replication-deficient vesicular stomatitis virus (VSV) that expresses ZEBOV glycoproteins. The strain was created using DNA recombination techniques so as to be incapable of reproducing in host cells while still synthesizing the proteins of choice. VSV vectors have previously been used for immunization against influenza virus and simian immunodeficiency virus, making this immunization strategy very promising. The mechanism of rVSV-ZEBOV immunity is based on cellular and humoral responses. Studies of helper T-cell, B-cell, and interferon levels in rhesus monkeys illustrate the roles these immune components play in EBOV immunity. CD8+ T-cells appear to contribute significantly to this immunity, given the increased production of these cells following rVSV-ZEBOV treatment. The vaccine also seems to stimulate cytotoxic T-cells and the interferon pathway: post-exposure, elevated levels of various interferon proteins and chemoattractants were observed, chemicals that could be responsible for the stimulation and mobilization of other cells in the immune system. The exact mechanism of immunity is unknown, but the vaccine has shown efficacy in mouse and non-human primate models. This efficacy has moved rVSV-ZEBOV into human clinical trials, which will be carried out throughout 2015 (15).

Conclusion

ZMAPP and rVSV-ZEBOV have transitioned into clinical trials, with AVI 7537 soon to follow. Until reliable vaccines and treatments are approved for human use, preventing an uncontrollable spread of the disease is of the utmost importance. A large-scale response incorporating social, medical, public health, and humanitarian agencies is necessary to provide the manpower and funding needed to handle the epidemic (16). Treatment of ZEBOV infection is also possible through supportive care: with the use of catheters, fluids, and electrolyte replacement, patients can be cared for and their symptoms reduced under constant monitoring (17). Public education is crucial in spreading the word about Ebola infection and de-stigmatizing those infected. Through the joint efforts of many agencies and increased awareness of the nature of the disease, the Zaire Ebola outbreak, as well as future outbreaks, can effectively be contained.

References
1. S. Baize, et al. Emergence of Zaire Ebola virus disease in Guinea—Preliminary report. N. Engl. J. Med. 371, 1418-1425 (2014).
2. Feldman H, Geisbert T. Ebola Haemorrhagic Fever. The Lancet. 377, 849-862 (2011).
3. Huang Y, Xu L, Sun Y, Nabel GJ. The assembly of Ebola virus nucleocapsid requires virion-associated proteins 35 and 24 and posttranslational modification of nucleoprotein. Mol Cell. 10, 307-316 (2002).
4. Lee JE, Saphire EO.
Ebolavirus glycoprotein structure and mechanism of entry. Future Virol. 4, 621-635 (2009).
5. Hoenen T, Groseth A, Falzarano D, Feldmann H. Ebola virus: unraveling pathogenesis to combat a deadly disease. J Mol Med. 12, 206-215 (2006).
6. Walsh P, Biek R, Real L. Wave-like spread of Ebola Zaire. PLoS Biology. 3, 1946-1953 (2005).
7. Gire SK, et al. Genomic surveillance elucidates Ebola virus origin and transmission during the 2014 outbreak. Science. 345, 1369-1372 (2014).
8. Leroy E, Kumulungui B, Pourrut X, Rouquet P, Hassanin A, Yaba P, Delicat A, Paweska JT, Gonzalez JP, Swanepoel R. Fruit bats as reservoirs of Ebola virus. Nature. 438, 575-576 (2005).
9. Sullivan N, Yang ZY, Nabel GJ. Ebola virus pathogenesis: implications for vaccines and therapies. J Virol. 77, 9733-9737 (2003).
10. Iversen P, Warren T, Wells JB, Garza NL, Mourich DV, Welch LS, Panchal RG, Bavari S. Discovery and early development of AVI 7537 and AVI 7288 for the treatment of Ebola virus and Marburg virus infections. Viruses. 4, 2806-2830 (2012).
11. Geisbert T. Medical research: Ebola therapy protects severely ill monkeys. Nature. 514, 41-43 (2014).
12. Qiu X, et al. Reversion of advanced Ebola virus disease in nonhuman primates with ZMapp. Nature. 514, 47-53 (2014).
13. Goodman J. Studying "Secret Serums"—Toward safe, effective Ebola treatments. N Engl J Med. 371, 1086-1089 (2014).
14. Zhang YF, Li DP, Jin X, Huang Z. Fighting Ebola with ZMapp: spotlight on plant-made antibody. Sci China Life Sci. 57, 987-988 (2014).
15. Marzi A, Engelmann F, Feldmann F, Haberthur K, Shupert WL, Brining D, Scott DP, Geisbert TW, Kawaoka Y, Katze MG, Feldmann H, Messaoudi I. Antibodies are necessary for rVSV-ZEBOV-GP-mediated protection against Ebola virus challenge in nonhuman primates. PNAS. 110, 1893-1898 (2013).
16. Farrar JJ, Piot P. The Ebola emergency—Immediate action, ongoing strategy. N Engl J Med. 371, 1545-1546 (2014).
17. Lamontagne F, Clement C, Fletcher T, Jacob ST, Fischer WA, Fowler R. Doing today's work superbly well—Treating Ebola with current tools. N Engl J Med. 371, 1565-1566 (2014).
The Epigenome: Redefining Hereditary Diseases
By Marianna Catege ’16

Introduction

It was previously believed by the scientific community that an organism’s genome remained constant throughout its life and that genetic mutations in mature adult somatic cells would not be passed down to the next generation. While this largely remains true, another dimension of the genome, the epigenome, was discovered to change over time under the influence of certain environmental agents. Consisting of epigenetic actors that function alongside DNA, the epigenome actively moderates and regulates downstream gene expression. These factors include methyl groups and other histone-associated proteins. They are susceptible to change over time under the influence of environmental stressors such as pollution, poor diet, drugs, and alcohol. Since the epigenome is responsible for gene regulation, alterations to the epigenome can cause diseases like cancer and diabetes through incorrect modification (1). By analyzing the epigenome and its associated components, Dr. Michael Skinner accidentally discovered another anomaly: epigenetic modifications are in fact heritable. After mistakenly breeding pups from a mother rat exposed to a harmful endocrine-disrupting chemical, vinclozolin, Skinner’s team observed that 90% of the resulting pups exhibited problems with fertility due to this environmental stressor, which was expected. The unexpected phenomenon was that this infertility phenotype persisted for at least three more generations, consistently at 90%. This suggests that the rats exposed in the womb experienced some sort of modification to their DNA, which was then inherited. Normal Darwinian inheritance via classical genetics could not explain the persistence of a phenotype determined by the external introduction of an environmental stressor (1, 12). These epimutations, occurring at a frequency twice that of DNA mutations, can be caused by a variety of
aerosolized toxins, diet, and lifestyle decisions. While epimutations that result in a harmful pathology are not always inherited, an individual can instead inherit a predisposition to a disease. Since epimutations can occur at any gene site in the body at a surprisingly elevated rate, the possibility of disease due to improper gene expression is high.

Methyl Groups and Histones Act as Epigenetic Actors

Although there are several types of epigenetic modifications, DNA methylation and histone acetylation are the most influential. Performed via epigenetic actors, modification of host DNA affects gene expression by directly affecting mRNA transcription. This, in turn, can be crucial for determining which segments of the genome are transcribed (2). DNA methylation and its reverse process, DNA demethylation, serve to inactivate or activate genes, respectively, thereby regulating their transcription and ultimately affecting cell differentiation. Methyl groups accomplish this modification by covalently binding to cytosine bases of DNA with the help of an enzyme called DNA methyltransferase. The presence of a methyl group deactivates adjacent genes, while its absence activates them. Environmental stressors alter where the methyl groups attach, either deactivating or activating necessary genes (2). Meanwhile, histones are proteins that DNA wraps around to form the chromatin structure. Their primary function is to condense DNA and regulate which regions are open, and thereby which genes are expressed. Formation of heterochromatin, densely packed DNA, prevents gene transcription since transcription proteins do not have access to the DNA, leading to downstream gene inhibition. On the other hand, euchromatin, loosely packed DNA, promotes gene transcription and thereby gene expression. Environmental stressors have been shown to alter the way DNA wraps around histones, changing which genes are expressed (2).
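The on/off logic of methylation described above can be sketched as a toy model (illustrative only; the gene names are hypothetical and nothing here comes from the cited studies):

```python
# Toy model: a methyl mark on a gene's promoter acting as an "off switch"
# for transcription. Gene names are hypothetical placeholders.
methylated = {"gene_a": True, "gene_b": False}

def is_expressed(gene: str) -> bool:
    """A methylated gene is silenced; an unmethylated gene is transcribed."""
    return not methylated[gene]

# Initially, gene_a is silenced and gene_b is expressed.
print(is_expressed("gene_a"))  # False
print(is_expressed("gene_b"))  # True

# Demethylation (e.g., triggered by an environmental stressor that strips
# the methyl mark) reactivates the gene -- the hypomethylation effect the
# article describes for proto-oncogenes.
methylated["gene_a"] = False
print(is_expressed("gene_a"))  # True
```

The real system is of course graded rather than binary (methylation density, position, and chromatin state all matter), but the switch-like behavior is the core idea the following sections build on.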
Environmental Stressors Alter the Epigenome

Environmental Metals

Environmental metals such as cadmium, arsenic, nickel, and methylmercury are all serious threats to the integrity of the epigenome, specifically to DNA methylation patterns. These metals can interact with DNA to create reactive oxygen species (ROS), disrupting the body’s natural ROS balance and interfering with methyltransferase’s ability to interact with DNA. Among these metals, cadmium exposure is quite common through industrial sources, such as battery production, and household sources, such as paints. Cadmium is also present in cigarette smoke in quantities that leave the average smoker carrying twice the cadmium burden of the average non-smoker (3). By acting as a noncompetitive inhibitor of DNA methyltransferase, cadmium causes a decrease in methylation of the genome. This cadmium-induced hypomethylation has also been observed at the sites of proto-oncogenes, effectively turning them on and inducing their expression, resulting in the uncontrollable cell proliferation characteristic of cancer progression (4). Similarly, arsenic’s effect on the epigenome is excitatory. Upon ingestion, arsenic is enzymatically methylated via S-adenosyl-methionine (SAM) to signal for detoxification. The resulting depletion of available SAM, as it is covalently bonded to the introduced arsenic, interferes with proper DNA methylation. Studying rat-liver epithelial cell lines exposed to low amounts of arsenic, Zhao et al. observed harmful transformations and decreased DNA methyltransferase activity caused by decreased SAM levels (5). On the other hand, the predominant effect of nickel is inhibitory, since it induces hypermethylation. Nickel ions have also been speculated to replace magnesium in DNA interactions, enhance chromatin packing, and increase random DNA methylation.
Also, unlike the other metals mentioned, studies have shown that nickel can simultaneously reduce histone acetylation, which tightens the DNA around the histones and silences transcription where it should occur (6). Another damaging environmental metal, methylmercury, presents an alarming source of epimutations because exposure occurs readily through fish consumption, the burning of fossil fuels, industrial boilers, and natural sources like volcanoes. Onishchenko et al. exposed developing mice to methylmercury and isolated changes in brain-derived neurotrophic factor (BDNF), which led to improper neural development and maturation. Methylmercury promotes hypermethylation and alters acetylation of histones; together these changes induce repression of genes throughout the organism’s genome (3).

Particulate Matter and Air Pollution

Particulate matter (PM) is a mixture of contaminants including nitrates, sulfates, organic chemicals, metals, soil,
dust, or other debris particles (3). In a study involving steel plant workers with high exposure to PM, global hypomethylation was exhibited post-exposure when compared to baseline measurements, and long-term exposure increased this hypomethylation (3). Similar hypomethylation was observed in a study that measured the exposure of elderly men to black carbon, a component of particulate matter formed through the combustion of fossil fuels. This study highlighted the role of black carbon in promoting oncogene expression and cancer development (3).
Table 1 The epigenetic role of some common nutrients and the foods in which they are found. Retrieved from: University of Utah, 2014. Nutrition and the Genome.
Benzene

Major sources of benzene emission include automobile exhaust, cigarette smoke, petroleum manufacturing, and oil storage tanks. One study investigating low-level benzene exposure identified, in subjects’ peripheral blood DNA, hypermethylation of the p15 tumor-suppressor gene and hypomethylation of the MAGE-1 gene. In this instance, extra methylation effectively turned the p15 gene off, while reduced methylation of the MAGE-1 oncogene led to its uncontrolled expression, raising the chance of cancer development. At the cellular level, incubation of mouse bone marrow cells with benzene altered DNA transcription and resulted in rapid cellular apoptosis (10).

Figure 1 The process by which epigenetic tags affect the development of a new embryo.
Retrieved from: http://learn.genetics.utah.edu/content/epigenetics/inheritance/

Physical Stress

To an extent, physical exertion presents an environmental stressor that could potentially harm the epigenome. In a study by Yao et al., gestating rats were restrained and forced to swim in the later stages of the fetuses’ development. The team found that, three generations later, the pups demonstrated abnormal developmental behaviors and had lower weights. The frequency of offspring presenting with this phenotype was higher than predicted by Darwinian inheritance, suggesting epigenetic inheritance (7).

Diet

Diet is crucial to the integrity of the developing fetus’s epigenome during pregnancy: a diet not rich enough in methyl-donating folic acid or choline can leave the child with a hypomethylated genome, which will be carried throughout the person’s life. If an adult is chronically lacking in methyl-donating groups as well, the resulting hypomethylation could induce disease expression in certain parts of the genome. Chronic hypomethylation has been linked to pathophysiological conditions including cancer, atherosclerosis, chronic venous disease, and diabetes (8, 11). Unfortunately, consuming too much of the methyl-donating groups has the exact opposite effect of under-consumption: over-consumption can result in hypermethylation of the genome, which is most evident during fetal development. Table 1 lists some nutrients and their sources that influence the epigenome (8).

How Epigenetic Information is Inherited

The pattern of epigenetic markers is preserved with each mitotic division of a cell, since the information is inherited by the daughter cells after mitosis. Therefore, epimutations can potentially last throughout an organism’s lifetime, unless another factor corrects the epimutation in the same way it was introduced (9). Epimutations can also be passed down via meiosis to the germ cells and, in turn, the offspring, who will carry a combination of the parents’ epimutations at fertilization. To avoid this, a biological precaution known as reprogramming occurs at two points during fetal development.
Here, epigenetic actors are erased and re-added to avoid any parental epimutations being inherited by the fetus. In spite of this, a small percentage of epimutations bypass this precaution, leading to generational epigenetic inheritance. Therefore, epigenetic alterations caused by environmental stressors experienced by the parents could potentially affect the child with the same consequences. This inheritance also spans many generations and therefore occurs on a transgenerational level. While the phenomenon has been confirmed, the extent to which it occurs has yet to be formally established (9).

Prevention and Precaution

With the abuse of drugs, alcohol, and cigarettes, and the increasing levels of toxins in the environment, the frequency of epimutations is increasing. It is evident that the best way to reduce epigenetic mutations is to live a healthy lifestyle and reduce toxin exposure. Still, the transgenerational inheritance capability of epimutations means that they can persist for up to five generations. Someone who currently lives a healthy lifestyle can therefore still have problems with his or her epigenome because of the lifestyle choices of an ancestor, especially one exposed in the womb (1). Current research is trying to understand how we can regulate and better understand epigenetics, which could lead to screening tests that determine a person’s predisposition to a disease or disorder. Such information is invaluable, as people could then be more wary and adjust their habits accordingly.

References
1. Skinner, M.K. A New Kind of Inheritance. Scientific American. 311, 44-51 (2014).
2. Bollati, V., Baccarelli, A. Environmental epigenetics. Heredity. 105, 105-112 (2010).
3. Baccarelli, A., Bollati, V. Epigenetics and Environmental Chemicals. Current Opinion in Pediatrics. 21, 243-251 (2009).
4. Takiguchi M, Achanzar WE, Qu W, Li G, Waalkes MP. Effects of cadmium on DNA-(Cytosine-5) methyltransferase activity and DNA methylation status during cadmium-induced cellular transformation. Experimental Cell Research. 286, 355-365 (2003).
5. Zhao CQ, Young MR, Diwan BA, Coogan TP, Waalkes MP. Association of arsenic-induced malignant transformation with DNA hypomethylation and aberrant gene expression. Proc Natl Acad Sci. 94, 10907-10912 (1997).
6. Chen H, Ke Q, Kluz T, Yan Y, Costa M.
Nickel ions increase histone H3 lysine 9 dimethylation and induce transgene silencing. Molecular and Cellular Biology. 26, 3728-3737 (2006).
7. Yao Y, Robinson AM, Zucchi FCR, Robbins JC, Babenko O, Kovalchuk O, Kovalchuk I, Olson DM, Metz GAS. Ancestral exposure to stress epigenetically programs preterm birth risk and adverse maternal and newborn outcomes. BMC Medicine. 12, 121 (2014).
8. Genetic Science Learning Center, Nutrition and the Epigenome, Learn Genetics. Retrieved December 6, 2014, from http://learn.genetics.utah.edu/content/epigenetics/nutrition/ (2014).
9. Genetic Science Learning Center, Epigenetics and Inheritance, Learn Genetics. Retrieved December 6, 2014, from http://learn.genetics.utah.edu/content/epigenetics/inheritance/ (2014).
10. Gao A, Zuo X, Song S, Guo W, Tian L. Epigenetic modification involved in benzene-induced apoptosis through regulating apoptosis-related genes expression. Cell Biology International. 35, 391-396 (2011).
11. Dauncey MJ. Nutrition, the brain and cognitive decline: insights from epigenetics. European Journal of Clinical Nutrition. 68, 1179-1185 (2014).
12. Skinner M, Anway M. Epigenetic Transgenerational Actions of Endocrine Disruptors. Endocrinology. 147 (2006).
Video Games as a Source for Neural Therapy

By Tasfinul Haque ’15

Introduction

Invented in 1947, the Cathode Ray Tube Amusement Device allowed people to use an oscilloscope to shoot “missiles” at targets on overlaid transparencies. Since then, video games have become a multi-billion dollar industry, with game developers pushing the limits of graphics technology to create hyperrealistic visuals with engaging plots. The meteoric rise in popularity of video games since the late 1970s has prompted researchers to explore the effects that these games have on our neural functions. While the social repercussions of video games are hotly debated, the neural plasticity that video games induce is more objectively measurable. Those who play video games have greater cognitive flexibility, faster visual reflexes, and better motor control than those who do not (1). Neuroscientists are using this wealth of information to understand how video games can be used to rehabilitate those with neural pathologies. Even more recently, researchers and game developers have begun to collaborate in order to tailor video games to improve specific neural functions. Although the results of video game therapy are positive, its role in mainstream rehabilitation has been limited. Nevertheless, video games have the potential to reveal and address the neural basis of physical and mental impairments and thus warrant further investigation.

The Neural Basis of Video Games

Video games engage several cortical areas involved in hand-eye coordination, integration of the senses, the ability to adapt quickly, and many other functions. In fact, studies have shown that those who frequently play video games have increased gray matter (neuronal cell bodies located in the brain and spinal cord) in areas of the brain essential for spatial navigation, strategic planning, working memory, and motor performance (2).
For an inexperienced video gamer, cortical activity increases because several new tasks are being performed by the brain. As the gamer becomes more skilled, cortical activity decreases, with fewer neurons required to execute the same task (3). This implies that the nervous system becomes more efficient as a person plays video games. The increased efficiency may be assisted by the activity of the ventral striatum: as a person plays, the ventral striatum releases dopamine, a neurotransmitter significant in reward processing and motivation. Since dopamine secretion increases the gamer’s desire to continue playing (4), the neural circuits involved in executing a specific skill are reinforced in the process. Dopamine signaling also triggers long-term potentiation, the mechanism underlying memory formation and learning in neurons (4). As a result, the neural pathways activated by game play are strengthened and produce sustained effects. By understanding these mechanisms, video games can be used to help those with neurological damage.

Neural Reorganization and Motor Control

Video games can assist in regaining motor control after a stroke by inducing reorganization of the motor cortex through repeated movements of the affected limb. Strokes are caused by a disruption of blood flow to the brain and often result in some loss of motor control. Constraint-induced movement therapy (CIMT) is a common method to help patients regain motor function; it involves high-repetition movements made by the affected limb without relying on the healthy side (5). Recently, customized video games were developed for use in CIMT, in which patients complete motor tasks in the game, such as navigating a boat through obstacles using the affected arm. When the affected side is forced to move, its representation in the cortex increases, which increases activity of the motor cortex area associated with the affected limb (6). This suggests that when damage to the motor cortex occurs during a stroke, CIMT can facilitate the redistribution of limb representation among the remaining healthy motor cortex. CIMT is often used in traditional therapy, but video games provide several additional benefits. Caregivers have noticed that patients are more engaged and motivated in therapy when video games are utilized, as they create a non-clinical environment where patients can feel more relaxed (7).
Therefore, stroke patients with sustained motor control deficits can actively and enthusiastically play games to produce long-term improvements in motor control.

Memory Processing and Mental Disorders

Researchers have also begun to explore how video games can improve the lives of those with mental and cognitive disorders, such as Alzheimer’s disease. Several studies involving patients with dementia and Alzheimer’s have shown that music-based video games can slow or stop the progression of mental decline by reinforcing existing memories and skills (8). Music therapy has proven to be an excellent tool for patients with dementia, as singing or playing instruments has a wide range of effects, such as reinforcing speech construction, inducing reminiscence, promoting fine motor control, and decreasing stress hormones (8). Even after dementia patients have lost several cognitive functions, music can still evoke a response into the late stages of dementia (9). A video game based on music therapy has many of the effects of traditional music therapy but can be applied to patients with low musical inclination and minimal training. By reinforcing several neural circuits involved in speech, memory, and motor functions, a patient can gain a sense of control over their dementia. In a study involving the video game MINWii, patients played music on a virtual keyboard in order to restore self-esteem, a process known as renarcissization (8). Renarcissization improves many behavioral issues which might otherwise lead to institutionalization. While MINWii may not induce neural reorganization to eliminate dementia, it reinforces many neural circuits that are lost during dementia and prevents its progression. In an even more remarkable study, researchers found that Tetris® was able to interrupt the brain’s process of memory formation, suggesting potential methods by which post-traumatic stress disorder (PTSD) could be prevented before it forms. Tetris®, a puzzle game in which falling blocks are rotated onto a board with increasing speed, requires extensive integration of visuospatial data. Visuospatial tasks involve the perception of the spatial relationships between different objects. PTSD is a type of anxiety disorder that develops after a person encounters a traumatic event and is associated with repeated flashbacks of that event. Interestingly, the flashbacks associated with PTSD use visuospatial centers in the brain to consolidate these traumatic memories.
Because the brain has limited resources, engaging in visuospatial tasks selectively competes for the resources used to generate mental images (10). The neurobiology of memory formation suggests that there is a six-hour window in which memory consolidation can be disrupted, preventing the memory from being fully processed by the brain.
Retrieved from: http://www.geneticsandsociety.org/img/original/Brain1.jpg
Figure 1 Because the brain has limited resources, engaging in visuospatial tasks, like playing Tetris®, selectively competes for resources to generate mental images. This can diminish memory consolidation of traumatic events that occur before playing Tetris®.
Photo courtesy of Sarima Subzwari ’18
Figure 2 The Wii, a video game console, can be used to restore self-esteem in patients with dementia.
In order to test the ability of Tetris® to diminish memory consolidation, researchers showed a graphic film to volunteers, who were then divided into a group with no assigned task and a group that played Tetris® 30 minutes after viewing the film. Researchers then monitored the number of flashbacks experienced by each participant and assessed their clinical symptoms. Those who played Tetris® after the traumatic viewing had significantly fewer flashbacks and trauma symptoms (10). While this procedure represents a very crude approximation of PTSD, the underlying mechanism of memory formation remains the same. Effective treatment for PTSD exists in the form of drug therapies and cognitive behavioral therapy. However, these therapies are successful only if the patient can come to terms with the traumatic event, which is often an emotionally painful process. The application of specially designed visuospatial tasks could provide a novel approach that eliminates traumatic memories as they form.

Establishing Video Game Therapy

Despite the promising results of video game therapy for several motor and mental disorders, several factors still prevent its integration into traditional therapy. The most immediate issue is the low number of games being created for the purpose of rehabilitation. While the use of existing consoles, like the Nintendo Wii, is proving useful, tailor-made games can more accurately target a specific issue and remove the unintentional aspects of mass-marketed games from therapy. Even after a video game therapy is developed, access to the games is often limited: many are located in hospitals or rehabilitation centers, which may prevent or discourage patients from receiving the therapy.
Video game therapies, although fairly new, have many unique benefits, many of which are not found in traditional rehabilitation.
The most significant obstacle that video game therapy faces is the critical lack of research. While the cognitive and motor benefits of video games are well documented, research regarding how specific areas of the nervous system are affected by video games remains scarce. Video games cannot efficiently serve as a neural therapy if the appropriate neural targets are not identified. Research into video game therapy is also limited because it is complex to study: in order to test the effects on the nervous system, the putative component of the game must be studied in isolation, which often proves difficult for mass-marketed games (1). Video game therapy has several obstacles to overcome before it can be integrated into traditional rehabilitation, yet the benefits it can provide to those with impaired neural capacities should not be overlooked. In particular, the increase in motivation that many patients express for video game rehabilitation may prove therapeutic in itself. Video games have an advantage over other therapies because there is a significant amount of non-clinical interaction with caregivers, family, and other patients. The effects of this added social therapy on self-esteem building and overall well-being should also be explored. In order to provide a therapy where social interaction is valued alongside physiological improvement, research into the neural basis of video games, as well as a standard for studying the
components of video games, must be established. Video game research can provide valuable insight into how motor and mental disorders are processed by the brain, and may provide an innovative method for treating a variety of neural pathologies.
References
1. C. S. Green, D. Bavelier. Action-video-game experience alters the spatial resolution of vision. Psychological Science. 18. 88-94 (2007).
2. S. Kühn, T. Gleich, R. C. Lorenz, U. Lindenberger, J. Gallinat. Playing Super Mario induces structural brain plasticity: gray matter changes resulting from training with a commercial video game. Molecular Psychiatry. 19. 265-271 (2014).
3. J. A. Granek, D. J. Gorbet, L. E. Sergio. Extensive video-game experience alters cortical networks for complex visuomotor transformations. Cortex. 46. (2010).
4. S. Kühn, A. Romanowski, C. Schilling, R. Lorenz, C. Mörsen, N. Seiferth, J. Gallinat. The neural basis of video gaming. Translational Psychiatry. 1. 53-57 (2011).
5. S. L. Wolf, et al. Effect of constraint-induced movement therapy on upper extremity function 3 to 9 months after stroke: the EXCITE randomized clinical trial. JAMA. 296. 2095-2104 (2006).
6. J. Liepert, et al. Motor cortex plasticity during constraint-induced movement therapy in stroke patients. Neuroscience Letters. 1. 5-8 (1998).
7. J. Halton, D. M. Cook, K. McKenna, K. Fleming, R. Darnell. A new frontier for occupational therapy. Occupational Therapy Now. 9. 12-14 (2008).
8. S. Benveniste, P. Jouvelot, B. Pin, R. Péquignot. The MINWii project: renarcissization of patients suffering from Alzheimer’s disease through video game-based music therapy. Entertainment Computing. 3. 111-120 (2012).
9. A. C. Vink, J. S. Birks, M. S. Bruinsma, R. J. P. M. Scholten. Music therapy for people with dementia. Cochrane Database of Systematic Reviews. 4. 1-49 (2003).
10. E. A. Holmes, E. L. James, T. Coode-Bate, C. Deeprose. Can playing the computer game “Tetris” reduce the build-up of flashbacks for trauma? A proposal from cognitive science. PLoS ONE. 4. e4153 (2009).
Retrieved from: http://media.spundge.com.s3.amazonaws.com/stories/de58904e9d5e11e2b8da12313d2b58c5.jpg
Retrieved from: http://upload.wikimedia.org/wikipedia/commons/0/08/MRSA_dead_neutrophil.jpg
Confronting the Rise of Superbugs in an Increasingly Drug-Resistant Era
By Julia Joseph ’15
Introduction
Approximately 100 years ago, the arsenic-based drug Salvarsan seemed poised to usher in a golden age of “magic bullets.” After many failed experiments, the product of Paul Ehrlich’s 606th attempt to kill Treponema pallidum, the causative agent of syphilis, was a ground-breaking discovery as the first extrinsic chemical to specifically target the pathogen (1). Soon after, with Alexander Fleming’s serendipitous discovery of penicillin and the therapeutic application of sulfonamides in the 1930s, widespread interest in a new class of drugs emerged: antibiotics. With their remarkable success in combating once-devastating bacterial diseases, antibiotics were widely hailed as one of the wonder discoveries of the twentieth century (2).
Yet less than 100 years later, a new and dangerous problem has emerged. While it is unquestionable that antibiotics have become one of the most important tools of modern medicine, their Achilles’ heel has been the observed phenomenon of drug resistance. Through years of widespread and unchecked use of antibiotics, so-called “superbugs,” including methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant Enterococci (VRE), and carbapenem-resistant Enterobacteriaceae (CRE), have emerged. These bacteria have evolved the ability to evade the antimicrobial action of antibiotic drugs, allowing them to become resistant to the most commonly used front-line drugs and, in more serious cases, to antibiotics that are typically reserved as the last line of defense. Today, the Centers for Disease Control and Prevention (CDC) estimates that drug-resistant bacteria are responsible for two million illnesses in the U.S. and for more than 23,000 deaths per year, with the numbers steadily increasing (3). As bacteria become increasingly unresponsive to standard treatment, and the possibility of a future “post-antibiotic era” looms, new and effective counter-measures must be swiftly taken (4).
How Antibiotics Work
Most antibiotics have been discovered as products used by natural life-forms to compete with other microbes in their environment. These include the penicillins and cephalosporins derived from fungi, and streptomycin and vancomycin from different strains of Streptomyces bacteria. Antibiotics have been further augmented through the synthetic modification of these natural products, giving rise to second- and third-generation β-lactams in the penicillin and cephalosporin classes. In other cases, antibiotics have been developed through novel synthetic pathways, such as the fluoroquinolones (5). The main antibiotic drugs have been grouped by their ability to interfere with essential processes necessary for bacterial survival, including cell wall biosynthesis, protein synthesis, and DNA replication and repair. Penicillins and cephalosporins of the β-lactam class, for instance, target the enzymes involved in peptide cross-linking of the peptidoglycan layer. Peptidoglycan, a covalently cross-linked meshwork of peptide and glycan, is unique to prokaryotes and confers solid support and strength on the cell wall. By targeting such enzymes, β-lactams weaken the bacterial cell wall and thereby predispose the bacterium to lysis (5). Protein synthesis is likewise vital to bacterial survival; due to distinct differences in protein machinery between prokaryotes and eukaryotes, antibiotics like tetracyclines and macrolides are able to specifically target certain steps in bacterial protein synthesis. Fluoroquinolones, on the other hand, interfere with the process of bacterial DNA replication. By inhibiting the bacterial enzyme gyrase, a type II topoisomerase necessary for uncoiling DNA during replication, fluoroquinolones cause an accumulation of double-strand breaks that eventually leads to cell death (5).
How a Bug Becomes a Superbug: The Phenomenon of Antibiotic Resistance
Acquiring Drug Resistance
Bacteria have been in existence for more than three billion years, implying that resistance was well established as a means of survival long before the advent of antibiotics (6). Yet the widespread use of antibiotics has only exacerbated the issue of resistance. Fleming himself warned of resistance in his Nobel Prize acceptance speech in 1945, as the phenomenon was noted within two years of the introduction of penicillin in the mid-1940s (5). Drug resistance can be acquired through the inheritance of certain genes or through genetic changes, like mutations. While the mutations that confer resistance occur by chance, under selective pressure from exposure to antibiotics the drug-resistant bacteria eventually outcompete the drug-susceptible bacteria. These “adaptive mutations,” a result of Darwinian natural selection, are an especially important force for bacteria like Mycobacterium tuberculosis, which are not known to exchange DNA under natural conditions (6).
Bacterial Survival Strategies: Mechanisms of Drug Resistance
Drug resistance mechanisms, once genetically transmitted and inherited by bacteria, are generally as varied as the antibiotics themselves, yet all work toward the eventual goal of nullifying the action of the drug (Figure 2). For example, antibiotics that inhibit protein synthesis must be able to pass through the cell membrane and accumulate at a high enough concentration to be effective. Bacteria that become resistant overproduce membrane proteins that act as efflux pumps for the drug, keeping intrabacterial concentrations of the drug low. Such is the case for Staphylococci that become resistant to the erythromycin class of macrolide antibiotics (5).
Figure 2 Drug resistance is acquired by various means, each aiming to combat the antimicrobial action of the antibiotic drug. Retrieved from: http://www.intechopen.com/source/html/46480/media/image5.png
Figure 1 Bacteria can freely exchange genetic material to acquire drug resistance. Retrieved from: http://ehp.niehs.nih.gov/wp-content/uploads/2013/08/ehp.121-a255.g001.png
For other types of bacteria, the successful and rapid development of resistance is due to the ability to transfer and acquire resistance genes (Figure 1). Horizontal gene flow, a driving force in bacterial evolution, allows resistance genes to be shared by means of mobile genetic elements. For instance, bacteriophages, viruses that specifically infect bacteria through the process of transduction, have long been a potent force for gene transfer between closely related species of bacteria (6). Even more significantly, the exchange of plasmids, circular pieces of naked DNA that replicate independently of chromosomal DNA, has been critical for accumulating resistance genes (6). In fact, the uptake of plasmids by the process of transformation was directly responsible for the creation of penicillin-resistant Streptococcus viridans (7). Transposons, or self-transmissible elements, can also excise themselves from DNA and use plasmids as an efficient mode of transfer of antibiotic resistance genes. The life-threatening VRE, for example, has evolved resistance to vancomycin by accumulating five resistance genes on plasmids (5).
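The selection dynamics described under Acquiring Drug Resistance, in which rare chance mutations are amplified once antibiotics kill off susceptible competitors, can be illustrated with a toy simulation. This is only an illustrative sketch: the function name, the generational model, and all parameter values are hypothetical, not taken from any cited study.

```python
import random

def simulate_selection(generations=30, pop_size=1000,
                       mutation_rate=1e-3, antibiotic_kill=0.9):
    """Toy model of adaptive mutation under antibiotic pressure.

    Each generation: susceptible cells may mutate to resistance by
    chance; the antibiotic then kills most susceptible cells, and the
    survivors repopulate to carrying capacity in proportion to their
    numbers. Returns the final resistant fraction of the population.
    """
    resistant = 0  # number of resistant cells
    for _ in range(generations):
        susceptible = pop_size - resistant
        # rare chance mutations confer resistance
        resistant += sum(1 for _ in range(susceptible)
                         if random.random() < mutation_rate)
        # antibiotic kills a large fraction of susceptible cells only
        survivors_s = (pop_size - resistant) * (1 - antibiotic_kill)
        survivors_r = resistant
        total = survivors_s + survivors_r
        # survivors repopulate proportionally to carrying capacity
        resistant = round(pop_size * survivors_r / total) if total else 0
    return resistant / pop_size

random.seed(0)
print(f"resistant fraction under antibiotic pressure: "
      f"{simulate_selection():.2f}")
print(f"resistant fraction without antibiotic: "
      f"{simulate_selection(antibiotic_kill=0.0):.2f}")
```

Even though mutations arise at the same low rate in both runs, the resistant fraction sweeps toward 1.0 only when the antibiotic removes the susceptible competition, which is the essence of the selective pressure described above.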
The antibiotic itself can also be chemically modified. β-lactamases, for example, deactivate the β-lactam ring of penicillins and cephalosporins, which is necessary for disrupting the cell wall structure. When this ring is deactivated, the drug becomes nonfunctional. Similarly, enzymes can disrupt the binding of aminoglycoside antibiotics to their RNA targets in the ribosome by adding chemical substituents. This prevents aminoglycosides such as kanamycin, an antibiotic commonly used to treat E. coli infections, from acting as protein synthesis inhibitors (5). Alternatively, rather than removing or destroying the antibiotic, bacteria can instead inherit the ability to camouflage the target of the antibiotic. This strategy has been utilized by VRE to escape vancomycin: a different reduction pathway of pyruvate gives the substrate a lowered binding affinity for the antibiotic. Penicillin-binding proteins (PBPs) with lower affinity for penicillin have also been expressed as a means of circumventing its antibacterial action (5).
Combating Drug Resistance: A Difficult Task
Once an antibiotic has been used for an extended period of time, the resulting drug-resistant bacteria have been observed to become resistant not only to that specific antibiotic, but also to several others. An even more problematic aspect is that resistant bacteria appear to lose their resistance only slowly,
even in the absence of the selecting antibiotic (8). Bacteria thus not only stay resistant longer, but can also rapidly evolve resistance to multiple antibiotics. From an ecological perspective, resistance selection is enhanced by the density of antibiotic usage. This “selection density” reflects the total amount of antibiotic used in a specified area, such as a home, hospital, or farm. Within that area, an individual can become a “factory” for resistant bacteria that enter the environment, giving antibiotics unique ecological effects. For instance, antibiotic treatment for acne was found to produce a multidrug-resistant flora in other members of the household, even though only one individual was undergoing the treatment (8). From a broader perspective, a study revealed that resistance rates in individuals in Nepal correlated more with total community use of antibiotics than with the individual’s own use (8). The impact of the drug selection process is thus not confined to the individual taking the antibiotic; rather, it uniquely makes antibiotics “societal drugs” that affect others who share the environment (9). The unchecked use of antibiotics in farming has also contributed to the rise of drug-resistant bacteria. A study published in the Proceedings of the National Academy of Sciences reported that in 2010, total consumption of antibiotics in livestock worldwide was estimated at 63,151 tons, a figure projected to rise by 67% by 2030 (10). While considerable debate surrounds the relationship between drug resistance in humans and the use of antibiotics in animals and agriculture, experts generally agree that such use has significantly contributed to the problem of resistance (8). The overuse of antibiotics in animals has only been compounded by the inappropriate use of antibiotics in humans.
This is the result of antibiotics being made freely available without prescription and consumed unnecessarily to combat viral or fungal infections (11). In cases where patients do not adhere to strict drug regimens, antibiotic-resistant strains gain the ability to persist and eventually overpower the drug-susceptible strains. This problem has been attributed to the evolution of multidrug-resistant strains of Mycobacterium tuberculosis, a bacterium that currently infects one-third of the world’s population (12). Confronting drug resistance has also suffered setbacks due to the unenthusiastic response of the pharmaceutical industry. Attention has mostly focused on drugs with a higher profit margin, such as drugs for chronic diseases or cancer. Since infectious diseases do not offer as much financial incentive, interest in finding new drugs has generally been muted (14).
Turning Back the Tide
Though the challenge is daunting, dealing with this issue is not impossible. As a first step, curtailing the rampant use of existing antibiotics can help avert the crisis. As noted by the WHO, a coordinated global surveillance effort also needs to be put in place in order to better track outbreaks and monitor antibiotic use (4). Investment must also be made in research into new antibiotics as alternatives to current treatments. Although a new class of antibiotics had not been discovered in decades, a new antibiotic, teixobactin, was recently reported in Nature. The antibiotic is notable for its ability to efficiently kill superbugs like MRSA in vivo, and was discovered using the novel “iChip” tool to culture previously unculturable bacteria (15). The iChip holds great
Figure 3 The “iChip” is novel for its ability to identify previously-unculturable strains of bacteria, which may lead to new antibiotics in the future.
promise for uncovering other compounds with antibacterial properties and will hopefully lead to new antibiotics in the future. By developing new compounds in the lab and through concerted public health movements, the issue of antibiotic resistance and the emergence of superbugs can eventually be controlled. Superbugs are already being observed with increasing frequency across the globe, and the international scientific community has begun to take note. Coordinated efforts among scientists and healthcare providers to combat the growing problem of antibiotic resistance must therefore be taken rapidly.
References
1. Yarnell, Amanda. Salvarsan: The Top Pharmaceuticals that Changed the World. Chemical and Engineering News. 83. 3 (2005).
2. Sepkowitz, Kent A. One hundred years of Salvarsan. N Engl J Med. 365. 291-293 (2011).
3. CDC. Antibiotic Resistance Threats in the United States, 2013. (2013).
4. WHO. Antimicrobial Resistance: Global Report on Surveillance 2014. 1-257 (2014).
5. Walsh, Christopher. Molecular mechanisms that confer antibacterial drug resistance. Nature. 406. 775-781 (2000).
6. Morris, Andrew, Kellner, James, Low, Donald. The superbugs: evolution, dissemination and fitness. Curr Opin Microbiol. 1. 524-529 (1998).
7. Dowson CG, et al. Penicillin-resistant viridans streptococci have obtained altered penicillin-binding protein genes from penicillin-resistant strains of Streptococcus pneumoniae. PNAS. 87. 5858-5862 (1990).
8. Levy SB, Marshall B. Antibacterial resistance worldwide: causes, challenges and responses. Nature Medicine. 10. S122-S129 (2004).
9. Levy SB. Antibiotic resistance: an ecological imbalance, in Antibiotic Resistance: Origins, Evolution, and Spread. 1-9 (1997).
10. Van Boeckel TP, et al. Global trends in antimicrobial use in food animals [early edition]. PNAS. (2015).
11. Nyquist AC, et al. Antibiotic prescribing for children with colds, upper respiratory infections and bronchitis by ambulatory physicians in the United States. JAMA. 279. 875-877 (1998).
12. Borgdorff MW, Floyd K, Broekmans JF. Interventions to reduce tuberculosis mortality and transmission in low- and middle-income countries. Bulletin of the World Health Organization. 80. 217-227 (2002).
13. CDC. TB: Data and Statistics. (2013).
14. Bax R, Green S. Antibiotics: the changing regulatory and pharmaceutical industry paradigm. J Antimicrob Chemother. (2015).
15. Ling LL, et al. A new antibiotic kills pathogens without detectable resistance. Nature. 517. 455-459 (2015).
More Than a Structural Component: The Vast Biological Functions of Sphingolipids
By Ashwin Kelkar ’16
Introduction
What constitutes a cell? In a very general sense, the first successful cell-like structures required a biological barrier that would mediate the flow of molecules into and out of their “bodies.” What developed was the phospholipid bilayer, a semi-permeable membrane intermeshed with various proteins that regulate the passage of important bioactive molecules, those that regulate cellular responses, in order to initiate various cellular functions. Because of the relative difficulty of conducting experiments with lipids, they were for many years thought to play only metabolic and structural roles in the body. It wasn’t until the groundbreaking discovery of the role of diacylglycerol (DAG) and inositol 1,4,5-trisphosphate (IP3) in stimulating protein kinase C (PKC) that lipids were also considered bioactive (1). Enter the sphingolipid, a class of lipids discovered in the late 1800s in brain tissue (2). This new class of lipids was interestingly named after the mythological
“Sphinx” for its enigmatic character, but recent research has highlighted sphingolipids as key players in various cellular pathways, including apoptosis, cell division, motility, invasion, and adhesion (2,3). They have since become a major site of investigation for researchers around the world, because slight changes in sphingolipid concentrations can greatly impact the cell cycle, and their absence or overabundance can lead to the development of many pathologies, including neuronal disease and cancer. In order to understand the influence sphingolipids have on cell function, it is important to look past their structural attributes and focus on their diverse, yet specific, functions.
Sphingolipid Chemistry
Structurally, sphingolipids are quite different from the typical lipids normally found in the lipid bilayer. Rather than a glycerol backbone, sphingolipids have a long alkyl backbone with a 1,3-diol tail. At the 2-position is usually an amine that can form a covalent bond with a fatty-acid chain of varying length. Though these differences are seemingly minimal, they confer great functional differences that allow this class of lipids to play an even greater role in regulating cell function (3). Sphingolipids can have different functional groups extending from the 1-position. For example, sphingomyelin has a phosphocholine group attached here. Removal of this phosphocholine results in ceramide formation. Further cleavage, by removal of the fatty acid amide linkage, produces sphingosine. Both ceramide and sphingosine can be phosphorylated to produce ceramide-1-phosphate (C1P) and sphingosine-1-phosphate (S1P), which have antagonistic effects to their unphosphorylated counterparts (3). A full pathway map is illustrated in Figure 2.
Glycosphingolipids
Glycosphingolipids, the largest class in the sphingolipid family, have carbohydrate moieties extending from the 1-position to form acetal linkages; the largest of these are called gangliosides. Glycosphingolipids were the first sphingolipids to be discovered in brain extracts (2). They became recognized for their importance in maintaining the structural integrity of the myelin sheath surrounding neurons, and were first identified as playing a role in neurological pathologies such as Tay-Sachs and Gaucher’s diseases (2). For example, Tay-Sachs disease is the direct result of ganglioside accumulation in the brain (4). Gaucher’s disease, another neuropathology associated with sphingolipid metabolism, occurs when glucosylceramide (GlcCer) cannot be broken down and accumulates in various parts of the body, including the central nervous system, causing a massive deterioration in muscle function and, in some cases, mental retardation (5). In terms of molecular biology, glycosphingolipids act as key structural elements of myelin sheathing. It has been observed that animals lacking galactosylceramide (GalCer) cannot form proper myelin sheaths: though myelin sheaths are observed, evidence indicates that the nodes of Ranvier in these animals are malformed as a result of the GalCer synthesis gene knockdown (6). Disrupting glycosphingolipid metabolism in mice leads to early axon degeneration and demyelination, causing severe impairment. These examples show the necessity of glycosphingolipids and their important role within the nervous system.
However, this does not mean that sphingolipids are necessary only in neurons; in other mammalian cell types, they have been shown to cluster into lipid rafts within the plasma membrane, specifically on the outer leaflet. It is now believed that glycosphingolipid metabolites can also act as intermediary molecules, passing messages from outside to inside the cell (6).
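The metabolic interconversions described in this article (sphingomyelin to ceramide via sphingomyelinase, ceramide to C1P via ceramide kinase, ceramide to sphingosine via ceramidases, and sphingosine to S1P via sphingosine kinases) can be collected into a small lookup structure. The sketch below is our own illustrative representation of those relationships, not a standard bioinformatics format; the function and variable names are hypothetical.

```python
# Edges are (substrate, product) pairs mapped to the enzyme class that
# catalyzes the conversion, as described in the text.
SPHINGOLIPID_PATHWAY = {
    ("sphingomyelin", "ceramide"): "sphingomyelinase (SMase)",
    ("ceramide", "ceramide-1-phosphate"): "ceramide kinase",
    ("ceramide", "sphingosine"): "ceramidase",
    ("sphingosine", "sphingosine-1-phosphate"): "sphingosine kinase",
}

def routes_from(metabolite):
    """List the direct conversions available from a given metabolite."""
    return [(product, enzyme)
            for (substrate, product), enzyme in SPHINGOLIPID_PATHWAY.items()
            if substrate == metabolite]

for product, enzyme in routes_from("ceramide"):
    print(f"ceramide -> {product} via {enzyme}")
```

Laying the pathway out this way makes the branch point explicit: ceramide sits at a fork, and which enzyme acts on it determines whether the pro-apoptotic (ceramide) or proliferative (C1P, S1P) arm of the family is amplified.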
Ceramide and C1P
Because of the varying length of the fatty acid chains attached to the sphingosine backbone, many different ceramides occur naturally in the body, each with different functionalities (7). Ceramide is formed from the hydrolysis of sphingomyelin by sphingomyelinases (SMases). Ceramide can then be phosphorylated directly by ceramide kinase to generate C1P. Ceramide is neutrally charged and therefore interacts poorly with the aqueous environment outside the lipid bilayer, but its hydrophobic structure allows it to readily flip-flop between the cytosolic and extracellular faces of membranes. The anionic phosphate group on C1P, by contrast, allows it to interact with hydrophilic substances more readily. These structural properties give ceramide and C1P, as well as the enzymes that synthesize and catabolize them, distinct functional properties and localizations within the cell (3). Ceramides are most closely associated with the skin and can often be found in beauty products marketed to keep skin healthy. This is because ceramides are a major component of the water barrier of the epidermis, which prevents excess water transpiration (8). There is a clear positive correlation between skin health and ceramide levels, though it is difficult to pinpoint which of the various lipid products is responsible, given the sheer number of compounds necessary for proper skin health (9). Along with its water barrier functions, ceramide is heavily involved in the cellular stress response, signaling cell death and cell cycle arrest to halt cell aging (8,10). One particular activator of ceramide is tumor necrosis factor-α (TNFα), a protein that plays a major role in the inflammatory response. Increased palmitate levels resulting from high carbohydrate intake also stimulate ceramide synthesis (3).
Once activated, ceramide in turn activates second messenger systems that result in self-induced cell death, cell cycle arrest, and a continuation of the inflammatory response, demonstrating ceramide’s versatility as both a second messenger and a downstream effector (10).
Figure 1 Lipids are most commonly found as structural components of organisms, such as the lipid bilayer shown. The structural formation is naturally facilitated by the hydrophilic head (red) and hydrophobic tail (green) of phospholipids. However, sphingolipids differ by their involvement in biochemical pathways.
Figure 2 The sphingolipid metabolic pathway utilizes various enzymes that mediate interconversion between the various bioactive lipids. These are localized, along with the lipids themselves, to various membranes within the cell. The products of this pathway are used in phospholipid formation and incorporation into the lipid bilayer (3).
C1P, on the other hand, has been shown to exert the opposite influences on the cell. Not only does it inhibit cell death, but it also activates cell proliferation; this twofold activity results in hyperactive cell cycle turnover. C1P is known to inhibit ceramide production by directly binding acid SMase, while also activating phosphatidylinositol 3-kinase (PI3-K). This produces phosphatidylinositol (3,4,5)-trisphosphate (PIP3), a direct inhibitor of acid SMase (7). By inhibiting SMase, ceramide production is halted, allowing the cell to continue through the cell cycle and initiate proliferative functions. This dual pathway exemplifies the complexity with which sphingolipids can be interconverted to implement minute changes within the cell. Adding to the intricacy, further experimentation has shown C1P to have both pro- and anti-inflammatory properties, depending on cell type (11).
Sphingosine and S1P
Sphingosine is a metabolite of ceramide, produced by hydrolysis of the fatty acid chain of ceramide in a reaction catalyzed by one of five ceramidases, each localized to a particular membrane within the cell. Phosphorylation of sphingosine by one of two sphingosine kinases produces S1P (3). S1P and C1P have similar functionalities owing to their structural similarities, as do sphingosine and ceramide. But the functions of S1P extend beyond those shared with C1P. A study by Chun et al. demonstrated that fingolimod-phosphate, an agonist of S1P receptors, can reduce the symptoms of multiple sclerosis in animal models; fingolimod is currently undergoing phase 3 clinical trials (12). Moreover, S1P has been linked to angiogenesis and vessel morphogenesis in embryonic development (13).
Conclusions
Since sphingolipids traverse various overlapping biochemical pathways, it is at first difficult to understand why each functionality is distinct and important.
It is important to note that these functionalities arose evolutionarily in order to optimize cell survival. These interconnected pathways allow the cell to coordinate and regulate the total positive and negative effectors for cell cycle, death, proliferation, migration, invasion, and many other functions. Thus, the cell can
effect minute changes in sphingolipid concentrations to produce large changes in its major functions. The sphingolipid family therefore orchestrates a much more complex and intricate regulation of cell functionality than canonical second messenger pathways like adenylyl cyclase. Though the various functions of sphingolipids and their metabolites have only been briefly highlighted here, the actual functions of each bioactive lipid are far more extensive, and research conducted every day is opening new doorways to what sphingolipids are capable of accomplishing. It is clear, however, that lipids function as more than just structural compounds, and have a major and growing role in cell cycle regulation as well as motility, invasion, and adhesion.
References
1. Y. Nishizuka. Intracellular signaling by hydrolysis of phospholipids and activation of protein kinase C. Science. 258. 607-614 (1992).
2. H. McIlwain. A treatise on the chemical constitution of the brain. Isis. 55. 249-250 (1964).
3. Y. A. Hannun, L. M. Obeid. Bioactive signaling lipids: lessons from sphingolipids. Nature Reviews Molecular Cell Biology. 9. 139-150 (2008). doi:10.1038/nrm2329.
4. Tay-Sachs disease. PubMed Health. (2009).
5. What is Gaucher disease? National Institute of Neurological Disorders and Stroke. (2013).
6. R. L. Schnaar, A. Suzuki, P. Stanley, in Essentials of Glycobiology, A. Varki et al., Eds. (Cold Spring Harbor Laboratory Press, Cold Spring Harbor, NY, ed. 2, 2009), chap. 10.
7. L. Arana, et al. Ceramide and ceramide 1-phosphate in health and disease. Lipids in Health and Disease. 9. (2010). doi:10.1186/1476-511X-9-15.
8. H. D. Onken, C. A. Moyer. The water barrier in human epidermis: physical and chemical nature. Arch Dermatol. 87. 584-590 (1963). doi:10.1001/archderm.1963.01590170042007.
9. L. Coderch, O. López, A. de la Maza, J. L. Parra. Ceramides and skin function. American Journal of Clinical Dermatology. 4. 107-129 (2003).
10. Y. A. Hannun. Functions of ceramide in coordinating cellular responses to stress. Science. 274. (1996).
11. A. Gomez-Muñoz. New insights on the role of ceramide 1-phosphate in inflammation. Biochimica et Biophysica Acta (BBA) - Molecular and Cell Biology of Lipids. 1831. 1060-1066 (2013).
12. J. Chun, H. P. Hartung. Mechanism of action of oral fingolimod (FTY720) in multiple sclerosis. Clin Neuropharmacol. 33. 91-101 (2010). doi:10.1097/WNF.0b013e3181cbf825.
13. S. Lucke, B. Levkau. Endothelial functions of sphingosine-1-phosphate. Cellular Physiology and Biochemistry. 26. 87-96 (2010).
Using Finite Element Analysis to Design and Optimize a Composite Bicycle Frame

Plinio Guzman1, Maen Alknader1

1Department of Mechanical Engineering, State University of New York, Stony Brook, NY
ABSTRACT
A quality bicycle frame should be lightweight as well as laterally and torsionally stiff. Additionally, it should be able to withstand both sudden high-impact forces and fatigue failure. For this reason, material selection plays a crucial role in the design process. Ideal bicycle frames were modeled using Finite Element Analysis software by Altair, taking into account loading cases that may present themselves during a ride. Coupling this technique with composite theory, different layups of carbon fiber were tested for potential bicycle frames. A successful base design for a bicycle frame was obtained.

INTRODUCTION
Bicycles are a mainstream means of transportation and recreation. More than any other component, the frame is what gives a bicycle its distinct feel. Structurally, it connects the various components of the bicycle while bearing and transferring a variety of forces and moments, and its geometry determines the handling of a bike and the way it behaves in corners and at different speeds (1, 2). Up to the beginning of the last decade, bicycle design was a trial-and-error process based on heuristics, in which new models were made based on a combination of what had worked in the past. With the development of new materials and computational tools, engineers can now build models to fine-tune design parameters of their choosing with the objective of designing better bikes. Finite Element Analysis (FEA), also referred to as the Finite Element Method (FEM), integrates the theoretical understanding of the behavior of materials with computational techniques to create an interactive visual environment that is useful for analyzing performance (3). The method works by breaking down the structure to be studied into many small elements (a finite number of them), forming what is called a mesh.
The behavior of each individual element is calculated from governing mathematical relationships of physical effects, which can range from mechanical stress and fatigue to heat transfer and fluid flow. The behaviors and interactions of the elements are summed, and the overall behavior of the structure is determined. Because of the immense number of calculations, computer software is used. This is a much more efficient option, since designs can be studied for potential behavior prior to actually building them. In turn, this technique allows for several iterations of potential designs, allowing for a quick and inexpensive search for an optimal design. It has become a standard methodology for pre-production analysis. It is important to note that FEA does not provide exact results, only approximations to them. The accuracy of these approximations depends on the validity of the model and a sound understanding of the underlying theory. The overall goal of this project is to optimize the design of a bicycle frame. The frame should be manufacturable with low-tech equipment and at minimal cost while still meeting the safety and performance standards of a high-performance frame.
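The assemble-and-solve loop described above can be sketched in a few lines for the simplest possible case: a one-dimensional bar pulled axially. All numbers here (material constants, element count, load) are illustrative and not taken from the paper; the point is only the mechanics of assembling element stiffnesses into a global system K u = F and solving it.

```python
# Minimal sketch of the finite element idea on a 1D bar under axial load.
# The bar is split into elements; each element stiffness k = EA/L is
# assembled into a global matrix, boundary conditions are applied, and
# the system K u = F is solved for the nodal displacements.
import numpy as np

E, A = 70e9, 1e-4          # Young's modulus (Pa) and cross-section (m^2), assumed
n_elem, length = 4, 1.0    # four elements over a 1 m bar
Le = length / n_elem
k = E * A / Le             # axial stiffness of one element

K = np.zeros((n_elem + 1, n_elem + 1))
for e in range(n_elem):    # assemble each element's 2x2 stiffness block
    K[e:e + 2, e:e + 2] += k * np.array([[1, -1], [-1, 1]])

F = np.zeros(n_elem + 1)
F[-1] = 1000.0             # 1 kN pulling on the free end

# Fix node 0 and solve the reduced system for the remaining nodes.
u = np.zeros(n_elem + 1)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

# For a uniform bar, the FEA answer matches the analytic elongation FL/(EA).
print(u[-1], F[-1] * length / (E * A))
```

Because the bar is uniform, the tip displacement here matches the analytic value exactly; for a real frame geometry the mesh only approximates the continuum solution, which is the caveat raised above.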
Vertical compliance is in some cases desired to absorb shock and vibrations, which translates to good handling and a comfortable ride. The effect of material selection on lateral stiffness and vertical compliance is studied through the analysis and comparison of deformation and stress mappings in three prototypes under equal geometry and loading conditions.

Material Selection
The word composite means "made of several parts." It refers to combinations of materials that have properties superior to those of the materials alone. A common example of a composite material is concrete, which typically consists of loose stones held together within a matrix of cement. Carbon fiber is another type of composite material, in which strands of carbon act as a reinforcing material and are bound together by a polymer matrix. Its high strength-to-weight ratio makes it an attractive material for bicycle frame building. A layer of carbon fiber is called a ply, and several plies stacked together make up a laminate (5). Each ply is arranged along the principal axis along which it has the greatest strength and rigidity, which means that a single ply is mainly strong in only one direction. Unidirectional carbon fiber sheets can be layered at various angles to ensure strength in multiple directions. This versatility in arrangement is used to tailor the mechanical properties of the carbon fiber body to the demands of the structure. Given these points, carbon fiber is the material of choice in the design of lightweight bicycle frames with high lateral stiffness and vertical compliance. In addition, its resistance to fatigue failure and corrosion, as well as its versatility to be formed into complex shapes, has contributed to its popularity. The behavior of a specific carbon fiber layup sequence is studied with classical composite theory, which provides the FEA software a mathematical base upon which to approximate the behavior of the material (6).
The direction along the length of the composite body is considered to be 0°, and plies are stacked at angles relative to it.
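The effect of ply angle can be made concrete with the standard transformed-stiffness formula of classical lamination theory: the effective stiffness along the 0° loading axis drops sharply as the fibers rotate away from it. The material constants below are typical unidirectional carbon/epoxy values chosen for illustration, not the paper's data.

```python
# Sketch of why ply angle matters (classical lamination theory).
# The reduced stiffness terms Q of a unidirectional ply are rotated to
# angle theta; Qbar11 is the resulting in-plane stiffness along 0 deg.
import math

E1, E2, G12, v12 = 135e9, 10e9, 5e9, 0.30   # assumed ply properties (Pa)
v21 = v12 * E2 / E1
d = 1 - v12 * v21
Q11, Q22, Q12, Q66 = E1 / d, E2 / d, v12 * E2 / d, G12

def Qbar11(theta_deg):
    """Effective stiffness along the 0-deg axis for a ply at theta."""
    c = math.cos(math.radians(theta_deg))
    s = math.sin(math.radians(theta_deg))
    return Q11*c**4 + 2*(Q12 + 2*Q66)*s**2*c**2 + Q22*s**4

for ang in (0, 45, 90):
    print(ang, round(Qbar11(ang) / 1e9, 1))  # stiffness in GPa
```

A 0° ply carries essentially the full fiber modulus, while a 90° ply contributes only the much softer matrix-dominated stiffness, which is why mixed-angle stacks are used to cover multiple load directions.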
Figure 1 Unidirectional Laminate stacking sequence (7).
DESIGN
General characteristics that define an ideal bicycle frame are low weight and high lateral stiffness, which contribute to energy efficiency while riding (4). Low weight maximizes the velocity and acceleration that a rider can achieve for a given energy input. Lateral stiffness, on the other hand, ensures that the maximum amount of the rider's input force on the pedals is transformed into momentum that carries the bicycle forward.
Frame Geometry
The configuration of the frame is defined by geometrical and topological data. Geometrical data consists of the overall shape-defining parameters, such as cross-sections, coordinates of vertex points, and control points of curvatures. Meanwhile, topological data refers to connectivity relationships between geometric components, such as the number of connectivity nodes and
gaps within a solid structure. In the problem of optimizing the design of a bicycle frame, the objective is to maximize the structural stiffness (minimize compliance) subject to a volumetric constraint. A 20.5" Rocky Mountain Blizzard bicycle was used as the basis for the geometric configuration, as its principal geometric features characterize those commonly found in hardtail mountain bicycles. The geometric data used for the model are the locations, relative distances, and angles of principal nodes such as the front and rear wheel axes, fork, bottom bracket, seat post, stem, and handlebar (8). A simplified model of the frame was made in HyperMesh, an FEA preprocessor by Altair that is used to prepare models and run several types of analysis. The basic geometric configuration of the frame appears in Figure 2, which was subsequently referenced in the construction of a cylindrical shell structure, as shown in Figure 3.
Figure 5 Force distribution during pedaling. Units are in Newtons (10).
The rear dropouts and the bottom of the head tube are constrained from translating in the x, y, and z axes. Vertical point loads are applied on the saddle to represent the rider's weight. Handlebar loads are simplified as a single vertical load and moment on the head tube. In light of the asymmetric loading conditions mentioned above, a vertical and a horizontal force are applied on the right-side bottom bracket and rear dropout, respectively. The constraints and loadings appear in Figure 6, where constraints are represented by triangles and loads by arrows.
Figure 2 Basic geometry of bicycle frame on HyperMesh.
Figure 6 Simplified loading conditions and constraints. Constraints are represented by triangles and loads by arrows.
Figure 3 Basic bicycle geometry composed of cylindrical shells.
A meshing algorithm was used to subdivide the surface of the structure into 4232 shell elements.
Simulations
Three different prototypes were modeled using HyperLaminate, the laminate composite module of the software. Each prototype corresponds to a different laminate configuration on the same frame geometry, subject to the same set of loading cases. The material used for modeling and constructing each frame is a unidirectional high-strength carbon/epoxy composite. It is modeled in the finite element software as a single layer with orthotropic properties in the principal directions. The 0° direction was taken to be the direction vector along the length of each tube (11). The characteristic equation for an orthotropic material such as the one used in the analysis is

σ = C ε,

where σ, ε, and C are the stress vector, strain vector, and stiffness matrix, respectively. HyperMesh uses this stiffness equation to solve for the deformation and stress in each element. The inverse of the stiffness matrix is the compliance matrix, represented by S, so that ε = S σ. In its expanded form the equation is the following (12):

ε1 = σ1/E1 − (ν21/E2)σ2 − (ν31/E3)σ3
ε2 = −(ν12/E1)σ1 + σ2/E2 − (ν32/E3)σ3
ε3 = −(ν13/E1)σ1 − (ν23/E2)σ2 + σ3/E3
γ23 = τ23/G23,  γ13 = τ13/G13,  γ12 = τ12/G12
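As a sketch of this stress-strain relation, the compliance matrix S can be written directly from engineering constants and inverted to obtain the stiffness matrix C, mirroring what the solver does internally. The constants below are representative carbon/epoxy values, not the exact data of Table 1.

```python
# Sketch of the orthotropic relation sigma = C * epsilon: build the
# compliance matrix S from engineering constants, invert it to get C,
# and apply C to a pure axial strain state.
import numpy as np

E1, E2, E3 = 135e9, 10e9, 10e9           # Young's moduli (Pa), assumed
G23, G13, G12 = 3.8e9, 5e9, 5e9          # shear moduli (Pa), assumed
v12, v13, v23 = 0.30, 0.30, 0.45         # Poisson's ratios, assumed
v21, v31, v32 = v12*E2/E1, v13*E3/E1, v23*E3/E2

S = np.array([
    [ 1/E1,   -v21/E2, -v31/E3, 0,     0,     0    ],
    [-v12/E1,  1/E2,   -v32/E3, 0,     0,     0    ],
    [-v13/E1, -v23/E2,  1/E3,   0,     0,     0    ],
    [ 0,       0,       0,      1/G23, 0,     0    ],
    [ 0,       0,       0,      0,     1/G13, 0    ],
    [ 0,       0,       0,      0,     0,     1/G12],
])
C = np.linalg.inv(S)                     # stiffness matrix used by the solver

eps = np.array([1e-3, 0, 0, 0, 0, 0])    # 0.1% axial strain, nothing else
sigma = C @ eps                          # resulting stress state (Pa)
print(sigma[:3])                         # axial stress plus Poisson-coupled terms
```

Note that even a pure axial strain produces transverse stress components through the Poisson coupling terms, which is exactly the behavior the off-diagonal entries of C encode.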
Figure 4 Bicycle frame Finite Element Model.
Boundary Conditions
Forces and constraints were applied on the structure to represent those that would normally be present while riding a bicycle. An often overlooked point in the representation of realistic loading conditions, noted by L. Maestrelli, relates to the standard placement of the chain on the right side of the bicycle (9). This positioning causes asymmetric loadings on the right and left sides of the rear wheel axis, as can be seen in Figure 5.
The program sets up the equation and then solves it using the following material properties to simulate carbon fiber:
Table 1 Material properties of simulated carbon fiber (12).
Figure 8 Displacement distribution of prototype 1.

Different laminate sequences were used for each of the three prototypes. The first two were shown to be good stacking sequences for bicycles by Thomas Jin-Chee Liu, and the third was obtained from the website of IBIS bicycles, a prominent bicycle manufacturer (13, 14). Plies were modeled as having equal and uniform thicknesses; optimization of the thickness distribution will be studied in the future. As mentioned before, the numbers 0, 45, -45, and 90 represent the angle relative to the principal longitudinal direction of the cylinder "X".

1. (0/90/90/0)s
2. (0/90/45/-45)s
3. (90,45,-45,0)s

Figure 9 Displacement distribution of prototype 2.

Figure 10 Displacement distribution of prototype 3.

Figure 7 Stacking sequence (12).

The "s" on each of the stacking sequences means that they are symmetric about the mid-plane. As a consequence, the effects of axial loads and moments are uncoupled, which avoids unwanted interactions between the axial and bending deformation components. This is reflected in the stiffness matrix, where Bij = 0.

RESULTS
The frame was modeled using a mesh of 4250 nodes and 4232 elements. The software solved for the displacement and stress fields within the body according to laminate composite theory. It solved for the displacement and stress distributions based on the stiffness matrix, which it created according to the configuration of the laminate and its material properties. The analysis results in contour plots, which map the average magnitude of selected variables throughout the structure. The colors represent the lowest magnitude in blue and the highest in red. A scale with corresponding numerical values appears at the left of each figure. Although the numerical values do not accurately represent a real loading scenario due to the simplicity of the model, the resulting contour plots provide insight into the deformation and stress distribution of the body under the prescribed loading conditions, and illustrate the differences in the behavior of each model. The deformation of each prototype appears in Figures 8, 9 and 10; it is exaggerated for illustrative purposes.
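The uncoupling property claimed for symmetric stacking sequences (coupling matrix entries Bij = 0) can be verified numerically with classical lamination theory. The ply properties below are assumed carbon/epoxy values, and the layup is the paper's first sequence, (0/90/90/0)s.

```python
# Sketch of why a symmetric stacking sequence uncouples stretching and
# bending: the coupling matrix B of classical lamination theory vanishes
# when plies mirror about the mid-plane.
import numpy as np

E1, E2, G12, v12 = 135e9, 10e9, 5e9, 0.30   # assumed ply properties (Pa)
v21 = v12 * E2 / E1
d = 1 - v12 * v21
Q = np.array([[E1/d, v12*E2/d, 0],
              [v12*E2/d, E2/d, 0],
              [0, 0, G12]])

def Qbar(theta_deg):
    """Ply stiffness rotated into the laminate axes (Jones' formula)."""
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    T = np.array([[c*c, s*s, 2*s*c],
                  [s*s, c*c, -2*s*c],
                  [-s*c, s*c, c*c - s*s]])
    R = np.diag([1.0, 1.0, 2.0])
    return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

layup = [0, 90, 90, 0, 0, 90, 90, 0]        # (0/90/90/0)s
t = 0.125e-3                                # ply thickness (m), assumed uniform
z = np.linspace(-len(layup)*t/2, len(layup)*t/2, len(layup) + 1)

# Extensional matrix A and coupling matrix B of the laminate.
A = sum(Qbar(a) * (z[k+1] - z[k]) for k, a in enumerate(layup))
B = sum(Qbar(a) * (z[k+1]**2 - z[k]**2) / 2 for k, a in enumerate(layup))
print(np.abs(B).max())                      # ~0: no extension-bending coupling
```

Each ply's contribution to B is cancelled by its mirror ply on the other side of the mid-plane, so B vanishes to floating-point precision while A stays finite.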
Stress contour plots appear in Figures 11, 12 and 13. They give insight into high-risk regions of the frame and contrast the maximum and average stresses in each model. A region of high stress concentration is relatively more prone to failure, and alleviating it should be addressed in further design iterations. Under the same geometry and loading conditions, the best prototype is the one that presents the lowest stress distribution.

Figure 11 Von Mises Stress distribution of prototype 1.

Figure 12 Von Mises Stress distribution of prototype 2.

Figure 13 Von Mises Stress distribution of prototype 3.

DISCUSSION
The finite element software solved for the displacement and stress fields within the bodies of three different prototypes under the same loading conditions. Although not a numerically exact method, it provides insight into the overall behavior of the body under stress. From the finite element results of each configuration, the stress distribution and deflection can be seen and the maximum values can be determined. The analysis shows that the highest stress concentration regions on the frames are the right-side rear dropouts, which experience maximum stresses of 126.9, 171.1, and 131.9 Pa, respectively. The maximum displacements were 0.9942, 0.9942, and 0.7613 millimeters, respectively. In all cases, high stress concentrations appear at the connections between tubes and at the chain stay. As an exercise in developing a very simple model for the construction of a good frame, the project was successful. However, a more elaborate model would be required to conduct a more realistic analysis. The oversimplified simulation serves as a comparison between the layups and provides a rudimentary idea of the behavior of each design under semi-realistic loading conditions. It thereby highlights high-stress regions and areas of concern that can be improved upon. While the prototype with the (0/90/45/-45)s layup experienced the lowest maximum stress, the (90,45,-45,0)s layup provided a lower average stress distribution and greater compliance. The next step toward improving the design might require a combination of the three layups along different regions of the frame, as well as the consideration of additional symmetric layup configurations.
CONCLUSION
Laminates of unidirectional carbon fiber with layups of (0/90/90/0)s, (0/90/45/-45)s, and (90,45,-45,0)s were tested with Finite Element Analysis software. Comparing the stress distributions of the three models, the (90,45,-45,0)s layup would be the best suited for the applied loading cases. More accurate results can be obtained by segmenting the material distribution along the frame to allow for different layup configurations, which would result in an improved stress distribution. Furthermore, making the thickness of each ply a design variable can help reinforce high-risk regions while saving material, and therefore weight, in the lower-stress regions. A composite optimization approach consisting of back-and-forth iterations between topological and composite layup optimization procedures can also be conducted in order to find an optimal layup (14). A more detailed analysis would separate the frame into various regions, each with a different layup. Additionally, conducting the analysis under multiple loading conditions will give more realistic insight into the behavior of the frame (15). The approach proved to be a successful way of studying the behavior of a bicycle frame design. The model will be elaborated upon with the goal of studying the factors that influence bicycle frame design using Finite Element Analysis tools.

References
1. Xiao, D., Liu, X., Du, W., Wang, J., & He, T. Application of topology optimization to design an electric bicycle main frame. Structural and Multidisciplinary Optimization 46, 913-929 (2012).
2. Paterek, Tim. The Paterek Manual for Bicycle Framebuilders. 1st ed. Kermesse Distributors. Print.
3. Logan, Daryl L. A First Course in the Finite Element Method. 3rd ed. Pacific Grove, CA: Brooks/Cole, 2002. Print.
4. Fuerle, F., & Sienz, J. Decomposed surrogate based optimization of carbon-fiber bicycle frames using Optimum Latin Hypercubes for constrained design spaces. Computers & Structures 119, 48-59 (2013).
5. Gay, Daniel, and S. V. Hoa. Composite Materials: Design and Applications. 2nd ed. Boca Raton, FL: CRC, 2007. Print.
6. Kere, P., Lyly, M., & Koski, J. Using multicriterion optimization for strength design of composite laminates. Composite Structures 62, 329-333 (2003).
7. "Composite Optimization Tutorial – Bike Frame." Training Altair University. Web. 2 Feb. 2015. <http://training.altairuniversity.com/optimization/composit>.
8. "Technology." Rocky Mountain Bicycles. Web. 3 Feb. 2015. <http://www.bikes.com/en/design/technology>.
9. Maestrelli, L., & Falsini, A. (n.d.). Bicycle frame optimization by means of an advanced gradient method algorithm.
10. Bažant, Z. P., and Luigi Cedolin. Stability of Structures: Elastic, Inelastic, Fracture and Damage Theories. World Scientific ed. Hackensack, NJ: World Scientific, 2010. Print.
11. Lessard, Larry B., James A. Nemes, and Patrick L. Lizotte. "Utilization of FEA in the Design of Composite Bicycle Frames." Composites, 72-74. Print.
12. Performance Composites. Mechanical Properties of Carbon Fiber Composite. 2009.
13. Liu, Thomas Jin-Chee, and Huang-Chieh Wu. "Fiber Direction and Stacking Sequence Design for Bicycle Frame Made of Carbon/Epoxy Composite Laminate." Materials & Design: 1971-1980. Print.
14. "All About Carbon." RSS News. Web. 2 Feb. 2015. <http://www.ibiscycles.com/support/technical_articles/all_about_carbon/>.
15. Covill, D. Parametric finite element analysis of bicycle frame geometries. Science Direct 72, 441-446 (2014).
The Search for Massive Stars in Nearby Galaxy M83

Drew Ciampa1, Jin Koda1

1Department of Physics and Astronomy, State University of New York, Stony Brook, NY.
ABSTRACT
Massive stars play a significant role in galactic formation and evolution, interstellar medium enrichment, and even the re-ionization of the universe. To understand these stars and their effects, it is important to study their formation. Previously, it was thought that no star formation occurred beyond the optical extent of a galaxy, outside the spiral arms, due to the low density and metallicity of those environments, factors believed to be necessary for star formation. Recently, thanks to the Galaxy Evolution Explorer (GALEX) orbiting space telescope (1), a new mode of star formation has been discovered in these unexpected places, in what is called the extended ultraviolet disk (2). In these regions, the stars' radiation ionizes the surrounding hydrogen gas, causing an HII region to form (3). The stars responsible for these regions are extraordinarily massive and relatively young (4). HII regions are therefore tracers of both massive stars and stellar formation (5). An HII region is detected by the Hubble Space Telescope (HST) by looking for H-alpha emission, which falls at a wavelength covered by HST. Even with their widespread relevance in astronomy, the nature of these massive stars has remained unknown. Using both GALEX and HST archival data, extremely UV-blue objects in the galaxy M83, which resemble massive stars, were analyzed thoroughly. Combined multi-wavelength images from HST show unprecedented detail of newly formed stellar populations within a single GALEX resolution element. Comparing the multi-wavelength data with numerical models of stellar atmospheres helps identify the nature of the individual stars that HST was able to resolve. This research project, alongside other recent discoveries, is constantly altering the understanding of star formation and may force the re-development of current star formation models.

INTRODUCTION
Today, astronomers strive to understand massive star formation on a fundamental level.
Up until now, research has hit just the tip of the iceberg when it comes to understanding how a massive star forms and the environment required for such a process. For much of recent history, it has been believed that these stars form in regions of particularly high density, where large amounts of material can accumulate to form masses. These masses, called protostellar clouds, will eventually collapse and form a massive star. The timeline between the protostellar cloud stage and star formation is highly disputed. Two of the disputed theories of how such a cloud progresses into a star are the monolithic collapse model and the competitive accretion model. The monolithic collapse model is a scaled-up version of low-mass stellar formation. It involves a protostellar cloud containing tens to hundreds of solar masses worth of gas that collapses into a small region (6). As the gas falls inward, the heat and pressure increase, leading to an eventual balance between gravity and pressure. This is the point at which the cloud becomes a star. The competitive accretion model involves a different path to massive star formation. It involves a molecular cloud with several "seeds" of slightly higher density than the surrounding cloud. This imbalance leads to gas gathering around the seeds, eventually forming more massive objects, which become protostellar masses (7). In the end, multiple stars should form from these masses, and sometimes even an O-Type star. The purpose of the research was to help understand the properties of these types of stars, particularly in the galaxy M83. Since massive star formation is one of the more difficult yet interesting topics in astronomy, it is considered a success to add any new information to the search.
Keeping both methods of star formation in mind, the research aims to not only find hints pointing to the validity of one of the two methods as outlined above, but also to keep an open mind toward any other possible process of star formation. At the culmination of the study, success is achieved by finding some trace of an O-Type star along the outskirts of the galaxy in the extended ultraviolet disk. Observing the detections made by GALEX will also shed light on the environments
of the UV objects and the properties which pertain to the possible stars and their formation. Finding just one potential O-Type star would be exciting and significant progress.
Figure 1 This side-by-side comparison shows the Southern Pinwheel galaxy, or M83, as seen in
ultraviolet light (right) and at both ultraviolet and radio wavelengths (left). While the radio data highlight the galaxy’s long, octopus-like arms stretching far beyond its main spiral disk (red), the ultraviolet data reveal clusters of baby stars (blue) within the extended arms. The ultraviolet image was taken by NASA’s Galaxy Evolution Explorer between March 15 and May 20, 2007. That picture was the first to reveal far-flung baby stars forming up to 63,000 light-years from the edge of the main spiral disk. This came as a surprise to astronomers because a galaxy’s outer territory typically lacks high densities of star-forming materials (8).
The particular star of interest is the O-Type star, which is among the most massive, hottest, and brightest stars observed in the universe. O-Type stars make up only 0.00003% of the stars in the main sequence, yet have a crucial effect on the universe's composition (9). The extreme mass of these stars leads to a spectacular death involving a supernova, which in turn enriches the universe with metals (10). O-Type stars are part of the reason why elements heavier than iron exist on Earth and are present in life.
Their temperatures, which can range from 35,000 K to 50,000 K, produce spectra that peak at ultraviolet wavelengths. The location of this peak is why these stars are considered blue compared to the Sun, whose spectrum peaks at a wavelength corresponding to a cooler temperature. The significance of short wavelengths provides incentive for using a specific instrument like GALEX, which can detect objects in the ultraviolet range.

MATERIALS
GALEX is one of the most well-known telescopes that can observe objects at ultraviolet wavelengths. In 2007, GALEX discovered stars located past the optical bounds of galaxy M83. This discovery has led astronomers to believe star formation can exist at such distances from the galaxy. The benefit of using this telescope, besides the wavelengths it is capable of observing, is its large field of view. This makes it a useful surveying telescope for detecting UV-blue objects in this research.
Figure 2 This figure shows the large field of view that GALEX possesses. The 1.2 degree field of view gives observers the ability to image whole galaxies in one shot (11).
Once the data are used to find the locations of objects that are very blue in the ultraviolet, the objects need to be resolved to see more precisely what they are. Since GALEX has a low resolution and cannot resolve point sources like a star, an instrument with high resolution is necessary. The Hubble Space Telescope gives astronomers the ability to resolve individual point sources. It also supplies data ranging from visible to infrared wavelengths. The advantages of having data over a wide range of wavelengths become evident when working with spectral energy distributions later in the research. The images taken from Hubble have a resolution about 100 times greater than that of GALEX. This significant improvement in resolution is what makes this research possible. With GALEX and HST, detections of potential O-Type stars can be made, and resolution of those detections allows further analysis of the objects.

METHODS
With all the data accessible for reduction, it is necessary to gather and understand the data useful to the project. Before any analysis begins, it is useful to organize the data to display the features present in the field of view of each GALEX detection. Creating postage stamp layouts of these detections displays their visual properties over a series of different wavelengths ranging from ultraviolet to infrared. It is important to see the changes that occur over a range of wavelengths. After postage stamp creation, each image collected from online archival sources is analyzed through flux extraction using SExtractor.
This program scans an image, locates any objects, and measures the flux of each object (12). The flux readings from each image outline the object's properties and how it emits its energy. Since each image is taken at a unique wavelength, SExtractor gives flux readings for every wavelength according to the image and the filter used while the image was taken. Flux measurements at several wavelengths provide enough information to find out where the object emits most of its energy. These flux readings can be plotted to create a spectral energy distribution. Comparing this distribution with a theoretical stellar atmosphere model is the main method of object classification.

PROCEDURE
To begin the search for massive stars, data were retrieved from the easily accessible archive called GALEXView (13). This archive provides images of galaxy M83 and a detection catalog containing thousands of objects detected in the field of GALEX. The images supplied cover two different ultraviolet ranges: Far-UV (FUV) and Near-UV (NUV). The catalog, which has detections from both the FUV and NUV images, supplies statistics for each detection. These statistics include the coordinates of detections and the flux values for each filter. In total, the catalog contains over 4,500 detections. Reduction of these data is necessary to create catalogs containing only extremely UV-blue objects. The reduction cuts the number of potential O-Type star candidates significantly. Essential steps include converting units and calculating magnitudes, colors, signal-to-noise ratios, and detection significance. Using the results of these steps, boundary conditions are applied to form catalogs of potential O-Type stars. The original GALEX catalog had flux measured in units of janskys. This unit describes an amount of energy per time per area per frequency:

1 Jy = 10^-26 W m^-2 Hz^-1
Using the standard magnitude equation, the conversion to AB magnitude is as follows, with f_v being the flux in janskys:

m_AB = -2.5 log10(f_v) + 8.90
With the magnitudes, color calculation is possible. Subtracting an object's magnitudes measured at two different frequencies gives the object's color. With GALEX data, subtracting the NUV magnitude from the FUV magnitude provides a color. Color can also be thought of as the logarithm of the ratio of the two filters' fluxes:

FUV - NUV = m_FUV - m_NUV = -2.5 log10(f_FUV / f_NUV)
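These conversions can be sketched directly. The flux values below are hypothetical, chosen only to illustrate the jansky-to-AB-magnitude step, the FUV - NUV color, and the signal-to-noise ratio used in the catalog cuts:

```python
# Sketch of the catalog conversions: flux in janskys to AB magnitude,
# then an FUV - NUV color and a signal-to-noise ratio.
import math

def ab_mag(flux_jy):
    """AB magnitude from flux density in janskys."""
    return -2.5 * math.log10(flux_jy) + 8.90

f_fuv, f_nuv = 3.5e-5, 2.0e-5      # hypothetical fluxes (Jy)
f_fuv_err = 1.5e-6                 # hypothetical FUV flux error (Jy)

color = ab_mag(f_fuv) - ab_mag(f_nuv)   # FUV - NUV
snr = f_fuv / f_fuv_err                 # signal-to-noise ratio

# The color is also -2.5 log10(f_fuv / f_nuv): the two forms agree.
print(round(color, 3), round(-2.5 * math.log10(f_fuv / f_nuv), 3), snr)
```

With these made-up numbers the object is brighter in FUV than NUV, so its color comes out negative (UV-blue) and it would survive both of the selection cuts described below.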
A point to note is that the AB magnitude equation has a negative factor, so a smaller magnitude describes a brighter object. Moreover, a smaller color value means the object is bluer in the ultraviolet. The final step of catalog creation is the calculation of the signal-to-noise ratio for all of the detections. In astronomy, the signal of an object is its flux, while the noise can be taken as the error of that flux. An object with a large signal-to-noise ratio will appear clear and distinct, while one with a low signal-to-noise ratio will be hard to see. The signal-to-noise ratio can be calculated by dividing the peak flux by the error of that flux. Calculating this gives a numerical value for the significance of the detection and the clearness of the object in GALEX images. With the preceding steps taken, project limits are applied in order to create a sample aimed at the objective of finding massive stars. The first condition that the data must meet is a signal-to-noise ratio of 10 or greater, set to ensure the accuracy of the detection and the precision of future measurements.
A condition is also placed on the color of the detections. A fundamental property of stars is that more massive objects peak at bluer (shorter) wavelengths, while low-mass objects peak at redder (longer) wavelengths. This property motivates a color limit that only allows objects with a UV color less than -0.3. The conditions applied create a new catalog that completes the sample set from GALEX. Starting with over 4,500 detections from GALEX, the limits placed on the data successfully reduced the sample to 120 detections. Using the newly created catalog of objects, the search begins for high-resolution images that will provide insight into what type of objects are located outside the boundaries of this galaxy. The Hubble Legacy Archive (HLA) allows access to HST data by uploading coordinates of interest. HLA performs a cross-correlation between the coordinates in the catalog and available archival HST data for those locations (14). The images downloaded from HLA contain several different wavelengths corresponding to the filter used during imaging. Similar to GALEX, the data are downloaded from HLA and displayed in postage stamp format. This time the HST data are compared to GALEX. This process includes creating contours for the GALEX images that are then overplotted onto the HST images. A contour is a useful display feature that can outline an area of specific flux. Drawing a line at a value of 5 sigma (5 standard deviations) shows where significant flux emission is present. Calculating the standard deviation of the sky can be done using two methods. These methods guarantee that the image has a uniform sky and therefore a single sigma (σ) value that can be used. Comparing the image's complete background noise with individual subarrays of the image's background noise confirms the uniformity of the field-of-view noise.
Under the central limit theorem, a sufficiently large number of samples will produce a normal distribution of values. Applying this to the background noise, the sample standard deviation equation is used to calculate the standard deviation of the sky. Once the standard deviation of the sky is calculated, the image viewing program ds9 displays contours at the desired sigma value. Taking snapshots of each detection with the contours plotted produces two sets of images: objects imaged in Far-UV and Near-UV, with a Far-UV 5 sigma contour drawn on each. This process is carried over to the Hubble images, where the same Far-UV contour is plotted. This contour exhibits the resolution of GALEX, and displaying it on HST images shows how the objects look inside a single GALEX resolution element.
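A minimal sketch of the sky-noise estimate just described: the standard deviation of source-free background pixels fixes sigma, and the contour level is set 5 sigma above the mean sky. The "image" here is synthetic Gaussian noise with one artificial source, purely for illustration.

```python
# Sketch of setting a 5-sigma contour level from the sky noise.
# Sigma is estimated from source-free subarrays (here: two image corners),
# mirroring the uniform-sky check described in the text.
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(loc=0.01, scale=0.002, size=(200, 200))  # fake sky background
image[95:105, 95:105] += 0.05                               # one bright fake source

# Estimate the sky statistics from source-free corners of the frame.
corners = np.concatenate([image[:50, :50].ravel(), image[-50:, -50:].ravel()])
sky_mean = corners.mean()
sky_sigma = corners.std(ddof=1)      # sample standard deviation of the sky

level = sky_mean + 5 * sky_sigma     # 5-sigma contour level
print(level, (image > level).sum())  # pixels that fall inside the contour
```

Only the injected source rises above the 5 sigma level; essentially no pure-noise pixel does, which is why the contour marks significant flux emission.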
Everything up to this point has laid the groundwork for the scientific analysis carried out in the project. With all the data organized, cataloged, and on display, flux extraction and the construction of spectral energy distributions can begin. Using methods similar to those GALEX employs during flux extraction, SExtractor serves as the primary extraction tool for the Hubble images. Iterating SExtractor’s measurement tool over each image produces flux values for each detection at the wavelengths available for that object.

Figure 3 Images of Object 34 with contours (from right to left: shortest to longest wavelength). The red cross marks the location of the GALEX detection. Notice the increased resolution when moving from GALEX (FUV & NUV) to HST (F435W – F814W).

Combining those filters in a plot of flux versus wavelength yields a spectral energy distribution, from which a stellar classification can be made. The key to the classification is finding a similar slope between the flux readings and model stellar atmospheres. Figure 4 shows sample energy distributions belonging to different types of stars. Each distribution follows Planck’s law of black-body radiation, which states:

B_λ(λ, T) = (2hc² / λ⁵) · 1 / (e^(hc / λk_BT) − 1)

where h is Planck’s constant, c is the speed of light, k_B is Boltzmann’s constant, and T is the temperature of the star.

Figure 4 Example flux profiles for different types of stars. Notice the peaks moving gradually to the right (toward longer wavelengths) for less massive stars (15).

The Castelli and Kurucz stellar atmosphere models of massive stars (16) reproduce flux profiles similar to those shown in Figure 4. The data points collected from GALEX and HST are overplotted on the models to find any similarities between the slopes; data that fit a model well strengthen the case that the object is a massive star. Using models of three star types, [O3V, O9V, B0V], the data can be matched against the model slopes. Figure 5 shows some of the results and the comparisons made with the Castelli and Kurucz models. In these plots, the normalized flux density is used to study the slope of the data points along the model; since this part of the experiment focuses on matching slopes rather than absolute flux values, the vertical axis has arbitrary units.

Figure 5 Object 101 data graphed with Castelli & Kurucz models of [O3V, O9V, B0V]-type stars. The data appear to fit an O3V star best in these plots.
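Planck’s law referenced above can be evaluated numerically to reproduce the behavior in Figure 4: hotter, more massive stars peak at shorter wavelengths. The temperatures below are illustrative round numbers spanning an O-Type star down to a Sun-like star, not values taken from the paper.

```python
import numpy as np

# Physical constants (SI units)
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
K_B = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength, temperature):
    """Spectral radiance B_lambda(wavelength, T) from Planck's law."""
    a = 2.0 * H * C**2 / wavelength**5
    b = H * C / (wavelength * K_B * temperature)
    return a / np.expm1(b)  # expm1 = e^b - 1, stable for small b

# Scan the UV-to-near-IR range and locate the peak for each temperature.
wavelengths = np.linspace(50e-9, 3000e-9, 20000)  # 50 nm to 3000 nm
for temp in (40000, 10000, 5000):  # hot O-Type star down to a Sun-like star
    peak = wavelengths[np.argmax(planck(wavelengths, temp))]
    print(f"T = {temp:5d} K -> peak near {peak * 1e9:.0f} nm")
```

The peak wavelengths follow Wien’s displacement law (λ_peak = 2.898 × 10⁻³ m·K / T), which is why a UV-peaked spectral energy distribution points toward a massive, hot star.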
Figure 6 Colored Hubble images produced in ds9. Object 98 (left) & Object 101 (right).

Along with producing a spectral energy distribution, having multiple filters allows for the production of color images. These images display the objects in terms of color rather than brightness alone, making it possible to see whether an object is brighter in the blue filter than in the red. This simple yet useful practice gives context to the objects being observed. In Figure 6, Object 98 (left) is clearly a cluster of blue and red objects, while Object 101 (right) appears to be an isolated blue object.

DISCUSSION

The research project is ongoing, with nearly 120 objects still to be analyzed, each needing its spectral energy distribution constructed and interpreted. Further numerical analysis with the atmosphere models will lead to more definitive answers as to what the objects are. Still, the work completed so far has produced objects that offer hope that an O-Type star is present in the XUV disk. The O-Type candidate that stood out in the project was Object 101, shown in Figure 6 (right). The object appears to be isolated and extremely blue. The isolation makes it particularly interesting, because O-Type stars are not known to be isolated, and the apparent blue color suggests a spectral energy distribution that peaks at short wavelengths. According to the data analysis and spectral energy distribution comparison shown in Figure 5, the object does in fact follow the atmospheric model of an O-Type star; this correlation with an O3V model strengthens its case for being not only a massive star, but an O-Type star.

Not all of the objects analyzed were stellar. One object did not pertain to the study and yet is worth examining. This peculiar object was found by running SExtractor near the edge of GALEX’s field of view and was identified as a cluster of galaxies in the background of the GALEX image. The interesting shape of the object suggests that gravitational lensing is present. Gravitational lensing is a bending of light that occurs when a massive object lies between the source and the viewer: the extreme mass bends space-time itself and therefore bends the light. The radial flux profiles of this object are displayed to help visualize its brightness distribution.

Figure 7 Radial flux profiles of Object 3. The three peaks characterize the brightness as a function of the gravitational bending by the galaxy cluster; the two side peaks are expected to be duplicate images of the object, bent through space-time.

Figure 8 GALEX images of Object 3 (left: FUV; right: NUV). The coloring is artificial, added to aid the visuals.

Overall, the research project has taken a significant step in the process of finding a massive star. The coming months should provide pivotal information on whether an O-Type star exists in the far outskirts of M83. The discovery of Object 101 is significant, for if one of the prospects is confirmed to be an O-Type star, it would be one of the very few ever discovered in such an environment. While there is hope that an O-Type star is present in the data, several checks need to be completed and confirmed before any definitive statements are made. The few objects found so far support the likelihood of massive star formation in the XUV disk, a possibility that may require astronomers to rethink massive star formation as a process, including where it can occur.
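The slope-matching criterion used to classify candidates like Object 101 can be sketched as a small least-squares comparison in log space. This is a sketch only: the band labels are approximate GALEX/HST filters, all flux values are made up for illustration, and the first-band normalization mirrors the arbitrary-unit vertical axis described for Figure 5.

```python
import numpy as np

def slope_mismatch(data_flux, model_flux):
    """Mean squared difference of log-fluxes after normalizing both
    curves to their first band, so only the slope of the spectral
    energy distribution matters (arbitrary-unit vertical axis)."""
    data = np.log10(np.asarray(data_flux, dtype=float) / data_flux[0])
    model = np.log10(np.asarray(model_flux, dtype=float) / model_flux[0])
    return float(np.mean((data - model) ** 2))

# Hypothetical fluxes in four bands (FUV, NUV, F435W, F814W), arbitrary units.
observed    = [10.0, 6.0, 2.0, 0.5]   # steeply blue object
steep_model = [20.0, 12.0, 4.0, 1.0]  # same slope as the data, just rescaled
flat_model  = [10.0, 9.0, 8.0, 7.0]   # much flatter slope

print(slope_mismatch(observed, steep_model))  # ~0: slopes match
print(slope_mismatch(observed, flat_model))   # large value: poor fit
```

Repeating the comparison against each atmosphere model (O3V, O9V, B0V) and keeping the smallest mismatch is one simple way to pick the best-fitting stellar type.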
References
1. P. Morrissey, T. Conrow, T. A. Barlow, T. Small, M. Seibert, T. K. Wyder and K. Y. Sukyoung, “The calibration and data products of GALEX,” The Astrophysical Journal Supplement Series, 173(2), 682, 2007.
2. J. Koda, M. Yagi, S. Boissier, A. G. de Paz, M. Imanishi, J. D. Meyer and D. A. Thilker, “The Universal Initial Mass Function in the Extended Ultraviolet Disk of M83,” The Astrophysical Journal, 749(1), 20, 2012.
3. D. A. Thilker, L. Bianchi, G. Meurer, A. G. De Paz, S. Boissier, B. F. Madore and K. Y. Sukyoung, “A search for extended ultraviolet disk (XUV-disk) galaxies in the local universe,” The Astrophysical Journal Supplement Series, 173(2), 538, 2007.
4. D. A. Thilker, L. Bianchi, G. Meurer, A. G. De Paz, B. F. Madore, D. C. Martin and B. Y. Welsh, “Recent star formation in the extreme outer disk of M83,” The Astrophysical Journal Letters, 619(1), L79, 2005.
5. T. Peters, R. Banerjee, R. S. Klessen, M. M. Mac Low, R. Galván-Madrid and E. R. Keto, “HII regions: witnesses to massive star formation,” The Astrophysical Journal, 711(2), 1017, 2010.
6. P. Madau, L. Pozzetti and M. Dickinson, “The star formation history of field galaxies,” The Astrophysical Journal, 498(1), 106, 1998.
7. I. A. Bonnell and M. R. Bate, “Star formation through gravitational collapse and competitive accretion,” Monthly Notices of the Royal Astronomical Society, 370(1), 488-494, 2006.
8. GALEX, Image of NGC 5236, 2008.
9. S. R. Heap, T. Lanz and I. Hubeny, “Fundamental properties of O-Type stars,” The Astrophysical Journal, 638(1), 409, 2006.
10. S. E. Woosley, A. Heger and T. A. Weaver, “The evolution and explosion of massive stars,” Reviews of Modern Physics, 74(4), 1015, 2002.
11. GALEX, GALEX Field of View Image, 2004.
12. E. Bertin and S. Arnouts, “SExtractor: Software for source extraction,” Astronomy and Astrophysics Supplement Series, 117(2), 393-404, 1996.
13. MAST, “MAST: GalexView,” [Online]. Available: http://galex.stsci.edu/GalexView/.
14. STScI, “Hubble Legacy Archive,” [Online]. Available: http://hla.stsci.edu/.
15. B. MacEvoy, Spectral Classification of Stars, 2015.
16. F. Castelli and R. L. Kurucz, “New grids of ATLAS9 model atmospheres,” arXiv preprint astro-ph/0405087, 2004.
Enter a career in marine and atmospheric sciences! With our fleet of three research vessels and our new, state-of-the-art 15,000-square-foot Marine Sciences Center, Stony Brook’s School of Marine and Atmospheric Sciences (SoMAS) is one of the strongest undergraduate research programs on the East Coast. Research at SoMAS explores solutions to a variety of issues facing the world today, ranging from local problems affecting the area around Long Island to processes impacting the entire globe. Join us in our Semester by the Sea program for an experiential learning approach taught by internationally known faculty. Students explore the diverse marine habitats of eastern Long Island, including estuaries, shallow bays, salt marshes, rocky intertidal zones, dunes, beaches, tidal flats, and the Atlantic Ocean, and examine current environmental issues related to these habitats. For more information about our facilities, courses, or services, please visit http://www.somas.stonybrook.edu/
Help Spread SCIENCE Research • Write • Create • Publish
Become a part of our next publication. Editors, Photographers, and Writers are all Welcome. Stony Brook University Young Investigators Review Contact us at: email@example.com
Acknowledgements The Young Investigators Review would like to give a special thank you to all our benefactors. Without your support, this issue would not have been possible.
Department of Biochemistry and Cell Biology Department of Chemistry Department of Undergraduate Biology SoMAS