Stony Brook Young Investigators Review
Dr. Craig Evinger Research Profile
Language Use On Twitter
Jellyfish Size and Distribution in the North Atlantic Ocean
Volume 6 Spring 2016
Stony Brook Young Investigators Review Staff 2015 - 2016 Editor-in-chief Ashwin Kelkar ’16
Layout Editor-in-chief Samuel Lederer ’17
Managing Editors Anirudh Chandrashekar ’16 Amanda Ng ’18
Layout Editors Dana Espine ’18 Sarah Lynch ’17 Arun Nallainathan ’18 Abrar Taseen ’19
Associate Editors Aaron Gochman ’18 Ayman Haider ’18 Nicole Olakkengil ’19 Sahil Rawal ’19
Webmasters Scott Carson ’18 Lisa Jakubczyk ’16
Copy Editors Rachel Kogan ’19 Jenna Mallon ’18 Julia Newman ’19 Lillian Pao ’18
Executive Committee Eleanor Castracane ’16 Alec Guberman ’17 Sarima Subzwari ’18 Stanley Toyberman ’18 Photographer Sarima Subzwari ’18 Taylor Ha ’18 Advisor Dr. Peter Gergen
Justina Almodovar ’18 Shipra Arjun ’16 Meghan Bialt-Decelie ’19 Shannon Bohman ’19 Cerise Carey ’16 Michelle Goodman ’18 Taylor Ha ’18 Jessica Jolley ’16 Samara Khan ’19 Richard Liang ’18 Rohan Maini ’16 Sarah McTague ’18 Hannah Mieczkowski ’17 Julia Newman ’19 Lee Ann Santore ’19 One Seo ’17 Elizabeth Shaji ’18 Karis Tutuska ’18
Letter From the Staff Dear Reader, In one of his many famous talks, Nobel Laureate Richard Feynman was asked to explain what fire was. His response has become a quintessential example of the fundamental nature of science. By applying heat to a source of carbon, molecules can collide at certain kinetic rates in order to drive a chemical reaction called combustion. The fire itself can only be explained by subatomic physics and chemistry. Wood, man’s ready-made carbon source, can extend this analogy further. Wood stores carbon as a result of photosynthesis, a biological process in which plants consume carbon dioxide and release oxygen. With a heat source, the carbon in the wood can react readily with the oxygen it released in the atmosphere to drive the combustion reaction and create fire. In a way, fire epitomizes the harmony of science, the heart of nature. So it is only natural that Stony Brook Young Investigators Review (SBYIR) adopts a fiery shield as its new logo; fire to embody the sciences and ignite in the undergraduate population an investigative spirit, and a shield to show SBYIR as the bearer of the torch that we hope will shine the light into the labs on our campus. We hope to encompass all sciences as we highlight the exciting and comprehensive undergraduate research that occurs on the Stony Brook campus. Since its conception, SBYIR strove to provide undergraduates with an outlet to express their research interests and share them with their peers. It is our mission and hope that by presenting you with the pressing research of today, we can instill in you the drive to pursue the research of tomorrow. In this sixth issue of SBYIR, you will find discussions in fields ranging from atomic imaging to linguistics, from the rigors of space travel to the rigors of computational image processing. Inside you will also find an interview with Dr. Craig Evinger, a neurobiologist whose research could lead to fascinating discoveries in movement disorders. 
In addition, you will hear Stony Brook’s own undergraduate, Sarah McTague, discuss her time across the Atlantic researching the effects of ocean properties such as acidity and temperature on jellyfish. The creation of this issue would not have been possible without our incredible staff and writing team, who worked diligently throughout the year to showcase their work today. Nor, without the help of our generous donors, whose names we will not forget, would this issue ever have been published at all. We intend to continue our upward trajectory: to showcase student research at all levels, make science accessible to the general public, and demonstrate the elegance and beauty of scientific inquiry. Finally, we would like to thank you, the reader, for whom all of this is. Welcome to SBYIR. We sincerely hope you enjoy it.
Table of Contents Research Profiles Neuroscience and Movement Disorders: An Interview With Dr. Craig Evinger.....7 Cerise Carey ’16
Closing the Communication Gap Between Earth and Outer Space......................10 Taylor Ha ’18
Reviews Talent Vs. Hard Work: Why Practice Alone Does Not Necessarily Make Perfect.........12 Samara Khan ’19
Hearing and Neural Communication.............................................................................14 Elizabeth Shaji ’18
BCI Use with Cerebral Palsy Patients.............................................................................17 Jessica Jolley ’16
The Reality of the Grant Writing Genre.........................................................................19 Rohan Maini ’16
Language Use on Twitter....................................................................................................21 Hannah Mieczkowski ’17
Bio-Artificial Organs: An Overview of Current Advances...........................................24 Justina Almodovar ’18
Suicide Genes: A Viral Approach to Cancer Therapeutics.............................................27 Michelle Goodman ’18
The Future of Single-Particle Cryo-Electron Microscopy.......................................29 One Seo ’17
Inverse Problems in Image Processing..........................................................................31 Shipra Arjun ’16
Primary Research Article Jellyfish Size and Distribution in the North Atlantic Ocean in Relation to pH, Surface Water Temperature, Chlorophyll a, and Zooplankton Density...........................................................................................................33 Sarah McTague ’18 and Kelli Walsh ’16
Research News Determining the Sex of a Fingerprint
The Virtual Path to Assessing Alzheimer’s in Humans By Meghan Bialt-DeCelie ’19
Image Retrieved from https://upload.wikimedia.org/wikipedia/commons/1/15/Fingerprint_detail_on_male_finger.jpg
Residual amino acids can be used to reveal fingerprints.
By Shannon Bohman ’19 An innovative new test may help determine whether a fingerprint comes from a man or a woman. Certain amino acids are twice as prevalent in women as in men, meaning that measuring the amino acid concentration in a fingerprint can reveal the sex of the person it belongs to. For more than a century, fingerprints have been analyzed as if they were photographs, and improvements in fingerprint technology simply involved new machinery or programs that could cross-compare images of prints faster and more precisely. This new chemical test provides insight that photographic technology cannot offer. From
doorknobs to computer screens, the test proved successful in determining the sex of a person based on his or her fingerprint. However, a larger sample of fingerprints is necessary in order to ensure statistical significance. Testing for amino acids could become a hallmark of preliminary crime scene investigation. Scientist Jan Halamek, from the State University of New York at Albany, and his colleagues hope to soon create tests that can determine age and ethnicity from fingerprints as well. References 1. Bhanoo, New technique can classify a fingerprint as male or female. The New York Times, (2015). 2. Huynh et al., Forensic identification of gender from fingerprints. Analytical Chemistry
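Because the test turns on a single quantitative difference (roughly twice the amino acid concentration in female prints), its decision logic amounts to a one-threshold classifier. The sketch below is purely illustrative: the function name, units, and the cutoff value are invented for this example and are not taken from the study, whose actual chemistry and calibration are not described here.

```python
def classify_fingerprint(amino_acid_conc_mM, threshold_mM=1.0):
    """Toy sex classifier for a fingerprint residue sample.

    Female prints carry roughly twice the amino acid concentration of
    male prints, so a single cutoff can in principle separate the two
    groups. The threshold here is hypothetical, not from the study.
    """
    return "female" if amino_acid_conc_mM >= threshold_mM else "male"

# Hypothetical samples on either side of the invented cutoff
print(classify_fingerprint(1.4))  # female
print(classify_fingerprint(0.6))  # male
```

In practice any such cutoff would have to be calibrated on a large sample of prints, which is exactly the statistical limitation the researchers note.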
Scientists have reported developing a human analog of a classic rodent test, one that could aid Alzheimer’s research in humans. The Morris Maze Test assesses the ability of rodents with Alzheimer’s disease to reach a pedestal in a water-filled arena. During the assessment, rodents attempt to reach the pedestal over a number of trials. In the first trial, the pedestal is shown to them just above the waterline; to avoid drowning, the rodent must swim to the platform. In further trials, the experimenter manipulates the rodent’s surroundings by raising the water level above the platform and making the water opaque, and the rodents perform the task again with the unmoved pedestal hidden underwater. Previous tests suggested that rats with symptoms of Alzheimer’s disease did not perform as well at locating the platform. Because placing Alzheimer’s patients in a water tank would raise obvious ethical issues, researchers needed a different type of test for humans. A team led by Dr. Katherine Possin of the University of California, San Francisco created a human test that mimics the properties of the Morris Maze without the danger of participants drowning. Participants begin the test by navigating a virtual
world with landmarks to a marked location. They are then moved to a different part of the virtual world and must return to the location once its marker has been removed. Until now, human memory tests differed from the rodents’ in that they often involved tasks such as retelling stories and recalling lists of words; the virtual version allows results from rodent Morris Maze studies to be compared directly with human performance, which is especially valuable when new treatments move from rodent studies to human trials. The researchers analyzed results from rodent Morris Maze Tests and compared them to their experiment with humans. In the study, both the humans and the rodents improved at finding the marked location over the course of 10 to 12 trials, and the healthy participants of both species located the target more easily than those with Alzheimer’s symptoms. This study showed that the Morris Maze Test is a useful tool for comparing the two species. Eventually, it may help researchers test drugs and other therapies in mouse models before administering them to humans. References 1. K.L. Possin et al., Cross-species translation of the Morris maze for Alzheimer’s disease. The Journal of Clinical Investigation, (2016). 2. Shultz, Virtual landscape makes you feel like a rat in a maze, could aid Alzheimer’s research. Science News, (2016).
Application of Psychology in Emails Could Lead to More Effective Communication By Lee Ann Santore ’19 In this age of technology, email is employed as a fundamental form of communication capable of creating and strengthening both casual and professional relationships. Researchers from the USC Viterbi School of Engineering, having studied 16 billion emails sent by 2 million users,
were able to identify several key patterns. The results revealed that most emails are responded to within an hour, and that after two days have passed, they are unlikely to receive a response. Age was also found to have a direct relationship with response times: the older a person got, the longer it took them to respond. Gender, however, had a negligible impact.
The device used to respond also had a significant impact: laptop users took twice as long to respond as smartphone users. Additionally, the study demonstrated that users tend to mimic each other’s email length: if one user sends a one-paragraph email and the conversation is going well, he or she should expect a response of the same length. This understanding of email response times and response psychology can serve as an effective tool for initiating and maintaining communication between parties. References 1. Morin, Waiting for a reply? Study explains the psychology behind email response time. Forbes, (2015).
Your Robot Coworker Won’t Be Stealing Your Job After All By Cerise Carey ’16 Is artificial intelligence as great a threat to your job as it may seem? Researchers with the McKinsey Global Institute suggest not. Their research indicates that fewer than five percent of jobs could be automated entirely, though about forty-five percent of general work activities could be. Jobs that could benefit from some activities becoming automated include those of physicians, financial managers, and senior executives. However, plenty of jobs remain less susceptible to automation, including landscaping and home health
care. Automation presents an opportunity to enrich the work environment by letting people focus more on creative tasks, something currently lacking in the workplace: one report stated that only four percent of work activities in the U.S. require creativity at a median level of performance. Rather than viewing automation as taking over the workforce, we should see it as redefining current jobs, primarily at the task level. References 1. Lohr, Automation will change jobs more than kill them. The New York Times, (2015).
Image Retrieved from https://www.flickr.com/photos/jurvetson/6858583426
A car assembly line, one example of machine automation replacing jobs once performed by people.
New Study Suggests Loneliness Destroys Physical Health
Image Retrieved from upload.wikimedia.org/wikipedia/commons/9/92/Loneliness_(4101974109).jpg
It is widely accepted that loneliness can damage mental health.
By Karis Tutuska ’18 The conserved transcriptional response to adversity (CTRA) has two major physiological consequences: it decreases the expression of antiviral genes, weakening the immune system, and increases the expression of inflammatory genes, often leading to cellular damage. Researchers have found that this response is unique to a prolonged feeling of social isolation and cannot be explained by factors such as stress or depression. To determine the cellular mechanism behind this relationship, researchers studied the effects of increased CTRA expression in isolated rhesus monkeys. They found that the monkeys had
increased levels of norepinephrine (associated with the “fight or flight” response), which can stimulate stem cells in the bone marrow to produce immature monocytes that exhibit increased CTRA activity in the white blood cell pool. While the exact causes behind this relationship remain unclear, the study suggests that CTRA and loneliness operate in a positive feedback cycle. Individuals suffering from prolonged loneliness could experience a weakened immune system and more frequent illnesses. Who knew that social interaction was so good for physical health? References 1. D. DiSalvo, Loneliness destroys physical health from the inside out. Forbes, (2016).
Study Suggests Children with Religious Upbringing are Less Altruistic By Karis Tutuska ’18 Many assume that religion plays a crucial role in a child’s moral development. However, a recent study at the University of Chicago suggests that children raised with a religious background are less altruistic than those who were raised secularly. The experiment was conducted on children ages five to twelve from six different countries. Initially, the children’s parents completed a questionnaire indicating the family’s religious practices and whether
they perceived their children to have a sense of empathy and justice. To test altruism, each child was given ten stickers and asked whether he or she wanted to share them with another child. To test moral sensitivity, each child was shown videos of children shoving others and asked what their punishment should be. The children from religious backgrounds were less likely to share their stickers and more likely to suggest harsher punishments for the bullies in the videos. Contrary to popular belief, this study suggests that a religious upbringing does not prompt moral development more than a secular upbringing. Rather, a non-religious upbringing may even produce kinder and more socially conscious children.
Image Retrieved from https://upload.wikimedia.org/wikipedia/commons/a/a5/Children_marbles.jpg
Altruism, or the practice of putting someone else before yourself, was found to be less prevalent among children with religious upbringings.
References 1. Decety et al., The negative association between religiousness and children’s altruism across the world. Science Daily, (2015).
Bat Immune Systems Could Strengthen Our Own By Julia Newman ’19 Immunologist Dr. Michelle Baker, working at the Australian Animal Health Laboratory, has recently discovered something about bats that could help protect humans from multiple deadly diseases. Bats are known carriers of diseases such as Ebola and Middle East Respiratory Syndrome. However, unlike humans, bats are not sickened by the pathogens they carry, prompting research into their immunological responses. Studies have shown that the bat immune system works much differently from that of humans: while our immune system activates only in the presence of disease, bats’ immune systems work constantly to prevent infection. In fact, although bats have fewer interferons involved in immune responses than humans do, their innate responses are much stronger. If scientists like Dr. Baker can apply the mechanisms seen in bats to humans, they may be able to curb the widespread deaths caused by these diseases worldwide. References 1. S. Mathewson, Bat ‘super immunity’ could help protect people from diseases like Ebola. Nature World News, (2015).
Image Retrieved from http://www.techtimes.com/articles/135755/20160223/immunity-of-bats-to-lethal-diseases-may-help-protect-people.htm
Bats, including this flying fox, are constantly working against diseases.
Microbiome Technology Developed at Stony Brook By Shannon Bohman ’19 Stony Brook University recently incorporated breakthrough microbiome technology into nutrient-based compositions, which the university covered in two patent applications licensed to Ortek Therapeutics, Inc. Ortek has been seeking partners to develop and commercialize these compositions into over-the-counter products. These nutrient-based products, in particular, will efficiently prevent body odor and staph infections. Stony Brook’s Dr. Israel Kleinberg, an expert in the oral microbiome, led the creation of the
innovative compositions. With over 40 years of experience studying microbiomes and oral bacterial communities, Kleinberg has used oral communities of bacteria to investigate ways of utilizing nutrient-based compositions to inhibit the growth of human malodor-generating microbiota and Staphylococcus aureus. With the findings from this microbiome-manipulating approach, Kleinberg’s research could revolutionize everyday infection and body odor prevention.
Single-Pill HIV Treatment
Image Retrieved from http://tctmed.com/hormone-therapy-for-women/
Single-pill treatments could alleviate the stress of HIV patients’ daily intake of drugs.
By Richard Liang ’18 After extensive use of their multi-drug regimens, many HIV patients begin suffering from kidney impairment and decreased bone density. These pathologies have recently been associated with tenofovir, a chemical in the medication regimens. In response, the FDA approved Genvoya, a pill that minimizes tenofovir’s negative side effects. This novel pill combines dosages of active ingredients from standard medications, such as elvitegravir, cobicistat, emtricitabine, and an alternate form of tenofovir, into a single tablet. Researchers from the University of North Carolina tested the medication in 21 countries on over 3,000 participants ranging in age and in severity of side effects from previous medication regimens (none to severe renal and/or immune impairment). Compared to current HIV medications and treatment regimens, Genvoya effectively reduced immune overactivation, kidney toxicity, and the observed rate of bone density degradation. Although it is not a cure, Genvoya can help HIV patients suffer less from medication-induced side effects.
References 1. Feller, FDA approves single-pill HIV treatment. UPI, (2015).
Find More Online!
Visit our website at sbyireview.com
References 1. Filiano, Microbiome technology developed at Stony Brook may help combat certain infections. Stony Brook Newsroom, (2015).
Neuroscience and Movement Disorders An Interview with Dr. Craig Evinger Cerise Carey ’16
Image Courtesy of Taylor Ha
One of the most famous lines from the musical Hello, Dolly! is recited by the title character: “Money, pardon the expression, is like manure. It’s not worth a thing unless it’s spread around, encouraging young things to grow” (1). Dr. Craig Evinger, a professor in the Department of Neurobiology and Behavior, an associate professor in the Department of Ophthalmology, and an adjunct professor of Neurology at Stony Brook Medicine, Stony Brook University, believes this idea to be true of the field of neuroscience. A dedicated professor and researcher in the field, he incorporates this belief into his teaching and strives to “spread around” his knowledge of neuroscience in as many ways as he can. Neuroscience is unlike all the other biological sciences: Dr. Evinger describes it as a “vertical science” (2). Where other sciences are more horizontal, looking at genes and biochemical reactions, neuroscience addresses both the details of higher-order functions, like cognitive processing, and interactions happening on a much smaller scale, within 200 angstroms of a lipid bilayer. A broad and continuously expanding field of inquiry, neuroscience thrives when it is explored, advanced, and taught. Dr. Evinger has over 80 publications to his name, with research topics ranging from the blink reflex and eye movements to movement disorders. His impressive body of work exemplifies the idea of neuroscience as an area of study integrated into various branches of scientific questioning, one with a great amount of potential for researchers. Overview of Movement Disorders and Neuronal Hypersynchronization Dr. Evinger’s current research focuses on movement disorders and the related areas in the brain. Neurons in the brain (specifically in the basal ganglia and the cerebellum) fire action potentials, sending electrochemical signals to other neurons
in order to convey information. Neurons in the brain work together, firing action potentials in quick succession in order to respond to a stimulus or to execute a task. The culmination of a neuronal electrical signal is often translated into a physical response, like movement, via the recruitment of muscle fibers. In some types of movement disorders, neurons in the basal ganglia become hypersynchronized (Figure 1). Hypersynchronization occurs when neurons become increasingly synchronized, firing action potentials together in waves which impair normal movement, leading to excess movement or lethargic movement in different cases. The frequency of these waves is what determines the nature of different types of motor deficits. “If you think of neurons as people, you can think about the brain as being a vast collection of people,” said Dr. Evinger, “so normally the brain isn’t all that synchronized. Think about looking out on a street or the academic mall, you see people moving around doing all different things, and everything works just fine – that’s the way your brain normally should work. Every now and then you want to synchronize it so you can get large areas of the brain all involved in solving the same problem, like people doing the wave at a stadium – that’s the synchronization.” Dr. Evinger described hypersynchronization as the neurons in the basal ganglia acting “like the crowd outside a Walmart on Black Friday, five thousand people trying to get through four doors.” He and his lab have modeled hypersynchronization in normal rats in an effort to understand what happens in the brain during this type of excess neuronal synchronization (3). His model of hypersynchronization implements electrical stimulation of the brain, namely deep brain stimulation (DBS), a method by which electrodes are inserted into deeper areas of the brain, such as the basal ganglia. DBS is used in a therapeutic manner in people with Parkinson’s
disease and dystonia, where the brain is stimulated at a certain rate depending on the disease in order to restore normal motor function. Dr. Evinger has found that if a rat brain is stimulated at the rate seen in Parkinson’s disease or dystonia, a normal rat will appear as if it has that disease. Delivering DBS at frequencies specific to Parkinson’s disease produced the same increase in blink reflex excitability and impairment in blink reflex plasticity in normal rats as in rats with 6-hydroxydopamine lesions and patients with Parkinson’s disease (4). To demonstrate that the blink reflex disturbances were frequency-specific, he tested the same rats at the DBS frequency typical of dystonia. This exaggerated the blink reflex plasticity, similar to what is seen in focal dystonia. Thus, without destroying dopamine neurons or blocking dopamine receptors, Dr. Evinger found that frequency-specific DBS can be used to create Parkinson’s disease-like or dystonia-like symptoms in a normal rat. His previous research has also looked at stimulating rat models in a therapeutic fashion to understand what hypersynchronization does in the brain, and why it blocks or exaggerates movements. Dr. Evinger’s History Dr. Evinger studied psychology as an undergraduate student attending New College in Florida. After receiving his B.A. in psychology, Dr. Evinger moved on to acquire his Ph.D. in physiology and biophysics from the University of Washington. His dissertation was awarded the first Donald B. Lindsley Prize in Behavioral Neuroscience by the Society for Neuroscience. He continued his academic pursuits as a postdoctoral fellow in the Department of Physiology and Biophysics at New York University Medical Center. Inspired by his passion for neuroscience, Dr. Evinger strives to get others truly interested in neuroscience.
Currently, he teaches several undergraduate courses, including the principles of neuroscience lecture and laboratory (BIO 334 and 335) and an introductory neuroscience course for non-biology majors (BIO 208), as well as the neuroscience courses for the university’s medical and dental students. The joy he gets from teaching stems from his students’ interest in learning; he says, “It’s fun to get them excited about neuroscience,” hoping to inspire a passion akin to his own in the students he teaches (2). In addition to his passion for teaching, Dr. Evinger addresses the question of how the nervous system creates movement, and works on projects aiming to elucidate movement disorders by unraveling their causes and developing novel therapeutic treatment methods. A Closer Look at Dr. Evinger’s Research When he first started out in research, Dr. Evinger was not particularly interested in clinical issues. However, over time, his purpose in studying motor control has become more clinically oriented. He mentions, “I think it’s really important that as scientists we try to contribute in some way to improve people’s lives; the feeling just sort of grows on you.” Currently, Dr. Evinger’s main focus is on a particular focal dystonia called Benign Essential Blepharospasm, a movement disorder characterized by involuntary spasms of eyelid closure (5). According to Dr. Evinger, those who suffer from these uncontrollable eyelid spasms struggle to complete simple tasks without being at risk, as the dystonia can produce functional blindness. His animal model of Benign Essential Blepharospasm indicates that abnormal interactions among the trigeminal blink circuits, basal ganglia, and cerebellum are the neural basis for the dystonia (3). Blepharospasm is currently treated with Botox injections into the eyelids, a treatment that unfortunately does not cure the disease, but makes it more difficult for an individual to have a spasm of eyelid closure. Patients need to receive injections every two to three months in order to maintain the effects of the Botox, but Dr. Evinger’s lab is currently working to develop a non-invasive behavioral method for treating blepharospasm that is potentially a better option for patients than regular eyelid injections. The Evinger lab has developed a system in rats that has the potential to lead to new behavioral methods of treatment for blepharospasm. Originally, Dr. Evinger studied the blink reflex in humans by stimulating the supraorbital branch of the trigeminal nerve, which runs above the eye on the forehead.
Pairing supraorbital nerve stimulation with different parts of the blink reflex allows for control of eyelid movement, either depressing or potentiating blinking depending on how stimulation is timed during a blink (6). In order to observe the effect of stimulating the trigeminal nerve, Dr. Evinger attempted this type of nerve stimulation in rats by using chronic EMG recording and stimulation of the supraorbital and infraorbital branches of the trigeminal nerve. He found that “after ten days of stimulation in rats, the blink circuits were very depressed.” In someone with spasms of eyelid closure, this would
Figure 1: The first picture shows the basal ganglia (purple) and the cerebellum (green), two neural structures involved in movement disorders. The second picture shows the dura, among the other meninges, covering the brain. The supraorbital nerve is associated with the dura.
Basal Ganglia and Related Structures of the Brain
Image Retrieved from https://mediconews.com/wp-conthttpent/uploads/2009/10/basal-ganglia.jpg, http://en.academic.ru/pictures/enwiki/77/Meninges-en.svg
in theory allow the system to be depressed to the point where the individual does not experience involuntary spasms. The supraorbital nerve is also associated with the dura, the covering of the brain. The dura is related to migraine headaches, where neurons activate and synchronize excessively during a migraine. If those neurons are stimulated at certain frequencies, the system can be suppressed to decrease the neuronal activity contributing to a migraine. A device similar to a headband can be placed around the forehead to stimulate the supraorbital nerve and relieve migraines. Ideally, if Dr. Evinger’s behavioral therapy works in humans, he can create a device similar to these headbands, allowing a person suffering from blepharospasm to wear it for a few minutes over a period of several days and stimulate their supraorbital nerve at a specific, therapeutic frequency, reducing the incidence of involuntary eyelid closure. Dr. Evinger has a grant to develop an animal model of blepharospasm and try out his treatment, and he eventually hopes to try it in humans with the disease. Dr. Evinger’s lab will be able to start the process within the next two years. The Future of Neuroscience When Dr. Evinger was a postdoctoral fellow and assistant professor, the focus of neuroscience was to look at one neuron at a time. Today, the focus has shifted from such a micro scale to a more macro one, with behavior believed to represent the actions of networks of neurons. “It’s exciting because so many different areas are being addressed all at once and neuroscience envelops all of them,” said Dr. Evinger. “Each neuroscientist needs to find the level of the answer that they’re interested in. We can think of movement control as a molecular process or as a high order
process. It’s hard to imagine that there’s a field of neuroscience that’s not being looked into. Neuroscience is lots and lots of people following their own ideas, labs should be spread around and people should be looking at different things.” When asked what type of progress he hopes to see in relation to his field of research, Dr. Evinger stated that “better ways of measuring the activity of entire networks of neurons are needed with both hardware and software” (2). This need is not limited to his field alone. Thinking again of the comparison Dr. Evinger made between neurons and people, he suggests that “It’s easy to think of what one person might do, but it’s not as easy to think of what a crowd of five thousand people might do. Our minds aren’t made to think in numbers like that so you need computational tools that can help you.” When asked what advice he would give to people who are interested in neurobiology, Dr. Evinger urged them to join a research lab. “You can read about neuroscience all you want, but until you actually sit down and learn to do some neuroscience, it’s hard to make anything of what you’ve read.” References
1. J. Herman, M. Stewart, Hello, Dolly! (David Merrick, New York, 1964).
2. C. Evinger, Interview with Craig Evinger. Rec. 12 Nov. 2015. MP3.
3. C. Evinger, Animal models for investigating benign essential blepharospasm. Current Neuropharmacology 11, 53-58 (2013), doi: 10.2174/157015913804999441.
4. J. Kaminer et al., Frequency matters: beta-band subthalamic nucleus deep brain stimulation induces Parkinsonian-like blink abnormalities in normal rats. European Journal of Neuroscience 40, 3237-3247 (2014), doi: 10.1111/ejn.12697.
5. M. Hallet et al., Update on blepharospasm. Neurology 71, 1275-1282 (2008), doi: 10.1212/01.wnl.0000327601.46315.85.
6. C. Dauvergne, C. Evinger, Experiential modification of the trigeminal reflex blink circuit. The Journal of Neuroscience 27, 10414-10422 (2007), doi: 10.1523/JNEU-
Image Courtesy of Taylor Ha
Closing the Communication Gap between Earth and Outer Space
With Dr. Adam Gonzalez
Taylor Ha ’18

Swallowed by the dark silence of the universe, astronauts are prone to stress, anxiety, interpersonal issues, and fatigue (1). One remedy for this deterioration in mental health is real-time communication between astronauts and ground-based psychiatrists or psychologists. However, real-time, or synchronous, communication is not possible over extended travel distances, especially if astronauts are stationed at Mars. The resulting delay of up to 45 minutes can disrupt psychotherapy sessions and hamper the astronauts’ primary mission in space, resulting in lost data for NASA and, ultimately, for mankind’s future. However, a grant funded by NASA’s Human Research Program will allow faculty from Stony Brook University’s Department of Psychiatry to improve communication technology on NASA’s future deep-space missions to Mars and near-Earth asteroids (1). Research garnered from the grant will yield a set of recommended guidelines designating the most effective communication methods for behavioral health when real-time communication does not exist. These recommendations will be given to NASA behavioral health experts and applied to actual future deep-space missions. Principal Investigator and Assistant Professor Adam Gonzalez is currently tackling this problem alongside his research team (Figure 1). Allotted three years to gather data in the research project entitled “Asynchronous Techniques for the Delivery of Empirically Supported Psychotherapies,” the Stony Brook team aims to identify technology, such as video messaging, that can deliver critical human support to astronauts when real-time communication does not exist. “I’ll have some type of potential impact on space travel and even being slightly involved to whatever degree we are involved, it’s really exciting,” Gonzalez added (2).
Gonzalez’s partners also play essential roles in this investigation. Research Assistant Professor Brittain Mahaffey will be in charge of supervising and providing therapy once participants are enrolled, as well as supervising the therapists hired for the trial. Assistant Professor Roman Kotov is a clinical psychologist who serves as the project’s statistician. Distinguished Professor Kaufman, the chair of the computer science department, is helping with the different electronic delivery platforms; he specializes in virtual reality. Jim Murry, the chief information officer at Stony Brook hospital, is in charge of developing the self-management packages through the patient portal at Stony Brook.
Image Retrieved from https://medicine.stonybrookmedicine.edu/sdmpubfiles/ cckimages/page/GonzalezAdam_144_2.jpg
“So, we’re working closely with them to choose the different self-management treatment packages based on our literature review that would be relevant, and then we’ll put together the different communication methods into this one platform,” Gonzalez clarified (2).

Pinpointing the Research Purpose

Astronauts currently aboard the International Space Station maintain synchronous communication with ground-based mission control via phone, email, and video messages because the distance is still manageable. In other words, these astronauts, many of whom are men, are able to directly call family and friends on Earth. “So their wife can be out shopping and they’re getting their call from their husband who’s up at the International Space Station,” Gonzalez explained (2). “Right now for our communication purposes, they can speak to even their therapists back here on Earth” (2). However, that will not be the case for astronauts on Mars, which on average is 140 million miles from Earth (3). “I think the magnitude of going to Mars and traveling for such a long period of time is the main reason why they’re (NASA) looking into these different types of treatment delivery methods,” Gonzalez explained (2). A distressed or depressed astronaut may suffer communication problems with fellow astronauts on board or with ground-based crewmembers, become distracted from his or her specific research agenda, negatively interfere with the mission, or dangerously operate the spacecraft (4). A lack of real-time communication may also hurt the efficacy of therapy sessions between astronauts and their medical professionals.

Past Behavioral Issues with Astronauts

Documented cases of astronauts who have suffered from psychological stress validate the issue. William Pogue, a member of the three-man crew aboard Skylab from 1973-1974, declared a strike against ground controllers at Cape Canaveral, Florida about six weeks into the mission.
According to a New York Times article, “He and the others just wanted more time to look out the window and think” (5). Pogue and ground-based mission control eventually compromised. Although uncertain whether Pogue’s strike prompted the funding of NASA’s current research, Gonzalez believes “the fact that we’re going into some uncharted territory, going into outer space out of our orbit where we won’t have this real-time communication,” is ultimately the cause (2).
Four Fundamental Steps

Gonzalez’s team is following four key objectives: conduct a literature review on empirically supported therapies, complete another literature review on various treatment platforms, use the findings from both reviews to conduct a randomized trial, and finally, condense the effective results into best-practice guidelines for NASA. Focusing on empirically supported therapy techniques, the first literature review will cover treatments for the problems that astronauts may encounter during long-duration space missions. These issues include anxiety, elevated levels of distress, fatigue, depression, and panic attacks. The second literature review will hone in on different methods of delivering treatment, such as text-based communication, video messaging, and the use of virtual reality. “Those first two steps are to figure out what has been done already,” Gonzalez summarized (2). After approximately 8-9 months, the team will propose the full trial to test therapy techniques and start recruiting and enrolling 126 participants over a two-year period. These participants will be high-functioning adults, called analogs, who are similar to astronauts: people with advanced degrees who are under work-related stress but remain relatively healthy. However, these participants are missing one feature – they are not astronauts themselves. To account for this variable, chosen analogs will possess traits that closely resemble those of astronauts. For example, the study group will consist of postdoctoral researchers, graduate students, and young professionals in the STEM fields with a master’s degree or higher – backgrounds that astronauts typically have. To measure their stress levels, participants will first have a clinical interview, where therapists will learn whether they have had any serious mental health problems.
Then they will be given self-report measures that analyze different life stressors, including financial problems, family problems, and stressful work environments, in order to gauge their stress levels. This screening process will recruit participants who are still high-functioning but also under major stress (2, 4). The communication protocols are the treatments: Gonzalez’s team will test three different combinations of communication with the analogs to evaluate which treatment delivery methods are most effective when real-time communication does not exist. The first method is a self-management package, a set of online tools that astronauts can use independently to learn different techniques for managing stress. Watching a webinar or PowerPoint slides about how to manage depression, and thereby learning specific skills, is one example. “Here are different things that you need to do, here are different things that you should be aware of when you’re experiencing depression,” Gonzalez described as an example of a self-management package (2). The second option adds video support to the self-management package. In other words, an astronaut can send a video message to his or her therapist and receive one in response, though not in real time. This type of video messaging is similar to the kind that Murphy, a character on Earth, transmits to her father Cooper, an astronaut in space, in the science fiction film Interstellar, which explores deep-space travel (6). Lastly, astronauts can have both access to the self-management package and text support, which can include short text messages; longer, diary-type messages; pictures; and even emojis.
Video support and text-based communication may have different benefits and drawbacks. Through videos, therapists can not only see their patients and hear what they are saying but also hear how they are saying it. In other words, therapists can understand their patients’ thoughts and feelings much better through video support than through text-based communication. Yet with text-based communication, an astronaut does not have to experience the awkwardness of being video recorded (4). The team does not yet know which technique will work best. Gonzalez’s team will also have the opportunity to interview NASA behavioral health experts, such as psychiatrists and therapists, to understand current practices and how these medical professionals interact with their space patients. According to Gonzalez, they will also be able to speak to past astronauts and learn about their personal experiences with space travel and with communicating with their therapists on Earth (2). Lastly, Gonzalez and his team will compose best-practice guidelines based on their research and submit them to NASA. The guidelines will specify the most effective methods of communication for behavioral health when there is a delay in communication. “So, to put together some guidelines that will then be used, given to the NASA behavioral health experts as to what, based on our findings, what are our recommendations for communicating,” Gonzalez clarified (2).

Reversing the Focus Back to Earth

“Asynchronous Techniques for the Delivery of Empirically Supported Psychotherapies” is capable of aiding not only astronauts in space but also people planted on Earth. By understanding which asynchronous communication methods work best, Gonzalez and his team may be able to strengthen the world’s mental health treatment system (2, 3).
In other words, results from their experiments may provide more effective treatments for people, especially the elderly and disabled, who live in rural areas with little to no Internet connection, service, or providers. “So, kinda tapping into a different way of delivering telemedicine,” Gonzalez concluded (2).

Conclusion

What does the team’s research mean for humanity? Their final output, recommendations for communication when real-time contact does not exist, will ultimately keep astronauts psychologically intact as they conduct missions in deep space, whether they are uprooting samples from Mars’s surface or investigating the composition of an asteroid. “I think that’s great that there’s a role for mental health, [that] there’s attention being paid to mental health by NASA for the astronauts,” Gonzalez noted (2). But although these future recommendations will aid psychotherapy sessions between astronauts in space and ground-based psychologists or psychiatrists, there is one mode of communication that may always prevail over the rest: face-to-face, physical communication.
References
1. L. Roth, SBU research will help astronauts on future deep-space missions. Stony Brook University Happenings, (2015).
2. A. Gonzalez, Interview with Dr. Adam Gonzalez. Rec. 16 Nov. 2015. MP3.
3. N.T. Redd, How long does it take to get to Mars? Space.com (2014).
4. B. Mahaffey, Interview with Dr. Brittain Mahaffey. Rec. 11 Nov. 2015. MP3.
5. P. Vitello, William Pogue, Astronaut Who Staged a Strike in Space, Dies at 84. New York Times, (2014).
6. C. Nolan et al., Interstellar. Legendary Pictures (2014).
Talent vs. Hard Work
Why Practice Alone Does Not Necessarily Make Perfect
Samara Khan ’19
Image Retrieved from https://northmantrader.files.wordpress.com/2015/02/chess2.jpg?w=278&h=185
Introduction

Although the existence of talent has been heavily debated in recent years, preliminary research suggests the existence of an underlying factor that affects performance and skill level in fields such as music and sports. Parameters such as region-specific neural activation, working memory capacity, and a subject’s ability within a certain discipline have been used to test for the presence of such a factor. However, none of these parameters has yielded a standard quantitative measure of talent. Nevertheless, neuroscientists in the United States and Europe agree that talent can be defined as a measure of high intelligence that results in consistent, above-average performance and accomplishment in a certain field or area (1). Some scientists believe that deliberate practice – active practice that challenges and raises a person’s skill level – alone is sufficient for high achievement within a specific discipline. Contrary to that belief, recent studies suggest that this is not the case and that an underlying factor plays a role in levels of achievement.

Neural Basis of Talent

The possible existence of talent was observed in a study of variation in neural activity across different individuals performing the same basic task, which quantified differences in neural activity between individuals with high and average IQ scores. Children (ages 8-12) and adults (ages 18-21) were separated based on IQ and were each presented with several mathematical word problems. While they solved the problems, their brain activity was observed using an EEG. The pre-frontal cortex, a region of the brain responsible for strategic processes, memory retrieval, reasoning, and organization of goal-directed actions, was specifically studied. The study concluded that the children with high IQ scores showed increased alpha activity in the left pre-frontal cortex while the
children with average IQ scores showed less alpha activity. Furthermore, the increased alpha activity seen in the high-IQ children was almost the same as that of the adults with average IQ scores. Talented children with high IQ scores therefore had more sophisticated neural processes at an earlier age, providing evidence for an underlying factor that affects performance on some level (1). Although such studies provide evidence of an underlying factor, they do not account for the presence of child prodigies. Child prodigies demonstrate high levels of ability within a specific area with relatively little practice. A study compared the brain activity of child prodigies performing both basic tasks and the activities they excelled at. The prodigies showed average neural activity on basic tasks but heightened activity when practicing their area of expertise. This increased brain activity in one specific area led researchers to conclude that there must be an underlying factor that facilitates brain activity in specific areas of performance (2). Children with Savant syndrome, like child prodigies, also depart from the deliberate practice theory. Savant syndrome patients demonstrate abilities similar to geniuses in only a few areas, but have limited mental capacities in all other areas due to damage in the pre-frontal cortex. In 2011, Dr. E.J. Meinz hypothesized that damage to the pre-frontal cortex does not damage talent within a certain area. This hypothesis follows because children with Savant syndrome show severe impairments in personal and social decision-making, but have normal or above-average levels of intellect within a specific discipline. The study found that the damaged pre-frontal cortex of Savant children led to developmental asynchrony, in which emotional development lagged behind intellectual development.
It also found that parts of the pre-frontal cortex contribute to talent, and that even if specific parts were damaged, talent within a certain discipline was not affected (1). These results indicate that specific parts of the pre-frontal cortex contribute to talent in an individual, which accounts for the presence of above-average ability in children with Savant syndrome.

Same Practice, Different Outcomes

The case of the Polgar sisters is often used to support the deliberate practice theory, but it may not be as supportive as scientists originally believed. These three sisters, raised in the same environment, were trained in chess by the same instructor for the same amount of time. Their progress was recorded over several years to observe talent as an underlying factor. The hours of practice for each Polgar sister, along with their ratings on the scale of FIDE (the international chess federation), were compared to the development of eight other chess grandmasters through weekly assessments. As their training progressed, there were at least 1.54 standard deviations on the FIDE scale between Sister 1 and Sister 2, and 0.54 standard deviations between Sister 2 and Sister 3, a statistically significant difference of at least 20 points on a 200-point scale. At each point in their training, despite having the same amount of instruction, each of the Polgar sisters was at a different level of chess ability. In addition, only two of the three sisters became chess grandmasters. This shows that despite the same amount of deliberate practice, the sisters still developed their chess abilities at different rates. The deliberate practice theory cannot account for these substantial differences in performance, because the sisters often trained together with the same instructor for the same amount of time. Furthermore, the Polgar sisters’ performances were individually compared to the performance of another chess expert, ACP.
ACP learned chess when he was nine years old, compared to the Polgar sisters, who started around five. However, ACP put in substantially fewer hours of practice than the Polgar sisters, and he still became a chess grandmaster around the same time as Polgar sisters 1 and 2. Seven months after the study began, ACP equaled Polgar sister 1’s rating despite his late start. If practice and an early start were the only factors in talent, there should have been little difference among the Polgar sisters in rating trajectories and peak ratings, and all of the sisters should have become grandmasters. A plausible alternative is that the Polgars differ in chess ability: one sister has less ability than the others, while ACP has more ability than all three sisters (2).

Deliberate Practice Does Not Account for All Variations in Skill

Working memory capacity (WMC), the amount of information that can be retained in short-term memory, is another element that has been used to support the existence of talent. A study discovered that even for professional piano players, deliberate practice accounted for only approximately half of the variation in sight-reading skill. The team measured WMC to determine whether it accounted for the remaining variation (3). In the study, piano players of varying abilities were tested on different levels of sight-reading and were rated on technical proficiency, musicality, and overall performance. The working memory capacity of each piano player was measured by them
answering yes or no to a set of simple equations; the higher a player’s score on this test, the higher their working memory capacity. The players were also asked about their years of experience, paid engagements, memorized solos, hours of accompaniment, hours of sight-reading practice, and hours of overall practice. The team found a statistically significant correlation, determined through an R² value, between hours of deliberate practice and years of lessons on one hand and sight-reading performance on the other. However, years of practice did not account for all of the variation in sight-reading performance. WMC was found to have a stronger statistically significant correlation with sight-reading performance, as reflected in a higher R² value. The study concluded that at all levels of musical ability, deliberate practice does not account for all of the variation in skill level. Working memory capacity is not something that can be developed over time, but is instead considered an innate ability, and its contribution further undercuts the deliberate practice theory.

Conclusion

Proving the existence of talent can help explain the variations in skill level seen among individuals in the performance and sports industries. Although many scientists deny the existence of talent and claim that repeated practice can help anyone learn a specific skill, studies conducted over the years have challenged that view. To date, increased activity in the pre-frontal cortex, the existence of child prodigies and children with Savant syndrome, and working memory capacity have all served to support the existence of talent and to undermine the deliberate practice theory. Increased alpha activity in the pre-frontal cortex of gifted children indicates an underlying factor that makes these children more adept at a certain skill.
Additionally, child prodigies often have little to no deliberate practice, but still demonstrate profound ability; there is clearly an underlying factor that separates these children from others. This difference in ability was also seen in the case of the Polgar sisters, whose abilities were quantified on the FIDE scale: all three sisters’ chess skills developed at different rates despite the same amount of training. Although it is not clear what this underlying factor is exactly, Savant children demonstrate that very specific areas of the pre-frontal cortex contribute to talent. Lastly, working memory capacity is another component that could contribute to this underlying factor. More research needs to be done to identify which neurological and psychological factors contribute to talent. Concrete evidence needs to be presented and clearer trends need to be established regarding the exact factors involved. The studies above show correlations between certain variables, which are a promising start, but as of now there is no conclusive neuroscience-based evidence of the existence of talent.
References
1. M.L. Kalbfleisch, Functional neuroanatomy of talent. The Anatomical Record 277, 21-36 (2004).
2. R.W. Howard, Does high level intellectual performance depend on practice alone? Debunking the Polgar sisters’ case. Cognitive Development 26.3, 196-202 (2011).
3. E.J. Meinz, D.Z. Hambrick, Deliberate practice is necessary but not sufficient to explain individual differences in piano sight-reading skill: the role of working memory capacity. Psychological Science 21.7, 914-919 (2010).
Hearing and Neural Communication
Elizabeth Shaji ’18
Introduction

Hearing and understanding language is essential to the growth and development of an individual in society. In order to decipher verbal communication, the human brain creates an aural map based on the pitch, frequency, and intonation of spoken phrases. Whether it be the shrill sound of an alarm clock or the cadence of poetry, the “hearing” portion of the brain, otherwise known as the auditory cortex, compiles information from the surroundings and sparks the appropriate response within the brain. Simply put, the auditory cortex contains a network of neurons that respond to sound depending on its frequency. At the level of neural communication, however, the phenomenon of hearing is poorly understood. Recent studies have delved into the largely unexplored auditory cortex to identify how neurons interact to amplify important sounds and inhibit unwanted background noise.

Perceptual Features of Noise

The Tonotopic Map

At the source, a sound may be clear and comprehensible, but as it travels through objects and background noise, it gets distorted. The sound wave that reaches the ear is unlike that of the source, as it is a combination of all the other sounds in the aural environment. To process this degraded sound wave, the cochlea, using auditory nerve fibers, decomposes the wave into individual frequencies and creates a tonotopic map of the aural environment (1). Neurons that are in close proximity to each other respond to sounds with similar frequencies. As a result, higher sound frequencies are separated from lower sound frequencies in an orderly fashion within the tonotopic map.

Spectro-temporal Features and Their Neural Foundation

The multitude of frequencies is then transmitted to the auditory cortex by the auditory nerve. Before the information reaches the auditory cortex, the auditory nerve carries it into the cochlear nucleus.
Once there, it is relayed through the superior olive (SO), a structure comprised of medial (MSO) and lateral (LSO) subsections. The MSO extracts binaural timing cues and the LSO extracts intensity cues. Both timing and intensity cues describe the difference between the sound stimuli reaching the right and left ears of the listener. The signal difference is even greater when the source of the sound is within one meter of the ear (2). The binaural time difference is determined by the distance between the two ears of the observer, the speed of sound, and the angular distance of the sound source from the horizontal plane. Directionally
selective neurons respond to the direction the sound stimulus is moving in. Azimuth-sensitive neurons respond optimally to the left and right positions of sound sources in space. Panoramic neurons fire action potentials in response to sounds originating from any direction. The location of the sound source is constructed from the pattern in which these neuron types fire (3). In addition, intensity cues take into consideration the differing intensity of sound between the ears. For example, higher frequency sounds have difficulty reaching the opposite ear, a phenomenon called the acoustic shadow. Both processes can be integrated to obtain an accurate spatiotemporal representation. After the complex sound wave is broken down, the parallel processing pathways for timing and intensity converge at the inferior colliculus (IC) in the midbrain, where the binaural cues are integrated so that the brain can approximate representations of positions in space. After preliminary processing at the SO, the information travels to the auditory cortex via the medial geniculate body (MGB) of the thalamus.
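The binaural timing cue described above is commonly modeled with a simple geometric formula: the interaural time difference (ITD) is roughly the ear separation divided by the speed of sound, scaled by the sine of the source’s azimuth. The sketch below illustrates this textbook approximation; the ear spacing and angles are illustrative values chosen for the example, not figures from the studies cited here.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C


def interaural_time_difference(ear_distance_m: float, azimuth_deg: float) -> float:
    """Simple two-point model of the binaural timing cue.

    ITD = (d / c) * sin(theta), where d is the spacing between the ears,
    c is the speed of sound, and theta is the source azimuth measured
    from straight ahead. Returns the arrival-time difference in seconds.
    """
    theta = math.radians(azimuth_deg)
    return (ear_distance_m / SPEED_OF_SOUND) * math.sin(theta)


# A source directly ahead produces no timing difference; a source at 90°
# (directly to one side) produces the maximum ITD for a ~18 cm ear spacing.
for azimuth in (0, 30, 60, 90):
    itd_ms = interaural_time_difference(0.18, azimuth) * 1000
    print(f"{azimuth:>3}°: {itd_ms:.3f} ms")
```

Because the maximum ITD is well under a millisecond, the MSO’s timing comparison must resolve differences on the order of microseconds, which is one reason this cue is handled by dedicated brainstem circuitry rather than the cortex.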
Figure 1: The pathway of hearing as sound moves through the ear and reaches the brain; the schematic displays the chain of command within higher auditory processing.
Image Retrieved from http://neurosciencenews.com/speech-sound-meaning-neuroscience-2740/auditory-system-speech-public/
Auditory Scene Analysis

The fields of the auditory cortex are organized hierarchically into primary, secondary, and tertiary areas (4). Neurons in the secondary and tertiary areas prefer spectrally and temporally rich sounds, and respond with longer latencies and more diverse temporal firing patterns than their primary-area counterparts, which prefer and respond to pure tones (3). Once the information reaches the higher areas, the brain has the responsibility to differentiate the frequencies, identify the source, and understand the significance of the sound. This process, known as auditory scene analysis, depends on associating these sounds with pitch, timbre, and location in space (5).

Invariant Representation

To decode the different features of sound, neurons in the auditory cortex represent multiple features rather than a single one. Thus, neuronal activity is often modulated by more than one stimulus dimension, and information about any perceptual feature can be obtained from individual neurons depending on the stimulus. Moreover, neural responses across multiple auditory cortical fields (subdivisions within the auditory cortex) might be represented as a chain of action potentials within the cochlear nerve. Despite the multi-modality of single neurons, certain regions of the auditory cortex may be more sensitive to one perceptual feature of sound than to the other two. In other words, these regions can take part in invariant representation (2). In vision, neuronal networks at each level of the brain remove accessory information from the sensory input. At the highest level of the brain, the portion responsible for advanced processes such as planning, memory, and perception, different views activate the same cluster of neurons representing a single image – much like how the brain interprets both a sofa and a stool as a chair. Vocalizations are processed with the same kind of invariant representation.
Invariant representation permits the ability to discriminate between certain vocalization and speech patterns despite temporal and speaker variability. Although the pronunciation of a word differs, whether in pitch, speed, or stress, the brain can still comprehend the word.

Steps to a Neural Code

Unlike neurons in the primary auditory cortex (A1), which receive relatively crude and untreated signals from the ear, neurons involved in higher auditory processing receive signals that have moved in a hierarchical fashion through successive, feedforward maps up the auditory cortex, specifically the super-rhinal auditory field (SRAF). Feedforward maps are layered networks in which the first layer receives the input; through successive signaling, the final layer produces the output. The higher up in the auditory cortex, the better the brain can generalize over acoustic distortions. Neurons in the SRAF also exhibit longer response latencies, meaning they take more time to respond to basic sound stimuli, compared to those in A1. The integration of neurons with different tuning properties gives SRAF neurons higher selectivity in addition to tolerance of a greater range of variation. As the information is processed, a neural code is produced that can be decoded across stimulus transformations. Moreover, some areas of the higher auditory cortex may be better equipped to tune out basic acoustic transformations and, as a result, are sensitive to patterns of spectro-temporal modulation (6).

Evolution’s Weapon: Stimulus-Specific Adaptation

From an evolutionary standpoint, survival necessitates distinguishing between ordinary sounds and unexpected, possibly dangerous ones. Given the limited capacity of the brain, sensory overload is prevented by increasing the efficiency with which A1 can sort through and tune out frequent, ordinary sounds, a process called stimulus-specific adaptation (SSA). A majority of the neurons in the brain partake in SSA, but two specific interneuron types, parvalbumin-positive interneurons (PVs) and somatostatin-positive interneurons (SOMs), work synergistically to enhance the brain’s ability to respond to rare sounds through inhibition. Through optogenetics, a technique in which certain neurons can be excited or inhibited with optical fibers, the relationship between PVs and SOMs, as well as the associated behavioral response, was illustrated.
SOMs generally inhibit responses to repeated, standard tones, whereas PVs inhibit responses to both standard and deviant tones. Accordingly, suppressing SOMs increases behavioral responses to frequent tones; in contrast, suppressing PVs equally increases behavioral responses to frequent and rare tones. The integration of these two inhibitory interneuron types generates a greater difference between responses to standard and rare tones. As a stimulus is repeated, desensitization to that particular stimulus occurs; the brain, however, becomes more sensitive to changes in stimuli. This intensified ability to differentiate and recognize sounds within a cacophonous acoustic environment is what drove our ancestors to flee a loud crash, and it is also what regulates daily conversational patterns (7). Hearing in the Presence of Noise Given the brain's astounding ability to recognize communication signals amid background noise, determining noise-invariant neural responses is essential both for pinpointing the brain regions that form aural perceptions and for understanding the neural computations that execute these tasks. Although the invariant neural response system is well understood in the occipital cortex, the same cannot be said for the neurons in the auditory cortex. For instance, neurons in the secondary auditory cortex (A2), when exposed to sound stimuli, exhibited noise-invariant responses of a similar pattern both in silence and in the presence of a masking background noise. Since each neuron is tuned to temporal (time) and spectral (frequency) modulations within the aural environment, characterizing each neuron's tuning reveals how noise invariance to specific sounds arises. In particular, noise invariance is partially brought about when responding to long sounds with a sharp spectral structure. These neurons take a longer time to process signals and have sharp excitatory and inhibitory tuning, resulting in sensitive regions of the A2 that can filter out natural, recurring noise. Thus, high-level auditory areas are selective for complex sounds and can efficiently process communication signals. Neurons in lower auditory areas, on the other hand, have shorter integration times and lack the sharp excitation and inhibition along the spectral dimension (8). To understand how noise invariance is achieved, the neural responses to spectro-temporal patterns in the presence of background noise must be examined. Because auditory neurons adjust their firing patterns in response to changes in time, frequency, and space, the receptive field of one neuron may differ from that of another. Graphically, the spectro-temporal receptive field (STRF), a linear model describing which time-frequency features of a sound drive a neuron's firing, showcases the efficiency of neurons. By estimating the STRF of each neuron and examining its predicted response to certain stimuli, it was shown that many high-level neurons adapt to differing sound intensity levels. These neurons show decreased responses to sound in the presence of background noise. This could be attributed to the necessity of preserving some background noise, especially in the circumstance of stimulus-specific adaptation. Specifically, neurons in more dorsal areas such as the A2 are more invariant. 
More importantly, neurons whose STRFs bore a more linear relationship to their responses were able to filter sound efficiently (8). Noise-Filtration Algorithm The discovery of noise-invariant neurons in the A2 permitted the engineering of a noise-filtering algorithm that can recover a signal from background noise. Algorithms that can draw speech out of noise have clinical applications such as hearing aids and cochlear implants. Although various forms of noise reduction technology have improved, most hearing aid users still report trouble listening to speech in noisy environments. Modulation Filter Bank Using artificial neurons, a modulation filter bank can be created, since the response of each neuron quantifies the presence or absence of a particular spectro-temporal pattern. Modulation filter banks are essentially arrays of neurons, each sharply tuned to a particular spectro-temporal modulation rate. These sound filters characterize certain beats and frequencies by specific modulations. Sounds of the same frequency that lack the specific spectro-temporal pattern a neuron is looking for are then tuned out. Essentially a filtration system, these neurons organize and decipher a stimulus to create a comprehensible sound. The filter bank is an imperative step in the development of an algorithm that could optimize speech recognition.
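The filter-bank idea can be sketched in a few lines of code. The following is a minimal illustration only, not the model from reference 8: each artificial "neuron" is a 2D Gabor filter tuned to one spectral and one temporal modulation rate, and a sound's spectrogram is projected onto every filter in the bank. All function names, filter shapes, and rates here are illustrative assumptions.

```python
import numpy as np

def modulation_filter(n_freq, n_time, spectral_rate, temporal_rate):
    # One model "neuron": a complex 2D Gabor tuned to a single
    # spectral modulation rate (cycles per frequency channel) and
    # temporal modulation rate (cycles per time frame).
    f = np.arange(n_freq) - n_freq // 2
    t = np.arange(n_time) - n_time // 2
    F, T = np.meshgrid(f, t, indexing="ij")
    envelope = np.exp(-(F**2) / (2 * (n_freq / 4) ** 2)
                      - (T**2) / (2 * (n_time / 4) ** 2))
    carrier = np.exp(2j * np.pi * (spectral_rate * F + temporal_rate * T))
    return envelope * carrier

def bank_responses(spectrogram, tunings):
    # Project the spectrogram onto each filter in the bank. Sounds
    # lacking a filter's preferred spectro-temporal pattern yield a
    # weak response and are effectively "tuned out."
    out = {}
    for s_rate, t_rate in tunings:
        filt = modulation_filter(*spectrogram.shape, s_rate, t_rate)
        out[(s_rate, t_rate)] = float(np.abs(np.sum(spectrogram * np.conj(filt))))
    return out

# Toy input: a spectrogram consisting of a single "ripple" whose
# modulation rates exactly match the first filter in the bank.
F, T = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
spec = np.cos(2 * np.pi * (0.10 * F + 0.05 * T))

resp = bank_responses(spec, [(0.10, 0.05), (0.30, 0.20)])
# The matched filter responds far more strongly than the mismatched one.
```

In a full system, the vector of responses across many such filters would serve as the representation from which speech is separated from noise.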
Clinical Applications: Hearing Aids Given that 10% of the American population suffers from mild to severe hearing loss, algorithms like the one described above can be used in hearing aids to efficiently reduce background noise and amplify sounds at certain frequencies. Historically, hearing aids have been unable to fully mask excess sounds that a user is not focusing on (10). Instead, they amplify background sound along with people's voices, delivering one incomprehensible package. Successful algorithms combine efficient noise reduction with heightened speech perception. Recently, researchers at Ohio State University developed an algorithm that showed a 90% improvement in speech recognition. The algorithm takes advantage of a deep neural network, an artificial neural network with multiple hidden layers of units between the input and output layers, loosely modeled on the human brain. In fact, many of the hearing-impaired test subjects heard better with the algorithm than subjects with full hearing did without it. Unfortunately, despite the various algorithms in use today, most require a high computational load, which results in high power consumption (9). Conclusion From detecting the faintest of whispers to suppressing a hodgepodge of sounds, the brain is essential in coordinating and refining the noises picked up by the ear. The neurons of the auditory cortex translate sound waves as they communicate with one another to form an aural picture, one that focuses on the necessary information. Understanding the neural connections intrinsic to hearing suggests new computational methods, especially for the hearing impaired, to enhance certain features of sounds in an attempt to replicate the quintessential aural environment. 
Further investigations in auditory research explore whether interneuron-mediated SSA can affect behavior, such as the ability to detect unexpected events; the identification of individual subregions within each level of the hierarchy; and the recording and targeting of specific cell types to determine whether the invariance transformation occurs in the A1-SRAF pathway or in canonical cortical circuits.
References 1. H.V. Oviedo, et al., The functional asymmetry of auditory cortex is reflected in the organization of local cortical circuits. Nature Neuroscience 13, 1413-1420 (2010). doi:10.1038/nn.2659. 2. N. Burgess, Spatial cognition and the brain. Annals of the New York Academy of Sciences 1124, 77-97 (2008). 3. J.O. Pickles, An Introduction to the Physiology of Hearing. (Academic Press, New York, ed. 2, 1988). 4. M. Chevillet, et al., Functional correlates of the anterolateral processing hierarchy in human auditory cortex. Journal of Neuroscience 31, 9345-9352 (2011). 5. J.K. Bizley, et al., Interdependent encoding of pitch, timbre, and spatial location in auditory cortex. The Journal of Neuroscience 29, 2064-2075 (2009). 6. I.M. Carruthers, et al., Emergence of invariant representation of vocalizations in the auditory cortex. Journal of Neurophysiology 114, 2726-2740 (2015). doi:10.1152/jn.00095.2015. 7. R.G. Natan, et al., Complementary control of sensory adaptation by two types of cortical interneurons. eLife (2015). doi:10.7554/eLife.09868. 8. R.C. Moore, et al., Noise-invariant neurons in the avian auditory cortex: hearing the song in noise. PLoS Computational Biology 9 (2013). doi:10.1371/journal.pcbi.1002942. 9. E.W. Healy, et al., An algorithm to improve speech recognition in noise for hearing-impaired listeners. The Journal of the Acoustical Society of America 134, 3029-3038 (2013). doi:10.1121/1.4820893.
BCI Use with Cerebral Palsy Patients Jessica Jolley ’16
Image Retrieved from http://4.bp.blogspot.com/-PC4kez4k5vM/VFL6CPnxAvI/AAAAAAAAAxg/8q0BNyOnLN0/s320/epilepsy%2Btreatment.jpg
Introduction Cerebral palsy (CP) is a non-progressive condition that affects a person's ability to move. While CP affects each individual differently, it generally makes muscle control, muscle coordination, reflexes, balance, and overall movement very difficult. Many patients with CP do not have intellectual disabilities, but they struggle to communicate and complete desired actions. Communication options for these patients are heavily motor-based, making them largely ineffective. Brain-computer interfaces (BCIs) offer an alternative communication method that relies not on movement of the body but on thoughts and brain signals. Unlike in many other neurological disorders, brain function remains normal in some cases of CP. BCIs allow individuals to use their own thoughts and neurological activity to control the motor output of a corresponding device. Without the need for the patient's motor function to control the device, communication becomes more efficient. CP results from damage to the developing brain, either during pregnancy or soon after birth. This injury manifests as the misfiring of signals sent from the brain to the body: while brain function is normal, motor commands often do not translate properly to the corresponding muscles. Some patients also struggle with epilepsy, hearing impairments, and visual impairments. The combination and severity of symptoms vary, but the lack of muscle control, combined with non-motor impairments, causes universal issues in patients' everyday lives. One in three people with CP has mobility issues, and one in four has bladder or bowel control problems (1). Many products currently exist to help those with mobility issues associated with CP, such as wheelchairs and walkers. In addition, speech-generating devices (SGDs) exist to aid with communication and have been developed in a range of complexities. 
Despite the existing devices, motor-based tasks can still be difficult for those with CP, so researchers must continue to find ways to address these concerns. Overview of Brain-Computer Interface (BCI) Technology BCIs create a communication pathway between the brain and an external device that bypasses the neuromuscular system. These devices focus on assisting, enhancing, or replacing sensory-motor functions. BCIs don't necessarily
“read the mind,” but instead detect changes in energy or frequency patterns in the brain. By detecting these changes, BCIs can communicate with and control an external device through a set of pre-programmed controls. Pioneering research on BCIs began in the 1970s at the University of California and was largely inspired by the discovery and development of EEG (2). The purpose of integrating EEG signals into a BCI system was to assist with or effectively replace sight, hearing, or movement in users. In 1998, the first BCI was implanted into a patient, but the majority of BCIs continued to be external cap or headset devices. At this point, control and precision in BCI devices were limited and motion was not very complex. By 2004, Columbia University Medical Center researchers had found success detecting electrical activity in the brain with enhanced precision. In the same year, researchers at the NYS Department of Health's Wadsworth Center released a report showing patients' ability to control a computer using a BCI system (2). BCIs continue to become more complex as EEG and other methods of detecting brain signals become more precise. The functionality and practicality of these devices continue to improve as the intricacy of the systems and the number of available controls increase. Despite such improvements, many limitations on BCI systems still exist. Acquiring precise brain signals can be difficult due to interference from the skull, the rest of the body, the environment, and movement by the patient. Directly implanting the device remedies these issues but is impractical at this stage of development. Another ongoing issue is the limited complexity of the devices that can be controlled using BCIs (3). Most usable BCI systems have few, very simple controls, because increasing the complexity of a device increases inaccuracies and signal misreadings. 
Timing delay is also a universal issue in BCIs due to the nature of the devices. Decoding EEG signals in order to map them to the corresponding controls and systems may introduce a delay of a few milliseconds up to a few seconds. Without extensive training, it is difficult to operate a BCI accurately, as the technology is still in its preliminary stages (3). Assistive Communication Using BCI A common research focus in the field of assistive devices is spelling systems. Spelling systems are a way to help
those who have communication issues by allowing them to spell out words without speech. One very common BCI spelling system is the P300, in which the user must discriminate among stimuli to complete a given task. The first reported use of the P300 system was an experiment by Farwell and Donchin, which consisted of a 6x6 matrix of letters and images whose rows and columns were repeatedly flashed to elicit visual evoked potentials. The P300 was controlled by attention rather than gaze direction, so by focusing on the desired selection, users in this experiment were able to spell “brain” with the P300 device. This was groundbreaking because, until it was developed, BCIs had depended on muscle or eye-movement control. Following this, Wolpaw and his colleagues reported the first use of sensorimotor rhythms (SMRs) for cursor control. SMRs are EEG signals that change with movement or imagined movement and do not require specific stimuli to occur. In the test, users were able to control a cursor through changes in their SMRs. A unique feature of this test was that users had to switch rapidly between two mental states to select the corresponding target, whereas past tests had involved long-term changes in brain patterns (4). These fundamental concepts now set the basis for multidimensional control. Additionally, row-column communication boards are widely used among people with difficulty communicating. A row-column board consists of a grid with letters in separate boxes. A highlighter or some other marker is scanned along the rows of the grid until the row containing the desired item is chosen by the activation of a switch. The marker then moves along the columns of that row until, once again, the desired item is chosen by switch activation. These communication boards are very common because they are effective, but they are also rather slow. A recent study aimed to develop a BCI based on the row-column board concept, with a binary control signal replacing the switch. 
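The row-column scanning procedure described above can be sketched as follows. This is a toy illustration: `switch_pressed` is a hypothetical stand-in for whatever generates the binary control signal, whether a physical switch or an imagery-detecting EEG classifier.

```python
def row_column_select(grid, switch_pressed):
    # Phase 1: highlight each row in turn until the binary switch
    # fires, selecting that row.
    row = next(r for r in range(len(grid)) if switch_pressed("row", r))
    # Phase 2: highlight each column of the chosen row until the
    # switch fires again, selecting the item.
    col = next(c for c in range(len(grid[row])) if switch_pressed("col", c))
    return grid[row][col]

# Toy demo: a 3x3 letter board and a simulated user who wants "E"
# (row 1, column 1). The fake switch fires at the target positions.
board = [["A", "B", "C"],
         ["D", "E", "F"],
         ["G", "H", "I"]]
target = {"row": 1, "col": 1}
selected = row_column_select(board, lambda kind, index: index == target[kind])
# selected == "E"
```

In an actual scanning BCI, each highlight would last several seconds and the classifier's imagery/non-imagery decision would replace the lambda above.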
The goal was to turn an already effective system into a faster and more practical one. The BCI switch was implemented by training the system to distinguish imagery from non-imagery EEG signals. Typically, items were highlighted for four seconds with a two-second pause between each; this timing was determined based on user preferences. A “maximum likelihood” selection was implemented in the system, in which the letters most likely to appear next would occupy the first few boxes of the grid. The continual rearrangement of the grid proved confusing, and therefore inefficient, for users, so this feature was eventually removed. Continuous feedback was originally used but was found to be too demanding for users, so it was replaced with discrete feedback in which users were only notified when the switch was activated. The graphical user interface (GUI) contained the grid on the left side and the feedback on the right side. CP Patients and BCI Tech It was determined that selecting individual letters to spell words was often too tedious and difficult for CP users, so letters were replaced with imagery representing desired actions. Training and control standards were combined into a single process to make the BCI more user-friendly. One common issue among CP users was interference in EEG signal measurement due to high levels of erratic movement. This interference was minimized by removing the observed electromyographic artifacts from the raw signal data: signals that do not fit within determined criteria are
deemed artifacts and are excluded. Another method implemented to overcome the spastic behavior of some CP users was evidence accumulation, or repeated confirmation of the selections made by the user. Despite all of the adjustments made to accommodate these various issues, in the final test only 6 of 11 CP users were able to select the target item with better-than-chance accuracy (5). These results nevertheless compare favorably with past tests and suggest that BCI research is moving in the right direction. One recent study is a great indicator of just how far BCI research has come with regard to its use with CP. Users with CP were compared to a control group without CP to test whether the CP users could interact with and utilize the BCI system as well as the control group could. Four games were employed to test the users. The first game, Burn, required a user to concentrate on a barrel to cause it to explode; it tests the user's concentration because the task is completed only once the necessary level of attention is reached. The second game, Float, is based on the user's relaxation levels: the user must relax in order to make a ball float as high as possible for as long as possible. The last two games were also concentration games similar to the first, with slightly more complex controls. In all four games, feedback from the CP users indicated that they found the system easy to control, and their results were comparable to the control group's in average game times and heights. The overall success of this study is a great indication of the positive progress and future capabilities of BCI use with CP (6). Conclusion Despite the initial limitations and issues faced in BCI use with CP patients, some of which still persist, the field of BCI research continues to move forward. 
Through ongoing experiments and studies, many of the initial setbacks have been reduced or eliminated, showing positive progress toward BCIs becoming a reliable option for CP patients. The range of applications for BCI devices will expand as research continues. As knowledge of the brain, EEG, and other related fields continues to advance, the effectiveness and efficiency of BCI devices will continue to progress as well.
References 1. What is cerebral palsy? Cerebral Palsy Alliance (2015). 2. The brief history of brain computer interfaces. Brain Vision UK (2014). 3. I. Daly, et al., On the control of brain-computer interfaces by users with cerebral palsy. Clinical Neurophysiology 1787-1797 (2013). 4. J.R. Wolpaw, et al., Brain-computer interfaces for communication and control. Communications of the ACM 54, 60-66 (2011). doi:10.1145/1941487.1941506. 5. R. Scherer, et al., Thought-based row-column scanning communication board for individuals with cerebral palsy. Annals of Physical and Rehabilitation Medicine 58, 14-22 (2015). doi:10.1016/j.rehab.2014.11.005. 6. R.O. Heidrich, et al., A comparative study: use of a brain-computer interface (BCI) device by people with cerebral palsy in interaction with computers. HCI International 2015 - Posters' Extended Abstracts 529, 405-410 (2015). doi:10.1007/978-3-319-21383-5_68.
The Reality of the Grant Writing Genre Rohan Maini ’16
Image Retrieved from http://theupperdecklx.com/wp-content/uploads/2014/08/Watches-_-Jewellery-Fountain-Pens.jpg
Introduction In today's economically driven society, science research relies on funding above all else. Without capital, experiments cannot happen. Consequently, the grant writing genre has emerged as the primary medium for advancing science and innovation. The main premise behind awarding a grant should be to broaden the horizons of what can be learned and developed, but several pieces of evidence contradict this notion. Could the grant system actually impede the advancement of science? To answer this, it is important to analyze the characteristics of a grant proposal and the rhetorical devices employed within it. Understanding the genre reveals the broader dynamics of the science discipline as well as the socioeconomic consequences faced by grant writers. There are several difficulties surrounding the grant system of evaluation, and solving these issues could be a huge leap forward for science research. Common Drawbacks It is vital for a proposal to conform to the grant guidelines, following each direction to the letter. According to reviewers at the National Institute of Allergy and Infectious Diseases, most people repeat the same mistakes when writing grants, such as failing to support the hypothesis or to provide safeguards against potential problems they will face when conducting the experiment (1). Others do not specify what problem the experiment is solving or why it is needed, which should be the primary focus of the grant. Lastly, it is important to be concise and clear, as reviewers do not want to read long, verbose pieces. Steps and the Importance of Rhetoric Analyzing the proposal of Dr. Michael Airola, a Stony Brook biochemistry researcher who works in the cancer center, reveals the importance of diction and syntax in the grant
writing genre (2). His experiment seeks to understand a specific class of lipids called sphingolipids and how their enzymatic regulation contributes to cancer cell progression and survival. The proposal starts with a brief background on the class of lipids and enzymes to be studied in the project. Succinct paragraphs then summarize the aims of the study. Airola uses words such as “define” and “develop” to give more gravitas to his claims, signaling that he will do something unique: define a novel enzyme mechanism (2). The next, and most important, step is the research strategy, which begins with the significance of the project. Dr. Airola states that grant writing made him a better writer because it forced him to focus his ideas on the vital aspects of his work, specifically the direct impact on human health and disease. In one example, he opens the section with the words “colon cancer,” immediately catching the reviewer's eye with this life-altering disease. He goes on to cite statistics showing that this cancer is the third most common in the U.S. and is responsible for 50,000 deaths annually (2). By doing this, he challenges the reader to move beyond the scientific jargon and consider the broader social implications of the project; he implies that if he receives this grant, the field will be one step closer to saving 50,000 lives. The next part of the strategy is the innovation section, which explains what is unique about the experiment and what developments are possible (2). In the very first idea, the writer uses the word “novel” to describe the enzyme and the mechanistic insights the experiment would yield. He is marketing the idea that he is bringing something new to the discipline, that he will be a pioneer in the academic world when it comes to this protein family. 
His next idea in this section creates imagery, telling the reader that understanding this enzyme will help us “visualize” the specificity of the mechanism. This again highlights the importance of diction as the author uses specific words to transform this abstract,
technical process of science into a tangible, physical entity that will carry weight in the academic discipline. The last section is the approach, which breaks down each aim into a rationale, previous findings, and alternative strategies (2). The rationale explains the reason for the approach, while the previous findings summarize information that already exists on the topic. The alternative strategies section is particularly significant, as it provides a backup plan if things fail. As noted above, this is often forgotten by grant writers, and reviewers take solace in its inclusion since it indicates the reliability and forward thinking of the author. The author ends the proposal with a long list of references to lend credibility to his claims and to situate his own experiment among the previous research done in the field. This method of organization is not essential for all proposals, but it is a useful outline for summarizing key ideas. In the end, the author's drive and the experiment's impact should be apparent, as these are what ultimately push the research forward. However, sometimes even this is not enough. Difficulties Faced According to Dr. Airola, the most difficult dilemma the writer faces is finding the time to write the grant proposal while balancing work at the lab bench. Many academics in the discipline believe that people are spending too much time writing grants and not enough time at the bench, which can actually encumber the progression of the experiment. In an Australian survey, a representative sample of scientists was found to have spent, over the span of one year, a combined five centuries writing grant proposals (3). Since only 20% of these grants were approved, these scientists wasted four centuries' worth of time and effort that they could have spent furthering the science of their experiments. 
An abundance of other studies shows the harrowing socioeconomic implications surrounding the genre of grant writing, including a Canadian Natural Sciences and Engineering Research Council study showing that the cost of reviewing and rejecting grant proposals was actually higher than the cost of giving each proposal a baseline grant of $30,000 (4). Grant systems also keep a tight leash on cancer research, funding incremental or “safe” projects while rejecting controversial and potentially transformative experiments (5). For instance, the doctor who invented Herceptin, a groundbreaking breast cancer drug, had his grant rejected by the National Institutes of Health, while the same agency funded, on the first try, an experiment studying whether people who are more responsive to good-tasting food have a harder time staying on a diet. Lastly, even gender bias has been implicated in the grant writing process. Recent studies show that proposals bearing female names are rated lower than those bearing male names, and women regularly receive less grant money than men (6). In fact, men are around eight times more likely to win a scholarly award than women. These staggering statistics showcase the inherent flaws in the grant approval process, and this disparity further demonstrates how the overseeing administration and competition indirectly impede the advancement of scientific pursuit.
Possible Solutions Several solutions could address the problems raised above. One calls for half of the available funding to go to experienced scientists who have received grants before, thereby alleviating the reviewers' qualms (7). The other half would go to randomly selected researchers who have never received grants before. This would eliminate preconceptions and reduce reviewers' reluctance to fund potentially groundbreaking but risky research ideas. Another way to make the grant writing process more time-efficient is to decrease the permitted length of the application. According to one study, the average principal investigator spends 116 hours writing a grant proposal (8). Limiting the length of the proposal would decrease the time invested, bringing science back to the forefront. Another potential solution is simply to improve the quality of the writing. A study at Clemson University tested how inexperienced grant writers responded to a cognitive apprenticeship inside writing classrooms and a social apprenticeship in laboratories, departments, and programs (9). This program promotes a gradual, collaborative learning experience that eases the transition from academia to the workplace. The study revealed that instructors who take an eclectic approach, drawing on their students' other disciplines to teach grant writing, improve the quality of the writing. Working in synergy groups allows for a greater scope of understanding and problem-solving, since a multidisciplinary approach accounts for unforeseen problems. Many of the students involved in the study commented that they learned to write more concisely and to make their proposals understandable to reviewers without background knowledge of the topic. Conclusion In the end, the purpose of grant writing should be to contribute to the science discipline. 
It is vital to break this genre down one section at a time, control all biases, and not allow it to impede science. However, it is apparent that a change needs to be made within the grant system, whether it includes trimming the guidelines, promoting unique experiments, or even random selection. If grant reviewers refuse to play it safe, then great leaps in science research are possible.
References 1. V. Mohan-Ram, Murder most foul: how not to kill a grant application. Science (2000). 2. M.V. Airola, NIH K99 Grant Submission (2014). 3. D. Herbert, Funding: Australia's grant system wastes time. Nature (2013). 4. R. Gordon, Cost of the NSERC science grant peer review system exceeds the cost of giving every qualified researcher a baseline grant. National Center for Biotechnology Information (2009). 5. G. Kolata, Grant system leads cancer researchers to play it safe. New York Times (2009). 6. H. Ding, The use of cognitive and social apprenticeship to teach a disciplinary genre. Sage Publications (2008). 7. H. De Cruz, Grant writing, wasted time, and red queen effects. New APPS Blog (2013). 8. T. von Hippel, C. von Hippel, To apply or not to apply: a survey analysis of grant writing costs and benefits. PLOS ONE 10(3) (2015). 9. H. Ding, The use of cognitive and social apprenticeship to teach a disciplinary genre. Sage Publications (2008).
Language Use on Twitter Hannah Mieczkowski ’17
Image Retrieved from https://www.flickr.com/photos/krayker/4962969492
Introduction The internet began its exponential growth in popularity in the early 1990s, and now in 2015, it is rare to meet someone who does not have at least some knowledge of the World Wide Web. However, for researchers, especially linguists, there is still much to learn about the internet, specifically about the popular social media site Twitter (Figure 1). Numerous studies have looked at the interactions between Facebook and various societal factors, but Twitter has been largely overlooked. In recent years, Twitter has become one of the most popular social platforms as it has over 316 million monthly active users and sees at least 500 million tweets posted each day (1,2). The site’s huge number of users and even larger volume of individual tweets provide researchers with abundant language samples about a vast array of subjects. Users have tweeted about mundane topics from breakfast choices (“Proper breakfast- egg, oatmeal, berries, and acai juice. Nutrition game going strong”) to ideas as exciting as the possible discovery of a new planet (“A new planet was discovered just beyond pluto, it is neptune sized”) (1). Researching language on Twitter is also advantageous because of how frequently the tweets are produced and the temporal and geographic information coded within the tweets (3). Recently, linguists have used this wealth of information to investigate the subtleties of the composition of a tweet as well as to examine how regional dialects and gender presentation are apparent even within the mere 140 characters Twitter users are afforded (3,4,5). These studies provide fascinating and detailed information that may be overlooked by even the most frequent Twitter user. If language is truly a window into
culture, then there is a great deal to be learned about relationships, behaviors, and identities from this website. Tweets are produced constantly, owing to Twitter's international user base, and often reflect the ever-changing topics of interest from week to week. Researchers examine the rate at which hundreds of linguistic elements, such as hashtags or specific words, occur and which elements are correlated with specific variables, such as time or place. Correlations between variables can answer current and future research questions on anything from language usage in small groups to patterns found in speech overall. Furthermore, data with these qualities are useful for both cross-sectional and longitudinal studies because they show linguistic behavior both in the moment and across periods of time, allowing researchers to follow any possible changes. However, researchers have acknowledged a significant bias in the data simply due to the type of people using Twitter, who are not a representative sample of the entire population; people who use Twitter often frequent other social media platforms as well, suggesting they are involved in more online activities than the general public (1,6). Attributing the findings of the following studies to a person without internet access would undoubtedly misrepresent some of that person's characteristics. Yet dismissing the ideas drawn from Twitter-based data collection would constitute a loss of knowledge regarding computer-mediated communication (CMC) and how people convey themselves online (3). To lose this knowledge would mean an increase in ignorance about one of the most prolific and powerful methods of communication in the modern age.
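The core method described here, counting how often a linguistic element occurs and correlating its rate with a variable such as place, can be sketched in a few lines of Python. The corpus and the `term_rate_by_region` helper below are hypothetical toy examples, not data or code from any of the cited studies:

```python
from collections import Counter

# Toy corpus of (tweet text, region) pairs -- illustrative only.
tweets = [
    ("grabbing a hoagie for lunch #philly", "Philadelphia"),
    ("that game was bogus", "Chicago"),
    ("best hoagie spot in town", "Philadelphia"),
    ("bogus weather again", "Chicago"),
    ("coffee first, always", "Chicago"),
]

def term_rate_by_region(tweets, term):
    """Relative frequency of `term` per region (occurrences per tweet)."""
    counts, totals = Counter(), Counter()
    for text, region in tweets:
        totals[region] += 1
        counts[region] += text.lower().split().count(term)
    return {region: counts[region] / totals[region] for region in totals}

print(term_rate_by_region(tweets, "hoagie"))
# {'Philadelphia': 1.0, 'Chicago': 0.0}
```

Real studies scale the same counting idea to millions of tweets and hundreds of elements, then test which rate differences are statistically reliable.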
Tweet Composition Although Twitter users have a relatively small space to express themselves in comparison to other social-networking sites, there are many facets of tweets that can vary from user to user and even tweet to tweet. Certain patterns, however, are visible across studies of popularity on Twitter. For example, tweets that conform to audience, or follower, expectations and are typical of that user tend to gather more retweets, and are therefore more popular, than tweets that include language uncharacteristic of that user or conversation. People tend to accept information more readily if it supports what they already know or are used to, so it is not odd that this phenomenon applies to CMC as well. Moreover, tweets that imitate headlines are usually retweeted more because they are attention-getting and informative (3). Another way to convey information in a tweet is to utilize Twitter's hashtag feature. Current research divides hashtag usage into two categories: tag and commentary. Tag hashtags provide a way for tweeters to organize their tweets and connect them with other users who have used the same hashtag, usually indicating communication about the same topic (e.g., "#Denver"). In contrast, commentary hashtags can be a part of the main content of the tweet, even though they usually appear at its end, and are more likely to convey emotion (e.g., "#loveit"). The rationale behind general hashtag usage is not yet known, but it is hypothesized that hashtags imply the content of the post is uniquely "tweetable," or worth sharing on an online platform such as Twitter, and is in some way differentiated from content in verbal conversations (7). Emoticons, or pictorial expressions of various emotions, also display information, albeit in a different manner. Current research has not sufficiently explored the realm of emojis, but information on basic "smileys" is available.
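As a rough illustration of the tag/commentary distinction, the snippet below classifies hashtags by position alone, treating a tweet-final hashtag as a commentary candidate. This position-only heuristic is an assumption made here for illustration; the cited work (7) relies on human annotation, not position alone:

```python
import re

def extract_hashtags(tweet):
    """Return all hashtags in a tweet."""
    return re.findall(r"#\w+", tweet)

def classify_hashtag(tweet, tag):
    """Crude heuristic: a hashtag that ends the tweet is treated as a
    commentary candidate; earlier hashtags as topical tags."""
    return "commentary" if tweet.rstrip().endswith(tag) else "tag"

tweet = "Snow all week in #Denver #loveit"
for tag in extract_hashtags(tweet):
    print(tag, classify_hashtag(tweet, tag))
# #Denver tag
# #loveit commentary
```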
Studies have shown that whether or not an emoticon has a nose correlates with other linguistic elements, such as misspelled words. In turn, these features are consistent with the use of either "vernacular" or "standard" language. The original emoticons displayed a nose (e.g., :-) ), which became the standard for future smileys. However, many people see standard speakers, and by extension the standard variety of language, as old-fashioned or "uncool." Younger people, in all linguistic contexts, display a higher usage of non-standard language, and this pattern is observed in their increased use of emoticons without a nose. Thus, it is not surprising that nose-less emoticons are associated with other vernacular speech features, such as expletives (8). Non-linguists often assume deviation from the standard is evidence of the downfall of the English language and an increase in the general population's idiocy, but there is no evidence for such a correlation (9).
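The nosed versus nose-less distinction is easy to operationalize with regular expressions. The patterns below are an illustrative sketch covering only a few common smileys, not the full emoticon inventory analyzed in the cited study (8):

```python
import re

# Hedged sketch: nosed smileys like :-) versus nose-less ones like :).
# Pattern coverage is illustrative, not exhaustive.
NOSED = re.compile(r"[:;]-[)(DP]")
NOSELESS = re.compile(r"[:;][)(DP]")

def emoticon_style(text):
    """Count nosed and nose-less emoticons in a piece of text."""
    return {
        "nosed": len(NOSED.findall(text)),
        "noseless": len(NOSELESS.findall(text)),
    }

print(emoticon_style("great show :-) loved it :)"))
# {'nosed': 1, 'noseless': 1}
```

A study would then correlate these per-user counts with other variables, such as misspelling or expletive rates.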
Presentation of Identity Despite the anonymity provided by many social networking sites, Twitter included, people often speak as they would in face-to-face communication. Thus, even when people have the opportunity to present themselves differently than normal, they typically do not unless there is external motivation to do so. People index many associations in order to construct their identity, and in many cases this is done subconsciously, because people rarely notice their linguistic choices (10). For example, the word "ain't" is connected with non-standard English; someone who says "ain't" could be indexing a wide variety of characteristics (education level, ethnicity, etc.) linked with vernacular varieties (10). There is evidence to suggest some decisions, such as tweet content, may be conscious, since Twitter users are known to alter the way they present themselves linguistically in order to cater to their audiences (11). Catering to an audience can increase popularity, as measured by the number of retweets (3). Nevertheless, it is highly unlikely that every Twitter user crafts a tweet with boundless knowledge of what every feature of that tweet will indicate. This becomes apparent when looking at identities associated with gender and location.

Gender As mentioned above, even though there is thought to be a connection between anonymity on the internet and the idea that one can be "liberated" from social constraints through that medium, this is arguably not the case. This is perhaps most notable when indexing gender identities, such as male and female. Concepts surrounding gender are so ingrained in society that people find it difficult to distance themselves from them, even in CMC. Linguistic elements such as hesitation words (umm, uh) and backchannel words (yeah, mmhmm) have long been associated with female speech outside of the internet, and similar findings exist on Twitter. Additional findings include a greater use of expressive lengthening (woooooow) and text-speech (omg, lol) by females. Males, on the other hand, swear and use proper nouns, such as brand names or names of celebrities, 30% more often than women (5). An interlocutor, the person someone is speaking or tweeting to, also shapes the language patterns of the speaker: if a group of Twitter users who engage with each other is predominantly one gender, more so-called gendered language will be used (12). Some researchers and much of the general public believe that a female's language is "expressive" and "more standard" than a male's because of the idea that women are more descriptive and polite (5). More recent findings indicate that this assumption may not be true, since male speech includes more expletives, a form of expressive speech. Additionally, more instances of the aforementioned "lol" and "omg" are found in female speech, which are not acceptable in "standard" English (9).

Geographical Location Indexing one's geographical background to create a certain identity works similarly to indexing gender but relies more on specific words or phrases than on other linguistic features. There is evidence that people on Twitter showcase information about their past or current locations through the language they use. Terms such as "hoagie" (sandwich) and "bogus" (ridiculous) are heard more often in Philadelphia and Chicago, respectively, than anywhere else in the country. They occur with corresponding frequencies online as well as offline, and similar findings suggest this phenomenon occurs in other largely populated US cities as well (11). Once again, it can be observed that even though people have the option to portray themselves any way they would like in CMC, in the majority of cases they choose not to, consciously or subconsciously.

Figure 1 Twitter is one of the most active and readily available social media outlets for information.
Image Retrieved from https://www.flickr.com/photos/stevegarfield/4247757731
Figure 2 Twitter networks can be visualized using dots for specific ideas or topics, and lines for relationships between them.
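Surface features such as the expressive lengthening mentioned above ("woooooow") can be detected with a simple pattern. The three-or-more-repeats threshold below is a common operationalization in social-media linguistics, used here as an assumption rather than the cited studies' exact criterion:

```python
import re

# Sketch: treat any letter repeated three or more times in a row
# as expressive lengthening ("woooooow", "sooo").
LENGTHENING = re.compile(r"([a-z])\1{2,}", re.IGNORECASE)

def has_lengthening(token):
    """True if the token contains a letter repeated 3+ times."""
    return bool(LENGTHENING.search(token))

print([t for t in "woooooow that was sooo fun lol".split() if has_lengthening(t)])
# ['woooooow', 'sooo']
```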
Choosing to use the local dialect can have advantages and disadvantages. If both members of a conversation use the same dialect, it can increase solidarity between them (11). However, it may also index unwanted characteristics and distance the speakers from those not using that local language variety (3). Some linguists predicted that Twitter and other social media platforms would lead to dialectal leveling, in which all dialects become less pronounced or obvious, but in reality these sites intensify dialectal choices, since dialects are used online as well as in face-to-face communication (11).

Conclusion Linguistic research dates as far back as 400 BCE and has covered a vast variety of topics (10). An underlying theme that ties all of these subfields together, however, is change. Language, being an innately social tool, has undergone just as much change as the human race itself, if not more (9). While some may suggest that the growth of emoticons, hashtags, and various "slang" terms on Twitter is directly related to a steep decline of the English language and, relatedly, the decreased intelligence of the people using it, recent research implies the opposite (9). There are still many underdeveloped subject areas in Twitter-based research, such as the function of emojis or the ways in which new words and phrases spread to other cities (8,11). It is the job of linguists to record these changes and to hypothesize what various linguistic features can tell us about language and people.
References 1. M. Duggan, et al., Social media update 2014. Pew Research Center (2014). 2. A. Jungherr, Analyzing Political Communication with Digital Trace Data (Springer International, Switzerland, 2015), pp. 7. 3. C. Tan, L. Lee, B. Pang, The effect of wording on message propagation: topic- and author-controlled natural experiments on Twitter. Association for Computational Linguistics (2014). 4. J. Eisenstein, Handbook of Dialectology (Wiley, New York, 2015), pp. 1. 5. D. Bamman, et al., Gender identity and lexical variation in social media. Journal of Sociolinguistics (2014). 6. E. Aronson, The Social Animal (Worth, New York, ed. 11, 2011), pp. 1. 7. A. Shapp, Variations in the use of Twitter hashtags. New York University (2014). 8. T. Schnoebelen, Do you smile with your nose? Stylistic variation in Twitter emoticons. University of Pennsylvania (2014). 9. J. McWhorter, What Language Is (And What It Isn't and What It Could Be) (Gotham, New York, 2011), pp. 104. 10. J. Holmes, An Introduction to Sociolinguistics (Routledge, New York, ed. 4, 2013), pp. 56. 11. U. Pavalanathan, J. Eisenstein, Audience-modulated variation in online social media. American Speech 90.2, 187-213 (2015). 12. F. Kivran-Swaine, et al., Effects of gender and tie strength on Twitter interactions. First Monday (2013).
Bio-Artificial Organs An Overview of Current Advances Justina Almodovar ’18
Introduction Alternatives to organ donation are becoming a necessity as the demand for organ transplants continues to increase while the supply remains unable to match it. As of August 2015, 122,572 people are in need of a lifesaving organ transplant, but only 20,704 transplants have been performed (1). In addition, organ transplants can unfortunately be followed by many post-surgery complications. Donated organs risk rejection, and even failure, at the hands of the patient's immune system, while post-operative infections can arise as a consequence of immunosuppressive therapy. Immunosuppressive therapy, medication intended to inhibit the activity of the patient's immune system, is administered to promote organ acceptance but can leave the body susceptible to other damaging complications (2). Even before a patient faces these difficulties, they must be able to afford the surgeries and future therapies necessary to keep their organ functioning properly. One possible solution is the use of artificial organs, but this is not the best option. These organs are man-made prosthetics designed to replace or assist failing organs. They are made of biocompatible materials and function as the organ by imitating its size, hemodynamics, and regulatory systems. Although these prosthetics allow the body to function normally, they are unable to completely mimic the original organ and sustain themselves for extended periods of time (3). They require a relatively large power supply, replaceable biochemical filters, and chemical processors in order to continue functioning. Dialysis machines, which serve as replacements for host kidney function, require three dialysis appointments per week to clear out waste and are associated with a variety of disadvantages, including vascular damage, severe dietary restrictions, and increased health care expenses (4,5).
In order to minimize life-threatening complications, artificial organ transplants require life-long therapy for continuous and proper function. The emerging field of tissue engineering has given rise to a promising future alternative: bio-artificial organs. In the absence of necessary organs or tissue for patients, bio-artificial replacements can be replicated externally and transplanted into the body. These organs would be made from autologous stem cells and grown in human-like internal environments, thus reducing the chances of immune rejection of the graft. Bio-artificial organs are intended to be fully functional and self-sustaining. Stem cells are embedded inside synthetic or natural scaffolds to attain the specific dimensions of the needed organ. Ideally, the replicated organ will successfully interact with its extracellular environment and function without external assistance. Such organs would be able to perform the necessary biochemical functions to successfully replace damaged living tissue or organs (6). By combining engineering and stem cell research, bio-artificial organs are becoming an increasingly possible alternative to relieve the high demand for organ donations, replace high-maintenance prosthetics, and increase organ transplant success.

Stem Cells In order to generate fully functional organs, researchers must be able to develop fully specialized cells in a spatial arrangement that effectively mimics the target organ's physiological functions. Stem cells, because of their unique ability to specialize into differentiated cells, present an ideal medium for bio-organ synthesis and internal organ repair. Through the process of regulated division, these cells can replace and repair damaged tissues (5). Stem cells that have been used in such cases include embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and mesenchymal progenitor cells (MSCs) (7). Embryonic stem cells have unlimited propagation and differentiation potential. These cells are derived from the embryonic blastocysts of in-vitro fertilized cells and can differentiate into almost any cell type in the body. A blastocyst is an early stage of embryonic development containing an undifferentiated inner cell mass; this inner cell mass is removed from the blastocyst to create an ESC (8).
ESCs can be differentiated to exhibit complete characteristics of mature specialized cells by producing and secreting characteristic proteins and wastes upon being grafted onto a damaged organ. In one study, ESCs were programmed to differentiate into human hepatic cells, which are specialized for liver function. When transplanted into patients with chronic liver failure, researchers found a short-term improvement and alleviated symptoms in these patients (9).
Figure 1 Artificial organs can replace crucial contributors to the digestive system (pictured above), endocrine system, renal system, etc.
On the downside, stem cells are controversial to many due to religious beliefs. As a matter of religious faith, it is believed that human life begins at conception, and removing this inner cell mass equates to murder (11). Although other stem cells, such as iPSCs and MSCs, prove more difficult to obtain, they are derived from somatic and stromal cells rather than embryonic cells and can serve as possible alternatives because of their less controversial origins.

Induced pluripotent stem cells (iPSCs) are generated from adult somatic cells via reprogramming. Like ESCs, iPSCs have unlimited propagation and differentiation potential, but they are more difficult to acquire. iPSCs are obtained by reprogramming autologous somatic cells into an embryonic stem cell-like state, but reprogramming efficiency is relatively low: viruses containing reprogramming factors are introduced into adult somatic cells, yet findings show that only 5 to 10% of cells will become iPSCs (10). Currently, new methods utilizing different delivery systems for transcription factor genes have improved the efficiency of obtaining reprogrammed iPSCs (7). Beyond these difficulties, iPSCs are comparable to ESCs in their pluripotency and differentiation potential and are capable of differentiating into the necessary cells. With more successful methods of reprogramming and differentiation, iPSCs can become a more feasible source of stem cells. In one study, hepatocytes (liver cells) derived from iPSCs showed regenerative capability, successfully restoring the liver after a hepatectomy removing two-thirds of the mouse liver (7).

Mesenchymal progenitor cells (MSCs) are stem cells derived from connective tissue cells, also known as stromal cells. MSCs can differentiate into specific mesenchymal tissues, such as bone, tendon, and muscle. Unlike iPSCs and ESCs, MSCs are multipotent and limited in their differentiation capabilities. These cells can undergo replication in both in vitro and in vivo environments, which is ideal for growing and transplanting organs and tissue. Researchers differentiated MSCs into specialized liver cells called hepatic cells and transplanted them into the spleens of rats with liver cirrhosis. After implantation, the rats showed decreased fibrosis (liver scarring) and increased expression of cytokeratin, a family of intermediate filament proteins (7).

Image Retrieved from http://assets2.akhbarak.net/photos/articles-photos/2015/1/30/17591904/17591904-v2_xlarge.jpg?1422573102
Figure 2 A tissue engineered scaffold designed to hold a patient's cells and replace a damaged heart valve.
Scaffolding Stem cells must be organized and contained for proper growth and differentiation before being transplanted into a body, which can be achieved with scaffolds. Scaffolds are three-dimensional structures that specialized cells attach to and that promote tissue development. They are typically made of porous biomaterials that mimic the in vivo conditions of natural microenvironments. Through computer-aided design, three-dimensional scaffolds are created by a layer-by-layer additive bio-fabrication process, otherwise known as 3D bio-printing (7,9). With current techniques, 3D printers follow a biomimicry approach: manufacturing a scaffold that identically replicates the cellular and extracellular components of the damaged tissue or organ (12). These scaffolds are used to support differentiated cells. Scaffold materials are tested for compatibility with specialized cells and must promote growth and stability. Collagen fibers, for example, are used as structural supports that are biocompatible with stem cells, supporting cell growth, differentiation, and protein secretion. Bioactive molecules on the scaffold, such as hormones and growth factors, can promote cell adhesion, differentiation, and growth by regulating the synthesis of biomaterials. Biomaterials, proteins, and peptides provide chemical and mechanical cues that aid the secretion of specialized proteins from differentiated stem cells (7,11). Overall, the scaffold supports an in vivo-like environment containing the necessary components for differentiated cells to grow, communicate, and cooperate as one living mass. With the scaffold stabilized physically, chemically, and mechanically, the culture of stem cells can be seeded onto it. One difficulty in combining the scaffold material and the stem cells is physical change in the scaffold: the process of cells degrading and reproducing can modify its shape.
For example, in cardiac tissue, researchers found that only a small portion of an artificially made patch was occupied by cardiomyocytes (cardiac muscle cells), while the rest of the scaffold was filled with collagen fibers and support architecture without specialized tissue (7). Other techniques, such as cell self-assembly, have proved successful in recreating organ tissue patches. Researchers at Tokyo Women's Medical University, for instance, produced self-assembled cell sheets from polymer substrates, which were then successfully transplanted into adult rats. These cell sheet-based patches, by mimicking characteristics of native heart tissue, improved damaged heart function in the rats (9). Successful scaffolding has the potential to create life-saving regenerative medicine for patients. However, scaffolding a complete bio-artificial organ faces many difficulties and requires many specialized cells to address blood flow, immune responses, size, and the specific protein production found in the native organ. Scaffolds created for these cells normally do not reconstruct the vascular supply found in the body. Furthermore, cells transplanted onto scaffolds reproduce and degrade, which can change the shape of the scaffold. Other difficulties include keeping specialized cells alive, since some studies have found only small portions of the grafted patch to be occupied by surviving cells (13). There have been instances of successful specialized cell grafts onto damaged organs in rodents. These studies also address the difficulties of recreating and maintaining the different cells as a self-functioning organized mass rather than a section of the organ. Complete bio-artificial organs require removal from scaffolding and immediate blood flow from native blood vessels, at which point the body must interact with the artificial cells to keep the replacement organ alive (14).

Current Research / Future Perspectives As of right now, no fully functional human organ has been grown; rather, there has been success at a smaller scale. Currently, heart, liver, lung, and kidney tissues have been specialized and transplanted into rodents, with success in reducing symptoms of failing organs. For instance, engineered kidney tissue was transplanted into rodents and successfully produced urine. Such a transplant could alleviate symptoms of failing kidneys in patients. Similarly, heart tissues, which have been well explored in tissue engineering, have been successfully reconstructed using rat heart cells under a circular mold and transplanted into a damaged rat heart, which improved in function after implantation (7). Reconstructed organ models have so far been built only at a small scale, but ever-expanding discoveries on overcoming the hurdles of bio-artificial organs are making a path toward successful bio-artificial organ transplants.
The ability to harvest autologous cells and manipulate them into usable stem cells is desired for bio-artificial processes (9). So far, grafts of artificial cells transplanted into mice have succeeded in fully functioning over the injured part of the organ (6). The end goal is to recreate a self-sustaining human organ that could be used as a replacement. Further plans include self-repair and regeneration systems in these artificial cells, which would allow the implanted tissue or organ to maintain the normal physiological characteristics of the native organ. Current evidence shows the promise of artificial cells, whether by enabling tissue to heal itself in place or by manufacturing replacements ex vivo and placing them inside the body (15).
References 1. Organ Procurement and Transplantation Network. U.S. Department of Health & Human Services (2015). 2. J. F. Fishman, R. Rubin, Infection in organ transplant recipients. The New England Journal of Medicine 338, 1741-1751 (1998). doi: 10.1056/NEJM199806113382407. 3. W. H. Fissell, et al., Achieving more frequent and longer dialysis for the majority: wearable dialysis and implantable artificial kidney devices. Kidney International 84, 256-264 (2012). doi: 10.1038/ki.2012.466. 4. M. Yoshio, et al., Fifteen-year experience with the Bicarbon heart valve prosthesis in a single center. Journal of Cardiothoracic Surgery (2015). doi: 10.1186/s13019-015-0294-x. 5. Stem Cell Basics: Introduction. National Institutes of Health, Stem Cell Information (2015). 6. X. Ren, H. C. Ott, On the road to bioartificial organs. Pflugers Archiv: European Journal of Physiology 10, 1847-57 (2014). doi: 10.1007/s00424-014-1504-4. 7. J. S. Lee, S. Cho, Liver tissue engineering: recent advances in the development of a bio-artificial liver. Biotechnology and Bioprocess Engineering 17, 427-438 (2012). doi: 10.1007/s12257-012-0047-9. 8. B. Lo, L. Parham, Ethical issues in stem cell research. Endocrine Reviews 30, 204-213 (2009). doi: 10.1210/er.2008-0031. 9. T. Shimizu, et al., Cell sheet based myocardial tissue engineering: new hope for damaged heart rescue. Curr Pharm Des 15, 2807-14 (2009). 10. J. H. Hanna, et al., Pluripotency and cellular reprogramming: facts, hypotheses, unresolved issues. Cell 143, 508-525 (2010). doi: 10.1016/j.cell.2010.10.008. 11. C. Buntain, Ethics of Artificial Organ & Tissue Engineering. University of Pittsburgh (2013). 12. S. Murphy, A. Atala, 3D bioprinting of tissues and organs. Nature Biotechnology 32, 773-785 (2013). doi: 10.1038/nbt.2958. 13. B. P. Chan, K. W. Leong, Scaffolding in tissue engineering: general approaches and tissue-specific considerations. European Spine Journal 17, 467-479 (2008). doi: 10.1007/s00586-008-0745-3. 14. M. Salvatori, et al., Regeneration and bioengineering of the kidney: current status and future challenges. Current Urology Reports 15 (2014). doi: 10.1007/s11934-013-0379. 15. G. Orlando, et al., Regenerative medicine as applied to solid organ transplantation: current status and future challenges. Transplant International: Official Journal of the European Society for Organ Transplantation 24, 223-232 (2011). doi: 10.1111/j.1432-2277.2010.01182.x.
Figure 3 A colony of stem cells that have yet to be differentiated.
Image Retrieved from http://www.catholicnewsagency.com/images/Embryonic_Stem_Cells_Credit_Nissim_Benvenisty_CC_by_25_EWTN_US_Catholic_News_5_16_13.jpg
Suicide Genes A Viral Approach to Cancer Therapeutics Michelle Goodman ’18 Image Retrieved from https://upload.wikimedia.org/wikipedia/commons/6/62/Ebola_Virus_(2).jpg
Unlike with many other diseases, modern medicine has not been able to fabricate a curative treatment that selectively eradicates the cancerous cells plaguing countless patients. Without this type of solution, patients are subjected to poisoning their bodies with sessions of chemotherapy, killing not only their malignant cells but also healthy tissue in the process. Recent advances in the treatment of cancer, however, have unveiled the possibility of fighting cancer on a genetic level, in which cancer cells are selectively targeted and terminated, leaving the surrounding tissue intact and unharmed. These treatments, capable of killing only the malignant cells, have physicians and scientists anticipating a new era of cancer suppression. This technique, now known as gene therapy, has the potential to revolutionize the way in which we treat cancer.

The Initial Spread of Cancer Cancer has developed an incurable reputation and is recognized as one of the biggest threats to our health. With just one aberrant cell, all of the body's processes can eventually disintegrate. This starts when a cell undergoes abnormal cell division; the malignant cell subsequently begins multiplying at dangerous rates, initiating the formation of a tumor. Researchers currently understand that the main threat of cancer is not its onset but rather its spread, known as metastasis. Attacking or excising just one affected area of the body generally procures a good outcome; unfortunately, that is often unachievable once the tumor cells metastasize. The inability to isolate cancer cells, as well as the cells' failure to undergo apoptosis (programmed cell death), are the driving factors that increase a tumor's harmful impact on the body. The two most common ways by which these malignant cells spread are growing directly onto the surrounding organs and metastasizing via the bloodstream and lymph nodes.
This ease of movement of cancerous cells instigates the peril that countless patients experience as the tumor overtakes their bodies while evading treatments and growing resistant to even the most powerful drugs (1).

Manipulating Gene Therapy Tumors can grow and metastasize at different rates, depending on the conditions of their surrounding environment. Certain aspects of tumor tissue, such as mutated DNA and abnormal oxygen requirements, differentiate it from normal tissue, enabling it to be manipulated on a molecular level. Gene therapy achieves this by effectively altering the DNA of cancer cells in order to induce apoptosis, which ultimately leads to the cessation of metastasis (2). The tumor, however, is able to resist this in more ways than one. In order to prevent hypoxia, the condition in which the tumor is unable to acquire oxygen, blood vessels develop around the tumor to support its growth and provide a mechanism of vascularization, supplying nutrients to the receptors of the tumor cells (1). One form of gene therapy, known as gene transfer, combats this by introducing new genes into the cancerous cells' DNA sequence, eventually triggering cell death and simultaneously slowing the growth of the tumor. The majority of these targeting genes are known as suicide genes. A number of different viral and gene vectors have been used in clinical trials to deliver these new genes to the desired receptors on the tumors. In recent studies, cancer therapies have combined gene transfer with immunotherapy treatments to selectively engineer lymphocytes, along with other immune cells, to target cancerous masses. This technique, along with other forms of gene transfer therapy, has produced distinct tumor responses. Scientists working with these techniques are now focusing on ways to improve gene transfer by simplifying the protocols by which it operates, as well as creating controllable regulation of gene expression (3). After years of studying tumors and the way in which they attack the body, scientists have recently found a way to use the body's immune system more effectively against a growing tumor. In malignant tumors, some of the aberrant cells act to suppress the immune system so that the tumor is able to thrive in that particular environment.
Because of this, the immune system is unable to recognize and attack the malignancies, resulting in an overpowering tumor and an even weaker immune system (4). With this in mind, researchers have developed techniques to modify a patient’s immune response, allowing it to recognize and attack malignant tumors much more successfully than it would under otherwise normal circumstances. Using monoclonal antibodies, which essentially interfere with the ability of cancerous cells to bypass the immune response, these immunotherapy treatments have shown great success in various cases from severe melanoma to smoke-induced lung cancer (4,5). This method, known as a
checkpoint blockade, allows the immune system to recognize cancerous tumors that it generally does not and begin the process of attacking them. Scientists involved in these clinical trials are careful to ensure that the immunotherapy treatments attack only the malignant masses; turned against the body as a whole, the effects would be as detrimental as radiation or chemotherapy, if not worse (4). Taking it one step further, numerous investigations have demonstrated that a particular chemotherapy drug, known as gemcitabine, induces apoptosis in cancerous cells whose principal function is to suppress immune function. By using a small dose of chemotherapy to attack just those cells in tandem with immunotherapy treatments, scientists were able to eradicate the entire tumorous mass successfully (4,6). Oncolytic Virotherapy The use of a viral vector to target cancerous cells is a growing field in gene therapy treatment. Oncolytic gene therapy uses genetically engineered viruses capable of targeting and destroying aberrant cells while leaving the surrounding tissue intact (2). By infecting the cancer cells and inducing apoptosis, these viral vectors are able to manipulate gene expression of the growing cancerous cells. The oncolytic vectors operate through replication of the virus once it enters the cell and subsequent activation of cytotoxic proteins, which serve as the body's defense system for terminating host and foreign cells. These viruses are additionally capable of entering the cells and catalytically inactivating protein synthesis, ultimately acting as apoptosis-inducing agents (5). This treatment, known as "oncolytic virotherapy," selectively infects cancerous tissue by exploiting specific cellular tropisms, the preferences a virus shows for particular host cells, to determine which cells contain malignancies. 
Oncolytic virotherapy targets these unique tropisms and uses them as a driving force behind its mechanism. Tumors have evolved to the point where they are able to resist apoptosis and successfully avoid immune destruction. The viral vectors, while innocuous to the rest of the body, have a preferential tropism for tumor cells because of the optimal microenvironment those cells provide for the survival of these viruses (5). Using this virus-mediated cytotoxicity, the viral vectors are capable of overtaking the tumor cells and ultimately exploiting the protein synthesis of those particular cells (6). Clinical trials involving oncolytic virotherapy have shown mixed outcomes. Most human cancers carry a specific mutation, or hemizygous deletion, inactivating the tumor suppressor gene TP53. When intact, this gene drives the p53 pathway, which controls cell growth as well as a variety of other cellular processes. Because tumor cells lack this functional tumor suppressor, oncolytic treatments in these cases are oftentimes unsuccessful. In addition, owing to the immune response of the body to the presence of foreign viral substances, antibodies can clear the viral agent before it has time to infect the tumor cells. Presently, scientists are modifying these viral vectors to enhance their efficiency and prevent the body from eliminating the viruses prior to their attack on the tumor (7).
The Reolysin Study Dr. David Cohn, director of gynecologic cancer research at the Ohio State University Medical Center, has led numerous successful clinical trials of oncolytic virotherapy for patients with severe ovarian cancer (8). The virotherapy treatment, Reolysin, exploits a unique genetic mutation inside cancer cells, leaving all surrounding cells unharmed. Most cancer cells possess a mutated Ras pathway, which is involved in regulating cell growth and transformation. Reolysin therapy targets the mutations in the Ras pathway, leaving the cancerous cells in a form of stasis that allows rapid viral multiplication and subsequent cell death (9). These cancer cells allow the virus to penetrate and begin replicating exponentially. This replication process continues until the tumor cells are overwhelmed and burst, releasing the replicated virus particles and, in turn, infecting more surrounding tumor cells (3). Patients undergoing the Reolysin treatment are given the virus intravenously over the course of five days and, barring any severe side effects, the treatment is repeated every 28 days (8). Dr. Cohn, along with other doctors, has collectively treated over 270 patients with Reolysin in various clinical trials. Patients whose tumors showed elevated resistance to other therapies have responded well to Reolysin and displayed few side effects. The appeal of virotherapeutic genetic manipulation is its unique targeting mechanism, which kills only cancer cells. Other cells are left intact, so patients report increased appetite and decreased hair loss, unlike the side effects commonly associated with chemotherapy (3). With its success in treating ovarian cancer, Reolysin, as well as other forms of oncolytic virotherapy, is opening the door to an entirely new wave of cancer treatment. 
While many cancer-fighting therapies are used against countless types of cancer, gene therapy promises a better quality of life. Most therapies, in their attempt to kill cancerous tissue, cause adverse side effects, and patients are oftentimes obligated to change the pace of their lifestyle to accommodate those effects during treatment. Using gene transfer and viral technology, scientists have taken a simple idea and applied it so that thousands are able to get the treatment they need, all the while continuing on with their lives without the extreme side effects. It may not be the ultimate solution, but for now, it is the best one we have.
References 1. H. Peinado, et al., Cancer Metastasis: Biologic Basis and Therapeutics (Cambridge University Press, MA, 2011), pp. 191-198. 2. Gene and virus therapy program. Mayo Clinic (2016). 3. N. S. Templeton, Ed., Gene and Cell Therapy: Therapeutic Mechanisms and Strategies (CRC Press, 2015). 4. What is cancer immunotherapy? Cancer Research Institute (2016). 5. C. Gorman, Cancer immunotherapy: the cutting edge gets sharper. Scientific American (2015). 6. P. Groscurth, Cytotoxic effector cells of the immune system. Anatomy and Embryology, 109-119 (1989). 7. S. J. Russell, et al., Oncolytic virotherapy. Nature Biotechnology 30, 658-670 (2012). doi:10.1038/nbt.2287. 8. S. Turnbull, et al., Evidence for oncolytic virotherapy: where have we got to and where are we going? Viruses 7, 6291-6312 (2015). doi:10.3390/v7122938. 9. NCI Staff, Collateral damage: missing tumor suppressor gene creates opening for cancer treatment. National Cancer Institute (2015).
The Future of Single-Particle Cryo-Electron Microscopy
Image Retrieved from http://www.gatan.com/sites/default/files/First_3D_2.8A_1.jpg
One Seo ’17
Image Retrieved From http://www.sachse.embl.de/page3/files/stacks_image_667.jpg
Figure 2 On the left, a reconstruction of the 20S proteasome at 2.8Å resolution. Structures of water molecules are present in the figure on the right.
Introduction Determining the three-dimensional structure of a protein or protein complex is critical for understanding its numerous biological functions. For decades, nuclear magnetic resonance (NMR) spectroscopy and X-ray crystallography served as the undisputed approaches for resolving 3D structures of macromolecules (1). However, both methods have limitations. Besides requiring large sample amounts and isotopic enrichment (a process in which the relative abundance of the isotopes of a certain element is altered, yielding an element particularly enriched in one isotope and depleted in another), NMR is limited by the size of the molecules it can handle (2). Limitations of X-ray crystallography include the need for high-quality protein crystals as well as large sample amounts (3). Cryo-electron microscopy (cryo-EM) has been used to image large protein complexes, integral membrane proteins, polymers, and macromolecules that have resisted crystallization, though at far lower resolution than traditional crystallography (4). Nevertheless, recent technological advances in sample preparation, computation, instrumentation, and software have enabled cryo-EM to visualize macromolecules at near-atomic resolution (<4Å), making it one of the most promising structural biology techniques so far (5). In June 2015, Sriram Subramaniam and his colleagues at the U.S. National Cancer Institute used cryo-EM to visualize β-galactosidase, a protein structure that was defined decades ago by X-ray crystallography. They resolved the structure at an average spatial resolution of 2.2Å, which allowed them to observe not just α-helices and amino acid side chains but individual water molecules and ions, a level of detail once considered possible only with X-ray crystallography. This is only one of various triumphs produced by this upstart technique, and it has stimulated scientists to search for even smaller protein targets (6). 
Despite the rapid advances observed with cryo-EM, X-ray crystallography still dominates the field of structural biology, and there are still areas of cryo-EM that need to be improved to avoid controversies. Protein Purification To begin a cryo-EM experiment, a protein sample must first be purified and its homogeneity verified under negative-stain EM. This not only provides high contrast but also
Figure 1 3.3Å Resolution cross section of the tobacco mosaic virus under single particle cryo-electron microscopy
causes the protein particles to be adsorbed onto the carbon film in particular orientations. Negative-stain EM also reveals the presence of contaminants or aggregates, the general structural state of the target protein or complex, and potential structural, compositional, and conformational variability. Merging thousands of images of protein particles is an essential step before analyzing the 3D structure of the protein or complex. Merging images from identical or highly homogeneous particles is the key, so potential sources of heterogeneity must be reduced. Compositional and conformational heterogeneity are the two general kinds that can arise from the purification stage. Compositional heterogeneity typically arises from the presence of sub-stoichiometric components or dissociation of protein subunits, while conformational heterogeneity typically arises from loosely tethered domains of the protein. Various biochemical approaches are used to cope with these heterogeneities. For compositional heterogeneity, a Thermofluor-based screening approach is typically used to identify a specific buffer solution that will stabilize the loosely associated subunits, while sub-stoichiometric components can be targeted for affinity purification. Substrates, inhibitors, ligands, co-factors, or any other molecules that directly affect the protein or complex are used to reduce or revert the heterogeneity. Whichever approach is used, the introduction of artifacts must be minimized at all costs (5). Specimen Preparation There is a general procedure of sample preparation to be followed. The protein sample must be prepared in a way that will survive the dehydration caused by the vacuum of the electron microscope and the radiation damage from the scattering of the emitted electrons. The specimen as a whole consists of the purified sample on the carbon film (holey for cryo-EM) with a support structure commonly made of copper. 
A common problem with such EM grids was the instability and low conductance of the carbon film at low temperatures, but this has been resolved by recent advances in grid design that enhance the grid's conductivity using doped silicon carbide as the substrate and make the grid hydrophilic with a glow discharger or plasma cleaner (5). When a sample solution is applied to a grid, the protein particles are distributed in many orientations and orders (4). The grid is then loaded onto the cryo-specimen
holder of the machine, and >99.9% of the solution is blotted away from both sides by blotting papers inside the apparatus as they close in on the grid, leaving a thin molecular layer of sample solution (7). The grid is then plunged by a semi-automated plunger, such as a Vitrobot or Cryo-plunge, into a cryogen (usually liquid ethane near the liquid nitrogen temperature of -196°C), which causes the particles to become frozen and trapped inside a film of vitreous ice (5). Image Acquisition Since cryo-EM targets the sample at near-atomic resolution, only images with high contrast and sufficient resolution are considered high quality (5). After appropriate imaging conditions of the electron microscope have been set, a beam of electrons is directed from the microscope onto the frozen protein particles. As the particles absorb incident electrons, 2D images are generated on the viewing screen (4). Generally, increasing the electron dose increases the image contrast, but in order to reduce radiation damage, the total electron dose in cryo-EM should typically be limited to ~20 e-/Å² (5). Thus, the 2D projection of the micrograph generated as electrons penetrate the randomly oriented protein particles is very noisy, with a very low signal-to-noise ratio. To overcome this, a large number of particles in the grid are merged or “averaged” to form a 3D map, which is further refined into a more accurate 3D model using software tools that fit various protein sequences onto the map (4). Image Processing The processing of the recorded images is considered the most significant part of a cryo-EM project. The main steps include correction of the microscope contrast transfer function (CTF), selection of the appropriate protein particles, preparation of the image stacks of those particles, and generation of an initial 3D structure. 
Further steps include refinement of the initial 3D structure into a final 3D density map and interpretation of that map, while treating problems of structural heterogeneity along each step. Accurately estimating the CTF parameters, such as acceleration voltage and spherical aberration, is essential for the initial evaluation of the micrographs and the subsequent 3D structures. Once the quality of the micrograph has been tested and deemed suitable for further analysis, the particle-picking process comes next. This process can be done in a manual, semi-automated, or fully automated manner to select which particles to analyze and to maintain their quality by cleaning them without producing artifacts. After the particles have been selected and cleaned, they are merged, often following the popular "K-means clustering" algorithm, to generate an initial 3D structure. The initial 3D structure is then evaluated by various 3D structure determination methods, such as the random conical tilt approach, the computational central section theorem approach, and stochastic hill climbing algorithms. Later, further refinement of the 3D structure is pursued, with progress usually monitored by the Fourier shell correlation (FSC) curve, a mathematical curve that gives the signal-to-noise ratio as a function of spatial frequency and the resolution of the initial 3D map. The refined 3D map is then interpreted based on evaluations from three resolution regimes, the low-resolution map (>10Å), the intermediate-resolution map (4-10Å), and the high-resolution map (<4Å), which reveal distinct information about the protein structure and confirm the data obtained from the FSC curve. Along these steps, structural heterogeneity of the sample is handled by using 3D multi-reference alignment, which extracts more homogeneous subsets (5). Areas of Improvement Although many exciting developments have been made in recent years, cryo-EM still faces challenges. For example, during the brief moment between blotting and freezing in the sample preparation stage, Brownian motion causes the protein particles to collide with the air-water interface of the sample, making some particles stick to the interface in a particular orientation (losing random orientation) and completely denaturing others (losing even more sample after blotting). Another challenge is assembling a structurally homogeneous sample, which is absolutely critical because the structures of the individual particles must be merged for 3D reconstruction. Structural heterogeneity, which can be introduced during sample preparation by either the blotting or the freezing step, can be tackled by several methods (e.g., glutaraldehyde cross-linking for stability, high-throughput screening to identify optimal buffer conditions for the sample, affinity binding to make better cryo-grids) that still need to be meticulously investigated (7). Several other goals must also be achieved: making the tool more affordable (currently it costs $5 million) with quick installation for the institutions and industries demanding it, properly training employees to become cryo-EM practitioners, and reaching a consensus on what raw data to use for referral as well as for storage (6,7). Conclusion Since its birth over two decades ago, cryo-EM is now receiving more attention than ever. Certain viruses, ribosomes, and integral membrane proteins that had rarely been pursued by X-ray crystallography are now being analyzed under cryo-EM. This yields much better 3D structures for scientists to further explore the unknown biological functions of their targets (3). With new opportunities, the field of cryo-EM has a bright future as it maintains its tenacity in the scientific world.
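The Fourier shell correlation used to monitor refinement can be sketched in a few lines. This is a simplified, illustrative version (cubic volumes, naive equal-width shell binning; the function name is ours, not from any cryo-EM package):

```python
import numpy as np

def fourier_shell_correlation(vol1, vol2, n_shells=20):
    """Correlate two 3D maps shell by shell in Fourier space.

    Returns one correlation value per spherical shell of spatial
    frequency; ~1 where the maps agree, ~0 where only noise remains.
    """
    F1 = np.fft.fftshift(np.fft.fftn(vol1))
    F2 = np.fft.fftshift(np.fft.fftn(vol2))
    # distance of every voxel from the Fourier-space origin
    grid = np.indices(vol1.shape)
    center = np.array([s // 2 for s in vol1.shape]).reshape(-1, 1, 1, 1)
    r = np.sqrt(((grid - center) ** 2).sum(axis=0))
    edges = np.linspace(0, vol1.shape[0] // 2, n_shells + 1)
    fsc = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        shell = (r >= lo) & (r < hi)
        a, b = F1[shell], F2[shell]
        num = np.abs((a * np.conj(b)).sum())
        den = np.sqrt((np.abs(a) ** 2).sum() * (np.abs(b) ** 2).sum())
        fsc.append(num / den if den > 0 else 0.0)
    return np.array(fsc)
```

In practice the two inputs are independent "half-maps" reconstructed from split halves of the particle set, and the resolution is read off at a fixed correlation threshold.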
References 1. Method of the Year 2015. Nature Methods (2016). 2. W. E. Holben, P. H. Ostrom, Monitoring bacterial transport by stable isotope enrichment of cells. Applied and Environmental Microbiology 66, 4935-4939 (2000). 3. E. Nogales, The development of cryo-EM into a mainstream structural biology technique. Nature Methods 13, 24-27 (2016). 4. A. Doerr, Single-particle cryo-electron microscopy. Nature Methods 13, 23 (2016). 5. Y. Cheng, et al., A primer to single-particle cryo-electron microscopy. Cell 161, 450-457 (2015). 6. M. Eisenstein, The field that came in from the cold. Nature Methods 13, 19-22 (2016). 7. R. Glaeser, How good can cryo-EM become? Nature Methods 13, 28-32 (2016).
Inverse Problems in Image Processing Shipra Arjun ’16 Introduction Health care in the United States has strongly incorporated medical imaging into remedial practices. Progress in medical imaging has allowed for more accurate and noninvasive diagnostics. Imaging modalities such as Positron Emission Tomography (PET), Computed Tomography (CT), Magnetic Resonance Imaging (MRI) (Figure 1), ultrasound, and X-rays help observe heart and vascular diseases, as well as brain conditions and cancers. For conditions that require surgery, medical imaging allows multiple images to be combined to print a three-dimensional physical model of the desired organ. This three-dimensional model enables surgeons to evaluate the most viable operation route or perform risk/benefit analyses to prevent potential complications during a surgery. A range of health care professionals, from technology specialists to oncologists, utilizes imaging in some capacity to diagnose or treat patients. In an industry as versatile as medical imaging, efforts to increase accuracy and limit error are rigorously researched by medical physicists, computer scientists, and biomedical engineers. This research further bifurcates into segmentation and registration. Image segmentation algorithms isolate and enhance pertinent objects. In medicine, these objects are frequently tumors, cysts, and other anomalies seen in scans through various modalities. Problems arise in segmentation when artifacts, noise, complex features, and textures obscure or misrepresent images in specific locales. For example, artifacts or heavy background noise can obstruct a clear view of a tumor. These problems increase the likelihood of an inaccurate diagnosis. Besides image segmentation algorithms, registration algorithms serve to automatically align data points spatially from one coordinate system to another, maintaining the integrity of corresponding points. 
Some screening modalities, such as PET and CT scans, produce pictures in slices and require construction of a three-dimensional model, demanding the use of registration. The images, however, are produced several times through different mediums, because they are taken at variable times and often through a combination of different modalities. Segmentation An algorithm that is commonly used to segment, or partition, an image while preserving essential features is “K-means
clustering”. For an image with N pixels, the algorithm splits the image into K clusters, where the number of clusters K is determined through user input (2, 3). Each cluster is initially assigned a random set of pixels so that each of the K clusters holds an equal number of pixels. The average value of the property parameters, such as hue, saturation, and pixel intensity, is then used to calculate the mean, uK, of each cluster (4). This is represented as uK = (1/NK) Σ xi, where the sum runs over the property vectors xi of the NK pixels assigned to cluster K.
The pixels of each cluster are then reassigned based on their property parameters and the uK of each cluster. This is done first by calculating the distance between the property vector of a given pixel and its originally assigned cluster center, CK. The distance between each pixel and the CK of every other cluster is also calculated, and each pixel is then relocated to the closest CK. This recalculation continues until all distances are minimized and no pixels need to be reassigned (3). While K-means clustering is frequently used to segment morphologic features in PET scans, not all images can rely on this simple method (2). The Chan-Vese model is an extremely powerful algorithm that uses active contouring and pixel intensities, instead of thresholding or gradients, to detect edges in an image (Figure 2). In the Chan-Vese algorithm, an energy functional F(c1, c2, C), defined over a level set (a curve of real solutions), is minimized. This minimization allows the contour to converge until the desired region is bound.
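The K-means assignment-and-update loop described above can be illustrated with a short sketch. This is a minimal, hypothetical implementation (the function name, the random initialization, and the simplification of using pixel intensity as the only property parameter are ours, not from the cited work):

```python
import numpy as np

def kmeans_segment(image, k, n_iter=20, seed=0):
    """Cluster pixel intensities into k groups (minimal K-means sketch)."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 1).astype(float)
    # start from k randomly chosen pixel values as cluster centers
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(n_iter):
        # assign every pixel to its nearest cluster center
        dist = np.abs(pixels - centers.T)          # shape (N, k)
        labels = dist.argmin(axis=1)
        # recompute each center as the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels.reshape(image.shape), centers.ravel()
```

On a toy image with two intensity populations, the loop converges in a few iterations to one label per population, with the centers settling at the population means.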
This algorithm depends on the average pixel intensity inside the contour, c1, the average pixel intensity outside the contour, c2, and the piecewise-defined image. The first term of the energy represents the length of the contour, while the second term represents the area inside the given contour.
The algorithm reaches its minimum value at a time t when the contour lies on a boundary, seen when the contour is at its zero level set (5, 6).
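For reference, the "first term" and "second term" referred to above belong to the Chan-Vese energy, which in Chan and Vese's standard notation reads:

```latex
F(c_1, c_2, C) = \mu \, \operatorname{Length}(C)
  + \nu \, \operatorname{Area}(\mathrm{inside}(C))
  + \lambda_1 \int_{\mathrm{inside}(C)} \lvert u_0(x,y) - c_1 \rvert^2 \, dx \, dy
  + \lambda_2 \int_{\mathrm{outside}(C)} \lvert u_0(x,y) - c_2 \rvert^2 \, dx \, dy
```

Here u0 is the given image, c1 and c2 are the average intensities inside and outside the contour C, and μ, ν, λ1, and λ2 are fixed weighting parameters.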
Registration Registration is a complex procedure that involves numerous steps. First and foremost, the similarities and distances between like objects in the images must be determined. For example, a scan showing the right lateral side of an abdomen must be compared to the corresponding position in other images. After assessing the similarities of the images, transformations, either rigid or elastic, are performed to optimize the overlay between the different images. Rigid transformations are linear alterations such as scaling and rotation, while elastic transformations deform an image nonrigidly. For instance, deformations seen in the human body, due to its intrinsically asymmetric nature, call for elastic transformations; because of this asymmetry, linear transformations cannot be performed, as assumptions about the 3D shape cannot be made. An important underlying principle in elastic registration is optimal mass transport, a process also seen in automation, fluid dynamics, and shape detection software (7). In elastic registration, the L2 Kantorovich-Wasserstein distances (parameterized by space and probability distributions) are used to measure the comparisons between images. The Monge-Kantorovich function M(u) is optimized by finding the most precise and accurate mapping between the two images (7, 8).
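The L2 Kantorovich-Wasserstein distance is easiest to see in one dimension, where the optimal transport map simply pairs sorted values. The sketch below is an illustrative simplification under that assumption (equal-size samples; the function name is our own), not the full 2D/3D registration machinery:

```python
import numpy as np

def w2_squared_1d(samples_a, samples_b):
    """Squared L2 Wasserstein distance between two equal-size 1-D samples.

    In one dimension the optimal transport map is the monotone
    rearrangement: pair the i-th smallest value of one sample with the
    i-th smallest value of the other, then average the squared gaps.
    """
    a = np.sort(np.asarray(samples_a, dtype=float))
    b = np.sort(np.asarray(samples_b, dtype=float))
    assert a.shape == b.shape, "sketch assumes equal-size samples"
    return float(np.mean((a - b) ** 2))
```

A useful sanity check of the cost interpretation: shifting an entire distribution by a constant c costs exactly c², since every unit of mass travels the same distance c.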
Where the transport runs from the original mass space to the desired mass space, the mass, m, must travel from one to the other. By optimizing image overlay, the Monge-Kantorovich function determines the cost of iteratively merging images; with every movement of m from the original space to the desired one, image resolution is lost. The cost of this movement depends on the densities of both mass spaces (7). The result of the Monge-Kantorovich function is used to describe the convergence boundary of the gradient descent. In elastic registration, optimal warping is the gradual deformation and distortion that occurs from one layer of the image to the next until convergence is reached. This allows for a steady overlay of the desired image to produce a three-dimensional representation of the model. Conclusion The algorithms discussed represent only a small portion of the methods that deal with the problems of image registration and segmentation. Although registration in the human body typically involves elastic transformations, rigid transformations also take place in areas of the body where deformation of the tissue is minimal, such as head scans and registration of the skull. Similarly, in segmentation, a variety of algorithms have been developed, depending on the scanning modality used and the type of region being segmented. Although there have been numerous advancements in imaging techniques, medical imaging still lacks an algorithm that adequately handles these problems. It is pertinent to continue research in this field of computer vision and physics, as image analysis is crucial for clinical use. Not only is it used in diagnostics and treatment, but it also provides researchers and doctors with a better understanding of human physiology.
Figure 1 Visualization of registration of MRI and CT scans. The top row shows images taken of the sagittal and coronal planes of the head using two modalities. The images on the bottom row overlay the images to produce the registration model (1).
References 1. S. Angenent, et al., Mathematical methods in medical image processing. Bulletin of the AMS 43, 365–396 (2006). 2. D. L. Hill, et al., Medical image registration. Phys Med Biol 46, R1-45 (2001). 3. H. P. Ng, et al., Medical image segmentation using K-means clustering and improved watershed algorithm. Image Analysis and Interpretation (2006). 4. S. Guojun, et al., The optimized K-means algorithms for improving randomly-initialed midpoints. Measurement, Information and Control (2013). 5. A. Amin, M. Deriche, Robust image segmentation based on convex active contours and the Chan-Vese model. Signal and Information Processing (2014). 6. M. Niethammer, A. Tannenbaum, Dynamic level sets for visual tracking. Decision and Control (2003). 7. T. F. Chan, L. A. Vese, A level set algorithm for minimizing the Mumford-Shah functional in image processing. Proceedings of the IEEE Workshop on Variational and Level Set Methods (2001). 8. O. Museyko, et al., On the application of the Monge–Kantorovich problem to image registration. SIAM Journal on Imaging Sciences 2, 1068-1097 (2009). 9. A Monge–Kantorovich mass transport problem for a discrete distance. Journal of Functional Analysis 260, 3494-3534 (2011).
Figure 2 (Left) Visualization of segmentation of a heart valve. The active contour evolves to the shape of the desired valve. In the first image, the initial contour region is detected. In the middle image, the evolving contour begins to conform to the shape of the valve. In the third image, the contour has stopped evolving and is in a steady state. (Right) The Chan-Vese algorithm utilized in MATLAB to segment desired regions of the heart.
Jellyfish Size and Distribution in the North Atlantic Ocean in Relation to pH, Surface Water Temperature, Chlorophyll a, and Zooplankton Density Sarah McTague (1), Kelli Walsh (2)
(1) The School of Marine and Atmospheric Science, State University of New York, Stony Brook, NY
(2) Ripon College, Ripon, WI
Abstract Jellyfish are an important indicator organism for the conditions of the ocean ecosystem, including ocean acidification and ocean warming. In recent years, jellyfish blooms have become more abundant. This study examined the distribution, abundance (density), and volume (size) of jellyfish, specifically Pelagia noctiluca, along the C260 longitudinal cruise track from Woods Hole, Massachusetts to Cork, Ireland. This study also compared the relationships between jellyfish volume and abundance with sea surface water temperature, pH, chlorophyll a concentrations, and zooplankton abundance. It was found that jellyfish have an uneven distribution in the North Atlantic Ocean and are more abundant and generally larger in the open ocean waters than nearer to the coasts. Jellyfish were also more abundant when sea surface temperatures were greater than or equal to 17°C. Tows containing jellyfish tended to be associated with surface water samples containing generally lower chlorophyll a values, which might reflect predator-prey relationships. No statistically significant correlations were found between jellyfish abundance or volume and either surface water pH or zooplankton density. Though patchiness of jellyfish distribution is an inherent problem that will affect data collection, the important role of jellyfish as indicator organisms in changing ocean ecosystems requires future studies to build upon the existing body of research.
Introduction Jellyfish are members of the phylum Cnidaria (1). They begin their life cycle as larvae and develop into polyps attached to hard surfaces within the ocean. After budding, these polyps become medusas, at which point they are referred to as jellyfish (2). They primarily eat phytoplankton through filter feeding, but their diet varies with their size, as they can consume crustaceans, fish, and other jellyfish that fit into their mouths, a single opening where food enters and waste exits. Jellyfish are not only significant within their food web, but also play an important role as indicators of change in ocean ecosystems (3). For instance, the northern Atlantic Ocean is home to numerous species of jellyfish and has seen an increased abundance of jellyfish blooms in recent years (4-7). Research results suggest a relationship between the pH and temperature of the ocean and the development of these jellyfish blooms (8,9). Understanding the causes of these increased blooms is important for monitoring and predicting changes in the ocean ecosystem. One factor linked to the increase in jellyfish population in the ocean is pH. Previous work shows that pH levels in the ocean have decreased by approximately 0.02 units per decade, a clear example of ocean acidification (10). Ocean acidification directly drives the under-saturation of the calcium carbonate minerals that are necessary for building the skeletons and shells of many organisms (11). The decreasing pH levels in the ocean can be attributed to increasing CO2 levels in the atmosphere, which have risen by nearly 40% over the past 250 years (10). The recent increase of carbon dioxide in the atmosphere is tied to many anthropogenic factors, including fossil fuel burning and deforestation. Researchers have looked for a link between the acidification of the oceans and jellyfish abundance, with inconsistent results. Attrill et al. 
(2007) studied the increase in occurrence of jellyfish in relation to pH levels in the North Sea and found that as pH decreases, there is a greater abundance of jellyfish. In contrast, Richardson and Gibbons (2008) studied seven different regions in the North Sea and the Northeast Atlantic Ocean looking for a relationship, but found no correlation (8,9).
In addition to ocean pH, ocean temperature has also changed over the past several decades. Levitus et al. (2005) examined the temperature and heat content of oceanic water. Using data from 1955-1998, they documented a substantial increase in world ocean heat content. The Atlantic exhibited the greatest increase in temperature and heat content, attributed to a concentrated increase in greenhouse gases and global warming effects (12). Climate change also has an impact on jellyfish abundance. As temperatures rise and the sea surface warms, feeding conditions for jellyfish improve. According to a review by Richardson et al. (2009), warmer ocean surface temperatures are linked to enhanced medusa growth, indicating that jellyfish grow better in warmer waters (13). A study of Pelagia noctiluca, an abundant jellyfish species, found that sea temperature strongly influenced the developmental stages of the jellyfish; higher temperatures supported greater development and maturation rates (14). Additionally, more jellyfish were recorded in the North Atlantic during warmer years (13). Jellyfish populations are anticipated to continue increasing as climate change drives further increases in ocean temperature (15). Research also links food availability to jellyfish abundance. Gibbons and Richardson (2008) found that jellyfish bloom peaks are seasonal and positively correlated with those of phytoplankton and zooplankton populations. These researchers also examined whether changes in jellyfish prey resulted in changes in the number of jellyfish observed, but the results were inconclusive (7). Limited research has examined the size of jellyfish and the factors that may influence it. One objective of this study is to document the distribution of jellyfish present in the North Atlantic Ocean during June of 2015. 
An equally important objective is to examine the relative size of the jellyfish species present in relation to water temperature and pH. This study also examines phytoplankton and zooplankton abundance at the collection sites to determine whether either correlates with jellyfish abundance or size. Based on previous research, we hypothesized that jellyfish would be unevenly distributed
along a cruise track from Woods Hole, Massachusetts to Cork, Ireland, with greatest abundances near the coasts. We also hypothesized that jellyfish size and abundance would be greatest in ocean regions of low pH, high temperatures, and high levels of chlorophyll a and zooplankton biomass. Methods The study was conducted in the North Atlantic Ocean on Sea Education Association (SEA) Cruise C260, along a longitudinal transect that started in Woods Hole, Massachusetts and ended in Cork, Ireland (Figure 1).
with Tow C260-012-NT having the highest average volume and Tow C260-024-NT having the lowest average volume. Over the course of the cruise track, jellyfish were unevenly distributed and were more abundant near the middle of the track (Figure 2). The average volume of jellyfish collected was also unevenly distributed along the cruise track (Figure 2).
Figure 2 Neuston tows along a cruise track from Woods Hole, MA to Cork, Ireland. Circle size varies with the abundance of jellyfish found at each tow site: the larger the circle, the more jellyfish collected at that tow. Color varies with the average volume of jellyfish collected at that site: the lightest blue represents the smallest average volume of all the tows, while the darkest blue represents the greatest. Black crosses (x) mark tow sites where no jellyfish were collected.
Figure 1 Cruise track from Woods Hole, Massachusetts to Cork, Ireland. Dots represent locations of neuston tow stations.
Jellyfish were collected in a one-meter-wide neuston net, a large net used to collect jellyfish and zooplankton biomass, which was towed along the ocean surface at approximately 2 knots for 30-minute intervals twice daily. The length of each tow was determined by summing the distances between minute-by-minute GPS positions of the vessel for the time the net was in the water. Jellyfish were speciated and tallied. The volume (mL) of each individual jellyfish was measured using a water displacement technique (the amount of water displaced when the specimen is placed into a graduated cylinder already containing water). Using the total jellyfish counts and the area of each tow, we calculated the average density of jellyfish (number of jellyfish/m²). In addition to the measurements taken for jellyfish, several techniques were used to obtain pH, temperature, and chlorophyll a levels at each neuston tow site. These measurements were performed twice a day at the neuston tow site. Water samples collected at these surface stations, either with a deployed bucket or through the ship's flow-through system, were analyzed for pH, temperature, and chlorophyll a. The pH of the water samples was determined with a spectrophotometer, according to techniques described in Clayton et al. (1995) and Clayton and Byrne (1993) (16,17). Surface temperature was recorded at each tow site using a flow-through meter located on the ship. To determine chlorophyll a content, 500 mL of water from each surface station was passed through a Gelman GN6 0.45 μm pore vacuum filter system to collect chlorophyll, a proxy for the abundance of phytoplankton. The filters were then frozen in cuvettes before the tests were run. Chlorophyll extraction consisted of adding 90% acetone to each cuvette and placing the cuvette in a dark, cold environment for 12 hours. Samples were then vortexed and centrifuged for one minute. 
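The tow-length and density calculation described above can be sketched in code. This is a minimal illustration, not the authors' actual processing pipeline: the GPS fixes and jellyfish count below are hypothetical, and the haversine formula is used here as one common way to convert latitude/longitude pairs into distances.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius (m)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def tow_density(gps_fixes, jellyfish_count, net_width_m=1.0):
    """Jellyfish per m^2: count / (tow length x net width).

    gps_fixes: minute-by-minute (lat, lon) pairs recorded while the
    net was in the water, as described in the Methods.
    """
    tow_length_m = sum(
        haversine_m(*gps_fixes[i], *gps_fixes[i + 1])
        for i in range(len(gps_fixes) - 1)
    )
    return jellyfish_count / (tow_length_m * net_width_m)

# Hypothetical 30-minute tow heading roughly north (~1.7 km total).
fixes = [(41.5667 + 0.0005 * i, -69.43) for i in range(31)]
print(round(tow_density(fixes, 251), 3))
```

Because the neuston net is one meter wide, the swept area reduces to tow length times one meter, so density is simply count per meter towed.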
The samples were then placed in a fluorometer and the fluorescence of each sample was recorded. The pH, water temperature, and chlorophyll a data were compared with the jellyfish size and abundance data to look for correlations that would support the hypotheses. Results Thirty neuston tows were conducted over an 18-day period. Of these 30 tows, 13 yielded jellyfish, and almost all of the jellyfish captured were Pelagia noctiluca. Tow C260-029-NT contained the greatest abundance, with a total of 251 jellyfish and a jellyfish density of 0.308 jellyfish/m² (Table 1). The average volume of jellyfish in tows ranged from 0.1 mL to 124.5 mL, 
No significant relationships were identified when correlating pH and chlorophyll a levels of surface water samples at the neuston tow sites with the size (volume) or abundance (density) of the jellyfish collected (Figures 3 and 4). The pH values of the surface water samples collected at each neuston tow site ranged from 7.772 to 8.133 (Table 1). There were no consistent pH trends along the cruise track; measured values typically stayed below 8.0, with the exception of pockets where the surface water pH exceeded 8.0. Chlorophyll a concentration ranged from 0.045 to 1.762 µg/L along the cruise track. Tows C260-002-NT and C260-003-NT were the only tows with chlorophyll a values above 1.0 µg/L. The remainder of the cruise track varied between 0.045 and 0.654 µg/L with no evident pattern across the Atlantic Ocean. Jellyfish were more abundant in warmer waters, specifically in waters with temperatures equal to or greater than 17 °C (Figure 5). However, jellyfish volume did not correlate with warmer water temperatures (p = 0.227) (Figure 5). Zooplankton density showed a weak positive correlation with jellyfish density (R² = 0.253) (Figure 6). However, this correlation is not statistically different from zero (p = 0.080). The neuston tow with the highest jellyfish density (C260-029-NT) also had the highest zooplankton density (Table 1). However, no significant correlation was found between jellyfish volume and zooplankton density.
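The correlation statistics reported above (e.g., R² for zooplankton density versus jellyfish density) come from ordinary linear correlation, which can be reproduced as follows. The per-tow values below are hypothetical stand-ins, not the study's data, and the p-values in the text would come from a standard regression t-test rather than this sketch.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient; R^2 is its square."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical per-tow values: zooplankton density (mL/m^2)
# paired with jellyfish density (#/m^2).
zoop = [0.02, 0.05, 0.08, 0.11, 0.15, 0.22, 0.30]
jelly = [0.00, 0.01, 0.05, 0.03, 0.10, 0.08, 0.31]
r = pearson_r(zoop, jelly)
print(round(r ** 2, 3))  # R^2: fraction of variance explained
```

A weak R² with a p-value above 0.05, as in the zooplankton result, means the apparent trend cannot be distinguished from sampling noise at the conventional significance level.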
Figure 3 Average jellyfish volume (mL) and jellyfish density (#/m²) in relation to pH levels along the C260 cruise track.
Figure 4 Average jellyfish volume (mL) and jellyfish density (#/m²) in relation to chlorophyll a levels (µg/L) along the C260 cruise track.
Figure 5 Average jellyfish volume (mL) and jellyfish density (#/m²) in relation to surface temperature along the C260 cruise track.
Figure 6 Jellyfish density (#/m²) in relation to zooplankton density (mL/m²) along the C260 cruise track.
[Table 1 columns: Latitude and Longitude, Jellyfish Density (#/m²), Avg. Volume (mL), Surface Temp. (°C), Chl a (µg/L), Zooplankton Density (mL/m²). Tow stations spanned the cruise track from 41° 34.0′ N, 69° 25.8′ W to 49° 7.3′ N, 20° 6.6′ W; per-tow values were not recoverable from the source.]
Table 1 Data collected from each neuston tow along the Cruise C260 cruise track from Woods Hole, MA to Cork, Ireland in June 2015. Surface temperature and pH data taken from corresponding surface stations collected at the same location as each tow. Chlorophyll a and pH data for some tows were not collected.
Discussion Jellyfish were unevenly distributed along the C260 cruise track, as anticipated. We expected to find a higher density of jellyfish near the coasts than in the middle of the ocean because of the greater amount of nutrients typically found nearshore. However, we found a greater abundance of jellyfish near the center of the Atlantic (Figure 2). A possible
explanation lies in the jellyfish life cycle. Because jellyfish begin life as polyps, they may start out near the coasts in shallower waters before detaching and drifting to deeper waters, where they are better adapted as medusae. This would also explain why the larger jellyfish were generally found in the center of the Atlantic rather than near the coasts (Figure 2). Due to the patchiness of jellyfish
blooms, we expected their distribution to be highly uneven. The dispersal we observed in this study may have been due to more abundant nutrients toward the center of the ocean or to predation on the jellyfish. Previous research has varied in finding correlations between ocean pH and jellyfish abundance. While Attrill et al. (2007) found a relationship between the two, Richardson and Gibbons (2008) found inconclusive results (8,9). The results from our Cruise C260 samples do not show a relationship between either the size (volume) or the abundance (density) of jellyfish and the pH of the ocean surface water where those jellyfish were collected (Figure 3). We expected to find a correlation between pH and jellyfish size and abundance similar to the findings of Attrill et al. (2007), with an increase in jellyfish associated with increasing ocean acidification (lower pH) (8). However, our data show only small variation in pH along the cruise track, so it is possible that pH did not vary enough to significantly influence the abundance or size of the jellyfish we collected. Our data also do not support any correlation between either jellyfish size or jellyfish abundance and the chlorophyll a content of surface water samples, contrary to what we expected (Figure 4). Our hypothesis was based on the idea that jellyfish are more abundant near their source of nutrition. We also expected an associated increase in average jellyfish size in these locations due to hypothesized increases in prey consumption and growth rates. Other research has examined the relationship between jellyfish blooms and phytoplankton abundance and found a seasonal correlation between peaks in phytoplankton and zooplankton abundance and peaks in jellyfish blooms (7). 
Differences between our findings and the results of previous studies may be due to the limited range of our data, which spanned a small window of time; we could not observe seasonal differences over the course of a one-month cruise. Limited prior research has found associations between jellyfish and temperature. Our results from this cruise track, however, show a positive correlation between jellyfish density and temperature (Figure 5). We hypothesized this relationship because of observed trends in both increasing jellyfish blooms and increasing ocean temperatures. Prior research indicates that jellyfish are more prominent in warmer Atlantic waters and that warmer sea temperatures have been shown to increase medusa growth (13). Our findings support those results. Gibbons and Richardson (2008) studied the relationship between jellyfish blooms and zooplankton abundance, with inconclusive results (7). It is interesting to note that we found a slight positive correlation between zooplankton density and jellyfish density in our data (Figure 6). As with our reasoning about chlorophyll a levels and jellyfish abundance, we expected jellyfish to be more prominent near their prey (both phytoplankton and zooplankton). Greater zooplankton density provides greater food availability for the jellyfish, which should result in an increase in both jellyfish size and abundance. Although the correlation between zooplankton density and jellyfish density was statistically inconclusive, this study and the results of other researchers suggest that this is an important topic worthy of further investigation. There are several recognized limitations to our study. Data were limited by the small window of time in which sampling was conducted. Samples were collected over the course of 18 days in June 2015, limiting both sample numbers and seasonal variation. 
Similarly, we were not always able to maintain the routine of twice-daily tows. In addition to the limited data, it is also important to recognize that jellyfish distribution is inherently not uniform or random, but instead patchy. This patchiness may be caused by nutrient concentrations and can occur across vertical and horizontal dimensions of varying scales (18). Zooplankton biomass data were also limited over the course of this study, as accurate numbers were difficult to obtain from tows with high jellyfish abundance because the jellyfish and zooplankton were collected
together, and the jellyfish began consuming the zooplankton between the time the net was recovered and the time its contents were analyzed. It is also important to note that because Pelagia noctiluca was the main species collected, the results likely reflect data for this particular species only. Despite these limitations, our results provide a solid basis for understanding the relationships between jellyfish and the tested variables. Conclusion The correlation found between temperature and jellyfish density is important because jellyfish are an indicator species. A slight correlation was also observed between jellyfish density and zooplankton density, but not with pH or chlorophyll a. As an indicator species, jellyfish are an important organism for continued research. Future research could examine nutrient levels in the North Atlantic to compare nutrient abundance near the coasts with levels in the open ocean and determine whether varying levels correlate with jellyfish abundance. This would provide insight into how jellyfish respond within their food web to variation in nutrients. Additionally, it would be valuable to examine water temperature, pH, and nutrient availability near the ocean bottom where polyps are present, for a better understanding of the factors that influence jellyfish during their early life stages. The increase in jellyfish blooms has been attributed by some researchers to ocean acidification, although findings across studies remain inconsistent. There is even less scientific evidence connecting temperature to jellyfish abundance. This gap underscores the need for further experiments that delve deeper into the relationship between temperature and jellyfish. Establishing such a correlation could provide further evidence of ocean warming and could even support much-needed policy changes. 
Acknowledgments All scientific research for this study was completed aboard an SEA research vessel with the help of our incredible SEA crew and C-260 classmates. We are grateful for all the help our classmates gave us in completing this study; we could not have done it without them.
References 1. H. Walters, A. Collins, Jellyfish and comb jellies. Smithsonian Institution, (2014). 2. M. Dawson, Generalized life cycle of scyphozoan jellyfishes (e.g. Aurelia). The University of California, (2005). 3. F. Boero et al., Gelatinous plankton: irregularities rule the world (sometimes). Marine Ecology Progress Series 356, 299-310 (2008). doi:10.3354/meps07368. 4. C.E. Mills, Jellyfish blooms: are populations increasing globally in response to changing ocean conditions? Hydrobiologia 451, 55-68 (2001). doi:10.1023/A:1011888006302. 5. T.R. Parsons, C.M. Lalli, Jellyfish population explosions: revisiting a hypothesis of possible causes. La Mer 40, 111-121 (2002). 6. J.E. Purcell, Climate effects on formation of jellyfish and ctenophore blooms: a review. Journal of the Marine Biological Association of the United Kingdom 85, 461-476 (2005). doi:10.1017/S0025315405011409. 7. M.J. Gibbons, A.J. Richardson, Patterns of jellyfish abundance in the North Atlantic. Hydrobiologia 616, 51-65 (2009). doi:10.1007/s10750-008-9593-8. 8. M.J. Attrill, et al., Climate-related increases in jellyfish frequency suggest a more gelatinous future for the North Sea. Limnology and Oceanography 52, 480-485 (2007). doi:10.4319/lo.2007.52.1.0480. 9. A.J. Richardson, M.J. Gibbons, Are jellyfish increasing in response to ocean acidification? Limnology and Oceanography 53, 2040-2045 (2008). doi:10.4319/lo.2008.53.5.2040. 10. S.C. Doney, et al., Ocean acidification: the other CO2 problem. Annual Review of Marine Science 1, 169-192 (2009). doi:10.1146/annurev.marine.010908.163834. 11. What is ocean acidification? PMEL Carbon Program (2015). 12. S. Levitus, et al., Warming of the world ocean, 1955-2003. Geophysical Research Letters 32, (2005). doi:10.1029/2004GL021592. 13. A.J. Richardson, et al., The jellyfish joyride: causes, consequences and management responses to a more gelatinous future. Trends in Ecology and Evolution 24, 312-322 (2009). doi:10.1016/j.tree.2009.01.010. 
14. M. Avian, Temperature influence on in vitro reproduction and development of Pelagia noctiluca (Forskal). Italian Journal of Zoology 53, 385-391 (2009). doi:10.1080/11250008609355528. 15. Selected jellyfish hot spots around the world. The National Science Foundation. 16. T.D. Clayton et al., The role of pH measurements in modern oceanic CO2-system characterizations: precision and thermodynamic consistency. Deep Sea Research Part II: Topical Studies in Oceanography 42, 411-429 (1995). doi:10.1016/0967-0645(95)00028-O. 17. T.D. Clayton, R.H. Byrne, Spectrophotometric seawater pH measurements: total hydrogen ion concentration scale calibration of m-cresol purple and at-sea results. Deep Sea Research Part I: Oceanographic Research Papers 40, 2115-2129 (1993). doi:10.1016/0967-0637(93)90048-8. 18. C.M. Lalli, T.R. Parsons, Biological Oceanography: An Introduction (Elsevier Butterworth-Heinemann, Amsterdam, 1993).
The Stony Brook Young Investigators Review would like to give a special thank you to all of our benefactors. Without your support, this issue would not have been possible.
Provost's Office Department of Chemistry Department of Biochemistry and Cell Biology Department of Neurobiology and Behavior Department of Ecology and Evolution School of Marine and Atmospheric Science Undergraduate Marine Science Club Graduate Marine Science Club
Stony Brook Young Investigators Review
Dr. Craig Evinger Research Profile 7 Language Use On Twitter 21 Jellyfish Size and Distribution in the North Atlantic 33
Volume 6 Spring 2016
Find Out More:
sbyireview.com www.facebook.com/sbyireview Follow us on Twitter @sbyir