Berkeley Scientific Journal: Fall 2020, Bonds (Volume 25, Issue 1)



STAFF

Editor-in-Chief: Jonathan Kuo
Managing Editor: Rosa Lee
Outreach and Education Chairs: Melanie Russo, Saahil Chadha
Features Editors: Shivali Baveja, Nick Nolan
Interviews Editors: Ananya Krishnapura, Elettra Preosti
Research & Blog Editors: Andreana Chou, Isabelle Chiu
Layout Editors: Stephanie Jue, Michael Xiong
Features Writers: Lilian Eloyan, Nachiket Girish, Anisha Iyer, Jessica Jen, Emily Pearlman, Natalie Slosar, Christopher Zhan
Interviews Team: Bryan Hsu, Timothy Jang, Esther Lim, Alexander Peterson, Natasha Raut, Amar Shah, Kaitlyn Wang, Sabrina Wu, Michelle Yang
Research & Blog Team: Liane Albarghouthi, Noah Bussell, Hosea Chen, Andrea He, Tiffany Liang, Emily Matcham, Aarthi Muthukumar, Nanda Nayak, Rebecca Park, Yi Zhu
Layout Interns: Aarthi Muthukumar, Nanda Nayak, Rebecca Park, Chris Zhan

EDITOR'S NOTE

Bonds govern the world around us. Chemical bonds build our environment and regulate our interactions with our surroundings. When you take in the sharp aroma of your morning coffee, odorant molecules rise from your cup and bind to your olfactory receptors, triggering the twisting and rearrangement of the receptor's peptide bonds. Our treatment of the environment binds us more tightly than ever before. Wildfires, zoonotic diseases, and climate change have all intensified over the past year, causing destruction with seemingly no end in sight unless we change our actions. Social bonds color our daily lives in new, unforeseen ways. Bustling lecture halls and social gatherings have been replaced by Zoom calls, with familiar faces and warm conversation emanating from rectangular boxes on our screens.

In this issue, we explore the ways in which bonds of all types shape us and our surroundings. Microrobots that act as moveable bridges between neuron clusters are opening new ways to study neuronal communication. New algorithms that predict off-target binding of CRISPR guide RNAs provide an improved method for anticipating potential undesired effects of CRISPR-Cas9 genome engineering experiments. And student research in the coastal waters of French Polynesia sheds new light on resilience within coral reefs, which sit within an ever-shifting web of anthropogenic disturbances, environmental symptoms, and scientific struggles to preserve one of the most important ecosystems on planet Earth.

This semester, our writers and editors, working from distant locations and time zones, have faced both challenges and unexpected advantages in working on this issue. While our writers can no longer drop by a researcher's office on campus, the new remote format allowed our interviews team to talk to engineers in Italy who have developed an affordable and accessible pulmonary ventilator to aid in the fight against COVID-19. It also allowed us to hear from two wonderful sets of speakers located across the country. Former BSJ editor-in-chief Yana Petri, now the cofounder and editor-in-chief of the MIT Science Policy Review, spoke to our writers about the process of beginning a new science publication. Our writers also engaged in a panel discussion with science journalists Shira Feder, Eric Boodman, Jane Hu, and Kate Gammon, who write for publications ranging from STAT to The New York Times to Nature, on their experiences navigating the interface between the public and the scientific community.

During this pandemic, science is at the center of our lives. We look to scientists and healthcare professionals to characterize, understand, and develop therapeutics for this disease. The surge of research activity surrounding SARS-CoV-2 only amplifies the need for accurate and careful scientific reporting. With this responsibility in mind, we are proud to present the Fall 2020 issue of the Berkeley Scientific Journal.

Rosa Lee
Managing Editor


TABLE OF CONTENTS

Features

4. mRNA: The Next Frontier in Vaccine Science, by Nachiket Girish
7. Plastic: It's What's for Dinner, by Emily Pearlman
16. Microrobots: Bridging the Neuronal Gap, One Micron at a Time, by Natalie Slosar
25. The 'King of Poisons' Journeys Underground in Search of Water, by Jessica Jen
28. Unlocking Peto's Paradox, by Chris Zhan
37. Schizophrenia Through the Years, by Anisha Iyer
46. Darwin: Chimp or Chump? by Lilian Eloyan

Interviews

11. Why Are We Here? A Journey Into the Quantum Universe (Dr. Hitoshi Murayama), by Amar Shah, Michelle Yang, and Elettra Preosti
20. Beyond the Racetrack: The Perfect Formula for a Ventilator (Simone Resta), by Elettra Preosti
32. Exploring Cancer Metastasis Outside the Genome (Dr. Hani Goodarzi), by Timothy Jang and Ananya Krishnapura
41. Machine Learning Design Optimization for Molecular Biology and Beyond (Dr. Jennifer Listgarten), by Bryan Hsu, Natasha Raut, Kaitlyn Wang, and Elettra Preosti
49. Applications of Materials Science: From Modeling to Medical Use (Dr. Kevin Healy), by Esther Lim, Alexander Peterson, Sabrina Wu, and Ananya Krishnapura

Research

53. Quantifying Within-Day Abstract Skill Learning and Exploring Its Neural Correlates, by Gabrielle Shvartsman, Ellen Zippi, and Jose Carmena
60. Coral Cover and Algae Growth Along a Water Quality Gradient in Moorea, French Polynesia, by Savannah Sturla


mRNA: The Next Frontier In Vaccine Science BY NACHIKET GIRISH

THE NOVEL CORONAVIRUS

The virus goes by many names: SARS-CoV-2, Covid-19, the Novel Coronavirus, or simply the coronavirus. It needs no introduction. The scourge of the year 2020, this most notorious member of the family of coronaviruses has ravaged the entire world and fundamentally changed the life of everyone on Earth. From the victims who tragically lost their lives, to those who lost their livelihoods, to those who recovered but had to bear the burden of expensive and inaccessible healthcare, this pandemic has shocked humanity as a whole and precipitated a difficult struggle to overcome this biological disaster. The scientific community has been at the forefront of the battle against the virus and has produced several promising palliatives and post-infection treatment regimens. However, as Steven Dubner, host of the popular podcast Freakonomics Radio, puts it, "when history looks back, the Covid-19 pandemic will be divided into two eras: before the vaccine, and after the vaccine."1 Indeed, while drugs are periodically developed as possible coronavirus treatments, the majority of public focus remains on the international race to produce the first successful Covid-19 vaccine.

How do these vaccines even work? While vaccine science at this point is already more than 200 years old, the coronavirus vaccine race has led to brand new answers to this question. One of these new approaches to vaccine development has been the creation of an effective mRNA vaccine. This long-dormant concept has been resuscitated by vaccine developers such as Moderna and BioNTech (in collaboration with Pfizer) and has the potential to change the very way we think about vaccine production.

VIRUSES AND MESSENGER RNA

Messenger RNA, or mRNA for short, is a molecule similar to DNA that is found in our cells. While DNA is famous for its double-stranded helix, RNA is its single-stranded cousin, and the two share many functional similarities. The DNA in the nucleus of our cells is the "instruction set" from which our entire body is designed. One of its many jobs is to tell our cells which proteins to manufacture for our body to function. However, the funny thing about our cells is that ribosomes, the workers who create the proteins our body requires, do not speak the same language as the DNA, the designer who is supposed to tell them what to do. Nor do they even live in the same place: DNA is found inside the cell nucleus, while ribosomes are present in the cytoplasm outside the nucleus. This is where mRNA comes in. Messenger RNA travels from the nucleus into the cytoplasm, carrying instructions from the DNA that tell the ribosomes exactly what to do in a language the ribosomes can understand. The ribosomes can then execute these instructions and create the vital proteins our body needs.2

The ribosomes execute their instructions faithfully, but blindly—they do not even know what they are being told to make. This blind faith is one of the biggest reasons most viruses even exist. Viruses and RNA share a very intimate relationship: viruses are fundamentally just strands of DNA or (as in the case of the novel coronavirus) RNA.3 When viruses are outside a living host, that is literally all they are: random floating bits of genetic material, enclosed in a protein shell and some molecular structures. They show no signs of life, and crucially, they cannot reproduce. When they enter a host, however, their genetic information is quickly delivered into the host's cells. There, the RNA strands sneak into ribosomal hatchways and begin barking orders, causing ribosomes to mindlessly create not useful proteins for the body, but copies of the virus itself. These copies escape their host's confines to infect more cells, a process that often kills the original cell. In this way, the virus quickly multiplies and takes over the body.4
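To make the "instructions in a language the ribosome understands" metaphor concrete, here is a toy sketch of translation (an illustration only: the codon table is abbreviated to just the codons used in the example, and real translation involves tRNAs and ribosomal machinery):

```python
# Toy model of translation: read the mRNA three bases (one codon) at a
# time and look up the amino acid each codon encodes.
# Abbreviated table for this example only; the real genetic code has 64 entries.
CODON_TABLE = {
    "AUG": "Met",   # start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": "STOP",  # stop codon: release the finished chain
}

def translate(mrna: str) -> list[str]:
    """Read codons left to right, stopping at a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        peptide.append(amino_acid)
    return peptide

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```

The sketch also captures the article's point about blind faith: the reader executes whatever instructions it is handed, whether they encode a useful protein or a viral one.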

BEATING THE VIRUS AT ITS OWN GAME

The conventional approach to vaccine science has been broadly unchanged since its origins in the late 1700s, when Edward Jenner realized that exposure to cowpox, a fairly mild disease, would protect people from its far, far deadlier cousin, smallpox. Usually, our immune system can identify a pathogen (a disease-causing agent such as a virus or bacterium) once it enters our body and create customized defenses to fight against it. These defenses take time to kick in, by which time the pathogen can wreak havoc in the body. However, if we induce the immune system to create these defenses before infection occurs, our immune system will be ready to fight similar future pathogens. That is how most vaccines work: they inject something into the body which looks (to our immune system) a lot like a pathogen, but does not cause illness. The body gets fooled into creating defenses, and if the actual pathogen attacks, it can be easily fought off by the immune system.

All conventional vaccines follow the above principle but vary in how they create the "pseudo-pathogen" that fools our immune system. Some use killed ("inactivated") or weakened viruses with similar characteristics to the pathogen; others, like the Oxford AstraZeneca Covid-19 vaccine candidate, use a harmless carrier virus engineered to mimic features of the pathogen.6 In either case, these viruses are manufactured in a lab and injected into the body. The mRNA vaccine, however, takes a very different approach and bypasses the manufacturing process completely. Instead, taking a page from the virus' own book, this vaccine delivers mRNA instructions to the human body's cells, so that the body itself produces the pseudo-pathogens that then train the immune system. This makes mRNA vaccines much faster to manufacture, which, especially during a pandemic, when time is of the utmost essence, is a game changer.7,8

Figure 1. Various types of vaccine approaches. A conventional vaccine contains something that looks a lot like the coronavirus but does not cause illness, tricking the body into training the immune system to recognize the virus. If the actual virus shows up, the body already has customized defenses against it. Licensed under CC BY 2.0.

THE ROAD AHEAD FOR MRNA VACCINES

The idea of using mRNA for vaccines has been in the air for the past few decades, but it is only during the Covid-19 pandemic that the concept has come to fruition in a leading vaccine candidate. Why did such a novel, promising approach to vaccine science take so long to find application? The implementation of an mRNA vaccine has involved several practical problems. For instance, scientists could not find a reliable way to deliver the mRNA to the target cell without it being destroyed in the body first. Viruses achieve this by using their protein shells; only recently have researchers been able to devise a similar "capsule," made of lipids, in which to transport the mRNA.9

Figure 2: The coronavirus. The RNA strands are cocooned inside the protein shell. The "spike glycoprotein" is a feature on the surface of the virus that allows it to attach itself to host cells. Many vaccines try to mimic this spike to train the immune system to recognize it and respond whenever it is detected in the body. If the live virus enters the body, the immune system then recognizes its spike and starts attacking it.

Moreover, since this is still a new concept in vaccine science, there is very little data about the efficacy of mRNA vaccines compared to traditional protein vaccines. Indeed, this pandemic will see the first wide-scale deployment of such a vaccine, and its success or failure will have large-scale implications for the approach's future potential. As the world waits with bated breath for the vaccine that might end this pandemic, that moment may also herald a new era in vaccine science.

REFERENCES

1. Dubner, S. (Host). (2020, August 26). Will a Covid-19 Vaccine Change the Future of Medical Research? (No. 430) [Audio podcast episode]. In Freakonomics Radio. Freakonomics. https://freakonomics.com/podcast/vaccine/
2. National Human Genome Research Institute. Messenger RNA (mRNA). In Talking Glossary of Genetic Terms. Retrieved December 12, 2020, from https://www.genome.gov/genetics-glossary/messenger-rna
3. Boopathi, S., Poma, A. B., & Kolandaivel, P. (2020). Novel 2019 coronavirus structure, mechanism of action, antiviral drug promises and rule out against its treatment. Journal of Biomolecular Structure and Dynamics, 1–10. https://doi.org/10.1080/07391102.2020.1758788
4. CK-12 Foundation. (2016). 7.13 Virus Replication. In CK-12 Biology Concepts. https://www.ck12.org/book/ck-12-biology-concepts/section/7.13/
5. Akpan, N. (2020, July 27). Moderna's mRNA vaccine reaches its final phase. Here's how it works. National Geographic Magazine. https://www.nationalgeographic.com/science/2020/05/moderna-coronavirus-vaccine-how-it-works-cvd/
6. AstraZeneca. (2020, September 9). Statement on AstraZeneca Oxford SARS-CoV-2 vaccine, AZD1222, COVID-19 vaccine trials temporary pause [Press release]. https://www.astrazeneca.com/media-centre/press-releases/2020/statement-on-astrazeneca-oxford-sars-cov-2-vaccine-azd1222-covid-19-vaccine-trials-temporary-pause.html
7. Peters, J. (2020, July 28). What Are the Advantages of an mRNA Vaccine for COVID-19? The Wire Science. https://science.thewire.in/the-sciences/what-are-the-advantages-of-an-mrna-vaccine-for-covid-19/
8. Pardi, N., Hogan, M. J., Porter, F. W., & Weissman, D. (2018). mRNA vaccines—A new era in vaccinology. Nature Reviews Drug Discovery, 17(4), 261–279. https://doi.org/10.1038/nrd.2017.243
9. Verbeke, R., Lentacker, I., De Smedt, S. C., & Dewitte, H. (2019). Three decades of messenger RNA vaccine development. Nano Today, 28, 100766. https://doi.org/10.1016/j.nantod.2019.100766

IMAGE REFERENCES

1. Figure 1: Gorry, P. R., McPhee, D. A., Verity, E., Dyer, W. B., Wesselingh, S. L., Learmont, J., Sullivan, J. S., Roche, M., Zaunders, J. J., Gabuzda, D., Crowe, S. M., Mills, J., Lewin, S. R., Brew, B. J., Cunningham, A. L., & Churchill, M. J. (2007). Pathogenicity and immunogenicity of attenuated, nef-deleted HIV-1 strains in vivo. Retrovirology, 4(1), 66. https://doi.org/10.1186/1742-4690-4-66
2. Figure 2: Kausalya. (2020). 3D medical animation still shot showing the structure of a coronavirus [Digital graphic]. Scientific Animations. https://www.scientificanimations.com/coronavirus-symptoms-and-prevention-explained-through-medical-animation/



Plastic: It's What's for Dinner BY EMILY PEARLMAN

When was the last time you used plastic? Was it the film over your microwaveable meal last night? A produce bag at the grocery store? A plastic water bottle? Plastic is everywhere, including many places it doesn't belong—tiny plastic particles have been detected in the oceans, the soil, and even the air.1 It's no secret that plastic waste is a major global problem, one we will have to address soon if we want to prevent serious environmental destruction. Enter Ideonella sakaiensis: a plastic-eating bacterium discovered in the soil outside a bottle recycling facility in Sakai, Japan. Identified in 2016, I. sakaiensis is one of the few organisms able to use plastic as its main carbon and energy source.2 Studying this bacterium provides insight into plastic biodegradation and promises innovative approaches to bioremediation and recycling.

So how does I. sakaiensis do it? Let's look at some of the key players. Polyethylene terephthalate (PET) is its meal of choice. PET is the most abundant polyester in the world, found in many common products like plastic bottles, packaging, and clothing.3 Like all plastics, PET is a long, chainlike molecule called a polymer, which is made up of repeating units called monomers. You can think of the monomers as beads which, when strung together, form a polymer necklace.

Figure 1: Breakdown of PET by PETase and MHETase. PETase breaks PET down into MHET, which then enters the cell, where MHETase breaks it down further into ethylene glycol and terephthalic acid. These monomers are funneled into metabolic pathways, allowing I. sakaiensis to extract energy and carbon for growth.
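The polymer-necklace metaphor in Figure 1 maps naturally onto code. The following is a toy sketch only (enzymatic degradation is a chemical process with rates and intermediates, not a list operation), meant to make the two-step division of labor explicit:

```python
# Toy model of Figure 1: PETase clips the PET "necklace" into MHET units,
# then MHETase splits each unit into its two component monomers.
# Illustration only; real enzyme kinetics are vastly more complex.
pet_polymer = ["MHET"] * 5  # a short PET chain of five repeating units

def petase(polymer: list[str]) -> list[str]:
    """Hydrolyze the chain into individual MHET units."""
    return list(polymer)

def mhetase(unit: str) -> list[str]:
    """Split one MHET unit into its two monomers."""
    return ["terephthalic acid", "ethylene glycol"]

monomers = [m for unit in petase(pet_polymer) for m in mhetase(unit)]
print(monomers)  # these monomers feed I. sakaiensis's metabolic pathways
```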



To break PET down into its component monomers, I. sakaiensis enlists the help of enzymes, biological catalysts that facilitate chemical reactions in cells. I. sakaiensis has two enzymes, PETase and MHETase, which work together to break the bonds connecting PET's monomers to each other. Once broken apart, these monomers can serve as inputs for metabolic pathways, allowing I. sakaiensis to extract carbon and energy for growth.2

Let this sink in: plastic, the poster child for indestructibility, can now be broken down in the environment. Paradigm-shifting as this is, it's not too difficult to see how I. sakaiensis may have acquired its plastic-degrading ability. We can look to PETase-related enzymes that break down natural polymers like cutin, which makes up the waxy coating on plant leaves. Researchers compared the 3D structures of PETase and closely related enzymes to pinpoint the specific features of PETase responsible for its superior PET-degrading ability.4 Their analysis suggests that PETase evolved from these related enzymes under the selective pressure created by the presence of PET in the environment. Given that PET only entered widespread use about fifty years ago, this evolution was rapid.3

Learning about the evolution of PETase raises a question: how can we build on the work evolution has already done and further improve the enzyme? Presently, PETase is prohibitively inefficient—it takes six weeks for I. sakaiensis to fully degrade a PET film.2 Optimizing PETase would enable its use in biocatalysis: the use of enzymes to catalyze industrial chemical reactions. Specifically, PETase has potential applications in reducing environmental microplastic pollution and in industrial plastic recycling.5

To optimize PETase, researchers employed a technique called rational protein engineering. They compared PETase to an enzyme with a desirable quality, then designed and evaluated PETase mutants that incorporated specific structural features of that enzyme. One group of researchers used this technique to improve the thermal stability of PETase, creating mutants that remain functional at higher temperatures and over a longer period of time than natural PETase.6 Thermal stability is a vital characteristic for biocatalysis because PET degradation is easier at higher temperatures.5 Another group designed PETase mutants that are better at degrading crystalline PET, a variety found in common products like plastic water bottles.3 These are promising steps in the quest for an improved PETase.

Figure 2: Comparison of the 3D structures of PETase (right) and a related bacterial enzyme, Cut190 (left). Important components of the enzymes are circled in red, blue, and green. Structural analyses like these allowed researchers to determine the features of PETase responsible for its superior PET-degradation ability, and provided insight into the evolution of PETase. Licensed under CC BY 4.0.

This research shows that PETase is amenable to improvement, but how would this look in practice? Rather than using the naked, purified enzyme to break down plastic, it's more practical to use a whole-cell biocatalyst: a yeast cell (or any other lab-ready microorganism) that has been tricked into expressing PETase on its surface. A whole-cell biocatalyst has higher PET-degradation activity than purified PETase under all tested conditions and for a longer period of time. Using a whole-cell biocatalyst is also more cost-effective than using a purified enzyme, because it retains catalytic activity through multiple rounds of reuse.7

Figure 3: A yeast whole-cell biocatalyst expressing PETase. PETase must be expressed extracellularly because PET is too large to enter the cell. Whole-cell biocatalysts expressing PETase could be applied to plastic degradation in wastewater treatment plants and industrial plastic recycling.

Wastewater treatment plants present a possible arena for implementing PETase whole-cell biocatalysts: because wastewater isn't currently treated to remove plastic, microplastics from household products and clothes flow all the way into the groundwater and the ocean. The microorganisms already used to treat wastewater could be modified to express PETase, greatly simplifying implementation. For whole-cell biocatalysis to be effective, it should be used in conjunction with rational protein engineering to optimize PETase for the specific physical, chemical, and biological conditions of the wastewater treatment system.8

Another possible application of biocatalysis is in industrial plastic recycling, which currently faces many challenges.9 Biocatalysis could be applied to chemical recycling: breaking PET down into its monomers, then using those monomers to synthesize new polymers.10 This would contribute to the creation of a circular economy for plastic, greatly reducing the use of fossil fuel feedstocks and decreasing the amount of plastic waste in landfills and the environment.11

This is not a silver-bullet solution—we still have a long way to go. PETase only degrades one type of plastic and is still relatively inefficient. Future work should focus on broadening its substrate specificity so that it can be applied to more types of plastic.6 "Non-hydrolyzable" plastics, such as polyethylene, present an especially daunting challenge because of the tough C–C bonds in their backbones.12 As bioplastics like polyethylene furanoate gain footing in the plastics economy, it will be important to engineer enzymatic systems that can break them down as well.3 To discover enzymes that can degrade other types of plastic, we can employ environmental screening methods like the one used to discover I. sakaiensis.8 Finally, before we put these enzymes to use for bioremediation, we must do more research into ecosystem safety and unintended effects.6

Despite these limitations, PETase shows great promise for industrial application, as well as potential for improvement. Tackling the insidious problem of plastic waste will require innovation on many fronts, and biocatalytic degradation can be a valuable contributor to the solution. Science often looks to nature for inspiration. Sometimes, its inspiration borders on appropriation—we stole penicillin from fungi and aspirin from willow trees. Why not give I. sakaiensis a shot?


REFERENCES

1. Padervand, M., Lichtfouse, E., Robert, D., & Wang, C. (2020). Removal of microplastics from the environment. A review. Environmental Chemistry Letters, 18(3), 807–828. https://doi.org/10.1007/s10311-020-00983-1
2. Yoshida, S., Hiraga, K., Takehana, T., Taniguchi, I., Yamaji, H., Maeda, Y., Toyohara, K., Miyamoto, K., Kimura, Y., & Oda, K. (2016). A bacterium that degrades and assimilates poly(ethylene terephthalate). Science, 351(6278), 1196–1199. https://doi.org/10.1126/science.aad6359
3. Austin, H. P., Allen, M. D., Donohoe, B. S., Rorrer, N. A., Kearns, F. L., Silveira, R. L., Pollard, B. C., Dominick, G., Duman, R., El Omari, K., Mykhaylyk, V., Wagner, A., Michener, W. E., Amore, A., Skaf, M. S., Crowley, M. F., Thorne, A. W., Johnson, C. W., Woodcock, H. L., … Beckham, G. T. (2018). Characterization and engineering of a plastic-degrading aromatic polyesterase. Proceedings of the National Academy of Sciences, 115(19), E4350–E4357. https://doi.org/10.1073/pnas.1718804115
4. Joo, S., Cho, I. J., Seo, H., Son, H. F., Sagong, H.-Y., Shin, T. J., Choi, S. Y., Lee, S. Y., & Kim, K.-J. (2018). Structural insight into molecular mechanism of poly(ethylene terephthalate) degradation. Nature Communications, 9(1). https://doi.org/10.1038/s41467-018-02881-1
5. Kawai, F., Kawabata, T., & Oda, M. (2019). Current knowledge on enzymatic PET degradation and its possible application to waste stream management and other fields. Applied Microbiology and Biotechnology, 103(11), 4253–4268. https://doi.org/10.1007/s00253-019-09717-y
6. Son, H. F., Cho, I. J., Joo, S., Seo, H., Sagong, H.-Y., Choi, S. Y., Lee, S. Y., & Kim, K.-J. (2019). Rational protein engineering of thermo-stable PETase from Ideonella sakaiensis for highly efficient PET degradation. ACS Catalysis, 9(4), 3519–3526. https://doi.org/10.1021/acscatal.9b00568
7. Chen, Z., Wang, Y., Cheng, Y., Wang, X., Tong, S., Yang, H., & Wang, Z. (2020). Efficient biodegradation of highly crystallized polyethylene terephthalate through cell surface display of bacterial PETase. Science of The Total Environment, 709, Article 136138. https://doi.org/10.1016/j.scitotenv.2019.136138
8. Zurier, H. S., & Goddard, J. M. (2021). Biodegradation of microplastics in food and agriculture. Current Opinion in Food Science, 37, 37–44. https://doi.org/10.1016/j.cofs.2020.09.001
9. d'Ambrière, W. (2019). Plastics recycling worldwide: Current overview and desirable changes. Field Actions Science Reports, Special Issue 19, 12–21. http://journals.openedition.org/factsreports/5102
10. Taniguchi, I., Yoshida, S., Hiraga, K., Miyamoto, K., Kimura, Y., & Oda, K. (2019). Biodegradation of PET: Current status and application aspects. ACS Catalysis, 9(5), 4089–4105. https://doi.org/10.1021/acscatal.8b05171
11. Ellen MacArthur Foundation. (2016). The new plastics economy: Rethinking the future of plastics & catalysing action. https://www.ellenmacarthurfoundation.org/assets/downloads/publications/NPEC-Hybrid_English_22-11-17_Digital.pdf
12. Krueger, M. C., Harms, H., & Schlosser, D. (2015). Prospects for microbiological solutions to environmental pollution with plastics. Applied Microbiology and Biotechnology, 99(21), 8857–8874. https://doi.org/10.1007/s00253-015-6879-4

IMAGE REFERENCES

1. Figure 1: Bornscheuer, U. T. (2016). Delicious plastic [Digital image]. Science. Reprinted with permission. https://doi.org/10.1126/science.aaf2853
2. Figure 2: Kawai, F. (2019). Comparison of overall 3D structures of Cut190* and PETase [Digital image]. Applied Microbiology & Biotechnology. Reprinted with permission. https://doi.org/10.1007/s00253-019-09717-y
3. Figure 3: Chen, Z., Wang, Y., Cheng, Y., Wang, X., Tong, S., Yang, H., & Wang, Z. (2020). PETase whole-cell biocatalyst [Digital image]. Reprinted with permission. https://doi.org/10.1016/j.scitotenv.2019.136138



Dr. Hitoshi Murayama is a professor in the Department of Physics at the University of California, Berkeley as well as the founding director of the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo. In October 2014, he gave a speech at the United Nations headquarters in New York about how science unites people and brings peace. He received the Yukawa Commemoration Prize in Theoretical Physics and is a Fellow of the American Physical Society and the American Academy of Arts and Sciences. In this interview, we discuss Dr. Murayama’s work at the intersection of theoretical particle physics, string theory, and cosmology.

Why Are We Here? A Journey Into the Quantum Universe
Interview with Dr. Hitoshi Murayama
BY AMAR SHAH, MICHELLE YANG, AND ELETTRA PREOSTI

BSJ: How did your early experiences in Japan shape your career and motivations? What led you to practice physics in both the United States and Japan?

HM: It started when I was a little kid. I had really bad asthma, so every year, I missed quite a few days of school. When I was at home, I did not have anything to do but watch TV, and on TV, there were some very interesting educational programs. These programs are what got me interested in science and math. Then, throughout university, I had a very keen interest in the area of physics that I am working in right now, namely at the interface of theoretical particle physics and cosmology. It turned out that, back in those days, this was not a very active area of physics in Japan, so I did not really get to further my studies in this area. I was very frustrated, and I felt isolated. I made up my mind that I needed to go to the United States after I got my PhD so that I could really work on what I wanted to. Berkeley was the place for it.

BSJ: Your lecture, "The Quantum Universe," focuses on how the universe came to be, starting with the Big Bang. Can you explain what the Big Bang is and what happened during it?

HM: Nobody knows, but that is something we would obviously like to understand. It is actually what got me started in physics. We have, however, learned quite a bit about what has happened since the Big Bang. While it has not yet been proven, we believe that immediately after the Big Bang, the universe was even smaller than the size of an atomic nucleus. Then, through inflation, that tiny microscopic universe was stretched out to the macroscopic universe we see today. After that, the huge amount of energy inflation used to stretch out the universe was turned into hot, thermal energy. When the universe was three minutes old, it started to put together protons and neutrons to build atomic nuclei. Much later, when the universe was about three hundred and eighty thousand years old, it became transparent, and light started to move freely for the first time. It took another several hundred million years for the first star to be born. Then, as more stars were born, galaxies and clusters of galaxies began to form, until our solar system was born. We were born. That is how things evolved. So a lot happened, especially at the beginning.



Figure 1: A timeline of the Big Bang.2 In the public domain.

BSJ: Can you explain the "Big Rip" model?

HM: We discovered that the universe is expanding back in the 1930s. However, it was not until much later that Berkeley physicist Saul Perlmutter discovered something totally mind-boggling: the universe is accelerating. Therefore, there must be something, which we now call dark energy, that is pushing the universe to make it accelerate. The idea of the Big Rip is that if the universe keeps accelerating, at some point it will expand infinitely quickly. An infinitely fast expansion means everything gets ripped apart, down to elementary particles. In summary, the universe becomes infinitely big and infinitely thin, and that is when it will end.
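As a supplement to the interview (a standard cosmology result, not something Dr. Murayama states here), the Big Rip can be made quantitative. For a universe dominated by "phantom" dark energy with constant equation-of-state parameter w < -1, the Friedmann equations give a scale factor that blows up at a finite time:

```latex
% Standard phantom dark energy result (supplementary; not from the interview).
% For dark energy with pressure p = w\rho and constant w < -1,
% the scale factor of the universe behaves as
a(t) \;\propto\; \left(t_{\mathrm{rip}} - t\right)^{\frac{2}{3(1+w)}},
\qquad w < -1 ,
% and since the exponent is negative, a(t) \to \infty as t \to t_{\mathrm{rip}}:
% expansion becomes infinitely fast at a finite time, tearing bound structures
% apart. This is the "infinitely big and infinitely thin" end state.
```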

BSJ: One of the most fundamental particles in the universe is the Higgs boson. What prompted the conceptualization of the Higgs boson prior to its discovery?

HM: The discovery of the Higgs boson has to do with something called the weak force, which causes the radioactive decay of nuclear isotopes, emitting electrons and antineutrinos. The weak force is very much like the electromagnetic force. However, unlike the electromagnetic force, the weak force can only act over very, very tiny distances, about a thousandth the size of a nucleus. The electromagnetic force, on the other hand, is a long-range force that can act over thousands of kilometers. For example, if you hold a compass, the needle will point towards the North Pole due to the magnetic field of the Earth. But despite acting over different ranges, the weak force and the electromagnetic force are the same kind of force. So why do these same forces behave in such a different way? The answer to this question can be found in condensed matter physics and, in particular, superconductivity. Superconductivity is a phenomenon that occurs when cooling a material, such as niobium, results in a complete loss of electrical resistance in that material. When you apply a magnetic field to these superconductive materials, the electromagnetic force can only act over very small distances (about tens of microns). So the electromagnetic force, which usually acts over a long distance, becomes short-range inside a superconducting material. This caused people to think that the universe could be behaving in the same way. The reason the weak force acts over such a short range may be that the entire universe is like a superconductor, causing the weak force to act only over short ranges. If this is true, then there must be something frozen in empty space that causes the weak force to be short-range and electrons to move slowly. That is how people came up with the idea of the Higgs boson.



BSJ: How was the Higgs boson confirmed experimentally?

HM: We know that Higgs bosons are everywhere in empty space and that they are very tightly packed together. Thus, if you whack empty space with a big hammer, a boson should pop up. In this scenario, the role of the big hammer was played by the Large Hadron Collider, which causes proton collisions that concentrate a huge amount of energy in a tiny portion of space. When a Higgs boson pops up from such a collision, it can decay into two photons, and those resulting photons are how the Higgs boson was observed.

Figure 2: An illustration of the Higgs boson.3 In the public domain.

BSJ: What is the importance of the International Linear Collider?

HM: While we have been able to discover the Higgs boson using the Large Hadron Collider, we do not have a very clear picture of it. This is because colliding protons against protons is a very messy process. It is sort of like throwing cherry pie against cherry pie. Goo will fly out. In the midst of this messy goo, what we are really looking for is the collision between cherry pits. The Higgs boson is unlike any particle we have seen before. It is the most important particle we know, but it is also the most bizarre particle we know. We would like to get to know this particle better. Does it have siblings of the same type? Is it the first of its kind? Does it have a whole tribe behind it? That is why we would like to take a better image of the Higgs boson. We also hope that the International Linear Collider can serve the purpose of finding dark matter in the universe. We actually have a lot of high hopes for that.

BSJ: Some of your own research has brought us closer to understanding dark matter. A critical part of this research is the Strongly Interacting Massive Particle (SIMP). Can you explain, in simple terms, what the SIMP is?

HM: Many physicists now believe that dark matter is composed of a new kind of particle, meaning that it is composed of point-like objects. When these come together, they are pulled towards each other by gravity, but because they are so tiny, they never interact amongst themselves. They also never interact with us, so there is no way of knowing what they are. But the idea of the SIMP is that dark matter is actually a composite object, like the cherry pies I mentioned earlier. If dark matter "objects" have size, they may be hitting each other, and us, at some level. Dark matter being able to interact with itself would in turn help us understand small galaxies called dwarf galaxies, because we know that something—likely dark matter—has to be holding galaxies together. The SIMP would then serve two purposes: to help us understand dark matter and to explain behaviors in dwarf galaxies.

BSJ: What is resonant self-interaction, and what is its importance in furthering our understanding of dark matter?

HM: One of the keys to understanding dark matter lies in knowing why dark matter particles hit each other more frequently when they move very slowly and less frequently when they move very quickly. The answer to this phenomenon lies in resonance, the idea that when an object vibrates at nearly the same natural frequency as a second object, the two will resonate with each other and produce a very clean sound. Dark matter particles can only collide with each other when they are resonating. Thus, in systems like dwarf galaxies where everything is moving slowly, particles achieve resonance and hit each other very often. However, if you look at clusters of galaxies where everything is moving very quickly, there is less resonance between dark matter particles, and they do not hit each other as much.

BSJ: How does symmetry play a role in your model of self-interaction resonance?

HM: Symmetry plays a very important role not only in self-interaction resonance, but in almost all of physics. In fact, Emmy Noether, a German mathematician, showed that symmetry in systems leads to conservation laws. For instance, the principle of conservation of energy holds because of translational invariance in time. Time has no origin, and that is a symmetry; when time is shifted, nothing changes. For example, it does not make a difference whether you conduct an experiment at six in the morning or seven in the evening. The results will be the same. Thus, when we analyze complicated systems like self-interaction resonance, we cannot perform calculations very well unless we focus on the symmetries of the system. This will in turn lead to conservation laws. Once you have conservation, you can have a handle on how to study these complicated phenomena.
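As a supplement (a textbook statement of Noether's theorem, not part of the interview itself), the time-translation symmetry Dr. Murayama describes connects to energy conservation as follows:

```latex
% Textbook form of Noether's theorem for time translations
% (supplementary illustration; not from the interview).
% For a single coordinate q with Lagrangian L(q, \dot{q}) that has no
% explicit time dependence, \partial L / \partial t = 0, the energy
E \;=\; \dot{q}\,\frac{\partial L}{\partial \dot{q}} \;-\; L
% satisfies dE/dt = 0 along solutions of the Euler-Lagrange equations:
% running the experiment at six in the morning or seven in the evening
% cannot change the physics, and energy conservation is the consequence.
```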

Figure 3: Particles of the SM of particle physics.4 In the public domain.

BSJ: We also looked at your research exploring whether the Standard Model (SM) of particle physics, possibly with some extensions, is in the Swampland or not. Can you briefly explain the SM?

HM: The SM is an amazing theory that pretty much explains everything we know in the universe today. The theoretical foundations of the SM began with the discovery of the electron at the end of the 19th century. Then, as we discovered more particles and how they interact with each other, we were able to build a complete framework for the SM. In some cases, the SM has been tested so rigorously that you can make predictions of certain observables, like the electron magnetic moment, to up to twelve significant digits.

BSJ: What is the Swampland?

HM: The term Swampland was originally put forth by theoretical physicist Hiroshi Ooguri to describe physical theories that cannot be explained using string theory, a theory that purportedly unifies quantum mechanics and gravity. If true, string theory would be the ultimate theory of everything. Initially, Ooguri created the Swampland after discovering that, according to string theory, the amount of dark energy in the universe is decreasing, so the universe should be decelerating. However, we know that the universe is accelerating. Thus, we say that the acceleration of the universe is in the Swampland, where you do not want to be, instead of the Landscape, a beautiful place where you do want to be.

BSJ: What does UV completion mean in the context of the Swampland?

HM: Atoms are made up of nuclei and electrons. Nuclei are in turn made up of protons and neutrons, and protons and neutrons are made up of quarks. So, as you can see, we have made progress in discovering smaller and smaller particles. UV completion, then, is the idea that you are trying to formulate a theory that works at smaller and smaller distances. In the context of the Swampland, we are trying to bring together two pillars of modern physics which do not mesh well with each other: quantum mechanics and relativity. The hope is to create a UV-complete theory that brings together quantum mechanics and relativity and works over very short distances. In this way, we may even be able to understand the very beginning of the universe.

BSJ: What makes an effective low-energy field theory?

HM: Once we have a UV-complete theory that works over short distances, we can make predictions on a microscopic level. However, we would also like to make predictions on a macroscopic scale. A low-energy effective field theory is how we can achieve this; it would explain why things behave the way they do at much longer distances.


BSJ: Is the SM in the Swampland?

HM: The SM is an incredibly successful theory, but it has its problems. It does not explain certain phenomena in the universe. For example, the Higgs boson, which we know to exist, cannot be explained through the SM. From what we understand, the SM is not UV complete, as we do not think it applies over very short distances. Therefore, it is currently in the Swampland, but we believe that there must be something we can add to it to make it a complete theory.

BSJ: What is charge parity symmetry, and what is the charge parity problem?

HM: First, we must ask, "Why is there matter in the universe at all?" The Big Bang must have produced the same amount of matter and antimatter. Then, by charge parity symmetry, you can swap matter with antimatter. But we also know that when matter and antimatter particles meet, they annihilate each other. Thus, if charge parity symmetry held exactly, we would not be here today. That is why there is a lot of research currently going on to look for why charge parity symmetry is somehow a little bit broken.

BSJ: What are some solutions to the charge parity problem?

HM: One of the best solutions we have to the charge parity problem is neutrinos. Since the universe began with equal amounts of matter and antimatter, at some point the balance between the two must have changed. It turns out that there is only one type of matter particle in the SM that has zero electrical charge: the neutrino. A neutrino is a matter particle, and an antineutrino is an antimatter particle. Since both are neutral and do not have electrical charges, the neutrino and antineutrino may transform into each other. Because of this, we expect that neutrinos probably played an important role in reshuffling matter and antimatter so that we could survive the Big Bang. In this case, the charge parity violation is among the neutrinos and not among the quarks that build up the neutrons. In fact, some of my current research focuses on detecting the transformation between matter and antimatter by searching for evidence of gravitational waves coming from the early universe.

BSJ: Throughout the course of your career, you have conducted research in a variety of fields relating to modern physics. Do you think we are any closer to having a Grand Unified Theory?

HM: We have already made progress through the SM. Thanks to the discovery of the Higgs boson, we have been able to unify the weak force with the electromagnetic force. The next step is to unify the strong force together with the weak force and the electromagnetic force. To do this, we must look for proton decay. As far as we can tell, the protons in an atomic nucleus seem stable, but over a long period of time, they may disintegrate into photons and positrons. If proton decay does indeed occur, then ultimately every form of matter is unstable. That is a very scary idea. This process, however, does not happen over a short period of time, since we know that protons live for at least 10³⁴ years. In summary, evidence of proton decay would be actual evidence for grand unification. The Grand Unified Theory comes once you manage to unify the strong force with the other forces. Then, you can turn a quark into an electron and an electron into a quark. That is a process that, again, we have never seen before, but it is a prediction of the Grand Unified Theory.

BSJ: What is the most important concept or idea about the universe that you think people should take away from this interview?

HM: Physics asks the question: why do we exist, and how did we come to be? It is the kind of question that I hope would resonate with anyone, including students in the humanities who may have come to hate physics in undergraduate classes. But that is the essence of physics. It is actually a lot of fun, and in some sense it is of very personal interest to us, as it tries to understand where we come from. For example, over time, we have learned that our bodies actually came from the stars. The carbon, calcium, and iron that your body needs actually come from old stars, which have since exploded. When they blew up, they dispersed many, many different kinds of atoms into empty space, which eventually became dust. In summary, there are concrete things we understand and concrete questions we have yet to answer. That is what we are working towards through physics research. It is a very exciting journey, and it is really fascinating. That is the take-home message.

REFERENCES

1. Hitoshi Murayama [Photograph]. http://hitoshi.berkeley.edu
2. NASA/WMAP Science Team. (2006). Timeline of the Universe [Digital graphic]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:CMB_Timeline300_no_WMAP.jpg
3. CMS Collaboration. (2011). Candidate events in the CMS Standard Model Higgs Search using 2010 and 2011 data [Digital graphic]. CERN. https://cds.cern.ch/images/CMS-PHO-EVENTS-2011-010-12
4. Dominguez, D. (2015). Standard Model [Digital graphic]. CERN. https://cds.cern.ch/images/OPEN-PHO-CHART-2015-001-1/



Microrobots: Bridging the Neuronal Gap, One Micron at a Time BY NATALIE SLOSAR

Weighing in at a little over three pounds, the mass of jellylike tissue residing in our skulls is nearly everything that makes us human. It is here that we process sights and sounds, feel emotions such as love or hate, and recall a friend's favorite ice cream flavor. Despite its simple, globular appearance, our brain is the most complex organ nature has created—it contains 86 billion nerve cells, each of which must migrate to its proper location, differentiate into the correct type of neuron, and eventually die naturally.1 After differentiation, each of these neurons begins to make contact with thousands of others through junctions called synapses.2 Thanks to these constantly changing connections, thoughts are formed, memories are stored, and habits and personalities are shaped. Understanding the functions of the human brain—the crown jewel of the human body—is perhaps the most daunting task faced by modern biology.

For centuries, the intricate network of electrically pulsating nerve cells baffled scientists. But after Sir Charles Sherrington proposed the concept of synapses (1932) and Alan Hodgkin showed how neurons communicate electrochemically (1963), neuroscience research exploded.3 Over the past ten years, neurogenetics, brain mapping, and the discovery of a malleable brain have significantly advanced our collective knowledge of the structure and function of parts of the brain.4 But the exact systems governing specific synaptic connections have yet to be fully understood.

A recent and increasingly popular way to model synaptic connections is to grow a brain from nerve cells. Prior research has shown that it is possible to grow a neural cell network on a plate in the lab, but we have yet to mimic the brain's function completely.5 A fully functional neural network requires selective neural connections at specified locations—a difficult task, given that the soma, or cell body, of a typical central nervous system neuron is less than 18 micrometers in diameter.6 To further complicate matters, once such a network is established, scientists need a method of measuring the neuronal activity at synapses in order to determine how neurons communicate.7 In simplified terms: scientists require a way to position neurons, make them grow in a desired direction, and measure their connectivity.

Scientists often run into obstacles engineering their hodgepodge of petri dish neurons to grow and interconnect the way they intend. Multiple groups have attempted to manipulate the patterns and direction in which nerve cells grow, using complicated grids, linear and circular micropatterns, and microplates to direct neuronal growth.8,9 Unfortunately, neurons are often unable to approach these small and complex sites, and many of these operations limit neuronal cell growth, leaving scientists with no way of effectively studying how neurons interconnect and communicate.10 Last month, however, a research group based at the Daegu Gyeongbuk Institute of Science & Technology proposed an ingenious solution: microrobots.11 These microrobots essentially function as moveable bridges. Their job is to move in between groups of neuron clusters, bridge the gap, and measure the connectivity between the neurons growing on them. As seen in Figure 1, the microrobots are small—about 300 micrometers—and are lined with slender horizontal grooves.7 These grooves are the same width as a neurite (a special term for axons or dendrites, the projections of a nerve cell). Pulling from a 2016 study on directing neuronal growth, the South Korean research group realized that channels the same width as neurites could guide neuron growth in a desired direction: neurons grow more easily on a substrate with these microgrooves than on one without.10

Figure 1. The microrobots, with small grooves and gently sloping sides to aid neuronal growth. Licensed under CC BY-NC 4.0.

The microrobots were made with metal-oxide nanoparticles so that they could be steered by rotating magnetic fields. They also feature gently sloping sides that allow neurites to grow smoothly from the microrobot onto the surrounding substrate and vice versa.7 The researchers then placed the microrobot and neuronal clusters on an MEA (multielectrode array) chip, a device that can measure axonal signal transmission.11 This allowed the researchers to access the activity of individual nerve cells and simultaneously record the electrical activity of thousands of cells. As seen in Figure 2, the researchers then manipulated the microrobot to "swim" to the space between two neuronal clusters. As hoped, the neurons in the two cell clusters began growing on the perfectly sized grooves of the robot. By recording the electrical activity on the microrobot with the MEA chip, the researchers deduced that the two neuron groups had joined together and begun interacting with each other.

Figure 2. In this experiment, the researchers sought to manipulate the microrobot to bridge two cell groups (boxes outlined in faint dashed lines). The microrobot was controlled wirelessly by magnetic fields to "swim" to the target point. The green dashed boxes indicate the position of the microrobot at different timestamps. Licensed under CC BY-NC 4.0.

These results, shown in Figure 3, were incredibly exciting: the researchers saw that it was possible to connect separate neuronal clusters in vitro, both functionally and morphologically, and to study how they communicated upon connecting. This group has introduced a completely new way of studying neuronal networks: growing neurons on a carefully controlled substrate that can measure the electrical activity between them. By studying the way neuron groups grow and connect, we simplify the complexities of the brain and gain an in vitro insight into what is happening in vivo.

Figure 3. The results of using the microrobot to bridge two neural clusters are seen in graph D, while graph B represents the control group. Researchers used height-coding fluorescence micrographs to analyze the differences between a plain surface and one with a microrobot. Licensed under CC BY-NC 4.0.
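As a rough, hypothetical illustration of the kind of inference involved (not the authors' actual analysis pipeline), functional coupling between two recorded clusters can be probed by asking whether one cluster's spikes consistently follow the other's at a fixed delay:

```python
import numpy as np

# Hypothetical sketch: infer functional coupling between two clusters from
# binned MEA spike counts. We simulate 1-ms-binned spike trains, 10 s long,
# where cluster B partly echoes cluster A 5 ms later, mimicking a bridged pair.
rng = np.random.default_rng(0)
a = (rng.random(10_000) < 0.02).astype(float)           # cluster A spikes
noise = (rng.random(10_000) < 0.01).astype(float)       # unrelated activity
b = np.clip(np.roll(a, 5) + noise, 0, 1)                # cluster B spikes

def cross_correlogram(x, y, max_lag=20):
    """Normalized correlation of y against x at each lag (in 1-ms bins)."""
    lags = np.arange(-max_lag, max_lag + 1)
    x0, y0 = x - x.mean(), y - y.mean()
    denom = np.sqrt((x0 * x0).sum() * (y0 * y0).sum())
    # At lag k, compare x at time t with y at time t + k.
    return lags, np.array([(x0 * np.roll(y0, -k)).sum() / denom for k in lags])

lags, cc = cross_correlogram(a, b)
print("peak lag (ms):", lags[np.argmax(cc)])  # ~5: B follows A by about 5 ms
```

A sharp correlogram peak at a consistent nonzero lag, as in this simulated connected pair, is the kind of signature that distinguishes interacting clusters from independent ones; flat correlograms would be expected for the unbridged control.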

Coupling human neurons with this microrobot may yield accurate and highly applicable results for the study of neurological disease. With a greater understanding of these signaling mechanisms, scientists may develop more precise therapies to target the fundamental causes of disease. For instance, individuals afflicted with Parkinson's disease have been shown to have abnormally weak synaptic connections in regions of the brain essential for voluntary movement. This results in impaired brain plasticity, or an inability of neuronal networks in the brain to change through growth and reorganization, as well as a decrease in the release of the neurotransmitter dopamine.13,14 Low dopamine causes many of the symptoms of Parkinson's, such as tremors and slow movement. Using microrobots to mimic corticostriatal synaptic connections would give us greater insight into the causes behind the abnormal connections, as well as a platform to develop and test therapies.7 And the same process could apply to numerous other neurological disorders, such as Alzheimer's disease or schizophrenia.

Instead of testing a drug on animals and merely speculating about its effects by studying brain scans or behavioral changes, imagine analyzing its direct effects on human neurons. Imagine being able to see how a disease or therapy changes the speed at which neurons grow, how they grow, and how they communicate. The possibilities are endless for neurological disease prevention and treatment research—all thanks to these tiny robots.

REFERENCES

1. Azevedo, F. A. C., Carvalho, L. R. B., Grinberg, L. T., Farfel, J. M., Ferretti, R. E. L., Leite, R. E. P., Filho, W. J., Lent, R., & Herculano‐Houzel, S. (2009). Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain. Journal of Comparative Neurology, 513(5), 532–541. https://doi.org/10.1002/cne.21974
2. National Institute of Neurological Disorders and Stroke. (2019, December 16). Brain basics: The life and death of a neuron. National Institutes of Health. https://www.ninds.nih.gov/Disorders/Patient-Caregiver-Education/life-and-death-neuron
3. Queensland Brain Institute. (2019, January). Understanding the brain: A brief history. The Brain Series: Intelligent Machines, 4. https://qbi.uq.edu.au/intelligentmachines
4. Calderone, J. (2014, November 6). 10 Big Ideas in 10 Years of Brain Science. Scientific American. https://www.scientificamerican.com/article/10-big-ideas-in-10-years-of-brain-science/
5. Simmons, H. (2019, February 26). Lab-grown neurons. News-Medical.Net. https://www.news-medical.net/life-sciences/Lab-Grown-Neurons.aspx
6. Chudler, E. H. (n.d.). Brain Facts and Figures. University of Washington. https://faculty.washington.edu/chudler/facts.html
7. Kim, E., Jeon, S., An, H.-K., Kianpour, M., Yu, S.-W., Kim, J., Rah, J.-C., & Choi, H. (2020). A magnetically actuated microrobot for targeted neural cell delivery and selective connection of neural networks. Science Advances, 6(39), eabb5696. https://doi.org/10.1126/sciadv.abb5696
8. Greene, A. C., Washburn, C. M., Bachand, G. D., & James, C. D. (2011). Combined chemical and topographical guidance cues for directing cytoarchitectural polarization in primary neurons. Biomaterials, 32(34), 8860–8869. https://doi.org/10.1016/j.biomaterials.2011.08.003
9. Roth, S., Bugnicourt, G., Bisbal, M., Gory‐Fauré, S., Brocard, J., & Villard, C. (2012). Neuronal architectures with axo-dendritic polarity above silicon nanowires. Small, 8(5), 671–675. https://doi.org/10.1002/smll.201102325
10. Magdesian, M. H., Lopez-Ayon, G. M., Mori, M., Boudreau, D., Goulet-Hanssens, A., Sanz, R., Miyahara, Y., Barrett, C. J., Fournier, A. E., Koninck, Y. D., & Grütter, P. (2016). Rapid mechanically controlled rewiring of neuronal circuits. Journal of Neuroscience, 36(3), 979–987. https://doi.org/10.1523/JNEUROSCI.1667-15.2016
11. Cai, L., Zhang, L., Dong, J., & Wang, S. (2012). Photocured biodegradable polymer substrates of varying stiffness and microgroove dimensions for promoting nerve cell guidance and differentiation. Langmuir, 28(34), 12557–12568. https://doi.org/10.1021/la302868q
12. Bakkum, D. J., Frey, U., Radivojevic, M., Russell, T. L., Müller, J., Fiscella, M., Takahashi, H., & Hierlemann, A. (2013). Tracking axonal action potential propagation on a high-density microelectrode array across hundreds of sites. Nature Communications, 4(1), 2181. https://doi.org/10.1038/ncomms3181
13. Gerfen, C. R., & Surmeier, D. J. (2011). Modulation of striatal projection systems by dopamine. Annual Review of Neuroscience, 34(1), 441–466. https://doi.org/10.1146/annurev-neuro-061010-113641
14. Akopian, G., & Walsh, J. P. (2006). Pre- and postsynaptic contributions to age-related alterations in corticostriatal synaptic plasticity. Synapse, 60(3), 223–238. https://doi.org/10.1002/syn.20289

IMAGE REFERENCES

1. Banner: Choi, H. (2020). Nerve cells (colored blue and green in this microscope image) grow along thin grooves of a microrobot that scientists control with magnetic fields [Microscope image]. Retrieved from https://www.sciencenews.org/article/magnetic-robots-nerve-cells-connections-brain-injury
2. Figure 1: Kim, E., Jeon, S., An, H.-K., Kianpour, M., Yu, S.-W., Kim, J., Rah, J.-C., & Choi, H. (2020). Schematic illustration and fabrication process of a magnetically actuated microrobot for neural networks (D) [Microscope image]. Retrieved from https://doi.org/10.1126/sciadv.abb5696
3. Figure 2: Kim, E., Jeon, S., An, H.-K., Kianpour, M., Yu, S.-W., Kim, J., Rah, J.-C., & Choi, H. (2020). Magnetic manipulation of the microrobot on a glass substrate with an array of neural clusters [Microscope image]. Retrieved from https://doi.org/10.1126/sciadv.abb5696
4. Figure 3: Kim, E., Jeon, S., An, H.-K., Kianpour, M., Yu, S.-W., Kim, J., Rah, J.-C., & Choi, H. (2020). Hippocampal neural connections between neural clusters with and without the microrobot at 17 DIV (B, D) [Microscope image]. Retrieved from https://doi.org/10.1126/sciadv.abb5696



Beyond the Racetrack: The Perfect Formula for a Ventilator
Interview with Simone Resta
BY ELETTRA PREOSTI

Simone Resta is Head of Chassis Area at Scuderia Ferrari, the Formula 1 racing team of luxury Italian automobile manufacturer Ferrari. He obtained a master’s degree in Mechanical Engineering from the University of Bologna. Shortly after, he began his career in Formula 1 with Minardi’s automobile team, joining Scuderia Ferrari as a member of the Design Office three years later in 2001. Apart from a brief experience with Alfa Romeo that lasted from August 2018 to May 2019, he has remained with the team for nearly 20 years, where he has worked on some of the most meaningful projects of his career. In this interview, we discuss Simone Resta’s work leading a team of engineers from Scuderia Ferrari to design the FI5 pulmonary ventilator in collaboration with the Istituto Italiano di Tecnologia (IIT) to combat COVID-19.

BSJ: How did your early experiences growing up in Italy shape your interest in Mechanical Engineering, and what led you to pursue a career in Formula 1?

SR: I was born and raised in Imola, an area with a lot of Formula 1 history. You might know that there is a circuit, the Dino and Enzo Ferrari circuit, in Imola. So, Formula 1 was a big part of the local culture growing up, making it a passion of mine since I was a child. Additionally, when I was younger, my dad owned a small laboratory that specialized in precision mechanics. During the summers, I would help him by working in the laboratory. I was working with my hands, tools, machines, and so on to develop mechanical parts. That was probably where my passion for mechanics started to grow. Eventually, I went to university to pursue a degree in Mechanical Engineering. A career in Formula 1 then brought my passions for mechanical engineering, cars, and Formula 1 together.

BSJ: How did the collaboration between Ferrari and IIT to design pulmonary ventilators in response to COVID-19 begin?

SR: Ferrari and IIT are two leading institutions in Italy, and there is actually a running collaboration between these two entities. This time, IIT asked Ferrari to support them on the FI5 Pulmonary Ventilator Project as they were looking to fast-track this project. The goal was to design and produce something very, very quickly in response to the COVID-19 pandemic that was, at the time, not only rapidly growing in Italy, but all over the world.

BSJ: What are some key characteristics of the FI5 Pulmonary Ventilator that are beneficial to patients with COVID-19?

SR: I think that the most important characteristics of this product are its affordable production costs, accessibility, reliability, and simplicity. To begin, the product is simple and reliable, equipped with the minimum number of functions required to fit its overall purpose. This helps lower the cost of production of the FI5 Pulmonary Ventilator, which is one order of magnitude less than ventilators currently available on the market. Additionally, its components can be easily purchased or manufactured locally. This is especially significant, given the fact that we created this ventilator as an open source project. Thus, our product is something that is very easy to produce by anyone worldwide who is interested or in need. Despite its minimum number of functions, the FI5 ventilator still has multiple applications as it can be used not only with a standard, non-invasive mask, but also a full face mask or helmet, which can be more comfortable for patients. In other words, the product was designed to be as generic as possible, given cost constraints, in order to satisfy multiple uses.

BSJ: What is the general structure of the FI5 Pulmonary Ventilator, and how does it work to effectively pump oxygen into a patient’s lungs?

SR: The ventilator is composed of a pneumatic circuit controlled by valves and managed by a computer. Operators can program the system through several interfaces in order to adjust the system’s frequency, released mass flow rate, and composition. This effectively puts an Intensive Care Unit (ICU) inside the FI5 ventilator. The ventilator then manages the air flow from hospital lines. It is important to note, however, that the ventilator has been designed in such a way that it must be connected to a hospital line for air; it cannot be used off-site, as it is not a fully autonomous system. So, ultimately, it is a machine that takes the fluids from the hospital lines and releases them using parameters (frequency, amplitude, and composition) controlled by the provider.

Figure 1: Full prototype of the FI5 Ventilator (left). Open enclosure of the FI5 Ventilator with details on the positioning of the electronics (right).2

BSJ: Ferrari engineers focused on designing the pneumatic and mechanical components of the FI5 Pulmonary Ventilator. What are the three pneumatic stages (pneumatic phases) of the ventilator, and what role does each play in the overall design of the ventilator?

SR: We applied our knowledge from designing Formula 1 cars to determine the three pneumatic stages of the ventilator. The first pneumatic stage, Stage 1, represents the interface between the ventilator and the hospital line responsible for delivering oxygen. This stage modulates the level of oxygen delivered to patients by adjusting the ratio of oxygen to air, which is a mixture of nitrogen, oxygen, and other gases. It is external to the ventilator and can be customized as needed. In contrast, Stage 2 is physically inside the ventilator. It consists of the main inlet line, which includes filters, a pressure regulator, an electrovalve to control inhalation, sensors, and safety pressure relief valves. Finally, Stage 3 refers to the outlet line, which simply comprises an outlet valve that controls the exhalation phase. Determining the structure of these three stages was one of the most critical but rewarding aspects of this project.

“Despite its minimum number of functions, the FI5 ventilator still has multiple applications as it can be used not only with a standard, noninvasive mask, but also a full face mask or helmet.”

BSJ: One of the main goals in designing the FI5 Pulmonary Ventilator was to minimize the cost of production relative to existing ventilators. How did you achieve this goal?

SR: The guiding principle throughout this project has been to identify the minimum required functions of the ventilator, with respect to a typical hospital’s needs, to minimize production cost. In order to define these requirements, we collaborated with hospitals and doctors like Dr. Forgione. So, that has been the first stage and one of the most important ones. We were further able to reduce the cost of production of the FI5 Pulmonary Ventilator through our choice of components; many of the final components are produced by Italian companies, like Camozzi Automation, which specializes in pneumatic parts. Finally, using technologies like 3D printing to produce mechanical components has also helped in minimizing the cost of production. All together, this allowed us to reach our target cost, as the FI5 pulmonary ventilator costs only about a tenth of a normal ventilator.

BSJ: You minimized the number of custom components present in the FI5 pulmonary ventilator in order to reduce the production cost. What off-the-shelf parts, available globally, were used in place of custom components?

SR: Several parts of the ventilator can be manufactured worldwide, and many three-dimensional parts can be produced using 3D printing. Some of these components include the valves, CPU, and screen. Using commercially-available parts that can be mass produced and obtained everywhere has really allowed our project to be easily accessible worldwide. For example, our product is being developed by several Mexican firms as well as several other Latin American companies.

BSJ: Ferrari also played a role in running dynamic simulations to develop the FI5 Pulmonary Ventilator. Can you begin by describing the Simulink model?



SR: The Simulink model has three important features. The first feature consists of three mathematical models of the human lungs (soft, medium, and stiff), which essentially simulate the mechanics of lung ventilation. It is the most important part of the Simulink model. The next feature is the main flow line, which consists of controlled valves, pressure relief valves, quick disconnect, and capacitive and resistive pipes. In order to properly test the response of the system, compliance of the lungs, and energy losses, we created a model of all of the pipes, pressure drops, valves, etc. This model included controls to evaluate the valves’ performance under desired pressure dynamics. We also added a section to our model that computes CO2 accumulation near the patient’s mouth, both when the patient is wearing a mask or helmet. This allowed us to implement strategies that prevented buildup of a dangerous concentration of CO2 (>2%). Lastly, the Simulink model includes a model for the Air/Oxygen mixing device—so that we can reach the target O2 percentage—and the pressure regulator, which allows the inlet valve to operate at a controlled pressure.

BSJ: How were you able to model the behavior of human lungs, taking into account three different values of compliance (soft, medium, and stiff)? How were these different models used to develop the inlet and outlet valves of the ventilator?

SR: In our model, the human lungs have been schematized as a viscoelastic system with an elastance factor that simulates the chest wall. It is easy to model this kind of system using a combination of springs and dampers. To aid us in defining our parameters, we found a paper that demonstrates the extreme cases of human lung response to equivalent mass flow boundaries. As you can imagine, the pressure inside of a human lung varies a lot between a young child and a two-meter-tall basketball player. And the elasticity, capacity, and resistance of a healthy person’s lungs will also differ from those of a person who suffers from cystic fibrosis. So, we had to take all of these factors into consideration. To accomplish this, in each phase of the respiratory cycle, we selected the worst-case lung characteristic and checked all of the components to guarantee that there were no issues. For instance, the exhalation valve must be permeable enough not to affect natural exhalation. This is because a stiffer lung will increase the flow rate through the exhalation valve, causing the patient’s condition to worsen. On the other hand, the inhalation phase works in the opposite way: the lower the elasticity, which is associated with a higher tidal volume, the higher the required flow needed to reach target mouth pressure will be. So, we used the high compliance model of the lung to design the outlet valve and the low compliance model of the lung to design the inlet valve. However, while the valves have been defined in this way—incorporating worst-case lung characteristics—we also had to verify our system worked in the opposite scenario—best lung characteristics—to certify the quality of both hardware and control and make sure dynamics were satisfied under all conditions.
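To make the “springs and dampers” picture concrete, here is a minimal single-compartment lung model in Python. It is only a sketch of the general modeling approach Resta describes, not the FI5 Simulink model itself; the resistance, compliance, and pressure values are illustrative assumptions.

```python
# A minimal single-compartment ("spring and damper") lung model: airway
# resistance R acts as the damper, compliance C as the spring. All values
# here are illustrative assumptions, not FI5 settings.
def simulate_inhalation(p_mouth=15.0, peep=5.0,  # target mouth pressure, PEEP (cmH2O)
                        R=10.0,                  # airway resistance, cmH2O/(L/s)
                        C=0.05,                  # compliance, L/cmH2O (low = stiff lung)
                        t_insp=1.0, dt=1e-3):
    """Euler-integrate dV/dt = (p_mouth - V/C - peep) / R over one inhalation."""
    V = 0.0  # tidal volume above resting lung volume, in liters
    for _ in range(int(t_insp / dt)):
        p_alveolar = V / C + peep            # elastic recoil ("spring") pressure
        flow = (p_mouth - p_alveolar) / R    # flow through the airway ("damper")
        V += flow * dt
    return V

# A stiff (low-compliance) lung takes in less volume at the same mouth pressure,
# which is why the soft, high-tidal-volume case sizes the inlet valve.
for label, C in [("soft", 0.10), ("medium", 0.05), ("stiff", 0.02)]:
    print(f"{label}: tidal volume = {simulate_inhalation(C=C):.2f} L")
```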

BSJ: How does the Air/Oxygen mixing device reach the target O2 percentage, and how is that target percentage determined?

SR: Because the Air/Oxygen mixing device is supplied by the provider, for simplicity, we assumed the most straightforward design. This design is one in which the mixing device can be operated completely manually and the target O2 percentage can be reached by adjusting the two throttle valves controlling the air and oxygen lines. The target O2 is then determined by the provider when evaluating the readings on the patient’s blood oxygenation.

BSJ: How does the pressure regulator work to allow the inlet valve to operate at a controlled pressure?

Figure 2: Pneumatic scheme of the three pneumatic phases of the ventilator.2



Figure 3: The three mathematical models of the human lungs characterized by three different compliance values: high compliance (soft), mid compliance (medium), and low compliance (stiff).2

SR: The pressure regulator consists of a pressure loss, which is a function of the inlet pressure; the pressure loss increases by the same amount as the inlet pressure in order to ensure a nearly constant outlet pressure. It is a necessary component since it allows the inlet valve to work in constant conditions, independent of the hospital pressure line. Every control parameter has been optimized to get the best valve performance in following target mouth pressure, and this allows us to be sure that the valve always operates at its best.
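As a sketch, the behavior Resta describes can be written as an ideal regulator whose pressure drop grows one-for-one with inlet pressure above a set point. The set-point value below is an assumption for illustration, not the FI5 specification.

```python
# Idealized regulator: the drop across it equals (inlet - set point) whenever
# the inlet exceeds the set point, so the outlet pressure is nearly constant.
def regulator_outlet(p_inlet: float, p_set: float = 2.0) -> float:
    """Outlet pressure in bar; p_set is an assumed set point, not the FI5 value."""
    pressure_drop = max(0.0, p_inlet - p_set)
    return p_inlet - pressure_drop

for p_hospital in (2.5, 3.5, 4.5):  # hospital line pressures vary between sites
    print(f"line {p_hospital} bar -> inlet valve sees {regulator_outlet(p_hospital)} bar")
```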

BSJ: What are the permeability targets for various components of the ventilator, and why are they important in determining the robustness of the system?

SR: The main sources of pressure loss stem from the HEPA filter and quick disconnect. However, we still want to ensure that pressure losses are as low as possible throughout the system in order to allow for correct system functionality. In particular, in the flow upstream to a patient’s mouth, there are two pressure relief valves, a mechanical component inside the ventilator, and an external water bottle. If the pressure drop between the two valves is too high, it may be impossible to reach target mouth pressure. Moreover, there is a filter located between the patient and the outlet valve. If the pressure loss here is too high, it can impede the exhalation dynamics of the patient. This would lead to high frequency respiratory cycles—almost similar to hyperventilation.

“I think that the most important characteristics of this product are its affordable production costs, accessibility, reliability, and simplicity.”

BSJ: What options have been simulated in order to determine the best solution to lower CO2, and which one was most successful?

SR: First, because we know that CO2 accumulation is a function of exhalation volume around the patient’s mouth, we had to consider different strategies for both the mask (0.5 L) and helmet (8 L) scenarios. Our initial strategy was to keep both the inlet and outlet valves open simultaneously during different cycle phases; for example, at the beginning or end of the exhalation phase. We next tried simulating a constant inlet valve flow-by. We found that this was the easiest and most efficient way to reduce CO2 concentration since it allows for continuous action and does not change the shape of the mouth pressure curve by much. In other words, the inlet valve flow-by method applies a constant positive pressure to the patient’s mouth, which in turn naturally decreases the tidal volume. Our control can then automatically detect and modify the target pressure curve in order to keep the inhaled volume constant.
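A back-of-the-envelope steady-state calculation shows why a constant flow-by keeps CO2 well below the 2% threshold mentioned earlier. The CO2 production and flush rates below are typical textbook values assumed for illustration, not measured FI5 parameters.

```python
# At steady state, the CO2 flushed out of the helmet by the flow-by equals the
# CO2 the patient produces: flowby * fraction = v_co2, so fraction = v_co2 / flowby.
def co2_fraction(v_co2_lpm: float = 0.25,    # adult CO2 production, L/min (assumed)
                 flowby_lpm: float = 40.0):  # continuous fresh-gas flow-by (assumed)
    return v_co2_lpm / flowby_lpm

print(f"steady-state CO2 = {co2_fraction():.2%} (design threshold: 2%)")
```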

BSJ: What are the four mechanical devices that define the flow/pressure level on the patient’s mouth, and what simulations were carried out to ensure that they would not fail?

SR: The four devices are the pressure regulator, inlet valve, pressure relief valve, and bottle valve. We performed simulations in order to limit the negative impact a failure in each component could have on the entire ventilator. For example, in the worst case scenario that our bottle valve is either damaged or not fitted, we want to ensure that the functionality and performance of the ventilator are guaranteed even if one of the remaining components also fails. If, for instance, the pressure regulator also fails, control parameters would be set in place to automatically adjust for the failure. If, instead, both the inlet and bottle valves fail, the check valve will prevent an unmitigated increase in the pressure (exerted) on a patient’s mouth.

BSJ: What were some of the major challenges that you encountered in designing the FI5 Pulmonary Ventilator?

SR: Certainly, we faced challenges in understanding how to approach a project like this from scratch. The biggest challenge we initially faced was how to adapt our models and parameters to effectively study the lungs. This is where our collaboration with hospitals and doctors came in. We later faced other challenges in terms of cost. In tackling this issue, we had to thoroughly consider the best cost engineering practices when evaluating the proposed method of manufacturing certain parts, 3D printing in general, etc. On top of that, not only did we have to finish this entire project in just a few weeks, but we were also in lockdown. This meant that we had to fundamentally adjust our mindset while adapting to a new, virtual mode of interaction.

BSJ: How were you able to apply the technology used in developing Formula 1 cars to design the FI5 pulmonary ventilators?

SR: The project itself is quite different from what we normally work on. However, the technologies on both sides share a lot of common points, so we were able to apply our knowledge from designing F1 cars and adapt it to a completely new product. Similar to vehicular design, we used simulation models of the proposed ventilator in order to understand what to optimize prior to production. Moreover, while the design of F1 cars revolves primarily around hydraulic applications, there were a few pneumatic applications that proved helpful in orienting our approach to designing the FI5 ventilator. The FI5 ventilator also relies on the usage of 3D printing and CNC machines, practices that are rapidly becoming more widespread in the F1 business. Cost engineering is also becoming more and more relevant in the F1 business, so we were able to use some of our cost engineering practices to minimize the cost of the FI5 ventilator.

BSJ: What analysis is currently being done to determine what further developments can be made towards improving the performance of the FI5 pulmonary ventilators?

SR: Currently, the effort, driven by IIT, is mostly centered around ensuring that the product is available and can be easily adopted worldwide. The main focus of the effort is not so much in developing the project itself, but rather in making sure that the project becomes homologated and certified in hospitals. In this way, it can be used in many developing countries, and in fact, this initiative is taking off in some South American countries. We are currently supporting IIT on that.

BSJ: What other initiatives has Ferrari taken to combat the spread of COVID-19, not only in Italy but throughout the world?

SR: First, Ferrari has launched an incredible, large-scale initiative called Back on Track. The goal of this project is to protect employees and their families during the COVID-19 pandemic as they return to both the factory and the race circuit. For instance, in order to gauge the safety risk of reopening the workplace, Ferrari distributed blood tests to both employees and their families. Ferrari has also contributed financially to the fight against COVID-19. We were able to double our investment in COVID-19 research through funds from our customers. As opposed to accepting cash back for cancelled events, they requested that we donate these funds. With this money, we were able to donate an ambulance to a local hospital in Modena. Additionally, the entire Agnelli family, which controls Ferrari, has made several donations in response to the COVID-19 pandemic. So, Ferrari is fully committed to the fight against the virus, with FI5 being one of the most important contributions we have made. The project itself very much originated from those in Scuderia Ferrari, who, while in lockdown, were able to realize this completely novel project. We did not want to profit off this project. Rather, we were very happy to bring our efforts beyond the racetrack, and we are determined to do anything we can to contribute to the fight against COVID-19 around the globe.

BSJ: Thank you so much for taking the time to meet with us.

SR: It was my pleasure. I would also like to thank all of my colleagues at Scuderia Ferrari, who have played a critical role in designing the FI5 Pulmonary Ventilator. I would especially like to thank Maurizio Bocchi, Luca Bottazzi, Luca Brunatto, Marco Civinelli, Marco Gentili, Corrado Onorato, Federico Rossi, and Bruno Petrini.

“Ferrari is fully committed to the fight against the virus, with FI5 being one of the most important contributions we have made.”

REFERENCES

1. Headshot: Simone Resta [Photograph]. https://www.ferrari.com/en-EN/formula1/simone-resta
2. Maggiali, et al. (2020). FI5 Ventilator Overview. Istituto Italiano di Tecnologia. https://multimedia.iit.it/asset-bank/assetfile/15783.pdf



The ‘King of Poisons’ Journeys Underground in Search of Water
BY JESSICA JEN

Historically popular for deliberate poisoning because of its innocuous looks, lack of odor and taste, and availability as a rat poison, arsenic soon fell out of favor with prospective poisoners as chemistry advanced far enough to detect metallic substances.1,2 Nevertheless, the element still influences human health today. Humans are exposed to mostly innocuous forms of arsenic through the environment, pesticides, commercial products, and even some forms of cancer treatment. However, naturally high levels of arsenic in groundwater are a significant public health concern, affecting more than 140 million people worldwide as of 2018.3,4

At the molecular level, arsenic obstructs enzymes that facilitate cellular energy production and DNA repair; it also blocks voltage-gated potassium channels, which leads to cardiovascular and neurological problems. There is also evidence that arsenic induces changes to DNA and gene expression.4 These mechanisms of action manifest differently for acute and chronic toxicities.

Up until two centuries ago, an unhappy individual would stir arsenic into someone’s food or drink and depart satisfied that the additive was effective. That many of these cases involved high-profile deaths contributed to the dramatic sobriquet “king of poisons.”4 The development of the Marsh test and expansion of toxicology accelerated the decline of such nefarious activities. Today, acute arsenic poisoning would most likely occur via oral ingestion of pesticides. While toxicity varies with the type of arsenic compound, the most commonly encountered compound, arsenic trioxide, has a median lethal dose of around 150 mg/kg for adult humans weighing an average of 75 kg.5 To put this in perspective, imagine a group of one hundred people attending a gathering. Tea is served. If each person added just over two teaspoons of arsenic to a cup of their favorite tea, they would all enjoy their beverages; within a few days’ time, fifty of them would perish, but not before undergoing disagreeable effects.
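For readers who want to check the arithmetic behind the tea-party scenario, the calculation below uses the article’s own figures; the teaspoon mass is a rough assumption.

```python
# Dose arithmetic for the scenario above, using the article's figures.
LD50_MG_PER_KG = 150   # median lethal dose of arsenic trioxide cited in the text
BODY_MASS_KG = 75      # average adult body mass assumed in the text
TSP_MG = 5000          # mass of one teaspoon of powder, a rough assumption

lethal_dose_mg = LD50_MG_PER_KG * BODY_MASS_KG
print(f"median lethal dose: {lethal_dose_mg / 1000:.2f} g "
      f"= {lethal_dose_mg / TSP_MG:.2f} teaspoons")
# -> 11.25 g, "just over two teaspoons": the dose at which half of the
#    hundred tea drinkers would be expected to die.
```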

PATHOPHYSIOLOGY

Telltale signs of acute arsenic poisoning involve the digestive system. Most of the arsenic is absorbed in the small intestine, where it then travels to the liver to be broken into less hazardous compounds.4 Ingested arsenic will accumulate in high concentrations for a few days before being excreted in urine. Thus, all the aforementioned tea drinkers, including those who survive the acute poisoning, would experience unpleasant symptoms. Even five milligrams of arsenic could induce abdominal pain, diarrhea, nausea, and vomiting. Watery, bloody diarrhea is the characteristic feature of acute poisoning, but other symptoms include weakness, bluish digits, and fluid buildup in the lungs.6 For those less fortunate, bloody lesions in the gastrointestinal tract and extreme amounts of fluid loss result in dehydration and cardiovascular failure, eventually leading to death.

Unlike the tea drinkers who ingested a single large serving of arsenic, most people encounter arsenic in far smaller dosages, spread over their lifetimes. Known as chronic exposure, this gradual arsenic intake manifests as its own set of symptoms. While abdominal pain and diarrhea are present, chronic exposure also causes affected individuals to develop darkened skin, white lines in nails, and rough patches of potentially cancerous skin growths.6 However, the most significant long-term effects are cancers.

Figure 1: Paris Green. Paris Green contains harmful levels of arsenic and was once widely used as a pigment and pesticide. Licensed under CC BY-SA 3.0.

“All the aforementioned tea drinkers, including those who survive the poisoning, would experience unpleasant symptoms.”

ARSENIC IN GROUNDWATER

Some of arsenic’s most insidious exposures occur through groundwater contamination. In the 1970s, researchers in Taiwan noticed considerable disparities in cancer mortality between villages with different concentrations of arsenic in their drinking water.7 Scientists have since found that elevated levels of arsenic in water lead to the highest known mortality rate for environmental exposures, near the mortality rate from cigarette smoking.8 Long-term health effects persist even decades after exposure to arsenic has been sharply reduced. The risk of fatal heart attacks only lessened ten years after decreased exposure, while lung, kidney, and bladder cancer mortality rates remained high even forty years after. Thus, public health efforts focus on eliminating arsenic from drinking water.

South Asia houses some of the world’s most severe cases of groundwater arsenic exposure.9 Some wells in West Bengal, a state in eastern India, have had high concentrations ranging from 60 to more than 300 micrograms of arsenic per liter, far higher than the World Health Organization’s recommended upper limit of ten micrograms per liter.3,10 Such immense concentrations in areas where people rely on wells for drinking, cooking, daily life, and crop irrigation lead to considerable ramifications. Wells extending into groundwater gained popularity in late-20th-century Bangladesh because they were safe from disease-causing microbes present in surface water sources.1,10 Unfortunately, high concentrations of arsenic posed an unexpected issue, as arsenic had not yet been included in water testing procedures.11 Surveys in the late 1990s estimated that around 20 million people in Bangladesh were consuming contaminated water. The Bangladeshi government identified contaminated wells and encouraged the use of safe water sources, mostly arsenic-safe tube wells. Other options included deeper tube wells reaching into uncontaminated water, rainwater harvesters, and aerated sand filters that adsorb arsenic.9,10 Despite these federal efforts, around one-third of these arsenic-safe water systems fell into disrepair within a few years. Since there is currently no treatment for prolonged arsenic exposure, changing water sources has shown the most striking changes in arsenic concentration, followed by maintaining the sources and monitoring the population for adverse health effects.10,11

The city of Antofagasta in northern Chile has provided an insightful chronicle of arsenic’s long-term health effects, especially the consequences of switching to, rather than away from, a contaminated source of water.8 Arsenic concentration in 1930s Antofagasta had averaged around 90 micrograms per liter. However, the city replaced its source of water in 1958. Arsenic concentrations surged up to 860 micrograms per liter, staying at such astronomical levels until the city installed an arsenic removal plant in 1970. The level then fell to 110 micrograms per liter over the course of the 1970s, and then to 10 micrograms per liter by the early 2000s. Unfortunately, the damage had already been done, and Antofagasta would show high cancer mortality in the decades following its sudden decrease in arsenic exposure.

Figure 2: Medical illustration. A lithograph from 1859 depicts bodily harm from exposure to green arsenic. Licensed under CC BY 4.0.

“Long-term health effects persist even decades after exposure to arsenic has been sharply reduced.”

From an effective means of intentional poisoning to a significant groundwater pollutant, arsenic’s role as a toxin has accompanied the shift from acute to chronic exposure. Sinister stories have given way to widespread public health problems and significant mortality, as discovered by epidemiological studies on arsenic’s long-term effects. The most urgent action concerning contaminated groundwater is providing safe water sources to reduce exposure to arsenic.3 Community involvement and education are needed for successful interventions, while additional studies on arsenic’s health effects can determine the impact of reducing exposure. Further research may lead to more successful interventions and author the next chapter of humanity’s association with arsenic.

REFERENCES

1. Ferrie, J. E. (2014). Arsenic, antibiotics and interventions. International Journal of Epidemiology, 43(4), 977–982. https://doi.org/10.1093/ije/dyu152
2. Griffin, J. D. (2016). Blood’s 70th anniversary: Arsenic—from poison pill to magic bullet. Blood, 127(14), 1729–1730. https://doi.org/10.1182/blood-2015-10-638650
3. World Health Organization. (2018, February 15). Arsenic. https://www.who.int/news-room/fact-sheets/detail/arsenic
4. Hughes, M. F., Beck, B. D., Chen, Y., Lewis, A. S., & Thomas, D. J. (2011). Arsenic exposure and toxicology: A historical perspective. Toxicological Sciences, 123(2), 305–332. https://doi.org/10.1093/toxsci/kfr184
5. Benramdane, L., Accominotti, M., Fanton, L., Malicier, D., & Vallon, J. J. (1999). Arsenic speciation in human organs following fatal arsenic trioxide poisoning—A case report. Clinical Chemistry, 45(2), 301–306. https://doi.org/10.1093/clinchem/45.2.301
6. Gehle, K., & Agency for Toxic Substances & Disease Registry. (2009). Arsenic toxicity: What are the physiologic effects of arsenic exposure [Lecture notes]. Centers for Disease Control. https://www.atsdr.cdc.gov/csem/arsenic/docs/arsenic.pdf
7. Chen, C.-J., Kuo, T.-L., & Wu, M.-M. (1988). Arsenic and cancers. The Lancet, 331(8582), 414–415. https://doi.org/10.1016/s0140-6736(88)91207-x
8. Smith, A. H., Marshall, G., Roh, T., Ferreccio, C., Liaw, J., & Steinmaus, C. (2017). Lung, bladder, and kidney cancer mortality 40 years after arsenic exposure reduction. Journal of the National Cancer Institute, 110(3), 241–249. https://doi.org/10.1093/jnci/djx201
9. Ahmad, A., Van Der Wens, P., Baken, K., De Waal, L., Bhattacharya, P., & Stuyfzand, P. (2020). Arsenic reduction to <1 µg/L in Dutch drinking water. Environment International, 134, Article 105253. https://doi.org/10.1016/j.envint.2019.105253
10. Ahmad, S. A., Khan, M. H., & Haque, M. (2018). Arsenic contamination in groundwater in Bangladesh: Implications and challenges for healthcare policy. Risk Management and Healthcare Policy, 2018(11), 251–261. https://doi.org/10.2147/RMHP.S153188
11. Smith, A. H., Lingas, E. O., & Rahman, M. (2000). Contamination of drinking-water by arsenic in Bangladesh: A public health emergency. Bulletin of the World Health Organization, 78(9), 1093–1103. https://www.who.int/bulletin/archives/78%289%291093.pdf

IMAGE REFERENCES

1. Banner: Two skeletons dressed as lady and gentleman [Etching]. (1862). Wellcome Collection. https://catalogue.wellcomelibrary.org/record=b1194600
2. Figure 1: Goulet, C. (2008). Paris Green (Schweinfurter Grün) [Photograph]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Paris_Green_(Schweinfurter_Gr%C3%BCn).JPG
3. Figure 2: Lackerbauer, P. (1859). Annales d’hygiène publique et de médecine légale [Accidents caused by the use of green arsenic, Lithograph]. Wellcome Collection. https://wellcomecollection.org/works/purtqgwv/images?id=yd6ya82w
4. Figure 3: Eishiya. (2017). Arsenic contamination areas [Infographic]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Arsenic_contamination_areas.png

Figure 3: Global distribution of arsenic contamination. Arsenic-contaminated groundwater in the orange-shaded areas poses disproportionately high levels of health risks to nearby populations. Licensed under CC BY-SA 4.0.



UNLOCKING PETO’S PARADOX
BY CHRIS ZHAN

What separates you, a human, from other animals, like a hamster or a blue whale? On the molecular level, we are all multicellular creatures composed of varying numbers of cells. Generally speaking, human beings have an average mass of around 70 kg, while blue whales living in the Northern Hemisphere have an average mass of 100,000 kg.1,2 Since blue whales are several orders of magnitude more massive than humans, researchers generally assume that blue whales possess a much greater number of cells. Despite this difference in size, humans and whales do have some similarities—as multicellular animals, both species are susceptible to death from cancer.

If every cell has the potential to become cancerous, then basic probability suggests that the more cells an animal has, the more likely it is to develop some form of cancer. This rings true for humans: a study from UC Riverside shows that a human who is 10 cm taller than the average is 10% more likely than average to develop cancer.3 We should then expect large creatures to be riddled with tumors, while small animals like hamsters should develop cancer less frequently. However, this is not the case in nature. Research shows that large animals such as elephants actually have a lower risk of developing cancer than humans.4 The lack of correlation between animal size and cancer risk summarizes the biological paradox that continues to puzzle researchers, named Peto’s Paradox.
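The “more cells, more risk” intuition can be made explicit with a one-line probability model. The per-cell transformation probability and cell counts below are invented order-of-magnitude assumptions, chosen only to show how quickly the naive risk saturates with body size.

```python
import math

# Naive model: each of n cells independently turns cancerous with a tiny
# lifetime probability p, so P(cancer) = 1 - (1 - p)^n. Both p and the
# cell counts are illustrative assumptions, not measurements.
def p_cancer(n_cells: float, p_cell: float = 1e-14) -> float:
    # log1p/expm1 keep the computation stable for very small p_cell
    return -math.expm1(n_cells * math.log1p(-p_cell))

for species, n in [("hamster", 1e11), ("human", 3e13), ("blue whale", 1e17)]:
    print(f"{species:>10}: naive lifetime risk = {p_cancer(n):.3f}")
# The naive model sends the whale's risk to ~1.0 while the hamster's stays
# near zero; the mismatch with observed cancer rates is Peto's paradox.
```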

POTENTIAL SOLUTIONS

Ever since Peto’s Paradox was first proposed by Richard Peto in 1977, researchers have been searching for potential answers to this perplexing question: Why do larger animals not develop cancer more often than humans, despite possessing a significantly greater number of cells? Peto’s paradox intrigues many biological researchers, since a better understanding of cancer will aid efforts to prevent or cure the disease. Many solutions have been proposed so far, but none have conclusively provided an answer.

Competing Hypertumors

When a tumor cell forms, it begins competing against the rest of the human body for resources, such as nutrients and oxygen. As resources are brought to the tumor to support its growth, some tumor cells may become more aggressive and compete against the rest of the tumor for vital supplies. As this competition increases, it may form a tumor within a tumor, called by most researchers a “hypertumor.” Studies suggest that larger animals may actually develop cancer at higher rates than smaller animals, but the growth of hypertumors prevents tumors from reaching lethal sizes.5



Figure 1: Comparison in size between the average human and the average blue whale. The average length of a human is 1.65 meters, while the average length of a blue whale is 24.5 meters. Theoretically, the larger an animal is, the more cells it has, and therefore the greater its likelihood of developing cancer is. However, this is not what scientists observe.

Ticking Telomeres

Telomeres are sequences of DNA that cap the ends of chromosomes. They shorten during cellular division, limiting the reproduction of DNA.6 When telomeres become too short to protect the chromosome, the cell generally undergoes apoptosis, a form of cell death, in effect causing the telomeres to act as a timer for the cell’s lifespan.7 Researchers hypothesize that larger animals might have shorter telomeres, resulting in a shorter cellular lifespan and a greater tendency for cells to undergo apoptosis upon receiving damage to the DNA. This could potentially reduce the likelihood of cancer-causing mutations forming in the cell.8 The longer a cell lives and replicates, the more likely it is to develop potentially dangerous mutations. It is worth mentioning that telomeres can be elongated with telomerase. However, large animal cells usually suppress this enzyme from making telomeres longer because longer telomeres increase the chance of the cell developing mutations. One of the most common mutations in cancerous cells is the expression of telomerase, effectively making the cell immortal since the telomeres will never decrease in length.9,10

Natural Selection

Another possible solution proposed by researchers is that organisms must evolve cancer suppression or face extinction. Cancer is typically the result of genetic mutations in certain genes that control the growth and reproduction of the cell. These mutations can cause the cell to divide uncontrollably, producing a tumor.11 In response, natural selection prompts animals to develop defenses against cancer. In nature, this defense is found in genetic mechanisms—a tumor suppressor gene given the name p53.12 p53 triggers apoptosis after the cell detects an abundance of mutations in the cell’s DNA. This is designed to prevent cells from accumulating enough mutations to become cancerous. A study done at the Huntsman Cancer Institute notes that humans only possess one copy of p53 in their genetic code, which means that human cells are less likely to trigger apoptosis upon DNA damage, increasing the likelihood of cancer.12 However, DNA sequencing has revealed that African savannah elephants contain 20 copies of p53, making apoptosis much more likely.13 These pre-emptive cell implosions observed in larger animals like these African savannah elephants could be the evolutionary mechanism necessary to prevent these large animals from succumbing to cancer.8
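As a toy illustration of the telomere “timer” from the Ticking Telomeres subsection above, the sketch below counts cell divisions until the telomere crosses a critical length. The base-pair figures are rough literature-scale assumptions, not values from the cited studies.

```python
import random

# Each division trims the telomere by a random amount (the end-replication
# problem); below the critical length the cell senesces or undergoes apoptosis.
def divisions_until_arrest(start_bp=10_000, critical_bp=4_000,
                           loss_range=(50, 200), seed=0):
    rng = random.Random(seed)
    telomere, divisions = start_bp, 0
    while telomere > critical_bp:
        telomere -= rng.randint(*loss_range)  # base pairs lost this division
        divisions += 1
    return divisions

print(divisions_until_arrest())                # typical starting length
print(divisions_until_arrest(start_bp=7_000))  # shorter telomeres arrest sooner
```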

APPLICATION

These comparative studies and hypotheses suggest several solutions to Peto’s paradox. The question then is how these possible solutions can be translated into cancer prevention strategies or remedies for humans. An evolutionary solution evident in one species may be difficult to incorporate in the human species due to the divergent trajectories of natural selection.

Figure 2: Telomeres marked in pink, found at the ends of chromosomes. Chromosomes are tightly coiled strands of DNA. The ends of chromosomes are protected from damage by strands of DNA coded as telomeres. Telomeres get shorter and shorter with each successful replication.

The most promising hypothesis for solving Peto’s paradox looks to be genetic. As discussed earlier, certain larger animals have an abundance of p53 genes, which potentially have cancer prevention effects. One early study decided to look deeper into this phenomenon. Researchers modified the genome of mice and inserted extra copies of the p53 tumor suppressor gene. These mice—so-called ‘super p53’ mice—displayed an enhanced response to DNA damage and cancer suppression compared to unmodified mice.14 Modified mice cells were more likely to undergo apoptosis upon receiving DNA damage, preventing potential mutations into cancerous cells. However, a significant shortcoming of this study is that the modified mice experienced accelerated aging effects. This study served as the foundation for many new discoveries and therapies for human cancer treatment.

“Research shows that large animals such as the elephant actually have a lower risk of developing cancer than humans.”

Figure 3: Artist’s rendering of a mother elephant looking after her child.

The importance of p53 has not gone unnoticed by cancer therapy researchers, and there is newly emerging research about the potential for cancer treatment options such as the “p53-DC vaccine.” This vaccine consists of an injection containing p53 bound to a carrier cell, which activates and strengthens an immune response to cancerous cells.14 Researchers noticed that roughly 50% of human tumors present mutated forms of p53 on the cell surface. In this form, p53 is classified as a tumor-associated antigen because it is a signal from a tumor that activates the immune cells specifically designed to kill cancerous cells.15 Injecting more p53 should theoretically allow for a faster and more efficient response. This particular study confirmed that this vaccine could have strong, toxic effects on cancerous lung cells that present p53 on their surface in a laboratory setting. However, p53 vaccines as a treatment for human tumors are still undergoing clinical testing. This form of cancer treatment has completed Phase I trials with positive results and is currently undergoing Phase II testing, which examines any potential toxic effects that may occur after the injection.16

Figure 4: A graphic representation comparing mutation rates to the number of stem cells found in an animal. The number of stem cells found in an area allows researchers to project the total number of cells that an animal possesses. Notice how larger animals actually see less mutation in their cells than humans do, despite being considerably larger. Licensed under CC BY 4.0.

“These mice—so-called ‘super p53’ mice—displayed an enhanced response to DNA damage and cancer suppression compared to unmodified mice.”

Peto’s paradox presents many solutions to the age-old battle against cancer. Since larger animals do indeed develop less cancer than they theoretically should, they must have some form of natural defense against cancer. Studying these natural defenses provides a great foundation for researchers to develop new cancer therapies for humans. The possibilities for better treatments to cancer are in the natural world, and Peto’s paradox represents just one attempt to better our understanding of this mysterious disease.

REFERENCES

1. Sender, R., Fuchs, S., & Milo, R. (2016). Revised estimates for the number of human and bacteria cells in the body. PLOS Biology, 14(8), e1002533. https://doi.org/10.1371/journal.pbio.1002533
2. Lockyer, C. (1976). Body weights of some species of large whales. ICES Journal of Marine Science, 36(3), 259–273. https://doi.org/10.1093/icesjms/36.3.259
3. Nunney, L. (2018). Size matters: Height, cell number and a person’s risk of cancer. Proceedings of the Royal Society B: Biological Sciences, 285(1889), Article 20181743. https://doi.org/10.1098/rspb.2018.1743
4. Tollis, M., Boddy, A. M., & Maley, C. C. (2017). Peto’s Paradox: How has evolution solved the problem of cancer prevention? BMC Biology, 15(1), 60. https://doi.org/10.1186/s12915-017-0401-7
5. Nagy, J. D., Victor, E. M., & Cropper, J. H. (2007). Why don’t all whales have cancer? A novel hypothesis resolving Peto’s paradox. Integrative and Comparative Biology, 47(2), 317–328. https://doi.org/10.1093/icb/icm062
6. Monaghan, P. (2010). Telomeres and life histories: The long and the short of it. Annals of the New York Academy of Sciences, 1206(1), 130–142. https://pubmed.ncbi.nlm.nih.gov/20860686/
7. Fagagna, F. d’Adda di, Reaper, P. M., Clay-Farrace, L., Fiegler, H., Carr, P., von Zglinicki, T., Saretzki, G., Carter, N. P., & Jackson, S. P. (2003). A DNA damage checkpoint response in telomere-initiated senescence. Nature, 426(6963), 194–198. https://doi.org/10.1038/nature02118
8. Caulin, A. F., & Maley, C. C. (2011). Peto’s paradox: Evolution’s prescription for cancer prevention. Trends in Ecology & Evolution, 26(4), 175–182. https://doi.org/10.1016/j.tree.2011.01.002
9. Shammas, M. A. (2011). Telomeres, lifestyle, cancer, and aging. Current Opinion in Clinical Nutrition & Metabolic Care, 14(1), 28–34. https://doi.org/10.1097/MCO.0b013e32834121b1
10. Dahse, R., Fiedler, W., & Ernst, G. (1997). Telomeres and telomerase: Biological and clinical importance. Clinical Chemistry, 43(5), 708–714. https://doi.org/10.1093/clinchem/43.5.708
11. Leroi, A. M., Koufopanou, V., & Burt, A. (2003). Cancer selection. Nature Reviews Cancer, 3(3), 226–231. https://doi.org/10.1038/nrc1016
12. Abegglen, L. M., Caulin, A. F., Chan, A., Lee, K., Robinson, R., Campbell, M. S., Kiso, W. K., Schmitt, D. L., Waddell, P. J., Bhaskara, S., Jensen, S. T., Maley, C. C., & Schiffman, J. D. (2015). Potential mechanisms for cancer resistance in elephants and comparative cellular response to DNA damage in humans. JAMA, 314(17), 1850. https://doi.org/10.1001/jama.2015.13134
13. Sulak, M., Fong, L., Mika, K., Chigurupati, S., Yon, L., Mongan, N. P., Emes, R. D., & Lynch, V. J. (2016). TP53 copy number expansion is associated with the evolution of increased body size and an enhanced DNA damage response in elephants. eLife, 5, e11994. https://doi.org/10.7554/eLife.11994
14. García-Cao, I., García-Cao, M., Martín-Caballero, J., Criado, L. M., Klatt, P., Flores, J. M., Weill, J.-C., Blasco, M. A., & Serrano, M. (2002). “Super p53” mice exhibit enhanced DNA damage response, are tumor resistant and age normally. The EMBO Journal, 21(22), 6225–6235. https://doi.org/10.1093/emboj/cdf595
15. Saito, H., Kitagawa, K., Yoneda, T., Fukui, Y., Fujisawa, M., Bautista, D., & Shirakawa, T. (2017). Combination of p53-DC vaccine and rAd-p53 gene therapy induced CTLs cytotoxic against p53-deleted human prostate cancer cells in vitro. Cancer Gene Therapy, 24(7), 289–296. https://doi.org/10.1038/cgt.2017.21
16. Chiappori, A. A., Soliman, H., Janssen, W. E., Antonia, S. J., & Gabrilovich, D. I. (2010). INGN-225: A dendritic cell-based p53 vaccine (Ad.p53-DC) in small cell lung cancer: Observed association between immune response and enhanced chemotherapy effect. Expert Opinion on Biological Therapy, 10(6), 983–991. https://doi.org/10.1517/14712598.2010.484801

IMAGE REFERENCES

1. Banner: TheDigitalArtist. (2018, December 23). Artist’s rendering of DNA on a blue background [Digital image]. https://pixabay.com/illustrations/dna-genetics-biology-science-3889611/
2. Figure 1: Tolorus. (2018, February 17). Polygon blue whale swimming in a sea [Digital image]. https://pixabay.com/illustrations/blue-whale-animal-water-nature-3158626/
3. Figure 2: AJC1. (2013, October 4). Telomeres [Digital graphic]. Flickr. https://web.archive.org/web/20151214022627/https://www.flickr.com/photos/ajc1/10085714333
4. Figure 3: Newexcusive02. (2020, May 4). [Digital illustration of a mother and baby elephant]. Pixabay. https://pixabay.com/photos/mother-elephant-baby-elephant-5129480/
5. Figure 4: Caulin, A. F., Graham, T. A., Wang, L.-S., & Maley, C. C. (2015). Solutions to Peto’s paradox revealed by mathematical modelling and cross-species cancer gene analysis [Figure 2, Digital graphic]. Philosophical Transactions of the Royal Society B: Biological Sciences, 370(1673), Article 20140222. https://doi.org/10.1098/rstb.2014.0222



EXPLORING CANCER METASTASIS OUTSIDE THE GENOME
Interview with Dr. Hani Goodarzi
BY TIMOTHY JANG AND ANANYA KRISHNAPURA

Hani Goodarzi, PhD, is an assistant professor of the Departments of Biophysics & Biochemistry and of Urology at the University of California, San Francisco. He is also a member of the Helen Diller Family Comprehensive Cancer Center as well as the Institute for Computational Health Sciences. Dr. Goodarzi is the principal investigator of the Goodarzi Lab, which combines computational and experimental approaches in its study of cancer systems biology. The laboratory’s current research is largely focused on the metastasis of different cancers and neurodegenerative disease. In this interview, we discuss his research on post-transcriptional pathways affecting breast cancer metastasis to the lung as well as his current work on SARS-CoV-2.

BSJ: Much of your research focuses on the metastasis of different cancers and how to address the challenges this poses on a molecular level. What initially drew you to this topic in particular?

HG: I trained as a computational biologist, so I come from more of a theoretical background. I saw myself as a data scientist more than anything else before data scientists were even called data scientists. When I started graduate school, there was this explosion in biological data. There was this surge of the application of microarrays, which were pretty new at the time, to measure mRNA expression genome-wide in different organisms and for different conditions. Specifically, we were seeing the birth of these precision medicine applications of microarrays to profile different types of cancers, look at their gene expression patterns, and learn something from them. It was relatively good timing for someone with my background to start thinking about how to aggregate and integrate these types of data sets and learn something from them in a broad perspective. I ended up joining Saeed Tavazoie’s lab at Princeton, which had a computational and experimental side. I started on the computational side, where I trained with a postdoc, Olivier Elemento, and we started this project, asking, “How can we make sense of the broad regulations that happen in the context of cancer? What is the identity of cancer cells from a molecular perspective?”

Essentially, my introduction to cancer was predominantly an accidental one, in the sense that I initially simply cared about data. Over time, the nature of my interactions with cancer changed quite a bit. I came to realize that there is no amount of computation and statistical analysis that would make an association into causation, so if I really believed in the way that I was doing cancer research, I owed it to myself to take it to the next level. I slowly geared towards picking up genomics and later multi-omics types of analysis of biological systems. When it came time to be a postdoc, I joined a traditional cancer biology lab, Sohail Tavazoie’s lab at Rockefeller. My focus on metastasis was, in part, also accidental in that his lab studied metastasis under the understanding that if we want to limit mortality from cancer, we really have to pay attention to metastasis because, especially for cancers that are operable, the real cause of mortality is metastatic dissemination.

BSJ: What initially led you to explore the possibility of a post-transcriptional regulatory pathway for cancer progression and metastasis?

HG: The idea came from an intersection of two different perspectives that I developed as a postdoc. I had initially approached cancer under the operating idea that there are regulatory and signaling pathways in the cell that are hijacked by cancer cells in order to achieve the types of dysregulation needed to elicit their growth and spread. However, I started to think about the possibility that cancer cells can step outside of that—maybe they can engineer their own regulatory pathways through rewiring and gene expression control mechanisms that do not exist in normal cells. This was one perspective I was thinking about at the time. Expanding on that, I also began to think about what you could possibly need for something like this to be true. One of these possibilities would be the existence of a pool of macromolecules with regulatory potential in cancer cells that are just normally not around. Coincidentally, at the time, I was working on this other project on tRNA fragments. The way that we studied tRNA fragments was through this approach called small RNA sequencing, which captures all the small RNAs (not just the tRNA fragments). As I was poring over that data, I noticed quite a few of these other RNAs every now and then in the genome that were just not annotated. At some point, as we continued to perform small RNA sequencing for various projects, it clicked that we do not see this category of small RNAs as much when we look at normal tissues; however, we see them in cancer cells. This suggested that there was this population of small RNAs that are not annotated and are cancer-emergent.

“I started to think about the possibility that cancer cells can step outside of that—maybe they can engineer their own regulatory pathways.”

BSJ: How did you isolate this set of small RNAs specific to cancer cells?

HG: We performed small RNA sequencing across cell lines from different breast cancer subtypes and compared them to human mammary epithelial cells (HMECs), which serve as non-cancer models. We then combined our results with data from The Cancer Genome Atlas (TCGA) data set and analyzed this data to find small RNAs specific to cancer cells. We called these molecules “orphan noncoding RNAs” (oncRNAs). We borrowed this terminology from bacterial genetics, where “orphan genes” refer to genes that uniquely appear in a given species. Here, there is a similar idea of these RNA molecules simply appearing in cancer cells. In turn, cancer cells can then learn to adapt them for new functions.
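In spirit, the comparison Dr. Goodarzi describes reduces to a presence/absence filter over a count matrix. The sketch below shows that logic with pandas; the data, column naming scheme, and thresholds are all invented for illustration and are far simpler than the published pipeline.

```python
import pandas as pd

# Tiny synthetic count matrix standing in for real data: rows = small RNA loci,
# columns = samples ("BRCA_*" cancer lines, "HMEC_*" normal controls).
counts = pd.DataFrame(
    {"BRCA_1": [12, 0, 7], "BRCA_2": [9, 0, 0],
     "HMEC_1": [0, 0, 6], "HMEC_2": [0, 1, 8]},
    index=["locus_A", "locus_B", "locus_C"],
)

cancer = [c for c in counts.columns if c.startswith("BRCA_")]
normal = [c for c in counts.columns if c.startswith("HMEC_")]

# "Cancer-emergent": detected in a fair share of cancer samples while
# essentially absent from every normal sample (thresholds are illustrative).
in_cancer = (counts[cancer] >= 5).mean(axis=1) >= 0.5
absent_in_normal = (counts[normal] < 2).all(axis=1)

print(list(counts.index[in_cancer & absent_in_normal]))  # -> ['locus_A']
```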

BSJ: In one of your papers, you describe how one oncRNA, T3p, has a strong association with breast cancer progression. How did you demonstrate whether T3p directly affects cancer progression and metastasis?

HG: We first looked through our lists of orphan RNAs and searched for those that were associated with tumor progression such that they not only appear in cancer cells, but their levels increase as the tumor progresses. T3p came out of that process, but as I said before, association is never causation. In order to prove causation, we performed loss-of-function experiments to test whether, if we took away T3p, we would see an effect on the cell's cancer-related phenotypes.

Figure 1: Figure 1a is a heat map depicting the significant expression of 437 small noncoding RNAs in breast cancer cell lines (red, green, and yellow groups) as compared to their non-significant expression in normal cell lines, represented by human mammary epithelial cells (HMECs). In Figure 1b, The Cancer Genome Atlas Breast Invasive Carcinoma (TCGA-BRCA) data collection was used to identify a subset of these smRNAs that was expressed in breast cancer biopsies but absent in the surrounding normal tissue. The resulting 201 smRNAs are defined as orphan noncoding RNAs (oncRNAs).2



To do this, we used a class of antisense RNAs called locked nucleic acids (LNAs) that form very stable duplexes with small RNAs. They have been used historically to look at other small noncoding RNAs with regulatory functions, such as microRNAs. We used LNAs against T3p to see if, after we inhibit T3p, we can see specific changes in gene expression patterns in the cell and, more importantly, changes in their metastatic capacity. This was measured using xenograft mouse models, where we were able to implant or inject tumor cells into immunocompromised mice and measure how metastatic or aggressive the tumor is. We used these assays to measure the ability of cancer cells to colonize the lungs after perturbations of T3p. We were ultimately able to demonstrate that there is indeed a functional link between T3p expression and metastasis.

BSJ: In the article, you discuss T3p's relationship with the RISC complex. What is the general function of the RISC complex, and how does T3p interact with it?

HG: As I mentioned earlier, there is this class of small noncoding RNAs called microRNAs. These molecules are loaded into the RISC complex and serve to recognize target RNA molecules for degradation through base pairing. When the microRNA recognizes a complementary sequence on a target molecule, the RISC complex will cut this target RNA, leading to its degradation. Regarding T3p, once we were able to show that T3p had a direct effect on cancer progression, the next question we had to answer was, "What is its mechanism of action?" Since we are a half-computational lab, we had already prebuilt a lot of the tools and data sets necessary to analyze the interaction potential of RNAs. This included what are known as CLIP data sets, data sets generated for RNA-binding proteins in order to show where they bind. Through this analysis, we found that Argonaute 2 (AGO2), a key enzyme of the RISC complex, was bound to T3p. That meant one of two things. One was that T3p itself could potentially function as a microRNA. Alternatively, T3p could instead be a target of the RISC complex, meaning that it interacts with a microRNA already loaded into the RISC complex. In regards to the first possibility, T3p was already a bit too long to be a microRNA to begin with, and we could not find a seed sequence that would explain the gene expression changes resulting from its presence in cells. We thus ruled out the first possibility and landed on the second, where T3p is binding to the RISC complex in the context of other microRNAs. We then looked for the specific microRNAs that could target T3p, and we found a few. We tested them experimentally to see if they actually do form a complex, and we showed that two of them directly bind T3p. Additionally, we showed that T3p levels modulate the gene expression of a few targets through this couple of microRNAs. That was how we landed on the link between T3p and the RISC complex.

BSJ: Have you been able to explore the clinical implications of this link between orphan noncoding RNAs and cancer progression?

HG: Yes, since the paper's publication, we have started a retrospective collaborative project with the I-SPY breast cancer trial at UCSF, where we look at the oncRNA content of serum samples from breast cancer patients and determine how it changes upon treatment or how it relates to the size of the tumor or residual disease. This is one of the directions we are pursuing in order to find out if we can use liquid biopsies built around the detection of oncRNAs to stratify patients by risk. Outside of just diagnostics, I want to add that there is also a possibility of having orphan RNAs serve as therapeutic targets. Since they are not traditional therapeutic targets, we are still in the early days of exploring what is possible, but the bottom line is that they can serve as novel targets that likely have limited toxicity. This is due to the fact that most of the pathways that currently serve as therapeutic targets function in normal cells as well as cancer cells. Thus, once you exceed the therapeutic window, you are hitting normal cells as well as cancer cells, resulting in on-target toxicity. However, targeting functional oncRNAs would not result in this toxicity, since they are not present in normal cells by definition.

"Outside of just diagnostics, I want to add that there is also a possibility of having orphan RNAs serve as therapeutic targets."

Figure 2: Model of the pathway through which T3p drives cancer metastasis. When expressed, T3p binds to Argonaute 2 (AGO2) in RISC complexes, preventing miR-10b-5p and miR-378c-5p from binding. These miRNAs are thus unable to silence expression of their downstream target genes, NUPR1 and PANX2. Elevated expression of these genes is associated with metastasis of breast cancer to the lungs.2



Figure 3: Graph of lung bioluminescence signal over time for mice injected with H1299 lung cancer cells that either express a control shRNA (in gray) or an shRNA targeting TARBP2 (in blue). Note the greater bioluminescence and presence of tumors in the lungs of the control group.3

BSJ: Another one of your papers deals with the RNA-binding protein TARBP2 and its oncogenic implications through its involvement in targeted intron retention. How did you initially come to hypothesize that TARBP2 was involved in this pathway?

HG: TARBP2 was actually one of the first genes I studied as an experimental cancer biologist. When I started as a postdoc, I was studying the changes in RNA stability we see when we compare poorly and highly metastatic breast cancer cells. I found a sizable regulon of genes whose RNA stability was changing in highly metastatic cells, but it was not clear why. At the time, most of what we knew about RNA stability had to do with microRNAs or some RNA-binding proteins, but when I looked at those, none of them could explain the changes. That implied that there was an unknown mechanism through which the stability of these targets was being dysregulated in highly metastatic cells; landing on these kinds of problems is, in fact, my job. As a systems biologist, I try to build regulatory pathways from scratch, as opposed to relying on what is known. So, I took advantage of a kind of custom application of network biology: given a set of genes that are changing together, can you figure out an associated factor correlated with all these genes? In other words, can you identify a master regulator of the genes of a regulon, where if that regulator changes, so will the targets? To answer that, I essentially ran a lot of correlation analyses on gene expression. Through this network biological approach, I nominated three potential RNA-binding proteins as regulators of RNA stability in this context. TARBP2 was one of them. I knocked down each one of them and measured changes in RNA stability, and TARBP2 turned out to be the right candidate. For the remainder of that paper, which came out in Nature back in 2014, we really focused on its function in metastasis. We showed through xenograft mouse models that if you change TARBP2 activity and expression, you can modulate the metastatic capacity of cancer cells; however, it was not really clear what the actual mechanism was through which TARBP2 regulates RNA stability. This is where the second paper, which is basically a follow-up, comes into play. Basically, we were trying to find how TARBP2 functions to change the stability of its target regulon.
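The master-regulator search he describes can be caricatured as a correlation screen: score each candidate regulator by how consistently its expression tracks the regulon across samples. Below is a minimal sketch under assumed data shapes; the file names and the median-correlation summary are hypothetical choices for illustration, not the published method.

```python
import pandas as pd

# Hypothetical inputs, assumed to share the same sample index (rows = samples):
# expression of candidate RNA-binding proteins, and expression/stability of
# the co-varying regulon genes.
rbp_expr = pd.read_csv("rbp_expression.csv", index_col=0)        # samples x candidate RBPs
regulon_expr = pd.read_csv("regulon_stability.csv", index_col=0) # samples x regulon genes

# Correlate each candidate regulator with every regulon gene, then summarize
# each candidate by how consistently it tracks the whole regulon.
scores = {}
for rbp in rbp_expr.columns:
    corrs = regulon_expr.corrwith(rbp_expr[rbp], method="spearman")
    scores[rbp] = corrs.abs().median()  # one consistency score per candidate

ranked = pd.Series(scores).sort_values(ascending=False)
print(ranked.head(3))  # top nominated master regulators, to test by knockdown
```

A correlation screen like this only nominates candidates; as described above, the causal call still comes from knocking each one down and measuring RNA stability.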

BSJ: How did you then narrow down how TARBP2 operates at a molecular level?

HG: We first made a couple of important observations. TARBP2 had a known function as part of the microRNA processing machinery, where it was thought to be a cytoplasmic RNA-binding protein. However, our results localized its function to the nucleus; if you knock TARBP2 down, the stability of its target genes changes inside the nucleus. This meant that we had a nuclear RNA stability pathway that was different from what was known. We then used pull-down mass spectrometry to target TARBP2 and all the proteins that it could interact with in order to examine its function. This included key components of the methyltransferase complex and also TPR, a component of the nuclear pore-associated proteins involved in RNA surveillance and export. We next modulated the levels of these proteins and observed whether we could see a similar effect on the action of TARBP2 and its target regulon. We used this epistasis experiment to prove that TARBP2 is upstream of RNA methylation, which is upstream of our target regulon. In this manner, we revealed how TARBP2 binds to mostly intronic sequences as the RNA is transcribed. It recruits a methyltransferase complex, which methylates the RNA, and these methylation marks are then used as flags to regulate the rate of splicing. Therefore, if TARBP2 is present, you get less efficient splicing and intron retention. In the nucleus, RNA that is not properly spliced is very quickly degraded. We ultimately think that at the same time TARBP2 is recruiting the methyltransferase complex and prohibiting efficient splicing, through its interactions with TPR, it simultaneously brings this surveillance complex to these target transcripts, resulting in their degradation.

BSJ: What is the association between TARBP2 expression and cancer in vivo?

HG: In our first paper on the subject, we established that TARBP2 had a role in the lung metastasis of breast cancer. By binding to its target RNA sequences, TARBP2 drives their degradation, promoting metastasis. On top of that, once we had the signature of TARBP2 and its targets, we looked broadly at where else this pathway could be functional. One of the places we looked was cancer gene expression data sets, and we identified breast cancer, which made sense. But even stronger than that, we saw a signal in lung cancer, which is why we started to go down that path and look at how modulations of TARBP2 impact cancer growth.

BSJ: What are the implications of having a greater understanding of these post-transcriptional pathways in cancer cells?

HG: The models that we are using and creating are not just significant for the study of human disease, but they are useful in exploring normal cell physiology as well. As I mentioned earlier, most of the pathways that we find dysregulated in the context of cancer are performing their normal functions in normal cells. They are being perturbed in the context of cancer, but their identities are not changing. If, for instance, A regulates B in cancer cells, it very likely also regulates B in normal cells. In cancer cells, though, you may have more A than normal, and the hyperactivation of this pathway leads to phenotypic consequences. That is really my approach to science. I am broadly interested in gene expression control, and by understanding where it breaks, we learn how it works.

BSJ: Finally, as a member of the Innovative Genomics Institute (IGI), you are currently working on targeting RNA structural elements in SARS-CoV-2. Could you describe the project?

HG: Since the start of the pandemic, we have also wanted to contribute to the scientific effort to the extent that we could. We have a couple of projects focused on COVID, and this is one of them. Going back to the TARBP2 story, one of the key ways that we found TARBP2 was through first finding structural elements that TARBP2 binds to. I have had this long-standing interest in understanding how regulatory information is encoded not in the primary sequence, but in structures. Over the years, I have been involved in various projects to find these structural regulatory elements, and one of the projects that I was working on as a postdoc was in collaboration with Charles Rice, who recently won a Nobel Prize. It focused on whether we could find structural elements in viral genomes that had regulatory potential, and whether they were conserved. In the initial analysis that I did, I had included coronaviruses among other families. We were never able to finish that project, but I had the understanding that came with it. I knew that coronaviruses were showing a lot of signal in terms of RNA structure, and so I decided to set up a proposal where we would start looking at the potential role of RNA secondary structure in COVID. Over the years, we have come up with a hybrid strategy of both experimental and computational probing of secondary structure. Under this strategy, we proposed to use DMS-MaPseq to look at the secondary structure of the entire viral genome in pieces to see if we can find any docking sites or any interesting structural components called switches, where the same sequence can have multiple conformations depending on what it is interacting with. We are currently building the library, but we will see how it goes.

REFERENCES

1. [Photograph of Hani Goodarzi]. UCSF Helen Diller Family Comprehensive Cancer Center. https://cancer.ucsf.edu/people/profiles/goodarzi_hani.7686
2. Fish, L., Zhang, S., Yu, J. X., Culbertson, B., Zhou, A. Y., Goga, A., & Goodarzi, H. (2018). Cancer cells exploit an orphan RNA to drive metastatic progression. Nature Medicine, 24(11), 1743–1751. https://doi.org/10.1038/s41591-018-0230-4
3. Fish, L., Navickas, A., Culbertson, B., Xu, Y., Nguyen, H., Zhang, S., Hochman, M., Okimoto, R., Dill, B. D., Molina, H., Najafabadi, H. S., Alarcón, C., Ruggero, D., & Goodarzi, H. (2019). Nuclear TARBP2 drives oncogenic dysregulation of RNA splicing and decay. Molecular Cell, 75(5), 967–981.e9. https://doi.org/10.1016/j.molcel.2019.06.001



Schizophrenia Through the Years

BY ANISHA IYER

How historical perceptions and technological innovation have shaped scientists' understanding of schizophrenia.

PSYCHOSIS IN HISTORY

Upon first discovering neurodivergence in people, ancient civilizations turned to spirituality and supernaturalism for answers. Without modern science's understanding of atypical neurological states, Stone Age Egyptians drilled holes in patients' skulls to release 'evil spirits,' a practice that missed the mark as psychiatric treatment but proved vital millennia later in modern-day neurosurgical practice (Figure 1).1 As centuries passed, new civilizations popularized the idea that neurochemistry correlated with theistic devotion. Greek mythology and Homeric epics maintained that psychosis was punishment for insufficient worship until the Greek physician Hippocrates suggested an imbalance of humors as the cause, providing foundational theory for later physicians to build upon.1 During the Middle Ages, the Christian church adopted Hippocrates' therapeutic techniques of blood-letting, purgatives, and special diets for use alongside prayer and confession. For several centuries, sufferers of psychotic disease were deemed 'heretics' and burned to combat perceived demonic possession, until a wave of scientific breakthroughs in the sixteenth century encouraged a shift towards scientific postulation.1


In 19th-century France, physician Philippe Pinel traced neurodivergence to exposure to psychological and social stressors, advocating for humane treatment and greater respect for patients. In 1910, Swiss psychiatrist Paul Eugen Bleuler coined the term "schizophrenia" to describe the splitting of thoughts from the mind. Decades later, Freud suggested that schizophrenia was rooted in 'unconscious' conflicts from early childhood, inspiring a new trajectory for schizophrenia research until scientists discovered antipsychotics.1

DISCOVERING ANTIPSYCHOTICS

Chlorpromazine, the first and most famous antipsychotic, was originally introduced in medicine as an early anesthetic. After its introduction in 1951 by French surgeon Henri Laborit, chlorpromazine effectively stabilized patients without a loss of consciousness, and Laborit sought to repurpose the drug to treat psychosis.2 According to Laborit, chlorpromazine ameliorated patients' otherwise irremediable conditions and prepared patients "to resume normal life," suggesting it was relatively curative of psychosis.

Figure 1: Trepanation. Trepanation is the practice of drilling holes into the skull to relieve ailments. This practice was extremely common in ancient civilizations before technology allowed scientists to attribute symptoms to specific brain regions and surgically intervene with appropriate protocols. A more controlled version of trepanation is used today in neurosurgery when surgeons drill burr holes to relieve pressure or to open the skull to perform surgery. Left, image in public domain; Right, licensed under CC BY-SA 3.0 FR.



Figure 2: Spectrophotofluorometer. Dr. Sidney Udenfriend examining data in front of the first Aminco-Bowman spectrophotofluorometer, built by Dr. Robert Bowman. Bowman assembled the functional spectrophotofluorometer from a precursor bench-top arrangement of optical and electrical equipment; the instrument allowed scientists to chemically analyze small amounts of compounds and had great pharmacological applications in the measurement of neurotransmitters.9

Chlorpromazine's remarkable stabilizing capabilities, especially when paired with barbiturates and electroshock therapy, were considered a triumph for the burgeoning field of psychopharmacology.2,3 Although chlorpromazine's mechanisms of action remained unknown to Laborit, further research led to an explosion of scientific discovery in psychopharmacology and neuroscience. By the end of the decade, scientists had identified many neurotransmitters, including serotonin and dopamine. Electron microscopy clarified the chemically mediated nature of synaptic transmission between neurons, and the spectrophotofluorometer enabled precise chemical analysis of neurotransmitters in the brain (Figure 2).2,4,5 Together, these new developments led to clinical research that attributed chlorpromazine's antipsychotic abilities directly to its anti-serotonin and anti-dopamine effects.2


Antipsychotic research centered on serotonin until Arvid Carlsson demonstrated in 1963 that blocking dopamine receptors causes antipsychotic effects, shifting the focus to dopamine.2 Following this shift, schizophrenia research aimed to elucidate the mechanisms by which dopamine causes psychosis, eventually progressing through three distinct versions of the Dopamine Hypothesis.

THE DOPAMINE HYPOTHESIS

Version I: Targeting the Dopamine Receptor

After Carlsson's 1963 discovery, dopamine receptors became the targets of schizophrenia research. Dopamine's involvement was further underscored when Carlsson found that reserpine, a chemical derivative of an Indian treatment for insanity, reduced dopamine stores in the synaptic vesicles of neurons.

Figure 3: PET scans of schizophrenic and healthy patients. Positron Emission Tomography (PET) is a neuroimaging technique used to compare and visualize activity in different regions of the brain. Blood flow brings the oxygen and nutrients necessary for ATP synthesis to brain tissue and can thus be used as a measure of brain activity. The PET scan of a schizophrenia patient (left) shows more brain activity in the temporal lobes, a site of D2 dopamine receptors, when compared to that of a healthy patient (right).10




Later, Carlsson discovered that chlorpromazine did not affect these dopamine stores, instead blocking serotonin, noradrenaline, and primarily dopamine receptors.6 In 1977, Carlsson established that dopaminergic hyperfunction produced symptoms of paranoid schizophrenia.7 Afterwards, scientists blocked dopamine receptors to fight schizophrenia, but had yet to attribute mechanisms to positive, negative, and cognitive symptoms or localize the abnormality to a specific region of the brain.

Version II: Localizing the Abnormality

In version II, however, scientists localized dopaminergic hyperactivity to D2 dopamine receptors in the striatum and connected an additional facet, D1 dopamine hypoactivity, to the frontal cortex. Following the 1977 invention of Positron Emission Tomography (PET), PET studies revealed hypoactivity and low dopamine metabolites in the frontal cortex of schizophrenia patients (Figure 3). In 1980, scientists connected the aforementioned cortical hypoactivity to subcortical hyperactivity at D2 receptors in the striatum.8


Figure 4: MRI scans of healthy and schizophrenic patients. Neuroimaging techniques such as Magnetic Resonance Imaging (MRI) enable scientists and clinicians to quickly and efficiently recognize abnormalities in brain structure and, at times, attribute them to neurological disorders. MRI aligns protons in brain tissue with a magnetic field to visualize brain anatomy. The MRI of the affected twin shows enlarged ventricles, which are associated with schizophrenia for unknown reasons, signifying how much else remains to be explored.

Together, this D1 hypodopaminergia and D2 hyperdopaminergia likely resulted in negative and positive symptoms of schizophrenia, respectively. However, there was still no framework to explain how subcortical hyperdopaminergia led to delusions or how cortical hypodopaminergia resulted in a depressive affect.

Version III: The Final Common Pathway

In the following decades, scientific innovation revolutionized scientists' means of studying psychiatric disease. Recent findings involve synaptic plasticity, the malleable nature of the synapse.9 To enable this malleability, extrasynaptic receptors located further from the synapse regulate neurotransmitter release in a feedback-mediated system.10 Accordingly, extrasynaptic D2 receptors modulate dopamine release based on nearby dopamine levels.6 In 2006, Carlsson attributed schizophrenic dopamine dysfunction to malfunction in dopaminergic synapses and a poorly compensating feedback-mediated system, suggesting that defective synapses "[lead] to feedback activation and the resulting observed increase in dopaminergic tone."6

Initially, defective dopaminergic synapses cause low dopamine concentrations. To compensate, extrasynaptic transmission increases, causing psychosis. Failing to reach the synapse in a controlled fashion, extrasynaptic transmission compensates poorly for the defect, and the original low dopamine levels cause negative symptoms.6 In 2009, scientists Oliver Howes and Shitij Kapur proposed that several different stimuli—including genetics, stress, drugs, and the dopamine dysfunction detailed in version II—lead to D2 dopamine dysregulation. Emphasizing dopamine dysregulation's involvement in psychosis, the team drew an important distinction between schizophrenia, the broader neurodevelopmental disorder, and psychosis, one of its symptoms.8



CONCLUSION

Throughout history, physicians failed to rationalize psychosis without demonizing patients, and could not attribute its basis to neurochemistry. As innovation fueled scientific discovery, scientists' means to explore the once-concealed secrets of the mind broadened dramatically. Technological innovations such as electron microscopy, spectrofluorimetry, and neuroimaging have revolutionized modern science's ability to understand psychiatric diseases (Figure 4). With modern science's understanding of psychosis, schizophrenia is now considered a biological brain disorder. Medications that are the product of decades of cutting-edge clinical research allow most patients to lead normal lives.6 While early scientists had no choice but to blindly try and err, present-day scientists have access to cellular minutiae and effective medications to treat schizophrenia better than ever before. After understanding dopamine's role in schizophrenia, pharmacologists synthesized several medications to categorically account for hallucinations and delusions.6 Yet, some facets of this complex brain disorder remain unknown. Future directions for schizophrenia research include reducing side effects of medications, such as tardive dyskinesia and cardiac arrhythmia, and targeting cognitive and negative symptoms, namely working memory deficits, depressive affect, and social withdrawal.11

Despite science's deepened understanding, schizophrenia remains highly stigmatized. On average, desire for distance from individuals with schizophrenia increased from 1996 to 2006, largely due to perceived dangerousness.12 Society's time-honored tradition of demonizing neurodivergent people appears to have persisted into the 21st century. As science resolves the remaining limitations of schizophrenia treatment, one can only hope that society will resolve its remaining stigma to allow neurodivergence to be appreciated, not feared.

REFERENCES

1. Burton, N. (2020, May 4). A brief history of schizophrenia. Psychology Today. https://www.psychologytoday.com/us/blog/hide-and-seek/201209/brief-history-schizophrenia
2. Ban, T. A. (2007). Fifty years chlorpromazine: A historical perspective. Neuropsychiatric Diseases and Treatment, 3(4), 495–500. https://www.dovepress.com/fifty-years-chlorpromazine-a-historical-perspective-peer-reviewed-article-NDT
3. Faria, M. A. (2013). Violence, mental illness, and the brain - A brief history of psychosurgery: Part 3 - From deep brain stimulation to amygdalotomy for violent behavior, seizures, and pathological aggression in humans. Surgical Neurology International, 4(1), 91. https://doi.org/10.4103/2152-7806.115162
4. Gray, E. G. (1961). The granule cells, mossy synapses and Purkinje spine synapses of the cerebellum: Light and electron microscope observations. Journal of Anatomy, 95, 345–356.
5. Udenfriend, S. (2008). Development of the spectrophotofluorometer and its commercialization. Protein Science, 4(3), 542–551. https://doi.org/10.1002/pro.5560040321
6. Carlsson, A., & Carlsson, M. L. (2006). A dopaminergic deficit hypothesis of schizophrenia: The path to discovery. Dialogues in Clinical Neuroscience, 8(1), 137–142.
7. Carlsson, A. (1977). Does dopamine play a role in schizophrenia? Psychological Medicine, 7(4), 583–597. https://doi.org/10.1017/S003329170000622X
8. Howes, O. D., & Kapur, S. (2009). The dopamine hypothesis of schizophrenia: Version III—The final common pathway. Schizophrenia Bulletin, 35(3), 549–562. https://doi.org/10.1093/schbul/sbp006
9. Zucker, R. S., & Regehr, W. G. (2002). Short-term synaptic plasticity. Annual Review of Physiology, 64(1), 355–405. https://doi.org/10.1146/annurev.physiol.64.092501.114547
10. Ford, C. P. (2014). The role of D2-autoreceptors in regulating dopamine neuron activity and transmission. Neuroscience, 282, 13–22. https://doi.org/10.1016/j.neuroscience.2014.01.025
11. Gaebel, W., & Zielasek, J. (2015). Schizophrenia in 2020: Trends in diagnosis and therapy. Psychiatry and Clinical Neurosciences, 69(11), 661–673. https://doi.org/10.1111/pcn.12322
12. Silton, N. R., Flannelly, K. J., Milstein, G., & Vaaler, M. L. (2011). Stigma in America: Has anything changed?: Impact of perceptions of mental illness and dangerousness on the desire for social distance: 1996 and 2006. The Journal of Nervous and Mental Disease, 199(6), 361–366. https://doi.org/10.1097/NMD.0b013e31821cd112

IMAGE REFERENCES

1. Banner: Lythgoe, M., & Hutton, C. (2004). Enhanced MRI Scan of the Head [Medical scan]. Wellcome Collection. Image licensed under CC BY-NC-ND 2.0 UK. https://www.flickr.com/photos/wellcomeimages/7026370875
2. Figure 1, left: Bosch, H. (1501–1505). The Extraction of the Stone of Madness [Painting, cropped]. Museo Nacional del Prado, Madrid, Spain. https://www.museodelprado.es/en/the-collection/art-work/the-extraction-of-the-stone-of-madness/313db7a0-f9bf-49ad-a242-67e95b14c5a2
3. Figure 1, right: Rama. (2006). [Photograph of the trepanated skull of a woman, dated 3500 BC]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Crane-trepanation-img_0507_crop.jpg
4. Figure 2: Udenfriend, S. (2008). Development of the spectrophotofluorometer and its commercialization. Protein Science, 4(3), 542–551. https://doi.org/10.1002/pro.5560040321
5. Figure 3: Earles, J., McDonald, L., Dietrich, E., & Einstein, G. [PET scans of brains with and without schizophrenia]. Furman University. http://facweb.furman.edu/~einstein/general/disorderdemo/petscans.htm
6. Figure 4: Carpenter, W. T., & Buchanan, R. W. (1994). Schizophrenia [Figure 1]. New England Journal of Medicine, 330(10), 681–690. https://doi.org/10.1056/NEJM199403103301006



Machine Learning and Design Optimization for Molecular Biology and Beyond

Interview with Dr. Jennifer Listgarten

BY BRYAN HSU, NATASHA RAUT, KAITLYN WANG, AND ELETTRA PREOSTI

Jennifer Listgarten is a professor in the Department of Electrical Engineering and Computer Science and a principal investigator in the Center for Computational Biology at the University of California, Berkeley. She is also a member of the steering committee for the Berkeley AI Research (BAIR) Lab, and a Chan Zuckerberg investigator. In this interview, we discuss her work in the intersection of machine learning, applied statistics, and molecular biology.

BSJ: You have a very diverse background ranging from machine learning to applied statistics to molecular biology. Can you tell us how you came to start working on design optimization?

JL: It was very unplanned, actually. I had been working in the fields of statistical genetics and CRISPR guide design for some time, so I wanted to look for something really crazy and different. That summer, a graduate student intern and I wondered if we could predict the codon usage in actual organisms with modern-day machine learning. That was totally crazy and not necessarily useful, but I thought it might shed some interesting biological insights. Is codon usage predictable, and if so, what enables you to predict it? Is it just the organism or also the type of gene? From there, we moved to codon optimization using more sophisticated modeling techniques and ideally ingesting more data to make use of those techniques. I approached my colleague, John Dunwich, and we started working on this very concrete problem. I came up with a ridiculous idea: what if I just think about finding sequences of amino acids or nucleotides that will do what I want them to do in a general way? Of course, I was aware that there were decades' worth of research done to answer this question in terms of biophysics-based modeling. David Baker's lab at the University of Washington, for example, built energy-based models. But, I thought that we should use machine learning. I talked to a lot of people, convinced some students to work on this, and now, I think this is my favorite research area that I have ever worked in.

BSJ: Can you provide a general overview of how machine learning methods such as neural networks are applied to successfully optimize small molecule drug discovery?

JL: The general way to think about this is that machine learning methods can be used to build an in silico predictive model for measuring things. Measuring quantities in a lab can oftentimes be tricky and require creativity because you cannot always measure exactly what you want. Typically, a proxy is first measured at scale, and then the correlation between the proxy and the quantity we actually care about must be understood. But, what if we can have a predictive model to reduce the number of measurements needed? Maybe instead of having to take a thousand measurements, we can get away with taking fifty or a hundred measurements at a particular location and time during the experimental process. This would be a tremendous saving in many senses of the word.



Figure 1: Integrating mind and machine in drug discovery. While machines and machine learning models are capable of making and testing designs, they do not yet have the capability to create these designs or derive meaningful conclusions from extensive data analysis. However, as the fields of computational biology and chemistry continue to progress, the collaboration between mind and machine may drastically change scientific research as we know it.²

BSJ: Do you think that there will come a point in time in which machines can fully take over the analysis and design processes?

JL: The general answer is no. I think our work is unlike natural language processing, computer vision, and speech, where the benchmarks of machine learning have been blown away by deep neural networks. What distinguishes these three areas from computational biology and chemistry is that it is easy to obtain data in those areas. For example, you can trivially take a gazillion images or snippets of speech from people. You can also have an ordinary human annotate this data since most of us are born with brains that can comprehend and make sense of it. Therefore, getting the labels required for machine learning is really easy. However, you cannot do this in chemistry and biology. You have to spend a lot of time and money in the lab and use your ingenuity to measure the quantities you care about. Even then, it is an indirect measurement. So, the data problem itself is inherently much trickier.



For this reason, I think there is no way we are going to replace domain experts. The question becomes: how can we synergistically interact with each other? For example, as a machine-learning person, I must decide which data an experimenter should grab in order to help me build a good machine learning model. The machine learning model would in turn make more useful predictions. On the other hand, an experimenter might have considerations about how difficult it is to measure one quantity compared to another, beyond what the machine learning model indicates. Overall, I think that there are so many difficult, complex problems that it will take a very long time, if ever, before humans are out of the loop.

BSJ: Some of your past work focused on developing algorithms to predict off-target activities for the end-to-end design of CRISPR guide RNAs. Why is optimizing guide RNAs important for CRISPR-Cas9?

JL: In CRISPR-Cas9, the Cas9 enzyme resembles a Pac-Man. The "Pac-Man" comes in, pulls apart a double strand of DNA, and makes a cut. Afterwards, native machinery attempts to fix the cut. However, since the native machinery is not very good at fixing the cut, it actually disables the gene. But, if you can deliver the "Pac-Man" to the right part of the gene, it is more likely to get messed up without repairing itself. That is how you get a gene knockout. So, the question becomes: how do I deliver the "Pac-Man" to the right part of the gene? This is where the guide RNA comes in. The guide RNA attaches itself to the "Pac-Man" and brings it to a specific part of the genome. It does so on the basis of complementarity between the guide RNA and the human genome, since the guide RNA cannot latch onto anything other than the unique sequence to which it is complementary. But, if the target sequence is not unique or has certain thermodynamic properties, the guide RNA may end up attaching itself to other parts of the genome. Thus, if you are trying to conduct a gene knockout experiment and you design the wrong guide RNA, you might draw the wrong biological conclusions, since you have actually knocked out other genes as well.

INTERVIEWS


Figure 2. Schematic of Elevation off-target predictive modelling. a. A visual walkthrough of how Elevation would score a pair of gRNA-target sequences with two potential off-target mismatches. The sequences are first separated into two cases. Then, they are scored by the first-layer model, which deals specifically with single mismatches. Elevation evaluates using the Spearman correlation, which weights each gRNA-target pair by a monotonic function of its measured activity in the cell. Next, the second-layer model combines the two scores. In neural networks, a layer is a container that transforms a weighted input with non-linear functions before passing the output to another layer for further evaluation. b. A closer look at the second-layer aggregation process. The model statistically computes an input distribution of all the single-mismatch scores and derives the final score accordingly.3

BSJ: Can you briefly describe the Elevation model, and how it overcomes the limitations of current prediction models?

JL: There are two things that you care about when it comes to Elevation. The first is that if I am using a guide RNA (gRNA) to knock out a target gene, I want to know what other genes I have knocked out. In order to do this in silico, I need a model that, given a guide RNA and part of the genome, gives us the probability that we have accidentally knocked out a gene in that part of the genome. Then, I would need to run the model along the genome at every position that I am worried about. The second important thing is the aggregate of all the probability scores. With what I have told you so far, the model will return three billion numbers, each of which is the probability of an accidental knockout. No biologist, when considering one guide RNA, wants to look at three billion numbers. So, how can we summarize these numbers in some meaningful way? The way we solved this problem is by training the model on viability data so that we can measure a sort of aggregate effect. To do this, we target a non-essential gene such that if we were successful in knocking the gene out, the organism would still survive. This means that if we choose a bad guide RNA, and it knocks out other genes by accident, it is going to kill the organism. The organism's survival rate gives us an aggregate, indirect measurement of how much off-target activity there is. So, now there are some larger number of predictions from the first layer of the model, although not quite three billion. These predictions then get fed into the aggregate model, which has its own supervised label from a wet lab. This was our crazy compound approach. However, a big challenge was the very limited data that was available at the time we developed this model. Because there was so little data, I could not just throw deep neural networks at the problem. We basically had to create new approaches to deal with this problem based on standard, simple models.
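The compound structure she outlines (per-site probabilities from a first-layer model, summarized and fed to a second, viability-supervised layer) can be sketched generically. Everything below is a stand-in to show the shape of the approach, not Elevation's actual models, features, or data; the feature counts and random arrays are placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Layer 1 (stand-in): given features of one guide/site pair, predict the
# probability of an accidental cut. Elevation's real first layer is a
# trained mismatch model; this logistic regression is only a placeholder.
layer1 = LogisticRegression()

# Layer 2 (stand-in): map summary statistics of all per-site scores to one
# aggregate number, supervised by wet-lab viability measurements.
layer2 = LinearRegression()

def summarize(per_site_scores: np.ndarray) -> np.ndarray:
    """Collapse many per-site probabilities into a few aggregate features."""
    return np.array([per_site_scores.sum(),
                     per_site_scores.max(),
                     (per_site_scores > 0.5).sum()])

rng = np.random.default_rng(0)

# Placeholder training data for the first layer: one row of hypothetical
# features per guide/site pair, labeled by whether a cut occurred.
X_sites = rng.random((500, 8))
y_cut = rng.integers(0, 2, 500)
layer1.fit(X_sites, y_cut)

# Placeholder per-guide data: each guide has many candidate off-target
# sites; score them all, summarize, and fit the aggregation layer against
# one viability label per guide.
site_features_per_guide = [rng.random((100, 8)) for _ in range(40)]
agg = np.array([summarize(layer1.predict_proba(f)[:, 1])
                for f in site_features_per_guide])
viability = rng.random(40)  # stand-in for the wet-lab supervision
layer2.fit(agg, viability)
```

The design point survives the simplification: the aggregation layer turns billions of per-site numbers into one decision-relevant score, and it is trained against a measurable wet-lab quantity rather than hand-tuned.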




BSJ: What are the universal benefits of creating a cloud-based service for end-to-end guide RNA design?

JL: To be a successful researcher in computational biology, you typically need to make tools that people can immediately use. Now, I do not know if our CRISPR tool is such a tool, or if it was and has since been superseded. Modern-day molecular biology is so heavily dependent on elements of data science and machine learning, but sometimes the people who have the skills to develop them do not have the time or the bandwidth. You cannot reinvent the wheel constantly, right? Science progresses more rapidly when researchers build on top of existing tools. Thus, we released the GitHub source code for our tool, which makes it reproducible and robust. And the core code with the machine learning modeling should be pretty accessible.

BSJ: We also read your recent work about Estimation of Distribution Algorithms (EDAs). What are the core steps of an EDA, and what are EDAs used for?

JL: So, I am not from the mainstream community that works on EDAs, and I think some of them would quibble with my viewpoint. I would say that EDAs are an optimization method in which you do not need to have access to the gradient (rate of change) of the function you are optimizing. They seem to be very widely used in a number of science and engineering disciplines where optimization is an important factor. First, I have to decide up front what distribution to use for the search, where to start that distribution, and how to move from there. I am not going to follow the gradient. Instead, I am going to draw samples from the distribution and evaluate each sample under f(x). Then, another ingredient is reweighting the samples based on their performance under f(x). We want to throw away the bad points and train a new distribution just with the good points. Finally, the whole process is repeated. It is a lot like directed evolution, but in a computer. I must have a parametric form for the search distribution and a weighting function that will tell me how to modulate evaluations under f(x). Given those two things, everything else follows. Essentially, we are re-estimating the distribution with maximum likelihood estimation. I have to say it was super cool because we had not heard about EDAs before this project. David and I were trying to tackle this protein engineering problem, and he reinvented this thing that was essentially an EDA, except more rigorously defined. Then, looking at it, we realized we were trying to do directed evolution in silico starting from first principles, which blew my mind.

BSJ: Could you describe the connection between EDAs and Expectation Maximization, and why this connection is important?


JL: It is a very technical connection rather than an intuitive one. The machine learning community usually uses EM to fit a model to data. To illustrate this: if we had some points, we would fit the mean and covariance of a mixture of Gaussian (normal) distributions to those points. In contrast, that is not the fundamental problem for EDAs. The EDA problem is how to find the x that maximizes f(x). I am not fitting the function to x; I am trying to find a maximum. So, they sound like very different problems, but for technical reasons, you can actually create an analogy that connects them. That is why it is so beautiful; it is not very obvious until you see it. I think those are often the nicest kinds of results.

BSJ: How can the connection be used in research on design optimization?

JL: To be honest, it is not clear to me how to leverage our insight. We thought the connection was so beautiful, and we wanted to write it up and share it with the community to see if they might find it interesting and be able to make use of it. However, we did not spend the time to demonstrate how that connection allows people to do things they could not have otherwise done. That remains to be seen.

BSJ: Since you also have extensive experience in industry, how do you think the field of therapeutics in relation to industry has been impacted by computational protein engineering?

JL: My time in industry was at Microsoft Research, which in my instance was basically like being in academia. Ironically, one of the reasons I moved to academia was so that I could work with companies that cared about drug design. Biotechnology companies doing diagnostics or therapeutics have been trying to use machine learning, but I am not sure that there have ever been any runaway success stories there. Maybe it has been helpful; computation comes in everywhere. A lot of technologies require sequencing to assess what is happening, and sequencing results require a lot of computational biology.

Figure 3: The connection between EDAs and EM presented as a mathematical argument, where f(z) is the black-box function to be optimized, p(z|θ) represents the search model, and L_EDA(θ) can be thought of as an EDA equivalent of the log marginal likelihood in EM without any observed data x.4



But, can you develop a new COVID vaccine using machine learning? I do not think we have seen that kind of thing. However, I actually do think that we are on the cusp of starting to see where machine learning might contribute in groundbreaking ways, which is of course why I am working in this area. There are companies whose whole goal and premise of existence is the combination of high-throughput machine learning and high-throughput biology to really move the dial. Then, there are a whole bunch of places that use some machine learning on the side. Maybe they save some money on experiments, or maybe they get to a better point than they would have otherwise. However, I do think we are starting to see a lot more sophistication in communication between the machine learning and biology spheres, including in industry. The next 5 to 10 years are going to be really interesting in terms of what happens and where it happens. I hope that it happens in protein engineering and small molecule design.

BSJ: How do you hope your research in particular will impact the future of drug design?

JL: There are many hybrid groups out there that are computationally focused, but very application driven. These groups make things happen and get results, but they are typically more consumers of machine learning methods. That is really valuable. It is sort of the equivalent of translational research in biology, right? You need those people there, making sure it works. I sit in the electrical engineering and computer science department in the AI group, which has some of the best AI students in the whole country. I have had some students who are really cross-disciplinary with very rigorous technical expertise find me, so my group is one of the very few that is trying to think things through very cleanly from first principles or more abstract concepts. People like our two most recent papers, for example, because we carefully painted a really clear picture of the problem. I think that is what is missing from a lot of computational biology. Sometimes when I give talks, I say, "You know what? I am not even going to show you our results from our paper. If you want to see them, you can see them. What I want to convince you of is how to think about this problem." That might sound silly, but I think that is actually really important because how you think about it dictates how you find specific solutions with particular collaborators. When you are thinking in a coherent, fundamental way, you are more likely to arrive at an engineering solution that works. We are creating a more rigorous scaffolding on which other researchers can think about the specifics with respect to certain domains. We are also working very collaboratively with people on the translational side of things. Doing both foundation and application is beautiful because there can be a very nice interplay between them.

REFERENCES

1. Headshot: Jennifer Listgarten [Photograph]. Retrieved from http://www.jennifer.listgarten.com
2. Schneider, P., Walters, W. P., Plowright, A. T., Sieroka, N., Listgarten, J., Goodnow, R. A., Fisher, J., Jansen, J. M., Duca, J. S., Rush, T. S., Zentgraf, M., Hill, J. E., Krutoholow, E., Kohler, M., Blaney, J., Funatsu, K., Luebkemann, C., & Schneider, G. (2020). Rethinking drug design in the artificial intelligence era. Nature Reviews Drug Discovery, 19(5), 353–364. https://doi.org/10.1038/s41573-019-0050-3
3. Listgarten, J., Weinstein, M., Kleinstiver, B. P., Sousa, A. A., Joung, J. K., Crawford, J., Gao, K., Hoang, L., Elibol, M., Doench, J. G., & Fusi, N. (2018). Prediction of off-target activities for the end-to-end design of CRISPR guide RNAs. Nature Biomedical Engineering, 2(1), 38–47. https://doi.org/10.1038/s41551-017-0178-6
4. Brookes, D., Busia, A., Fannjiang, C., Murphy, K., & Listgarten, J. (2020). A view of estimation of distribution algorithms through the lens of expectation-maximization. Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion, 189–190. https://doi.org/10.1145/3377929.3389938



Darwin: Chimp or Chump?

BY LILIAN ELOYAN

When we think of Charles Darwin, we might imagine a brave explorer who trekked the coast of South America, collecting specimens that would lead to the formulation of the single most important biological concept: the theory of evolution by natural selection. But before the landmark publishing of The Origin of Species, the eventual father of evolutionary biology was a failed medical student, unable to follow in his father's footsteps.1 Before Darwin became Darwin, he was a young man uncertain about his future, searching for an opportunity for adventure. So in 1831, when Captain Robert Fitzroy asked young Charlie to join him on a dangerous, five-year voyage around the world aboard his ship, the HMS Beagle, the anxious 22-year-old jumped at the offer, despite his fears of the unknown.2 But Darwin wasn't the captain's first choice. When two other more qualified experts declined, one of Darwin's professors suggested him as a possible option—seeing in him the framework of a promising young scientist. However, at that point, the extent of Darwin's scientific experience was collecting beetles in his front yard.1 His true scientific education was still to come. Darwin's family was shocked that their timid and scholarly son would be interested in such a daring expedition. Instead, his father urged him to follow a respectable path and join the clergy, warning him, "You will be a disgrace, to yourself and all your family," but Darwin had never been interested in the Church, and his heart had already bound itself to science.3 However, in Darwin's nineteenth-century England, religion and science were closely intertwined. Prevailing biological ideas included clergyman Sir William Paley's claim that all organisms were perfectly tailored, each element working faultlessly together like the intricate mechanics of a watch, with God as the watchmaker.1 Scholars like Paley believed in a "Great Chain of Being," a hierarchical system of life in which God sat at the top with humans just beneath him, extending down to the simplest organisms.1 However, religious explanations left many questions unanswered.

THE VOYAGE OF THE BEAGLE

Despite his father's worries that the trip would be "a useless undertaking," on December 27th, 1831, the Beagle departed from Plymouth with a woefully unprepared Darwin on board as the unofficial naturalist.3,4 Darwin's fears of the turbulent sea had been quelled by Captain Fitzroy's reassurance that he could "at any time get home to England," and that if he liked, he "shall be left in some healthy, safe and nice country."2 Darwin soon exulted in the experience, writing that "Delight itself, however, is a weak term to express the feelings of a naturalist who, for the first time, has been wandering by himself in a Brazilian forest." But, as his father warned, hazard often accompanied adventure.5 Darwin survived a volcanic explosion and an immense earthquake, and traveled much of the trip on horseback due to severe seasickness.1 Perhaps most unsettling of all, these experiences helped Darwin begin to see a pattern, an explanation of life that seemed to solve the questions that biologists had been pondering for so long. He wrote in his journal: "A bad earthquake at once destroys the oldest associations: the world, the very emblem of all that is solid, has moved beneath our feet."5



“There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.” — Charles Darwin, The Origin of Species (1859) bad earthquake at once destroys the oldest associations: the world, the very emblem of all that is solid, has moved beneath our feet.”5 Indeed, the scientific ideas that Darwin had accepted as fact were slowly slipping from beneath him. French naturalist Jean-Baptiste Lamarck’s dominant biological theory was that organisms passed down acquired characteristics.1 Lamarck theorized that giraffes had long necks, for example, because they stretched them out while reaching for leaves, resulting in longer-necked offspring.1 Although Darwin wasn’t aware of the concept of genetic heredity, as Gregor Mendel’s studies on pea plants would only surface decades later, he recognized that any changes made to the body after birth could not be passed onto offspring. His observant eye could tell there was something that everyone was missing.

THE GALÁPAGOS ISLANDS

Four years into the voyage, Darwin visited the Galápagos Islands and collected dozens of specimens of what he believed were wrens, finches, warblers, and blackbirds.1 It was only when Darwin returned to England that he discovered that all of his Galápagos specimens were, in fact, finches.1 Darwin immediately realized his mistake. Although he had previously noted that the sheer diversity of the birds was remarkable considering that all of them lived so close to one another, he hadn't recorded which island he had found each specimen on.6 Darwin hypothesized that an ancestral species of finch must have arrived on the archipelago and evolved into several different forms over time to suit the unique conditions of their individual islands.7 Moreover, each finch was adapted to its unique habitat, with up to ten different species on a single island.7 Finches that lived on the ground had larger beaks, best for cracking seeds open, while tree finches had sharp beaks better fit for chiseling tree trunks to access the insects living within them.7 Building on English scholar Thomas Malthus' idea that the Earth did not have ample resources to support the constantly multiplying organisms which lived upon it, Darwin deduced that there is a struggle for existence. In other words, the organisms most fit to their environment survive to pass on their traits to offspring as unfavorable traits die out.1 Darwin called this mechanism natural selection. Given enough time, simple creatures could evolve to become infinitely complex through this process. Nearly two years after his return to England, Darwin scribbled in his journal the simple words "I think," beneath which he sketched a rough tree of life, beginning with one ancestral species and branching off into various different forms.8

THE ORIGIN OF SPECIES

Prior to his return to England, Darwin had journaled, "I dare hardly look forward to the future, for I do not know what will become of me," knowing that the ideas stirring in his head weren't going to sit well with the orthodox public.9 Faced with the prospect of revealing his theory to the world, Darwin began experiencing heart palpitations, stomach aches, and nightmares, causing him to put off publishing his work for 23 years after his return.1 However, others were beginning to catch on. French naturalist Geoffroy Saint-Hilaire noted that all animals look curiously similar during the early stages of fetal development, suggesting a common ancestral background.1


Figure 1: Darwin’s 1837 sketch of the tree of life from one of his personal notebooks that begins with the words “I think.” In the public domain.




Figure 2: A caricature of Darwin as an ape published after The Descent of Man (1871), Darwin’s follow-up to The Origin of Species. In the public domain.

Finally, in 1858, Alfred Wallace, a fellow British naturalist, wrote Darwin a letter detailing an epiphany he had about the concept of descent with modification.1 It was time. Darwin wrote that it was “like confessing a murder.”10 On November 24, 1859, Charles Darwin published On the Origin of Species by Means of Natural Selection. His opponents immediately went for his throat, but over the years Darwin gained a group of devoted supporters who defended his theory at the famous Oxford Meeting, a furious battle between some of the greatest English scientists of the 19th century during which Darwin did not utter a single word.1 Darwin’s opponents called him a chimp for believing that humans evolved from apes and a chump for not having the guts to argue his controversial ideas in the public sphere, but Darwin believed that The Origin of Species spoke for itself. Behind one of the most canonical names in scientific history was a nervous young man who did not let his previous failures or fears stop him from embarking on the journey that would fundamentally transform our understanding of biology.


REFERENCES

1. Zimmer, C. (2006). Evolution: The triumph of an idea. HarperCollins.
2. Eiseley, L. C. (1956). Charles Darwin. Scientific American, 194(2), 62–76. http://www.jstor.org/stable/26171736
3. Darwin, C. (1999). The autobiography of Charles Darwin. Project Gutenberg. (Original work published 1887)
4. Swab, J. C. (2010). Introducing students to Darwin via the voyage of HMS Beagle. The American Biology Teacher, 72(5), 281–286. https://doi.org/10.1525/abt.2010.72.5.5
5. Darwin, C. (1839). Journal of researches into the natural history and geology of the countries visited during the voyage of H.M.S. Beagle round the world, under the command of Capt. Fitz Roy, R.N. Cambridge University Press. (Original work published 1845). https://doi.org/10.1017/CBO9781139103831
6. Sulloway, F. J. (2009). Tantalizing tortoises and the Darwin–Galápagos legend. Journal of the History of Biology, 42(1), 3–31. https://doi.org/10.1007/s10739-008-9173-9
7. Lack, D. (1953, April). Darwin’s finches. Scientific American, 188(4), 66–73.
8. Darwin, C. (1987). Charles Darwin’s notebooks, 1836–1844: Geology, transmutation of species, metaphysical enquiries (P. H. Barrett, Ed.). Cornell University Press.
9. Darwin, C. (1835, August 9). [Letter to W. D. Fox]. Christ’s College Library (MS 53 Fox 47a, Letter No. DCP-LETT-282). Cambridge, England. https://www.darwinproject.ac.uk/letter/DCP-LETT-282.xml
10. Darwin, C. (1844, January 11). [Letter to Joseph Dalton Hooker]. Cambridge University Library (DAR 114: 3, Letter No. DCP-LETT-729). Cambridge, England. https://www.darwinproject.ac.uk/letter/DCP-LETT-729.xml

IMAGE REFERENCES

1. Banner: The Complete Work of Charles Darwin Online [Digital image]. Reproduced with permission from John van Wyhe, ed. (2002). http://darwin-online.org.uk/content/frameset?itemID=F59&viewtype=image&pageseq=602
2. Figure 1: Darwin, C. (1837). Darwin Tree [Sketch]. Wikimedia Commons. Retrieved from https://commons.wikimedia.org/wiki/File:Darwin_Tree_1837.png
3. Figure 2: The Hornet. (1871). Editorial cartoon depicting Charles Darwin as an ape [Cartoon]. Wikimedia Commons. Retrieved from https://commons.wikimedia.org/wiki/File:Editorial_cartoon_depicting_Charles_Darwin_as_an_ape_(1871).jpg



APPLICATIONS OF MATERIALS SCIENCE: FROM MODELING TO MEDICAL USE

INTERVIEW WITH PROFESSOR KEVIN HEALY

BY ESTHER LIM, ALEXANDER PETERSON, SABRINA WU, ANANYA KRISHNAPURA

Professor Kevin Healy is the Fandrianto and Salfia Halim Distinguished Professor of Engineering in the Departments of Bioengineering and of Materials Science and Engineering at the University of California, Berkeley. He currently leads the ATP-Bio effort, an interdisciplinary project that aims to develop key biopreservation technologies. Professor Healy is also the principal investigator of the Healy Laboratory, which combines biology and materials science in an effort to better understand core biological phenomena and develop biomedical innovations. Currently, the laboratory is conducting research regarding regenerative medicine and microphysiological systems. In this interview, we discuss the major goals of the ATP-Bio effort in addition to his laboratory’s research on modeling cardiomyopathy and applying hyaluronic acid-based hydrogels in medicine.

BSJ: Your research focuses on materials science and bioengineering, and it often has applications in medicine. What drew you to the intersection of these fields?

KH: As an undergraduate, I went to the University of Rochester as a chemical engineering major. I was an undergraduate from 1979 to 1983, and right around the late 70s, the artificial heart became this big thing. There was a society, which still exists, called the American Society for Artificial Internal Organs (ASAIO). Their big vision was to make an artificial heart out of plastic and other materials. The vision never really came to fruition, but it got me excited. The question of how to create an artificial organ became a very interesting topic for me. When I was a chemical engineering major, I was allowed to pick three electives and other technical electives. Since there was not a bioengineering department at the time, I took my electives in the medical school in radiation biology and biophysics. One graduate biophysics class was particularly interesting. I thought, “Wow, graduate school is a lot better than undergraduate. You do not really take any tests and you get to think a lot.” So, I really wanted to do some research. I started in a biophysics laboratory at Rochester, where we were interested in blood flow. I would make acrylic molds from gallium casts of rabbit ears so that the splitting of the blood vessel was accurate. I had to count how red blood cells differentially went into different parts of the blood vessel as it splits, and it turns out it is not symmetric. It has to be asymmetric or else there would be no blood cells in your smallest capillaries. This is called plasma skimming. I thought this was really interesting stuff, but I had a problem: the blood would clot in this little device all the time. I asked, “Why don’t we make this into something where the blood doesn’t clot?” They said, “Go ahead, Kevin. You’ll be a multi-millionaire in a couple weeks if you could figure that one out.” It is still one of the major problems in biomaterial science today—trying to design blood contact materials. But I really wanted to study this further, and that is how I got into biomaterial science.

BSJ: One of your papers deals with contractile deficits in cardiac microtissues you engineered. How were you able to model a diseased state of the heart in these microtissues?

KH: The first thing we had to do was make something that the cells were going to sit on. We were trying to make an array to organize collagen fibers, and we used two-photon polymerization (TPP) in order to do so. TPP is a light-activated way to form a polymer; the light activation initiates the chemical reaction. An example of this is the blue light wand a dentist uses when you have a cavity: they are doing photon-initiated polymerization in your mouth. By changing the optical parameters of the light, we were able to make an array of these fibers and control parameters like the Z height, the pitch (spacing), and the diameter, all of which became important. Now, when considering the cells that we have to put onto the chips, using heart cells from a rabbit or a rodent would not be very useful. Different species, especially their hearts, have different physiological parameters. Thus, using human cells in heart-type research is really important. In 2009, I did a sabbatical at The Gladstone Institute of Cardiovascular Disease in San Francisco, where I got involved with induced pluripotent stem cell technology. Using these induced pluripotent stem cells, you can work with human cells to create a microtissue. From 2009 onwards, it took another six or seven years until the field of stem cell biology developed to the point where we can now introduce genetic deficiencies in cells and compare them to the healthy cells. That is the ultimate beauty of using human cells and induced pluripotent stem cell technology. If we make a defect in the stem cell state, then we have an infinite source of cardiomyocytes that have that defect, if it is carried through. These cells, in turn, can be used to model a disease state.

BSJ: How did your microtissue model employing disease-state cells demonstrate a “contractile deficit”? What are the implications of this deficit for heart pathology in general?

KH: As I stated, we wanted to model cardiomyopathy. This myosin-binding protein C defective cell line, MYBPC3-/-, was what we used. There are two major cardiomyopathies; one is hypertrophic cardiomyopathy, and the other is dilated cardiomyopathy. If you do a genome-wide analysis, you will see that some percentage of the population that has hypertrophic cardiomyopathy has this gene mutation in MYBPC3, and the other part of the population has dilated cardiomyopathy. Thus, in the paper, we were trying to figure out what this system really represents or causes. We measured the contraction force exerted by microtissues made of wild-type versus MYBPC3-/- cells, and we found that over time, we see an increase in maximum force. If you take this force and multiply it by the velocity (how fast the tissue is contracting), you can plot the power. The power is a proxy for cardiac pumping and cardiac output. When you overlay the curves tracking power over time for both wild-type and MYBPC3-/- cells (Figure 1), they look kind of similar. If we take the area under the curves, they are basically almost identical. You have to think about that for a moment. The nearly identical areas tell you that the power for both cell lines may be equal, but the contractile force with the wild-type cells is much greater for this power compared to the MYBPC3-/- cells. This is the key finding. Now, cardiomyopathies are progressive diseases, and we did not continue the experiment past 20 days, but you might imagine that the maximum force exerted by the wild-type, healthy cells is going to increase over time, while the maximum force exerted by the MYBPC3-/- disease-state cells will decrease, leading to a contractile deficit.

BSJ: We also read your paper on the effects of hyaluronic acid (HyA) macromer molecular weight on hydrogel bioproperties. What are hydrogels, and how are they used in the medical field today?

KH: The easiest hydrogels to think of are the ones in your body. A hydrogel is usually some network of a macromolecule—it could be a protein or synthetic polymer—with a high degree of water by volume, up to 90%. For example, we can model hydrogels using sodium polyacrylate, a chemical in diapers. If we add water to sodium polyacrylate as a dry powder, it gets more viscous and turns into a gel network where the water gets absorbed like a sponge.

Figure 1: The graphs respectively depict the maximum force and power measured in cardiac microtissues constructed on a 10 µm fiber matrix. Over time, for a similar level of cardiac output (measured by power), there is a growing divergence between the maximum force measured in microtissues made of wild-type versus mutant MYBPC3-/- cells.2



But at some point, there are two physical forces that are opposing one another. The water wants to separate all the polymer chains and just blow this thing up, but the polymer is actually chemically bound in what we call a crosslinked network. So you are balancing the swelling, which is really an entropy increase caused by mixing, with the elastic strength of the network. This balancing is the reason behind a diaper’s absorbency. Soft contact lenses are another example of hydrogels. For soft contact lenses, we consider, “What does the contact lens have to do? Why does it swell?” It cannot swell too much, because then the optical properties will be messed up, but there’s swelling for comfort—for hydration of your cornea epithelial cells. This is a classic example of a biomaterial. There is nothing biologic about it, but its whole design has to deal with interfacing with the soft tissue of your eye and your cornea epithelium, which are very sensitive.

BSJ: What were some hydrogel properties that you measured to determine the optimal hydrogel molecular weight?

KH: The paper you are referring to comes on the back of three other papers that ultimately examine how we can best transplant cells. If you want to think of the three most important aspects of a hydrogel to regenerate tissue, we first look at the biological engagement with the cell. We secondly look at the stiffness or the modulus of the material, which affects how responsive the cell is to the hydrogel due to its mechanical properties. Lastly, we ensure that whatever the cells are synthesizing can be captured in the local environment. What we were trying to understand in that recent paper was the effect of making these semisynthetic hydrogels from different molecular weights of hyaluronic acid. HyA is distributed all throughout your body. The aqueous humor of your eye is almost 100% HyA, and it is also in your muscles and in cartilage. It is a natural polymer. Why is that interesting? Well, if that is the bulk part of our hydrogel material, then the body’s going to be able to handle it when the hydrogel starts to break down. We also add other components to the hydrogel that allow it to break down: matrix metalloproteinases. These enzymes degrade collagens and other types of matrix proteins as the cell remodels its environment. This is important for development and natural tissue regeneration. Now, in the case of this paper, we are examining the physical properties of the starting molecular weights of HyA, because as the starting molecular weight changes, the stiffness—this modulus parameter—is actually quite a bit different. On the biological end, the early goal was to promote vascularization, because without vascularization you are never going to have a viable tissue, especially if you are trying to regenerate a large portion of, for example, muscle.

BSJ: Do industries typically aim to optimize the same properties as the ones you analyzed in your paper, or is there substantial variation in desired properties?

KH: The short answer is that they would not be interested at all in the same properties, because our study was done with a biological goal in mind, which may differ from what an industry at large is trying to do. For instance, in our case, if you go to this image (see Figure 2), you can compare the lower molecular weight and the higher molecular weight. The blue bars are the crosslinker MMP-13, which is a peptide. As you can see, there is a different capacity for network structure depending on how many crosslinkers are present. In this case, we were designing for angiogenic capacity by testing the ability of stem cell-derived vascular cells to form vascular networks under different conditions. Another example of a hyaluronic acid product that does use small particle cross-links is Synvisc; it is injected in the knee to help out with different types of arthritis by serving as a lubricant in a joint that is deteriorating. This is all in the biological space. However, HyA is used quite extensively in other fields. For example, in cosmetic plastic surgery, it can be injected in your face to reduce wrinkles, so in that case, things like cross-linking would not be necessary.

Figure 2: Diagram of hyaluronic acid (HyA) hydrogels, each composed of HyA macromers with a different molecular weight: 60 kDa, 500 kDa, and 1 MDa, respectively. The blue bars represent MMP-13, the peptide responsible for crosslinking in these hydrogels.3



BSJ: You mentioned that this paper on hyaluronic acid molecular weight came on the back of three previous papers. Is there still ongoing research on this topic, and if so, what are you aiming to explore?

KH: Since the hyaluronic acid gel itself can be used for a lot of tissue regeneration applications, one of the applications we are currently pursuing involves volumetric muscle loss. Why is muscle tissue able to regenerate with such fabulous capacity, even when we do not add any cells to the material at the time of transplantation? To recapitulate this growth using biomaterials, the cells surrounding the hydrogel have to migrate into it and, as they do, cleave those MMP-degradable linkers and start synthesizing other molecules like chemoattractants. At the end of the day, the tissues are highly vascularized and the muscle structure is excellent compared to native tissue. So, we are looking at this in detail, as we do not know why it is working or even what the cell types are. It is not clear what the first cell types are that engage with this implant. We do not know whether neutrophils, early monocytes, or macrophages first enter into the hydrogel and start the wound-healing process. Understanding this process allows us to better titrate the results, if you will, and better titrate the material. But it also leads us into a way in which we can engineer better structures for the regeneration of muscle: specifically, muscle lost through a traumatic event.

BSJ: Aside from conducting lab research, you are currently leading the ATP-Bio effort at UC Berkeley. Could you tell us about this project?

KH: This is fun. It is really a whole engineering research center that is dedicated to the biological preservation of cells, tissue, and organs. The top-end goal, of course, is organ storage and preservation for successful transplantation. For example, you have about four hours for hearts and other sensitive organs to be retrieved from a patient and implanted. Otherwise, they have been out of the body for too long and cannot be used. Cryopreservation strategies, which include both cooling and warming the liquid you put the organ in, are really a science that has been studied in a piecemeal fashion. This is the first time that a center has been fully dedicated to this science. We try to develop better storage techniques that allow for a much lower attrition of cell quality and number for the sake of basic cell biology research. We are also involved in making organs-on-a-chip (sometimes called microphysiological systems). I am involved with using those microtissues as a proxy for actual organs during our testing before we get to the test beds, which are the large, main organs. It is exciting because the far-out folks want to apply technology like this to suspended animation. I went to a conference about five years ago at West Point, and in the room were the individuals we originally started ATP-Bio with, as well as some folks from NASA who wanted to get people to Mars and were focused on advanced space flight. They are interested in this suspended animation, which gets back to cryopreservation and being able to cool or warm an organism without affecting it. I think in your lifetime, you might see something like this. That is pretty much the ATP-Bio effort in a nutshell. It is exciting for us because as a scientist, you do not want to get stale. You do not want to stay in the same thing for more than five or six years without evolving, and that is what we are able to do here.


Figure 3: Schematic of the proposed impacts of the Engineering Research Center (ERC) for Advanced Technologies for the Preservation of Biological Systems (ATP-Bio). The center aims to develop technologies that allow for the effective biopreservation of tissues, organs, and entire organisms.4


REFERENCES

1. [Photograph of Kevin Healy]. Berkeley Research. https://vcresearch.berkeley.edu/faculty/kevin-edward-healy
2. Ma, Z., Huebsch, N., Koo, S., Mandegar, M. A., Siemons, B., Boggess, S., Conklin, B. R., Grigoropoulos, C. P., & Healy, K. E. (2018). Contractile deficits in engineered cardiac microtissues as a result of MYBPC3 deficiency and mechanical overload. Nature Biomedical Engineering, 2(12), 955–967. https://doi.org/10.1038/s41551-018-0280-4
3. Browne, S., Hossainy, S., & Healy, K. (2020). Hyaluronic acid macromer molecular weight dictates the biophysical properties and in vitro cellular response to semisynthetic hydrogels. ACS Biomaterials Science & Engineering, 6(2), 1135–1143. https://doi.org/10.1021/acsbiomaterials.9b01419
4. University of Minnesota. (2020). Expected societal impacts of the new National Science Foundation (NSF) Engineering Research Center (ERC) for Advanced Technologies for the Preservation of Biological Systems (ATP-Bio) [Infographic]. National Science Foundation Engineering Research Center for Advanced Technologies for the Preservation of Biological Systems. https://www.atp-bio.org



Quantifying Within-Day Abstract Skill Learning and Exploring its Neural Correlates

BY GABRIELLE SHVARTSMAN

GRADUATE STUDENT MENTOR: ELLEN ZIPPI

RESEARCH SPONSOR (PI): JOSE CARMENA

ABSTRACT

Strengthened corticostriatal connections, particularly between the motor cortex and the dorsal striatum, emerge during abstract skill learning when studied on a multi-day scale. While behavioral data indicate that learning happens within-day as well, neural changes and adaptations over this timescale are not as well-studied. Here, a non-human primate (NHP) subject learned to control a brain-machine interface (BMI), initially with many single-day decoders. Novel behavioral analyses were used on this data to quantify within-day learning. The prefrontal cortex (PFC) and striatum were hypothesized to be responsible for changes in the brain underlying this learning. Using a coherence metric, the manner of communication between these brain regions was also studied. Though the results of the coherence calculations were inconclusive, the behavioral analyses support the presence of the hypothesized within-day learning and pave the way for further exploration of its underlying neural circuitry. This advances the field towards a deeper understanding of how neural changes on a day-to-day basis ultimately bring about long-term learning.

Major, Year, and Department: Molecular and Cell Biology: Neurobiology; Undergraduate Senior; Department of Molecular and Cell Biology and Department of Electrical Engineering and Computer Science

INTRODUCTION

Skill learning is at the center of everyday life and results from neural plasticity, or changes in brain circuitry. Learning a new skill can be characterized by the type of skill, such as physical or abstract, and by the length of the learning timescale. A physical skill is one that requires practiced, coordinated movement to achieve a set goal. For example, learning to swing a tennis racquet is driven by the goal of hitting a ball over the net, and as one progresses from uncoordinated arm movements to consistently accomplishing this goal, the brain adapts in order to learn this physical skill. The same goal-to-skill trajectory is present when learning an abstract skill, which does not require physical movement, such as mastering a new chess strategy or learning to categorize songs by genre. These skill learning processes have prompted neuroscientists to investigate which brain circuits are involved in and changing as a result of physical and abstract skill learning.1

Studies imply that corticostriatal circuits play a major role in physical skill learning.2,3,4 Corticostriatal circuits are functional connections between areas of the cortex, particularly regions of the frontal lobe such as the premotor cortex (PMd), primary motor cortex (M1), and prefrontal cortex (PFC), and the deeper brain structures that make up the striatum, specifically the caudate nucleus (Cd). Setting up an experiment to test whether these same circuits are critical in abstract skill learning requires a control for predisposition to an abstract skill across test subjects. Brain-machine interfaces (BMIs), one of many methods used to study neural plasticity during learning, provide such a control. BMIs involve learning to control neuroprosthetic actuators, such as the pitch of a tone or the position of a cursor on a screen, with only neural activity; this guarantees the learned strategy to be a de novo skill. A closed-loop BMI takes in a selected group of neurons as inputs to a decoding algorithm, which subsequently calculates and updates actuator movement. This movement results in visual or auditory feedback for the subject, ultimately enhancing the fine-tuning of the neural inputs and enabling abstract skill learning (Figure 1A). The skill is novel to the brain because the subject is causing movement of physical actuators in the absence of motor execution, a type of skill that is impossible to have been exposed to prior to a BMI implant, which allows experimenters to study the learning of a skill that is necessarily distinct from the subject’s prior skillset. This classic BMI learning paradigm has been shown to result in the emergence of a stable neuronal ensemble, a group of neurons involved in the same neural computation, linked with skill proficiency.5

Analysis of BMI abstract skill learning has shed light on many functional changes in the brain. In one of the first BMI experiments in non-human primates (NHPs), neural units in PMd and M1 showed increased predictive power in the decoder, or algorithm that translated the neural signals into the behavioral output, as the subject learned.6 Predictive power measures the correlation between firing from an individual neuron and the behavioral task. For example, if a neuron has very low predictive power, that implies that the neuron is not very involved in the current task, since its firing pattern cannot predict what is happening in the task. Both direct units, or neurons which the decoder was trained on, and indirect units, or the neurons that reside in the motor regions but were not used as inputs to the decoder, exhibited changes.7,8



Figure 1: BMI and task schematics. A) The closed-loop BMI used active PMd and M1 units as inputs to the decoder. A Kalman filter was used as the decoder, with CLDA added for two-system adaptation. The decoder outputs the next predicted cursor location (red circle), resulting in visual feedback as the subject moves the cursor from the center target (yellow) to the peripheral target (blue). The light gray circles represent other possible peripheral target locations, but only one appears at a time. An apple juice reward is administered for successful task trials. B) A center-out task where the subject must move the cursor (red) from the center target (yellow) to one of eight peripheral targets (only one is shown in blue). During manual control, the subject is free to move within the 2-D constraints of the Kinarm (dark gray rectangle), while during BMI control, the subject’s arm does not affect the movement of the cursor and the neural inputs must adapt to move the cursor. A timeline of the task is also shown.

This breadth of accessible knowledge makes the BMI learning paradigm a valuable technique for studying abstract skill learning.

In addition to physical or abstract, skill learning can also be categorized into short-term and long-term. Long-term learning in the context of NHP BMI is the period of time until proficiency, defined by a plateau of accuracy in trial completion, is achieved—usually on the order of days. Thus far, only the long-term emergence of functional connectivity, or correlation of activity between regions during task execution, in the brain has been studied.9 This raises the question of which connections, if any, arise within just one day of BMI learning, and whether these connections are incremental adjustments that help build the long-term connections we see.

Past studies have investigated which brain regions play a part in BMI abstract skill learning. PMd and M1 are regions specific to motor planning and execution, so it is expected that neurons in these regions would play a part in learning a movement-related BMI task. Studies in rodents have examined which brain regions may be responsible for causing motor area changes. One study has shown increased coherence—an indirect measure of communication—between M1 and the dorsal striatum, a part of the striatum involved in physical skill learning, over the course of BMI learning.1 PFC has also been implicated in abstract skill learning. The PFC is known to play a major role in goal-directed planning, including in operant conditioning, wherein the subject learns to complete a task in return for a reward.10,11 Correctly guiding an actuator to a target in BMI learning is one such form of operant conditioning, since the subject is rewarded for accomplishing a goal. The lateral prefrontal cortex (LPFC) and Cd have been implicated in learning abstract associations, such as those that develop between actuator movement and reward as a result of operant conditioning.12


Although frontostriatal communication between LPFC and Cd during abstract skill learning has been established, the dynamics of this communication are still muddled. The timescale of emergence of changes throughout associative learning has been shown to vary across these two regions: the Cd responds to reward associations more rapidly, while the LPFC and the frontal cortex respond more slowly.12,13 Based on these previous studies, we hypothesize that a more rapid, within-day circuit develops between the Cd and the frontal cortex, whether LPFC or M1, and that a slower, across-day circuit later develops within the frontal cortex, in addition to a strengthening of the connection between the frontal cortex and Cd.

In this paper, we propose a novel method of measuring within-day abstract learning in an NHP BMI task by comparing predicted and actual cursor trajectories. Using cursor trajectory information provides more information than the existing fraction correct and time to target metrics; while those rely heavily on only successfully completed trials, the cursor trajectory can incorporate all initiated trials, since the reward time is not necessary for its calculation. Using our new method, we then compare potential short-term neural correlates of within-day behavioral trends with those of longer-term neural communication emergence. We hypothesize that the trajectory ratio will further solidify the presence of within-day learning and that neurons in LPFC and Cd will modulate, or have a controlling influence on, this learning, as demonstrated by increased coherence.

METHODS

NHP implant

One adult male rhesus macaque (Macaca mulatta), Y, was used in this study.



A 124-channel large-scale semi-chronic microdrive (Gray Matter Research, Bozeman, MT) was implanted in the subject’s left hemisphere. Unlike many traditional recording implants, each electrode in the microdrive could be lowered independently, permitting simultaneous recording across regions of different depths. This feature was especially important for this study, as Cd is a deep structure whereas the cortical areas are closer to the surface. Microelectrodes were successfully lowered into M1; PMd; dorsolateral prefrontal cortex (dlPFC), the dorsal subsection of the LPFC; Cd; and putamen (Pu) (Figure 2). The electrodes were capable of recording both spiking and local field potential (LFP) data. Spiking data refers to the action potentials of individual neurons, recorded as all-or-nothing signals in a time series. LFP data refers to the electric potential changes in the space surrounding an electrode, with likely some overlap in signals between neighboring electrodes at the same depths. All procedures and experiments were conducted in compliance with the National Institutes of Health Guide for Care and Use of Laboratory Animals and were approved by the University of California, Berkeley Institutional Animal Care and Use Committee.

BMI

In this study, we used a closed-loop BMI. Spiking data from PMd and M1 neurons were used as input to a decoding algorithm (“decoder”) that controlled actuator movement. In this task, the actuator was a cursor on a screen in front of Y, the experimental subject. The movement of the cursor, along with a timed juice reward, provided feedback to the subject, driving changes in neural circuitry (Figure 1A). This ultimately resulted in neuroprosthetic abstract learning.

A Kalman filter was used as the decoder. A Kalman filter incorporates a history of observations with the current state and outputs a prediction of the next state, updating previous measurements using Bayes’ rule and calculating the prediction using the law of total probability.14,15 In this application, the decoder was fed past and present position, velocity, and motor unit spiking information as inputs in order to output the next position of the cursor.
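To make the predict/update cycle concrete, here is a minimal sketch of a position-velocity Kalman filter driven by binned spike counts. It is an illustration only, not the study’s fitted decoder: the transition, observation, and noise matrices (A, C, W, Q) and the unit count are placeholder assumptions that would in practice be estimated from the manual-control seed data.

```python
import numpy as np

# Sketch of a Kalman-filter cursor decoder. State s = [x, y, vx, vy];
# observation z = one bin of spike counts from the direct units.
dt = 0.1
A = np.eye(4); A[0, 2] = A[1, 3] = dt   # movement model: position += velocity * dt
W = np.eye(4) * 1e-3                    # process-noise covariance (assumed)

n_units = 32                            # number of direct units (illustrative)
rng = np.random.default_rng(0)
C = rng.normal(size=(n_units, 4))       # observation model: E[z] = C @ s (would be fit)
Q = np.eye(n_units)                     # observation-noise covariance (assumed)

def kalman_step(s, P, z):
    """One cycle: predict from the movement model, then update from the spikes."""
    s_pred = A @ s                      # prior state estimate
    P_pred = A @ P @ A.T + W            # prior uncertainty
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + Q)  # Kalman gain
    s_new = s_pred + K @ (z - C @ s_pred)                   # Bayes-rule correction
    P_new = (np.eye(4) - K @ C) @ P_pred
    return s_new, P_new

s, P = np.zeros(4), np.eye(4)
z = rng.poisson(2.0, size=n_units).astype(float)   # one bin of spike counts
s, P = kalman_step(s, P, z)             # s[:2] is the decoded cursor position
```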

Upon successfully moving the cursor to complete the task on the screen, the subject was rewarded with apple juice. The updated actuator position and velocity and the motor inputs were then fed back into the decoder, resuming the learning circuit. Each day, the decoder was seeded with approximately 10 minutes of manual control data, from either earlier that same day or a previous day, in which the subject used physical arm movement to complete the task. Then the subject completed trials of the same task with BMI control of the cursor. The manual control seeding period each day served as a control for the BMI control trials, since the behavior of the brain during a physical movement task has been well-characterized. The subject’s arm was restricted during BMI control trials to help mediate a context switch between the manual and BMI tasks. Closed-loop decoder adaptation (CLDA) was applied for 2–10 minutes to the initial decoder in order to make the decoder easier to learn, and then the subject worked on BMI control for two to four 30-minute sessions. With CLDA, not only is the subject’s brain adapting to the decoder, but the decoder is also adapting based on the direct motor inputs it receives. This strategy allows for more rapid learning of effector control via a tag-team effort of the two systems.16 While the subject attempted BMI control for approximately 60 days, only the days with a substantial number of completed trials (> ~30), or where the subject’s proficiency was visually apparent to the researchers, were analyzed. This resulted in a dataset of 19 days, including both consecutive and non-consecutive series of days.

Behavioral task

Y was trained to perform a self-initiated, two-dimensional, center-out task (Figure 1B). During the manual iteration of the task, the subject’s right arm rested in a Kinarm (BKIN Technologies, Kingston, ON) exoskeleton, where the shoulder and elbow were restricted to movement in the horizontal plane. First, a center target appeared on the screen, prompting trial initiation. The subject needed to hold his on-screen cursor in the center target for a prespecified hold time (0.1–0.5 s) in order to initiate a trial.

Figure 2: Schematic of the 124-channel large-scale semi-chronic microdrive from Gray Matter Research. The inner cavity of the microdrive outlines the brain regions that we are capable of recording from in Y. Each dark gray dot in the center represents one of the 124 electrodes. The bottom illustrates four 32-channel connectors that transmit the electrode signals.



Upon initiation, one of eight peripheral targets, evenly distributed in a circle (radius = 6.5 cm) around the center target, would appear, and the subject would move the cursor to the peripheral target and complete a hold (0.1–0.5 s) in order to successfully complete the trial. Upon trial completion, the subject received an apple juice reward. In the months prior to BMI trials, Y was trained to complete this task using manual control. Once the subject was proficient in manual control, he began to learn to use BMI control.

Data Analysis

While the BMI decoder used spike data as input, data analysis for this study was conducted exclusively on local field potential (LFP) data. In all data analyses, all 30-minute BMI sessions post-CLDA were concatenated and split into the first, middle, and last third of trials. In this analysis, early coherence is defined as the first third of trials within a day, while late coherence is defined as the last third. Analysis was conducted to evaluate behavioral change and coherence between brain regions.

A. Behavioral: Three measures were used to quantify behavioral changes during the task.

1A. Fraction correct: Rewarded trials were binned into groups of 20, and the fraction correct was calculated for each bin by dividing the number of rewarded trials by the number of self-initiated trials. Trials were considered to be self-initiated if the center hold was successfully completed, indicating that trial initiation was intentional.

2A. Time to target: Timestamps of task events were used to calculate the average time to reach a peripheral target. Self-initiated trials were binned into groups of 20 trials, and the mean time to reach the peripheral target was calculated as follows over the $N$ trials in each bin that reached the target, where $t_{\mathrm{entered}}$ is the time at which the peripheral target was entered and $t_{\mathrm{appeared}}$ is the time at which the peripheral target first appeared:

$$\text{time to target} = \frac{1}{N} \sum_{i=1}^{N} \left( t_{\mathrm{entered}}^{(i)} - t_{\mathrm{appeared}}^{(i)} \right)$$
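As an illustration of how these two bin-based metrics could be computed, the sketch below assumes a chronologically ordered list of self-initiated trial records with hypothetical field names (rewarded, t_appeared, t_entered); it is a minimal reading of the definitions above, not the study’s analysis code.

```python
def binned_fraction_correct(trials, rewarded_per_bin=20):
    """Bin rewarded trials into groups of 20; for each bin, divide the rewarded
    count by the number of self-initiated trials the bin spans."""
    fractions, rewarded, total = [], 0, 0
    for t in trials:                         # self-initiated trials, in order
        total += 1
        rewarded += bool(t["rewarded"])
        if rewarded == rewarded_per_bin:
            fractions.append(rewarded / total)
            rewarded, total = 0, 0
    return fractions

def binned_time_to_target(trials, bin_size=20):
    """Mean (t_entered - t_appeared) per bin of 20 self-initiated trials,
    over the trials in the bin that actually reached the peripheral target."""
    means = []
    for i in range(0, len(trials), bin_size):
        times = [t["t_entered"] - t["t_appeared"]
                 for t in trials[i:i + bin_size] if t.get("t_entered") is not None]
        means.append(sum(times) / len(times) if times else float("nan"))
    return means
```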

3A. Trajectory ratio: This method was designed as a way to measure learning in the absence of many successful, rewarded trials. For each target, a subset of all available cursor space was defined as its region, in which cursor movement was still considered relevant to the trajectory for that particular target (Figure 3). The region was defined as an eighth of a circle with radius equal to the reach radius (6.5 cm) plus twice the target radius (1.5 cm). The trajectory ratio metric is the fraction of the trial that the cursor spends within the target’s region:

$$\text{trajectory ratio} = \frac{\text{time spent within the target's region}}{\text{total trial time}}$$

Trials were separated by peripheral target and binned into groups of 10 trials. The mean of the time ratio was calculated across bins. The binned averages were then averaged across all targets, since Y struggled more with some targets than others.
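Under the reconstruction above, the per-trial computation reduces to simple geometry: each target’s region is a 45° wedge (an eighth of a circle) of radius 6.5 + 2(1.5) = 9.5 cm centered on the target’s direction, and the ratio is the fraction of equally spaced cursor samples that fall inside it. The sketch below is illustrative; the names and sampling convention are assumptions.

```python
import numpy as np

REGION_RADIUS = 6.5 + 2 * 1.5    # reach radius plus twice the target radius, in cm
WEDGE_HALF_ANGLE = np.pi / 8     # an eighth of a circle spans 45 degrees

def trajectory_ratio(xy, target_angle):
    """Fraction of cursor samples inside the target's region for one trial.

    xy: (n_samples, 2) cursor positions relative to the center target, sampled
        at a fixed rate, so a fraction of samples is a fraction of trial time.
    target_angle: direction of the peripheral target, in radians.
    """
    r = np.hypot(xy[:, 0], xy[:, 1])
    # Signed angle between each sample and the target direction, wrapped to (-pi, pi].
    dtheta = np.angle(np.exp(1j * (np.arctan2(xy[:, 1], xy[:, 0]) - target_angle)))
    inside = (r <= REGION_RADIUS) & (np.abs(dtheta) <= WEDGE_HALF_ANGLE)
    return inside.mean()
```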

Figure 3: Representative data for the trajectory ratio metric. The decoder from April 10, 2020 was chosen because it showed results that were representative of the mean across days. A) The metric is effective in that it correctly returns a ratio of approximately 1 across manual control trials, which is expected, as Y is proficient at the task under manual control. In the BMI control trials, there is an upward trend, depicted by the BMI trials linear regression. B) Visual representation of how the calculations are made. Note that in the manual control trials, practically all of the time is spent within the circular and linear bounds. Most of the late BMI trials show noticeable improvement over early BMI. Early BMI here is classified as the first third of the trials, whereas late BMI is the last third.



B. Coherence: Coherence measures the degree of synchronization of oscillatory activity between brain regions. Synchronicity is estimated by taking into account the consistency between the amplitude and phase of two waveforms. While the existence of coherence indicates functional connectivity, changes in the value of coherence can indicate a strengthening of connectivity over time. When studied pairwise between units across regions, such an increase can be attributed to novel network connections forming over the time of study.17,18

For each region-to-region relationship examined, the early and late thirds of the trials were iterated over, with calculations done pairwise across electrodes from the two regions. In each trial, calculations were taken for two time blocks: 1 s before the reward and 1 s after the go cue. This is because just after the go cue, when the peripheral target appears, is the time period most likely to elicit activity from the decision-making and planning parts of the brain, dlPFC and caudate, while just before and during achievement of the reward is most likely to elicit activity from the parts involved in associative learning, such as the dlPFC. Pairwise field-field coherence was calculated using the same method and parameters as in Koralek et al. 2012:

$$C_{xy}(f) = \frac{\left| R_{xy}(f) \right|}{\sqrt{R_{xx}(f)\, R_{yy}(f)}}$$

where $x$ and $y$ represent one channel from each region, $R_{xx}$ and $R_{yy}$ are their respective power spectra, and $R_{xy}$ is the cross-spectrum between them.1 Pairwise calculations were done over a sliding window of width 0.5 s and step size 0.05 s across the indicated time block. For every region-to-region relation, these trial-averaged calculations were separated across four frequency bands: theta (4–8 Hz), alpha (8–12 Hz), beta (12–30 Hz), and an overlapping band that was studied in Koralek et al. 2012 (6–14 Hz). Once early and late coherence for each day was calculated, correlation with behavioral results was evaluated by plotting the difference between early and late coherence against the trajectory ratio slope for that day.
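A sketch of the sliding-window estimate follows, using the 0.5 s window and 0.05 s step from the text. scipy’s Welch-based magnitude-squared coherence stands in for the estimator above, and the sampling rate, segment length, and test signals are assumptions, since the exact spectral parameters are not given here.

```python
import numpy as np
from scipy import signal

def sliding_coherence(x, y, fs, win_s=0.5, step_s=0.05, band=(6.0, 14.0)):
    """Mean coherence in `band` for each window of LFP channels x and y."""
    win, step = int(win_s * fs), int(step_s * fs)
    out = []
    for start in range(0, len(x) - win + 1, step):
        f, cxy = signal.coherence(x[start:start + win], y[start:start + win],
                                  fs=fs, nperseg=win // 2)
        mask = (f >= band[0]) & (f <= band[1])
        out.append(cxy[mask].mean())
    return np.array(out)

fs = 1000                                    # assumed LFP sampling rate, in Hz
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 10 * t)          # shared 10 Hz rhythm drives coherence
x = shared + rng.normal(size=t.size)         # "channel" from region 1
y = shared + rng.normal(size=t.size)         # "channel" from region 2
coh = sliding_coherence(x, y, fs)            # one value per window, 6-14 Hz band
```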

RESULTS

Calculations that were averaged across days used 19 distinct days of recording data.

Figure 4: Behavioral analysis results. In all plots, the red range represents early within-day learning, while the blue range represents late within-day learning. A) There was no significant increase in mean within-day fraction correct. The linear regression line has slope = -4.17e-4 and a p-value of 0.412 (> 0.05) for the null hypothesis of a 0 slope. B) There was a significant decrease in mean within-day time to target. The linear regression line has slope = -0.705 and a p-value of 9.70e-6 (< 0.05) for the null hypothesis of a 0 slope. C) There was a significant increase in mean within-day trajectory ratio. The linear regression line has slope = 0.0159 and a p-value of 0.00312 (< 0.05) for the null hypothesis of a 0 slope.


Within-day Behavior

A linear regression was performed for the fraction correct, time to target, and trajectory ratio metrics. The p-values that follow represent the likelihood under the null hypothesis of a linear regression slope of 0 (no change). There was no significant increase in mean within-day fraction correct, with slope = -4.17e-4 and p = 0.412 (> 0.05) (Figure 4A). There was, however, significant change in both the time to target and trajectory ratio metrics. There was a significant decrease in mean within-day time to target, with slope = -0.705 and p = 9.70e-6 (< 0.05) (Figure 4B). There was a significant increase in mean trajectory ratio, with slope = 0.0159 and p = 3.12e-3 (< 0.05) (Figure 4C).

Within-day Coherence

Upon plotting the correlation of behavioral values with the difference between early and late coherence, the regions and frequency bands that significantly predicted behavior were: M1/PMd and Cd after the go cue in the alpha (p = 2.47e-3), beta (p = 4.88e-4), and 6–14 Hz (p = 1.02e-3) frequency bands (Figure 5). Interestingly, all of these have negative slopes, indicating an inverse relationship with trajectory ratio slope and thus with within-day learning.
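The behavior-coherence comparison reduces to one point per day (the within-day trajectory-ratio slope against the late-minus-early coherence change) followed by a linear regression; here is a minimal sketch with placeholder numbers, not the study’s data:

```python
from scipy import stats

# One point per day: x = trajectory-ratio slope, y = late - early coherence for
# one region pair and frequency band. The values below are illustrative only.
traj_slope = [0.010, 0.021, 0.005, 0.018, 0.012, 0.025]
coh_diff = [0.04, -0.02, 0.06, -0.01, 0.02, -0.03]

res = stats.linregress(traj_slope, coh_diff)
# p < 0.05 rejects the null of zero slope; a negative slope would indicate the
# inverse relationship with within-day learning reported in the text.
print(f"slope = {res.slope:.3f}, p = {res.pvalue:.3g}")
```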



Figure 5: Behavior-coherence correlation results. For each day, Coherence Diff. (late coherence minus early coherence) was plotted against the slope of the trajectory ratio for three region-to-region relations (direct-Cd, dlPFC-Cd, and direct-dlPFC), two events (reward and go cue), and four frequency bands (see legend). The regions and frequency bands that significantly predicted behavior were: M1/PMd and Cd after the go cue in the alpha (slope = -0.607, p = 2.47e-3), beta (slope = -0.817, p = 4.88e-4), and 6–14 Hz (slope = -0.576, p = 1.02e-3) frequency bands. The negative slopes imply an inverse relationship with learning.

DISCUSSION

Within-day neuroprosthetic learning is apparent across single-day decoders. The time to target behavioral metric showed a significant decrease, indicating improvement, when averaged over the within-day data. The implications of the new trajectory ratio metric are even more convincing, because the metric accounts for all self-initiated trials, even if they were not completed. This metric provides much more information in early learning, when the subject may be initiating trials but unable to properly complete them. In contrast, the time to target metric can only use self-initiated rewarded trials by design, since its calculation depends on the timestamp of entering a peripheral target. That the trajectory ratio metric showed a significant increase in within-day BMI control, while capturing a near-perfect result in manual control, shows its effectiveness as a novel behavioral measure for short-term learning of the task.

We hypothesized that short-term modulation by Cd or dlPFC would be illuminated by changes in coherence. Past studies have indicated that both PFC and striatum were influenced by reward expectation and that reward information was encoded in increased beta band power in LFP.11,19 Furthermore, beta synchrony between the two regions has been shown to emerge during reward.20 However, dlPFC-Cd behavior-coherence analysis did not show significant modulation in the reward block. Other studies have found that motor cortex and striatum coherence in the 6–14 Hz band emerges over the course of long-term neuroprosthetic learning when time-locked to reward.1 In this within-day, short-term study, this same frequency range did not show significant modulation during reward. The lack of modulation in the reward block is contrary to what was expected, both from a reward conditioning perspective and from the literature on long-term neuroprosthetic learning.

The behavior-coherence analysis results demonstrated M1/PMd-Cd modulation in the alpha, beta, and 6–14 Hz frequency bands 1 s after the go cue, inverse to behavior. The inverse modulation may indicate that M1/PMd-Cd communication was very high at the beginning of each day, since getting reacquainted with the task may have required an especially high level of motor control, and then


lessened as the subject eased back into the behavior. However, given the lack of expected results in reward block coherence, the vast body of past literature on frontostriatal and corticostriatal modulation, and the similar methodology for analyses of both the go cue and reward blocks, it is likely that analysis parameters must be altered and the analyses repeated in both the reward and go cue blocks before any claims can be made on either. In particular, developing a filtering metric for determining which LFP channels to include in each region’s analyses could establish more specificity in coherence measurements. For unknown reasons, some LFP channels exhibited substantial noise on select days during recording. Since the coherence calculation is based on both phase and amplitude, high noise levels in several channels could greatly pollute mean coherence calculations across regions. Past methods developed for measuring the recording effectiveness of microelectrodes based on LFP signal-to-noise ratio could be adapted to gain clarity in future results.21

Another possibility for the absence of a PFC-Cd communication increase is that this emergence may occur on a longer timescale than predicted. Future studies will be done to measure PFC-Cd emergence in long-term, across-day neuroprosthetic learning. Additionally, repeating analyses with shorter early and late time periods could illuminate more time-specific coherence. Once within-day functional connectivity is better established, effective connectivity analysis between implicated regions will give even more insight. While functional connectivity establishes a correlation in signaling between brain regions A and B, effective connectivity adds the directionality information that functional connectivity lacks, determining whether the flow of information is from A to B or from B to A.9,17 Granger causality (g-causality) and transfer entropy are two measures that examine the direction of information flow and could be utilized in future analyses.22,23

Though the preliminary results of the functional connectivity analysis were inconclusive, the establishment and quantification of within-day abstract learning paves the way for further investigation of these relationships. With improved and additional metrics, identification of involved brain regions will be possible, and the effects of their within-region and cross-region communication on neuroprosthetic learning will be illuminated.



REFERENCES

1. Koralek, A. C., Jin, X., Long II, J. D., Costa, R. M., & Carmena, J. M. (2012). Corticostriatal plasticity is necessary for learning intentional neuroprosthetic skills. Nature, 483(7389), 331–335. https://doi.org/10.1038/nature10845
2. Yin, H. H., Mulcare, S. P., Hilário, M. R. F., Clouse, E., Holloway, T., Davis, M. I., Hansson, A. C., Lovinger, D. M., & Costa, R. M. (2009). Dynamic reorganization of striatal circuits during the acquisition and consolidation of a skill. Nature Neuroscience, 12(3), 333–341. https://doi.org/10.1038/nn.2261
3. Barnes, T. D., Kubota, Y., Hu, D., Jin, D. Z., & Graybiel, A. M. (2005). Activity of striatal neurons reflects dynamic encoding and recoding of procedural memories. Nature, 437(7062), 1158–1161. https://doi.org/10.1038/nature04053
4. Kimchi, E. Y., & Laubach, M. (2009). Dynamic encoding of action selection by the medial striatum. Journal of Neuroscience, 29(10), 3148–3159. https://doi.org/10.1523/JNEUROSCI.5206-08.2009
5. Ganguly, K., & Carmena, J. M. (2009). Emergence of a stable cortical map for neuroprosthetic control. PLOS Biology, 7(7), e1000153. https://doi.org/10.1371/journal.pbio.1000153
6. Carmena, J. M., Lebedev, M. A., Crist, R. E., O’Doherty, J. E., Santucci, D. M., Dimitrov, D. F., Patil, P. G., Henriquez, C. S., & Nicolelis, M. A. L. (2003). Learning to control a brain–machine interface for reaching and grasping by primates. PLOS Biology, 1(2), e42. https://doi.org/10.1371/journal.pbio.0000042
7. Ganguly, K., Dimitrov, D. F., Wallis, J. D., & Carmena, J. M. (2011). Reversible large-scale modification of cortical networks during neuroprosthetic control. Nature Neuroscience, 14(5), 662–667. https://doi.org/10.1038/nn.2797
8. Hwang, E. J., Bailey, P. M., & Andersen, R. A. (2013). Volitional control of neural activity relies on the natural motor repertoire. Current Biology, 23(5), 353–361. https://doi.org/10.1016/j.cub.2013.01.027
9. Park, H.-J., & Friston, K. (2013). Structural and functional brain networks: From connections to cognition. Science, 342(6158). https://doi.org/10.1126/science.1238411
10. Saito, N., Mushiake, H., Sakamoto, K., Itoyama, Y., & Tanji, J. (2005). Representation of immediate and final behavioral goals in the monkey prefrontal cortex during an instructed delay period. Cerebral Cortex, 15(10), 1535–1546. https://doi.org/10.1093/cercor/bhi032
11. Kobayashi, S., Nomoto, K., Watanabe, M., Hikosaka, O., Schultz, W., & Sakagami, M. (2006). Influences of rewarding and aversive outcomes on activity in macaque lateral prefrontal cortex. Neuron, 51(6), 861–870. https://doi.org/10.1016/j.neuron.2006.08.031
12. Pasupathy, A., & Miller, E. K. (2005). Different time courses of learning-related activity in the prefrontal cortex and striatum. Nature, 433(7028), 873–876. https://doi.org/10.1038/nature03287
13. Antzoulatos, E. G., & Miller, E. K. (2011). Differences between neural activity in prefrontal cortex and striatum during learning of novel abstract categories. Neuron, 71(2), 243–249. https://doi.org/10.1016/j.neuron.2011.05.040


14. Kim, Y., & Bang, H. (2018). Introduction to Kalman filter and its applications. Introduction and Implementations of the Kalman Filter. https://doi.org/10.5772/intechopen.80600
15. Wu, W., Shaikhouni, A., Donoghue, J. R., & Black, M. J. (2004). Closed-loop neural control of cursor motion using a Kalman filter. The 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2, 4126–4129. https://doi.org/10.1109/IEMBS.2004.1404151
16. Orsborn, A. L., Dangi, S., Moorman, H. G., & Carmena, J. M. (2012). Closed-loop decoder adaptation on intermediate time-scales facilitates rapid BMI performance improvements independent of decoder initialization conditions. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 20(4), 468–477. https://doi.org/10.1109/TNSRE.2012.2185066
17. Bowyer, S. M. (2016). Coherence a measure of the brain networks: Past and present. Neuropsychiatric Electrophysiology, 2(1), 1. https://doi.org/10.1186/s40810-015-0015-7
18. Fries, P. (2015). Rhythms for cognition: Communication through coherence. Neuron, 88(1), 220–235. https://doi.org/10.1016/j.neuron.2015.09.034
19. Kawagoe, R., Takikawa, Y., & Hikosaka, O. (1998). Expectation of reward modulates cognitive signals in the basal ganglia. Nature Neuroscience, 1(5), 411–416. https://doi.org/10.1038/1625
20. Zhang, Y., Pan, X., Wang, R., & Sakagami, M. (2016). Functional connectivity between prefrontal cortex and striatum estimated by phase locking value. Cognitive Neurodynamics, 10(3), 245–254. https://doi.org/10.1007/s11571-016-9376-2
21. Suarez-Perez, A., Gabriel, G., Rebollo, B., Illa, X., Guimerà-Brunet, A., Hernández-Ferrer, J., Martínez, M. T., Villa, R., & Sanchez-Vives, M. V. (2018). Quantification of signal-to-noise ratio in cerebral cortex recordings using flexible MEAs with co-localized platinum black, carbon nanotubes, and gold electrodes. Frontiers in Neuroscience, 12. https://doi.org/10.3389/fnins.2018.00862
22. Barnett, L., Barrett, A. B., & Seth, A. K. (2009). Granger causality and transfer entropy are equivalent for Gaussian variables. Physical Review Letters, 103(23), 238701. https://doi.org/10.1103/PhysRevLett.103.238701
23. Vicente, R., Wibral, M., Lindner, M., & Pipa, G. (2011). Transfer entropy—A model-free measure of effective connectivity for the neurosciences. Journal of Computational Neuroscience, 30(1), 45–67. https://doi.org/10.1007/s10827-010-0262-3

ACKNOWLEDGEMENTS

Thank you to Ellen Zippi for being a fantastic mentor and a phenomenal resource. Thank you to Maki Kitano, Paul Botros, and Emanuele Formente for helping with the data collection process, and to the veterinary staff for all of their assistance. Thank you to Jose Carmena for welcoming me into the lab and allowing me to embark on this project.



Coral Cover and Algae Growth Along a Water Quality Gradient in Moorea, French Polynesia

BY SAVANNAH STURLA

RESEARCH SPONSOR (PI): BRENT MISHLER

ABSTRACT

As climate change and anthropogenic perturbations cause dramatic shifts in coral reef ecosystems, it is of increasing importance to understand corals’ tolerance, response, and adaptation to a combination of stressors. Nutrient enrichment increases the sensitivity of corals, making them more prone to bleaching from thermal stress. In Moorea, French Polynesia, Cook’s Bay is more developed and potentially more impacted by nutrient runoff from sewage and agriculture than its neighbor, Opunohu Bay. To assess the ecological gradient and level of human disturbance along these two central bays, this study surveyed coral diversity by genus, percent coral cover, percent bleached coral cover, and macroalgae cover, and monitored nutrient-treated coral transplants at five sites surrounding the two bays. The site in between the two bays was the most ecologically diverse and rich, differing significantly from the Cook’s Bay sites in its greater average total coral cover. When Pocillopora verrucosa fragments, some of which had been experimentally treated with a nutrient diffuser at a different field site prior to this study, were transplanted to the varying environments at these sites, treated transplants experienced more bleaching than controls. This study supports previous research surrounding anthropogenic nutrient enrichment and its effects on coral reef ecosystems, while exploring the complex interactions among the various factors that impact coral resilience in an isolated island barrier reef.

Major, Year, and Department: Environmental Science; Undergraduate Senior; Department of Environmental Science, Policy, & Management

INTRODUCTION

As anthropogenic climate change alters conditions in natural environments via rising global average temperatures and ocean acidification, species distributions, functional traits, and biodiversity are threatened within ecosystems.1 Modern rates of climate change are more severe than any previously recorded in global history.2 Under such dramatically shifting environmental conditions, certain organisms express a greater sensitivity to climate stress due to a combination of other environmental factors.3 As a result, competitive interactions and spatial partitioning shift accordingly.4 Disproportionate abilities to tolerate a combination of stressors are also expressed within a single species.5 Small islands are especially vulnerable to climate change, in conjunction with such human disturbances, due to their isolation, exposure to unpredictable weather, and dependence on the surrounding ecosystems.6 Coral reefs are culturally, ecologically, and economically important ecosystems in small islands. Coral reefs make up the most productive and biodiverse marine ecosystems, providing a habitat for more than a quarter of all marine species.7


In addition, coral reefs are important in the atmospheric carbon cycle and protect coastlines from flooding and erosion.8,9 They also hold important economic value for coastal populations, as goods derived from coral reef ecosystems are estimated to be worth over $20 trillion U.S. dollars annually.7 While environmental fluctuations are a normal feature of the coral reef environment, drastic increases in ocean temperature and ocean acidification place stress on coral reefs, causing increased coral bleaching.10 Although coral bleaching events occur naturally and corals can recover their photosynthetic endosymbionts, bleaching events are becoming abnormally frequent and devastating to coral reefs due to climate change. In combination with local stressors, these events can result in both decreased live coral cover and decreased coral diversity.11 Despite their economic, environmental, and cultural importance, coral reefs have recently faced a tremendous amount of physiological stress from a rapidly changing climate in conjunction with human perturbation.10 Other environmental conditions can increase susceptibility to bleaching from thermal stress. One important factor is nutrient enrichment from anthropogenic sources in these otherwise oligotrophic (low nutrient) coral reef environments.3



Additionally, dissolved inorganic nutrients such as nitrates, nitrites, and phosphates can impede coral growth and functioning, as well as increase the severity of multiple diseases in corals.12,13,14 At a local scale, with decreased coral cover on predominantly conglomerate rock barrier reefs, coral-algal shifts are a growing concern: with increased coral bleaching and disease, macroalgae colonizes dead coral as a substrate and can dominate over coral recovery.15 Still unknown, however, is whether corals can adapt to non-oligotrophic environments and how quickly coral reefs can recover from the ramifications of climate change. The coral reef ecosystems of Moorea, French Polynesia, provide a unique opportunity to investigate coral resilience. In particular, the two major bays on the north side of the island, Cook's (Paopao) Bay and Opunohu Bay, vary greatly and provide semi-natural experimental conditions. The differences in development surrounding each bay expose the coral reefs to different environments. Although Opunohu Bay has a nearby coastal shrimp farm and agricultural school, Cook's Bay is more developed, with more housing, pineapple plantations, and watershed pollution.16 These differences affect nutrient inputs to the nearby marine ecosystems via agriculture and sewage. Previous studies have compared the coral communities in these two bays, examining the greater presence of Porites trematodiasis, a disease specific to Porites corals caused by a parasitic flatworm that reduces growth and reproduction, in Cook's Bay in comparison to Opunohu Bay.17,18 Another study investigated the water discharge and suspended sediments in each bay, yet it was conducted 27 years ago, when land development in the surrounding areas was much lower.19 These differences provide a way to investigate the impacts of different nutrient inputs on species distribution and the state of the corals. This study aims to understand the role of nutrient inputs in coral recovery following bleaching. This was accomplished by evaluating coral recovery, coral and algae cover, and coral diversity on the barrier reefs between Cook's Bay and Opunohu Bay after a recent bleaching event in May of 2019. Location relative to land development and sources of nutrient pollution was used to make predictions about differences in water quality in the field surveys and transplant studies. Previous work has suggested that water quality is poorer in Cook's Bay than in Opunohu Bay and relatively intermediate in between the two; therefore, I expected Cook's Bay to have less abundant and less diverse coral cover in comparison to Opunohu Bay. Additionally, I hypothesized that the visual severity of bleaching and the macroalgae cover relative to coral cover would be greater in Cook's Bay. Finally, I predicted that coral transplants placed in Cook's Bay would exhibit poorer survival and growth in comparison to coral transplants in Opunohu Bay, but that those with a history of nutrient exposure might be more tolerant of environmental changes.

METHODS

Study site. This study surveyed sites on the east and west sides of Opunohu Bay and Cook's (Paopao) Bay, along with an intermediate site in between the two bays, in Moorea, French Polynesia. The study sites were examined from October 8th, 2019 to November 12th, 2019. GPS coordinates were recorded at the start of each transect using the phone application Altimeter (Table A1, Appendix A). The various sites were denoted respectively as Opunohu Bay site 2, Opunohu Bay site 1, Cook's Bay site 1, Cook's Bay site 2, and the Hilton site (Figure 1).

Figure 1. Locations used in this study in Moorea, French Polynesia. Yellow triangles represent coral transplant locations, while the red dots represent transect locations. Specific coordinates can be found in the Appendix (Table A1 and Table A2, Appendix A). CB1 is Cook's Bay site 1, CB2 is Cook's Bay site 2, H is the intermediate/Hilton site, OB1 is Opunohu Bay site 1, OB2 is Opunohu Bay site 2. Map was created in ArcGIS.20

Coral and algae: Survey methodology. A total of 25 survey locations, five at each site, were selected randomly. Two-stage sampling was used at each site: a ten-meter transect tape was first set, and data were then collected at every meter along the transect using a 0.5 x 0.5 meter square quadrat. The quadrat was held at bent arm's length while snorkeling at the water surface. The quadrats were submerged approximately 0.5 meters under the water surface and assessed from above it. Quadrats were divided into 25 smaller squares using fishing line, with each square representing four percent cover. Approximate cover of coral and algae was assessed using these squares. Because of the three-dimensional structure of the reef, the cover per quadrat does not add up to 100 percent solely on the basis of the 25 surface-level squares. Percent cover was recorded as approximate percentages in three dimensions, examining coral coverage in spaces extending downward in addition to the coverage visible from above. The quadrat square percentages were used to measure corals both parallel and perpendicular to the water surface, throughout smaller crevices in the entire column of space below the quadrat. Fleshy macroalgae cover was also assessed using the same method within the same quadrats. Only the percent cover of Turbinaria ornata and Sargassum muticum was recorded (Figure A14, Appendix A). As a result of these quantification methods, coral and algae percent coverage do not sum to 100 percent.

Coral and algae: Genera identification. Corals were identified by genus, using identification references and Moorea coral resources from Dr. Peter J. Edmunds with the California State University Northridge Moorea Coral Reef Long Term Ecological Research Site (Becker, D., personal communication, October 22, 2019).21



Within each genus observed, corals were categorized as 'normal,' 'partially bleached,' or 'bleached.' Normal coral cover was considered to be corals with enough zooxanthellae symbionts to have a visible color. Partially bleached corals were identified as any coral that showed bleaching (Figure A16, Appendix A), typically around the edges, yet still retained notable coloring from their symbionts. Bleached corals were considered to be corals that were fully bleached, with no detectable coloring (Figure A17, Appendix A). Photos showing examples of corals in each category were taken using a Nikon Coolpix camera.

Coral and algae: Statistical analysis. The Shannon-Wiener index, Simpson index, and a richness count (the number of genera present) were calculated in R as implemented in RStudio at the transect level and averaged to get indices for each site.22,23,24,25,26,27,28 These indices were chosen because they are very commonly used to quantify ecological diversity, yet they emphasize different aspects of measuring diversity. The Shannon-Wiener index weighs species richness and evenness (the distribution of species), while the Simpson index calculates the likelihood of different species occurring and is more impacted by the most dominant species. The equations for the Shannon-Wiener index (H') and the Simpson index (D), respectively, are as follows in their standard forms, where p_i is the proportion of total cover belonging to the i-th genus and S is the number of genera observed:29,30

H' = -\sum_{i=1}^{S} p_i \ln(p_i)

D = 1 - \sum_{i=1}^{S} p_i^2
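For illustration, a minimal R sketch of this calculation using the cited vegan and dplyr packages, assuming a hypothetical data frame in which each row is a transect, each column is a coral genus holding its percent cover, and a separate vector labels the sites (all object and column names here are illustrative, not from the original analysis):

library(vegan)   # community ecology package for diversity indices
library(dplyr)

# 'cover' holds one row per transect and one column per coral genus;
# 'site' is a vector of site labels (CB1, CB2, H, OB1, OB2) for each row.
transect_indices <- data.frame(
  site     = site,
  shannon  = diversity(cover, index = "shannon"),  # H' = -sum(p_i * ln(p_i))
  simpson  = diversity(cover, index = "simpson"),  # D = 1 - sum(p_i^2)
  richness = specnumber(cover)                     # number of genera present
)

# Average the transect-level indices to obtain one value per site, as in Table 1.
transect_indices %>%
  group_by(site) %>%
  summarise(shannon  = mean(shannon),
            simpson  = mean(simpson),
            richness = mean(richness))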

The relationship between average coral cover and average macroalgae cover per transect was quantified using a linear regression model.23,24,25,31,32 Pairwise least-square means comparisons were used to compare sites for relationships with average macroalgae cover per transect.33,34,35,36 A linear mixed model was used to test whether the total average coral cover per transect varied by site location, utilizing pairwise least-square means comparisons to compare sites.33,37,38,39,40 To test whether the proportion of the transect averages of bleached and partially bleached coral cover to the total coral cover varied by site location, a generalized linear model with a quasibinomial family was used, with pairwise least-square means comparisons to compare and contrast sites.23,24,25,31,32,33,34,35,36,37,38
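As a sketch of how these models can be specified with the cited packages (lme4, lmerTest, emmeans), assuming hypothetical data frames 'quadrats' (one row per quadrat observation) and 'transects' (one row per transect, holding per-transect averages); the variable names and the random-effect structure shown are illustrative assumptions, not the exact specification used in this study:

library(lme4)      # linear mixed-effects models
library(lmerTest)  # adds p-values for lmer fixed effects
library(emmeans)   # estimated marginal (least-squares) means

# Linear regression: average coral cover vs. average macroalgae cover per
# transect (the Results report a slope of m = 0.02268 with p = 0.8062).
m_reg <- lm(mean_coral ~ mean_algae, data = transects)
summary(m_reg)

# Linear mixed model: does coral cover vary by site, with transect as an
# assumed random grouping factor for the quadrat-level observations?
m_cover <- lmer(coral_cover ~ site + (1 | transect), data = quadrats)
pairs(emmeans(m_cover, ~ site))  # pairwise least-square means comparisons

# Quasibinomial GLM: proportion of bleached cover out of total coral cover.
m_bleach <- glm(prop_bleached ~ site, family = quasibinomial,
                weights = total_cover, data = transects)
pairs(emmeans(m_bleach, ~ site))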


Table 1: Shannon-Wiener indices, Simpson indices, and richness counts by location, calculated by averaging indices computed at the transect level (CB1 is Cook's Bay site 1, CB2 is Cook's Bay site 2, H is the intermediate/Hilton site, OB1 is Opunohu Bay site 1, OB2 is Opunohu Bay site 2).

Site   Shannon-Wiener   Simpson's   Richness
CB1    1.485            0.627       3.2
CB2    2.427            0.875       5.6
H      2.540            0.876       7.6
OB1    2.493            0.884       6.4
OB2    2.516            0.885       7.0

Coral transplants: Nutrient enrichment. Finally, in addition to the field study exploring coral cover and dominant macroalgae cover, an experiment was also performed to explore resilience following nutrient enrichment. Six coral transplants were placed at each of the five study sites (Figure 1). These Pocillopora verrucosa fragments were obtained from Danielle Becker of the Northridge MCR LTER lab and were initially growing on the east side of Cook's Bay on the fore reef (Becker, D., personal communication, October 22 – December 6, 2019). Control Pocillopora verrucosa were located within the same fore reef ecosystem, but at a distance that prevented acute nutrient exposure. The treatment corals were treated with slow-release fertilizer pellets, Osmocote 19-6-12 (N-P-K ratio), via a small nutrient diffuser tube for 15 months, beginning in July 2018. The control fragments were collected on the 15th of October 2019 at 9:00 AM and the treatment fragments at 11:00 AM. Although the treatment coral fragments had a much higher symbiont count upon collection, laboratory testing, using PreSens equipment and an OXY-10, confirmed that the treatment coral fragments were underperforming. This was done by recording oxygen and temperature continuously over temperature treatments and then comparing photosynthesis, respiration, and calcification. In this follow-up study, three treatment fragments and three control fragments were planted at each site.

Coral transplants: Transplant methods. Prior to placing the fragments at the site locations, the corals spent two to three days in the water tables in the outdoor wet lab at the UC Gump Research Station. At the end of this period, each coral fragment was photographed on each side, with a ruler for scale, using the Nikon Coolpix camera. The coral fragments were transplanted to the various sites on October 17th and 18th of 2019. Each coral fragment, or "nub," was glued to a cement tile with a hole in its center using Gorilla hot glue and a Gorilla hot glue gun. These tiles were then hot-glued and zip-tied to a metal wire cage, which was labeled with a plastic tag, weighed down with a rock placed inside, and fitted with four ends of rope. At each site, the cages were set on the ocean floor near other Pocillopora corals and secured with the ropes to a nearby substrate of either dead coral or rock (Figure A20, Appendix A). The coral transplants were given one week of recovery time and then monitored approximately every week for observations.

Coral transplants: Statistical analysis. At the end of the experiment (approximately 3.5 weeks), the corals were collected on the 18th and 19th of November 2019.



They were then photographed again to compare color change and surface area in ImageJ.41 Using R in RStudio, the change in surface area was analyzed using a generalized linear model with a Gaussian family and ANOVA tests.22,23,24,25,38 The color change due to bleaching was quantified using the CoralWatch coral health chart index, and any growth or changes in size were assessed using ImageJ.41,42 The potential color and surface area changes were analyzed using generalized linear models with a quasibinomial family and a Gaussian family, respectively, together with ANOVA tests, to look for changes in color and in surface area both between sites and between the treatment and control replicates. Multiple generalized linear models were performed to assess significant differences between treatment and control groups overall and to assess differences between sites.
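A minimal sketch of these comparisons, assuming a hypothetical data frame 'transplants' with one row per coral fragment (column names are illustrative, and the color change is assumed here to be a CoralWatch index shift rescaled to the 0–1 interval so that a quasibinomial family applies):

# 'transplants' columns: color_change (rescaled to [0, 1]), area_change
# (pixels per centimeter, from ImageJ), treatment (control/enriched), site.
m_color <- glm(color_change ~ treatment + site,
               family = quasibinomial, data = transplants)
m_area  <- glm(area_change ~ treatment + site,
               family = gaussian, data = transplants)

anova(m_color, test = "F")  # F tests are appropriate for quasi-families
anova(m_area, test = "F")   # tests treatment and site effects on growth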

RESULTS

Coral and algae
Observed coral genera included Acropora, Pocillopora, Porites, Pavona, Montipora, Gardinoseris, Millepora, Siderastrea, Phymastrea, Leptoseris, Leptastrea, Psammocora, Acanthastrea, Lithophyllon, and Pleuractis (Figures A1–A13, Appendix A). Consistent across both the Shannon-Wiener and Simpson indices, Cook's Bay site 1 was the least diverse, containing a lower abundance, evenness, and richness of coral genera. The Hilton site was the most diverse using the Shannon-Wiener index, while Opunohu Bay site 2 was the most diverse using the Simpson index (Table 1). Opunohu Bay site 1 was also more diverse than the Hilton site when using the Simpson index. Cook's Bay site 1 was the least rich in genera, and the Hilton site was the richest. Apart from Cook's Bay site 1, the sites were fairly close in their diversity and richness indices, and the Hilton and Opunohu Bay sites varied in order of diversity depending on which of the two indices was used.

When per-transect averages of coral cover at all locations were analyzed, an almost flat line (m = 0.02268) reflected the relationship between average total coral cover and average macroalgae cover, which was not statistically significant (Figure 2, p = 0.8062).

Figure 2. Linear regression model of the average total coral percent cover against the average macroalgae percent cover per transect amongst all sites (p > 0.05, m = 0.02268).

The average macroalgae cover per location was 26.6% for Cook's Bay site 1, 20.1% for Cook's Bay site 2, 46.8% for the Hilton site, 39.3% for Opunohu Bay site 1, and 12.4% for Opunohu Bay site 2. In a pairwise comparison, the Hilton site differed notably from Opunohu Bay site 2 (p = 0.0234). Other, less dominant macroalgae types were noted but not measured: Dictyota was very commonly noted at Cook's Bay site 1, while Asparagopsis was observed at the Hilton site and the Opunohu Bay sites.

The average total coral cover, when all transects in a location were averaged, was 8.1% for Cook's Bay site 1, 14.2% for Cook's Bay site 2, 24.0% for the Hilton site, 17.8% for Opunohu Bay site 1, and 15.0% for Opunohu Bay site 2 (Figure 3A). The Hilton site was significantly different from Cook's Bay site 1 (Figure 3A, p = 0.0001). Opunohu Bay site 1 significantly differed from Cook's Bay site 1 (Figure 3A, p = 0.0431). Cook's Bay site 2 also significantly differed from the Hilton site (Figure 3A, p = 0.0379). The average percent coverage of bleached coral (both partially bleached and fully bleached) as a proportion of total coral cover was 26.7% at Cook's Bay site 1, 15.2% at Cook's Bay site 2, 12.9% at the Hilton site, 9.7% at Opunohu Bay site 1, and 10.1% at Opunohu Bay site 2. In average percent cover per transect, rather than as a proportion, the most bleached coral was at the Hilton site (Figure 3B). Differences between sites in the average proportion of bleached coral cover over total coral cover per transect were statistically insignificant (Figure 3B, p > 0.05).

Figure 3. Boxplot of the average total coral percent cover per transect by site (left). Cook's Bay site 1 and the Hilton site significantly differed from each other (p = 0.0001), Cook's Bay site 1 and Opunohu Bay site 1 significantly differed (p = 0.0431), and Cook's Bay site 2 and the Hilton site significantly differed (p = 0.0379). Boxplot of the averages of the total bleached (sum of partially bleached and bleached) coral cover per transect among different locations (right). The right panel uses averages rather than proportions to better show the spread of the transect cover differences. Differences between locations were insignificant (p > 0.05). The same site abbreviations are used as described in Table 1.

Coral transplants
When comparing the average color index change among the three replicates transplanted at each location, enriched fragments typically had a higher color change than control fragments, save those at Opunohu Bay site 1 (Figure 4). The difference in color change between control and treatment fragments was not significant (Figure 4, p = 0.2914). Location did not have a notable effect on color change either (Figure 4, p = 0.1525).

Figure 4. Average color change of treatment and control Pocillopora verrucosa, based on the CoralWatch color index, by site location.42 Nutrient-enrichment treated coral fragments typically bleached more dramatically than control corals. Site and treatment did not have a significant effect (p > 0.05).

Most coral fragments were partially bleached by the end of the experiment, with the exception of two (one unenriched sample in Opunohu Bay site 1 and one enriched sample in Cook's Bay site 1). One control fragment in Cook's Bay site 1 and one control fragment in Opunohu Bay site 1 were overgrown with algae, and several fragments had green hues or small patches of algae growth. The size of the coral transplants did not notably change over time. The average surface area change calculated in ImageJ was 0.681 pixels per centimeter for enriched corals and 0.919 pixels per centimeter for control corals.41 By site, the average surface area change was 0.712 pixels per centimeter for Cook's Bay site 1, 0.987 for Cook's Bay site 2, 0.464 for the Hilton site, 0.872 for Opunohu Bay site 1, and 0.966 for Opunohu Bay site 2. Neither treatment nor location was significant (p = 0.2316 and p = 0.3979, respectively).


DISCUSSION

Coral diversity and coral cover variation by site paralleled the nitrogen and phosphorus pollution and corresponding reef disturbance estimates made by Boutillier and Duane (2006).16 These types of nutrient pollution can worsen the negative effects of ocean acidification and temperature changes on corals by causing dysbiosis, an imbalance or impairment in their microbiomes, which may leave the corals immune-compromised.3,43 Therefore, the sites presumed to experience more nutrient pollution, from increased surrounding agricultural runoff and sewage from commercial development, may over time have less resilient and less diverse coral reefs. Yet, with the Simpson index, Opunohu Bay site 1 had more diversity than the Hilton site, and Opunohu Bay site 2 had the greatest diversity index number, because the Simpson index weighs species evenness and the more common species more heavily (Table 1). The Shannon-Wiener index is more sensitive to differences in diversity, but the Simpson index can better reflect the dominant species of a site. Thus, the Hilton site may have a greater variety of genera, but the Opunohu sites have a greater abundance of the dominant genera. Still, this trend supports this study's initial hypothesis, suggesting how freshwater, sewage, and runoff inputs from the two bays may be impacting coral. Certain corals tolerate nutrient inputs and sedimentation better because of their morphology and genetic variation, while others may respond by bleaching or partial mortality.29 For example, Porites and Montipora were the most commonly found genera at all sites. These encrusting and bouldering corals were present more often than branching coral formations, as branching corals tend to be more fragile.



However, in contrast, a study performed in 2015 on Moorea concluded that Acropora and Montipora were more susceptible to coral bleaching than Porites and Pocillopora.5 That study also found that 21% fewer Acropora colonies were bleached, proportionally, in comparison to 2007, suggesting that more resilient genotypes and small corals may be selected for over time.5 Yet, that study assessed reefs on the north coast of Moorea in six different reef zones, most of which were at greater depths and farther out from the barrier reef than in this research.5 The general trends found in Montipora are in alignment with this 2015 study, but other factors, including the difference in reef zones assessed and the time frames, are likely the cause of the differing findings around Porites, since large bommies of Porites seem to be more typically found on the flat barrier reefs than on the deeper sloping reefs.5

The Hilton site had a higher proportion of bleached coral than the Opunohu Bay sites. This finding supports the hypothesis that the proportion of bleached coral is higher in Cook's Bay but rejects the prediction that the Opunohu Bay sites would have more bleached coral than the Hilton site. These results suggest that the relationship between diversity, coral cover, and bleaching is more complex than hypothesized: one responding variable may not directly correlate with another, despite these factors being related. Bleaching may not be as large of a concern in areas that are more stable due to species richness and evenness.

The average total coral cover per transect had a slightly positive relationship with average total macroalgae cover when analyzed across all the transects. Because only the percent cover of the dominant macroalgae genera Turbinaria ornata and Sargassum muticum was recorded, other macroalgae genera such as Dictyota (Figure A15, Appendix A) and Asparagopsis were not represented. Since not all macroalgae types were recorded, the full scope of the coral-algae relationship, which is typically spatially competitive when associated with a perturbation such as bleaching in combination with reduced herbivory and nutrient enrichment, was not captured.15 Therefore, a relationship may have been clearer if all genera were measured. Despite the Hilton site having the greatest average total coral cover, it also had the greatest average macroalgae cover when averaged at the location level, contrary to the expected results. Similar observations were made at Opunohu Bay site 1. Although there was greater average coral cover and diversity at these sites, the large potential surface area for macroalgae to colonize in between and on top of large coral bommies may contribute to the increased macroalgae cover in comparison to the sites at Cook's Bay. The relationship between these factors is complex, is under current debate, and differs in relative importance among locations.15

The Pocillopora coral transplants did not significantly change in color overall from their respective starting points. Yet, the treatment transplants generally had a higher color change than the control fragments, which is in alignment with previous studies in which corals treated with nutrient loading were more susceptible to bleaching.44 Even though the nutrient-loaded coral fragments had a more dramatic color change, they also had an excessive, detrimental level of symbionts to begin with, according to Becker's initial assessment (Becker, D., personal communication, October 22 – December 6, 2019). More analysis regarding the transplants' photosynthetic, respiratory, and calcification functioning would need to be performed following a similar experiment.


Although the average surface area differences among both treatments and site locations (in pixels per centimeter) were not statistically significant, the average surface area change in the control coral fragments was larger than in the treatment coral fragments. This may suggest that the enriched coral fragments were still underperforming in comparison to the control corals, which may have been able to grow marginally. Yet, there is much room for error in this analysis, since there were only three replicates of each coral type and the photography angles were inconsistent between the before and after images. Since the coral fragments used were from the fore reef at a depth of approximately 10 meters, and were moved shortly after a major bleaching event, other factors may also have contributed to their stress response. The decrease in depth and the excess sunlight exposure on the barrier reef, in comparison to their native environment on the deeper fore reef, may have contributed to their final state because of the stress from the environmental change. The coral fragments also spent a duration of time in the water table in the wet laboratory, which may have impacted their state when they were placed out at the sites. The hot glue typically bleached the base of each coral fragment, which is why the color-change assessment focused on the overall color and excluded the bleached bases. These factors may have influenced the results as well.

The findings generally support the results of previous work suggesting that barrier reefs near inputs of nutrient runoff are impacted more than barrier reefs farther from such inputs. Studying the fringing reefs in these same areas, where the effects on bleaching and cover may be more dramatic, could be beneficial for comparison, as fringing reefs are closer to the potential pollutant sources and runoff entry points. Because the barrier reefs around this arc of Moorea receive different amounts of freshwater from streams and are geomorphologically different, there are additional factors to consider that were not measured in this study. Variation exists among these sites regardless of additional inputs from sewage and agriculture, due to differences in depth and formation. Although turbidity and total dissolved solids measurements were taken, the results were inconclusive and thus omitted, largely due to limitations in equipment and the duration of the study. Turbidity is generally regarded as harmful and associated with poorer water quality because it blocks the sunlight penetration needed for photosynthesis in marine organisms. However, other research suggests there may be benefits of turbid water to corals in tolerating higher temperature stress.45 In future studies, the relationship between turbidity, or total dissolved solids, and coral health requires further investigation. An additional indicator of stress to consider in future studies is mucus production, which is an important immune response to nutrient enrichment and sedimentation.46 This study took observations of mucus present on Porites within transects, but the results were mostly dependent on which sites had more Porites present (Figure A18, Appendix A). Many factors influence mucus production, such as depth, light exposure, temperature, and low tides.46 Since around five months had passed since the most recent bleaching event in Moorea, more obvious stress-induced mucus production might be observed if surveys were conducted closer in time to a bleaching event.

More research should also be done on the rates of photosynthesis and respiration following a long duration of recovery from a nutrient-loading environment.



Questions concerning bleaching tolerance, stress responses, adaptations, and resiliency following stress are important to the ability of coral reef ecosystems to tolerate changing climate conditions. Maintaining the diversity of coral reefs is important for the long-term stability of marine ecosystems. More research and greater public awareness of current scientific knowledge are needed to better understand the different indirect factors that degrade coral reefs, especially in combination with abiotic factors changing due to global warming, which together pose a risk to marine diversity, coastal protection, and human life.

APPENDIX The appendix to this article may be found online by navigating to: https://escholarship.org/uc/our_bsj/

REFERENCES

1. Kubicek, A., Breckling, B., Hoegh-Guldberg, O., & Reuter, H. (2019). Climate change drives trait-shifts in coral reef communities. Scientific Reports, 9(1), 3721. https://doi.org/10.1038/s41598-019-38962-4
2. Schleussner, C.-F., Rogelj, J., Schaeffer, M., Lissner, T., Licker, R., Fischer, E. M., Knutti, R., Levermann, A., Frieler, K., & Hare, W. (2016). Science and policy characteristics of the Paris Agreement temperature goal. Nature Climate Change, 6, 827–835. https://doi.org/10.1038/nclimate3096
3. Wiedenmann, J., D'Angelo, C., Smith, E. G., & Hunt, A. N. (2013). Nutrient enrichment can increase the susceptibility of reef corals to bleaching. Nature Climate Change, 3(2), 160–164. https://doi.org/10.1038/nclimate1661
4. Eurich, J. G., McCormick, M. I., & Jones, G. P. (2018). Direct and indirect effects of interspecific competition in a highly partitioned guild of reef fishes. Ecosphere, 9(8). https://doi.org/10.1002/ecs2.2389
5. Pratchett, M. S., McCowan, D., Maynard, J. A., & Heron, S. F. (2013). Changes in bleaching susceptibility among corals subject to ocean warming and recurrent bleaching in Moorea, French Polynesia. PLOS ONE, 8(7), e70443. https://doi.org/10.1371/journal.pone.0070443
6. Nicholls, R. J., Wong, P. P., Burkett, V., Codignotto, J., Hay, J., Woodroffe, C. D., Abuodha, P., Arblaster, J., Brown, B., Forbes, D., Hall, J., Kovats, S., Lowe, J., McInnes, K., Moser, S., Rupp-Armstrong, S., Saito, Y., & Tol, R. S. J. (2007). Coastal systems and low-lying areas. Cambridge University Press, 315–356.
7. Olguín-Lopez, N., Gutiérrez-Chávez, C., Hérnández-Elizárraga, H., Ibarra-Alvarado, C., & Rojas-Molia, A. (2017). Coral reef bleaching: An ecological and biological overview. In Corals in a Changing World. IntechOpen. https://doi.org/10.5772/intechopen.69685
8. Ware, J. R., Smith, S. V., & Reaka-Kudla, M. L. (1992). Coral reefs: Sources or sinks of atmospheric CO2? Coral Reefs, 11(3), 127–130. https://doi.org/10.1007/BF00255465
9. World Resources Institute. (2006). Value of coral reefs in Caribbean Islands: Draft economic valuation methodology. Retrieved from http://pdf.wri.org/methodology_with_appendix_jul06.pdf


10. Frieler, K., Meinshausen, M., Golly, A., Mengel, M., Lebek, L., Donner, S. D., & Hoegh-Guldberg, O. (2013). Limiting global warming to 2 degrees Celsius is unlikely to save most coral reefs. Nature Climate Change, 3(2), 165–170. https://doi.org/10.1038/nclimate1674
11. Carilli, J. E., Norris, R. D., Black, B. A., Walsh, S. M., & McField, M. (2009). Local stressors reduce coral resilience to bleaching. PLOS ONE, 4(7), e6324. https://doi.org/10.1371/journal.pone.0006324
12. D'Angelo, C., & Wiedenmann, J. (2014). Impacts of nutrient enrichment on coral reefs: New perspectives and implications for coastal management and reef survival. Current Opinion in Environmental Sustainability, 7, 82–93. https://doi.org/10.1016/j.cosust.2013.11.029
13. Bruno, J. F., Petes, L. E., Harvell, C. D., & Hettinger, A. (2003). Nutrient enrichment can increase the severity of coral diseases. Ecology Letters, 6(12), 1056–1061. https://doi.org/10.1046/j.1461-0248.2003.00544.x
14. Silbiger, N. J., Nelson, C. E., Remple, K., Sevilla, J. K., Quinlan, Z. A., Putnam, H. M., Fox, M. D., & Donahue, M. J. (2018). Nutrient pollution disrupts key ecosystem functions on coral reefs. Proceedings of the Royal Society B, 285(1880). https://doi.org/10.1098/rspb.2017.2718
15. McManus, J. W., & Polsenberg, J. F. (2004). Coral-algal phase shifts on coral reefs: Ecological and environmental aspects. Progress in Oceanography, 60(2–4), 263–279. https://doi.org/10.1016/j.pocean.2004.02.014
16. Boutillier, S., & Duane, T. P. (2006). Land use planning to promote marine conservation of coral reef ecosystems in Moorea, French Polynesia. Environmental Science.
17. Shea, A. G. (2011). Coral health and disease: A comparison of Cook's and Opunohu Bays in Moorea, French Polynesia. "Biology and Geomorphology of Tropical Islands" class, University of California, Berkeley, Student Papers. Retrieved from http://ib.berkeley.edu/moorea/uploads/6/6/8/3/6683664/2001_final_papers.pdf
18. Aeby, G. S., Ross, M., Williams, G. J., Lewis, T. D., & Work, T. M. (2010). Disease dynamics of Montipora white syndrome within Kaneohe Bay, Oahu, Hawaii: Distribution, seasonality, virulence, and transmissibility. Diseases of Aquatic Organisms, 91, 1–8. https://doi.org/10.3354/dao02247
19. London, S., & Tucker, L. (2009). A comparison of the effects of agriculture and development on the two bays in Moorea, French Polynesia, and the effects of a water outflow on a nearshore coastal community. "Biology and Geomorphology of Tropical Islands" class, University of California, Berkeley, Student Papers. Retrieved from http://ib.berkeley.edu/moorea/uploads/6/6/8/3/6683664/2001_final_papers.pdf
20. ESRI. (2011). ArcGIS Desktop: Release 10. Redlands, CA: Environmental Systems Research Institute.
21. Veron, J. E. N. (1985). Corals of Australia and the Indo-Pacific. London, Sydney: Angus & Robertson.
22. R Core Team. (2018). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from https://www.R-project.org/



23. Wickham, H., Averick, M., Bryan, J., Chang, W., McGowan, L. D., François, R., Grolemund, G., Hayes, A., Henry, L., Hester, J., Kuhn, M., Pedersen, T. L., Miller, E., Bache, S. M., Müller, K., Ooms, J., Robinson, D., Seidel, D. P., Spinu, V., … Yutani, H. (2019). Welcome to the tidyverse. Journal of Open Source Software, 4(43), 1686. https://doi.org/10.21105/joss.01686
24. Wickham, H. (2011). The split-apply-combine strategy for data analysis. Journal of Statistical Software, 40(1), 1–29. http://www.jstatsoft.org/v40/i01/
25. Wickham, H., François, R., Henry, L., & Müller, K. (2020). dplyr: A grammar of data manipulation. R package version 0.8.4. https://CRAN.R-project.org/package=dplyr
26. Sarkar, D. (2008). Lattice: Multivariate data visualization with R. Springer-Verlag, New York. ISBN 978-0-387-75968-5.
27. Simpson, G. L. (2019). permute: Functions for generating restricted permutations of data. R package version 0.9-5. https://CRAN.R-project.org/package=permute
28. Oksanen, J., Blanchet, F. G., Friendly, M., Kindt, R., Legendre, P., McGlinn, D., Minchin, P. R., O'Hara, R. B., Simpson, G. L., Solymos, P., Stevens, M. H. H., Szoecs, E., & Wagner, H. (2019). vegan: Community ecology package. R package version 2.5-6. https://CRAN.R-project.org/package=vegan
29. Salas, F., Neto, J. M., Borja, A., & Marques, J. C. (2004). Evaluation of the applicability of a marine biotic index to characterize the status of estuarine ecosystems: The case of Mondego estuary (Portugal). Ecological Indicators, 4(3), 215–225. https://doi.org/10.1016/j.ecolind.2004.04.003
30. Sagar, R., & Sharma, G. P. (2012). Measurement of alpha diversity using Simpson index. Environmental Skeptics and Critics, 1(1), 23–24.
31. Wickham, H. (2016). ggplot2: Elegant graphics for data analysis. Springer-Verlag, New York. https://ggplot2.tidyverse.org
32. Rudis, B., Kennedy, P., Reiner, P., Wilson, D., Adam, X., Barnett, J., Leeper, T. J., & Meys, J. (2020). hrbrthemes: Additional themes, theme components and utilities for 'ggplot2.' R package version 0.8.0. https://cran.r-project.org/web/packages/hrbrthemes/index.html
33. Lenth, R. (2019). emmeans: Estimated marginal means, aka least-squares means. R package version 1.4.3.01. Retrieved from https://CRAN.R-project.org/package=emmeans
34. Halekoh, U., & Højsgaard, S. (2014). A Kenward-Roger approximation and parametric bootstrap methods for tests in linear mixed models - the R package pbkrtest. Journal of Statistical Software, 59(9), 1–30. http://www.jstatsoft.org/v59/i09/
35. Henry, L., & Wickham, H. (2020). rlang: Functions for base types and core R and 'tidyverse' features. R package version 0.4.4. https://CRAN.R-project.org/package=rlang
36. Wickham, H., & Seidel, D. (2019). scales: Scale functions for visualization. R package version 1.1.0. https://CRAN.R-project.org/package=scales
37. Bates, D., Maechler, M., Bolker, B., & Walker, S. (2015). lme4: Linear mixed-effects models using Eigen and S4. R package version 1.1.8.
38. Xiao, N. (2018). ggsci: Scientific journal and sci-fi themed color palettes for 'ggplot2.' R package version 2.9. https://CRAN.R-project.org/package=ggsci


39. Kuznetsova, A., Brockhoff, P. B., & Christensen, R. H. B. (2017). lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software, 82(13), 1–26. https://doi.org/10.18637/jss.v082.i13
40. Bates, D., Maechler, M., Bolker, B., & Walker, S. (2015). lme4: Linear mixed-effects models using Eigen and S4. R package version 1.1.8.
41. Rueden, C. T., Schindelin, J., Hiner, M. C., et al. (2017). ImageJ2: ImageJ for the next generation of scientific image data. BMC Bioinformatics, 18, 529.
42. CoralWatch. (2019). Using the Coral Health Chart. Retrieved from https://coralwatch.org/index.php/monitoring/using-the-chart/
43. Dougan, K. E., Ladd, M. C., Fuchs, C., Thurber, R. V., Burkepile, D. E., & Rodriguez-Lanetty, M. (2020). Nutrient pollution and predation differentially affect innate immune pathways in the coral Porites porites. Frontiers in Marine Science, 7, 563865. https://doi.org/10.3389/fmars.2020.563865
44. Burkepile, D. E., Shantz, A. A., Adam, T. C., Munsterman, K. S., Speare, K. E., Ladd, M. C., Rice, M. M., Ezzat, L., McIlroy, S., Wong, J. C. Y., Baker, D. M., Brooks, A. J., Schmitt, R. J., & Holbrook, S. J. (2020). Nitrogen identity drives differential impacts of nutrients on coral bleaching and mortality. Ecosystems, 23, 798–811. https://doi.org/10.1007/s10021-019-00433-2
45. Cacciapaglia, C., & van Woesik, R. (2016). Climate-change refugia: Shading reef corals by turbidity. Global Change Biology, 22, 1145–1154. https://doi.org/10.1111/gcb.13166
46. Brown, B. E., & Bythell, J. C. (2005). Perspectives on mucus secretion in reef corals. Marine Ecology Progress Series, 296, 291–309. https://doi.org/10.3354/meps296291

ACKNOWLEDGMENTS

I thank Stephanie Carlson, George Roderick, Seth Finnegan, and Brent Mishler for their persistent guidance in formulating and executing this study. I would also like to thank Philip Georgakakos, Mo Tatlhego, and Ilana Stein for all their help with statistical analyses and their mentorship in study design and fieldwork. I am very grateful to Danielle Becker for all her help in identifying coral genera, for giving me the opportunity to work with her coral fragments, and for assisting me in designing the coral transplant portion of this study. Additionally, I would like to acknowledge the Moorea Class of 2019 for being so supportive and helpful in the field, with notable mention to Kyra Grove, Yayla Sezginer, Chris McCarron, and Kyle Schwartz. I would also like to express my appreciation to Rynier Clinton for his assistance in operating the Geographic Information System and ImageJ.



APPENDIX A

TRANSECT COORDINATES

Site.Transect   Latitude South   Longitude West
CB1.1           17.48            149.82722222
CB1.2           17.47944444      149.82722222
CB1.3           17.47833333      149.82888889
CB1.4           17.47833333      149.83055556
CB1.5           17.47777778      149.83166667
CB2.1           17.47444444      149.81388889
CB2.2           17.47444444      149.81305556
CB2.3           17.47388889      149.81444444
CB2.4           17.47388889      149.80972222
CB2.5           17.47444444      149.81722222
H.1             17.47777778      149.84194444
H.2             17.47861111      149.84444444
H.3             17.47833333      149.84444444
H.4             17.47805556      149.84166667
H.5             17.47833333      149.84305556
OB1.1           17.48333333      149.85111111
OB1.2           17.4830556       149.85333333
OB1.3           17.48305556      149.85166667
OB1.4           17.485           149.85555556
OB1.5           17.485           149.85333333
OB2.1           17.48861111      149.86611111
OB2.2           17.48777778      149.86611111
OB2.3           17.48888889      149.86666667
OB2.4           17.48666667      149.86833333
OB2.5           17.48638889      149.87

Table A1: Transect locations in decimal degrees, listed by site and transect. There are five transects at each site; for example, CB1.1 stands for Cook's Bay site 1, transect 1, and so on. Cook's Bay site 1 is abbreviated as CB1, Cook's Bay site 2 as CB2, the Hilton site as H, Opunohu Bay site 1 as OB1, and Opunohu Bay site 2 as OB2.

CORAL TRANSPLANT COORDINATES

Site   Latitude South   Longitude West
CB1    17.48            149.82805556
CB2    17.47444444      149.8138889
H      17.47833333      149.84444444
OB1    17.485           149.85472222
OB2    17.48861111      149.86583333

Table A2: Transplant locations in decimal degrees, listed by site. Cook's Bay site 1 is abbreviated as CB1, Cook's Bay site 2 as CB2, the Hilton site as H, Opunohu Bay site 1 as OB1, and Opunohu Bay site 2 as OB2.



CORAL GENERA IDENTIFICATION

Figure A1: Coral identification photo of the Acropora genus.

Figure A2: Coral identification photo of the Pocillopora genus.

Figure A3: Coral identification photo of the Porites genus (left purple coral), Pavona genus (middle orange coral), and Montipora genus (right purple coral).

Figure A4: Coral identification photo of the Gardinoseris genus.

Figure A5: Coral identification photo of the Millepora genus.

Figure A6: Coral identification photo of the Siderastrea genus.

Figure A7: Coral identification photo of the Phymastrea genus.

Figure A8: Coral identification photo of the Leptoseris genus.

Figure A9: Coral identification photo of the Leptastrea genus.



Figure A10: Coral identification photo of the Phymastrea genus.

Figure A11: Coral identification photo of the Acanthastrea genus.

Figure A12: Coral identification photo of the Lithophyllon genus.

Figure A13: Coral identification photo of the Pleuractis genus.

MACROALGAE

Figure A14: Identification photo of Turbinaria ornata (left) and Sargassum muticum (right) fleshy macroalgae types.


Figure A15: Identification photo of Dictyota bartayresiana, a fleshy macroalgae type.



CORAL BLEACHING

Figure A16: Representative photo of a “bleached” categorized Pocillopora coral.

Figure A17: Representative photo of a “partially bleached” categorized Phymastrea coral.

CORAL MUCUS

Figure A18: Representative photos of stress-induced coral mucus on a Porites coral head.

Figure A19: Representative photo of string-like mucus from a vermetid snail.

CORAL TRANSPLANTS

Figure A20: Photo of a coral transplant setup at an underwater site during the initial output in Opunohu Bay site 1. Nutrient enriched corals are at the front left row, while control corals are at the back right row.



