

VOLUME 7 | 2017-2018

The North Carolina School of Science and Mathematics Journal of Student STEM Research

Front Cover During the August 21, 2017 solar eclipse, an eclipse shadow is captured crossing over North America by the National Aeronautics and Space Administration’s (NASA’s) Earth Polychromatic Imaging Camera (EPIC). EPIC photographs the full sunlit side of Earth every day.

Biology Section A light echo occurs when light reflects off surfaces distant from its source and arrives at the observer after a delay. Here, NASA’s Hubble Space Telescope captures the reflection of light from dust around the supergiant star V838 Monocerotis in 2002.

Chemistry Section The last evolutionary stage of a massive star’s life is marked by its final explosion as a supernova, causing the sudden appearance of a very bright star that slowly fades. A supernova remnant can produce nebulas such as the Crab Nebula, a pulsar wind nebula photographed by NASA’s Hubble Space Telescope.

Mathematics and Computer Science Section Occasionally, the sun’s brightness increases in a sudden flash known as a solar flare. These flares are often accompanied by a coronal mass ejection of plasma and magnetic field, as captured by NASA’s Solar Dynamics Observatory (SDO) on August 31, 2012. This solar flare later caused an aurora on Earth.

Physics and Engineering Section Cosmic debris that enters Earth’s atmosphere on parallel trajectories causes small meteors to appear to originate from a single point in a meteor shower. Photographer Howard Edin used 100 photographs with 45-second exposures to compile this view of the 2009 Leonid Meteor Shower from Kansas.

Back Cover On August 21, 2017, the moon passed between the sun and the Earth, causing a total solar eclipse observable in the United States. Although the campus of the North Carolina School of Science and Mathematics was not in the path of totality, Sahil Sethi ‘19 was able to capture this rare picture of the partially occulted sun.



Letter from the Chancellor


Words from the Editors


Broad Street Scientific Staff


Featured Photo



Essay: Are We Sacrificing Earth for Mars?


Biology 11

Analyzing the Base Methylation of MicroRNAs in Breast Cancer



Different Pigments, Same Protection: Lichens Under Simulated Martian UV



Harmful Algal Growth Suppressed by Allelopathy Regardless of Excess Phosphorus



Insect Growth Regulators as a Biological Control Method for Termites R. flavipes


Chemistry 41

Apoptotic and Immunomodulatory Effects of Gemcitabine Monophosphate Delivery via Lipid

Calcium Phosphate Nanocarriers for Pancreatic Cancer



Plasmon-Assisted Photothermal Catalysis for the Methanol Steam Reforming Reaction



Repurposing Carbon Black Nanoparticles for Use in Crude Oil Spills


Mathematics and Computer Science 61

Least Factorial Problem



Modeling the Relative Impacts of Controls on the Spread of Methicillin-Resistant

Staphylococcus aureus


Physics and Engineering 77

Optimizing the Magnetic Confinement of Electrons for Nuclear Fusion



Reduced Graphene Oxide Fibers for Wearable Supercapacitors



Time Domain Calculations of Scalar Radiation from an Orbiting Point Charge in

Schwarzschild Spacetime


Feature Article 98

An Interview with Dr. Joseph DeSimone

LETTER from the CHANCELLOR

“The noblest pleasure is the joy of understanding.”
~ Leonardo da Vinci

I am proud to introduce the seventh edition of the North Carolina School of Science and Mathematics’ (NCSSM) scientific journal, Broad Street Scientific. Each year, students at NCSSM conduct significant scientific research, and Broad Street Scientific is a student-led and student-produced showcase of some of the outstanding research being done at NCSSM. Providing students with opportunities to apply their learning through research is not only vitally important in preparing and inspiring them to pursue STEM degrees and careers, but also essential to encouraging the innovative thinking that will allow them to scientifically address the major challenges and problems we face in the world today and will face in the future. With the rapid advancement of technology in areas such as artificial intelligence and machine learning, the opportunities before us to advance discovery through research are greater than ever before. However, with these enhanced opportunities to know comes an even greater responsibility to act thoughtfully and ethically, because discovery can be a double-edged sword depending on how it is wielded.

Opened in 1980, NCSSM was the nation’s first public residential high school where students study a specialized curriculum emphasizing science and mathematics. Teaching students to do research and providing them with opportunities to conduct high-level research in biology, chemistry, physics, computational science, engineering and computer science, math, humanities, and the social sciences is a critical component of NCSSM’s mission to educate academically talented students to become state, national and global leaders in science, technology, engineering and mathematics. I am thrilled that each year we continue to increase the outstanding opportunities NCSSM students have to participate in research. The works showcased in this publication are examples of the research that students conduct each year at NCSSM under the direction of the outstanding faculty at our school and in collaboration with researchers at major universities. For thirty-three years, NCSSM has showcased student research through our annual Research Symposium each spring and at major research competitions such as the Siemens Competition in Math, Science and Technology, the Regeneron Science Talent Search, and the International Science and Engineering Fair, to name a few. The publication of Broad Street Scientific provides another opportunity to highlight the outstanding research conducted each year by students at the North Carolina School of Science and Mathematics.

I would like to thank all of the students and faculty involved in producing Broad Street Scientific, particularly faculty sponsor Dr. Jonathan Bennett and senior editors Elizabeth Beyer, Isabella Li, and Sreekar Mantena.

Explore and Enjoy!

Dr. Todd Roberts, Chancellor

WORDS from the EDITORS

Welcome to the Broad Street Scientific, NCSSM’s journal of student research in science, technology, engineering, and mathematics. In this seventh edition of Broad Street Scientific, we aim to showcase original student research from the past year and encourage continued excitement for discovery and innovation. NCSSM students have made incredible contributions to the scientific community, and we hope you enjoy this year’s issue!

This year’s theme is astronomical phenomena: beautiful events that we witness from Earth but that originate thousands or millions of miles away in space. Early in human history, such phenomena were associated with gods, and their behavior was thought to be predictive of the weather and seasons. Over time, ancient civilizations — from Mesopotamia to Greece, China to Egypt — began extensively documenting their observations of celestial behavior. In Renaissance Europe, scientists such as Nicolaus Copernicus and Galileo Galilei elucidated our understanding of our solar system, laying the groundwork for future astronomical breakthroughs. Modern astronomy is diverse, as organizations such as NASA and SpaceX pioneer space exploration, while scientists continue to probe universe-shaping phenomena such as the Big Bang and black hole mergers (first detected by LIGO in 2015 and announced in 2016). Our theme was inspired by the August 2017 total solar eclipse, the first visible from the continental U.S. in over three decades. NCSSM organized a school-wide observation of the rare event, generating breathtaking pictures such as the featured photo by Sahil Sethi ‘19 found on page 7. Sethi’s work embodies the NCSSM spirit of curiosity and creativity, and we would like to thank him for contributing his photo for the seventh edition of Broad Street Scientific. Sethi also contributed the back cover photo, rounding out our theme with a beautiful piece.

We would also like to thank the faculty, staff, and administration of NCSSM for their continuous support of science, mathematics, and engineering. It is this unique encouragement of student research that allows our school to excel, both in our state and nationally. By nurturing innovation, NCSSM empowers students and alumni to catalyze change for the future. We especially thank Dr. Jonathan Bennett for his guidance throughout the publication of this issue. We also thank our Chancellor Dr. Todd Roberts, Dean of Science Dr. Amy Sheck, and Director of Mentorship and Research Dr. Sarah Shoemaker for their active support. Finally, we thank Dr. Joseph DeSimone, chemist, inventor, co-founder and CEO of Carbon, former member of NCSSM’s Foundation Board of Directors, and an inaugural inductee (2017) of the NC STEM Hall of Fame, for speaking with us about entrepreneurship and research and for providing valuable advice for young scientists pursuing STEM.

Elizabeth Beyer, Isabella Li, and Sreekar Mantena
Editors


Publication Editors
Isabella Li, 2018
Sreekar Mantena, 2018

Biology Editors
Elizabeth Beyer, 2018
Maanasi Bulusu, 2019
Kathleen Hablutzel, 2019
Sofia Sanchez-Zarate, 2018

Chemistry Editors
Sophia Luo, 2018
Aarushi Patil, 2019
Megan Wu, 2019

Engineering Editors
Ritvik Bodducherla, 2018
Navami Jain, 2019

Mathematics and Computer Science Editors
Mihir Patwardhan, 2019
Emily Wang, 2019
Albert Gong, 2019
Hahn Lheem, 2019

Physics Editors
Zack Lee, 2018
Abhijit Gupta, 2019

Faculty Advisor
Dr. Jonathan Bennett


The SOLAR ECLIPSE of August 21, 2017, is observed on the campus of the North Carolina School of Science and Mathematics. Sahil Sethi ‘19 photographed the different phases of the eclipse through a telescope and homemade solar filter. The individual photographs were then applied to a background image taken towards the end of the eclipse.

ARE WE SACRIFICING EARTH FOR MARS?
Corinne Miller

Corinne Miller was selected as the winner of the 2017-2018 Broad Street Scientific Essay Contest. Her award included the opportunity to interview Dr. Joseph DeSimone, Chancellor’s Eminent Professor of Chemistry at UNC, William R. Kenan Jr. Distinguished Professor of Chemical Engineering at NC State and of Chemistry at UNC, Co-founder and CEO of Carbon, and Co-founder of Liquidia Technologies, Bioabsorbable Vascular Solutions, and Micell. This interview can be found in the Featured Article section of the journal.

Fervor over the prospect of colonizing Mars has grown in recent years with the development of programs such as Mars One and NASA’s Journey to Mars. Simultaneously, governmental action in response to climate change has come to a standstill. It has begun to feel as if humans have decided to trade in one planet for another. Climate change is met with sentiments of resignation as its effects become more apparent, while interplanetary travel is met with unrestrained awe. Colonization of Mars has been proposed more than once as an alternate home for humanity once the Earth is used up, but this plan is flawed in numerous ways. Further, it rejects our responsibility to assist those who are already suffering due to climate change.

The climate of Earth has fluctuated in atmospheric makeup – and, thus, temperature – since it first stabilized into a planet suitable for life. These fluctuations were naturally caused and followed a steady pattern over the last 400,000 years. Despite rising and falling dramatically over this time, atmospheric carbon dioxide never exceeded approximately 300 parts per million until 1950, when CO2 levels began to skyrocket. This has had catastrophic effects on the global environment, resulting in a variety of extreme weather phenomena.
Since the 19th century, the average global temperature has risen an unprecedented 1.1°C, causing mass ice melt and a 20 cm rise in sea levels over the last century. In the past 20 years in particular, the rate at which ocean levels rise has doubled (“Climate change evidence,” 2018). In response to such environmental anomalies, a committee of 1,300 independent international scientists performed an exhaustive study of Earth’s changing climate and determined that there is a 95% probability that it is a direct result of human activity (“Climate change causes,” 2017). Through fossil fuel use and expansive land use, humans have released greenhouse gases into the atmosphere (“USA,” 2017). Greenhouse gases are marked by their ability to absorb electromagnetic energy and retain it as thermal energy. The increased energy of their molecules results in more collisions between atmospheric particles, distributing thermal energy throughout the planet (“What are the properties,” 2016). Among greenhouse gases, CO2 and water vapor are the most prolific.

Where water vapor precipitates back to Earth, however, CO2 remains (“Climate change causes,” 2017). As predictions for Earth’s future become more foreboding, people of all nations have turned away from terrestrial reform and towards space for hope of a safe future. At the behest of prominent figures such as Stephen Hawking, who has called for the colonization of space within 100 years to avoid ecological breakdown on Earth, at least 25 countries have announced plans with the goal of extraterrestrial habitation (Ozimek, 2017; Griggs, 2017; Ghosh, 2015; “What is ESA,” 2017). NASA has reported Mars to be the most likely candidate for successful human colonies, as Mars has a day length and obliquity similar to Earth’s (Griggs, 2017; McKay & Marinova, 2001). For habitation on Mars to continue our species, as Hawking and others propose, self-contained biospheres will not be sufficient to support the human population (Ozimek, 2017). Instead, many have begun to consider the process of planetary ecosynthesis: the purposeful and methodical transformation of a planet’s climate to create sustainable ecosystems.

On Mars, the greatest obstacle to human habitation is its atmosphere, which is composed of 96% CO2 and is dangerously thin (Ozimek, 2017; McKay & Marinova, 2001). Though the precise quantities are currently unknown, scientists theorize that vast stores of CO2, N2, and H2O exist on Mars (McKay & Marinova, 2001). Because the atmosphere is so thin, it contains minimal amounts of CO2 (Ozimek, 2017; McKay & Marinova, 2001); instead, CO2 is stored in Mars’s polar caps and regolith, the dusty layer of soil covering the planet’s surface. To terraform Mars, the polar caps would need to be melted, releasing CO2 and H2O into the atmosphere to warm the planet. This would require the energy of 10 years’ worth of solar radiation on Mars even if collected at 100% efficiency, which is impossible; a more realistic timescale for melting the polar caps sufficiently to warm the planet is on the order of 100 years.
Once the atmosphere and surface of Mars are heated, the planet must be warmed on a deeper level by melting its ice stores and creating oceans – an additional 500 years. The final and longest stage of changing Mars is the chemical alteration of its atmosphere. Through the use of genetically engineered plants and microbes that naturally thrive in climates similar to that of a warmed Mars, ecosystems can be established to introduce copious O2 into the atmosphere (McKay & Marinova, 2001). Even generous estimates consider the planetary ecosynthesis of Mars to be a 100,000-year process (Ozimek, 2017; McKay & Marinova, 2001).

As a reaction to climate change, colonization of space has major shortcomings. The drawn-out terraforming process means that colonization cannot provide a timely solution to the urgent environmental situation here on Earth (Williams, 2010). Space travel already possesses a damning environmental history, with incidents like the 2007 rocket crash over central Kazakhstan, which left thousands of acres of land poisoned (Williams, 2008). With estimated costs for a manned mission to Mars ranging from $100 billion to $1 trillion over the next 25-40 years, programs such as NASA’s Journey to Mars strain already tight resources, inevitably pulling funding from environmental initiatives (Williams, 2010; Gaffey, 2017).

In the past year, the United States has rolled back many of the protections necessary for preventing irreversible environmental damage. The Climate Action Plan has been repealed, and the Clean Power Plan has been threatened; both would be necessary to meet the country’s Nationally Determined Contribution (NDC) under the Paris Agreement, which the US announced its intention to leave this past August (“USA,” 2017). Internationally, no major industrialized country has met its NDC – which often consists of expensive but necessary pledges – yet many have announced explorations into extraterrestrial colonization (Victor et al., 2017). The trend of turning away from environmental programs towards those focused on space has become undeniably apparent in recent years. Where discussions of climate change are composed of doom and shame, those of colonizing Mars seem alight with hope and glory.
People are naturally drawn to fixating on the latter and abandoning solutions for the former, creating a self-fulfilling prophecy of planetary proportions (Ozimek, 2017). But not only does this plan fail on a functional level, it also fails on a humanitarian level. Climate change is a direct threat to human lives. In 2000, the World Health Organization estimated that climate change caused 166,000 deaths and the loss of 5.5 million years of healthy human life to disease, disability, and early death (Luber & Knowlton, n.d.). Extreme weather – wildfires, hurricanes, droughts, floods – endangers both the lives and the livelihoods of people everywhere. It has been shown to cause loss of property, death, and trauma for those it impacts (“A Human Health Perspective,” 2010). Mental health problems, in populations both with and without a history of mental illness, have been shown to increase after instances of extreme weather (“A Human Health Perspective,” 2010). Increased temperatures and changing seasons result in increased allergens, vector-borne diseases, heat-related illnesses, and crop failures (“A Human Health Perspective,” 2010). The effects of climate change prey greatly on children, the pregnant, the elderly, and the impoverished (Luber & Knowlton, n.d.). By ignoring climate change in favor of shinier interplanetary space missions, we abandon those already bearing its burden.

With greater advancements in technology, people are increasingly putting their faith in colonizing other planets for the continuation of humanity, rather than in fixing their own planet. The glowing media coverage surrounding interplanetary exploration has left many blinded to the plan’s various flaws. Viewing space as an alternative to solving climate change is impractical on every level. Expensive and unfeasible, space programs draw attention – and, as a result, resources – away from initiatives to combat the damage of human activity on Earth. These initiatives, which could save the environment and the people who live in it, have had many setbacks in just the past year, both in the US and abroad (Russell, 2017). It is imperative that, amid the vociferous hype surrounding interplanetary missions, the scientific community and the general populace do not ignore the very urgent issues here on Earth.

References

A Human Health Perspective on Climate Change (Rep.). (2010). Retrieved January 7, 2018, from https://www.niehs.nih.gov/research/programs/geh/climatechange/health_impacts/index.cfm

Climate change causes: A blanket around the Earth. (2017, January 2). Retrieved January 6, 2018.

Climate change evidence: How do we know? (2018, January 2). Retrieved January 6, 2018.

Gaffey, C. (2017, July 14). NASA Can’t Afford to Put Humans on Mars. Retrieved January 6, 2018.

Ghosh, P. (2015, October 16). Europe and Russia mission to assess Moon settlement. Retrieved January 6, 2018.

Griggs, M. B. (2017, September 29). All the countries (and companies) trying to get to Mars. Retrieved January 6, 2018.

Luber, G., & Knowlton, K. (n.d.). Human Health. Retrieved January 7, 2018.

McKay, C. P., & Marinova, M. M. (2001). The Physics, Biology, and Environmental Ethics of Making Mars Habitable. Astrobiology, 1(1), 89-109.

Ozimek, A. (2017, May 6). Sorry Nerds, But Colonizing Other Planets Is Not A Good Plan. Retrieved January 6, 2018.

Russell, R. (2017, November 1). Climate change is happening – but it’s not game-over yet. DW Environment. Retrieved January 6, 2018.

USA. (2017, November 6). Retrieved January 6, 2018.

Victor, D. G., Akimoto, K., Kaya, Y., Yamaguchi, M., Cullenward, D., & Hepburn, C. (2017, August 1). Prove Paris was more than paper promises. Retrieved January 6, 2018.

What are the properties of a greenhouse gas? (2016, July 5). Retrieved January 6, 2018, from https://www.acs.org/content/acs/en/climatescience/greenhousegases/properties.html

What is ESA? (2017, May 11). Retrieved January 6, 2018.

Williams, L. (2010). Irrational Dreams of Space Colonization. Peace Review, 22(1), 4-8.

Williams, L. (2008, May 14). Space Ecology: The Final Frontier of Environmentalism. Natural Living Magazine. Retrieved January 6, 2018.

ANALYZING THE BASE METHYLATION OF MicroRNAs IN BREAST CANCER
Jessica Chen

Abstract

Recent discoveries demonstrating the dynamic nature of RNA methylation have spurred interest in understanding the epitranscriptome and its role in gene expression. Using immunocapturing and high-throughput sequencing technologies, m6A, m1A, and m5C methyl groups were identified in microRNAs, noncoding RNAs that repress mRNA translation. High modification levels were observed in small RNAs including miR-3940-5p, miR-3168, and tRF-3007a, indicating a potential regulatory role in breast cancer and other diseases. Since m1A modifications can inhibit library creation, a novel method using ALKB demethylation was tested for identifying m1A-modified miRNAs. The treatment significantly increased localization of previously overlooked modifications in mir-1937a, U18B, mir-21, and more. In breast cancer cells, differential expression of m1A modifications was observed in mir-let-7c, mir-100, mir-127, mir-200c, Glu-CTC-2, Leu-CAG-2, Thr-TGT-2, and more, revealing the potential role of RNA methylation in cancer biology. To better understand the role of regulatory proteins in RNA methylation, knockdowns were conducted for the demethylases FTO and ALKBH5 and the methyltransferases METTL3, METTL14, and VIRMA to identify miRNA substrates for these enzymes. Overall, this study uncovers a previously unknown mechanism for altering miRNA function and may inform future therapeutic studies.

1. Introduction

MicroRNAs are small non-coding RNA molecules that downregulate their target genes by binding to the 3’ untranslated regions of messenger RNAs, leading to mRNA degradation or translational inhibition (Shiekhattar & Gregory, 2005). Messenger RNAs are targeted based on specificity and affinity, as binding only occurs in the seed region, nucleotides 2 to 8 on the 5’ end of miRNAs (Bartel).
Through the formation of RNA-induced silencing complexes, miRNAs play a crucial role in post-transcriptional regulation and are a major component of RNA interference (Hutvágner & Zamore, 2002). Due to the diversity of miRNAs and their impact on gene products, the regulation of miRNAs is a topic of high interest. Misregulated miRNAs have been found to inhibit crucial tumor suppressors in breast cancer, such as Forkhead box transcription factors, leading to an increase in cell proliferation that aids in the metastasis of the disease (Lin, 2010). Previous research has shown that, compared to normal breast cells, breast cancer cells show significant deregulation of mir-125b, mir-145, mir-21, and mir-155 (Iorio et al., 2005). Through extensive profiling, similar microRNA signatures have been identified as important biomarkers and are increasingly used in the diagnosis, treatment monitoring, and prognosis of various genetic diseases (Calin & Croce, 2006). To better understand microRNA pathways, this project focuses on RNA base modifications, known collectively as the epitranscriptome. The most common modification is N6-methyladenosine (m6A) (Fig. 1), which has been observed in thousands of mammalian genes, as well as in viruses, yeast, and plants (Saletore et al., 2012).

Figure 1. RNA modification complex with m6A highlighted in blue (Chi, 2017).

However, unlike DNA methylation, RNA methylation was not widely studied until the discovery of a regulatory role through an enzyme known as FTO, the fat mass and obesity-associated protein. Through the demethylation of m6A, FTO plays an active role in enabling cells to control RNA methylation, leading to speculation that the differential expression of these reversible modifications can affect many biological processes. Recent discoveries have shown that additional proteins are involved in regulating m6A and other common methyl groups, including N1-methyladenosine (m1A) and 5-methylcytosine (m5C) (Fig. 2). Since the activity level of methyltransferases and demethylases can directly affect the expression of crucial RNAs required for protein synthesis, malfunctioning demethylases and methyltransferases have been implicated in breast cancer, Parkinson’s disease, leukemia, endometrial cancer, epilepsy, colorectal cancer, and more (McGuinness D & McGuinness DH, 2014). The impact of RNA methylation on gene expression has attracted attention to the study of the epitranscriptome, but the majority of studies on RNA base modifications have focused on ribosomal RNAs, transfer RNAs, and mRNAs (Saletore et al., 2012). Our study focused on identifying m6A, m1A, and m5C modifications in microRNAs, and on the impact of ALKB—a recombinant bacterial demethylase—on specifically identifying the m1A modification in miRNAs and tRNAs. Previous methods for investigating the prevalence of RNA modifications, such as using Carbon-14 to radiolabel methionine, have been largely inefficient and technically challenging (Saletore et al., 2012). Using new technologies combining immunocapturing and high-throughput parallel sequencing, the location and relative abundance of the desired modifications can be determined with precision and accuracy in a relatively short amount of time (Dominissini et al., 2013).
The modifications were identified in MDA-231 and MDA-468 triple-negative breast cancer cells, as well as in ME16c normal breast epithelial cells; the goal was to find alterations specific to the cancer cells, which therefore may play a role in cancer biology. With a deeper understanding of miRNA methylation, this research serves as a foundation for understanding modification-based regulation of miRNAs and reveals the potential roles of small RNAs as biomarkers and gene regulators.

2. Materials and Methods

Figure 2. Modifications a) m6A, b) m1A, and c) m5C highlighted in red (Modomics).

In order to identify modified microRNAs, next-generation sequencing was used to analyze an RNA sample that was immunoprecipitated according to a protocol described by the Dominissini Lab (Dominissini et al., 2013). MDA-231 cells were washed with PBS, and 1 ml of TRIzol was added to lyse the cells. The RNA was extracted by adding 150 μl of chloroform and centrifuging until the top layer could be removed. 500 μl of isopropanol was added, and the supernatant with the RNA was removed and washed with 70% ethanol. To conduct the immunoprecipitation, 4 setups were used, 3 of which had 10 μl of antibody specific to 1 of the modifications (m6A, m1A, or m5C), and a fourth setup with no antibody as a control. 40 μl of magnetic beads with protein A and protein G were added so that the antibody could bind to the beads. The excess solution was removed with a magnetic stand, and 500 μl of buffer and 50 μl of the pre-fragmented RNA were added. The modified RNA was then extracted by washing away the unwanted RNA, and the desired RNA was separated from the antibody using 200 μl of TRIzol and 50 μl of chloroform. The RNA was pelleted by adding 5 μl of glycogen and 100 μl of isopropanol; after centrifuging, the supernatant was removed, and the RNA pellet was stored in a -20°C freezer.

Due to the positioning of the m1A modification, the methylation can block Watson-Crick base pairing and inhibit the reverse transcriptase needed to create the RNA library. To circumvent this obstacle, half of the RNA with the m1A modification was treated with a known demethylase, ALKB: 168 μl of water, 20 μl of Tris, 2 μl of αKG, 2 μl of AA, 2 μl of BSA, 2 μl of FAS, and 4 μl of ALKB were added, and the desired RNA was immunoprecipitated.

To create the small RNA library, the RNA was prepared for next-generation sequencing. 300 ng of each RNA sample—including a sample of RNA that had not undergone immunoprecipitation—was added to 2 ng of 3’ Ad-linker and 13 μl of water. Primers were added to the 3’ end using 3 μl of water, 2 μl of ligase buffer (without ATP), 0.5 μl of RNaseOUT, and 0.5 μl of Lig2. To add primers to the 5’ end, 1 μl of reverse transcriptase primer, 1 μl of ligase buffer, 1 μl of 5’ linker, 3 μl of 10 mM ATP, 3 μl of water, and 1 μl of Lig1 were added. cDNA was generated from the RNA using 1 μl of SS reverse transcriptase, 10 μl of 5x SS buffer, 2 μl of 100 mM DTT, 2 μl of 10 mM dNTP, and 5 μl of water, and the solution was incubated before adding 150 μl of water.
To amplify the cDNA, 1 μl of primer, 0.5 μl of dNTP, 10 μl of HF Buffer, 0.1 μl of SYBZ, 0.5 μl of DNA polymerase, and 32 μl of water were added to a PCR plate. 2 technical replicates were conducted for each sample, and each replicate had 5 μl of the cDNA and 1 μl of a barcode primer unique to each modification. After conducting PCR, the cDNA was separated using a polyacrylamide gel made with 3 ml of 10% TBE, 10 ml of acrylamide, 300 μl of ammonium persulfate, 30 μl of TMEDA, and water. After running the gel for 2 hours, a noticeable bright line was seen under ultraviolet light, indicating the presence of adaptors that had ligated to each other and did not contain an insert. The section of the gel containing the desired DNA inserts, located above the bright band, was removed and placed in water overnight for the DNA to diffuse out (Fig. 3). Then, the DNA was precipitated and sent to Illumina for high-throughput sequencing.

Figure 3. Image of polyacrylamide gel under ultraviolet light, showing a noticeable bright band of adaptors with no insert. Ladder on far right.

After receiving the data, the hits files were analyzed to determine the number of sequence reads for each RNA sample. For each modification, the sequence reads were normalized to the total number of reads and then converted to reads per million in order to account for differences in quantity due to amplification, pulldown, and sequencing. The ratio between the sequence reads for microRNAs with a modification and the sequence reads for the total RNA was calculated. Upon determining the fold change in reads, the microRNAs with relatively large changes were further analyzed.

Data from a previous experiment was used to analyze the effect of ALKB, a demethylase from Escherichia coli, on identifying m1A modifications. The RNA extracted using immunoprecipitation was treated for 5 or 15 minutes with ALKB before being sent for next-generation sequencing. The sequence read data were converted to BED format using a program that matches the chromosome position of each sequence read to the name of the corresponding RNA. The bedtools toolset was used to intersect the sequence read data with existing sno/miRNA and tRNA gene databases from the UCSC Table Browser—using the July 2007 mm9 assembly for mice and the February 2009 hg19 assembly for humans (UCSC Table Browser). The number of reads for each RNA was counted, and the reads were plotted for each cell line, with and without ALKB treatment. In order to analyze specific RNAs, the genomecov function from bedtools was used to create bedgraphs that plot the sequence reads for individual sno/miRNAs, with and without the treatment. Additionally, sequence reads for MDA-231 and MDA-468 cells were compared to sequence reads for ME16c cells. tRNA reads were normalized and multiplied by 1,000,000, and miRNA reads were normalized and multiplied by 500,000. After calculating the ratio between the normalized reads in cancer cells and normal cells, alterations were identified in RNAs that may potentially play a role in breast cancer.

3. Results
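The read-counting step described in the Methods — intersecting BED-format sequence reads with sno/miRNA annotation intervals — can be sketched in pure Python as a minimal, illustrative stand-in for the bedtools intersection; all coordinates and read positions below are hypothetical, not from the study's data.

```python
# Count how many BED-style reads (chrom, start, end) overlap each
# annotated small-RNA interval, mimicking the bedtools intersection
# step. Coordinates are 0-based, half-open, as in the BED format.

def overlaps(read, gene):
    """True if two (chrom, start, end) intervals share any bases."""
    return read[0] == gene[0] and read[1] < gene[2] and gene[1] < read[2]

def count_reads(reads, annotations):
    """Return {rna_name: number of overlapping reads}."""
    counts = {name: 0 for name, _ in annotations}
    for read in reads:
        for name, gene in annotations:
            if overlaps(read, gene):
                counts[name] += 1
    return counts

# Hypothetical annotation intervals and reads (illustrative only).
annotations = [
    ("hsa-mir-21", ("chr17", 57918627, 57918698)),
    ("U18B", ("chr15", 100, 200)),
]
reads = [
    ("chr17", 57918630, 57918660),
    ("chr17", 57918680, 57918710),
    ("chr15", 150, 180),
    ("chr1", 500, 530),   # overlaps no annotated small RNA
]

print(count_reads(reads, annotations))  # {'hsa-mir-21': 2, 'U18B': 1}
```

In practice, bedtools performs this intersection far more efficiently over genome-scale files; this sketch only shows the per-read overlap logic being applied.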


[Table 1 values could not be reconstructed from this copy; the listed RNAs include hsa-miR-3168, hsa-miR-3940-3p, hsa-miR-3940-5p, tRF-3019a, and tRF-3020a, with read counts for the m6A, m1A, and m5C pulldowns and the Total RNA.]

Table 1. Normalized sequence reads for select small RNAs. Significant fold changes were observed between the sequence reads for RNAs with m6A, m1A, or m5C modifications and the Total RNA.
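The enrichment ratios behind Table 1 follow from the reads-per-million (RPM) normalization described in the methods. The sketch below illustrates the calculation; the RNA names and counts are hypothetical stand-ins for the real hits-file data:

```python
# Sketch of the reads-per-million (RPM) normalization and pulldown-vs-total
# fold-change calculation described in the methods. The counts here are
# hypothetical, not values from the study.

def to_rpm(counts):
    """Normalize raw read counts to reads per million."""
    total = sum(counts.values())
    return {rna: n / total * 1_000_000 for rna, n in counts.items()}

def fold_changes(pulldown_counts, total_counts):
    """Ratio of modification-pulldown RPM to total-RNA RPM for each RNA."""
    pulldown = to_rpm(pulldown_counts)
    total = to_rpm(total_counts)
    return {rna: pulldown[rna] / total[rna]
            for rna in pulldown if total.get(rna, 0) > 0}

# Hypothetical read counts for two small RNAs.
m6a = {"hsa-miR-a": 900, "hsa-miR-b": 100}
total_rna = {"hsa-miR-a": 500, "hsa-miR-b": 500}

fc = fold_changes(m6a, total_rna)
print(fc)  # hsa-miR-a is enriched in the pulldown relative to total RNA
```

Normalizing both libraries to RPM before taking the ratio is what lets read counts from libraries of different depths be compared directly.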

Figure 4. Plots comparing the sequence reads for total RNA to the sequence reads for modified RNA pulldowns. Every data point corresponds to a known small RNA.

Figure 5. miRNA sequence reads in human breast cancer cell line MDA-468 after 15 minutes (top) and 5 minutes (bottom) of ALKB treatment.

The library of modified RNAs indicated that select microRNAs and tRNA-derived RNA fragments (tRFs) had a relative enrichment of sequence reads for specific modifications. Data points below the 45° line passing through the origin represent RNAs with more sequence reads from the modification pulldown than from the total RNA in MDA-231 cells (Fig. 4). Read counts are analogous to the expression level of each modification, so relatively high read levels correspond with highly modified small RNAs. Some RNAs have up to 37 times more methylation than the average RNA (Table 1).

In analyzing the effect of ALKB demethylation on identifying m1A modifications, there was no major difference in the overall sequence reads for miRNAs that received the treatment. Most miRNA reads in human cells lie around the 45° line passing through the origin, indicating that the treatment did not have a large impact on identifying methylation (Fig. 5). A similar trend was observed in mouse cells, with the exception of mmu-mir-1937a and mmu-mir-1937b, which had a 3- to 6-fold increase in sequence reads after the ALKB treatment. Upon closer examination, select human miRNAs have significantly higher levels of sequence reads when using the ALKB demethylase treatment (Fig. 6). Small nucleolar RNA (snoRNA) U18B in ME16c cells with 5 minutes of ALKB treatment displayed a greater than 10-fold change in sequence reads, and hsa-mir-21 showed a greater than 3-fold change. Hsa-mir-1826, which only had a 2-fold change in sequence reads, was notable for its large number of overall reads in MDA-231 and MDA-468 cells, going from over 100,000 reads to over 200,000 reads with 5 minutes of treatment. Although the overall trend indicated that demethylation either increases or has no effect on sequence reads for m1A-modified miRNAs, a notable exception was hsa-mir-197, which displayed a 5-fold decrease in reads when using ALKB demethylation. The differential expression of m1A modifications in cancerous and noncancerous breast epithelial cells was observed in various small RNAs (Table 2). Such RNAs either had more sequence reads (fold change greater than 2) or fewer sequence reads (fold change less than 1) in MDA-231 and MDA-468 cells than in ME16c cells, indicating that methylation levels in small RNAs may be linked with breast cancer. tRNAs were also treated with the ALKB demethylase, and the results showed that the treatment significantly increased the level of sequence reads for tRNAs in human and mouse cells. This enrichment indicated that the demethylase successfully removed the modification, allowing for proper reverse transcriptase activity and aiding in the identification of m1A-modified tRNAs. Data points above the 45° line passing through the origin represent tRNAs with a greater number of sequence reads after using ALKB (Fig. 7). Similar trends indicated that in all of the experimental cell lines, there was no significant difference between the 15-minute and 5-minute treatments (Fig. 5 & 7); generally, the direction of any change in sequence reads



Figure 6. Sequence read bedgraphs for a) U18B, b) hsa-mir-21, and c) hsa-mir-1826 in MDA-231 cells, and d) hsa-mir-1826 in MDA-468 cells. In each subgraph, the top graph indicates sequence read levels without the ALKB treatment, and the bottom graph indicates read levels with the treatment.

snoRNA or miRNA   ME16c   MDA-231   MDA-468
hsa-mir-92b       3590
hsa-let-7c        52      490       520
hsa-mir-200c      715     8690      15
hsa-mir-27b       340     1140      900
hsa-mir-30c       3170    7980      8260
U105B             170     1180      440
[remaining read counts, including the values 103,000; 380; 14,200; and 13, could not be assigned to rows]

snoRNA or miRNA   MDA-231 Fold Change   MDA-468 Fold Change
hsa-mir-92b       16.3                  20.7
hsa-let-7c        9.4                   10
hsa-mir-200c      12.2                  0.02
hsa-mir-27b       3.35                  2.65
hsa-mir-30c       2.5                   2.6
[remaining fold changes, including values on the order of 10^-3, could not be assigned to rows]

Table 2. Normalized miRNA sequence reads for normal breast cell line ME16c and breast cancer cell lines MDA-231 and MDA-468. Fold changes were determined by dividing the sequence reads for RNA in the breast cancer cell line by the sequence reads for the RNA in the normal cell line.

remained the same, although the magnitude of the change may have varied between RNAs. Since the majority of tRNAs displayed an increase in sequence reads with the use of ALKB, the statistical significance of the increase was determined by whether or not the binary logarithm of the fold change was greater than 1. Select tRNAs with the most significant fold changes include Arg-ACG-2-1, Pro-AGG-3-1, and Pro-TGG-3-5. In a previous study, similar highly modified tRNAs were discovered in B-cell-derived human cell lines GM05372 and GM12878 (Cozen, 2015). Using the same method, our study identified additional modified tRNAs in cell lines MDA-468, MDA-231, and ME16c, including Ala-AGC-2, Ala-TGC-3, Ala-TGC-5, Leu-TAG-1, and Thr-TGT-4, among others. Alterations between cancerous and normal cells were also observed in tRNA sequence reads, revealing the differential expression of m1A-modified tRNAs in breast cancer (Table 3). It is interesting to note that all 6 Glu-CTC-1 tRNAs showed significantly more methylation in breast cancer cells than in normal breast cells.
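The significance criterion used above, a binary logarithm of the fold change greater than 1 (i.e., more than a doubling of reads after ALKB treatment), can be expressed as a small filter. The tRNA names below are taken from the text, but the read counts are illustrative, not the study's measurements:

```python
import math

# Flag tRNAs whose read counts more than double after ALKB treatment,
# i.e. log2(treated / untreated) > 1, the criterion described above.
# The counts here are illustrative only.

def significant_increases(untreated, treated, log2_cutoff=1.0):
    hits = {}
    for trna, before in untreated.items():
        after = treated.get(trna, 0)
        if before > 0 and after > 0:
            log2_fc = math.log2(after / before)
            if log2_fc > log2_cutoff:
                hits[trna] = round(log2_fc, 2)
    return hits

untreated = {"Arg-ACG-2-1": 200, "Pro-AGG-3-1": 150, "Gly-GCC-1": 400}
treated = {"Arg-ACG-2-1": 1600, "Pro-AGG-3-1": 700, "Gly-GCC-1": 500}

hits = significant_increases(untreated, treated)
print(hits)  # Gly-GCC-1 (1.25-fold) does not pass the log2 > 1 cutoff
```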

Figure 7. tRNA sequence reads in human cell line MDA-468 after 5 minutes (top) and 15 minutes (bottom) of ALKB treatment.

4. Discussion

The identification of highly methylated microRNAs and tRFs prompts further research on understanding the effects of these modifications on RNA activity and gene expression. The modified small RNAs discovered in this study could potentially be regulated by the methyl groups attached to their bases. This form of RNA regulation is especially important because microRNA expression often correlates with risk for disease. A few notable examples of modified RNAs identified in this study include tRF-3007a, a potential biomarker found in the tumors and bodily fluids of bladder cancer patients (Armstrong et al., 2015); hsa-miR-3168, which is significantly downregulated during the transition from human embryonic stem cells to DAZL-expressing cells (Hinton et al., 2014); and hsa-miR-3940-3p, which displays gender-specific expression in patients with osteoarthritis (Kolhe et al., 2017). Future analysis will reveal whether the presence or absence of a modification contributes to the expression level of these RNAs and their corresponding diseases.

Interestingly, in identifying modified RNAs, most of the read levels for RNAs containing the m1A modification were actually lower with the ALKB treatment (Fig. 5). Theoretically, the demethylase should have allowed the modified RNAs to be more easily sequenced, which in turn should have increased the read count. A possible explanation is that the efficacy of the ALKB treatment depends





tRNA        ME16c   MDA-231   MDA-468
Arg-CCT-2   250     1410      5270
Leu-CAA-1   530     3890      5450
Ser-ACT-1   90      920       495
Leu-CAG-2   330     2590      2350
Gln-TTG-1   460     2260      4110
Lys-TTT-3   280     1850      1620
Glu-CTC-2   2640    10,230    18,160
Thr-TGT-2   1090    305       65
Gln-CTG-4   305     44        10

tRNA        MDA-231 Fold Change   MDA-468 Fold Change
Arg-CCT-2   5.64                  21
Leu-CAA-1   7.34                  10.3
Ser-ACT-1   10.2                  5.5
Leu-CAG-2   7.85                  8.4
Gln-TTG-1   4.91                  8.9
Lys-TTT-3   6.61                  5.79
Glu-CTC-2   3.88                  6.88
Thr-TGT-2   0.28                  0.06
Gln-CTG-4   0.14                  0.03

Table 3. Normalized tRNA sequence reads for normal breast cell line ME16c and breast cancer cell lines MDA-231 and MDA-468. Fold changes were determined by dividing the sequence reads for RNA in the breast cancer cell line by the sequence reads for the RNA in the normal cell line.

on the location of the modification; a methyl group located near the beginning of a gene is more likely to block base pairing by reverse transcriptase than one located elsewhere in the gene. Another potential explanation is that, since the data were normalized to the total number of reads, a few RNAs with highly enriched sequence reads might have partially skewed the results. The ALKB treatment revealed an abundance of m1A-modified RNAs that were previously undetected. The data provided evidence for the use of ALKB demethylation, after the modification pulldown and prior to RNA library creation, showing that it allows for more accurate measures of methylation in RNAs. Comparative analysis will be conducted to understand the role of m1A modifications in diseases related to these formerly overlooked modified RNAs. A major finding was that hsa-mir-21, which was mentioned earlier as an RNA that is significantly downregulated in breast cancer, is more methylated than previously described. Without the demethylase treatment, the m1A modifications in hsa-mir-21 were not easily identified, so this discovery elicits interest in determining whether or not the presence of m1A plays a role in inducing breast cancer through the downregulation of hsa-mir-21. Additional studies on the impact of m1A methylation will be conducted on mmu-mir-1937a and mmu-mir-1937b, which have been shown to be related to erectile dysfunction and early diabetic renal injury in mice (Chen, 2009). Further analysis will also be conducted on hsa-mir-197, which defied the overall trend and had fewer sequence reads after using ALKB. This outlier may be indicative of unique chemical properties that prevent hsa-mir-197 from being sequenced after demethylation, potentially by inhibiting binding sites for enzymes such as reverse transcriptase.

In order to identify differences in methylation that may be linked with breast cancer, modification levels were determined for ME16c normal breast epithelial cells, as well as for MDA-231 and MDA-468 breast cancer cells. As mentioned in the results section, a wide variety of snoRNAs, miRNAs, and tRNAs displayed differential expression of m1A modifications in cancerous and noncancerous breast cells. Interestingly, relative to the regular breast cell line, a few miRNAs, such as hsa-mir-100, had more methylation in 1 breast cancer cell line but less methylation in the other. Using modification levels in the noncancerous cells as a baseline, these differences show that the change in methylation levels can go in opposite directions for different cancer cell lines. Thus, miRNA methylation may be 1 factor in differentiating breast cancer cells, a key discovery that may impact personalized drug design and therapy. In hsa-mir-200c, relative to ME16c cells, methylation increased in MDA-231 cells but decreased in MDA-468 cells.
Previous research has shown an inverse relationship between methylation and expression levels of hsa-mir-200c in triple-negative breast cancer cells (Damiano, 2017); thus, based on the epigenetic silencing observed in this study, hsa-mir-200c is expressed less in MDA-231 cells but expressed more in MDA-468 cells. The differential expression of this miRNA may be 1 factor in the development of unique breast cancer cell lines. Our study also showed that hsa-let-7c, which is commonly under-expressed in tumors, is more methylated in both breast cancer cell lines; additionally, hsa-mir-100, which is overexpressed in the metastasis of prostate cancer (Leite et al., 2009), is less methylated in MDA-231 cells. Furthermore, our results showed that hsa-mir-127 is significantly less methylated in breast cancer cells, a finding that may correlate with a previous discovery that hsa-mir-127 is downregulated by greater than 2-fold in breast cancer (Yan et al., 2008). In summary, the localization of modified miRNAs has major implications for the study of the epitranscriptome. These discoveries in breast cancer cells serve as stepping stones for uncovering the role of base modifications in the hidden post-transcriptional regulation of

microRNAs. In the long run, these findings will initiate more mechanistic studies on miRNA regulation, and shed light on the potential of finding therapeutic solutions to breast cancer and other diseases through the manipulation of RNA modifications. This study also provides evidence for the use of ALKB in identifying m1A modifications, and reveals a novel application of the ALKB demethylase in enhancing localization of modifications in miRNAs and snoRNAs. The improvement in screening for the m1A modification will facilitate the study of RNA processing, with future applications in understanding cellular signaling and regulation, as well as genetic diseases including cancer and neurodegeneration (Cozen, 2015). Lastly, the differential expression of m1A modifications, as seen in the small RNAs identified in this study, reveals a novel epitranscriptomic variable that may have large implications for breast cancer.

5. Acknowledgements

I would like to acknowledge my mentor, Dr. Scott Hammond of the UNC Chapel Hill Lineberger Comprehensive Cancer Center, for allowing me to work in his lab and for providing guidance on this research project. I would like to thank Karl Kaufmann and Madison Rackrear for their assistance in the laboratory. I would also like to thank Mr. Robert Gotwals for giving me the opportunity to participate in the Research in Computational Science program, and Dr. Michael Bruno for his guidance during the Summer Research Internship Program.

6. References

Armstrong, D. et al. (2015). MicroRNA Molecular Profiling from Matched Tumor and Bio-fluids in Bladder Cancer.
Calin, G.A. & Croce, C.M. (2006). MicroRNA signatures in human cancers. Nature Reviews Cancer.
Chen, Y. et al. (2009). Abated microRNA-195 expression protected mesangial cells from apoptosis in early diabetic renal injury in mice. J Nephrol.
Chi, K.R. (2017). The RNA Code Comes into Focus. Nature.
Cozen, A., Quartley, E., Holmes, A., Hrabeta-Robinson, E., Phizicky, E. & Lowe, T. (2015). ARM-seq: AlkB-facilitated RNA methylation sequencing reveals a complex landscape of modified tRNA fragments. Nature.
Damiano, V., Brisotto, G., Borgna, S., Gennaro, A., Armellin, M., Perin, T., Guardascione, M., Maestro, R. & Santarosa, M. (2017). Epigenetic silencing of miR-200c in breast cancer is associated with aggressiveness and is modulated by ZEB1. Genes Chromosomes Cancer. 56(2): 147-158.
Dominissini, D. et al. (2013). Transcriptome-wide Mapping of N6-methyladenosine by m6A-seq based on Immunocapturing and Massively Parallel Sequencing. Nature Protocols. 8(1).
Hinton, A. et al. (2014). sRNA-seq Analysis of Human Embryonic Stem Cells and Definitive Endoderm Reveals Differentially Expressed MicroRNAs and Novel IsomiRs with Distinct Targets.
Hutvágner, G. & Zamore, P. (2002). A microRNA in a Multiple-Turnover RNAi Enzyme Complex. Science.
Iorio, M.V. et al. (2005). MicroRNA Gene Expression Deregulation in Human Breast Cancer. Cancer Research.
Kolhe, R. et al. (2017). Gender-specific Differential Expression of Exosomal miRNA in Synovial Fluid of Patients with Osteoarthritis.
Leite, K. et al. (2009). Change in expression of miR-let7c, miR-100, and miR-218 from high grade localized prostate cancer to metastasis. Urologic Oncology.
Lin, H. (2010). Unregulated miR-96 Induces Cell Proliferation in Human Breast Cancer by Downregulating Transcriptional Factor FOXO3a. PLOS ONE.
McGuinness, D. & McGuinness, D.H. (2014). m6A RNA Methylation: The Implications for Health and Disease. Journal of Cancer Science and Clinical Oncology. 1(1).
Modomics: A database of RNA modification pathways.
Saletore, Y. et al. (2012). The Birth of the Epitranscriptome: Deciphering the Function of RNA Modifications. Genome Biology.
Shiekhattar, R. & Gregory, R.I. (2005). MicroRNA Biogenesis and Cancer. Cancer.
UCSC Table Browser. (2004). Karolchik, D., Hinrichs, A.S., Furey, T.S., Roskin, K.M., Sugnet, C.W., Haussler, D. & Kent, W.J. The UCSC Table Browser data retrieval tool. Nucleic Acids Res.
Yan, L.X., Huang, X.F., Shao, Q., Huang, M.Y., Deng, L., Wu, Q.L., Zeng, Y.X. & Shao, J.Y. (2008). MicroRNA miR-21 overexpression in human breast cancer is associated with advanced clinical stage, lymph node metastasis and patient poor prognosis. RNA. 14(11): 2348-2360.

DIFFERENT PIGMENTS, SAME PROTECTION: LICHENS UNDER SIMULATED MARTIAN UV

Madeline Paoletti

Abstract

Lichens, composite organisms of algae and fungi, tolerate extreme desiccation and high-energy radiation. Because lichens can survive in outer space, they can act as eukaryotic model organisms for studying life on Mars. Lichens tolerate ultraviolet (UV) radiation by using adaptive pigments, produced by the fungal symbiont, that reflect harmful UV light; however, it is unknown whether some pigments protect the organism better than others. The goal of this study was to discover whether some pigments protect the lichen thallus from UV radiation better than others. The project used lichen samples from harsh environments in Chile, each exposed to 1800 J/m2 of UV radiation while oxygen levels inside a closed container were measured. A second experiment tested the algal symbiont, which does not contain pigments, to observe its response. The results indicated that UV exposure increased oxygen levels significantly but that UV exposure affected all lichen pigments equally. Differences between oxygen levels of species were noticeable; Ramalina usnea was the most affected by UV while Usnea sp. was the least affected. No specific conclusions could be made from the second experiment. Understanding the adaptive traits of lichens is vital for future astrobiology studies into possible life on the red planet.

1. Introduction

The last 20 years of astrobiology research have shown that many prokaryotic extremophiles can survive simulated and real space exposure, but research on eukaryotic extremophiles is the next critical direction for future studies on the potential of extraterrestrial life and the advancement of complex model organisms (de Vera, 2012). Eukaryotic lichens specifically have been shown to survive in harsh environments, including 10 years submerged in liquid nitrogen and on the side of a space shuttle (Sancho et al., 2008; de la Torre, 2010). A lichen is a symbiotic relationship between a photosynthesizing alga (the photobiont) and a fungus (the mycobiont); this symbiosis allows the organism to sustain extreme conditions, particularly in areas with high UV radiation. Laboratory experiments have also shown their capacity to live in space and other harsh conditions, including metabolic activity even after 10 years of dehydration, freezing temperatures, no sunlight, and immersion in liquid nitrogen (Sancho et al., 2008). Some regions on Earth in which lichens live resemble space due to their extreme temperatures of 38°C to -28.2°C and high UV radiation exposure between wavelengths of 200-400 nm (Sancho et al., 2008).

The ability of lichens to survive in space and on Mars has been tested. Lichens on a 2-week flight onboard the Biopan facility of the European Space Agency survived complete space exposure (de la Torre et al., 2010). Most samples were tested in simulation chambers, but one sample was attached to the outer surface of the space capsule, protected only by a thin textolite cover. Their symbiotic system, consisting of fungal and photosynthetic cells, provided efficient shielding against the hostile conditions of space (de la Torre et al., 2010). Another simulation facility, the Planetary Atmospheres and Surfaces Chamber (PASC) at the Center for Astrobiology in Madrid, Spain, simulated the atmosphere and surface temperature of Mars. The resistance of the lichen Circinaria gyrosa was investigated with a 120-hour exposure to simulated Martian atmosphere, temperature, pressure, and UV conditions. Results showed unaltered photosynthetic performance, demonstrating the high resistance of the lichen photobiont (Sanchez et al., 2012). These space simulation experiments with lichens continue to create new questions about the survival of eukaryotes in Martian conditions.

Based on previous work by de Vera (2004), which studied the ecological potential of germination in Mars conditions, it can be concluded that lichens would survive and photosynthesize for a short period of time on Mars due to symbiotic adaptations including genetics, morphology, and pigments. One adaptation is genetic recombination through DNA transfer between the two organisms, ensuring cohesive repair of double-strand breaks after radiation damage (de Vera et al., 2004). The mycobiont also contains a mucilage layer and metabolites to resist radiation damage and desiccation (de Vera et al., 2008). Inside the layer are metabolites, such as parietin and carotene, that screen UV light and protect the lichen by changing the pigmentation of the thallus to a dark orange (de Vera et al., 2008). Previous studies have shown that the production of parietin in lichens such as Xanthoria elegans underlies color change following UV exposure (de Vera et al., 2008). The array of colors caused by pigmentation found in lichens

ranges from greens, yellows, reds, oranges, and browns to white, grey, and even black depending on the compound (Table 1). The most common pigment is usnic acid, a yellow pigment (Kohlhardt-Floeher et al., 2010). These pigments appear to protect the underlying photobionts from excessive UV radiation. They act as a 'sunscreen' for lichen, and the amount produced can change based on the levels of exposure and light (Brodo, 2001).

Pigment Name              Color
Usnic Acid                Pale Green/Gold
Fumarprotocetraric Acid   Brown/Green
Lobaric Acid              Light Gray/Blue
Diffraic Acid             Pale Yellow

Table 1. Common lichen pigment names and color (Lutzoni Lichen Catalog, 2017).

Figure 1. Las Lomitas, Pan de Azúcar collection site photos. Photo credits to Reinaldo Vargas.

Lichen pigments show variations in concentrations based on location because thalli growing in locations with more light exposure have higher concentrations of filtering compounds than thalli in the shade (Brodo, 2001). Even with these adaptations and potential in space, little is still known about lichen pigment protection and variation between types. The hypothesis of this project is that some pigments will protect the lichen from UV damage better than others, with a significant difference in levels of pigment protection. The answer to this question will help fully solidify the potential of lichen as a new space study organism to model eukaryotic growth and adaptations in astrobiology research.

Figure 2. Geographic layout of sample collection sites.

2. Materials and Methods

2.1 Biological Material

Samples of 8 different lichen species were obtained by Duke University's Lutzoni lab from different regions across Chile (Fig. 1 and Fig. 2); they were labeled based on location and pigment type (Table 2). The separate samples were stored in dry paper bags inside a dark drawer at room temperature for 3 months before experimentation.

2.2 Culture Conditions

The photobiont was cultured on Bold Basal Medium (BBM) at pH 6, at 20-21°C with a light/dark periodicity of 16 hours light and 8 hours dark. For each sample, 3 replicates were used. The protocol for isolating the photobiont cells was adapted from Yamamoto (2002). The cortex of the lichen thallus was removed from the sample with a scalpel, and the sample was crushed using a mortar and pestle to separate the remaining mycobiont. To homogenize the sample, it was diluted with distilled water and filtered through paper twice to remove mycobiont pieces. The total end volume was 1 ml, which was poured onto a prepared BBM plate (Fig. 3). One plate was made for each sample. After 2 weeks of growth, plates were re-cultured as necessary.

Figure 3. Step-by-step pictures of photobiont isolation. A. Removing cortex and mycobiont of thallus. B. Diluting sample. C. Transferring algae to plate. Conditions of 20-21°C with 16 h light/8 h dark.

2.3 Martian UV Simulation

The lichen thallus pieces were exposed to UV light with wavelengths between 200-400 nm at a dose of ~1800 J/m2, based on previous literature values simulating the levels experienced on Mars (de Vera et al., 2008). A high-intensity UV lamp was placed to directly target each 0.5 g

[Table 2 could not be reconstructed from this copy. The 8 samples include Protousnea magellanica (PM) from Mocho, Chile (1440 m); samples from Malaca, Chile (1172 m) and Christmas Crater (1297 m and 1357 m); Cladonia chlorophaea from Angol, Chile (1300 m); Stereocaulon sp. from Christmas Crater; and Usnea sp., Ramalina usneae, and a third sample from Pan de Azúcar in the Atacama region (780 m; collected 3/9/17). Listed pigment types include fumarprotocetraric (Fumo) and lobaric acid; regional yearly average temperatures range from 16°C to 18°C.]

Table 2. Comparison of lichen samples' pigment, location, elevation, and latitude/longitude. *Genus/species is unknown (Lutzoni Lichen Catalog, 2017).

sample of the lichen thallus. For this experiment, each lichen species had 3 replicates. The time required to deliver the correct dose was found with a calculation; a Model UVGL-55 UV meter was used to measure the lamp's irradiance, which was 50 μW/cm2 (100 μW/cm2 corresponds to a dose rate of 1 J/m2 per second):

(dose per second)(time in seconds) = total dose
(0.5 J/m2 per second)(t) = 1800 J/m2
t = 3600 s

Based on the total dose necessary and the rate of the UV lamp, each sample was exposed for 1 hour. In the second experiment, the cultured photobionts were also exposed to UV radiation, but the dose was lowered to 240 J/m2, and a 1 cm2 area of each culture plate was used per replicate of species exposed under the UV lamp.

2.4 Oxygen Measurements

Lichen thallus pieces weighing approximately 0.5 g were prepared from samples and had their oxygen levels measured during exposure by a Vernier O2 sensor. This acted as an indicator of maintaining normal physiological function under UV stress. For experiment 1, the lichen thalli were placed inside a 324 cm3 closed container with no direct light, and oxygen levels were measured for 1 hour in this setup while the samples were exposed to UV light. Irradiation took place behind a glass sheet for safety.
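The exposure-time calculation above can be checked programmatically; the only conversion needed is that 100 μW/cm2 of irradiance delivers 1 J/m2 per second:

```python
# Compute UV exposure time from lamp irradiance and target dose, as in
# the calculation above: 50 uW/cm^2 corresponds to 0.5 J/m^2 per second.

def exposure_time_s(irradiance_uw_per_cm2, target_dose_j_per_m2):
    """Seconds of exposure needed to reach the target dose."""
    # 1 uW/cm^2 = 0.01 W/m^2 = 0.01 J/m^2 per second.
    dose_rate = irradiance_uw_per_cm2 * 0.01
    return target_dose_j_per_m2 / dose_rate

t = exposure_time_s(50, 1800)   # the Martian-UV dose used in experiment 1
print(t, "seconds =", t / 60, "minutes")
```

With the 50 μW/cm2 lamp, reaching 1800 J/m2 takes 3600 s, matching the 1-hour exposure used in the experiment.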

Figure 4. Diagram of the experiment 1 UV treatment setup.

Control samples received no UV treatment, but still had their oxygen levels measured over 1 hour in a closed container (Fig. 4). For the second experiment, algae were exposed to UV for 8 minutes and then had oxygen levels measured for 8 minutes inside a 70.7 cm3 container with no direct lighting. Control algae had no UV treatment but had oxygen levels recorded over the same 8-minute period. Data were recorded every second, and the change in oxygen over time was calculated as the final reading minus the initial reading.

3. Results

3.1 Experiment 1: Change in Oxygen Levels of Lichen Thalli

Figure 5. Graph of Parmeliaceae lichen oxygen levels during UV exposure.

Figure 6. Percent oxygen change of lichen thalli samples based on pigment type after exposure (top) compared to controls (bottom). Three replicates were used for each sample collected. Orange lines represent the baseline change in oxygen for the controls when no thallus was present, and the grey line represents the negative control for oxygen levels under the UV lamp with no sample. Letters represent significant differences by Tukey's HSD (bars with the same letter are not significantly different). Graphed with one standard error.

After the 1-hour exposure time, corresponding to a total dose of ~1800 J/m2, positive increases in oxygen gas were detectable. Data were recorded every second for 1 hour. One graph was created per species, showing both treatment and control replications. In the control, the oxygen levels decreased over 1 hour, while UV-treated lichens increased their oxygen output. From the graph of oxygen gas vs. time (Fig. 5), the change in oxygen level, measured from the initial reading to readings at t > 2000 s, was treated as the percent change in O2. The change values were averaged within a sample and graphed. The graphing and standard error values indicated that there was a difference in lichen response based on UV treatment but no significant difference between pigment groups (Fig. 6). Only the lobaric acid groups had no difference among treatments, but they also had no discernible deviation from the other pigments. Change-in-oxygen values were analyzed in JMP 10 statistical software with ANOVA (JMP Statistical Analysis Software, 2017). The degrees of freedom, sum of squares, and ANOVA showed that the collected data have a pattern of statistical significance among UV treatments (Table 3). When analyzing the change in oxygen vs. species, there was statistical significance in both UV treatment and species type (Fig. 7). The sample most affected by the UV exposure was Ramalina usnea, which varies significantly from

Source         DF   Sum of Squares   F Ratio   P-Value
Model           7   0.1405           9.6206    <.0001
UV              1   0.0900           43.1310   <.0001
Pigment         3   0.0015           0.2472    NS
UV x Pigment    3   0.0046           0.7299    NS
Error          40   0.0835
Total          47   0.2240

Table 3. ANOVA of lichen samples organized by pigment, UV, and UV x pigment.

Source         DF   Sum of Squares   F Ratio   P-Value
Model          11   0.1830           14.6166   <.0001
UV              1   0.1136           88.8053   <.0001
Species         5   0.0049           0.8595    NS
UV x Species    5   0.0437           7.6807    <.0001
Error          36   0.0410
Total          47   0.2240

Table 4. ANOVA of lichen samples by UV and species.
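As a simplified, dependency-free illustration of the UV main effect reported in Tables 3 and 4, the one-way ANOVA F ratio can be computed by hand. The full analysis used JMP's two-way UV x pigment factorial model, and the change-in-oxygen values below are hypothetical:

```python
# A dependency-free sketch of the one-way ANOVA F ratio for the UV main
# effect. The study's actual analysis was a two-way factorial in JMP 10;
# the delta-O2 values here are made up for illustration.

def f_ratio(groups):
    """One-way ANOVA F ratio across a list of sample groups."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    # Between-group sum of squares: group sizes times squared deviations
    # of group means from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)   # model mean square
    ms_within = ss_within / (n - k)     # error mean square
    return ms_between / ms_within

uv_treated = [0.12, 0.15, 0.10, 0.14]    # hypothetical percent O2 change
controls = [-0.03, -0.05, -0.02, -0.04]

print(f_ratio([uv_treated, controls]))   # large F: UV effect dominates
```

A large F ratio, as in the UV rows of Tables 3 and 4, means the variation between treatment groups dwarfs the variation within them.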

Figure 7. Full comparison of lichen treatments and species based on standard error significance. Orange line represents the change in oxygen with no thalli in the closed container (with UV for treatment and without for control), and the grey line represents the negative control for oxygen levels under the UV lamp with no sample. Blue bars are treatment with UV and yellow bars are control. Letters represent significant differences by Tukey's HSD (bars with the same letter are not significantly different). Graphed with one standard error.

all other collected groups, even those with the same pigment. Both Usnea sp. and Stereocaulon sp. showed no difference in oxygen levels when exposed to UV radiation in comparison to their controls, but the other species did experience a significant effect of UV light. A side-by-side analysis of the species data further highlights the differences between Ramalina usnea, Usnea sp., and Stereocaulon sp. (Fig. 8). Using the same ANOVA analysis with UV crossed with pigmentation, UV treatment and species had significant values along with UV treatment alone. The degrees of freedom, sum of squares, and F ratio confirmed the patterns in the data (Table 4).

Figure 8. Percent oxygen change of algae samples based on pigment type after exposure compared to controls. Three replicates were used for each sample collected. Letters represent the significance of each group (bars with the same letter are not significantly different). Orange line represents closed-container oxygen levels without algae units. Graphed with one standard error.

3.2 Experiment 2: Change in Oxygen in Photobiont

Cultured algae with 8 minutes of UV exposure (~242 J/m2) showed no significant difference among UV treatment or pigmentation (Fig. 9). While there is a baseline difference in the diffraic and lobaric acid controls, no significant differences in pigments were detected. The same ANOVA analysis performed for the lichens was done for the algae cultures and showed no statistical significance between UV-treated samples. The degrees of freedom, sum of squares, and F ratio indicate no significant differences among collected samples (Table 5). When the algae data are organized by species, there

Source       DF   Sum of Squares   F Ratio   P-value
Model         7           0.0041    2.5659    0.0278
UV            1           0.0002    0.7767    NS
Pigment       3           0.0011    1.5372    NS
UV+Pigment    3           0.0029    4.2557    0.0106
Error        40           0.0092         -         -
Total        47           0.0133         -         -

Table 5. Statistics of Algae Samples UV by Pigments with ANOVA.

Source       DF   Sum of Squares   F Ratio   P-value
Model        11           0.0050    1.9514    NS
UV            1           0.0001    0.2166    NS
Species       5           0.0017    1.4631    NS
UV+Species    5           0.0031    2.7144    0.0351
Error        36           0.0083         -         -
Total        47           0.0133         -         -

Table 6. Statistics of Algae Samples UV by Species with ANOVA.
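The F ratios in Tables 5 and 6 follow directly from the reported sums of squares and degrees of freedom. A minimal sketch, using the Table 6 interaction term (recomputed values differ slightly from those reported because the published sums of squares are rounded):

```python
def f_ratio(ss_effect, df_effect, ss_error, df_error):
    """F statistic: mean square of the effect over the mean square of the error."""
    ms_effect = ss_effect / df_effect
    ms_error = ss_error / df_error
    return ms_effect / ms_error

# UV x Species interaction from Table 6 (SS and df as reported above)
f_interaction = f_ratio(0.0031, 5, 0.0083, 36)
print(round(f_interaction, 2))  # ~2.69, close to the reported 2.7144
```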

Figure 9. Percent oxygen change of algae samples based on species after exposure compared to controls. Three replicates were used for each sample collected. Orange line represents closed container oxygen levels without algae units. Graphed with one standard error.

were no significant differences. The change in oxygen over 8 minutes showed no variation among different lichens or UV treatments. The statistical least squares fit of UV crossed with species also indicated no significance in the collected data. The F ratio, degrees of freedom, and sum of squares detect no variation in any of the treatments or algae in the experiment (Table 6).

4. Discussion

Lichens are promising organisms for astrobiological experiments because they survive in the most extreme terrestrial habitats. The Martian atmosphere is roughly 100 times thinner than Earth's (a surface pressure of about 600 Pa compared to Earth's 101,325 Pa) and is made up of 95% CO2, as opposed to Earth's 78% N2; a thin atmosphere

and a larger distance from the sun also make Mars colder (from -125°C to 20°C) and allow more UV radiation (300–400 nm) to penetrate the atmosphere (Sharp, 2010). These intense conditions indicate that most terrestrial life could not survive on Mars; however, extremophiles, including lichens, have survived in similar environments (Sancho et al., 2008). This project tested the effect of pigments as protection against UV radiation. The results in the first experiment indicated no discernible difference in pigment protection of lichens in terms of oxygen production. The four pigment types (diffraic, usnic, fumarprotocetraric, and lobaric acid) had similar trends in relation to UV treatment. All lichens exposed to UV radiation had a positive percent change in oxygen levels over the 1 hour exposure period. In contrast, the control treatment lichens had decreased oxygen levels. This shows that lichens have a response to Martian UV levels. The results therefore imply that pigments provide the same response, even among different types. The oxygen levels may have risen for exposed lichens because of oxidative stress involving reactive oxygen species (ROS). ROS are formed by normal metabolic activities such as respiration and photosynthesis, but their production is enhanced during stresses like UV exposure. When this occurs, oxidative stress creates an imbalance between the reactive oxygen and the biological system's ability to repair the UV damage. This uptake of oxygen can cause toxic effects through the production of peroxides and free radicals that damage proteins, lipids, and DNA (Weissman, Garty, & Hochman, 2005). The actual cause of the rise in oxygen levels in treated lichens cannot be determined without future research into the cellular metabolic activities of the symbionts.

When the data were sorted by species, there were significant responses. UV treatment trends were the same as when organized by pigment type, but there was a significant difference between Ramalina usnea and the other samples. The lichen groups Usnea and Stereocaulon responded the least to the UV radiation, with no significant difference from their controls, indicating that these species maintained normal oxygen levels better than the other samples. This means that, of the tested samples, Usnea and Stereocaulon are the most adaptable to Martian UV. When comparing Ramalina usnea and Usnea, no specific patterns emerge from the information known about the species (Table 7). Besides pigment type, the high responder Ramalina usnea and the low responder Stereocaulon evidently differ in elevation, latitude/longitude, and average temperature. However, these variables cannot fully explain the differences, as the other low responder, Usnea, shares the same traits as Ramalina usnea. Overall, experiment 1 indicated that differences in lichen protection from UV treatment are more species-based than pigment-based, but the underlying reasons for this behavior are not completely explained by temperature, location, and elevation.

Lichen Species   UV Response         Pigment Group   Location              Elevation   Latitude/Longitude          Average Temperature of Region
Ramalina usnea   Highest Responder   Diffraic        Foggy Desert, Chile   780 m       26°01.116 S, 70°36.606 W    31°C
Usnea sp.        Lowest Responder    -               Foggy Desert, Chile   780 m       26°01.116 S, 70°36.606 W    31°C

Table 7. Comparison of the greatest/least responders in Experiment 1.

No patterns emerged in the second experiment with algae cultures. There was no significant difference among UV treatment, species, and pigments. This indicates that the algae had no detectable relation to the oxygen levels exhibited in the whole thallus in experiment 1. However, since the photobiont provides the photosynthesis for the entire thallus, it would make sense to find a significant difference; a closer look at the ANOVA analysis shows that the experiment 2 results are not conclusive. This research is unique in the lichenology and astrobiology fields because it compares pigment differences in relation to UV protection, while other experiments have focused on the extent of lichen survivability (Sánchez et al., 2012). While previous studies have used pigments as

an explanation for lichens' extremophile nature (de Vera et al., 2008), this is the first to compare different pigments and species in their response to UV damage and oxygen levels. Past research projects all tested the extent of survivability of lichens and found individual species suitable as astrobiology model organisms (Sancho et al., 2008). In contrast, this research focused on the differences in the thallus and algae unit based on pigmentation, tested the differences in survivability of different lichens, and proposed multiple lichens that could be used as model organisms. Past literature supported the assumptions that the symbiotic state enhances survivability over the separated symbionts and that lichens can be used as model space organisms. This project created new knowledge in the field by studying the nature and specifics of pigment protection and adaptation, finding that there was no difference between the levels of protection each pigment gave the lichens, while individual taxa varied in responses unrelated to pigmentation or location. The lack of pigment differences among several lichens further illustrates the nature of lichen adaptations. Ultimately, this reveals that several parameters must be considered when choosing model organisms in exobiology studies.

5. Conclusion

The current study demonstrates that different pigments respond similarly to the same UV exposure, but that individual species vary in response. The separated algae unit had no differences among exposed pigments or among species. However, the data for the photobiont are inconclusive according to the statistical analysis. Possible sources of error in this experiment include the experimental unit size of the algae, which may have been too small for the oxygen sensor to detect accurately in the closed system. The closed container where oxygen measurements were made could also have been too large relative to the algae unit.
Overall, the hypothesis that there would be differences in UV resistance among pigments could not be accepted, though interesting trends in individual species responses were identified. Future experiments should test a wider array of lichens from various locations and with other pigments, extend the exposure time, or vary the dose of UV radiation to observe differences in pigment responses. Questions that remain to be answered include why some species responded differently (since this could not be fully explained by environmental conditions) and why oxygen levels in a closed system increased under exposure. Future studies could analyze the differences among species on a molecular/genetic level, while a re-trial of experiment 2 with a larger algae unit could be performed. With more time, an analysis of the molecular response to the UV would have been conducted. From the work presented, it can be concluded that different pigments have the same response to UV, that differences are

more apparent between species, and that the lichen species Usnea and Stereocaulon sp. may be suitable model organisms for Mars and exobiology research.

6. Acknowledgments

This work was supported by the Glaxo Endowment to NCSSM. Thanks to Dr. Sheck and Dr. Monahan for mentorship through the research; Emma Garval and Ana Sofia Uzsoy for providing peer support and lab instruction; Dr. Muth for allowing this project to occupy some of her lab space; and Dr. Bullard for help with statistical analysis. Also, special thanks to Dr. Lutzoni from Duke University for collecting the lichen samples used in this research.

7. References

Bhat, S., Dudani, S., Chandran, M., & Ramachandra, T. (n.d.). Lichens: General Characteristics. Retrieved February 02, 2018.

Brodo, I. M. (2001). Lichens of North America. Retrieved February 02, 2018.

Jänchen, J., Bauermeister, A., Feyh, N., Vera, J. D., Rettberg, P., Flemming, H., & Szewzyk, U. (2014). Water retention of selected microorganisms and Martian soil simulants under close to Martian environmental conditions. Planetary and Space Science, 98, 163-168.

Kohlhardt-Floehr, C., Boehm, F., Troppens, S., Lademann, J., & Truscott, T. G. (2010). Prooxidant and antioxidant behaviour of usnic acid from lichens under UVB-light irradiation – Studies on human cells. Journal of Photochemistry and Photobiology B: Biology, 101(1), 97-102.

Vera, J. D., Horneck, G., Rettberg, P., & Ott, S. (2004). The potential of the lichen symbiosis to cope with the extreme conditions of outer space II: germination capacity of lichen ascospores in response to simulated space conditions. Advances in Space Research, 33(8), 1236-1243.

Vera, J. D., Rettberg, P., & Ott, S. (2008). Life at the Limits: Capacities of Isolated and Cultured Lichen Symbionts to Resist Extreme Environmental Stresses. Origins of Life and Evolution of Biospheres, 38(5), 457-468.

Vera, J. D., Möhlmann, D., Butina, F., Lorek, A., Wernecke, R., & Ott, S. (2010). Survival Potential and Photosynthetic Activity of Lichens Under Mars-Like Conditions: A Laboratory Study. Astrobiology, 10(2), 215-227.

Vera, J. D. (2012). Lichens as survivors in space and on Mars. Fungal Ecology, 5(4), 472-479.

Vera, J. D., Schulze-Makuch, D., Khan, A., Lorek, A., Koncz, A., Möhlmann, D., & Spohn, T. (2014). Adaptation of an Antarctic lichen to Martian niche conditions can occur within 34 days. Planetary and Space Science, 98, 182-190.

Weissman, L., Garty, J., & Hochman, A. (2005). Rehydration of the Lichen Ramalina lacera Results in Production of Reactive Oxygen Species and Nitric Oxide and a Decrease in Antioxidants. Applied and Environmental Microbiology, 71(4), 2121-2129.

Wierzchos, J., Davila, A. F., et al. (2013). Ignimbrite as a substrate for endolithic life in the hyper-arid Atacama Desert: Implications for the search for life on Mars. Icarus, 224(2), 334-346.

Yamamoto, Y., Kinoshita, Y., & Yoshimura, I. (2002). Culture of Thallus Fragments and Redifferentiation of Lichens. Protocols in Lichenology, 34-46.

Lutzoni Lichen Catalog. (2017). Duke University Lutzoni Lab.

Sancho, L. G., Torre, R. D., & Pintado, A. (2008). Lichens, new and promising material from experiments in astrobiology. Fungal Biology Reviews, 22(3-4), 103-109.

Sánchez, F., Mateo-Martí, E., et al. (2012). The resistance of the lichen Circinaria gyrosa (nom. provis.) towards simulated Mars conditions—a model test for the survival capacity of an eukaryotic extremophile. Planetary and Space Science, 72(1), 102-110.

Torre, R. D., Sancho, L. G., et al. (2010). Survival of lichens and bacteria exposed to outer space conditions – Results of the Lithopanspermia experiments. Icarus, 208(2), 735-748.

HARMFUL ALGAL GROWTH SUPPRESSED BY ALLELOPATHY REGARDLESS OF EXCESS PHOSPHORUS

Elizabeth Farmer

Abstract

Harmful algal blooms (HAB) plague eutrophic waters and are often caused by excess anthropogenic nutrients, specifically phosphorus. Allelopathy, a defense mechanism in which an organism excretes harmful chemicals, has been previously identified as a potential biological control method for HABs. The effect of varying nutrient levels on this allelopathy, however, has not been investigated. This study examined whether the allelopathy of the cyanobacterium Microcystis aeruginosa towards the green HAB alga Chlorella was impacted by varying phosphorus levels. This was tested by measuring the growth rate and final cell density of Chlorella under high or low phosphorus conditions and 1 of the allelopathy treatments. The allelopathy treatments were: a negative control, live M. aeruginosa, or M. aeruginosa filtrate. Chlorella growth was significantly inhibited by phosphorus limitation, but further inhibited by the filtrate or live M. aeruginosa, even in the high phosphorus treatment. The results indicated that the effect of the allelopathy was stronger than that of the high phosphorus treatment and that the allelopathy could override the benefit of eutrophic growing conditions. This project demonstrates the effectiveness of allelopathy for HAB suppression in any nutrient conditions, establishing a promising control for limiting the destructive and costly impacts of HABs.

1. Introduction

Over the past several decades, the frequency and duration of harmful algal blooms (HAB) have increased dramatically (Anderson, 2009; Heisler et al., 2008). This has led to increased economic losses, especially in commercial fisheries and tourism industries (Anderson, 2009), and severe damage to wildlife (Ryan et al., 2017). Individual blooms can result in damages exceeding $1 billion in the US, depending on the duration and intensity of the bloom (Anderson, 2009).
However, the average economic loss for the United States from HABs is $100 million per year (in 2012 USD): 45% from public health costs, 37% from commercial fisheries, 13% from tourism and recreation losses, and 4% from monitoring and managing HABs (Davidson et al., 2014).

Harmful algal bloom (HAB) describes an event in which a particular species of alga experiences a rapid increase in biomass, or dominates the community in which it lives, subsequently harming other species in the community (Anderson, 2009). HABs can be caused by many different types of both freshwater and marine photosynthetic organisms, including dinoflagellates, diatoms, and cyanobacteria. HABs can also be separated into 2 classes: high biomass blooms and low biomass blooms. High biomass blooms are seldom toxic but are harmful because they cause oxygen depletion in the benthic zone (the sediment and subsurface layers of a body of water) through respiration and the bacterial decomposition of the bloom when it sinks. High biomass blooms can also clog the gills of both farmed and wild fish as well as decrease the amount of light that reaches the benthic zone (Davidson et al., 2014;

Anderson, 2009). Low biomass blooms are characterized as blooms with a few hundred or thousand cells per liter (Davidson et al., 2011). They are more commonly associated with toxic algal blooms, creating biotoxins that can become concentrated in filter feeders and transferred up the food web (Davidson et al., 2014). Anthropogenic nutrient loading, the release of excessive nutrients from human activities, is known to be a major cause of HABs (Davidson et al., 2014). Fertilizer runoff, air pollution, sewage, animal waste, ballast water discharge, and coastal aquaculture have all been linked to an increase in HABs (Anderson et al., 2008; Davidson et al., 2014). In freshwater ecosystems, phosphorus is most often the limiting nutrient in primary production, and algal growth is often proportional to phosphorus concentrations (Schindler, 1977; Steinman and Duhamel, 2017). Therefore, increasing phosphorus concentrations through anthropogenic nutrient loading allows for greater algal productivity and growth, potentially leading to a bloom event. Nutrient loading can not only lead to an increase in occurrences of HABs; because different microalgae have different physiological adaptations, it can also alter the composition of marine and freshwater communities as nutrient input increases (Heisler et al., 2008). Changing the composition of an algal community disrupts the homeostasis that has been established in the community, affecting the biodiversity of algae in the area and allowing greater proliferation of harmful algae. Because different species of microalgae require different niches, it is difficult to pinpoint a universal cause of algal blooms (Ulloa et al., 2017). The lack of a universal

cause of HABs means that the prevention of HABs has to be customized to each area (Anderson, 2009). That being said, a relatively simple and effective preventive solution to HABs is to limit nutrient input into watersheds. This is done by enforcing sewage reduction, decreasing the amount of ballast water discharge, and generally reducing the amount of nutrient runoff into watersheds (Anderson, 2009; Heisler et al., 2008). However, reversing the nutrient balance from its current altered state and reestablishing the original dynamics requires a very long time (Heisler et al., 2008). It is also possible to use biological control strategies to suppress the rapid increase of biomass and the production of toxins. By taking advantage of the defensive chemicals secreted by beneficial algae, a strategy known as allelopathy, an HAB may be controlled. Allelopathy is a chemical defense mechanism in which organisms release chemical compounds to deter or inhibit their competitors. In a study by Wang et al. (2007), it was determined that the fresh tissue and dry powder of 3 different types of macroalgae (Ulva linza, Corallina pilulifera, and Sargassum thunbergii) exhibited allelopathic effects on the red tide microalga Prorocentrum donghaiense, inhibiting its growth. The capability to produce allelopathic chemicals for use against microalgae was further supported by Nan et al. (2008) in their study of Ulva lactuca, a type of macroalga, and its effects on the marine HAB microalgae species Heterosigma akashiwo, Skeletonema costatum, and Alexandrium tamarense. Nan et al. (2008) found that U. lactuca can use allelopathy to compete with other algae by reducing their biomass, with the allelopathic effects strongest in co-cultures of U. lactuca and microalgae and in microalgae cultures with dried U. lactuca added. However, allelopathy is not exclusive to U. lactuca.
Microcystis aeruginosa, a freshwater strain of cyanobacteria, is also capable of allelopathy against harmful, bloom-forming algae and cyanobacteria (Wang et al., 2017). The study found that M. aeruginosa most effectively inhibits the growth of common green algae, such as Chlorella, during the stationary and exponential growth phases. What has not been researched, however, is whether the effectiveness of the allelopathy of M. aeruginosa and other types of algae changes under different nutrient conditions, specifically phosphates. This knowledge would help establish whether allelopathy can be used as a preventative method for harmful algal blooms in waters that are in danger of eutrophication, decreasing the chance of a bloom event if the water were subjected to excess nutrient input.

2. Materials and Methods

This experiment had 2 allelopathy treatment levels plus a negative control (containing no allelopathic algae), and 2 phosphate dosage levels (Fig. 1). The sample size for each treatment was 3 experimental units. The first allelopathy

treatment was a co-culture of the freshwater cyanobacterium, Microcystis aeruginosa, with the microalga Chlorella. The co-culture tested the effect of the allelopathy of M. aeruginosa on the growth rate and ending cell density of Chlorella, but it is impossible to rule out the effects of competition for light and nutrients. The second allelopathy treatment was the addition of the filtrate of M. aeruginosa cultures to Chlorella cultures. The filtrate treatment tested specifically for the effect of the allelopathic chemicals that M. aeruginosa excretes and distinguished the inhibitory effect of the general competition in the co-cultures from allelopathy. These are the standard treatments for testing the allelopathy of different types of algae (Accoroni et al., 2016; Nan et al., 2016; Tang and Gobler, 2011). The negative control was a pure culture of Chlorella. 1 experimental unit was defined as a 200 mL total (including the filtrate or addition of M. aeruginosa cells) culture of Chlorella in a 500 mL flask.

Figure 1. Experimental design showing the allelopathy treatments, phosphorus treatments, duration of data collection, and sample size.

The co-culture treatment was used to test the inhibiting effect of M. aeruginosa, not only through allelopathy but also through the competition for resources and light that occurs in co-cultures. This allelopathy treatment most closely modeled the effect of M. aeruginosa in actual environments because competition would be found in real ecosystems. The filtrate treatment tested specifically the allelopathy of M. aeruginosa against Chlorella through allelochemicals secreted by the algae that remain in the filtrate after the algal cells have been removed from the culture. This was designed to test the efficacy of the allelochemicals secreted by M. aeruginosa in nutrient-limiting environments. In addition to the allelopathy treatments, there were also 2 phosphorus treatments. The first treatment was a high phosphorus treatment that promoted algal growth

and simulated eutrophic conditions in which Chlorella would bloom. The phosphorus concentration of the high phosphorus treatment was 1.78×10⁻³ M. This is the phosphorus concentration of Bold's Basal Medium, a medium in which freshwater algae thrive because there is an excess of nutrients. The concentration of the low phosphorus treatment was 8.93×10⁻⁴ M. Because the concentration of the low phosphorus treatment is half that of the high phosphorus treatment, the available nutrients are much more limited, therefore limiting algal growth. Phosphorus was chosen as the varying nutrient because it is most often the limiting nutrient in aquatic ecosystems and algal growth is often proportional to phosphorus levels (Schindler, 1977). The low phosphorus treatment was chosen so that the effects of the different allelopathy treatments could be examined in a nutrient-limited environment to see if the effects change.

2.1 - Algal Culture Preparation

The M. aeruginosa and the Chlorella were obtained from Carolina Biological Supply Company. M. aeruginosa is a cyanobacterium and Chlorella is a microalga. The microalgae cultures were prepared by inoculating 200 mL of the proper growth medium with enough of the stock culture (6.4 mL) such that there were 30 cells/µL total in a 500 mL flask (Accoroni et al., 2016). This is a pre-bloom density; therefore, the research tested whether M. aeruginosa can prevent HABs, not whether it can control them once they occur. 2 types of media were prepared for the algal cultures. The first was Bold's Basal Medium (Culture Collection of Algae and Protozoa, 2017), which was used for all of the cultures within the high phosphorus treatment group, due to the high level of phosphorus (1.78×10⁻³ M) included in the recipe to promote algal growth. The second type of media was a modified Bold's Basal Medium containing only half of the phosphorus normally included in this medium (8.93×10⁻⁴ M).
This simulated nutrient-limiting conditions similar to those found in waters that have not yet experienced eutrophication or are not very eutrophic, because the phosphorus concentration was decreased by half.

2.2 - Addition of Growth Medium Filtrate

For the co-culture of Chlorella and M. aeruginosa, after 200 mL of growth media were inoculated with enough stock culture such that there were 30 cells/µL in the flask, the cultures were inoculated with an additional 30 cells/µL of M. aeruginosa cells. This target cell density was achieved by manually counting the cell density of a 1 mL sample of the stock culture of algae using a Sedgwick-Rafter counting chamber and performing the necessary dilutions of the stock culture. For the filtrate treatment, a 10 mL sample of M. aeruginosa was centrifuged and the supernatant was filtered through a 25 mm GF/F (ultrafine filter with retention down to 0.7 micrometers in liquids) Whatman filter. The 10 mL were then added to the 200 mL Chlorella cultures requiring this

algal treatment. The growth medium filtrate contained the allelochemicals secreted from M. aeruginosa during its growth, which then inhibited the growth of the Chlorella without the confounding presence of the M. aeruginosa cells.

2.3 - Growing Conditions and Data Collection

All cultures were kept inside a Percival growth chamber on a 16 hour light/8 hour dark cycle, which is the standard convention for growing algae in summer conditions. The Percival was kept at 21°C for optimal growing conditions (National Center for Marine Algae and Microbiota, 2017). The algae were grown under cool fluorescent lights with a distance of about 30 cm between the bulb and the flask (Tang and Gobler, 2011; Wang et al., 2007; Nan et al., 2008). The placement within the Percival was randomized and the locations of the flasks within the Percival were rotated every 2 days. For all treatments, every day for 7 days, 1 mL of each experimental unit was removed and placed onto a Sedgwick-Rafter counting chamber to count the cell densities of each treatment (Accoroni et al., 2016). When the cell density exceeded 100 cells/µL, the 1 mL samples from the culture were diluted before they were measured, and the counts were then adjusted to calculate the undiluted cell density. 10 µL within the Sedgwick-Rafter chamber (10 squares) were counted and averaged to find the cell density per treatment. The cell density from each experimental unit was averaged with the cell densities of the other units identical in treatment to create 6 growth curves, 1 for each treatment combination. The 2 response variables were the cell density of the Chlorella at the end of the 7th and final day of data collection and the slope of the growth of each flask. JMP 10 was used for statistical analysis, specifically to perform an ANOVA and LS Means Differences Tukey HSD tests.
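The counting arithmetic described above (average over 10 counted 1 µL squares, scaled by any dilution applied before counting) can be sketched as follows; the counts are illustrative placeholders, not measured values:

```python
def cells_per_ul(square_counts, dilution_factor=1):
    """Average cell count over counted 1-uL Sedgwick-Rafter squares,
    scaled back up by any dilution applied before counting."""
    mean_per_square = sum(square_counts) / len(square_counts)  # cells per counted uL
    return mean_per_square * dilution_factor

# Hypothetical counts from 10 squares of a sample diluted 1:10
counts = [14, 11, 12, 15, 13, 12, 14, 10, 13, 12]
density = cells_per_ul(counts, dilution_factor=10)
print(density)  # 126.0 cells/uL in the undiluted culture
```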
3. Results

The growth rate data were best modeled by a linear function because the Chlorella did not grow exponentially or reach a carrying capacity within the time scale of data collection. When no allelopathy treatment was present, the growth rate was highest (see control, Fig. 2). In contrast, the co-culture had the lowest growth rate (Fig. 2) and the growth rate of the filtrate was in between the control and co-culture (Fig. 2). The ANOVA showed that the allelopathy treatments (co-culture and filtrate) had a significant effect on lowering the growth rate of the Chlorella (p < .0001, Table 1). The data also show that there was a significant difference between phosphorus treatments averaged across allelopathy treatments (p < .0001, Table 1). The growth rate of cultures containing high phosphorus was higher than that of the low phosphorus cultures (Fig. 3).
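The growth rate here is the slope of an ordinary least-squares line fit to the daily cell densities. A minimal sketch with hypothetical densities rather than the study's data:

```python
import numpy as np

# Hypothetical daily cell densities (cells/uL) over the 7-day collection window
days = np.arange(1, 8)
density = np.array([30, 42, 55, 70, 81, 95, 108], dtype=float)

# Growth rate = slope of the degree-1 least-squares fit (cells/uL per day)
growth_rate = np.polyfit(days, density, 1)[0]
print(round(growth_rate, 2))  # 13.07
```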

Source        DF   Sum of Squares   Mean Square   F Ratio    P-Value
Model          5        11736.331       2347.27    845.3348   <.0001
Allelopathy    2        10847.464             -    1953.281   <.0001
Phosphorus     1          276.454             -     99.5610   <.0001
AxP            2          612.413             -    110.2759   <.0001
Error         12           33.321          2.78           -        -
Total         17        11769.651             -           -        -

Table 1. Results of an ANOVA test determining the effect of the allelopathy of Microcystis aeruginosa (control, filtrate, and co-culture) and phosphorus (high and low) treatments on the growth rate of Chlorella.
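The mean squares and F ratios in Table 1 can be reproduced from the reported sums of squares and degrees of freedom; a quick check (small discrepancies reflect rounding in the reported values):

```python
def mean_square(ss, df):
    """Mean square = sum of squares divided by its degrees of freedom."""
    return ss / df

ms_error = mean_square(33.321, 12)                   # ~2.78, as reported
f_allelopathy = mean_square(10847.464, 2) / ms_error
print(round(ms_error, 2), round(f_allelopathy, 1))   # 2.78 1953.3
```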

Figure 2. Average growth rate of Chlorella cultures, determined by the slope of the linear regression of the data for 7 days of growth. The control is only Chlorella. The filtrate contains the allelopathic chemicals M. aeruginosa excreted, but none of the M. aeruginosa cells. The co-culture is a culture with both Chlorella and M. aeruginosa. 1 standard error is shown.

Figure 3. Average growth rate of the Chlorella cultures, determined by the slope of the linear regression of the data, under the 2 phosphorus treatments. The high phosphorus treatment provides an excess of nutrients while the low phosphorus treatment represents a nutrient-limited environment. 1 standard error is shown.

Most important, however, is the significant interaction of phosphorus and allelopathy. There was a significant interaction between the allelopathy and phosphorus treatments (F = 110.2759, p < .0001, Table 1), meaning the effect of the phosphorus depends on the allelopathy treatment applied

Figure 4. Average growth rate of the Chlorella cultures. Control designates cultures with just Chlorella, while filtrate designates cultures with Chlorella and the allelopathic chemicals of M. aeruginosa, and co-culture designates cultures with both Chlorella and M. aeruginosa. Different letters indicate a significant difference based on an LS Means Differences Tukey HSD test. 1 standard error is shown.

to the cultures (Fig. 4). The differences among the treatment combinations were determined by an LS Means Differences Tukey HSD test. There was not a uniform effect of phosphorus levels across all allelopathy treatments. In the control, high phosphorus caused significantly higher growth than low phosphorus (Fig. 4). In contrast, in the allelopathy treatments (co-culture and filtrate), there was no effect of phosphate level. The growth rate of the low phosphorus control was significantly lower than that of the high phosphorus control. In the filtrate and co-culture treatments, the model could not detect a difference between the high phosphorus filtrate and the low phosphorus filtrate, or between the high phosphorus co-culture and the low phosphorus co-culture (Fig. 4). In addition to the results demonstrating significant differences between the growth rates of different treatments, the results show a significant difference between the cell densities of the Chlorella cultures on day 7 of data collection (p < .0001, Table 2). The cell density of the control was higher than the cell density of the filtrate, which was in turn higher than that of the co-culture (Fig. 5). Once again, the ANOVA showed that the allelopathy treatments significantly lowered the cell density of the Chlorella.

Source        DF   Sum of Squares   Mean Square   F Ratio    P-Value
Model          5        410036.79       82007.4    326.6336   <.0001
Allelopathy    2        374451.37             -    745.7160   <.0001
Phosphorus     1         12529.45             -     49.9045   <.0001
AxP            2         23055.97             -     45.9157   <.0001
Error         12          3012.82         251.1           -        -
Total         17        413049.61             -           -        -

Table 2. Results of an ANOVA test determining the effect of the allelopathy of Microcystis aeruginosa (control, filtrate, and co-culture) and phosphorus (high and low) treatments on the ending cell density of Chlorella on the 7th day of data collection.
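For readers reproducing this kind of analysis outside JMP, the sum-of-squares decomposition for a balanced 3 × 2 design (allelopathy × phosphorus, 3 replicates) can be sketched as below. The cell densities are hypothetical placeholders, not the study's data:

```python
import numpy as np

# Hypothetical day-7 cell densities (cells/uL), 3 replicates per combination
data = {
    ("control",    "high"): [410, 398, 405], ("control",    "low"): [300, 310, 295],
    ("filtrate",   "high"): [180, 175, 185], ("filtrate",   "low"): [172, 178, 170],
    ("co-culture", "high"): [ 90,  95,  88], ("co-culture", "low"): [ 85,  92,  89],
}
allelopathy_levels = ["control", "filtrate", "co-culture"]
phosphorus_levels = ["high", "low"]
n_rep = 3

y = np.array([data[(a, p)] for a in allelopathy_levels for p in phosphorus_levels],
             dtype=float)                        # shape (6 cells, 3 replicates)
grand_mean = y.mean()
cell_means = y.mean(axis=1)

# Factor-level means, averaged over the other factor and the replicates
a_means = [np.mean([data[(a, p)] for p in phosphorus_levels]) for a in allelopathy_levels]
p_means = [np.mean([data[(a, p)] for a in allelopathy_levels]) for p in phosphorus_levels]

ss_allelopathy = n_rep * len(phosphorus_levels) * sum((m - grand_mean) ** 2 for m in a_means)
ss_phosphorus = n_rep * len(allelopathy_levels) * sum((m - grand_mean) ** 2 for m in p_means)
ss_cells = n_rep * np.sum((cell_means - grand_mean) ** 2)   # between-cell SS
ss_interaction = ss_cells - ss_allelopathy - ss_phosphorus  # AxP
ss_error = np.sum((y - cell_means[:, None]) ** 2)           # within-cell SS

# F for allelopathy with df = 2 (effect) and 12 (error), matching Table 2's layout
f_allelopathy = (ss_allelopathy / 2) / (ss_error / 12)
```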

Figure 5. Average ending cell density on day 7 of data collection of Chlorella cultures, under the 3 allelopathy treatments. The control is the average of the cultures with only Chlorella. The filtrate is the average of the cultures with Chlorella and the medium in which the M. aeruginosa grew, containing the allelopathic chemicals it excreted but none of the M. aeruginosa cells. The co-culture is a culture with both Chlorella and M. aeruginosa. 1 standard error is shown.

Figure 6. Average ending cell density on day 7 of data collection of the Chlorella cultures, under the 2 phosphorus treatments. The high phosphorus treatment models a eutrophic ecosystem and provides an excess of nutrients while the low phosphorus treatment represents a nutrient-limited environment. 1 standard error is shown.

Figure 7. Average ending cell density on day 7 of data collection of the Chlorella cultures, under the 3 allelopathy treatments and the 2 phosphorus treatments. Control designates cultures with only Chlorella, filtrate designates cultures with Chlorella and the allelopathic chemicals of M. aeruginosa, and co-culture designates cultures with both Chlorella and M. aeruginosa. Different letters indicate a significant difference based on an LS Means Differences Tukey HSD test. One standard error is shown.

The data also demonstrate a small but significant difference between the phosphorus treatments (p < .0001, Table 2), again indicating that phosphorus level affects the growth of Chlorella. The ending cell density of the high phosphorus treatment groups was higher than that of the low phosphorus treatment groups (Fig. 6). When both treatment types (allelopathy and phosphorus) were compared, the cell density of the high phosphorus control was highest and that of the co-cultures was lowest (Fig. 7). The low phosphorus control was lower than the high phosphorus control, but higher than the high and low phosphorus filtrates (Fig. 7). Most importantly, the data show a significant interaction between the treatment types (p < .0001, Table 2). This demonstrates that the effect of the phosphorus was

dependent on the effect of the allelopathy treatment. An LS Means Differences Tukey HSD test revealed a significant difference between phosphorus levels only in the control allelopathy treatment: the cell density of the high phosphorus control was significantly greater than that of the low phosphorus control. Within the filtrate and co-culture treatment groups, however, there was no significant difference in cell density between the high and low phosphorus treatments. For the filtrate and the co-culture, then, the allelopathic effect was independent of phosphorus level and was stronger than the effect of the phosphorus.

4. Discussion

This study found clear effects of the allelopathy and phosphorus treatments on the growth rate and ending cell density of Chlorella. These findings confirm previously published literature on the allelopathy of M. aeruginosa (Wang et al., 2017). They also confirm that phosphorus is a limiting nutrient in algal growth and that the addition of phosphorus increases algal growth rates (Schindler, 1977). However, unlike previous studies, this research demonstrated that the allelopathic effects of M. aeruginosa were able to overcome any growth benefit the Chlorella received from elevated phosphate levels. Although the control Chlorella grew significantly better under elevated phosphorus conditions than under low phosphorus conditions, when allelopathy was present Chlorella growth was significantly inhibited regardless of the phosphorus level. This suggests that even in nutrient-polluted waters, allelopathy from other algae has the potential to suppress the growth and subsequent blooms of Chlorella and other harmful blooming species.
This also suggests that even though management strategies that limit anthropogenic nutrient input into aquatic ecosystems are very important, biological controls such as allelopathy have the potential to be significantly more effective in reducing HABs. Both allelopathy treatments significantly inhibited the growth of Chlorella; however, there was a greater difference in growth rate and cell density between the co-culture and the filtrate than between the filtrate and the control. The greater suppression of Chlorella growth in the co-cultures is likely due to the fact that in co-culture, the Chlorella is affected not only by the allelopathic chemicals the M. aeruginosa excretes as it grows, but also by competition for nutrients and light. With more factors preventing it from blooming, the algae could not reach as high a cell density or growth rate. Both the growth rate and the ending cell density of the cultures with the filtrate allelopathy were significantly

lower than the growth rate and cell density of both of the controls, even when the growth of the control was limited in phosphorus. The inhibition of Chlorella cell growth in the presence of the M. aeruginosa filtrate confirms previous literature stating that allelopathic algae excrete allelopathic chemicals even when not in the presence of competitors (Accoroni et al., 2015). If the allelopathic chemicals were excreted only in the presence of competitors, the filtrate treatments would have had no inhibitory effect, because the filtrate was obtained from pure M. aeruginosa cultures and was centrifuged and filtered to remove any remaining M. aeruginosa cells before it was added to the Chlorella. Because there was no significant difference between the high and low phosphorus treatments of either the filtrate or the co-culture in either model, the effect of the allelopathy treatments was evidently greater than the effect of the nutrient limitation. This signifies that allelopathy has the potential to prevent HABs under any nutrient conditions. The statistical significance of the ending cell densities of the different treatments mirrors that of the growth curves of the cultures. The 2 response variables and models support each other in demonstrating the significant effect of the allelopathy and phosphorus treatments on the growth of Chlorella. Both also demonstrate that the effects of the filtrate and co-culture treatments were independent of phosphorus level when the interactions between the allelopathy and phosphorus treatments were examined. These findings confirm the growth-inhibiting effects of allelopathy reported in other literature (Wang et al., 2017; Accoroni et al., 2015; Nan et al., 2008; Tang & Gobler, 2011; Wang et al., 2007).
Currently, other published work addresses the presence of allelopathy only as a defense mechanism used to inhibit competitors in certain species of algae. The majority of the literature in the field is focused on determining which types of allelopathic algae best inhibit potential HAB species. What none of the literature addresses is the efficacy of this allelopathy under varying conditions that more closely represent the nutrient levels of environments in danger of eutrophication. This information is important because it provides key insight into the environmental factors that shape the role of each organism and how well it can survive in different ecosystems. Because an individual HAB can cost up to $1 billion USD in damage, implementing a control strategy could save the fishery and tourism industries enormous amounts, as well as prevent severe damage to environments and species populations within affected areas.

5. Conclusions and Future Work

The results clearly indicated that the inhibiting effect

of allelopathy on Chlorella growth was able to overcome the benefit the Chlorella received from growing in excess nutrients. Therefore, the results show that the varying phosphorus levels did not alter the impact of the allelopathy treatments. Even though phosphorus is the limiting nutrient in algal growth, nitrogen and other nutrients are still important and contribute to algal blooms (Anderson, 2009). In the future, an experiment testing different nutrients, such as nitrogen, would give a better picture of how much nutrient limitation affects the algae in question. Additionally, performing this experiment with other types of HAB algae, specifically toxic, low-biomass HAB species such as dinoflagellates, would determine whether the phenomenon is unique to M. aeruginosa and Chlorella or whether this knowledge can be applied to other allelopathic and HAB species and potentially used to formulate a prevention method for a broader scope of HABs. Research should also be done on allelopathy preventing HABs in settings that more closely model the actual ecosystem, to better represent the role allelopathy plays in inhibiting the growth of algae and other organisms. Experiments testing the environmental impact of allelopathy on other benthic organisms and aquatic plants would also help determine the viability of replenishing algal communities as a potential form of HAB mitigation and prevention. If the experiment were repeated, more types of freshwater algae should be tested, along with more replicates of each treatment to generate more data and a larger sample size. With more time, it would also be important to test different phosphorus levels.
Additionally, taking data for longer than 7 days would show whether the behavior and growth of the algae change in relation to the allelopathy treatments after the exponential phase, once the algae has reached carrying capacity. Harmful algal blooms can wreak havoc on ecosystems, economies, tourism industries, and human health. Toxic HABs can cause paralytic, diarrhetic, neurotoxic, amnesic, and azaspiracid shellfish poisoning, which can either be contracted through direct contact with the biotoxins or be concentrated in filter feeders before transferring up the food web, affecting fish, seabirds, dolphins, whales, humans, and other organisms (Davidson et al., 2014; Anderson, 2009). Nontoxic, high-biomass blooms can also cause damage through oxygen depletion, resulting in plant and animal mortalities in the affected area (Anderson, 2009). Without research on the factors that lead to HABs and on the methods that most effectively prevent them, the problem will continue to grow and create greater negative impacts.

6. Acknowledgements

I would like to thank Dr. Amy Sheck for all of her help as my research teacher and mentor. I would also like to thank the Research in Biology classes of 2017 and 2018 for their support and help. Additionally, I would like to acknowledge Dr. Kim Monahan, Summer Research Internship Program mentor, as well as Abinav Udaiyar and Jamie Chamberlin for their help as lab assistants, Dr. Floyd Bullard for his help with statistical analysis, and Elle Allen, NCSU, for her assistance in choosing a study organism. This research was funded by the Glaxo Endowment of the NCSSM Foundation.

7. References

Accoroni, S., Percopo, I., Cerino, F., Romagnoli, T., Pichierri, S., Perrone, C., & Totti, C. (2015). Allelopathic interactions between the HAB dinoflagellate Ostreopsis cf. ovata and macroalgae. Harmful Algae, 49, 147-155.

Anderson, D. M. (2009). Approaches to monitoring, control and management of harmful algal blooms (HABs). Ocean & Coastal Management, 52(7), 342-347.

Anderson, D. M., Burkholder, J. M., Cochlan, W. P., Glibert, P. M., Gobler, C. J., Heil, C. A., & Vargo, G. A. (2008). Harmful algal blooms and eutrophication: Examining linkages from selected coastal regions of the United States. Harmful Algae, 8(1), 39-53.

Culture Collection of Algae and Protozoa (CCAP). (n.d.). Retrieved January 31, 2018, from https://app.scientist.com/providers/culture-collection-of-algae-and-protozoa-ccap

Davidson, K., Tett, P., & Gowen, R. (2011). Harmful Algal Bloom. Marine Pollution and Human Health, 95-127.

Davidson, K., Gowen, R. J., Harrison, P. J., Fleming, L. E., Hoagland, P., & Moschonas, G. (2014). Anthropogenic nutrients and harmful algae in coastal waters. Journal of Environmental Management, 146, 206-216.

[EPA] U.S. Environmental Protection Agency. (2000). Water Quality Issues Related to Multiple Watersheds in the Neuse River Basin (3rd Report to the U.S. Congress (1) Section A, Chapter 4).

Gobler, C. J., Burkholder, J. M., Davis, T. W., Harke, M. J., Johengen, T., Stow, C. A., & Waal, D. B. (2016). The dual role of nitrogen supply in controlling the growth and toxicity of cyanobacterial blooms. Harmful Algae, 54, 87-97.

Heisler, J., Glibert, P., et al. (2008). Eutrophication and harmful algal blooms: A scientific consensus. Harmful Algae, 8(1), 3-13.

Nan, C., Zhang, H., Lin, S., Zhao, G., & Liu, X. (2008). Allelopathic effects of Ulva lactuca on selected species of harmful bloom-forming microalgae in laboratory cultures. Aquatic Botany, 89(1), 9-15.

Ryan, J. P., Kudela, R. M., et al. (2017). Causality of an extreme harmful algal bloom in Monterey Bay, California, during the 2014-2016 northeast Pacific warm anomaly. Geophysical Research Letters, 44(11), 5571-5579.

Schindler, D. W. (1977). Evolution of Phosphorus Limitation in Lakes. Science, 195(4275), 260-262.

Steinman, A., & Duhamel, S. (2006). Methods in stream ecology. Amsterdam: Academic Press/Elsevier.

Tang, Y. Z., & Gobler, C. J. (2011). The green macroalga, Ulva lactuca, inhibits the growth of seven common harmful algal bloom species via allelopathy. Harmful Algae, 10(5), 480-488.

Ulloa, M. J., Álvarez-Torres, P., Horak-Romo, K. P., & Ortega-Izaguirre, R. (2017). Harmful algal blooms and eutrophication along the Mexican coast of the Gulf of Mexico large marine ecosystem. Environmental Development, 22, 120-128.

Wang, L., Zi, J., Xu, R., Hilt, S., Hou, X., & Chang, X. (2017). Allelopathic effects of Microcystis aeruginosa on green algae and a diatom: Evidence from exudates addition and co-culturing. Harmful Algae, 61, 56-62.

Wang, R., Xiao, H., Wang, Y., Zhou, W., & Tang, X. (2007). Effects of three macroalgae, Ulva linza (Chlorophyta), Corallina pilulifera (Rhodophyta) and Sargassum thunbergii (Phaeophyta) on the growth of the red tide microalga Prorocentrum donghaiense under laboratory conditions. Journal of Sea Research, 58(3), 189-197.


INSECT GROWTH REGULATORS AS A BIOLOGICAL CONTROL METHOD FOR TERMITES (R. flavipes)

Tyler Edwards

Abstract

Each year, termites cause $7 billion of damage in the United States. Insect growth regulators (IGRs) are synthetic chemicals created to mimic or inhibit hormones involved in an insect's development, providing powerful, targeted pest control. Bait stations containing hexaflumuron, an IGR, are commonly used to control termites. However, because hexaflumuron targets only the developmental system, it affects termites only as they molt. Conversely, methoprene, a different form of IGR, has been shown to affect the digestive system of termites rather than their ability to molt. The present study examines the viability of combining methoprene with hexaflumuron to create a more effective termiticide. In preliminary experiments, the efficacy of methoprene was significantly less than that of hexaflumuron, but significantly greater than that of the control. Topical treatments combining mid-level doses of methoprene and hexaflumuron resulted in significantly greater mortality than combinations of higher or lower doses. Bait exposure resulted in a lower rate of mortality and no significant difference between treatments. Therefore, further testing should be conducted to investigate the possibility of combining IGRs with more attractive baits.

1. Introduction

Termites are among the most damaging pests in the world. They cause $7 billion in damages yearly in the United States, and an estimated $50 billion worldwide (Korb, 2005). Termites are soft-bodied, pale insects belonging to the order Isoptera which aerate soil and decompose plant tissue on forest floors (Verma, 2009). Additionally, termites are eusocial insects, meaning different generations live together and divide labor amongst different castes within a large colony, similar to insects of the order Hymenoptera, such as ants and wasps (Korb, 2011).
Reticulitermes flavipes is the primary termite species responsible for the bulk of the damage in the USA (Peterson, 2006). Termites cause damage by consuming cellulose from wood structures, such as trees, buildings, utility poles, and other products and structures derived from plants (Peterson, 2006). Although they are responsible for significant amounts of damage, they also play an important role in the ecosystem. In an effort to decrease the large yearly cost due to termite related damage and repairs, construction sites are commonly treated with termiticides before, during, and after buildings are created (Peterson, 2006). It is also common for soil to be treated with termiticidal chemicals to prevent an infestation (Verma, 2009). Unfortunately, the most common forms of chemical control are often harmful to the environment and non-target organisms (Peterson, 2006). Insect growth regulators (IGRs) are chemicals that provide powerful, targeted pest control by interrupting the developmental processes of insects (Su, 1998). Hexaflumuron is one such IGR and has been proven effective against termites specifically (Sheets, 2000). Hexaflumuron prevents termites from creating chitin, an important component of their exoskeletons, causing immatures to

have failed molts and die (Sheets, 2000). Hexaflumuron is specifically used in termite baits, and has the potential to wipe out entire colonies by preventing the nymphs from molting (Habibpour, 2010). Pieces of wood, paper, or other forms of cellulose are treated with a low concentration of hexaflumuron and placed into a plastic stake. These stakes can be placed around the perimeter of an affected area, attracting termites which return to the colony and feed each other the treated material (Peterson, 2006). The use of termiticidal baits is preferable to spraying chemicals on the soil because lower amounts of chemicals are required, the chemicals are specific to termites, and fewer termites need to come into contact with the bait station for the treatment to be effective (Peterson, 2006). However, hexaflumuron bait stations are less effective than treatments that target multiple systems of the termite's body. Other forms of IGR, such as juvenile hormone analogs, also cause mortality by killing insect larvae. Methoprene is a broad-spectrum juvenile hormone analog (JHA) which typically prevents molting in insects, but primarily affects the digestive system of termites (Howard, 1978). JHAs such as methoprene have been used as effective pest controls and are nontoxic to humans. Methoprene, hydroprene, and kinoprene are all JHAs useful in preventing adult hymenopterans from consuming crops, as well as preventing swarming in stored products and urban settings (Subramanian, 2016). When used against termites, methoprene causes death more often by starvation, after the protozoa in the termites' guts that allow them to digest cellulose are eliminated, than by failed molts (Glare, 1999). Although studies have deemed methoprene a mediocre termiticide on its own, little is known about its viability in combination with hexaflumuron. The present study seeks to further compare the effect of methoprene to

that of hexaflumuron at high concentrations, as well as test the viability of combining it with hexaflumuron to create a more effective termiticide at a lower dose.

2. Methods

This study included two preliminary experiments and two main experiments. For all experiments, mortality was recorded after topical exposure to the chemical or through exposure to a treated piece of cardboard. The experimental unit was a petri dish containing moistened sand, moistened cardboard, and fifteen termites. The preliminary experiments were conducted to establish a point of reference for survivability after topical or bait exposure to methoprene, and to determine the most effective concentration to use in the combination experiments. Both preliminary experiments used three treatments of methoprene, a negative control of no treatment, and a positive control of a 0.05 mg/mL dose of hexaflumuron. The sample size was three experimental units in the preliminary topical experiment and four in the preliminary bait experiment. The subsequent topical experiment measured mortality after exposure to combinations of different concentrations of methoprene and hexaflumuron (n = 3). Finally, the bait experiment compared the mortality caused by exposure to cardboard treated with methoprene or hexaflumuron alone to that caused by a combined treatment of the two. In every experiment, the treatments were assigned according to a randomized block design.

2.1 - Insect Collection and Care

Termites were collected from traps at several sites around the campus of the North Carolina School of Science and Mathematics (NCSSM) in Durham, North Carolina. Traps consisted of a piece of moistened cardboard covering an area of exposed soil of approximately 1 m². The traps were set in May 2017, and termites were first spotted three weeks later. Additional termites were collected from a rotting stump in a forest near NCSSM.
After identifying a group of viable termites, they were aspirated and brought into the lab. The termites were housed in a petri dish containing a 6 cm x 6 cm square of corrugated cardboard moistened with distilled water and a substrate of 45.0 g (± 0.1 g) of play sand. The sand was sifted to remove large particles, rinsed at least three times to remove impurities, and placed in a drying oven overnight at 110 °C. The sand was then moistened overnight with distilled water equal to 15% of its mass, with food coloring added to improve visibility. 45 g of sand was added to each petri dish. Similar petri dishes were used in each of the experiments except for the preliminary acetone response experiment, where no food coloring was used. When not being counted, the termites were kept in a dark cabinet at room temperature. Termites were treated hours

after collection, and termites collected for previous experiments were not used in later experiments. Collection occurred in June for the preliminary and topical experiments and in August for the bait experiment.

2.2 - Solution Preparation

Methoprene was obtained from SPEX CertiPrep as a stock solution at a concentration of 1 mg/mL in methanol. Solid Pestanal® hexaflumuron powder was obtained from Sigma-Aldrich. The stock solution of methoprene and the solid hexaflumuron were mixed with appropriate volumes of acetone to create 0.1 mg/mL, 0.05 mg/mL, and 0.01 mg/mL solutions.

2.3 - Identifying the Optimal Volume of Solvent

To establish a negative control, acetone was applied to the termites in either a 1 μL or a 5 μL dose. Mortality was then measured each day for a period of seven days. To determine what size of dish and what size of group the termites should be held in, both 60 x 15 mm and 100 x 15 mm dishes (n = 2) were used in this experiment. The smaller dishes contained 10 termites apiece, while the larger ones contained 20 termites. Controls of each dish size contained 10 or 20 termites treated with no acetone. The solvent for the topical experiments was 1 μL of acetone, which was determined to cause fewer deaths than a 5 μL dose, and the effect of group size was found to be insignificant (p = 0.32).

2.4 - Identifying the Optimal Topical Dose of Methoprene

Termites were paralyzed by being placed in a freezer for thirty seconds. Then, 1 μL of solution was pipetted onto their abdomens and allowed to absorb before the termites were transferred to their respective petri dishes. In the preliminary experiment, three treatments of methoprene were applied to the termites: 0.01 mg/mL, 0.05 mg/mL, and 0.1 mg/mL (n = 3). A positive control of 0.05 mg/mL hexaflumuron and a negative control of no solution were used as a comparison. Mortality was measured over six days.
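The working solutions described in Section 2.2 follow the standard dilution relation C1V1 = C2V2. A minimal sketch of the arithmetic (the 10 mL batch volume is a hypothetical choice, not stated in the paper):

```python
def dilution(c_stock, c_target, v_final):
    """Volumes of stock and solvent (same units as v_final) needed so that
    c_stock * v_stock = c_target * v_final."""
    v_stock = c_target * v_final / c_stock
    return v_stock, v_final - v_stock

# 10 mL of each working solution from the 1 mg/mL methoprene stock
for target in (0.1, 0.05, 0.01):
    v_stock, v_acetone = dilution(1.0, target, 10.0)
    print(f"{target} mg/mL: {v_stock:.2f} mL stock + {v_acetone:.2f} mL acetone")
```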
2.5 - Mortality of a Topical Application of a Combination of IGRs

Combinations of methoprene and hexaflumuron (methoprene/hexaflumuron) in the topical application experiment were as follows: 0.05 mg/mL / 0.05 mg/mL, 0.05 mg/mL / 0.1 mg/mL, 0.1 mg/mL / 0.05 mg/mL, and 0.1 mg/mL / 0.1 mg/mL. A negative control of acetone was also used, as well as a positive control of 0.1 mg/mL hexaflumuron. 1 μL of the solution was applied to the termites topically. Each dish contained 10 termites and was monitored for three days (n = 3).

2.6 - Identifying the Optimal Bait Dose of Methoprene

In the preliminary bait experiment, the cardboard squares were dosed with 1 mL of the appropriate solution

and dried overnight at room temperature before being moistened. Three treatments of methoprene-treated bait were prepared: 0.01 mg/mL, 0.05 mg/mL, and 0.1 mg/mL (n = 4). A positive control of 0.05 mg/mL hexaflumuron and a negative control of no solution were used as a comparison. Mortality was measured each day for eight days.

2.7 - Mortality of Bait Exposure to a Combination of IGRs

This experiment examined the effect of a combination of insect growth regulators on termites exposed to them through contact with cardboard treated with 1 mL of one of four solutions: acetone, 0.1 mg/mL methoprene, 0.1 mg/mL hexaflumuron, or both 0.1 mg/mL methoprene and 0.1 mg/mL hexaflumuron. Mortality was monitored for seven days (n = 5).
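Treatments in every experiment were assigned by a randomized block design (Section 2). A minimal sketch of one common way to generate such an assignment, with each replicate set of dishes treated as a block; the treatment names are illustrative, not the paper's exact labels:

```python
import random

def randomized_blocks(treatments, n_blocks, seed=None):
    """Each block (replicate) receives every treatment exactly once,
    in an independently shuffled order."""
    rng = random.Random(seed)
    layout = {}
    for block in range(1, n_blocks + 1):
        order = treatments[:]          # copy so the input list is untouched
        rng.shuffle(order)
        layout[block] = order
    return layout

treatments = ["control", "0.01 mg/mL", "0.05 mg/mL", "0.1 mg/mL", "hexaflumuron"]
for block, order in randomized_blocks(treatments, 3, seed=1).items():
    print(block, order)
```

Blocking this way keeps every treatment represented in each replicate while randomizing dish positions within the block.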

deaths after a treatment of a combination of IGRs were highest for 0.05 mg/mL methoprene per 0.05 mg/mL hexaflumuron after three days (Fig. 2). The curve for 0.1 mg/mL methoprene per 0.1 mg/mL hexaflumuron was the lowest, even lower than the control. The 0.05 mg/mL / 0.1 mg/mL treatment did not diverge from the 0.1 mg/mL / 0.05 mg/mL treatment until the third day. However, the 0.1 mg/mL / 0.05 mg/mL treatment had the steepest slope, with 4.5 additional deaths each day.

2.8 - Mortality Assay

Mortality was the response variable for each of the experiments. A termite was considered to be dead if it did not move after being prodded with a paintbrush. Each day, the dead termites were counted and removed from the petri dishes.

3. Results

3.1 - Identifying the Optimal Topical Dose of Methoprene

The highest number of cumulative deaths was recorded after exposure to a topical dose of the 0.05 mg/mL hexaflumuron treatment, which is expected for the positive control (Fig. 1). The experimental treatments of topical methoprene applications were not significantly different from one another (p = 0.2353).
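The daily counts from the mortality assay reduce to cumulative-death curves like those plotted in Figures 1-4. A small sketch with hypothetical daily counts for one 15-termite dish:

```python
def cumulative_mortality(daily_dead, n_start):
    """Running total of dead termites removed each day, plus the final
    proportion dead (cumulative deaths / starting group size)."""
    totals, running = [], 0
    for dead_today in daily_dead:
        running += dead_today
        totals.append(running)
    return totals, totals[-1] / n_start

# Hypothetical counts for days 1-6 of one dish of 15 termites
totals, prop = cumulative_mortality([0, 2, 1, 3, 0, 4], 15)
print(totals, round(prop, 2))  # [0, 2, 3, 6, 6, 10] 0.67
```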

Figure 2. Cumulative deaths three days after exposure to a combination of methoprene and hexaflumuron. Error bars represent one standard error from the mean.

3.3 - Bait Exposure to Methoprene

Termites exposed to cardboard treated with 0.01 mg/mL methoprene had the lowest rate of mortality, while those treated with 0.05 mg/mL or 0.1 mg/mL were not significantly different from one another (p = 0.3401). After six days, a difference in cumulative deaths could not be detected amongst the different concentrations of methoprene (Fig. 3). These findings suggest that the positive and negative controls were effective, but the experimental treatments were not.

Figure 1. Cumulative deaths due to exposure to methoprene after six days. Error bars represent one standard error from the mean.

3.2 - Mortality of a Topical Application of a Combination of IGRs

Because an increased concentration did not lead to an increase in mortality in the preliminary experiments, several combinations of hexaflumuron and methoprene were used in the subsequent experiments. The cumulative

Figure 3. Cumulative deaths after six days of exposure to bait treated with methoprene. Error bars represent one standard error from the mean.

3.4 - Mortality of Bait Exposure to a Combination of IGRs

Because there was no significant difference between treatments in the preliminary bait experiment, 0.1 mg/mL of methoprene and 0.1 mg/mL of hexaflumuron were used in the bait experiment. The cumulative deaths after seven days of exposure to the treated cardboard showed no significant difference based on the treatment of the cardboard (Fig. 4).

Figure 4. Cumulative deaths after seven days of exposure to bait treated with a combination of methoprene and hexaflumuron.

4. Discussion and Conclusion

The present study explored the possibility of using the termiticidal IGRs hexaflumuron and methoprene in conjunction to create a more effective termiticide. The ability of methoprene to cause death both through a topical treatment and through exposure to bait was examined at different concentrations and compared to hexaflumuron and acetone alone. The results of the study affirm the accepted use of hexaflumuron rather than methoprene as a control for termites, as it was consistently more effective than methoprene. However, in the topical experiment, the combination of 0.05 mg/mL methoprene and 0.05 mg/mL hexaflumuron proved significantly more effective than a treatment of 0.1 mg/mL hexaflumuron. Such a result suggests that the combination of methoprene and hexaflumuron in lower concentrations may prove effective in a bait station. The combination of mid-level doses of both methoprene and hexaflumuron caused the highest mortality in the topical experiment. Methoprene consistently caused lower mortality than hexaflumuron throughout the experiments. Compared to the topical experiment, the overall mortality rate was lower in the bait experiment, and there were no detectable differences among the treatments. The bait experiment also saw no differences in mortality from either treatment alone.

In the preliminary experiments, the mid-level dose of methoprene was the most effective at causing mortality. However, the bait experiment resulted in no significant effect of treatments, likely due in part to the lack of consumption in the bait experiment. The presence of an optimal dose in the topical experiment, rather than a constant increase in efficacy with concentration, is consistent with the behavior of hexaflumuron in past studies (Su, 1998). The presence of an optimal dose also provides hope that an efficient and cost-effective treatment could be created by combining optimal doses of two IGRs. In the bait experiment, after termites were exposed to the same combination of concentrations of methoprene and hexaflumuron, they died at a lower rate and in lower numbers than in the treatments where only one IGR was used. This effect was the opposite of what was seen in the topical experiment. Feasibly, a termiticide would be administered to a colony by placing bait stations containing treated wood bait near its primary feeding sites. Therefore, in order to truly deem a combination of IGRs an effective treatment method for termites, it would need to be effective in the bait administration experiment. In the preliminary experiments, which compared the effect of methoprene to that of hexaflumuron, a dose-dependent relationship between the concentration of methoprene and the cumulative deaths was suggested. Methoprene was consistently and significantly less effective than hexaflumuron, but more effective than the controls in these experiments. It is also possible that consumption in the bait experiments was low because, in past studies, exposure to hexaflumuron or methoprene individually has resulted in the elimination of the termite gut protozoa which aid in digestion (Glare, 1999). As a result, most termites exposed to methoprene die from starvation rather than failed molts.
In future studies, this effect could possibly be minimized by attempting to find a lower optimal dose which is not as toxic to gut protozoa. The presence of gut protozoa should also be evaluated after exposure to both hexaflumuron and methoprene in future research. Additional improvements to the present study would include controlling for termite size, instar, or caste. It is possible that using a constant volume for each termite, as opposed to a constant ratio of solution to the termite's mass, resulted in uneven dosage. The lack of controlling for instar is a likely source of error in the present experiment, as IGRs affect larvae differently as they approach a molting phase (Khatter, 2011). The termites used in the study were primarily late instar workers, but also included nymphs, soldiers, and presoldiers. Hexaflumuron is only directly effective in actively molting individuals, and not all of the termites used in the present study were actively molting. In previous studies, doses of 1 μg/mL or smaller were used in topical doses of hexaflumuron, allowing the termites to survive long enough to molt (Khatter,

2011). The mortality which is found in the present study may be partially attributable to the large doses of methoprene and hexaflumuron used in the experiment, not the inhibition of chitin synthesis. Therefore, further research should be done to determine which other form of pest control could be added to hexaflumuron to improve its efficacy. Additionally, in the topical experiment, each termite was given the same 1 ÎźL dosage of the solution, instead of a percentage of its mass. Therefore, nymphs and adult workers received the same dose, but likely responded differently due to a difference in mass. This issue could be mitigated by only testing third instars or older termites instar, as identified by the presence of sclerotized mandibles and a melanized exoskeleton. A final source of error may have been caused by lack of consumption in the bait experiments. Although the termites were collected from infested cardboard, they did not consume the cardboard present in their petri dishes when brought into the lab. Instead, the termiticides were transferred to them after direct contact with the surface of the cardboard. Smaller termites received a proportionally larger dose of the chemicals than the larger, more mature termites did. Finally, the low level of consumption in the bait experiments may be a result of the termites not being given time to acclimate to the lab conditions. In future studies, the termites would be allowed to live in the lab environment for a period of time before being placed into the experimental unit, and would be given a softer or more decomposable material as a bait source to increase the likelihood of consumption. Most previous studies employing a combination of IGRs have found that a combination of chemicals was more effective at producing similar levels of mortality at lower dosages than either chemical on its own. 
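The mass-proportional dosing discussed above can be sketched as a simple calculation: the applied volume scales with each termite's mass rather than being fixed at 1 µL. The dose rate, cap, and masses below are hypothetical placeholders, not values from the study.

```python
# Hypothetical sketch of mass-proportional topical dosing. The dose rate and
# the masses below are illustrative placeholders, not values from the study.

def dose_volume_ul(termite_mass_mg, dose_rate_ul_per_mg=0.25, max_volume_ul=1.0):
    """Return an applied volume (uL) proportional to body mass, capped at a
    practical maximum that can be applied to the cuticle."""
    if termite_mass_mg <= 0:
        raise ValueError("mass must be positive")
    return min(termite_mass_mg * dose_rate_ul_per_mg, max_volume_ul)

# With a fixed 1 uL dose, a 2 mg worker receives twice the per-mass exposure
# of a 4 mg nymph; mass-scaled volumes equalize the exposure instead.
worker_volume = dose_volume_ul(2.0)  # 0.5 uL
nymph_volume = dose_volume_ul(4.0)   # 1.0 uL
```

Scaling by mass in this way would keep the per-milligram exposure constant across workers and nymphs, removing the confound described above.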
The results of the present experiment suggest that a combination IGR treatment affects the insects more strongly than either individual chemical when an optimal dose is used. The results should be expanded and improved upon by removing confounding variables and extending the trial period. While positive results in the topical experiment suggest that there is an optimal combination of the two IGRs that causes a higher mortality rate than either one alone, adjustments must be made to achieve higher rates of mortality after exposure to treated baits before a combination of IGRs can be used as a termiticide. 5. Acknowledgements I would like to thank Dr. Amy Sheck for advising me and guiding me through the research process. Thanks to her, I have learned to apply the scientific method and, just as importantly, have gained the confidence to learn independently. Thank you to Dr. Kimberly Monahan for working with me over the summer and helping me navigate the

data collection process. Thank you to the Research in Biology class of 2018 for accompanying me on my journey through this project, and to the Research in Biology class of 2017 for being willing to mentor me. Thank you to Abi Udaiyar and Jamie Chamberlin for being my lab assistants over the summer. Finally, I would like to thank the North Carolina School of Science and Mathematics and the Glaxo Endowment to NCSSM for allowing me the opportunity to experience research for the first time. It has been an invaluable experience that has taught me skills that I will use for the rest of my research career. 6. References Glare T and O'Callaghan M. (1999). Report for the Ministry of Health: environmental and health impacts of the insect juvenile hormone analog, S-methoprene. Biocontrol and Biodiversity. Howard R W and M I Haverty. (1978). Defaunation, mortality, and soldier differentiation: concentration effects of methoprene in a termite. Sociobiology, 3, 73-78. Habibpour B. (2010). Laboratory evaluation of Flurox, a chitin synthesis inhibitor, on the termite, Microtermes. Journal of Insect Science, 10, 1-8. Korb J. (2005). Termites. Current Biology, 17, 995-999. Korb J, K Hoffmann, and K Hartfelder. (2011). Molting dynamics and juvenile hormone titer profiles in the nymphal stages of a lower termite, Cryptotermes secundus (Kalotermitidae): signatures of developmental plasticity. Journal of Insect Physiology, 58, 376-383. Khatter N A and F F Abuldhab. (2011). Combined effect of three insect growth regulators on the digestive enzymatic profiles of Callosobruchus maculatus (Coleoptera: Bruchidae). Journal of Egyptian Society of Parasitology, 41, 757-766. Peterson C, T L Wagner, J E Mulrooney, and T G Shelton. (2006). Subterranean termites: their prevention and control in buildings. USDA Forest Service. Perrott R C. (2003). Hexaflumuron efficiency and impact on subterranean termite (Reticulitermes spp.) (Isoptera: Rhinotermitidae) gut protozoa.
Virginia Polytechnic Institute and State University. Accessed 18 June 2017. Sheets J J, L L Karr, and J E Dripps. (2000). Mechanics of uptake, clearance, transfer, and metabolism of hexaflumuron by eastern subterranean termites (Isoptera: Rhinotermitidae). Journal of Economic Entomology, 93, 871-877.

Su, N and R H Scheffrahn. (1998). A review of subterranean termite control practices and prospects for integrated pest management programmes. Integrated Pest Management Reviews, 3, 1-13. Subramanian S and K Shankarganesh. (2016). Ecofriendly Pest Management for Food Security. Elsevier Saunders Inc., Philadelphia. pp. 613-650. Verma M, S Sharma, and R Prasad. (2009). Biological alternatives for termite control: a review. International Biodeterioration and Biodegradation, 63, 959-972.


APOPTOTIC AND IMMUNOMODULATORY EFFECTS OF GEMCITABINE MONOPHOSPHATE DELIVERY VIA LIPID CALCIUM PHOSPHATE NANOCARRIERS FOR PANCREATIC CANCER Michelle Bao Abstract The efficacy of chemotherapy in solid tumors is drastically reduced by pathophysiological barriers, so techniques to maximize drug delivery and reduce side effects are of utmost interest. Researchers have recently turned to targeted nanoparticles that encapsulate the drug and release it at the tumor site as a potential solution. This paper examines the use of lipid calcium phosphate nanoparticles (LCP NPs) loaded with gemcitabine monophosphate (GMP), a derivative of a standard chemodrug, in a syngeneic orthotopic allograft of KPC, a mouse model of pancreatic ductal adenocarcinoma (PDA). The efficacy of the treatment was evaluated by investigating its apoptotic and immunomodulatory effects. Immunofluorescent imaging was used to visualize apoptotic cells in the tumor, and flow cytometry was used to examine the treatment's effect on the immune cell populations in the tumor microenvironment. Results indicate that GMP LCP NPs reduced tumor weight at sacrifice, decreased tumor progression, and induced apoptosis in the tumor. Most apoptosis was localized in non-cancer cells in the tumor, leading to the inference that the LCP NPs promote off-target effects. The treatment is promising and can be further augmented by tailoring the nanoparticles to selectively target the cancer cells in the tumor as well as the supporting cells. 1. Introduction Researchers expect pancreatic ductal adenocarcinoma (PDA) to become the second most common cause of cancer-related death by the year 2030 (Rahib et al., 2014). Sometimes referred to as “stealth cancer,” it is characterized by early metastasis and resistance to conventional therapies, making it especially difficult to treat compared to other cancers (Harvard Health, 2007).
Furthermore, the prognosis for patients, whose diagnosis is often delayed by the lack of noticeable symptoms in early stages, has remained extremely poor for decades (Ellermeier et al., 2013). In fact, in over 80% of patients the cancer has advanced or metastasized beyond the pancreas at diagnosis, progressing past the point where surgical resection is possible (Hirshberg et al., n.d.). The treatment to which most patients then turn is conventional chemotherapy, a common therapeutic approach that uses a potent drug to prompt death in cells that are dividing quickly, like cancer cells. However, the inefficacies of chemotherapy are especially prominent in PDA due to its characteristically dense extracellular matrix surrounding the tumor nests and its immunosuppressive cells, which together comprise the tumor microenvironment (TME). This desmoplastic microenvironment includes fibroblasts, inflammatory cells, connective tissue, and abnormal vasculature, forming a barrier that surrounds the tumor cells and prevents the drug from taking effect on the cancer cells (Teague et al., 2015). Furthermore, studies have indicated that this barrier not only limits therapies from reaching the cancer cells but also cross-talks with the cancer cells to promote proliferation and metastasis (Feig

et al., 2012). This extracellular matrix is one of the largest obstacles to the effective use of chemotherapeutic drugs. To enhance penetration of chemotherapeutic agents into the tumor site, much research has gone into examining the viability of drug delivery through nanoparticles. Nanoparticles can encapsulate anticancer agents such as cytotoxic nucleoside analogues like gemcitabine monophosphate (GMP) (Fig. 1). Gemcitabine (2′,2′-difluoro-2′-deoxycytidine) is a common nucleoside analog used in chemotherapy for pancreatic cancer and non-small-cell lung cancer. Gemcitabine enters cells through nucleoside transporters and is phosphorylated by deoxycytidine kinase to become gemcitabine mono-, di-, and triphosphate, which are then incorporated into the DNA strand (Zhang et al., 2013). Gemcitabine also inhibits ribonucleotide reductase, a crucial enzyme in DNA synthesis, by binding to its active site, preventing cells from dividing properly and causing cell death (OncoLink Team, 2015). Although gemcitabine is a clinically approved treatment and the current standard of care for pancreatic cancer, only about 5-10% of patients respond to it (Hingorani et al., 2003). Few patients respond because of inefficient delivery to the tumor site; most of the drug does not take effect at the cancer cells. Drug uptake is further decreased when mutational resistance occurs, especially through the regulation of nucleoside influx transporters (ENT1, ENT2, CNT1, CNT2) and drug efflux proteins on the cell membrane (Zhang et al., 2013). Other major therapeutic hurdles include the rapid development of cell chemoresistance, high toxicities, and unpredicted side effects.

Figure 1. The structure of gemcitabine monophosphate formate salt (Santa Cruz Biotechnology). The phosphate group plays a key role in the drug encapsulation in a lipid calcium phosphate nanoparticle.

Figure 2. The lipid calcium phosphate nanoparticle encapsulated with gemcitabine monophosphate.

Encapsulating chemotherapeutic agents in nanoparticles is a novel nanomedicine strategy that overcomes some of these common in vivo drug delivery challenges (Teague et al., 2015). The nanocarrier used in this research project was a calcium phosphate nanoparticle coated with a lipid bilayer, designated a lipid calcium phosphate nanoparticle (LCP NP) (Fig. 2). GMP and other biological therapeutics with phosphate groups are ideal candidates for encapsulation in an LCP NP, as they can be efficiently co-precipitated with calcium phosphate, significantly increasing the encapsulation efficiency of GMP (Satterlee & Huang, 2016). The unique properties of the lipid calcium phosphate (LCP) shell allow GMP to bypass metabolism, avoid enzymatic degradation, enter the cell through receptor-mediated endocytosis, and deliver the drug at the targeted site. The entrapped nanoparticles cannot undergo lysosomal degradation after endocytosis, as the calcium phosphate core prevents the GMP from being degraded. When a nanoparticle reaches an acidic endosome, the CaP core dissolves, increasing the osmotic pressure in the endosome and causing it to rupture, releasing the encapsulated drug into the cell (Zhang et al., 2013). Another, more novel treatment approach is to utilize the body's immune response through immunomodulation, that is, regulating the immune system to combat the effects of the immunosuppressive tumor microenvironment. Incorporating both chemotherapeutic and immunotherapeutic approaches would improve the chance of success, as this strategy places less reliance on the drug's ability to target and reach the tumor site. Gemcitabine's ability as a chemodrug to inhibit tumor metastasis and prevent cancer progression could be greatly improved if a proinflammatory immune response were actuated (Chang et al., 2016). This paper examines the therapeutic efficacy of GMP LCP NPs and their potential as a unique formulation strategy in a KPC mouse model of PDA. Furthermore, the capability of the aforementioned strategy to modulate the immunosuppressive tumor microenvironment was investigated as well.

2. Materials and Methods

2.1 Materials Gemcitabine monophosphate disodium salt was synthesized by HDH Pharma (Research Triangle Park, NC). 1,2-Dioleoyl-3-trimethylammonium-propane chloride salt (DOTAP), dioleoylphosphatidic acid (DOPA), and 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[methoxy(polyethylene glycol-2000)] ammonium salt (DSPE-PEG) were procured from Avanti Polar Lipids (Alabaster, AL). The anisamide-conjugated DSPE-PEG was obtained from the lab and had been synthesized according to an established protocol (Banerjee et al., 2004). DeadEnd Fluorometric TUNEL assay kits were obtained from Promega (Madison, WI). All other chemicals were purchased from Sigma-Aldrich. 2.2 Cell Culture The primary pancreatic tumor cell line KPC98027 was derived from the spontaneous KPC model of PDA (LSL-Kras G12D/+; LSL-Trp53 R172H/+; Pdx-1-Cre, on a C57BL/6 background). The cell line was provided by Dr. Serguei Kozlov at the Center for Advanced Preclinical Research, Frederick National Laboratory for Cancer Research (NCI). The cells were cultured in Dulbecco's Modified Eagle Medium/F-12 Nutrient Mixture (DMEM/F12), supplemented with 10% fetal bovine serum (Gibco) and 1% penicillin/streptomycin, at 37°C and 5% CO2 in a humidified incubator. The cell line was transfected with mCherry red fluorescent protein (RFP) and firefly luciferase (Luc) by lentiviral transfection. Cell line procedures were performed by a trained technician



following an established protocol (Miao et al., 2017). 2.3 Synthesis and Characterization of GMP-loaded LCP Nanoparticles The nanoparticles used had been synthesized and characterized by a trained technician according to a previous protocol (Zhang et al., 2013). Briefly, the calcium phosphate cores were prepared in a water-in-oil microemulsion. 180 µL of 60 mmol/l GMP was mixed with 12.5 mmol/l Na2HPO4 to a final volume of 600 µL. This solution was added to 20 mL of oil phase containing a cyclohexane/Igepal CO-520 solution. 600 µL of 2.5 mol/l CaCl2 was added to a separate oil phase. The two separate microemulsions were mixed, stirred, and re-emulsified with 400 µL of 20 mmol/l dioleoylphosphatidic acid (DOPA). Ethanol was added before the solution was centrifuged at 10,000 g for 15 minutes. The supernatant was discarded and the cores were washed twice with 100% ethanol and then dried with nitrogen gas. The LCP core pellets were stored in chloroform at -20 °C until future use. To improve the targeting ability of the nanoparticle, anisamide, a small-molecule ligand, was conjugated to the anti-fouling polymer polyethylene glycol (PEG). As previous studies have shown, anisamide targets sigma receptors that are overexpressed in human cancer cells, greatly improving targeted drug delivery efficiency (Banerjee et al., 2004). The nanoparticles were characterized by particle size and zeta potential through dynamic light scattering with a Malvern ZetaSizer Nano series instrument. Encapsulation efficiency was determined with a UV spectrophotometer at 275 nm after lysing the nanoparticles with a THF/1 mol/l HCl (v/v = 70/30) solution (Zhang et al., 2013). 2.4 Orthotopic Allografting of KPC Model Mice The KPC model for PDA is a well-validated, clinically relevant mouse model of pancreatic cancer.
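The encapsulation-efficiency measurement described above reduces to a standard-curve conversion and a mass balance. The sketch below assumes a hypothetical linear calibration at 275 nm (the slope and intercept are placeholders, not values from the study).

```python
# Illustrative sketch (not the authors' exact calculation): estimating GMP
# encapsulation efficiency from the absorbance of the lysed particles at
# 275 nm. The standard-curve slope and intercept are hypothetical.

def gmp_conc_mmol_per_l(a275, slope=0.095, intercept=0.002):
    """Convert absorbance at 275 nm to GMP concentration (mmol/l) using a
    linear standard curve (Beer-Lambert regime)."""
    return (a275 - intercept) / slope

def encapsulation_efficiency(a275, lysate_volume_ml, gmp_added_umol):
    """Fraction of the GMP input recovered from the lysed nanoparticles."""
    conc = gmp_conc_mmol_per_l(a275)      # mmol/l is equivalent to umol/mL
    recovered_umol = conc * lysate_volume_ml
    return recovered_umol / gmp_added_umol

# Hypothetical example: A275 = 0.192 in a 1 mL lysate, from 4 umol GMP input.
efficiency = encapsulation_efficiency(0.192, 1.0, 4.0)  # approximately 0.5
```

In practice the calibration curve would be fit from GMP standards measured on the same spectrophotometer before applying it to the lysate.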
The mice are genetically engineered to express K-ras LSL.G12D/+ and p53 R172H/+ under a pancreatic tissue-specific promoter, causing them to develop precursor lesions for pancreatic cancer at an expedited rate (Hingorani et al., 2003). Over 80% of KPC model mice develop metastases in organs such as the liver and lung, the two most common metastasis sites observed in human PDA (Olive et al., 2009). An orthotopic mouse model is one in which tumor cells are injected directly into the pathological site of the cancer – in this case, the pancreas. The tumorigenesis of this model more closely resembles that of humans than a heterotopic mouse model, in which the cells are injected subcutaneously, making the orthotopic model much more clinically relevant (Qui & Su, 2013). The sub-confluent KPC-RFP/Luc cells were harvested with 0.05% trypsin-EDTA (Gibco), washed with phosphate-buffered saline (PBS), and mixed with an equal volume of Matrigel matrix (Corning) just prior to implantation. Bupivacaine, an anesthetic, was administered subcutaneously at the site of incision. 1×10^6 cells were injected into the tail of the pancreas for the orthotopic allografting. 6-0 polyglycolic acid sutures were used to close the skin and abdominal wall. Buprenorphine was administered as a post-operative analgesic. These procedures were conducted by my mentor due to safety considerations. 2.5 Experimental Animals The mice in the preliminary experiment were over ten weeks old and had initial pancreatic tumor burdens of 5×10^7 radiance (photons/sec/cm2/sr). The tumor burdens were measured with the IVIS Lumina Series III in vivo optical system (PerkinElmer) at the initiation of treatment and every other day thereafter. The mice in the follow-up experiment were six to eight weeks old and were obtained from Charles River Laboratories. All work performed on the mice was approved by the Institutional Animal Care and Use Committee (IACUC) at the university. 2.6 Tumor Growth Inhibition Analysis Due to the high cost of testing on mice and established literature indicating their ineffectiveness, neither free gemcitabine nor empty LCP NPs were tested in the preliminary experiment (Goldstein et al., 2015; Von Hoff et al., 2013; Conroy et al., 2011). The mice bearing KPC98027 RFP/Luc allografts were randomized into two groups (n = 4-5): phosphate-buffered saline (PBS) and GMP LCP NPs. The treatments were intravenously injected post-inoculation every two days, for a total of four doses of 20 mg/kg per injection. The mice in the follow-up study were inoculated with tumor cells on Day 0, and the tumor volumes were monitored every two days using the same bioluminescence imaging system. Fourteen days after the inoculation, the mice bearing KPC98027 RFP/Luc allografts were sorted from strongest to weakest bioluminescence signal intensity.
The mice were then assigned to the following treatment groups in a manner ensuring homogeneity between the groups: untreated (PBS), empty LCP NPs, and GMP LCP NPs. The treatments were intravenously injected post-inoculation every two days for 30 days. For both studies, tumor growth was monitored with bioluminescence imaging, detailed in the following sub-section. Immediately after sacrifice, the tumors were weighed. Results are presented as mean ± standard deviation. 2.7 Bioluminescence Imaging for Monitoring Tumor Growth Tumor growth was monitored using the IVIS Lumina Series III in vivo imaging system (PerkinElmer) and luciferase imaging. Animals were injected with 10 mg/kg body weight of D-luciferin via an intraperitoneal injection. Three images were recorded 10 minutes post-administration, and the peak signal intensities were recorded. The bioluminescence signal intensity was reported as average radiance and, when calculating the fold change of bioluminescence intensity, standardized with respect to the initial average radiance at the start of the therapy. The intensities were quantified using Living Image software.
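The fold-change normalization described above can be sketched as a per-animal division by the Day 0 radiance. The radiance values in the example are illustrative, not the study's measurements.

```python
# Minimal sketch of the fold-change normalization of bioluminescence:
# each radiance value is divided by the Day 0 radiance for the same animal.
# The example values (photons/sec/cm^2/sr) are illustrative only.

def fold_change(radiance_series):
    """Normalize a time series of average radiance to its initial value."""
    baseline = radiance_series[0]
    if baseline <= 0:
        raise ValueError("baseline radiance must be positive")
    return [r / baseline for r in radiance_series]

pbs_curve = fold_change([5e7, 9e7, 2e8])    # grows to 4x baseline
gmp_curve = fold_change([5e7, 4.5e7, 3e7])  # falls below baseline (< 1)
```

A fold change below 1 at a later time point corresponds to a tumor signal weaker than at the start of therapy.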

2.8 Tissue Cryoprotection For the preliminary study, major organs and tissues were harvested on Day 8 after the initiation of the therapy. For the follow-up study, organs and tissues were harvested around Day 30. The tissues were rinsed in PBS and incubated in 4% paraformaldehyde at 4°C for 48 hours. Following the paraformaldehyde fixation, the tissues were placed into 30% sucrose and left at 4°C overnight for cryoprotection. Tissue-Tek O.C.T. Compound (Fisher Scientific, Pittsburgh, PA) was used to embed the tissues. The frozen tissues were sectioned into 10 µm-thick sections with a microtome-cryostat.

2.9 Immunofluorescence Staining For the immunofluorescence staining, paraffin-embedded tissue specimens were deparaffinized by washing them in xylene and decreasing concentrations of ethanol. Immediately after the washes, the specimens were placed in boiling sodium citrate to retrieve the antigens. Finally, the tissues were blocked with 1% bovine serum albumin and then incubated with the fluorescent-conjugated primary antibody overnight at 4°C. The samples were counterstained with ProLong Gold Antifade mountant (Life Technologies) containing DAPI (4',6-diamidino-2-phenylindole), a fluorescent stain that binds to A-T rich regions of DNA in the nuclei of cells. Samples were imaged using an Olympus BX61 microscope and processed with ImageJ software. TUNEL assays were conducted to detect DNA fragmentation, a hallmark characteristic of in vivo cell apoptosis (Gavrieli et al., n.d.). The DeadEnd Fluorometric TUNEL System (Promega, Madison) was used to stain cells with fluorescein-12-dUTP, a green fluorescent dye. Specimens were counterstained with DAPI and analyzed with the microscopy described above. The percentage of apoptotic cells was determined in ImageJ with a well-documented procedure, by dividing the number of apoptotic cells (TUNEL positive) by the total number of cells (stained by DAPI), as well as the area of apoptotic cells by the total cell area (Biological Sciences Division, UC, n.d.).

2.10 Flow Cytometry and Immunophenotyping Flow cytometry was conducted for the samples in the follow-up experiment. After the mice were euthanized, the tumor samples were placed into a mixture of 10% FBS, DNase (100 µg/mL), and collagenase (2 µL/mL of 20 mg/mL). The enzymatically digested tissue samples were passed through a 70 µm cell strainer to obtain single-cell suspensions. The single-cell suspension was stained with antibodies, incubated, and fixed with 4% paraformaldehyde. Fluorescently labeled antibodies against immune cell markers were used to stain for different cell populations. The flow data were analyzed using De Novo FCS Express 6 software.

2.11 Statistical Analysis For all statistical analyses, either a two-tailed Student's t-test or a one-way analysis of variance (ANOVA) was used. The t-tests were used to compare tumor weight and fold change in bioluminescence from the preliminary study. ANOVA was used for the flow cytometry results in the follow-up study. Additionally, a Tukey-Kramer post hoc HSD (Honestly Significant Difference) test was conducted for all flow cytometry results, which accounted for the unequal sample sizes used. GraphPad Prism 5.0 software was used for the ANOVA, Tukey-Kramer tests, and graphs. A p-value of 0.05 was used as the threshold for statistical significance.
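The ImageJ-based apoptosis quantification described in Section 2.9 reduces to a simple per-field ratio. The sketch below uses hypothetical counts, as if exported from ImageJ.

```python
# Sketch of the apoptosis quantification in Section 2.9: the percentage of
# TUNEL-positive cells among DAPI-stained nuclei, averaged over imaging
# fields. The counts below are hypothetical, as if exported from ImageJ.

def percent_apoptotic(tunel_counts, dapi_counts):
    """Average per-field percentage of apoptotic (TUNEL-positive) cells."""
    if len(tunel_counts) != len(dapi_counts) or not dapi_counts:
        raise ValueError("need one TUNEL count per DAPI count")
    per_field = [100.0 * t / d for t, d in zip(tunel_counts, dapi_counts)]
    return sum(per_field) / len(per_field)

# Three hypothetical fields from one tumor section:
avg_pct = percent_apoptotic([12, 8, 20], [120, 100, 160])
```

The area-based version reported alongside the count-based one is the same ratio computed over thresholded pixel areas instead of cell counts.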

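As a from-scratch illustration of the one-way ANOVA named in Section 2.11 (the study itself used GraphPad Prism), the F statistic can be computed directly from group sums of squares; the group values below are hypothetical.

```python
# From-scratch sketch of a one-way ANOVA F statistic, as used in Section 2.11
# to compare immune cell percentages across the three treatment groups.
# The study used GraphPad Prism; the percentages below are hypothetical.

def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over a list of groups."""
    all_values = [x for g in groups for x in g]
    grand_mean = sum(all_values) / len(all_values)
    means = [sum(g) / len(g) for g in groups]
    # Between-group variability (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # Within-group variability (N - k degrees of freedom)
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_values) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical Treg percentages for the PBS, empty LCP, and GMP groups:
f_stat = one_way_anova_f([[4.1, 4.8, 3.9, 4.5],
                          [4.3, 4.0, 4.6, 4.4],
                          [4.2, 4.7, 4.1, 4.0]])
```

The Tukey-Kramer post hoc step would then compare each pair of group means against a studentized-range critical value scaled by the unequal group sizes.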
3. Results 3.1 Drug-loaded LCP NPs inhibited tumor growth in vivo To examine the effect of the GMP treatment on tumor weight in the preliminary study, the tumors were removed at sacrifice and weighed. To prevent distress as the mice in the untreated group approached the sacrificial endpoint, the study was terminated eight days after the initial treatment. A significant difference in tumor weight was observed between the PBS control group and the GMP group (Fig. 3). The GMP treatment significantly reduced the size of the tumor, but further analysis was necessary to determine the types of cell being reduced. Therefore, a follow-up study with flow cytometry data was designed to sort the proportions of various populations of immune cells.

Figure 3. Average tumor weight comparison between the PBS and GMP groups at the end of the preliminary study (n = 4-5 per group). The mice were residual untreated mice with initial tumor burdens of 5×10^7 radiance. ** indicates p < 0.01; data show mean ± standard deviation.

Figure 4. Preliminary bioluminescence study of KPC-RFP/Luc tumors (n = 3-5 per group). A) IVIS bioluminescent images of tumors using D-luciferin. B) Tumor bioluminescence fold change curve, calculated by averaging the radiances of the tumor and dividing the value by the initial tumor radiance. ** indicates p < 0.0001; data show mean ± standard deviation.

3.2 Drug-loaded LCP NPs inhibited tumor radiance signal in vivo Tumor bioluminescence studies were conducted for the preliminary experiment. By injecting D-luciferin into the mice prior to the imaging, the tumor was optically imaged and its radiance was quantified (photons/sec/cm2/sr). Red areas had higher radiance values and stronger tumor signals, whereas blue areas had lower radiance values and weaker tumor signals. The GMP treatment group had much lower radiance and smaller regions of interest compared to the PBS control group (Fig. 4A). To quantify these data, the bioluminescence fold change was analyzed. A significant difference was observed between the PBS and GMP groups at sacrifice, and there was even a decrease in bioluminescence for the GMP group from Day 0 (Fig. 4B).

3.3 Drug-loaded LCP NPs induced cell apoptosis in vivo Next, in the preliminary study, the effect of the treatments on cell apoptosis, a form of programmed cell death, was observed through immunohistochemical staining images (Fig. 5A). TUNEL assays were used to visualize apoptotic cells, RFP was used to visualize the cancer cells, and DAPI was used to visualize cell nuclei. The tissues were taken immediately after sacrifice, on Day 8 after the therapies were initiated. The percentage of apoptotic cells was calculated to quantify the results. The GMP LCP NP treatment induced apoptosis at a significantly higher level than the PBS control (Fig. 5B), but the apoptotic cells were clustered in areas surrounding the cancer cells (Fig. 5A).

Figure 5. The induction of apoptosis after administration of the GMP treatment, from the immunohistochemical staining results of the preliminary study. A) Fluorescent images from randomly selected pancreatic tumor samples (n = 3 per group). The left column is the control group; the right column is the treated GMP group. All images stain for cancer cells (red, RFP) and apoptotic cells (green, TUNEL). Only the top row includes DAPI (blue, nuclei). B) Apoptosis induction in the PBS and GMP groups (n = 3 per group). The chart on the left depicts the average percent area of apoptotic cells, while the chart on the right depicts the average percent number of apoptotic cells. ** indicates p < 0.01; data show mean ± standard deviation.

3.4 Drug-loaded LCP NPs as an immunotherapeutic treatment Flow cytometry was conducted to quantify the percent-

ages of cell populations in the follow-up study. The populations of activated dendritic cells, T cells, immunosuppressive plasma cells, and cancer stem cells were analyzed with one-way ANOVA tests and Tukey-Kramer post hoc tests. The data indicated that there was no significant difference in the percentage of regulatory T cells among the three groups and no effect on the dendritic cell populations (Fig. 6). Even less literature exists on the effect of gemcitabine on other immune cells such as cancer stem cells or immunosuppressive plasma cells.

Figure 6. Flow cytometry results from the follow-up study, comparing immune cell populations among the PBS, empty LCP, and GMP groups. There was no significant change in immune cell populations when the GMP treatment was administered. Data show mean ± standard deviation.

4. Discussion 4.1 Lipid Calcium Phosphate Delivery Platform In this project, we studied the delivery of gemcitabine through a lipid calcium phosphate nanoparticle platform. The objective was to determine whether the gemcitabine LCP NP formulation induced apoptosis in cancer cells and produced immunomodulatory effects. Gemcitabine alone is a clinically approved treatment for pancreatic cancer, yet previous studies have shown that it has little effect on pancreatic tumors in the KPC model (Hingorani et al., 2003). In fact, studies have shown that KPC model mice treated with free gemcitabine had the same tumor growth rates as mice treated with saline controls (Olive et al., 2009). Our formulation of gemcitabine monophosphate encapsulated in a lipid calcium phosphate nanoparticle reduced the average tumor growth significantly. Furthermore, the formulation induced a significant amount of apoptosis when compared to the PBS control, most likely because the nanoparticles can better penetrate the tumor. Therefore, gemcitabine

was improved as a chemotherapeutic treatment by the LCP formulation, as the tumor size was significantly reduced and a proportion of tumor cells were killed. 4.2 Immunomodulatory Effects Immune cells, such as dendritic cells, T cells, and macrophages, are key regulators of the immune response (Chang et al., 2016). These cells can be affected by the chemotherapeutic drug delivered, so the immunomodulatory effects of gemcitabine were examined. According to an extensive literature review, there has been limited research on the immunotherapeutic or immunomodulatory effects of gemcitabine, and no previous research on the immunotherapeutic effects of gemcitabine encapsulated in lipid calcium phosphate nanoparticles. Although some previous literature indicates that gemcitabine does affect certain immune cell populations, there is no clear consensus on which ones. One study reported that gemcitabine reduced granulocytic myeloid-derived suppressor cells (MDSCs) and regulatory T cells (Homma et al., 2014), but another indicated that only regulatory T cells decreased in population, with no other cell populations affected (Liyanage et al., 2002). According to previous studies, pancreatic tumors are strongly correlated with high levels of regulatory T cells (Treg), contributing to immune suppression (Miao et al., 2016), while gemcitabine as a chemodrug significantly reduced the percentage of Treg cells (Liyanage et al., 2002). Our data indicate that there was no change in immune cell populations, so the immune response was not affected and immunomodulatory effects were not observed. 4.3 Apoptosis Hypothesis From the immunohistochemical images, we found that most of the apoptosis occurred in regions outside of the cancer cells, although there was apoptosis in the cancer cells as well. The visualization of the samples provided a representation of the physical barrier of connective tissue that the nanoparticles encountered when targeting the tumor site.
With the GMP treatment, apoptosis was induced both in the cancer cells and in other tissues, but more so in the other tissues. This difference is most likely because it is much more difficult for the drug to reach the cancer cells than the surrounding cells: reaching the cancer cells requires the nanoparticles to penetrate the dense tumor nests, while affecting the surrounding cells does not. Nonetheless, our results indicate that the LCP NP formulation of gemcitabine monophosphate is more effective than free gemcitabine, the current standard of therapy for pancreatic cancer, which does not induce apoptosis in either humans or mice (Olive et al., 2009). The highlight of the study was the analysis of the location of the apoptotic cells. Results indicated that most apoptotic cells were outside of the cancer cells (Fig. 5A). Although the types of non-cancer cells present are unknown,

it was hypothesized that most of these cells are fibroblasts, which are key components of the desmoplastic extracellular matrix surrounding the cancer cells. Cancer cells in the KPC model are strongly sigma receptor-positive, and existing research indicates that αSMA-positive activated fibroblasts also have high expression of sigma receptors (Miao et al., 2016). It is likely that the targeting ligand, anisamide, on the nanoparticles encountered the fibroblasts without passing through the extracellular matrix, causing the apoptosis to be localized around the fibroblasts. In a previous study, researchers found that activated fibroblasts near the cancer cells had significantly higher uptake efficiency for targeted nanoparticles, but not for non-targeted nanoparticles (Hu et al., 2017). This finding implies that targeted nanoparticle therapies with anisamide have off-target effects, as the fibroblasts are attacked instead of the cancer cells themselves. While it is important to target supporting cells as well as cancer cells to suppress the tumor, the efficacy of the treatment could be improved if cancer cells were targeted as well. The treatment would have even greater potential if the targeting ligand were optimized so that the nanoparticles targeted the cancer cells rather than the sigma receptors on fibroblasts.

5. Conclusion and Future Work
In conclusion, gemcitabine encapsulated in lipid calcium phosphate nanoparticles was a significantly improved pancreatic cancer chemotherapy, compared to the standard clinical gemcitabine treatment and the control groups tested. An increased number of apoptotic cells was observed in the cancer cells, hindering tumor development. However, this improved formulation of gemcitabine did not significantly change the populations of T-cells, activated dendritic cells, cancer stem cells, or immunosuppressive plasma cells, indicating that there was no significant change in the regulation of the immune response.
The percentage of apoptotic cells could be increased if the permeation of the TME were improved. Various compounds, including quercetin, can remodel the TME by suppressing the expression of genes in activated fibroblasts (Hu et al., 2017). This remodeling can increase nanoparticle permeation through the dense extracellular matrix around desmoplastic tumors, allowing more drug to reach the cancer cells and improving the nanoparticle therapy. Subsequent studies could examine the efficacy of a therapy that incorporates both gemcitabine and quercetin. Furthermore, more research should focus on the long-term effects of the treatment on tumor development to determine whether chemotherapeutic resistance is a factor, in which case tumor growth would initially decrease but eventually resume as the cancer cells become resistant to the drug. Another possible path of research is to confirm the hypothesis that the apoptosis is occurring mainly in the fibroblasts. Then,

the targeting ligand, anisamide, could be replaced with a ligand that predominantly targets cancer cells rather than fibroblasts. Anisamide targets sigma receptors but cannot distinguish between the sigma-1 and sigma-2 subtypes. Recent approaches have gravitated towards ligands with high selectivity for the sigma-2 receptor, possibly reducing brain uptake and increasing uptake in tumor cells (van Waarde et al., 2015). However, ligands selective for the sigma-2 receptor have not been substantively investigated for pancreatic cancer, and certainly not in the KPC mouse model of pancreatic cancer. One study, which tested multiple cell lines (BxPC3, AsPC1, CFPAC, Panc1, and MiaPaCa-2), identifies the sigma-2 receptor ligands SW43, SRM, and SV119 as candidates (Hornick et al., 2010). If an improved targeting ligand were to be explored, the next step would be to apply these sigma-2 receptor ligands to the KPC mouse model of PDA, which is the most clinically relevant model. These subsequent studies would be extremely useful in investigating the improvements that can be applied to the treatment, as well as its efficacy in a clinical setting.

6. Acknowledgements
I would like to thank Manisit Das and Dr. Leaf Huang at the Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, for their continued mentorship and guidance. I would also like to thank Dr. Michael Bruno, Dr. Monique Williams, Dr. Sarah Shoemaker, and the NCSSM Foundation for making my work at UNC possible.

7. References

Banerjee, R., Tyagi, P., Li, S., & Huang, L. (2004). Anisamide-targeted stealth liposomes: A potent carrier for targeting doxorubicin to human prostate cancer cells. International Journal of Cancer, 112(4), 693–700.

Biological Sciences Division, UC. (n.d.). Two Ways to Count Cells with ImageJ. Retrieved from https:// pdf

Chang, J. H., Jiang, Y., & Pillarisetty, V. G. (2016). Role of immune cells in pancreatic cancer from bench to clinical application: An updated review. Medicine, 95(49), e5541.

Conroy, T., Desseigne, F., et al., PRODIGE Intergroup. (2011). FOLFIRINOX versus gemcitabine for metastatic pancreatic cancer. New England Journal of Medicine, 364(19), 1817–1825.

Ellermeier, J., Wei, J., et al. (2013). Therapeutic efficacy of bifunctional siRNA combining TGF-β1 silencing with RIG-I activation in pancreatic cancer. Cancer Research, 73(6), 1709–1720.

Feig, C., Gopinathan, A., Neesse, A., Chan, D. S., Cook, N., & Tuveson, D. A. (2012). The pancreas cancer microenvironment. Clinical Cancer Research, 18(16), 4266–4276.

Gavrieli, Y., Sherman, Y., & Ben-Sasson, S. A. (1992). Identification of programmed cell death in situ via specific labeling of nuclear DNA fragmentation. Journal of Cell Biology, 119(3), 493–501. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2289665/pdf/jc1193493.pdf

Goldstein, D., El-Maraghi, R. H., et al. (2015). nab-Paclitaxel plus gemcitabine for metastatic pancreatic cancer: Long-term survival from a phase III trial. JNCI: Journal of the National Cancer Institute, 107(2), dju413.

Hingorani, S. R., Petricoin, E. F., et al. (2003). Preinvasive and invasive ductal pancreatic cancer and its early detection in the mouse. Cancer Cell, 4(6), 437–450.

Hirshberg Foundation for Pancreatic Cancer Research. (n.d.). Prognosis of pancreatic cancer. Retrieved from

Homma, Y., Taniguchi, K., et al. (2014). Changes in the immune cell population and cell proliferation in peripheral blood after gemcitabine-based chemotherapy for pancreatic cancer. Clinical and Translational Oncology, 16(3), 330–335.

Hornick, J. R., Xu, J., et al. (2010). The novel sigma-2 receptor ligand SW43 stabilizes pancreas cancer progression in combination with gemcitabine. Molecular Cancer, 9(1), 298.

Hu, K., Miao, L., Goodwin, T. J., Li, J., Liu, Q., & Huang, L. (2017). Quercetin remodels the tumor microenvironment to improve the permeation, retention, and antitumor effects of nanoparticles. ACS Nano, 11(5), 4916–4925.

Liyanage, U. K., Moore, T. T., et al. (2002). Prevalence of regulatory T cells is increased in peripheral blood and tumor microenvironment of patients with pancreas or breast adenocarcinoma. Journal of Immunology, 169(5), 2756–2761.

Miao, L., Li, J., et al. (2017). Transient and local expression of chemokine and immune checkpoint traps to treat pancreatic cancer. ACS Nano, acsnano.7b01786.

Miao, L., Newby, J. M., et al. (2016). The binding site barrier elicited by tumor-associated fibroblasts interferes disposition of nanoparticles in stroma-vessel type tumors. ACS Nano, 10(10), 9243–9258.

Olive, K. P., Jacobetz, M. A., et al. (2009). Inhibition of Hedgehog signaling enhances delivery of chemotherapy in a mouse model of pancreatic cancer. Science, 324(5933), 1457–1461.

OncoLink Team. (2015). Gemcitabine (Gemzar®). Retrieved from

Qiu, W., & Su, G. H. (2013). Development of orthotopic pancreatic tumor mouse models. Methods in Molecular Biology, 980, 215–223.

Rahib, L., Smith, B. D., Aizenberg, R., Rosenzweig, A. B., Fleshman, J. M., & Matrisian, L. M. (2014). Projecting cancer incidence and deaths to 2030: The unexpected burden of thyroid, liver, and pancreas cancers in the United States. Cancer Research, 74(11), 2913–2921.

Satterlee, A. B., & Huang, L. (2016). Current and future theranostic applications of the lipid-calcium-phosphate nanoparticle platform. Theranostics, 6(7), 918–929.

Teague, A., Lim, K.-H., & Wang-Gillam, A. (2015). Advanced pancreatic adenocarcinoma: A review of current treatment strategies and developing therapies. Therapeutic Advances in Medical Oncology, 7(2), 68–84.

van Waarde, A., Rybczynska, A. A., Ramakrishnan, N. K., Ishiwata, K., Elsinga, P. H., & Dierckx, R. A. J. O. (2015). Potential applications for sigma receptor ligands in cancer diagnosis and therapy. Biochimica et Biophysica Acta (BBA) - Biomembranes, 1848(10), 2703–2714.

Von Hoff, D. D., Ervin, T., et al. (2013). Increased survival in pancreatic cancer with nab-paclitaxel plus gemcitabine. New England Journal of Medicine, 369(18), 1691–1703.

Zhang, Y., Kim, W. Y., & Huang, L. (2013). Systemic delivery of gemcitabine triphosphate via LCP nanoparticles for NSCLC and pancreatic cancer therapy. Biomaterials, 34(13), 3447–3458.

Zhang, Y., Schwerbrock, N. M., Rogers, A. B., Kim, W. Y., & Huang, L. (2013). Codelivery of VEGF siRNA and gemcitabine monophosphate in a single nanoparticle formulation for effective treatment of NSCLC. Molecular Therapy, 21(8), 1559–1569.

PLASMON-ASSISTED PHOTOTHERMAL CATALYSIS FOR THE METHANOL STEAM REFORMING REACTION

Vincent Xia, Alex Xiong, and Rohin Shivdasani

Abstract
Solar fuels, or energy-storing compounds produced from light, are promising because of their excellent power density and "on-demand" availability, rendering them superior to many conventional renewable energy sources; using solar energy to produce hydrogen can therefore be advantageous compared to photovoltaics. The goal of this study is to develop a method of performing methanol steam reforming through plasmon-driven photothermal catalysis. Plasmonic nanowave (NW) substrates coated with Cu/ZnO/Al2O3 (CZA) nanoparticles are used to exploit localized surface plasmon resonance (LSPR) and thus reach the temperatures required for the methanol steam reforming reaction. Gas chromatograms showing the presence of hydrogen were obtained after plasmonic substrates were tested in closed vials under ~56x solar intensity. The "cold reactor," with maximum ambient fluid temperatures of only 35°C, produced approximately 42% hydrogen in gas chromatography measurements. Although this falls short of the 75% hydrogen production predicted in the literature for methanol steam reforming, the discrepancy was attributed to vial and syringe leakage, and the results provide evidence that LSPR processes drove the methanol steam reforming reaction. Additionally, a hybrid H2 generation system is proposed; using plasmonic substrates in such a system would have the potential to efficiently produce cleaner energy for the future.

1. Introduction
1.1 - Background
As the global population continues to grow, humans are steadily consuming more energy resources. At the forefront of these dwindling resources lies the costly, damaging, and inefficient use of fossil fuels. From automobiles to power generation, nearly all human activities are driven by fossil fuel consumption. As a result, economic growth is inextricably linked to energy consumption in today's world.
In 2004, The World Energy Outlook found that “There is a very strong link between per capita energy consumption (commercial and non-commercial) and the UN Human Development Index for all countries” (Fig. 1) (International Energy Agency, 2016).

Thus, the need for alternative energy sources has risen dramatically, leading to an increase in clean and renewable energy research. Because the use of hydrogen as a fuel is virtually non-polluting and efficient, hydrogen has been called the energy carrier of the future (Edwards et al., 2008). A more efficient and sustainable method of hydrogen production would have global implications.

1.2 - Steam Reforming
Current steam reforming methods use catalysts and heat to react hydrocarbons and water in large reactors, producing hydrogen gas and carbon monoxide. According to the stoichiometry, only 1 mole of water is needed to convert 1 mole of methanol, but in practice much more steam is required in conventional methods. A thermally driven nanoscale plasmonic reactor may achieve a more stoichiometrically favorable production of hydrogen via the methanol steam reforming reaction (Jiang et al., 1993):

CH3OH (l) + H2O (l) → CO2 (g) + 3H2 (g) (1)
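Reaction (1) fixes the theoretical dry-product composition: each mole of methanol converted yields three moles of H2 and one mole of CO2. A minimal bookkeeping sketch (plain stoichiometry only, not a model of any reactor) shows where the 75% hydrogen figure quoted in the Abstract comes from:

```python
# Ideal product composition of methanol steam reforming:
# CH3OH + H2O -> CO2 + 3 H2
h2_per_methanol = 3    # mol H2 per mol CH3OH converted
co2_per_methanol = 1   # mol CO2 per mol CH3OH converted

total_gas = h2_per_methanol + co2_per_methanol
h2_fraction = h2_per_methanol / total_gas  # mole fraction of H2 in the product gas

print(f"Theoretical H2 fraction: {h2_fraction:.0%}")  # -> Theoretical H2 fraction: 75%
```

Any measured composition below this ideal 3:1 ratio therefore points to dilution, leakage, or incomplete conversion rather than to the stoichiometry itself.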

Figure 1. Human Development Index and primary energy demand per capita, 2002 (International Energy Agency, 2004).

Reaction (1) has been shown to occur between 200°C and 300°C when copper-based catalysts are used, with less than 1% CO gas as a byproduct (Takezawa, Kobayashi, Hirose, Shimokawabe, & Takahashi, 1982). These temperatures are needed to overcome the high activation energy barrier of the rate-limiting steps of the reaction (Christopher, Xin, & Linic, 2011). However, our work suggests that plasmonic structures coated with a nanoscale catalyst can accomplish the reaction in a "cold reactor," in which the system as a whole does not reach activation temperatures. Very little research has focused on the possibility that plasmon-assisted catalysis can be applied to convert alcohol to hydrogen. In one study, conversion to CO2 and H2 with an Au/ZnO catalyst was tested (Adleman et al., 2009; Jones, Neal, & Hagelin-Weaver, 2008). However, that study used Au nanoparticles as both the plasmonic and catalytic material. Because pure Au is not an ideal catalyst for alcohol steam reforming, hydrogen production was limited (Santacesaria & Carrá, 1983). Furthermore, the study did not investigate the interaction between plasmonic and catalytic processes. In contrast, our study considers these processes in much more depth.

1.3 - Localized Surface Plasmon Resonance and Plasmonic Heating
Localized surface plasmon resonance (LSPR) occurs when incident photons are shone on nanostructured materials with high free-electron mobility (e.g., Ag, Au, doped semiconductors). When the light is at the resonance frequency of the surface electrons, these electrons oscillate against the restoring force of the positively charged nuclei (Chou et al., 2012). For Cu, Au, and Ag, this frequency occurs in the visible light spectrum. As a result, photonic energy builds up on the surface of the nanostructured materials, increasing the electric field strength and the concentration of energetic electrons in the region (Brus, 2008). Plasmonic heating occurs as a consequence of the nonradiative decay of LSPR, i.e., the decay of the plasmon resonance described above. Once generated, the plasmon-induced hot carriers (electrons) lose their phase of oscillation with the photon frequencies. As a result, hot carriers release their energy to the particle via electron-phonon scattering on a timescale of ~1 ps, generating localized heat in the vicinity of the plasmonic nanostructures (Chen et al., 2014).

1.4 - Photocatalytic Reactor
A closed photocatalytic reactor was used. Spherical polystyrene shells were deposited on a glass substrate and subsequently coated with Au, constituting the plasmonic substrate (Fig. 2).
Then, CuO/ZnO/Al2O3 catalytic nanoparticles were applied using heated spray deposition. The resulting sample was placed in a 1:1.87 water-to-methanol mixture underneath the solar simulator, and product gas was extracted from this reactor and measured. Based on preliminary experiments, plasmonic spheres of the diameter used here (520 nm) exhibit maximum absorbance at approximately 500 nm, in the green wavelengths of light near the maximum solar intensity within the visible spectrum (Terashima, Fujita, Inoue, Chow, & Oguchi, 2013).

1.5 - Goal
The primary goal of this study was to use the localized high temperatures on the surface of plasmonic nanostructures to efficiently drive the methanol steam reforming reaction. This study also aims to demonstrate that, by producing significant amounts of H2, plasmon-driven photothermal catalysis can provide sustainable, independent, and local electricity generation.

Figure 2. Plasmonic reactor consisting of Au-coated spherical polystyrene shells on a glass slide. "Hot spots" for H2 generation are depicted.

2. Materials and Methods
2.1 - Experimental Paradigms
Absorbance, reflectance, and transmittance data were obtained using a Shimadzu UV-3600 UV-VIS-NIR spectrophotometer. Scanning electron micrographs (SEM) were taken with an FEI XL30 SEM-FEG, and transmission electron micrographs (TEM) were taken using an FEI Tecnai G2 Twin (700,000x, 200 kV, point resolution 0.3 nm).

2.2 - Plasmonic Substrate
Prepared by a collaborator using self-assembly at the water-air interface, the substrates consisted of a monolayer of polystyrene spheres (520 nm) annealed to a thin hydrophilic glass slide, with a 50 nm layer of Ti and a 195 nm layer of Au deposited onto the surface (Ngo et al., 2013). Au was chosen for its low reactivity and thickness threshold, so that the resulting plasmonic nanowave (NW) substrate is highly stable for extended periods of time and easily reproducible (Đurović, Bugarčić, & van Eldik, 2017).

2.3 - Catalytic Nanoparticles
CuO/ZnO/Al2O3 (CZA) produced by a flame spray pyrolysis (FSP) method was used because this method had a 100% methanol conversion rate at 255°C and produced relatively low levels of CO2. Generally, when hydrogen is generated from hydrocarbons, large percentages of CO are produced, but FSP-produced CZA nanoparticles have been shown to convert 100% of CO within operational conditions, producing gas more appropriate for fuel-cell use. The FSP method also allows for more customization of particle size, mass-production feasibility, and catalyst composition (Lim et al., 2013; Purnama et al., 2004). In the following experiments, FSP-produced CZA nanoparticles (~20 nm in diameter) with a weight percent composition of 65%:25%:10% (CuO:ZnO:Al2O3) were used.

Figure 3. Hydrogen areas for syringe-injected samples of varying amounts of calibration gas and helium. A) 99% He, 1% calibration gas; area 3.47. B) 90% He, 10% calibration gas; area 5.58. C) 67% He, 33% calibration gas; area 7.32. D) 50% He, 50% calibration gas; area 11.8. E) 100% calibration gas; area 26.3.

2.4 - Experimental Setup for Photocatalytic Testing
To prepare the catalyst for deposition, a mixture of 2 mg of CZA nanoparticles per mL of ethanol was sonicated for 60 minutes in a 31°C water bath to disperse the agglomerated particles. Using a heated spray deposition technique, the substrate was heated to 300°C on a hot plate and ~5 mL of catalyst solution was sprayed onto it, leaving behind a layer of catalyst particles as the ethanol evaporated. An Oriel Sol1A full-spectrum solar simulator was used to irradiate the sample, and a Fresnel lens was used to concentrate the light ~56x. The product gas was then extracted with a gas-tight syringe and analyzed using a 7890A Agilent gas chromatograph to determine the product gas composition. The formation of product gas bubbles, their detachment from the substrate, and their movement to the top of the reactor vial were recorded with a high-resolution CCD camera (Nikon). These images were used to quantify the size of the bubbles and determine the approximate volumetric gas production rate.

2.5 - Thermodynamic Analysis
The apparent local temperatures on the catalyst particles were estimated from an Arrhenius-type reaction rate (Eq. 2), where the molar rate of hydrogen production depends on the apparent catalyst temperature Tapp:

ṅ_H2 = A_R · m_cat,act · c_CH3OH · exp(−E_a / (R·T_app)) (2)

where c_CH3OH is the molar concentration of methanol in the reactant mixture, m_cat,act is the mass of plasmon-activated catalyst, and R is the ideal gas constant. The activation energy E_a (84.1 kJ/mol) and pre-constant A_R (6.1984·10⁶ m³/(kg·s)) have been previously measured (Đurović, Bugarčić, & van Eldik, 2017). The catalyst loading is 1 mg/cm²; however, only a small portion of the catalyst is close enough to the locations of the plasmonic effect between two spheres of the plasmonic substrate. The area fraction of plasmonic hot spots ϕ (Equation (3)) depends on the spatial confinement of the hot spots Lp and the diameter of the spheres Dsph.
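Equation (2) makes the reaction rate exponentially sensitive to the local temperature. As a rough, self-contained illustration (using only the quoted E_a = 84.1 kJ/mol; the pre-constant, catalyst mass, and methanol concentration cancel when taking a ratio), the Arrhenius factor at a representative ~200°C hot spot exceeds that at the 35°C bulk liquid by roughly five orders of magnitude:

```python
import math

E_a = 84.1e3   # activation energy, J/mol (value quoted above)
R = 8.314      # ideal gas constant, J/(mol K)

def arrhenius_factor(T_kelvin):
    """The exp(-Ea / (R T)) term of Eq. (2); all prefactors cancel in a ratio."""
    return math.exp(-E_a / (R * T_kelvin))

T_bulk = 35 + 273.15   # maximum ambient liquid temperature, K
T_hot = 200 + 273.15   # representative plasmonic hot-spot temperature, K (assumed)

speedup = arrhenius_factor(T_hot) / arrhenius_factor(T_bulk)
print(f"Hot-spot rate enhancement: {speedup:.1e}")
```

The ratio comes out near 10⁵, which is why localized plasmonic heating can drive the reaction even though the bulk liquid never approaches activation temperatures.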

3. Results
3.1 - Syringe Injection Calibrations
After observing photocatalytic activity in the reactor, GC injection measurements were used to assay the gas composition, with a particular focus on the detection of H2 and CO2. For the purposes of H2 quantification, only H2 area values are given here. To ensure these measurements were as accurate as possible, GC calibration measurements on various known gas mixtures were taken prior to experimental testing. Both continuous-flow calibration data and syringe-injection calibration data were gathered. Syringe calibration data were collected for 1%, 10%, 33%, 50%, and 100% calibration gas, with the remainder of the vial filled with the carrier gas He (Fig. 3). Given that the final reactor design exclusively supported syringe-injection GC measurements, only syringe-injection calibration data is discussed in the following sections.
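A calibration curve of this kind can be sketched from the syringe-injection points in Fig. 3, assuming the calibration gas is 74.7% H2 (its composition as given later in Section 3.4) and that the He balance contributes no H2. This illustrative least-squares cubic has the same functional form as the published fit (Fig. 9), but its coefficients will not exactly match, since the published fit used the full calibration set:

```python
import numpy as np

# Syringe-injection calibration points from Fig. 3:
# percent calibration gas in the vial -> measured H2 peak area
cal_percent = np.array([1, 10, 33, 50, 100], dtype=float)
peak_area = np.array([3.47, 5.58, 7.32, 11.8, 26.3])

# Assumed H2 content: calibration gas is 74.7% H2; He carrier contributes none
h2_percent = 0.747 * cal_percent

# Least-squares cubic mapping peak area -> % H2 (same form as Fig. 9)
coeffs = np.polyfit(peak_area, h2_percent, deg=3)
to_h2_percent = np.poly1d(coeffs)

print(f"Estimated H2 at area 13.1: {to_h2_percent(13.1):.1f}%")
```

The fitted polynomial can then be evaluated at any measured peak area to estimate the hydrogen fraction of an unknown sample.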

Figure 4. UV-Vis Reflectance of plasmonic samples of various spray times. Plasmonic dip is apparent at approximately 500 nm.

Figure 5. Plasmonic and catalytic materials. a) SEM image of plasmonic substrate (top view), b) TEM image of catalytic CuO/ZnO/Al2O3 nanoparticles, c) SEM image of plasmonic substrate (side view) showing PS spheres (dark) coated with Au (light), d) SEM image of plasmonic substrate coated with CuO/ZnO/Al2O3 nanoparticles (side view).

3.2 - UV-Vis Spectroscopy and SEM & TEM Imaging
To determine whether the nanowave substrates behaved as predicted plasmonically, UV-Vis spectroscopy was used to measure the reflectance of a series of samples sprayed with varying amounts of catalyst. Indeed, the lowest reflectance (and thus highest absorbance) occurred at approximately 500 nm, indicating that the samples absorbed large quantities of visible green light (Fig. 4). SEM imaging was used to view the original plasmonic substrates (Fig. 5 a and c), and TEM imaging was used to view the catalytic CZA nanoparticles separately (Fig. 5 b). After coating, the layer of CZA catalyst on top of the plasmonic substrate was also viewed via SEM (Fig. 5 d). These images served as a reference to verify the distribution of catalyst coating throughout the experiment.

3.3 - Initial Gas Production
In this study, various amounts of catalyst were tested to optimize the coating levels of plasmonic samples. The hydrogen peak areas for 5, 10, 15, 20, 25, and 30 seconds of catalyst coating were 6.75, 7.33, 9.89, 13.9, 9.14, and 6.96, respectively; 20 seconds of sprayed catalyst yielded the largest hydrogen peak (Fig. 6). Early tests were performed in closed airtight vials containing the plasmonic substrate and completely filled with liquid methanol-water fuel (Fig. 7). When solar irradiance of 1 sun (1000 W/m² intensity) was focused with a 56x lens onto the sample, gas bubbles were quickly produced in an irradiated area approximately 1 mm in diameter. During all tests, the liquid remained below 35°C, indicating that only localized temperatures were responsible for gas production. Both NW substrate and catalyst coating were found to be necessary for gas generation. Furthermore, when plasmonic substrates coated with catalyst were active, the samples showed little or no condensation, meaning the samples were not simply boiling.

Figure 6. H2 dips for samples coated with 5, 10, 15, 20, 25, and 30 seconds of CZA spray in cases A) - F). The dips have areas of 6.75, 7.33, 9.89, 13.9, 9.14, and 6.96 in these six cases, respectively.

Figure 7. Initial tests of a plasmonic-catalytic sample immersed in liquid methanol-water fuel under 20x concentrated solar irradiance. (a) A gas bubble (~350 µm) growing on the sample, (b) detaching from the sample, and (c) moving upwards.

Figure 8. Consistent hydrogen dips of plasmonic samples coated with 20 sec of CZA spray. The areas of the dips are A) 12.8, B) 13.9, and C) 12.5.

3.4 - Hydrogen Production (Gas Chromatography)
Initially, samples were tested in open containers for visual evidence of hydrogen generation (Fig. 7). Given the high levels of observed activity, a continuous-flow system with He as a carrier gas was tested. However, the relatively small quantities of H2 produced compared to He were difficult to detect in gas chromatograms, indicating that the continuous-flow system lacked accuracy. Closed vials filled with 5 mL of methanol-water mixture and purged with the carrier gas He were instead used for subsequent trials. Calibrations were completed with the closed vials, in which varying amounts of calibration gas (24.3% CO2, 74.7% H2, and 1.02% CO) were placed along with He. With plasmonic samples, after approximately 4 hours under solar simulation, gas was extracted, analyzed by GC, and compared with the helium calibrations. Additionally, air calibration measurements were conducted. Compared to these, the results consistently indicated a significant characteristic hydrogen dip (Fig. 8). The calibration measurements led to a curve fit (Fig. 9):

y = 6.0·10⁻⁵x³ - 0.0033x² + 0.0861x - 0.2677 (4)

This curve was used to approximate the amount of hydrogen produced in the plasmonic samples.

Figure 9. Hydrogen area to % composition conversions. Best-fit equation: y = 6.0·10⁻⁵x³ - 0.0033x² + 0.0861x - 0.2677.

4. Discussion
4.1 - Analysis of Data
Based on the GC calibration measurements, the best-fit curve in Equation (4) was used to approximate the amount of hydrogen in plasmonic samples from the composition of calibration gas. The results yield an average area of 13.1 across 3 different samples (Fig. 8); the percent composition of hydrogen is calculated to be 42.7% (Fig. 9). In contrast to a measurement of atmospheric air, this elevated hydrogen level confirms the occurrence of the methanol steam reforming reaction. However, this value is lower than the 75% predicted for methanol steam reforming in the literature (Tesser, Di Serio, & Santacesaria, 2009). The presence of both N2 and O2 in the resulting GC measurements suggests there were minor leakages, causing the lower H2 levels.

With regard to optimal spray quantity, hydrogen production increases as the amount of spray increases from 5 seconds to 20 seconds (Fig. 6). Beyond this amount of catalyst coating, however, the efficiency of the reaction decreases and hydrogen production falls. This phenomenon can be explained via the UV-Vis data. The 0 sec sprayed substrate has a dip at approximately 500 nm (Fig. 4), reinforcing the conclusion that the plasmonic substrate absorbs light at the optimal wavelength of green light. More catalyst absorbs more light, causing the absorbance to increase up to 20 sec of spray. Beyond this, the plasmonic dip shifts to the right and flattens: with too much coating, the sample loses much of its plasmonic character, its optimal absorbance moves to longer wavelengths, and it can no longer produce localized high temperatures as efficiently. Although SEM and TEM imaging were performed for a variety of samples, the quantity of coating was difficult to distinguish in the resulting images, as the plasmonic character arises at the nanoscale. However, the discrepancy in shading indicates that catalyst was effectively deposited on the NW substrates (Fig. 5 c & d).

4.2 - Use of Plasmonics & Catalyst vs. Only Plasmonics, Catalyst, or Neither
In this study, multiple experiments compared the use of plasmonics with catalyst to just plasmonics, just catalyst, and neither.
Compared to procedures without catalyst or NW substrate, the CZA-coated plasmonic samples performed significantly better both qualitatively and quantitatively. Without a catalyst, the substrate could not reach high activation temperatures; without the plasmonic NW, the sample only reached the maximum ambient reactor temperature; and without either, the sample could achieve neither the necessary activation energy nor the localized temperature. To ensure that the color of the coated substrates did not itself contribute significantly to an increase in temperature, a plain glass slide was coated with a layer of black paint and tested, yielding no activity. Together, these results strongly indicate that the presence of both the plasmonic substrate and the catalyst is crucial for the production of hydrogen.
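The 42.7% composition quoted in Section 4.1 follows directly from evaluating the published calibration cubic (Fig. 9, Equation (4)) at the mean peak area of the three samples in Fig. 8; the small difference below reflects rounding in the reported coefficients:

```python
import numpy as np

# Best-fit calibration cubic from Fig. 9 / Eq. (4): peak area -> H2 fraction
coeffs = [6.0e-5, -0.0033, 0.0861, -0.2677]

mean_area = (12.8 + 13.9 + 12.5) / 3   # areas of the three samples in Fig. 8
h2_fraction = np.polyval(coeffs, mean_area)

print(f"Mean area: {mean_area:.1f}, H2 composition: {h2_fraction:.1%}")
# -> Mean area: 13.1, H2 composition: 42.8%
```

This agrees with the ~42.7% reported in the text to within coefficient rounding.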


4.3 - Analysis of Thermodynamic Data
Immediately upon irradiation, small gas bubbles formed on the substrate, detached, and quickly rose towards the top of the vial. A bubble was generated approximately every 0.8 s, leading to a gas production rate per irradiated area of 2.73·10⁻⁵ mol/(s·m²). Reference tests with catalyst deposited on glass and on Au-coated glass, as well as plasmonic substrates without catalyst, showed no gas production. During all tests, the bulk liquid remained at a temperature very close to ambient (below 35°C). Analysis of the reaction kinetics on the given CZA catalyst and the amount of gas generated under sunlight allowed for the calculation of an apparent local temperature Tapp on the surface of the catalyst (see Equation (2) in the Methods section for details) (Jones, Neal, & Hagelin-Weaver, 2008). The apparent local temperature strongly depends on the plasmonic hot spot area, which is determined by the length scale of the plasmonic hot spots Lp, estimated to be between 5 and 10 nm for the present plasmonic substrate (Jones, Neal, & Hagelin-Weaver, 2008). The area fraction of plasmonic hot spots ϕ was in the range of 0.6-1.7% (Equation (3)). This relatively small value may have contributed to a loss in efficiency, but because Tapp, calculated using Equation (2), was found to be in the range of 194.5-219.3°C for Lp of 5-10 nm (Fig. 10), LSPR produced sufficiently high temperatures to catalyze the methanol steam reforming reaction.
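The bubble-counting estimate described in Section 2.4 can be sketched as follows; the bubble diameter (~350 µm, from Fig. 7), the ~0.8 s generation period, and an ideal-gas conversion at ambient conditions are illustrative assumptions rather than the authors' exact procedure:

```python
import math

# Illustrative parameters (diameter and period from Fig. 7 and the text;
# the ideal-gas conversion and ambient conditions are assumptions)
bubble_diameter = 350e-6   # m
period = 0.8               # s between bubbles
P = 101_325                # Pa, ambient pressure
T = 308.15                 # K (~35 C, maximum observed liquid temperature)
R = 8.314                  # J/(mol K)

bubble_volume = math.pi / 6 * bubble_diameter**3   # sphere volume, m^3
mol_per_bubble = P * bubble_volume / (R * T)       # ideal gas law, n = PV/RT
production_rate = mol_per_bubble / period          # mol of product gas per second

print(f"Gas production rate: {production_rate:.2e} mol/s")
```

Dividing such a per-bubble molar rate by the relevant reference area then yields an area-normalized production rate of the kind quoted above.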

Figure 10. Apparent local temperature Tapp and area fraction of plasmonic hot spots ϕ, depending on the length scale of plasmonic hot spots Lp. Right inset: top view of a plasmonic substrate with plasmon-activated hot-spot areas [Barnes, J., unpublished].

4.4 - Hybrid H2 Generation System
This research project demonstrates the possibility of generating hydrogen at temperatures of 235–260°C inside a plasmonic reactor system, ideally for stationary fuel cell systems. Pairing biomethanol and solar energy allows for efficient hydrogen generation while maintaining sustainable energy practices (Real, Dumanyan, & Hotz, 2016). Additionally, practically all biomass, especially overproduced or undesirable sources, can be used to inexpensively

produce biomethanol, resulting in a net-zero addition to the natural carbon cycle (Nakagawa et al., 2011). When compared to photovoltaics, the principal benefit of this hybrid H2 generation system is the capability and ease of compressing and storing large reserves of energy as a solar fuel, eliminating the need for large and expensive battery packs. Illustrated below is the concept for a localized residential hybrid H2 generation system (Fig. 11).

Figure 11. Proposed hybrid solar H2 generation system where methanol is converted to hydrogen that is subsequently used in a fuel cell to produce electricity [Hotz, N., unpublished].

5. Conclusions and Future Work
Even though the world reserves of oil, natural gas, uranium, and coal will soon be exhausted, most alternative energy methods are not widely used due to their inefficiency. The goal of this research was to produce hydrogen gas in a cold reactor, thereby increasing the efficiency of hydrogen production; the hydrogen can then be used in fuel cells to produce clean electricity. Our study presented a novel methanol steam reforming method using a cold reactor with a heterogeneous FSP-produced catalyst on a gold plasmonic substrate. Our results consistently indicated the presence of approximately 42% H2 over several tests. Although this lies below the literature value of 75%, a significant amount of N2 and O2 (the primary components of air) was present in the GC measurements; leakages within the reactor therefore likely contributed to the decreased percentage of hydrogen in the sample. To improve, this study could employ better methods of GC injection to prevent hydrogen leakage. Furthermore, additional NW substrate production methods, such as varying the nanosphere size, could be used to increase the quantity of hot spots and catalytically active sites on samples, producing larger quantities of hydrogen (Link & El-Sayed, 1999). We are currently investigating the use of a nanostar catalyst in place of the traditional CZA catalyst, as it has been shown that hot spots typically form at the points of nanostars (Bibikova et al., 2017); adding nanostars to the fluid would therefore provide a greater quantity of available reaction area. Moving forward, we will need not only to test this approach in a real-world system but also to perform an efficiency analysis of the plasmonic photothermal catalyst. As a result of this work, we are one step closer to complementing traditional methods of sustainable energy generation and accelerating the transition away from fossil fuels. Our studies have shown that the proposed method has the potential to improve the efficiency of solar fuels by using localized heating. Hybrid energy systems, such as the one proposed herein, will be able to employ plasmonics as a more efficient method of methanol steam reforming in the future.

6. Acknowledgements

We would like to extend special thanks to Dr. Nico Hotz and Jena Barnes from the Thermodynamics and Sustainable Energy Laboratory at Duke University for their guidance and support, without which the completion of this project would not have been possible. We would also like to thank the North Carolina School of Science and Mathematics and Dr. Michael Bruno for supporting this research project.

7. References

Adleman, J. R., Boyd, D. A., Goodwin, D. G., & Psaltis, D. (2009). Heterogeneous Catalysis Mediated by Plasmon Heating. Nano Letters, 9(12), 4417–4423.

Bibikova, O., Haas, J., López-Lorente, Á. I., Popov, A., Kinnunen, M., Ryabchikov, Y., Kabashin, A., Meglinski, I., & Mizaikoff, B. (2017). Surface Enhanced Infrared Absorption Spectroscopy Based on Gold Nanostars and Spherical Nanoparticles. Analytica Chimica Acta.

Brus, L. (2008). Noble Metal Nanocrystals: Plasmon Electron Transfer Photochemistry and Single-Molecule Raman Spectroscopy. Accounts of Chemical Research, 41(12), 1742–1749.

Chen, X.-J., Cabello, G., Wu, D.-Y., & Tian, Z.-Q. (2014). Surface-Enhanced Raman Spectroscopy toward Application in Plasmonic Photocatalysis on Metal Nanostructures. Journal of Photochemistry and Photobiology C: Photochemistry Reviews, 21, 54–80.

Chou, L.-W., Shin, N., Sivaram, S. V., & Filler, M. A. (2012). Tunable Mid-Infrared Localized Surface Plasmon Resonances in Silicon Nanowires. Journal of the American Chemical Society, 134(39), 16155–16158.

Christopher, P., Xin, H., & Linic, S. (2011). Visible-Light-Enhanced Catalytic Oxidation Reactions on Plasmonic Silver Nanostructures. Nature Chemistry, 3, 467–472.

Đurović, M. D., Bugarčić, Ž. D., & van Eldik, R. (2017). Stability and Reactivity of Gold Compounds – From Fundamental Aspects to Applications. Coordination Chemistry Reviews, 338 (Supplement C), 186–206.

Edwards, P. P., Kuznetsov, V. L., David, W. I. F., & Brandon, N. P. (2008). Hydrogen and Fuel Cells: Towards a Sustainable Energy Future. Energy Policy, 36(12), 4356–4362.

International Energy Agency. (2004). Chapter 10 - Energy and Development. Retrieved from

International Energy Agency. (2016). Executive Summary. World Energy Outlook. Retrieved from https://www.iea.org/publications/freepublications/publication/WorldEnergyOutlook2016ExecutiveSummaryEnglish.pdf.

Jiang, C. J., Trimm, D. L., Wainwright, M. S., & Cant, N. W. (1993). Kinetic Mechanism for the Reaction between Methanol and Water over a Cu-ZnO-Al2O3 Catalyst. Applied Catalysis A: General, 97(2), 145–158.

Jones, S. D., Neal, L. M., & Hagelin-Weaver, H. E. (2008). Steam Reforming of Methanol Using Cu-ZnO Catalysts Supported on Nanoparticle Alumina. Applied Catalysis B: Environmental, 84(3), 631–642.

Lim, E., Visutipol, T., Peng, W., & Hotz, N. (2013). Flame-Made CuO/ZnO/Al2O3 Catalyst for Methanol Steam Reforming. Proceedings of the ASME 7th International Conference on Energy Sustainability, collocated with the ASME Heat Transfer Summer Conference and the ASME 11th International Conference on Fuel Cell Science, Engineering and Technology.

Link, S., & El-Sayed, M. A. (1999). Size and Temperature Dependence of the Plasmon Absorption of Colloidal Gold Nanoparticles. Journal of Physical Chemistry B, 103(21), 4212–4217.

Nakagawa, H., Sakai, M., Harada, T., Ichinose, T., Takeno, K., Matsumoto, S., Kobayashi, M., Matsumoto, K., & Yakushido, K. (2011). Biomethanol Production from Forage Grasses, Trees, and Crop Residues. In Biofuel's Engineering Process Technology. InTech.

Ngo, H. T., Wang, H.-N., Fales, A. M., & Vo-Dinh, T. (2013). Label-Free DNA Biosensor Based on SERS Molecular Sentinel on Nanowave Chip. Analytical Chemistry, 85(13), 6378–6383.

Purnama, H., Ressler, T., Jentoft, R. E., Soerijanto, H., Schlögl, R., & Schomäcker, R. (2004). CO Formation/Selectivity for Steam Reforming of Methanol with a Commercial CuO/ZnO/Al2O3 Catalyst. Applied Catalysis A: General, 259(1), 83–94.

Real, D., Dumanyan, I., & Hotz, N. (2016). Renewable Hydrogen Production by Solar-Powered Methanol Reforming. International Journal of Hydrogen Energy, 41(28), 11914–11924.

Santacesaria, E., & Carrá, S. (1983). Kinetics of Catalytic Steam Reforming of Methanol in a CSTR Reactor. Applied Catalysis, 5(3), 345–358.

Takezawa, N., Kobayashi, H., Hirose, A., Shimokawabe, M., & Takahashi, K. (1982). Steam Reforming of Methanol on Copper-Silica Catalysts; Effect of Copper Loading and Calcination Temperature on the Reaction. Applied Catalysis, 4(2), 127–134.

Terashima, I., Fujita, T., Inoue, T., Chow, W. S., & Oguchi, R. (2009). Green Light Drives Leaf Photosynthesis More Efficiently than Red Light in Strong White Light: Revisiting the Enigmatic Question of Why Leaves Are Green. Plant and Cell Physiology, 50(4), 684–697.

Tesser, R., Di Serio, M., & Santacesaria, E. (2009). Methanol Steam Reforming: A Comparison of Different Kinetics in the Simulation of a Packed Bed Reactor. Chemical Engineering Journal, 154, 69–75.

REPURPOSING CARBON BLACK NANOPARTICLES FOR USE IN CRUDE OIL SPILLS

Scout Hayashi

Abstract
Oil spills do significant damage to marine ecosystems and are a major contributor to ocean pollution, yet there is no ideal method to contain and mitigate the damage of large marine oil spills. Surface-modified carbon nanoparticles, such as the carbon black used in ink production, have shown promise in removing crude oil from water, but their own inherent toxicity to marine life remains uncertain. In this work, three surface modifications to carbon black (hydroxyl group depleted, 4-aminobenzoic acid added, and 4-aminosalicylic acid added) are synthesized and tested for their ability to adsorb Sudan IV, a surrogate for crude oil. These modified nanoparticles were then tested for toxicity to brine shrimp. Surface modification with 4-aminosalicylic acid was found to be the most efficient in Sudan IV adsorption and the least toxic.

1. Introduction
The most common method to combat oil spills is the use of polypropylene booms, which essentially create a barrier to prevent the oil from spreading. However, using this on large-scale spills is cost prohibitive. The other method is to spray a chemical dispersant such as commercial Corexit, which creates an oil and water emulsion that can be more easily digested by naturally existing bacteria. However, the chemical spray can cause high mortality rates for certain organisms and may cause ecosystem collapse. There have been a few studies on using nanoparticles as oil adsorbents. The advantage of using an adsorbent is that the oil is removed entirely from the solution: when the nanoparticles fall out of solution, they take the oil with them. Also, nanoparticle adsorbents may be synthesized quickly and stored more compactly. One study (Rodd et al., 2014) investigated two variants of carbon black typically used in ink production for use in adsorbing oil. Hydrophilicity of the nanoparticles' surface and the shape of the nanoparticle were found to affect toxicity and the ability to adsorb oil. Additionally, oil adsorbents have more versatility when it comes to water purification than dispersants do. Dispersants rely on existing bacteria to digest the oil and restore the ecosystem to a natural equilibrium. However, not all ecosystems have the necessary populations of these bacteria, so an adsorbent method may have wider application. Adsorbents may also be used as a layer in water purification filters for freshwater oil contamination. Current freshwater filters used in processing treatments are unable to handle large amounts of crude oil pollutants.

2. Materials and Methods

2.1. CNP-A Synthesis
Carbon black (CB) was washed with 1 M HCl to remove impurities, then placed in a kiln at 800 °C for 2 hours to drive off any hydroxyl groups (Fig. 1). The result was annealed carbon nanoparticles (CNP-A).

Figure 1. CNP-A mechanism (Rodd et al., 2014).

2.2. CNP-F and CNP-S Synthesis
CB was functionalized by adding 4-aminobenzoic acid or 4-aminosalicylic acid to 12 M HCl and 6 M NaNO2 to make a slurry. CB was added to the slurry and allowed to react (Fig. 2). 4-Aminobenzoic acid functionalized carbon nanoparticles are abbreviated CNP-F, and 4-aminosalicylic acid functionalized carbon nanoparticles are abbreviated CNP-S.

Figure 2. CNP-F/S synthesis (Rodd et al., 2014).

2.3. Sudan IV Adsorption Testing
Varying concentrations of CNPs and CB were added to a 500 mg/L Sudan IV solution and gently agitated for 24 hr. The adsorption of the Sudan IV by the four carbon variants was tested by halting agitation and allowing the CNP or CB particles to settle out of solution with any bound Sudan IV. The fluid was visually examined and photographed for the red hue of Sudan IV. In addition, unadsorbed Sudan IV in the remaining solution was quantified using a SpectroVis at 520 nm to check particle binding efficiency.

2.4. Brine Shrimp Toxicity
The synthesized CNPs and unmodified CB particles were added to a sample of 50-150 brine shrimp and agitated overnight. Viability of the brine shrimp was determined by direct visualization and manual count, and was reported as percent living after overnight exposure.
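The 520 nm absorbance readings can be turned into a simple binding-efficiency estimate by comparing each treated solution against the untreated Sudan IV stock. A minimal sketch with hypothetical absorbance values (illustrative only, not the study's raw data), assuming absorbance scales linearly with dye concentration:

```python
def binding_efficiency(a_stock, a_treated):
    """Fraction of Sudan IV removed, estimated from 520 nm absorbance.
    Assumes absorbance is proportional to dye concentration (Beer-Lambert)."""
    return (a_stock - a_treated) / a_stock

# Hypothetical readings: untreated Sudan IV stock vs. a particle-treated sample.
print(round(binding_efficiency(1.20, 0.035), 3))  # 0.971
```

A reading near the stock absorbance would indicate little binding; residual particle turbidity adds a small positive bias to the treated reading.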

3. Results

3.1. Sudan IV Adsorption
Increasing concentrations were the same for each particle tested. On visual inspection, CNP-S appeared the most effective, as the lowest concentration of CNP-S had already removed all the red hue of Sudan IV (Fig. 3C). CNP-F was opaque black through all concentrations, suggesting that these carbon nanoparticles did not completely settle to the bottom (Fig. 3B).

Figure 3. Visual representation of the Sudan IV adsorption of CB (A), CNP-F (B), CNP-S (C), CNP-A (D).

Absorbance data suggested that the CNP-F had indeed spread throughout the water, making data regarding Sudan IV adsorption impossible to collect: CNP-F absorbance measured consistently above 1.5, rendering the data inconclusive. However, CB, CNP-A, and CNP-S were able to be measured. As expected from visual inspection of the solutions, CNP-S concentrations had the lowest solution absorbances and therefore bound the most Sudan IV (Fig. 4).

Figure 4. Absorbance of Sudan IV in solution after carbon treatment.

3.2. Brine Shrimp Toxicity
Figure 5 shows the raw data for shrimp mortality at increasing concentrations of particles.

Figure 5. Toxicity versus concentration of carbon variant.

The rates of mortality were modeled to better compare the actual rate of toxicity and to account for the random variation in brine shrimp death, as not every control test had a mortality rate of 0 (Fig. 6). CNP-F was the most toxic to brine shrimp, with a percent mortality increase of 7.87 × 10⁻² per mg of CNP-F added. CNP-A and CNP-S had approximately the same percent mortality increase per mg of carbon, at 4.30 × 10⁻² and 4.26 × 10⁻², respectively. CB was the least toxic, at a 1.54 × 10⁻² % increase in mortality per mg.

Figure 6. Models of the toxicity versus concentration of carbon variant.

4. Discussion and Conclusions
Of the three types of surface-modified carbon, the most efficient and least toxic appeared to be CNP-S (4-aminosalicylic acid surface modification). Visual inspection and 520 nm spectral absorption of the residual solution indicate high levels of Sudan IV adsorption by CNP-S (Figs. 3 and 4). Therefore, we would predict that CNP-S would be the most efficient at adsorbing crude oil. At just 0.05 g/L it had already adsorbed enough Sudan IV for the remaining solution to appear clear and for the absorbance to be only 0.035, which probably was not 0 due to some CNP interference. No other particle modification could clear the solution as well at such low concentrations: CNP-A needed a concentration of 0.2 g/L to surpass CNP-S's Sudan IV binding capacity. CNP-F was the least desirable because of its excessively hydrophilic nature. These nanoparticles dispersed too well in water, remaining in solution and making data collection for Sudan IV binding impossible. Further study by other methodology would be needed for this particle to move forward. Moreover, CNP-F also had the highest toxicity, perhaps due to its propensity to stay in solution. Unmodified CB has natural adsorbent qualities, but CNP-A and CNP-S outperformed this baseline control particle. As for toxicity, CNP-F had the highest brine shrimp mortality rate while CB had the lowest (Figs. 5 and 6). CNP-A and CNP-S toxicities fell in between and were approximately the same. Because CNP-S was the most efficient at adsorbing Sudan IV and among the least toxic of the tested particles, CNP-S showed the most promise for use in crude oil spills.
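The per-mg mortality slopes reported above come from fitting mortality against the mass of particle added; a least-squares sketch with hypothetical counts (illustrative values, not the study's data):

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Hypothetical: mg of particle added vs. percent mortality observed.
mg = [0, 5, 10, 20]
mortality = [2.0, 2.4, 2.8, 3.6]
print(round(slope(mg, mortality), 6))  # 0.08 percent mortality per mg
```

Fitting a slope rather than reading single points absorbs the nonzero baseline mortality seen in the controls.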

5. Future Directions and Implications
Testing the nanoparticles on real crude oil or its components, such as the BTEX compounds (benzene, toluene, ethylbenzene, and xylene), would be a critical next step in developing carbon nanoparticles (CNPs) for real-world use. However, a method other than measuring absorbance will need to be determined: a major obstacle in collecting data is that the suspended nanoparticles tend to absorb in the same range as BTEX compounds or crude oil. Other methods of filtration or induced coagulation may be able to separate the suspended nanoparticles from the oil-water mixture so that the adsorption of the CNPs can be determined. Brine shrimp may not be representative of other organisms or the entire marine ecosystem, so it would be useful to understand how the nanoparticles interact with a variety of other organisms such as algae, bacteria, or small vertebrates. While carbon nanoparticles are an unconventional method of adsorbing oil, they have potential applications in a variety of water treatment challenges. In addition to removing oil from an oil spill, they may also be used in filters to purify water, as their large surface area allows the carbon to remove small particles of oil, especially in the case of water-oil emulsions.

6. Acknowledgments
I thank my mentors and instructors, Dr. Michael Bruno and Dr. Monique Williams, for the guidance, encouragement, and patience they provided me during this research. I would also like to thank the NCSSM Foundation for providing me with the resources necessary to complete this project.

7. References

Rodd, A. L., Creighton, M. A., Vaslet, C. A., Rangel-Mendez, J. R., Hurt, R. H., & Kane, A. B. (2014). Effects of surface-engineered nanoparticle-based dispersants for marine oil spills on the model organism Artemia franciscana. Environmental Science & Technology, 48, 6419–6427.

LEAST FACTORIAL PROBLEM

Nina Prabhu and Emily Wen

Abstract
We define a function h over the rational numbers such that h(q) = n if n is the smallest positive integer such that n! · q is an integer. Setting out to investigate h(q) for all rational numbers q, we first consider h over the unit fractions of the form 1/b. By investigating n for small values of b, we find upper bounds on n when b is a power of a prime number, and we determine upper bounds for some specific cases of b. We then generalize these results to all b based on the prime factorization of b, and we apply these generalizations to determine h(q) for non-unit fractions q. Other cases are then explored, such as when b is the denominator of a fractional combination. Our ongoing investigations include finding n when b is a multifactorial and proving various other conjectures based on previous observations. Our results give insight into the gamma function and p-adic numbers, which are used to speed up certain computations in computer science.

1. Introduction
This paper explores the following number-theoretic problem: Given a rational number q, what is the smallest positive integer n such that n! · q is an integer? We define q = a/b ∈ Q such that a ∈ Z, b ∈ Z+, and gcd(a, b) = 1. Thus, we wish to find the minimum positive integer n such that n! · a/b is an integer. In other words, we seek the smallest integer n ≥ 1 such that b|n!. For example, consider when q = 5/6:
• If n = 1, then n! · q = 1! · 5/6 = 5/6 ∉ Z, so h(5/6) ≠ 1.
• If n = 2, then n! · q = 2! · 5/6 = 5/3 ∉ Z, so h(5/6) ≠ 2.
• If n = 3, then n! · q = 3! · 5/6 = 5 ∈ Z, so h(5/6) = 3.
Therefore, when q = 5/6, n = 3. When looking at small values of q, one might think this is a simple and straightforward problem. For example, for 2 ≤ b ≤ 5, h(1/b) = b. However, as values of q differ from each other by smaller amounts, n jumps around, as we can see below:
• When q = 1/126, n = 7.
• When q = 1/127, n = 127.
• When q = 1/128, n = 8.
• When q = 1/129, n = 43.
From these unexpected values, we can see that there is more to this problem than meets the eye. The Least Factorial problem has computer science applications through the insight it provides into p-adic valuations and the gamma function. To further explore these applications, this paper also investigates additional related problems.

2. Definitions
In this section we define functions in order to simplify our calculations.
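These values can be reproduced with a direct search. A minimal Python sketch (not the authors' Java program; the helper names are ours) that finds the smallest n with b | n! by tracking n! modulo b:

```python
from math import gcd

def f(b):
    """Smallest positive integer n such that b divides n!."""
    n, fact_mod = 1, 1 % b
    # b | n! exactly when the running product n! mod b reaches 0.
    while fact_mod != 0:
        n += 1
        fact_mod = (fact_mod * n) % b
    return n

def h(a, b):
    """h(a/b): smallest n with n! * (a/b) an integer.
    Equals f of the reduced denominator."""
    return f(b // gcd(a, b))

print(h(5, 6))                                  # 3
print([h(1, b) for b in (126, 127, 128, 129)])  # [7, 127, 8, 43]
```

Tracking the factorial modulo b keeps the intermediate numbers small even when the answer is large (e.g., b prime gives n = b).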

Definition. Define the function h over the rational numbers such that h(q) = n if n is the minimum positive integer for which n! · q is an integer. For example, as demonstrated earlier, when q = 5/6, n = 3, so h(5/6) = 3.

Definition. Define the function f : N → N such that f(b) = n if n is the minimum positive integer such that b|n!. Note that if b|m! for some m ∈ N, then f(b) ≤ m.

Definition. For a given prime number p, the p-adic valuation of a non-zero integer n, vp(n), is the largest exponent x such that p^x|n. In other words, vp(n) is the number of factors of p in n. This is a well-defined function with many interesting properties. In particular, note the following property of p-adic valuations:

vp(n!) = ∑i=1..∞ ⌊n/p^i⌋.

The above equation is commonly referred to as Legendre's Theorem.

3. Preliminary Results
To gain insight into the seemingly sporadic nature of our function f, we constructed a computer program in Java to output the values of h(1/b) for b ≤ 200. From this, we gained invaluable insight into explicit values for some cases of the denominator. To quantify our observations, we must first prove some general theorems that will provide the foundation for our further investigations. As demonstrated previously, h(5/6) = 3. Now consider q = −5/6. For n ≤ 2, we can easily see that q · n! ∉ Z. When n = 3, q · n! = −5/6 · 3! = −5, which is an integer. So, h(−5/6) = 3. By a similar process, we can find that h(1/6) = 3. Observations such as these lead to the following theorems.

Theorem 3.1. For rational number q, h(q) = h(−q).
Proof. Let d be the positive integer such that d = h(q). Then, q · d! is an integer, so (−q) · d! is also an integer. Furthermore, by the definition of the function h, q · (d − 1)! ∉ Z for d > 1. Suppose for the sake of contradiction that h(−q) = g ≠ d. We know that h(−q) ≤ d, as (−q) · d! ∈ Z. Therefore, by our assumption, h(−q) = g < d, which means that (−q) · g! ∈ Z. But this implies that −(−q) · g! = q · g! ∈ Z, which contradicts the minimality of d from the definition of h. Thus, h(−q) = d.

Theorem 3.2. Let a ∈ Z and b ∈ Z+ with gcd(a, b) = 1. Then, given rational number q = a/b, h(q) = f(b).
Proof. Let q = a/b, where a ∈ Z, b ∈ Z+, and gcd(a, b) = 1. Let n = h(q). Thus, n! · q is an integer, so a · n!/b is an integer. So, b|a · n!. Then, because gcd(a, b) = 1, b|n!. Thus, f(b) ≤ n. Next, suppose f(b) ≤ n − 1. Then, b|(n − 1)!, so a/b · (n − 1)! is an integer. However, this would mean that h(q) ≤ n − 1, which is not true. So, f(b) = n = h(q).

From Theorems 3.1 and 3.2, we can determine that it is sufficient to investigate h(q) only for positive unit fractions q. In other words, we seek to determine f(b) for positive integers b. Next, we will reveal some interesting properties of f(b) and related functions.

Theorem 3.3. f(1) = 1.
Proof. As f(b) ∈ N for all b ∈ Z+, f(1) ≥ 1. In addition, 1/1 · 1! = 1 ∈ Z, so f(1) ≤ 1. Thus, f(1) = 1.
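Legendre's Theorem is easy to check numerically against a direct count of the factors of p in n!; a small self-contained sketch (our own helper names):

```python
from math import factorial

def vp(m, p):
    """p-adic valuation: the largest e with p^e | m."""
    e = 0
    while m % p == 0:
        m //= p
        e += 1
    return e

def vp_factorial(n, p):
    """v_p(n!) by Legendre's Theorem: sum of floor(n / p^i) over i >= 1."""
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

# The two computations agree for every n and prime p tried here.
assert all(vp_factorial(n, p) == vp(factorial(n), p)
           for n in range(1, 60) for p in (2, 3, 5, 7))
print(vp_factorial(10, 3))  # 4, since 10! = 3^4 * 44800
```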



Theorem 3.4. Let p be a prime and x a positive integer. Define y as the maximum positive integer such that y ≤ x and p|y. Then, vp(x!) = vp(y!).
Proof. We begin with the following lemma.

Lemma. We claim that for all primes p and non-negative integers ai < p,

Proof. First, note that for all p and ai, (p^i · ai)/p^j ≥ 0. So,

Next, we will show that both expressions are at most 0.

Returning to the theorem, let x = y + a0, where a0 ∈ N. Note that a0 < p. Also, let c be the greatest integer such that p^c < y. Then, define 0 ≤ ai < p such that y = ∑i=1..c ai · p^i. So,

Next, we will consider vp(x!). Note that x = y + a0 = ∑i=0..c ai · p^i. So,

Therefore, vp(y!) = vp(x!).

Theorem 3.5. Let p be a prime and x, y ∈ Z+. If vp(x!) = y, then f(p^y) ≤ x.
Proof. Let y = vp(x!). Then, p^y | x! by definition. Let z = f(p^y), so z is the least integer such that p^y | z!. Since p^y | x!, the minimality of z gives z ≤ x. Thus, f(p^y) ≤ x.

Theorem 3.6. Let p be a prime and a, b ∈ Z+. If a > b, then f(p^a) ≥ f(p^b).
Proof. Let x = f(p^a). So, p^a | x!. Since a > b, p^b | p^a | x!. So, x ≥ f(p^b). Therefore, f(p^a) ≥ f(p^b).

We were also able to strengthen the bound for all x.
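Theorems 3.5 and 3.6 can be spot-checked numerically; a sketch using a brute-force f (redefined here so the snippet stands alone):

```python
from math import factorial

def f(b):
    """Smallest n with b | n! (brute force)."""
    n, fact_mod = 1, 1 % b
    while fact_mod != 0:
        n += 1
        fact_mod = (fact_mod * n) % b
    return n

def vp(m, p):
    """Largest e with p^e | m."""
    e = 0
    while m % p == 0:
        m //= p
        e += 1
    return e

# Theorem 3.5: if v_p(x!) = y then f(p^y) <= x.
for p in (2, 3, 5):
    for x in range(2, 30):
        y = vp(factorial(x), p)
        if y > 0:
            assert f(p**y) <= x

# Theorem 3.6: f(p^a) is non-decreasing in the exponent a.
assert all(f(3**a) >= f(3**b) for a in range(2, 8) for b in range(1, a))
```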

4. Prime Powers
We will begin by considering when the denominator of q is a prime power. In other words, b = p^x, where p is a prime number and x is a positive integer.

Theorem 4.1. f(p^x) ≤ p · x.
Proof. First, notice that there are (p · x)/p = x different multiples of p less than or equal to p · x. So, Πi=1..x (p · i) | (p · x)!. We can rewrite this as p^x · (1 · 2 · ... · x) | (p · x)!, so p^x | (p · x)!. Therefore, f(p^x) ≤ p · x.

When x ≤ p, we were able to further strengthen this bound.

Theorem 4.2. f(p^x) = p · x when x ≤ p.
Proof. We will show that f(p^x) > p · x − 1. Consider vp((p · x − 1)!). Substituting into Legendre's Theorem,

vp((p · x − 1)!) = ∑i=1..∞ ⌊(p · x − 1)/p^i⌋ = (x − 1) + 0 + 0 + ... = x − 1,

since p · x − 1 < p² when x ≤ p, so every term beyond the first is 0. So, p^x ∤ (p · x − 1)!. Thus, f(p^x) > p · x − 1. By Theorem 4.1, it is known that f(p^x) ≤ p · x. So f(p^x) = p · x when x ≤ p.

Theorem 4.3. f(p^x) ≤ p · x − p ·
Proof. To determine f(b) we wish to find the least positive integer n such that n! is a multiple of b, or p^x. As shown in Theorem 4.1, an upper bound is p · x. The number of higher powers of p in (p · x)! is ∑i=2..∞ ⌊p · x/p^i⌋. However, for every power of p, from p² to p^x, 2, 3, ..., x must be subtracted to account for the over-counting of powers. The number of powers of p less than p · x is ∑i=2..∞ ⌊p · x/p^i⌋, so at most p times this many numbers will be subtracted. This is because each time a multiple of p is subtracted, vp decreases by at most ⌊p · x/p^i⌋; since we are only concerned with multiples of p, we must multiply this subtracted quantity by p. Therefore, f(p^x) ≤ p · x − p ·

We were able to obtain an equality for certain powers of 2, as shown below.

Theorem 4.4. f(2^(2x)) = 2x + 2.
Proof. For the sake of contradiction, assume f(2^(2x)) = 2x + 1. Then,

So, f(2^(2x)) > 2x + 1. Next, consider v2((2x + 2)!).

Thus, f(2^(2x)) ≤ 2x + 2.

Theorem 4.5. The maximum power of p, b, such that f(b) = p^x is p^((p^x − 1)/(p − 1)).
Proof. First consider the power of p in the prime factorization of (p^x − 1)!. Then,

So, p^((p^x − 1)/(p − 1)) ∤ (p^x − 1)!. Thus, f(p^((p^x − 1)/(p − 1))) > p^x − 1. Next, consider the power of p in the prime factorization of (p^x)!. Similarly,

So, p^((p^x − 1)/(p − 1)) | (p^x)!. Thus, f(p^((p^x − 1)/(p − 1))) ≤ p^x and f(p^((p^x − 1)/(p − 1) + 1)) > p^x. Thus, f(p^((p^x − 1)/(p − 1))) = p^x, and the maximum integer b such that f(b) = p^x is p^((p^x − 1)/(p − 1)).
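Both the equality of Theorem 4.2 and the extremal exponent of Theorem 4.5 can be confirmed for small cases; a self-contained sketch (brute-force f again):

```python
def f(b):
    """Smallest n with b | n!."""
    n, fact_mod = 1, 1 % b
    while fact_mod != 0:
        n += 1
        fact_mod = (fact_mod * n) % b
    return n

# Theorem 4.2: f(p^x) = p * x whenever x <= p.
for p in (2, 3, 5):
    for x in range(1, p + 1):
        assert f(p**x) == p * x

# Theorem 4.5: the largest exponent e with f(p^e) = p^x is (p^x - 1)/(p - 1).
for p, x in ((2, 2), (2, 3), (3, 2)):
    e_max = (p**x - 1) // (p - 1)
    assert f(p**e_max) == p**x
    assert f(p**(e_max + 1)) > p**x
```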

Theorem 4.6. f(p^((p^(x+1) − 1)/(p − 1) − i)) = p^(x+1) for integers i with 0 ≤ i ≤ x.

Example. First, note that p^((p^(x+1) − 1)/(p − 1) − i) can be rewritten as p^((p^x + p^(x−1) + ... + 1) − i). We will use the latter form in this example. Consider when p = 2 and x = 1. First, consider when i = 0.

So, f(2^((2^1 + 2^0) − 0)) = 4 = 2^(1+1). Next, consider when i = 1.

So, f(2^((2^1 + 2^0) − 1)) = 4 = 2^(1+1).

Proof. Let y = p^((p^(x+1) − 1)/(p − 1) − i), where i is an integer with 0 ≤ i ≤ x. First, it will be shown that f(y) ≤ p^(x+1). Consider vp((p^(x+1))!):

Thus, f(y) ≤ p^(x+1). Next, it will be shown that f(y) > p^(x+1) − 1. Consider vp((p^(x+1) − 1)!):

Thus, f(y) > p^(x+1) − 1. Therefore, f(p^((p^(x+1) − 1)/(p − 1) − i)) = p^(x+1) for integers i with 0 ≤ i ≤ x.

Theorem 4.7. Given positive integer denominator b = p^x, p | f(b).
Proof. For the sake of contradiction, assume there exists a positive integer b = p^x such that p ∤ f(b). Define z = f(p^x), so p^x | z!. Also, let y be the greatest multiple of p less than z. So, p ∤ (y + 1) · (y + 2) · ... · (z − 1) · z. Thus, gcd(p^x, (y + 1) · (y + 2) · ... · (z − 1) · z) = 1. In addition, z! = y! · (y + 1) · (y + 2) · ... · (z − 1) · z. So, p^x | (y!) · ((y + 1) · (y + 2) · ... · (z − 1) · z). Therefore, p^x | y!. So, f(p^x) ≤ y < z. This is a contradiction. Therefore, p | f(b).

Define the function cp over the positive integers such that cp(n) = a if there are exactly a unit fractions 1/b such that f(b) = n and b is a power of p. We will now prove an interesting property of cp as it relates to our problem.

Theorem 4.8. Given positive integer n, cp(n) = vp(n).
Proof. Let b = p^k be a prime power such that f(b) = n. Then, vp(n!) ≥ k > vp((n − 1)!), since n! contains at least as many factors of p as b does, making n! · 1/b an integer, but (n − 1)! does not have enough factors of p to do so.
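Theorem 4.8 can be verified by direct counting; a sketch that counts, for each n, the prime powers p^k with f(p^k) = n and compares against v_p(n):

```python
def f(b):
    """Smallest n with b | n!."""
    n, fact_mod = 1, 1 % b
    while fact_mod != 0:
        n += 1
        fact_mod = (fact_mod * n) % b
    return n

def vp(m, p):
    """Largest e with p^e | m."""
    e = 0
    while m % p == 0:
        m //= p
        e += 1
    return e

def c(p, n, max_exp=64):
    """c_p(n): number of prime powers p^k (k >= 1) with f(p^k) = n."""
    return sum(1 for k in range(1, max_exp) if f(p**k) == n)

assert all(c(2, n) == vp(n, 2) for n in range(2, 20))
assert all(c(3, n) == vp(n, 3) for n in range(2, 20))
```

For example, f(2^5) = f(2^6) = f(2^7) = 8, giving c_2(8) = 3 = v_2(8).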


5. Generalized Denominators
Let b = Πi=1..k p_i^(x_i), where k, x_i ∈ Z+ and the p_i are distinct primes with p_i < p_j whenever i < j. In other words, b is the product of powers of k distinct primes.

Theorem 5.1. f(b) = max(f(p_1^(x_1)), f(p_2^(x_2)), ..., f(p_k^(x_k))).
Proof. Let m = max(f(p_1^(x_1)), f(p_2^(x_2)), ..., f(p_k^(x_k))). So, p_i^(x_i) | m! for all i. First, we will show that f(b) ≤ m. We know that p_i^(x_i) | m! for all i and that the pairwise greatest common divisors of p_1, p_2, ..., p_k are all 1. Therefore, p_1^(x_1) · p_2^(x_2) · ... · p_k^(x_k) | m!. In other words, b | m!. So, f(b) ≤ m. Next, we will show that f(b) > m − 1. For the sake of contradiction, assume f(b) ≤ m − 1. Then, b | (m − 1)!, so p_i^(x_i) | (m − 1)! for all i. So, f(p_i^(x_i)) ≤ m − 1 < m for all i. However, this contradicts the definition of m as the maximum. Thus, f(b) > m − 1. Therefore, f(b) = m.

Theorem 5.1 provides us with insight into how the prime factorization of b is related to f(b). In particular, when every exponent in the prime factorization of b is equal, we can prove the following result.

Theorem 5.2. If x_i = x for some positive integer x and all 1 ≤ i ≤ k, then f(b) = f(p_k^x).
Proof. When all of the primes in the prime factorization of b are raised to the same power x, the maximum in Theorem 5.1 is attained by the largest prime in the factorization, since f(p^x) is non-decreasing in p for fixed x. Therefore, f(b) = f(p_k^x).

From the previous theorem we can determine some explicit values of f(b) for the following specific cases.

Corollary 5.2.1. If x_i = 1 for all 1 ≤ i ≤ k, then f(b) = p_k.
Proof. By Theorem 5.2, we know that f(b) = f(p_k). Since p_k is prime, f(p_k) = p_k. Thus, f(b) = p_k.
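Theorem 5.1 reduces f at a composite b to f at its prime-power factors; a sketch comparing the factorization route against brute force (self-contained helper names of our own):

```python
def f(b):
    """Smallest n with b | n! (brute force)."""
    n, fact_mod = 1, 1 % b
    while fact_mod != 0:
        n += 1
        fact_mod = (fact_mod * n) % b
    return n

def factorize(b):
    """Prime factorization of b as {p: exponent} (trial division)."""
    out, p = {}, 2
    while p * p <= b:
        while b % p == 0:
            out[p] = out.get(p, 0) + 1
            b //= p
        p += 1
    if b > 1:
        out[b] = out.get(b, 0) + 1
    return out

def f_via_factors(b):
    """Theorem 5.1: f(b) = max of f over the prime-power factors of b."""
    return max(f(p**x) for p, x in factorize(b).items())

assert all(f(b) == f_via_factors(b) for b in range(2, 400))
print(f_via_factors(126))  # 7 = max(f(2), f(9), f(7))
```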

Corollary 5.2.2. If x_i = 2 for all 1 ≤ i ≤ k, then f(b) = 2 · p_k.
Proof. In Theorem 4.2, we proved that f(p^x) = p · x when x ≤ p. When x = 2, we have x ≤ p for every prime p > 2, so f(p²) = 2 · p for p > 2. In addition, f(4) = 4, so f(p²) = 2 · p holds for all primes p. In this case, f(b) is twice the largest prime in the prime factorization, so f(b) = 2 · p_k.

In addition, we can generalize Theorem 5.2 to when the exponents of the prime factors are not all equal.

Theorem 5.3. If x_k ≥ x_i for all 1 ≤ i < k, then f(b) = f(p_k^(x_k)).
Proof. Note that x_k = max(x_1, x_2, ..., x_k). By Theorem 3.6, f(p_i^(x_k)) ≥ f(p_i^(x_i)) for all 1 ≤ i < k. Moreover, since f(p^x) is non-decreasing in p for fixed x (as in the proof of Theorem 5.2), f(p_k^(x_k)) ≥ f(p_i^(x_k)). Combining these, f(p_k^(x_k)) ≥ f(p_i^(x_i)) for all 1 ≤ i < k. Therefore, by Theorem 5.1, f(b) = f(p_k^(x_k)).

6. Fractional Combinations
A fractional combination is defined as

C(n, k) = n · (n − 1) · (n − 2) · ... · (n − k + 1) / k!,

where n is a rational number and k is a nonnegative integer. In this section, we consider the cases where b is the denominator of C(1/t, k), where t is a positive integer.

Theorem 6.1. Let k be a positive integer. Then, C(1/2, k) has a denominator that is a power of 2.
Proof. First, it will be shown that 2 · C(1/2, k) ∈ Z. So,

Because

∈ Z,

So,

Also,

∈ Z, so

So, if k ≡ 1 (mod 2), then gcd(k, k − 2) = 1. So,

If k ≡ 0 (mod 2), then gcd(k, k − 2) = 2. So,

Now to evaluate

Thus, the denominator is a power of 2.

Theorem 6.2. Let k be a positive integer. Then, C(1/3, k) has a denominator that is a power of 3.
Proof. Let a0, a1, a2, ... ∈ Q such that

Note that a0 = 1. Let S = {n ∈ Z | there do not exist αn, en ∈ Z such that an = αn/3^en}. Assume for the sake of contradiction that S is nonempty. Then, by the Well-Ordering Principle, there exists a least element m. Consider the x^m term on the left-hand side. It is the sum of products a_{i1} · a_{i2} · a_{i3}, where i1 + i2 + i3 = m. Note that 3 of these products are the permutations of a0, a0, am. For all other products, i1, i2, i3 < m, so in those cases i1, i2, i3 ∉ S. The x^m term on the right-hand side is:

So, there exists some integer a such that

Let e = max(e_{i1}, e_{i2}, e_{i3}). Then,

This is a contradiction. So, for all n there exist αn, en ∈ Z such that an = αn/3^en. So, (1 + x)^(1/3) = a0 + a1x + a2x² + ..., and comparing with the binomial series (1 + x)^(1/3) = ∑i=0..∞ C(1/3, i) x^i gives ai = C(1/3, i). So, C(1/3, k) always has a denominator that is a power of 3 for all k ∈ Z≥0.

From these two theorems we have discovered that fractional combinations of the form C(1/2, k) and C(1/3, k) always have denominators that are powers of 2 and 3, respectively. We are currently working on combining these results with our previous investigations of prime-power denominators to determine more specific valuations of f on denominators resulting from these two cases of fractional combinations.
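Both theorems can be checked directly with exact rational arithmetic; a sketch using Python's Fraction (our helper names; the generalized binomial coefficient is computed from its product definition):

```python
from fractions import Fraction

def frac_comb(n, k):
    """Generalized binomial coefficient C(n, k) = n(n-1)...(n-k+1)/k! for rational n."""
    out = Fraction(1)
    for i in range(k):
        out = out * (n - i) / (i + 1)
    return out

def is_power_of(m, p):
    """True if m = p^e for some e >= 0."""
    while m % p == 0:
        m //= p
    return m == 1

# Theorem 6.1: denominators of C(1/2, k) are powers of 2.
assert all(is_power_of(frac_comb(Fraction(1, 2), k).denominator, 2) for k in range(1, 25))
# Theorem 6.2: denominators of C(1/3, k) are powers of 3.
assert all(is_power_of(frac_comb(Fraction(1, 3), k).denominator, 3) for k in range(1, 25))
print(frac_comb(Fraction(1, 3), 3))  # 5/81
```

Fraction keeps every intermediate value in lowest terms, so the `.denominator` attribute gives exactly the b of the corresponding unit-fraction problem.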

7. Ongoing and Future Investigations We are working on the cases when the denominator of q is a multifactorial or a factorial. In particular, one case is when the denominator is a M1 factorial. Define the M1 factorial n!k with k < n and k, n ∈ Z+ as follows:

Or in other words:

The main conjecture we have regarding this is: Let p be a prime and i ∈ Z+ with i < p. Then, f(p!i) = p. In addition, we are also considering the relationship between the results of f for close numbers. The conjectures we are currently investigating are: • For all integers k, f(k) ≠ f(k + 1). • There are an infinite number of integers i such that f(i) + 1 = f(i + 1). • There are an infinite number of integers i such that f(i) + 2 = f(i + 2). Recall that the twin primes conjecture states that there are an infinite number of pairs of twin primes. If we assume this, then we can prove this conjecture. • Similarly, there are an infinite number of integers i such that f(i) + n = f(i + n). Furthermore, for fractional combinations we have the additional unproven conjectures: • If b > k, then f(d) ≤ f(dx). • If b < k, then f(d) ≤ f(bk).
8. Conclusions
Through initial observations, we determined that it was sufficient to first focus on the results of the Least Factorial function for positive powers of p. For this, we developed both lower and upper bounds on the expression of the Least Factorial function for all prime powers. In addition, we devised explicit expressions for this function for some specific cases of inputs, such as primes. Then, we extended these results to composite numbers, for which we determined explicit expressions in some cases. From here we moved on to evaluating the function when inputs are fractional combinations. Building on these investigations, we have many further paths we would like to explore. Overall, these results provide insight into p-adic valuations, which have significant applications in computer science. Most notably, they reveal properties of an important mathematical function: the gamma function, Γ(n). It has the recursive property that Γ(a) = (a − 1)Γ(a − 1), providing us with numerical values of fractional factorials. The Least Factorial problem lets us see properties of the gamma function calculated on real numbers, both integer and not.
9. Acknowledgements
We would like to thank Mrs. Cheryl Gann for introducing us to the Least Factorial problem, reviewing our paper periodically, and answering any further questions we had.
10. References

Azose, J. (2016). On the Gamma Function and its Applications. University of Washington, 1-11. Retrieved August 16, 2016.
De, A., Kurur, P. P., Saha, C., & Saptharishi, R. (2008). Fast integer multiplication using modular arithmetic. Proceedings of the fortieth annual ACM symposium on Theory of computing - STOC '08, 1-16. Retrieved August 12, 2016.
Hyde, S. K. Properties of the Gamma Function.

MODELING THE RELATIVE IMPACTS OF CONTROLS ON THE SPREAD OF METHICILLIN-RESISTANT Staphylococcus aureus
Emily Tracy
Abstract
Staphylococcus aureus (S. aureus) is a common antibiotic-resistant bacterium that causes many unpleasant symptoms. This paper explores the effectiveness of four methods of S. aureus prevention: hand-washing, glove-wearing and gowning, multiple-drug therapy, and screening and isolating patients who test positive for the disease. Each method is implemented in STELLA by modifying a baseline model. The numbers of uncontaminated patients out of the 600 initial patients were 290, 342, 303, 300, and 283 for the baseline, hand-washing, gloving-and-gowning, universal-screening, and multiple-drug-therapy models, respectively. However, the results from multiple-drug therapy were excluded due to modelling errors. Modelling the impacts of inhibitors of the spread of methicillin-resistant S. aureus confirmed the importance of effective hand-washing among healthcare workers and of the prevention of antibiotic-resistant bacteria in general.
1. Introduction
Antibiotic resistance is an increasingly prevalent problem causing concern in the medical community. According to the Centers for Disease Control and Prevention, antibiotic-resistant bacteria cause at least 2 million infections and at least 23,000 deaths each year (Centers for Disease Control and Prevention, 2017). Antimicrobial resistance amplifies the potential risks associated with surgery, after which patients are more susceptible to potentially life-threatening antibiotic-resistant bacteria (World Health Organization [WHO], 2017). Additionally, antibiotic-resistant bacteria are more difficult to treat than non-antibiotic-resistant bacteria, extending the required length of care for infected patients and increasing healthcare costs (WHO, 2017). The emergence of antibiotic resistance has largely been attributed to the improper use of antibiotics (Laxminarayan et al., 2013).
When bacteria mutate, antibiotics kill antibiotic-susceptible bacteria and leave only the resistant bacteria to propagate (Laxminarayan et al., 2013). For example, insufficient dosing can contribute to antibiotic resistance by leaving more bacteria that are fit to spread resistance genes among the bacterial population (Laxminarayan et al., 2013). Antibiotics are also used more than is necessary, such as when they are used to treat sick animals or humans with viral diseases (Laxminarayan et al., 2013). Unnecessary use of antibiotics increases exposure of antibiotics to bacteria, allowing bacteria to develop resistance more quickly than if antibiotics were reserved for diseases known to be curable with antibiotics or to require urgent attention. Staphylococcus aureus (S. aureus) is a common species of antibiotic-resistant bacteria, with symptoms including boils, pneumonia, urinary tract infections, meningitis, and toxic shock syndrome (“Staphylococcus,” 2017). Main

strains of antibiotic-resistant S. aureus are problematic due to their resistance to methicillin, a history that dates back to the 1940s, when penicillin was originally used to treat S. aureus (Morell & Balkin, 2010). Penicillin killed S. aureus cells by destroying their cell walls, which protect cells from environmental threats and provide structural support (Morell & Balkin, 2010). However, treatment with penicillin was discontinued when it was discovered that some strains of S. aureus were creating penicillinase, which caused resistance to penicillin (Morell & Balkin, 2010). In 1959, a chemical modification of penicillin, methicillin, was introduced for treating S. aureus (Morell & Balkin, 2010). But methicillin-resistant strains of S. aureus (MRSA) later emerged as well (Morell & Balkin, 2010). MRSA has multiple resistance mechanisms against methicillin treatment (Morell & Balkin, 2010). The main cause of methicillin resistance in MRSA is the presence of the mecA gene, which codes for PBP2a, a penicillin-binding protein that is able to resist binding to methicillin, allowing the bacterium to properly build the cell wall. MRSA also creates methicillin-hydrolyzing β lactamase, which breaks down methicillin that enters the bacterium (Morell & Balkin, 2010). Factors that can cause an increase in S. aureus infections are contact with objects contaminated with the disease, proximity to a person with the disease, lack of cleanliness, and broken skin (Knox, Uhlemann, & Lowy, 2015). Methicillin-resistant S. aureus is mainly a nosocomial disease, since it is usually contracted in hospitals (Knox et al., 2015; “Nosocomial,” 2017). Because patients in hospitals have weaker immune systems and have a greater degree of contact with many healthcare workers who may see other contaminated patients, it is much easier for S. aureus to pervade hospitals. Methods of combating the spread of MRSA in hospitals often revolve around improving hygiene. Hand-washing

among healthcare workers reduces the likelihood that they colonize their patients with MRSA, but the effectiveness of hand-washing highly depends on the rate at which healthcare workers actually wash their hands when they come into contact with patients (Austin & Anderson, 1999). Another method of reducing the risk of MRSA infections is for healthcare workers to wear medical gloves and gowns when working with patients so the bacteria can be contained to infected patients (Abad et al., 2014). Other methods of controlling the spread of MRSA involve responsible use of antibiotics. One method is to use multiple antibiotics during treatment (Abad et al., 2014). Multiple-drug therapy can help reduce the spread of antibiotic-resistant bacteria because the selective pressure of the first drug is cancelled out by subsequent drugs, which are able to kill bacteria that have mutated resistance to the first antibiotic, thus preventing MRSA from developing in the population. Another method is screening of patients admitted to the hospital and of all patients who have had MRSA in the past (Gurieva et al., 2013). This method allows hospitals to quickly isolate S. aureus-positive patients and protect uncolonized patients from infection (Gurieva et al., 2013). Because there are multiple methods of transmission of S. aureus and multiple mechanisms of antibiotic resistance, several factors can affect the spread of MRSA. In this report, the dynamics of S. aureus in a hospital are modelled and the effects of the aforementioned inhibitors of its spread are each calculated separately. This paper aims to model the spread of MRSA in the hospital accounting for various prevention techniques and to determine which is most effective. 2.
Methodology
For this paper, we employed the program STELLA to implement our computational models due to STELLA's ability to process complex differential equations and to model the flow of patients and healthcare workers between categories of MRSA contamination (isee systems). The baseline model used in this paper is sourced from the work of Chamchod and Ruan exploring the spread of methicillin-resistant S. aureus among hospital populations (Chamchod & Ruan, 2018). Chamchod and Ruan modeled three groups of patients: uncolonized, colonized, and infected; and two groups of healthcare workers: contaminated and uncontaminated (Chamchod & Ruan, 2018). We use differential equations to model the rate of change of each population: dU/dt for the number of uncolonized patients, dC/dt for the number of colonized patients, dI/dt for the number of infected patients, and dH/dt and dHc/dt for the numbers of uncontaminated and contaminated healthcare workers, respectively (see Eq. 1-5 on next page).

Uncolonized patients, colonized patients, infected patients, uncontaminated healthcare workers, and contaminated healthcare workers were each represented by stocks in STELLA with various inflows and outflows based on the differential equations (Fig. 6) (Eq. 1-5). The baseline model uses 600 for the initial value of uncolonized patients (i.e., the total number of patients) and 0 for the initial values of infected and colonized patients, assuming that the entire population of the hospital is initially MRSA-free (Fig. 6) (Chamchod & Ruan, 2018). Since the aim is to model the effects of inhibitors of MRSA spread, the early stages of infection spread are expected to show more noticeable results than later stages, during which the infection will have already stabilized within the population. By similar logic, the initial value for uncontaminated healthcare workers (i.e., the total number of healthcare workers) is set to 150 and the initial value for contaminated healthcare workers is set to 0 (Chamchod & Ruan, 2018). According to a study by D. J. Austin and R. M. Anderson, hand-washing by healthcare workers reduces the probability of colonization of patients by a factor equivalent to the effectiveness of the hand-washing protocol, which is determined by multiplying the proportion of workers that follow hand-washing mandates by the effectiveness of each hand wash (Austin & Anderson, 1999). For simplicity, we assume that each hand wash is 100% effective. Therefore, in the model that accounts for hand-washing, the probability of a patient receiving MRSA from contact with a healthcare worker, bp, equals 0.01(1 - p), where p equals the proportion of workers following the hand-washing directive (i.e., the proportion of workers actually washing their hands) (Fig. 7) (Austin & Anderson, 1999). In the same study by Austin and Anderson, the effects of using multiple drugs to treat infections were described as creating a maximum of 2^N resistance patterns for N drugs used (Austin & Anderson, 1999).
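A minimal Euler-integration sketch can illustrate how the hand-washing compliance p enters the dynamics through bp = 0.01(1 - p). This is not the paper's STELLA model (whose full five-equation system appears in the appendix): the two-compartment simplification, the fixed contaminated-HCW fraction, and the admission rule below are all illustrative assumptions, and its outputs are not the paper's reported numbers.

```python
# Toy two-compartment MRSA sketch (uncolonized U, colonized C) under the
# hand-washing adjustment b_p = 0.01 * (1 - p) from Austin & Anderson.
def simulate(p, days=25, dt=0.01):
    bp = 0.01 * (1.0 - p)    # colonization probability per HCW contact
    a = 8                    # contacts per patient per day (Table 1)
    k = 1 / 7                # discharge rate of colonized patients (Table 1)
    frac_contaminated = 0.1  # assumed fixed fraction of contaminated HCWs
    U, C = 600.0, 0.0        # everyone starts uncolonized, as in the paper
    for _ in range(int(days / dt)):
        new_colonizations = a * bp * frac_contaminated * U
        # Simplification: admissions exactly replace discharges, so each
        # discharged colonized patient is replaced by an uncolonized one.
        dU = -new_colonizations + k * C
        dC = new_colonizations - k * C
        U += dU * dt
        C += dC * dt
    return U, C

u_none, c_none = simulate(p=0.0)  # no hand-washing compliance
u_full, c_full = simulate(p=1.0)  # full compliance: bp = 0, nobody colonized
```

The census stays at 600 by construction, and full compliance leaves every patient uncolonized, mirroring the qualitative result reported below that hand-washing is the strongest single control.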
This indicates that the likelihood of unsuccessful treatment is only 1/2^N, because there is only one possibility in which a strain of bacteria is resistant to all of the antibiotics in use. Using multiple drugs affects ρ (i.e., the probability of successful treatment) in the original model, which was originally 0.6 (Chamchod & Ruan, 2018). However, since the probability of success does not equal the probability of failure in treatment of MRSA, the relative probabilities must be accounted for, which the original model does not accomplish. The probability of failure was originally 0.4. In a situation where more drugs create more possible outcomes, the frequency of being resistant to all of the drugs must decrease exponentially as the number of possible different resistance patterns (e.g., being resistant to three out of four drugs) multiplies with the addition of every new drug. Therefore, in the model which accounts for the number of drugs used in treatment, ρ = 1 - 0.4^N (Fig. 8). To model the effect of increasing the number of drugs used in antibiotic therapy, we set N = 2 to show that the number of

drugs doubled (Fig. 8). The goal was to increase the number of drugs involved in treatment while keeping the model realistic, as it is difficult to develop and purchase new antibiotics. To account for gloving and gowning practices, we use the finding by C. L. Abad et al. that these practices reduce MRSA acquisition by 40% (Abad et al., 2014). We include this finding in our model by multiplying the probability of colonization of patients with MRSA upon admission into the hospital by 40% (Fig. 9). According to Gurieva et al., the most cost-effective methods of universal screening per MRSA infection prevented were those that were 50% efficacious and only screened incoming patients and those who had been colonized with MRSA in the past (i.e., less effective screening protocols are less expensive because they are less thorough) (Gurieva et al., 2013). To model the effect of the most practical universal screening, the probability of being colonized with MRSA upon admission was multiplied by 1-(0.5-0.06) to take into account the screening and its effectiveness and the fact that 12% of screenings that should be performed are missed (i.e., if 12% of screenings are missed, then 0.5 of that 12% would have been MRSA positive, so 6% of MRSA-positive patients are missed and the total efficacy of the method decreases) (Fig. 10) (Gurieva et al., 2013).
3. Results and Discussion
We report results from the baseline model of uncolonized patients, colonized patients, and infected patients with no interference from possible MRSA spread-inhibitors over a period of 25 days (Fig. 1). Next, we model the number of uncolonized patients; Uncolonized (U) corresponds to the number of uncolonized patients in the baseline model, MDT corresponds to multiple-drug therapy, HW corresponds to hand-washing, US corresponds to universal screening, and GG corresponds to gloving and gowning (Fig. 2).
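The parameter modifications described above amount to simple arithmetic. The sketch below (ours, not the paper's) collects them; note it reads the gloving-and-gowning adjustment as a 40% reduction, i.e. a factor of (1 - 0.40), since the text's "multiplying by 40%" wording is ambiguous.

```python
# Modified model parameters, computed from values given in the text.
p_admit = 0.04  # baseline probability of MRSA colonization on admission (Table 1)

# Gloving and gowning: 40% reduction in MRSA acquisition (Abad et al., 2014),
# read here as multiplying by (1 - 0.40); the text's wording is ambiguous.
p_admit_gg = p_admit * (1 - 0.40)

# Universal screening: 50% efficacious with 12% of screenings missed, so
# 0.5 * 0.12 = 6% of positives slip through and effectiveness is 0.5 - 0.06.
p_admit_us = p_admit * (1 - (0.5 - 0.06))

def rho(n_drugs):
    # Multiple-drug therapy: per-drug failure probability 0.4, so the chance
    # a strain resists all N drugs is 0.4**N and rho = 1 - 0.4**N.
    return 1 - 0.4 ** n_drugs
```

With one drug this recovers the original ρ = 0.6; with N = 2 it gives ρ = 0.84, the value the multiple-drug-therapy model uses.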
The best outcome in this scenario is to have the highest number of uncolonized patients after 25 days because that shows that MRSA has not been able to spread to as many patients and the inhibitor is more effective (Fig. 2). We find that the baseline number of uncolonized patients (the purple dotted line) at the end of the 25-day period was 290 (Fig. 2). The overall trend shows that the highest number of uncolonized patients resulted from

Symbol | Value | Meaning
U | See equation 1 | Uncolonized patients
C | See equation 2 | Colonized patients
I | See equation 3 | Infected patients
H | See equation 4 | Uncontaminated healthcare workers (HCWs)
Hc | See equation 5 | Contaminated healthcare workers (HCWs)
λc | 0.04 | Probability of colonization with MRSA upon admission
λi | 0.001 | Probability of infection with MRSA upon admission
Λ | γuU + γcC + γiI | Admission rate of hospital
a | 8 | Number of human contacts a patient needs each day
Nh | 150 | Total number of healthcare workers
bp | 0.01 | Probability of colonization with MRSA after contact with a healthcare worker
γu | 1/5 | 1 / average length of stay of uncolonized patient in hospital
ω | m2k | Rate of decolonization
ρ | 0.6 | Probability of successful treatment
τ | 1/14 | 1 / average length of treatment of infected patients
ϕ | m1k | Rate of progression from colonization to infection
γc | (1 - m1 - m2)k | Discharge rate of colonized patients
γi | (1 - ρ)τ | Death rate of infected patients
bhc | 0.15 | Probability of contamination of a HCW after contact with a colonized patient
bhi | 0.3 | Probability of contamination of a HCW after contact with an infected patient
μ | 24 | 1 / average length of contamination
m1 | 0.3 | Probability of becoming infectious
m2 | 0.01 | Probability of decolonization
k | 1/7 | 1 / average length of stay of colonized patient

Table 1. Variables used in the model. All variables sourced from Chamchod and Ruan (Chamchod & Ruan, 2018).

Figure 1. Baseline model.

Figure 2. Number of uncolonized patients in the hospital with various methods of MRSA-spread inhibition over 25 days. Baseline shown as dotted line.

Figure 3. Number of colonized patients in the hospital with various spread inhibitors.
HCW hand-washing (green solid line), where 342 patients were uncolonized at the end of the 25-day period (Fig. 2). This demonstrates that hand-washing was the most effective method used to prevent the spread of MRSA. In our

Figure 4. Number of infected patients in the hospital with various spread inhibitors over 50 days.

Figure 5. Number of uncolonized patients in the hospital with various spread inhibitors, including gloving and gowning (GG) and gloving and gowning combined with hand-washing (GG+HW).
model, colonized patients are counted as contaminated even though they are not infected, because they can spread MRSA further. The lowest number of uncolonized patients at the end of the 25-day period was 283, which resulted from the use of multiple drug therapy (blue solid line) (Fig. 2), suggesting that the use of multiple antibiotics to combat MRSA actually decreases the number of uncontaminated patients over time. One possible explanation for this result is that multiple drug therapy affects the probability of successful treatment (Fig. 8). A higher probability of successful treatment reduces the death rate of infected patients, which reduces the number of possible admittances of uncolonized patients to the hospital. When the probability of successful treatment is higher, more infected patients are cleared of MRSA and become colonized patients rather than infected patients. For the number of uncolonized patients to increase, patients must be decolonized. Since the number of uncolonized patients decreased

with the addition of multiple drug therapy, the death rate must matter more to keeping uncolonized patients in the hospital: its decrease outweighs the increased decolonization that occurs when patients go from infected to colonized. This indicates a weakness in how the model handles deaths, because the goal in the medical field is to reduce the number of deaths, while the model shows that an increased number of deaths increases the uncolonized patient population, since an uncolonized patient can be admitted after every death. Finally, we show that gloving and gowning measures, which led to an uncolonized patient population of 303, were very slightly more effective than universal screening measures, which left an uncolonized patient population of 300 (Fig. 2). Both measures were more effective than the base model with no inhibitors of spread taken into account. We also modeled the number of colonized patients in the hospital over a period of 25 days (Fig. 3). Our results support our previous determinations about spread inhibitors' effectiveness because the inhibitors' rankings by number of colonized patients were the inverse of their rankings by number of uncolonized patients (Fig. 3). Hand-washing caused the highest number of uncolonized patients and the lowest number of colonized patients (181) (Fig. 3). The baseline model led to the second-highest number of colonized patients (215) (Fig. 3). Multiple drug therapy resulted in the lowest number of uncolonized patients and the highest number of colonized patients (221) (Fig. 3). Gloving and gowning (GG) was again slightly more effective than universal screening, with 207 and 209 colonized patients, respectively (Fig. 3). We model the number of infected patients in the hospital over a period of 50 days in order for the differences in the number of infected patients to be more prominent (Fig. 4).
Similar to the results from the model of colonized patients, multiple drug therapy was the least effective inhibitor and hand-washing was the most effective inhibitor, with 126 and 104 infected patients, respectively (Fig. 4). Since both hand-washing and gloving-and-gowning (GG) policies are fairly simple and inexpensive for hospitals to implement, their effects were combined into one model. We report the number of uncolonized patients at the end of a 25-day period for all of the MRSA spread-inhibitors and for the effects of gloving and gowning combined with hand-washing (GG+HW) (Fig. 5). Assuming that the colonized and infected populations follow the same pattern as above, we see that the effects of gloving and gowning combined with hand-washing are superior to any other spread inhibitor alone, resulting in 359 uncolonized patients at the end of the period, in contrast to the previously highest number of uncolonized patients, 342, that resulted from hand-washing alone (Fig. 2).

4. Conclusion
Our computational models of MRSA indicate that the most effective method of preventing the spread of MRSA is implementing hand-washing protocols among healthcare workers. Multiple drug therapy could not be accounted for properly because the model produced results that were seemingly worse than those of the baseline model but actually reflected a lower patient death rate. Therefore, the least effective method of prevention of the spread of MRSA was universal screening. The most effective method of prevention tested was the combination of gloving and gowning of healthcare workers with hand-washing, which were combined because both are simple and financially practical for hospitals to implement. Although the focus of this paper is methicillin-resistant S. aureus in hospitals, our computational approach could be extended to represent the prevention of the spread of antibiotic-resistant bacteria in the world at large, as similar controlling factors apply in hospitals as in settings outside of hospitals. Both are systems with inflow (i.e., births in the world correspond to patient admission in the hospital) and outflow (i.e., deaths in the world correspond to death and discharge of patients in the hospital). Our models are not entirely representative of non-hospital scenarios because birth and death rates aren't always equal in the world while patient inflow and outflow in the hospital model are equal, but it is assumed that the same general patterns relating to the prevention of antibiotic-resistant bacterial spread apply, just at significantly greater or lesser degrees based on the ratio of the birth rate to the death rate.
As long as multiple drug therapy is excluded from discussion (because it affects the death rate, and the model then compensates by changing the inpatient admission rate in a way unrepresentative of birth in the larger context of the world), the relative impacts of each method of prevention would remain the same.
5. Acknowledgement
The author wishes to thank Mr. Robert Gotwals for his excellent teaching of computational science. Without his instruction, this paper would not have been possible. The author also wishes to extend her gratitude to the North Carolina School of Science and Mathematics for providing an informative course in Computational Science.
6. References
Abad, C. L., Pulia, M. S., Krupp, A., & Safdar, N. (2014). Reducing Transmission of Methicillin-Resistant Staphylococcus aureus and Vancomycin-Resistant Enterococcus in the ICU: An Update on Prevention and Infection Control Practices. Turner White Communications. Retrieved from

tion.pdf.
Austin, D. J., & Anderson, R. M. (1999). Studies of antibiotic resistance within the patient, hospitals and the community using simple mathematical models. The Royal Society. Retrieved from PMC1692559/pdf/10365398.pdf.
Centers for Disease Control and Prevention. (2017). Antibiotic/Antimicrobial Resistance. U.S. Department of Health and Human Services.
Chamchod, F., & Ruan, S. (2012). Modeling Methicillin-Resistant Staphylococcus Aureus in Hospitals: Transmission Dynamics, Antibiotic Usage and Its History. Theoretical Biology and Medical Modelling, 9(25).
Gurieva, T., Bootsma, M. C. J., & Bonten, M. J. M. (2013). Cost and Effects of Different Admission Screening Strategies to Control the Spread of Methicillin-resistant Staphylococcus aureus. PLOS Computational Biology. Retrieved from journal.pcbi.1002874.
Knox, J., Uhlemann, A. C., & Lowy, F. D. (2015). Staphylococcus Aureus Infections: Transmission within Households and the Community. Trends in Microbiology, 23(7), 437-444.
Laxminarayan, R., Duse, A., Wattal, C., Zaidi, A. K. M., Wertheim, H. F. L., Sumpradit, N., . . . Cars, O. (2013). Antibiotic resistance - the need for global solutions. The Lancet Infectious Diseases, 13(12), 1057-98. http://dx.doi.org/10.1016/S1473-3099(13)70318-9
Morell, E. A., & Balkin, D. M. (2010). Methicillin-Resistant Staphylococcus Aureus: A Pervasive Pathogen Highlights the Need for New Antimicrobial Development. The Yale Journal of Biology and Medicine, 83(4), 223-233.
Nosocomial. (2017). In Merriam-Webster.
Staphylococcus. (2017). In Encyclopaedia Britannica.
STELLA Architect (Version 1.4.1) [Software].
Weinstein, R. A., Bonten, M. J. M., Austin, D. J., & Lipsitch, M. (2001). Understanding the Spread of Antibiotic Resistant Pathogens in Hospitals: Mathematical Models as Tools for Control. Clinical Infectious Diseases, 33(10), 1739-1746.
World Health Organization. (2017). Antimicrobial Resistance.
7. Appendix (see following pages)

Figure 6. Baseline model of MRSA spread.

Figure 7. MRSA spread with HCW hand-washing.

Figure 8. MRSA spread with multiple drug therapy.

Figure 9. MRSA spread with HCW gloving and gowning.

Figure 10. MRSA spread with universal screening.


OPTIMIZING THE MAGNETIC CONFINEMENT OF ELECTRONS FOR NUCLEAR FUSION
Jonathan Kelley
Abstract
Nuclear fusion is expected to be a major step forward for renewable energy generation and spaceflight propulsion. However, high costs and construction obstacles prohibit many research labs from developing experimental devices. The focus of this paper is to computationally optimize the design parameters of a magnetic trap that could be used in low-cost inertial electrostatic confinement (IEC) nuclear fusion devices. Two electromagnetic coils in a biconic cusp configuration trap hundreds of thousands of simulated electrons. The confinement of each electron contributes to an overall electric potential that can be used to accelerate fusion fuel to kinetic energies sufficient for nuclear fusion. OpenCL parallel programming is used to simulate electron trajectories in biconic cusp magnetic fields, with each confined electron contributing to the virtual space charge. From over 1.1 million total electron simulations, we report a maximum simulated confinement time of 2368 ns without space charge and 127 ns with space charge. For a theoretical system only 1.1 m in length and 10 cm in diameter, we report a voltage drop of about 65 kV, roughly 30% of the Coulomb barrier for deuterium-tritium fusion. These results suggest that the barriers to small-scale nuclear fusion research might be lower than previously thought.
1. Introduction
Nuclear fusion with the aim of generating usable electric power has undergone significant research for several decades (Viswanathan, 2017). Nuclear fusion is appealing as a new source of power because of its desirable environmental characteristics and virtually inexhaustible fuel supply (Ongena and Ogawa, 2016). There are currently many different strategies for generating the fusion conditions required for efficient reactions between atomic nuclei. Some systems, like tokamaks and stellarators, involve containing a heated plasma, while other systems focus on bringing solid fuel to ignition conditions.
This paper explores the creation of a computational model to simulate and optimize electron capture for a biconic cusp magnetic trap used in nuclear fusion. Modern research in the development of nuclear fusion devices follows two main strategies. The first strategy involves the use of magnetic fields to contain a highly energetic plasma of light atomic nuclei (Mattei, Labate, and Famularo, 2013). This approach is commonly found in systems like stellarators and tokamaks, such as the Wendelstein 7-X project in Germany and the ITER project in France. A second strategy is focused on the inertial confinement of a solid fuel pellet that is heated to fusion conditions by a laser or ion beam (Bychkov, Modestov, and Law, 2015). This technique has been successfully demonstrated at the National Ignition Facility at the Lawrence Livermore National Laboratory. While these techniques have achieved various levels of success, there are still several other methods of achieving fusion conditions with the eventual intent to generate electrical energy. In this paper, we primarily explore the inertial electrostatic confinement (IEC) subset of nuclear fusion research. The central concept behind IEC nuclear fusion is the heating of light atomic nuclei to sufficient kinetic energies for fusion through electric fields. Electric fields accelerate the ionized fusion fuel - typically a combination of the 3 main hydrogen isotopes - to velocities high enough that collisions result in fusion. The foremost goal of this paper is to optimize the voltage drop of an IEC system, with an emphasis on meeting and exceeding the minimum particle acceleration needed to produce fusion. Optimizing this voltage drop will help experimental fusion researchers more easily study IEC and fusion dynamics. By optimizing the voltage drop, IEC devices become more efficient, shrink in size, and increase the fusion fuel number density. According to the Lawson criterion, greater kinetic energy directly increases fusion rates and is therefore a step closer to break-even nuclear fusion. One relatively well-known IEC device is the Polywell fusion reactor developed by EMC2 (EMC2, 2008). The Polywell consists of an arrangement of 6 electromagnetic coils that generate a quasi-spherical magnetic null region. This magnetic null region is characterized by a cusped geometry that acts as a magnetic trap for charged particles. A diagram of the Polywell and 10 electron trajectories is shown below (Fig. 1). Electrons are inserted into the null region through a face cusp and remain trapped within the null region of the Polywell for a period of time. During their circulation inside the magnetic trap, the collective charge increases and a voltage drop is created. As mentioned before, this voltage drop is essential to accelerating fusion ions to sufficient kinetic energies. The Polywell is an improvement over previous work

Figure 1. A superposition of 10 different electron trajectories (Carr, Gummersall, Cornish, and Khachan, 2011).
in IEC fusion that relied on physical grids to provide the necessary voltage drop for fusion (Gummersall, Carr, Cornish, and Khachan, 2013). These devices were subject to conduction losses of the fusion fuel to the physical grids and were never able to produce break-even fusion rates (Kaye, 1974). A primary innovation of the Polywell is the formation of a virtual anode that provides the necessary voltage drop without a lossy physical grid. Despite the significant improvements over the simpler metal anode IEC system, the Polywell is a fairly involved machine that requires careful alignment and construction quality to operate properly. In a recent conference, Bussard elaborated that no more than 3E-5 of the machine's fractional surface area can be unshielded if conduction losses are to be fully mitigated (Bussard, 2009). Because of the size, cost, and tools required to construct the device, the Polywell is resource-prohibitive to many research labs and expensive to iterate on. In light of the difficulties in exploring IEC fusion, a research group at the University of Sydney has recently revisited the biconic cusp geometry as a simpler equivalent for magnetic confinement (Carr, Gummersall, Cornish, and Khachan, 2011). A biconic cusp system is more straightforward to construct than a quasi-spherical arrangement, as it only needs 2 electromagnetic coils and less power to operate. Because the biconic arrangement uses fewer electromagnetic coils, the same power supply can be used to create stronger overall magnetic fields than in a Polywell system. The construction of a biconic system is straightforward: 2 axially concentric electromagnetic coils are spaced apart by some distance with current running in opposite directions. This system is very similar to a Helmholtz configuration where the directions of the magnetic

fields are opposite and form a cusped null region. A 3D model of a biconic system is shown below (Fig. 2).

Figure 2. 3D model of biconic cusp system with electron gun.

PHYSICS AND ENGINEERING

In addition to being simpler to manufacture, a biconic IEC system is also easier to simulate. Unlike the Polywell, the biconic IEC device is radially symmetric, and its magnetic field can be calculated using radial components. Despite the simplified simulation requirements, there are relatively few computational papers that model the biconic cusp geometry (Hedditch, Bowden-Reid, and Khachan, 2015). For any given biconic cusp IEC device, there are 3 main coil parameters and 2 main electron gun parameters to optimize: coil separation, coil radius, and coil amp-turns, and electron gun separation distance and electron beam energy, respectively. Therefore, the overall goal of this paper is to determine a set of relationships between each of these parameters and their effect on the performance of the IEC device. We then use these relationships to determine an optimized coil arrangement for some fixed parameter. From this coil arrangement, we can calculate the kinetic energy gain of affected ions and assess this system's ability to perform nuclear fusion.

1.1 Maintaining a Potential Well

For each coil and electron beam configuration, we will simulate the formation of a potential well in the null region of the IEC device. This potential well formation is highly dependent on the system's ability to capture electrons for extended periods of time. The potential well system is analogous to a secular equilibrium system where the

rate of electron input is proportional to the current of the electron beam and the rate of electron loss is the reciprocal of the average electron confinement time. We can write these relationships as:

dN/dt = I/e - λout·N (1)

λout = 1/τ (2)

Q = e·N = I·τ (3)

where N is the equilibrium quantity, λout is the reciprocal of the confinement time τ, e is the magnitude of an electron's charge, Q is the space charge in the potential well, and I is the current of the electron beam. Conveniently, we find that the total charge contained in the null region of the device is equal to the current of the electron beam multiplied by the average electron confinement time. Unless explicitly stated otherwise, all simulations in this paper will assume an electron beam current of 1 amp. Using this relationship, we use the average electron confinement time within the null region as the primary metric to optimize the biconic IEC device. This then necessitates the ability to plot the trajectory and measure the capture time of any given electron. The Lorentz force can be used to solve for the trajectory of any charged particle in the biconic IEC device's magnetic field:

F = q·v × B (4)

Considering the complex biconic geometry, the magnetic field depends on the location of the electron relative to the electromagnetic coils. This is unfavorable because an analytic solution for the electron trajectory is not easily approachable. Numerical techniques must be applied to solve for the electron confinement time. The concept of confining electrons in cusped magnetic fields is derived from behavior known as "mirror reflection." Magnetic mirror reflection occurs when a spiraling charged particle moves into an increasing magnetic field and reverses direction, essentially being reflected. Cusped geometry exploits this behavior to continuously reflect electrons as they approach the edges of the null region.
Therefore, a reflection time and stopping distance can be determined from the adiabatic invariance of the magnetic moment (Freidberg, 2007):

μ = m·v⊥²/(2B) = constant (5)

This equation can be solved using the particle's initial

velocity v and change in magnetic field B to determine the particle stopping distance. The relationship between the magnetic field and the velocity of the particle relates the shape of the null region to the energy of the electron beam. Simply put, a stronger magnetic field is required to capture higher energy electrons. Combining all design parameters of the system, we can create a model for electron confinement from input energy and starting position.

1.2 Introducing Space Charge Effects

A major aspect of the Polywell and biconic cusp system that is often overlooked is the space charge of the electron cloud accumulating in the null region. Many simulation papers have simply ignored space charge effects, causing significant discrepancies with experimental results. Because the primary goal of this paper is to simulate and optimize a biconic cusp magnetic trap in the context of nuclear fusion, it is extraordinarily important for us to include these effects, which significantly impact electron loss. In the context of nuclear fusion, we consider the magnetic trap's ability to accelerate fusion fuel to be a function of the accumulated charge in the null region. If we assume the average position of all captured electrons to be located at the midpoint between both coils, we can simplify space charge effects to be analogous to a point charge. The electric field of a point charge can be represented as:

E(r) = kQ/r² (6)

where k is Coulomb's constant and Q is the charge of the particle. In this context, Q is the accumulated charge of the captured electrons and r is the distance away from the center of the null region. While this is sufficient for distances far from the null region, the behavior of the space charge changes when r is small. From the simulations where space charge is not accounted for, we can see that the electrons are distributed throughout the null region. The radius of this distribution is 2 cm for a 5 cm radius coil arrangement.
We can then simplify this distribution to be spherically uniform and calculate the electric field inside of the null region. Using the electric flux through a spherical Gaussian surface, we know that the electric field inside the electron cloud is equal to:

E(r) = kQ·r/R³ (7)

where R is the radius of the charge sphere. The electric field of the electron cloud for this experiment is shown
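The two regimes of Eqns. 6 and 7 combine into a single piecewise field model. A minimal Python sketch (the function name and the sample charge and radius below are our own illustrative choices, not values from the paper):

```python
COULOMB_K = 8.9875517873681764e9  # Coulomb constant k (N*m^2/C^2)

def cloud_field(r, q_cloud, r_cloud):
    """Radial electric field magnitude of a uniformly charged sphere:
    point-charge behavior outside (Eqn. 6), linear growth inside (Eqn. 7)."""
    if r >= r_cloud:
        return COULOMB_K * q_cloud / r ** 2
    return COULOMB_K * q_cloud * r / r_cloud ** 3
```

The two branches agree at r = R, so the field is continuous at the edge of the cloud, peaks there, and falls to zero at the center.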

below (Fig. 3). As the electrons enter the null region, the electric field increases with point-charge behavior. The electric field is highest at the edge of the electron cloud and decreases toward the center. However, this distribution does not perfectly reflect the true distribution of electrons in the null region.
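The secular-equilibrium charge of Eqns. 1-3 can also be checked numerically: integrating dQ/dt = I - Q/τ shows the space charge approaching its equilibrium value Q = I·τ. A minimal Python sketch (the function name is ours; the 1 A current is the paper's stated assumption, and the 378 ns confinement time is borrowed from the beam-radius optimization in Section 4.1):

```python
def equilibrium_charge(beam_current, tau, dt=1e-10, t_end=5e-6):
    """Forward-Euler integration of dQ/dt = I - Q/tau: the space charge
    grows until electron input balances loss, approaching Q = I*tau."""
    q = 0.0
    for _ in range(int(t_end / dt)):
        q += (beam_current - q / tau) * dt
    return q

# 1 A beam, 378 ns average confinement time
q_eq = equilibrium_charge(1.0, 378e-9)
```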

Figure 3. Electric field of electron cloud.

Looking back at the trajectory calculations using the Lorentz force, we can now assume a non-zero electric field equal to that of the space charge dictated by the electron cloud. The new trajectory calculation becomes:

F = q(E + v × B) (8)

We now have a component of the electron trajectory that is non-adiabatic. While the magnetic field does no work on the particles, the electric field does perform work on them and will cause energy loss over time. This loss of energy makes long-term confinement much more difficult than in an adiabatic system.

2. Simulating Trajectories

Establishing an average electron confinement time for multiple sets of coil and beam parameters requires an accurate simulation of many thousands of electrons moving through the biconic magnetic fields. Therefore, we developed a numerical integration model to plot the trajectory of a single electron. For this simulation, it is paramount that this technique is both accurate and precise; the cusped magnetic fields change rapidly in direction and magnitude around the edges of the null region. We must then mathematically determine the force of the magnetic field of each coil on the electron at every position along the electron's path. To do this, we integrate the Biot-Savart law over a circular loop using Gaussian units. This results in the magnetic field of a single coil in spherical coordinates (Griffiths, 2017):

(10)

(11)

where I is the current through the loop in amps and K and E are the complete elliptic integrals of the first and second kind, respectively. For the simulation, we will convert Br (Eqn. 10) and Bθ (Eqn. 11) to Cartesian coordinates. We can also assume that the magnetic field at any point in the simulation is equal to the sum of the magnetic fields generated by each coil. This behavior is responsible for the creation of cusped magnetic fields and a null region of low magnetic field (Fig. 4).
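The loop integration and coil superposition described above can be sketched directly in code. The following is our own minimal Python illustration (in SI units with μ0, rather than the paper's Gaussian units; the coil radius, current, and separation are illustrative values): it sums dB = (μ0·I/4π)(dl × r)/|r|³ around a discretized loop and superposes two opposed coils to form the biconic cusp.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def loop_field(point, a=0.05, current=1000.0, z0=0.0, segments=2000):
    """Field (Bx, By, Bz) of a circular coil of radius a (m) centered on the
    z-axis at height z0, carrying current (A), by midpoint integration of
    the Biot-Savart law dB = (mu0*I/4pi) (dl x r) / |r|^3."""
    x, y, z = point
    bx = by = bz = 0.0
    dphi = 2.0 * math.pi / segments
    for i in range(segments):
        phi = (i + 0.5) * dphi
        # midpoint of this current element on the loop
        sx, sy = a * math.cos(phi), a * math.sin(phi)
        # element vector dl, tangent to the loop (dlz = 0)
        dlx, dly = -a * math.sin(phi) * dphi, a * math.cos(phi) * dphi
        rx, ry, rz = x - sx, y - sy, z - z0
        r3 = (rx * rx + ry * ry + rz * rz) ** 1.5
        c = MU0 * current / (4.0 * math.pi * r3)
        bx += c * (dly * rz)             # (dl x r)_x
        by += c * (-dlx * rz)            # (dl x r)_y
        bz += c * (dlx * ry - dly * rx)  # (dl x r)_z
    return bx, by, bz

def biconic_field(point, a=0.05, current=1000.0, sep=0.05):
    """Two coaxial coils with opposite currents (a biconic cusp): the net
    field is the superposition of the two single-coil fields."""
    top = loop_field(point, a, +current, +sep / 2)
    bot = loop_field(point, a, -current, -sep / 2)
    return tuple(t + b for t, b in zip(top, bot))
```

On the coil axis the midpoint sum reproduces the analytic field μ0·I/(2a) at the loop center, and the two-coil cusp field vanishes at the midpoint by symmetry, which makes for a quick correctness check.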

Figure 4. Top coil (green) and bottom coil (red) resultant magnetic field.

We use the Runge-Kutta 4th Order (RK4) method for fast convergence and accurate calculations during mirror reflections. The RK4 method is more efficient than simpler integration methods and limits error buildup, maintaining accuracy even for long simulations. The complete trajectory model would then be:

dX/dt = V, dV/dt = A(X) (9)

where A is a function that returns the net acceleration by

all coils on the electron for any point in space, V is the velocity of the electron, and X is the position in 3-space.

Figure 5. CAD model of biconic cusp device (left) and simulation space (right).

3. Experimental Setup & Single Simulation

Now that a model for electron motion has been established, we are able to begin adding various components to the simulation space. The host code for the simulation is written in Python 2.7.3 and manages the input data creation for the electron trajectory solver. The primary function of the host code is to convert input simulation parameters into arrays of data for the trajectory code to solve. The host code allows manipulation of electron gun parameters such as shape, distance from the coils, and input energy. In addition, coil parameters can be changed, including radius, input current, orientation, and separation distance. The secondary function of the host code is to parallelize the electron trajectory code to reduce the time needed to simulate thousands of electrons simultaneously. We compute all simulations in this paper in double precision using an AMD Radeon R9 290 GPU. A typical simulation consists of a single electron gun axially concentric with a pair of electromagnetic coils carrying opposite currents. This configuration serves as an analog for previous biconic cusp confinement research as well as future work on smaller scale machines. We created a 3D CAD model of a physical system and the simulation space generated by the host code (Fig. 5). The coils of the host code are axially concentric about the Z axis. In every simulation, each electron returns a list of its position over time. Electrons with longer confinement times have more position values within the simulation region determined by the coils.

Table 1. Simulated parameter space ranges.

Parameter              Start    Stop
Beam Radius (mm)       0.5      5.0
Beam Energy (eV)       1.0      285
Coil Current (A)       1,000    15,000
Coil Separation (cm)   3.0      10.0
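Stepping back to the trajectory model of Section 2, the coupled system of Eqn. 9 can be advanced with a classical RK4 step. The following is our own generic Python sketch (helper names are ours; the acceleration callback here also receives the velocity, since the magnetic q(V × B)/m term depends on V):

```python
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def rk4_step(x, v, acc, dt):
    """One classical RK4 step for dX/dt = V, dV/dt = A(X, V)."""
    k1x, k1v = v, acc(x, v)
    k2x = add(v, scale(k1v, dt / 2))
    k2v = acc(add(x, scale(k1x, dt / 2)), k2x)
    k3x = add(v, scale(k2v, dt / 2))
    k3v = acc(add(x, scale(k2x, dt / 2)), k3x)
    k4x = add(v, scale(k3v, dt))
    k4v = acc(add(x, scale(k3x, dt)), k4x)
    x_new = add(x, scale(add(add(k1x, k4x), scale(add(k2x, k3x), 2.0)), dt / 6))
    v_new = add(v, scale(add(add(k1v, k4v), scale(add(k2v, k3v), 2.0)), dt / 6))
    return x_new, v_new
```

As a quick check, a test particle with q/m = 1 in a uniform field B = (0, 0, 1) should gyrate in a circle at constant speed.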

4. Simulation

The simulation presented in this paper relies on properties of both the electron gun and the electromagnetic coil pair. For the electron gun, confinement times are determined by the beam diameter, injection distance, and electron energy. For the coil pair, confinement times are determined by coil radius, separation distance, and coil current. For this paper, injection distance and coil radius were held constant to reduce computation time and to reflect real-world construction limitations. With two fixed parameters, we are left with a four-dimensional parameter space consisting of beam radius, beam energy, coil current, and coil separation. The complexity of simulating this parameter space is O(n^4). For this paper, each parameter consists of 25 linearly increasing slices, totaling 390,625 individual electron simulations. Table 1 shows the range of simulated values. Previous experimental research suggests fairly large coil currents and beam velocities (Cornish, 2014), while simulations with no space charge suggest fairly low values. This parameter space is therefore designed to capture both high and low values for the coil configuration.

4.1 Optimizing Electron Beam Radius

We must consider realistic behavior for each of the various components when optimizing the magnetic trap. One primary characteristic of the biconic cusp trap is the radius of the electron beam. We assume that the electron distribution is radially symmetric and uniform. For each radius of electron beam, we can assume that a finite quantity of electrons enters the null region each second, equal to the current of the electron gun. For the simulations presented in this paper, we assume this current to be 1 amp. To optimize the electron beam radius, we can simulate a line of electrons spanning the space between the axis of the magnetic trap and the edge of the coils. From each simulated electron, we can determine a confinement time for a given radius.
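The four-dimensional sweep described above can be enumerated directly. A minimal stdlib-Python sketch (the parameter names are our own labels for the Table 1 rows):

```python
import itertools

def linspace(start, stop, n):
    """n linearly spaced values from start to stop, inclusive."""
    step = (stop - start) / (n - 1)
    return [start + i * step for i in range(n)]

# Parameter ranges from Table 1; 25 linearly increasing slices each.
RANGES = {
    "beam_radius_mm":     (0.5, 5.0),
    "beam_energy_ev":     (1.0, 285.0),
    "coil_current_a":     (1000.0, 15000.0),
    "coil_separation_cm": (3.0, 10.0),
}

slices = {name: linspace(lo, hi, 25) for name, (lo, hi) in RANGES.items()}
grid = list(itertools.product(*slices.values()))  # 25^4 = 390,625 combinations
```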
The confinement times for 1000 radially distributed electrons can be segregated into 3 main groups: primary

capture, loss zone, and recirculation (Fig. 6). The primary capture region is shown by the first 90-100 electrons, with confinement times above 120 ns. The loss zone is shown by the electrons between 0.0014 and 0.014 meters, where the confinement times are fairly constant. The behavior shown in the last 200 electrons is produced when electrons are circulated around the coils and are not confined.

Figure 6. 1000 simulated electrons at various radii from coil axis.

Because we are looking to optimize confinement time, we are most interested in the primary capture region of the biconic cusp system. The confinement times for electrons at each radius can then be used to create an average electron beam confinement time. This is given by the average confinement time of all electrons from the axis out to a given radius. We can plot this to identify the radius with the highest average confinement time (Fig. 7). From the plot of average confinement time versus electron beam radius, we can identify the highest average confinement time of 378 ns at 888 micrometers. This simulation was performed with an initial space charge of 1E-12 coulombs, a coil radius and separation of 5 cm, a coil current of 5000 amp-turns, and an electron beam energy of about 3 eV. Ultimately, we have a method of determining the optimal beam radius for any set of magnetic trap parameters.

Figure 7. Average confinement time vs electron beam radius.
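The averaging described in Section 4.1 can be sketched as a running mean over electrons ordered by radius (a toy illustration; the function name and sample values are ours, not the paper's data):

```python
def optimal_beam_radius(radii, times):
    """Return (radius, avg) maximizing the average confinement time of
    all electrons from the axis out to that radius; radii must be sorted
    in increasing order."""
    best_r, best_avg, total = None, float("-inf"), 0.0
    for i, (r, t) in enumerate(zip(radii, times)):
        total += t                # cumulative confinement time
        avg = total / (i + 1)     # average from axis out to radius r
        if avg > best_avg:
            best_r, best_avg = r, avg
    return best_r, best_avg
```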

Figure 8. Electron trajectory with no space charge.

Figure 9. Electron trajectory with 1 nC space charge.

4.2 Confinement Times and Space Charge

Another major consideration for confinement times is the space charge of the electron sphere as the system approaches equilibrium. From the results above, we know that confinement times will decrease as the charge of the electron cloud increases. As more electrons fill the null region, the adiabaticity of each electron trajectory will decrease. For this paper, each parameter space was simulated for 3 different space charges. The final 5D parameter space includes 1,171,875 total electrons. The electron behavior during its time in the null region varies significantly with an increase in space charge. The trajectory of the electron with the highest confinement time for 0 space charge is shown (Fig. 8). There are several differences between the 0 space charge simulation and the highest 1 nC space charge simulation as shown in Figure 9. For the space charge simulation, there are very few mirror reflections above the mid-plane. The reflections in this simulation are flat and are not as tightly centered about the midpoint as in the 0 space charge simulation.

4.3 Calculating Potential for Fusion

We can finally utilize the results of the simulations to determine the capability for fusion of the relatively small "desktop scale" magnetic trap. First, we need to know the total required energy of any single particle to undergo fusion. We can make the general assumption that two particles must overcome their Coulomb barrier for nuclear fusion to occur. In reality, other effects like quantum tunneling can be taken into account. For any pair of nuclei, this barrier can be represented as:

U = k·Z1·Z2·e²/(R1 + R2) (12)

According to the formula for the Coulomb barrier in Equation 12, where Z1 and Z2 are the atomic numbers and R1 and R2 are the nuclear radii, deuterium-tritium reactions require only about 444 keV to fuse. Because of the planar symmetry of the biconic cusp system, most collisions would be head-on, reducing the energy per particle by half to just 222 keV. Using the secular equilibrium equation presented earlier, we know that for an electron beam current of 1 amp, the charge of the electron cloud is directly proportional to the average confinement time. We can solve for the minimum required average confinement time as a function of required kinetic energy. We will select the center of the charge sphere and the tip of the electron gun to determine the accelerating voltage on the hydrogen nuclei. This must be represented as a piece-wise function because of the behavior of the electric field of the electron sphere, where r is the distance away from the null region and R is the radius of the electron cloud:

V(r) = kQ/r for r ≥ R; V(r) = kQ(3R² − r²)/(2R³) for r < R

This formula calculates the voltage difference from the tip of the electron gun to the edge of the charge sphere as a point charge. The second portion of the piece-wise function evaluates the voltage difference from the center of the electron cloud to its edge. For the simulation presented in this paper and a confinement time of 127 ns with appropriate space charge, a voltage difference of 64,943 volts is created. This is just under 30% of the required voltage drop for deuterium-tritium fusion to occur. A minimum required confinement time would then be about 434 ns.
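These figures can be checked numerically. The sketch below evaluates Eqn. 12 with the common empirical nuclear-radius model R = r0·A^(1/3) (the r0 = 1.2 fm coefficient is our assumption; the paper does not state which radius formula it used), and then rescales the 127 ns operating point: since the well depth is linear in Q = I·τ for a fixed geometry and a fixed 1 A beam, the minimum confinement time scales linearly with the required voltage.

```python
K_COULOMB = 8.9875517873681764e9  # Coulomb constant (N*m^2/C^2)
E_CHARGE = 1.602176634e-19        # elementary charge (C)
R0 = 1.2e-15                      # assumed nuclear-radius coefficient r0 (m)

def coulomb_barrier_ev(z1, a1, z2, a2):
    """Coulomb barrier (eV) of two touching nuclei (Eqn. 12),
    with nuclear radii modeled as R = r0 * A^(1/3)."""
    r1, r2 = R0 * a1 ** (1 / 3), R0 * a2 ** (1 / 3)
    return K_COULOMB * z1 * z2 * E_CHARGE ** 2 / (r1 + r2) / E_CHARGE

def minimum_confinement_time(tau_ref, volts_ref, volts_required):
    """With a fixed 1 A beam, Q = I*tau and the well depth is linear in Q,
    so the required confinement time scales linearly with the voltage."""
    return tau_ref * volts_required / volts_ref

barrier_ev = coulomb_barrier_ev(1, 2, 1, 3)  # deuterium (A=2) on tritium (A=3)
required_ev = barrier_ev / 2                 # head-on collisions halve the per-particle energy
tau_min = minimum_confinement_time(127e-9, 64943.0, required_ev)
```

This reproduces the approximately 444 keV barrier, the 222 keV per-particle requirement, and the roughly 434 ns minimum confinement time quoted above.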

5. Discussions

5.1 Analysis of Results

The data presented in this paper demonstrate a variety of relationships for design parameters in a biconic cusp magnetic trap. Many of these relationships are fairly intuitive: with no space charge, slower moving electrons are confined for longer times; greater separation between the two coils results in a larger null region; and the greater the charge buildup, the greater the difficulty of containing more electrons. When compared to other Polywell simulation papers, the confinement times for the biconic cusp magnetic trap were surprisingly high given the small scale. Investigation revealed an interesting relationship where the lowest beam energies produced the highest confinement times. Increasing coil current reduced the confinement time slightly over many thousands of amps. Other experimental papers suggest that stronger magnetic fields have positive effects on fusion plasma behavior such as electron recirculation. Overall, a potential difference of 64,943 volts from a 127 ns confinement time is promising. This is especially significant considering that confinement time scales with the physical dimensions of the IEC device.

5.2 Limitations

While the simulations presented in this paper are modeled with a focus on real-world behavior, it is important to recognize many of the experimental limitations and potential inaccuracies. The biconic cusp system is certainly more difficult to model than other simple magnetic traps. Characteristics of the simulation include varying magnetic fields, electron nonadiabaticity, electron drift, and space charge. The biggest factor that might cause significant variations between the simulations and experimental devices is the electron cloud space charge. The simulations in this paper assume a perfectly spherical distribution of electrons throughout the null region with a radius of 1 cm. The cloud radius was chosen as an approximate average distance of all mirror reflections for electrons simulated without space charge. In reality, the electron cloud is much more complex, with properties beyond the scope of single-electron simulations.

Figure 10. Distribution of confinement times in selected parameter space.

The current parameter space simulated in this paper consists of 25 slices of 4 variables, resulting in 390,625 electron simulations. While this is a large data set, the resolution is still fairly low. Given greater computational power, an ideal parameter space would consist of 4-8 parameters with a resolution of 1000 electrons per parameter combination. We can visualize the distribution of confinement times for the space charge simulation to verify the original parameter space estimates. Figure 10 shows the ranked confinement times for the simulated parameter space. This suggests that the modeled parameter space does not sufficiently cover all possible parameter combinations and that combinations with potentially higher confinement times are simply not captured in this parameter space.

5.3 Further Testing

The model presented in this paper serves as a tool to optimize the confinement of electrons for given design constraints on a biconic cusp magnetic trap. This specific model is highly flexible and can simulate millions of electrons across multidimensional parameter spaces. Future simulations would include more parameters in the parameter space, such as changing current and torus-shaped coils, at much higher resolutions. Other unique arrangements of the electromagnetic coils could also be investigated to boost confinement times. Individual visualizations suggest that many electrons are lost through the ring cusp formed along the midplane of the coils. Future work could characterize the variety of electron loss behaviors and investigate how potential solutions would mitigate these effects. Another approach to optimizing the magnetic trap would be an evolutionary process in which simulation parameters are altered slightly and characteristics of the best runs are used for subsequent simulations. This would provide an efficient sequential approach to optimization, whereas this paper presents a computationally-intensive parallel method.

6. Conclusions

The simulations presented in this paper are designed to be both numerically accurate and physically realistic. These requirements encouraged the use of mathematically defined coils, a more complex numerical integration method, and the consideration of space charge. This paper is unique in that space charge is considered and hundreds of thousands of electrons were simulated. Because we simulate the complete trajectory of each electron, future work can investigate and characterize the various loss mechanisms that result in lower confinement times. Because each simulation is performed in double precision with many integration steps, future work can also characterize the electron spiral behavior throughout

the null region. In the end, we present a relatively small IEC device only 1.1 m long and 10 cm in diameter that forms a potential well at 30% of the required acceleration for deuterium-tritium nuclear fusion.

7. Acknowledgements

I would like to thank Dr. Jonathan Bennett (NCSSM) for his support in validating the models, the NCSSM Foundation, and the NCSSM Summer Research and Internship Program.

8. References

Bussard, R.W. (2009). The advent of clean nuclear fusion: superperformance space power and propulsion. 57th International Astronautical Congress, IAF IAA.

Bychkov, V., Modestov, M., & Law, C. (2015). Combustion phenomena in modern physics: I. Inertial confinement fusion. Progress in Energy and Combustion Science, 47, 32–59.

Carr, M., Gummersall, D., Cornish, S., & Khachan, J. (2011). Low beta confinement in a polywell modelled with conventional point cusp theories. Physics of Plasmas, 18(11), 112501.

Cornish, S., Gummersall, D., Carr, M., & Khachan, J. (2014). The dependence of potential well formation on the magnetic field strength and electron injection current in a polywell device. Physics of Plasmas, 21.

EMC2. (2008). Method and apparatus for controlling charged particles. US20080187086A1.

Freidberg, J. P. (2007). Plasma physics and fusion energy. Cambridge University Press.

Griffiths, D. (2017). Introduction to electrodynamics. Cambridge University Press.

Gummersall, D. V., Carr, M., Cornish, S., & Khachan, J. (2013). Scaling law of electron confinement in a zero beta polywell device. Physics of Plasmas, 20(10), 102701.

Hedditch, J., Bowden-Reid, R., & Khachan, J. (2015). Fusion energy in an inertial electrostatic confinement device using a magnetically shielded grid. Physics of Plasmas, 22(10), 102705.

Kaye, A. S. (1974). Plasma losses through an adiabatic cusp. Journal of Plasma Physics, 11.

Mattei, M., Labate, C. V., & Famularo, D. (2013). A constrained control strategy for the shape control in thermonuclear fusion tokamaks. Automatica, 49(1), 169–177. Ongena, J. & Ogawa, Y. (2016). Nuclear fusion: Status report and future prospects. Energy Policy, 96, 770–778. Viswanathan, B. (2017). Chapter 6 - nuclear fusion. In B. Viswanathan (Ed.), Energy sources (pp. 127–137). Amsterdam: Elsevier.


REDUCED GRAPHENE OXIDE FIBERS FOR WEARABLE SUPERCAPACITORS

Julia Wang

Abstract

Graphene fibers have attracted broad interest as a promising fiber electrode for one-dimensional flexible fiber supercapacitors (FSCs). The major challenges in this field are to develop highly conductive, mechanically strong, and structurally stable graphene fibers with large surface areas using a relatively easy and scalable process. Here, we demonstrate a wet-spinning process to produce graphene oxide (GO) fibers with trivalent ion Al3+ cross-linking. Upon chemical reduction, the resulting reduced GO (rGO) fibers exhibited a much rougher morphology and were higher in toughness and electrical conductivity than rGO fibers produced from a divalent ion Ca2+ or acetic acid coagulation bath.

1. Motivation

People around the world desire durable clothing at an affordable price. The military, in particular, would benefit from energy-storing uniforms that are light, flexible, and durable. Strong, conductive fibers could serve as the basis for wearable electronics and built-in batteries. Although graphene fibers are making their way into the market, the lack of an effective, low-cost, and convenient assembly strategy has hindered their further development. We hope to produce military uniforms that are lightweight and multifunctional.

2. Introduction

Graphene, due to its attractive intrinsic merits, including impressive surface area, extreme mechanical strength, excellent optical transparency, and high thermal and electrical conductivities, has been extensively studied as a building block for macroscopic materials, including one-dimensional (1D) fibers, two-dimensional (2D) films, and three-dimensional (3D) frameworks (Du, 2008; Lee, 2008; Seol, 2010; Becerril, 2008; Li, 2014; Xu, 2015). Among these architectures, graphene fibers (GFs) have become a particularly promising field due to their potential applications in smart textiles and flexible, wearable energy storage or sensors (Xu, 2015).
Solution-processing of graphene oxide (GO) followed by chemical or thermal reduction is a widely studied method for the synthesis of graphene macroscopic materials. Owing to the abundant oxygen-containing groups on the basal plane and sheet edges, GO can be highly dispersed in water. This excellent dispersibility, together with its large aspect ratio, allows GO to form liquid crystals (LCs) with orientational order, making wet spinning an optimal choice for obtaining continuous GO fibers and the corresponding reduced GO (rGO) fibers. The coagulation bath is one of the most important factors influencing the properties of wet-spun fibers. So far, alkaline baths (such as KOH and NaOH), divalent ions (CaCl2 and CuSO4), positively charged polymers and small

molecules (chitosan, CTAB, and diamine), and non-solvents (acetone and glacial acetic acid) have been used as coagulation agents. Among these, divalent ions, especially Ca2+, which can penetrate into the interior of the as-spun gel fiber and offer ionic cross-linking, result in the highest mechanical properties while sustaining high conductivity. However, these Ca2+ cross-linked fibers have a rather smooth surface morphology that is adverse to electrochemical performance. On the other hand, Aboutalebi et al. found that rGO fibers from an acetone coagulation bath could give higher capacitance than those from a Ca2+ bath. However, the mechanical properties of acetone-based rGO fibers were almost 10 times lower than those of Ca2+ coagulated fibers. Therefore, it remains a challenge to fabricate rGO fibers that satisfy all the requirements for fiber supercapacitors (FSCs). The trivalent ion Al3+ has been proven to be an effective cation for strengthening GO membranes. In addition, trivalent cations are better cross-linkers for alginate gels than divalent cations in terms of mechanical performance. However, there is still no report using Al3+ as a coagulation agent for wet-spinning rGO fibers. Inspired by the divalent ionic cross-linking in rGO fibers and the application of trivalent ions in GO membranes and alginate gels, we believe that Al3+ can be employed in the coagulation bath for the fabrication of rGO fibers, thereby enhancing the interlayer interaction and stabilizing the fiber structure.

3. Materials and Methods

3.1 – Materials

Natural graphite flakes (300 μm) were obtained from Asbury Graphite Mills USA. KMnO4, HI acid (55%), CaCl2, AlCl3, and cetrimonium bromide (CTAB) were purchased from Sigma-Aldrich and used as received. Concentrated H2SO4 (98%), HNO3, HCl (36.5%), glacial acetic acid, ethanol, and H2O2 (30%) were purchased from Fisher Chemical.

3.2 – Synthesis of GO

GO was prepared from graphite powders following a modified Hummers' method. Graphite powder (2 g) was

added to a H2SO4 (98%, 100 mL) and HNO3 (33 mL) mixture and stirred for 24 hours at room temperature. Then, the mixture was poured slowly into 1 L of de-ionized (DI) water, followed by filtration to collect the solid. The solid was washed with DI water until the pH was neutral and dried at room temperature to obtain the intercalated graphite compounds. The intercalated graphite compounds were thermally expanded using a microwave (750 W) for 5 seconds to obtain worm-like expanded graphite (EG). EG was added to a 500 mL flask containing H2SO4 (98%, 267 mL) in an ice bath (0°C). KMnO4 (10 g) was then added slowly to the mixture under continuous stirring. After the introduction of KMnO4, the mixture was kept at room temperature and stirred for 12 hours. 1.5 L of water was then added slowly in an ice bath (0°C). Shortly after the dilution with DI water, 30 mL of H2O2 (30%) was added to the mixture, resulting in a bright yellow, bubbling solution.

3.3 – Fabrication of rGO Fibers

On a wet-spinning apparatus with a syringe, pump, nozzle, coagulation solution, and rotating coagulation bath, GO spinning dope (15 mg mL-1) was extruded through a spinneret (24 gauge) into the rotating coagulation bath, yielding a stretching ratio of 1.3. The coagulation baths were 1 wt% AlCl3 in H2O/ethanol (3/1 v/v), 5 wt% CaCl2 in H2O/ethanol (3/1 v/v), 10 mg/mL CTAB in water, and glacial acetic acid, respectively. As displayed in Fig. 1, graphene oxide sheets form 2 and 3 bonds per molecule with CaCl2 and AlCl3, respectively. Next, the GO gel fibers from the AlCl3, CaCl2, and CTAB coagulation baths were washed with DI water. The reduced GO (rGO) fibers were obtained by reduction with hydroiodic acid (HI) at 80°C for 12 hours, followed by washing with water and ethanol and drying at 60°C under vacuum for 8 hours.

3.4 – Characterizations

The morphology of the rGO fibers was characterized using a scanning electron microscope (SEM). The electrical conductivity of the rGO fibers was measured by a standard four-probe method.
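For reference, converting a four-probe resistance reading into conductivity for a cylindrical fiber uses σ = L/(R·A). A small Python sketch (the function name and the fiber dimensions below are our own illustrative values, not measured data from this work):

```python
import math

def fiber_conductivity(resistance_ohm, length_m, diameter_m):
    """Conductivity (S/m) of a cylindrical fiber from a four-probe
    resistance measurement: sigma = L / (R * A)."""
    area = math.pi * (diameter_m / 2) ** 2  # cross-sectional area (m^2)
    return length_m / (resistance_ohm * area)

# illustrative: 1 cm gauge length, 30 um diameter, 50 ohm reading
sigma = fiber_conductivity(50.0, 0.01, 30e-6)
```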
Mechanical properties were measured using a Q-test system with a 1 mm/min extension rate and a 1 cm gauge distance. All electrical conductivity and mechanical property values are the averages of at least 5 samples. Electrochemical measurements were conducted in the solid state using an electrochemical workstation (Autolab, Metrohm, USA). Two rGO fibers were aligned, soaked with a gel electrolyte of H2SO4/polyvinyl alcohol/H2O, and dried at room temperature.

4. Results and Discussion

In the pursuit of both high mechanical and electrical performance in rGO fibers, divalent ions such as Ca2+ have been widely used as cross-linkers for GO, offering interlayer and intralayer cross-linking bridges between the negatively charged oxygen-containing acidic groups. In this work, we first employed the trivalent ion Al3+ as a cross-linker in the coagulation bath for wet-spun rGO fibers. Then, we compared these fibers to those coagulated from the divalent ion Ca2+ and from the non-solvent acetic acid without cross-linking. Surprisingly, the Al3+ cross-linking gave the corresponding rGO fiber a very different morphology as well as different electrical and mechanical properties.

Figure 1. Schematic illustration of the formation of rGO fiber from Al3+ and Ca2+ coagulation bath respectively.

Figure 2. SEM images of the surface morphology of rGO fibers obtained from various coagulation baths. (a) Al3+ cross-linked rGO fiber; (b) Ca2+ cross-linked rGO fiber; (c) acetic acid coagulated rGO fiber.

The rGO-Al3+ fibers are more wrinkled, with macropores in the bulk structure and on the surface. In comparison, the surfaces of the other rGO fibers (rGOF-Ca2+ and rGOF-acetic acid) are rather smooth with few grooves. This much more wrinkled surface might be attributed to the greater positive charge of Al3+, which can provide more cross-linking sites between and within GO sheets (Fig. 2). When the trapped solvent evaporates during the drying process, the carboxyl group sites in the same or different GO sheets that cross-link with the same Al3+ ion shrink to one spot, forming a 3D bonding structure and resulting in a rough surface morphology. It is evident from the results that the employed coagulation bath influences the mechanical properties of the rGO fibers. As expected, rGO fibers prepared by ionic cross-linking displayed better mechanical properties than those from the non-solvent coagulation bath (acetic acid) without cross-linking. For rGO fibers spun from acetic acid, the

lateral cohesion of the adjacent rGO sheets is attributed to strong van der Waals interactions. For fibers coagulated in AlCl3 and CaCl2, apart from van der Waals forces, the residual oxygen-containing groups can also offer ionic cross-links.

Young's modulus is the ratio of stress to strain in the initial, linear part of the curve. The rGO fibers coagulated in Al3+ had the highest Young's modulus, while those from acetic acid had the lowest (Fig. 3).

Figure 3. Representative stress-strain curves of rGO fibers prepared using various coagulation baths.

Sample      | Tensile strength (MPa) | Breakage elongation (%) | Electrical conductivity (S/cm) | Toughness (MJ/m3)
rGO-Al3+    | 164.9 ± 2.1            | 10.3 ± 2.1              | 171.3 ± 9.1                    | 10.0 ± 2.1
rGO-Ca2+    | 240.9 ± 32.5           | 6.2 ± 0.9               | 128.0 ± 6.2                    | 8.3 ± 1.2
rGO-C2H3O2  | 145.6 ± 11.8           | 6.9 ± 0.8               | 85.7 ± 0.9                     | 4.1 ± 0.8

Table 1. Mechanical and electrical properties of rGO fibers.

Divalent cation Ca2+ cross-linked fibers had a tensile strength of 240.9 ± 32.5 MPa at 6.2 ± 0.9 % ultimate elongation. For comparison, trivalent cation Al3+ cross-linked fibers exhibited a tensile strength of 164.9 ± 2.1 MPa at 10.3 ± 2.1 % elongation, resulting in a toughness of 10.0 ± 2.1 MJ/m3, which is 20.5 % and 144.6 % higher than that of the Ca2+ cross-linked and acetic acid coagulated fibers, respectively. This is due to the more wrinkled rGO sheets in fibers originating from the trivalent cross-linking. Even with the wrinkled rGO sheets, the Al3+ cross-linked rGO fibers show the highest electrical conductivity (171.3 ± 9.1 S cm-1) in comparison with the Ca2+ cross-linked and acetic acid coagulated fibers.

5. Conclusions and Future Work

We made a novel discovery that an AlCl3 coagulation bath contributes to forming fibers suitable for FSCs. As a cross-linker, Al3+ has an advantage over Ca2+ and acetic acid: it forms many more interactions with GO sheets, which increases toughness and electrostatic interactions. From the SEM images, we see that the Al3+ coagulated fibers are by far the most wrinkled, meaning they have the most surface area available for electrochemical activity. Our results disagree with Abouralebi et al., who suggested that acetic acid is a far better coagulant than Ca2+; based on our findings, we believe that Ca2+ is more appropriate. Findings for Al3+ have not yet been published. Going forward, we will continue to refine the procedure using AlCl3 in the coagulation solution. The produced fibers will be used for FSCs and other wearable technology components such as wireless charging receivers and remote actuators. Our vision is that someday our products will reach the marketplace and benefit society with portable and wearable technology.

6. Acknowledgements

I would like to thank Dr. Jonathan Bennett, instructor at NC School of Science and Mathematics, who helped identify the research projects, edit this paper, and guide the researcher throughout the experimentation process. I would also like to thank Dr. Wei Gao, assistant professor and principal investigator of the lab at NC State University College of Textiles. Nanfei He, graduate student at NC State University and Ph.D. candidate in Fiber and Polymer Science, was also a tremendous help in supervising the experimentation and assisting with the use of hazardous chemicals and machinery.

7. References

Abouralebi, S. H.; Jalili, R.; Esrafilzadeh, D.; Salari, M.; Gholamvand, Z.; Yamini, S. A.; Konstantinov, K.; Shepherd, R. L.; Chen, J.; Moulton, S. E.; Innis, P. C.; Minett, A. I.; Razal, J. M.; Wallace, G. G. (2014). ACS Nano, 8(3), 2456-2466.

Becerril, H. A.; Mao, J.; Liu, Z.; Stoltenberg, R. M.; Bao, Z.; Chen, Y. (2008).
Evaluation of solution-processed reduced graphene oxide films as transparent conductors. ACS Nano, 2(3), 463-470.

Du, X.; Skachko, I.; Barker, A.; Andrei, E. Y. (2008). Approaching ballistic transport in suspended graphene. Nat Nanotechnol, 3(8), 491-495.

Lee, C.; Wei, X. D.; Kysar, J. W.; Hone, J. (2008). Measurement of the elastic properties and intrinsic strength of monolayer graphene. Science, 321(5887), 385-388.

Li, J.; Li, J.; Meng, H.; Xie, S.; Zhang, B.; Li, L.; Ma, H.; Zhang, J.; Yu, M. (2014). Ultra-light, compressible and fire-resistant graphene aerogel as a highly efficient and recyclable absorbent for organic liquids. J Mater Chem A, 2(9), 2934-2941.

Liu, Y. J.; Xu, Z.; Gao, W. W.; Cheng, Z. D.; Gao, C. (2017). Chemically doped macroscopic graphene fibers with significantly enhanced thermoelectric properties. Adv Mater, 29(14).

Seol, J. H.; Jo, I.; Moore, A. L.; Lindsay, L.; Aitken, Z. H.; Pettes, M. T.; Li, X. S.; Yao, Z.; Huang, R.; Broido, D.; Mingo, N.; Ruoff, R. S.; Shi, L. (2010). Two-dimensional phonon transport in supported graphene. Science, 328(5975), 213-216.

Xu, Z.; Gao, C. (2015). Graphene fiber: a new trend in carbon fibers. Materials Today, 18(9), 480-492.


TIME DOMAIN CALCULATIONS OF SCALAR RADIATION FROM AN ORBITING POINT CHARGE IN SCHWARZSCHILD SPACETIME

Karna Morey

Abstract

Gravitational wave astronomy is a new window on violent mergers of black holes and neutron stars, and promises to eventually provide observations of supernovae, extreme-mass-ratio inspirals (EMRIs) into supermassive black holes, and fluctuations in the Big Bang. We focus on time domain calculations, eventually relevant to understanding EMRIs, by studying the scalar self-force model problem. In these calculations, scalar radiation is emitted by a point scalar-charged particle in orbit about a more massive Schwarzschild black hole. The time domain calculations use a discontinuous internal boundary condition representation for the point charge. We discuss the implementation of hyperboloidal slicing and compactification to improve the treatment of the distant and horizon boundaries. Results are compared to earlier frequency domain calculations of Warburton and Barack, and to another more recently developed frequency domain code, with agreement well within the predicted mesh error of our method.

1. Background

1.1 – Introduction and Motivation

The direct observation of gravitational waves in 2015 by the LIGO collaboration ranks as one of the great experimental feats of the last 25 years (Abbott et al., 2016). Gravitational waves give us the newfound ability to ascertain information about astrophysical phenomena in conjunction with traditional methods. This whole new field of gravitational wave astronomy promises to let us learn about things that are impossible to detect using traditional methods, including binary black hole systems and other binary star systems, supernovae and burst phenomena, and background gravitational radiation from the Big Bang ("Gravitational Waves", 2017).
The importance of gleaning new information from these systems cannot be overstated, as very little is known about binary black hole systems, which are impossible to detect using telescopes because they do not emit electromagnetic waves. The field of gravitational wave astronomy may result in many new discoveries of black holes and supernovae. Gravitational waves carry energy and angular momentum away from the binary systems that produce them, and therefore cause the eventual inspiral and coalescence that LIGO directly observed. Although the LIGO detectors can only detect gravitational waves from binary systems with approximately equal masses, the future joint NASA-ESA LISA mission will be able to detect gravitational waves from binary systems with mass ratios of 10−5 or more extreme (NASA). The work presented in this paper further develops the theoretical framework for these extreme-mass-ratio inspirals (EMRIs), a task crucial to understanding such binary systems. In a fashion similar to an EMRI, a scalar charge orbiting a massive black hole will emit perturbations

in the scalar field. This is analogous to a pressure source creating sound waves, which are just perturbations in the air pressure. Because of the similar nature of the scalar wave situation and the gravitational wave case, investigation of the field arising from a scalar charge is a model problem for the gravitational EMRI. A gravitational EMRI, where waves are generated by perturbations in the spacetime itself, is much harder to deal with than the scalar case, because the gravitational field is represented by a tensor, and tensor analysis is required to analyze the inspiral phenomenon and the waves emanating from the system. We investigated the scalar field arising from extreme mass ratio inspirals for circular orbits of a scalar charge around a spherically symmetric, non-rotating Schwarzschild black hole using a time-domain approach. We utilized the method of extended homogeneous solutions, investigated by Barack, Ori, and Sago (2008) in the scalar case and by Hopper and Evans (2010) for the gravitational case. Both papers conducted their analysis in the frequency domain using a Fourier series. Although frequency domain calculations are common in perturbative systems (Hopper & Evans, 2010), time-domain approaches will be key in developing self-consistent self-force calculations that more accurately represent the system, especially for systems with high orbital eccentricity. By solving the time-dependent sourced Regge-Wheeler equation using the two-step Lax-Wendroff method, the scalar field can be determined for a system of given initial parameters. Through the methods described below, we can determine the behavior of the wave at all points in space and time. Although we are investigating scalar fields, the methods can later be generalized to the gravitational case.

In an EMRI, there is a force on the particle due to its own field, known as the self-force or radiation reaction. This causes a particle in an EMRI to inspiral towards the central black hole. The self-force on a particle in a given orbit is of particular interest, as the inspiral can only be calculated once the self-force is known. Detweiler and Whiting (2003) showed that the scalar field created by a point charge can be decomposed into a regularized part and a singular part, and that only the regularized component of the particle's field contributes to the scalar self-force. The regularized component is finite, unlike the singular part, which is infinite at the location of the particle. Although not the focus of this paper, the self-force will be an important future step towards modeling astrophysical EMRIs, especially for highly eccentric orbits, and it is an important verification technique to test the consistency of different methods (e.g., frequency domain vs. time domain). Investigating the predicted gravitational wave signals arising from EMRIs allows data collected by LISA to be interpreted by fitting theoretical calculations to experimental numbers (Hopper & Evans, 2010). This theoretical work will be very important as signals from EMRIs start to be detected by LISA after its launch, and a theoretical model for gravitational waves produced by EMRIs will be key to the interpretation of astrophysical phenomena. Although our theoretical work is specific to the non-rotating Schwarzschild spacetime, it is one step towards a generalized model that can be used to understand any observable binary system in the universe.

1.2 – Circular Orbits in Schwarzschild Spacetime

We focus our study on an EMRI with a large, central black hole and an orbiting, scalar point charge (Figure 1).


Figure 1. A diagram of a circular orbit of a scalar charge around a Schwarzschild black hole. Note that although r is drawn as the distance to the central black hole for the sake of simplicity, it is really the Schwarzschild areal radius, also known as the reduced circumference.

The Schwarzschild metric tensor is given by

ds2 = −f(r) dt2 + f(r)−1 dr2 + r2(dθ2 + sin2θ dφ2),   (1)

where

f(r) = 1 − 2M/r.   (2)

Equation (2) is given in geometric units (c = 1, G = 1), according to the conventions of Misner, Thorne, and Wheeler (1973). We adopt standard Schwarzschild coordinates (t, r, θ, φ), similar to spherical coordinates except that the areal radius is adopted as the radial coordinate and a time coordinate is added (Hartle, 2014). The radial function f(r) defining the metric will appear again in our analysis of circular orbits. The angular frequency Ω of the circular orbit is given by Kepler's Third Law, which still holds true even in general relativity (Hartle, 2014):

Ω2 = M/r03.   (3)

Contrary to the Newtonian theory, there is a lower limit to the radius of stable circular orbits in general relativity. In Schwarzschild spacetime, the innermost stable circular orbit (ISCO) occurs at r = 6M (Hartle, 2014). This will be the innermost circular orbit that we investigate, as an orbit with a smaller radius would be unstable.

1.3 – Equation for a Scalar Field on Schwarzschild Spacetime

The equation for a scalar field ψ is given by the sourced Klein-Gordon equation

□ψ = −4πσ(t, r, θ, φ).   (4)


In equation (4), the expression σ(t, r, θ, φ) is the source function and represents the charge density of a particle in circular orbit. The D'Alembertian operator □ is defined as (Andersson & Jensen, 2000)

□ = (1/√(−g)) ∂/∂xμ (√(−g) gμν ∂/∂xν),   (5)

where g is the determinant of the metric, gμν is the inverse of the metric tensor, and the expressions ∂/∂xμ and ∂/∂xν are derivatives with respect to the Schwarzschild coordinates xα = (t, r, θ, φ). This equation can be separated and decomposed into spherical harmonics Ylm(θ, φ), as shown

by Hopper (2011). The scalar field ψ is decomposed according to

ψ(t, r, θ, φ) = (1/r) Σl,m Ψlm(t, r) Ylm(θ, φ).   (6)

The decomposition of the source term is given by (7). We also adopt the tortoise coordinate x (in many texts called r*), which is given by the ordinary differential equation

dx/dr = 1/f(r).   (8)

Adopting this coordinate transformation along with the spherical harmonic decomposition yields

[−∂2/∂t2 + ∂2/∂x2 − Vl(r)] Ψlm(t, r) = Glm(t) δ(r − rp(t)).   (9)

This equation is very similar to the time-dependent Schrodinger equation, except that the Vl(r) term does not represent a potential energy function, but rather a potential-like function that accounts for the curvature of the Schwarzschild background on which the wave travels. The radial function rp(t) is the radial position of the particle as a function of time (in the case of circular orbits, rp(t) = r0, where r0 is the radius of the orbit). The effective potential function Vl(r) is given by

Vl(r) = f(r) [ l(l + 1)/r2 + 2M/r3 ].   (10)

The source function Glm(t) for circular orbits is given by (Hopper, 2011) equation (11). In this equation, q is the charge of the particle, r0 is the radius of the orbit, and Y*lm are the complex conjugates of the spherical harmonics. The mode frequency ωm is given by ωm = mΩ, where Ω is given by equation (3).

2. Methods

We present an algorithm for solving the sourced Regge-Wheeler equation, given by

[−∂2/∂t2 + ∂2/∂x2 − Vl(r)] Ψlm = Glm(t) δ(r − rp(t)) + Flm(t) δ′(r − rp(t)).   (12)

In the following, we drop the subscripts l, m for ease of reading. Note that equation (12) includes an extra term F(t)δ′(r − rp(t)) compared to equation (9), as our algorithm can handle more generality than is necessary for the scalar case; this will be useful when we generalize to the gravitational case. After the algorithm is demonstrated, we will set F(t) = 0 and G(t) equal to the explicit expression given in equation (11), the source function for a scalar charge in circular orbit.

2.1 – Reduction of the Equation into First Order Form

The first step is to reduce equation (12) into first order form. We do this by making substitutions for the derivatives of the radial function Ψlm(t, r); we define

Π ≡ ∂Ψ/∂t   (13)

and

Φ ≡ ∂Ψ/∂x.   (14)

Equation (13) gives the first time derivative of Ψ. We can use the substitutions above to derive two additional equations for the time derivatives of Π and Φ. The two equations are

∂Π/∂t = ∂Φ/∂x − Vl(r) Ψ − G(t) δ(r − rp(t)) − F(t) δ′(r − rp(t))   (15)

and

∂Φ/∂t = ∂Π/∂x.   (16)

2.2 – Algorithm for Solving the Unsourced Homogeneous Wave Equation

The unsourced Regge-Wheeler equation (i.e., with source function σ(t, r, θ, φ) = 0) is given by

[−∂2/∂t2 + ∂2/∂x2 − Vl(r)] Ψ = 0.   (17)

Equation (17) is the homogeneous wave equation for a scalar field, as the definition of homogeneity is that every term of the differential equation contains some derivative or function of Ψ. The sourced wave equation, however, is inhomogeneous because the delta function terms do not contain Ψ or its derivatives. We will first demonstrate how to solve the unsourced wave equation (the homogeneous case) before describing in section 2.3 how to incorporate the source terms using internal boundary conditions.

The fundamental method that we use is a finite differencing scheme, in which we discretize space into a "mesh", a series of finite steps in both space and time. The mesh is broken up into zone faces, which are the integer time and spatial steps (i.e., i − 1, i, i + 1, ...), and zone centers, which are halfway in between the zone faces (i.e., i − 1/2, i + 1/2, ...), as shown in Figure 2. In terms of actual values, x and t are given by t = n∆t and x = i∆x, where ∆t and ∆x are the respective step sizes and i and n are the respective indexing variables. In the Lax-Wendroff method we have stored values of Ψ, Π, and Φ located at the zone faces, as well as temporary values (known as fluxes) located at the zone centers (Press & Vetterling, 1999). These fluxes are not actual values of the wave, but approximations used in the method. In this initial value problem, the values of the wave at all zone faces are known at an initial time, and we "push" these values to a later time. The mesh that we use to solve the partial differential equation has spatial steps on the horizontal axis and time steps on the vertical axis. Originally, the mesh (Figure 2) holds only the initial known values of the functions Ψ, Π, and Φ (shown as black dots). We then calculate the fluxes for Φ and Π (shown as green dots), the location of the flux for Ψ (shown as an orange dot), and the locations on the mesh used to calculate these fluxes (Figure 3). The calculation of these fluxes comprises the first step of the two-step Lax-Wendroff method, which establishes all the temporary values needed. Finally, we move all three variables one time-step forward using the fluxes (Figure 4). This comprises the second step of the Lax-Wendroff method. This needs to be done along the entire spatial domain each time, as the entire function needs to be evolved one step forward before a second step can be taken.
The finite-differencing scheme is specific to the particular system of coupled differential equations being solved, but it can be derived easily from the coupled equations above by changing the differential terms into difference terms (Press & Vetterling, 1999).

For every numerical method, outer boundary conditions are necessary. We adopt the outer boundary conditions Ψ(x = xmin, xmax) = 0, Π(x = xmin, xmax) = 0, and Φ(x = xmin, xmax) = 0 on the edges of the mesh, and the initial conditions Ψ(t = 0) = 0, Π(t = 0) = 0, and Φ(t = 0) = 0. We set these initial conditions because we are interested in the behavior when a source is introduced and starts its orbit at time t = 0. We set the outgoing boundary values to zero to ensure numerical stability of the results.

Figure 2. A visual representation of the mesh and the known initial values used to push the wave forward.

Figure 3. A visual representation of the first step of the Lax-Wendroff method.

Figure 4. A visual representation of the final step of the Lax-Wendroff method.

2.3 – Incorporating Source Term Using Jump Conditions

To develop an algorithm that allows the numerical evolution of the inhomogeneous wave equation, we must now consider how to incorporate the δ and δ′ source terms from equation (12), with their non-zero coefficient functions G(t) and F(t), into the algorithm. We present a method that uses jump conditions on each of the three functions Ψ, Π, and Φ to develop "internal" boundary conditions that can be set at the source. We then apply the Lax-Wendroff method for homogeneous systems, described in the previous section, on either side of the internal boundary, and evolve the wave forward. This section deals with the specifics of the algorithm and derives the equations necessary for its implementation.

To begin, we assume that each of the functions Ψ, Π, and Φ has a smooth antiderivative and only a jump discontinuity. Because of this, we can assume that the functions Ψ, Π, and Φ (represented generically in the following by u(t, x)) obey the property (18), where Θ is the Heaviside step function. Defining Θ+ = Θ(r − r0) and Θ− = Θ(r0 − r), we have

(19)

Differentiating yields (20). We define the jump condition [[u]]0 as

[[u]]0 ≡ u+(t, x0) − u−(t, x0),   (21)

where x0 is defined as the x-coordinate of the source. From equations (13), (15), and (16), we can match coefficients on the δ functions and read off the six jump conditions, two for each variable: equations (22)–(27).

We can use these jump conditions to create internal boundary conditions on each of the three functions. Let u(t, x) again represent any of the functions Ψ, Π, or Φ. We place the particle at a zone face; that is, for some integer a, the particle is placed at xa+1/2. We know the value of the function u up to ua and from ua+1 onwards. For each of the functions, there will be jumps in the function and in its first derivative at xa+1/2. To make our analysis easier, we introduce two "ghost" variables, uGa and uGa+1, which are smooth continuations of the function u from the right side and the left side of the boundary, respectively (Figure 5).

Figure 5. This figure shows the location of the particle as well as the location of the ghost zone variables uGa and uGa+1. These variables, together with the jump conditions, are used as internal boundary conditions in the algorithm.

We can express [[u]]0 in terms of the above variables, yielding equation (28). We also know that there is a jump in the spatial derivative of u, which can likewise be expressed in terms of the above variables as equation (29). We can then write equations (30) and (31).

We can now solve for the ghost zone variables using elimination, giving equations (32) and (33). These equations can be applied to each of the functions Ψ, Π, and Φ. Now that we have expressions for the six necessary jump conditions in terms of the functions F(t), G(t), and r0, we can compute the time-dependent ghost zone variables for Ψ, Π, and Φ using equations (32) and (33). We split the domain into two sections, one to the left of the particle and one to the right, and treat the ghost zone variables as time-dependent internal boundary conditions for each side. Through this method, we can incorporate the source term while still using the Lax-Wendroff method for homogeneous equations. We iterate these calculations for as many steps as needed to determine the behavior of the wave. With general equations in terms of the time-dependent functions F(t) and G(t) in hand, we can now set G(t) equal to the explicit expression given in equation (11) and F(t) equal to zero. Note that G(t) is a complex source function, and each of the three variables Ψ, Π, and Φ is also a complex function of space and time. This is because we decomposed the field into spherical harmonics, which means the radially dependent field amplitudes are necessarily complex.

2.4 – Algorithmic Road Map

We explain below the specific steps taken to evolve the wave forward in time. The code used to implement this algorithm was written in C, utilizing the GNU Scientific Library and the Intel ICC compiler.

1. Specify a desired particle radius r0, black hole mass M, mesh width, and spatial step size ∆x, as well as a starting l and m.

2. Generate mesh points by numerically integrating the ODE defining r as a function of x, given by equation (8). Ensure that the particle is at a zone face, and calculate the indices of the zone faces to the left and to the right of the particle.
Calculate ∆t as the greatest integer fraction of the period of revolution of the particle that is still less than the spatial step size, to satisfy the Courant condition ∆t/∆x < 1. This condition is necessary for the numerical stability of the Lax-Wendroff method.

3. Calculate the potential function for each discrete point in x. Initialize the values of Ψ, Π, and Φ to zero everywhere. Start with t = 0.

4. Calculate the value of the function Glm(t), and use it to calculate the jump conditions as well as the ghost zone variables.

5. Perform a homogeneous Lax-Wendroff evolution one step forward on either side of the internal boundary (the particle location). Enforce the external boundary conditions (Ψ = 0, Π = 0, Φ = 0 at both edges of the mesh) and the internal boundary conditions using the ghost zone variables.

6. Iterate the previous two steps as many times as needed to determine the behavior of the wave.

A run time of 130 seconds was required to calculate approximately 4 billion zone cycles, where one zone cycle involves evolving the field at one point one step forward in time. The code was run on a dual-processor MacBook Pro from early 2015.

3. Results and Discussion

3.1 – Waveforms for Various Spherical Harmonic Modes

The first of the important results are the waveforms (Figure 6). The specific behavior of the calculated wave depends on the spherical harmonic mode, as that is the primary determinant of the frequency of the wave and the height of the potential function Vl(r). As expected from the relation ωm = mΩ, the frequency of the partial wave for the l = 2, m = 2 mode (not graphed in Fig. 6) is twice that of the partial wave for the l = 1, m = 1 mode.

3.2 – Energy Fluxes

The energy fluxes at infinity for a particular spherical harmonic mode are given by (Poisson, 2007) equation (34), where a superscript asterisk denotes complex conjugation. The expressions for the energy fluxes were derived from the stress-energy tensor for a scalar field. The total energy flux at infinity is given by

Ė∞ = Σl,m Ė∞lm.   (35)

This energy flux must be averaged over one orbital period in order to obtain invariant results. To ensure accurate averaging, we integrate the result Ė∞lm one orbital period into the past and divide by the orbital period, as we only have solutions in time at the values t = n∆t. In the code, we ensure that the time step ∆t is an integer fraction of the orbital period. We can therefore calculate the invariant average for every mode








r0 = 6M: 6.769500299 × 10−5, 3.506335004 × 10−5, 1.369105618 × 10−5, 4.829499770 × 10−6, 1.618366944 × 10−6, 5.259428568 × 10−7

r0 = 8M: 2.512343914 × 10−5, 9.454236211 × 10−6, 2.691210447 × 10−6, 6.935460545 × 10−7, 1.700073132 × 10−7, 4.044805938 × 10−7

Table 1. Values of the energy flux for successive spherical harmonic modes for the r0 = 6M and r0 = 8M circular orbits. Modes for which l ≠ m are not shown because of their negligible contribution to the energy flux.

and sum it over spherical harmonic modes. The sums were found to be exponentially convergent, meaning that the energy fluxes for each spherical harmonic l, m mode decayed exponentially to zero and therefore added up to a finite value very quickly. A table of energy fluxes for various spherical harmonic modes is shown in Table 1. We found that the fluxes for the l = m case were orders of magnitude higher than any other case and made the largest contribution to the final energy flux.

Figure 6. The value of the scalar field Ψ, calculated for the l = 1, m = 1 spherical harmonic mode. Both real and imaginary parts are plotted against both the tortoise coordinate r* and the Schwarzschild areal radius r, given as multiples of the black hole mass M.

To interpret our results, we compare to the energy fluxes published by Barack and Warburton (2010), calculated in the frequency domain. This comparison allows us to assess the accuracy of our results and the feasibility of this method in further calculations. The results are shown in Table 2.

Ė∞total: 2.473418172 × 10−5, 7.637156533 × 10−5, 3.120633929 × 10−5
1 − Ė∞total/ĖBWtotal: 8.12 × 10−6, 3.31 × 10−5, 2.82 × 10−5

Table 2. Values of the total energy flux for various values of r0, compared to the energy fluxes published by Barack and Warburton. As described below, the relative error is small compared to the resolution of the Lax-Wendroff method.

3.3 – Discussion

By far the most important of our results is the low error between the fluxes obtained in this paper and those obtained by Barack and Warburton. For the Lax-Wendroff method, the resolution error is on the order of ∆t2 or ∆x2. In this case, since our step sizes are around 0.03, our approximate mesh resolution is around 10−4 (Press & Vetterling, 1999). The fact that the overall error compared to frequency domain codes is much lower than the mesh resolution indicates a high degree of success for this time-domain code. This success indicates the feasibility of using time domain calculations in further work. In the past, time domain methods have been known to have errors up to 1%, very different from the present method, which shows agreement to around 0.01% (Hopper & Evans, 2010). In the frequency domain, the self-force has to be computed over an entire orbit, and there is no way to efficiently compute waveforms and fluxes for high-eccentricity orbits. To model situations of more astrophysical relevance, a high-accuracy time-domain code is essential. Due to the success of the algorithm in the case of circular orbits, the next task will be to calculate the regularized scalar self-force for circular orbits. It is necessary to regularize the self-force because only the finite regularized component contributes to it. Once the scalar self-force is verified for circular orbits, we shall move on to elliptical orbits and calculate the fluxes and the self-force for eccentric orbits on Schwarzschild. It will be of particular interest whether the accuracy of this time domain algorithm holds up in the eccentric case. If so, it is possible that a self-consistent self-force method could be implemented, where the self-force is applied after each step. This self-consistent method could be compared against geodesic methods of applying the self-force to a series of full geodesic orbits (Osburn, Warburton & Evans, 2016). Of particular interest would be the results for highly eccentric orbits and the comparison of geodesic and self-consistent methods. Overall, the progress so far supports the course described above, as an accurate time domain method is necessary to generalize to eccentric and self-consistent orbits. It is very possible that we can achieve sufficient accuracy in the eccentric case to perform self-consistent self-force calculations.
Although there is still much progress to make on the front of calculating the self-force and generalizing the algorithm for eccentric orbits, the high accuracy of the time-domain code in the circular orbit case is an important result for modeling inspiral phenomena.

4. Acknowledgements

The author acknowledges Dr. Jonathan Bennett for assistance in research design, coding, and help in the editing of the paper. The author also acknowledges Dr. Charles Evans, Dr. Kyle Slinker, and Mr. Zach Nasipak at UNC Chapel Hill for development of the research goals and instrumental assistance in conducting mathematical analysis. The author acknowledges Mr. Joshua Abrams for instrumental assistance in developing the code for the project.

5. References

Abbott, B., Abbott, R., Abbott, T., Abernathy, M., Acernese, F., & Ackley, K. et al. (2016). Observation of Gravitational Waves from a Binary Black Hole Merger. Retrieved 22 May 2017, from

Andersson, N., & Jensen, B. (2000). Scattering by Black Holes. Encyclopaedia On Scattering. Retrieved from https://

Barack, L., Ori, A., & Sago, N. (2008). Frequency-domain calculation of the self-force: The high-frequency problem and its resolution. Physical Review D, 78(8).

Detweiler, S., & Whiting, B. (2003). Self-force via a Green's function decomposition. Physical Review D, 67(2).

Gravitational Waves. (2017). Retrieved 22 May 2017, from education/highschool/teachers/grav-waves.cfm

Hopper, S., & Evans, C. (2010). Gravitational perturbations and metric reconstruction: Method of extended homogeneous solutions applied to eccentric orbits on a Schwarzschild black hole. Physical Review D, 82(8).

Hopper, S. (2011). The gravitational field produced by extreme-mass-ratio orbits on Schwarzschild spacetime. University of North Carolina at Chapel Hill Digital Repository.

LISA - Laser Interferometer Space Antenna - NASA Home Page. (2017). Retrieved 19 June 2017, from

Osburn, T., Warburton, N., & Evans, C. (2016). Highly eccentric inspirals into a black hole. Physical Review D, 93(6).

Poisson, E. (2007). A relativist's toolkit. Cambridge University Press.

Press, W., & Vetterling, W. (1999). Numerical recipes. Cambridge University Press.

Warburton, N., & Barack, L. (2010). Self-force on a scalar charge in Kerr spacetime: Circular equatorial orbits. Physical Review D, 81(8).


FEATURE ARTICLE

Left: Dr. Joseph DeSimone, Chancellor’s Eminent Professor of Chemistry at UNC, William R. Kenan Jr. Distinguished Professor of Chemical Engineering at NC State and of Chemistry at UNC, Co-founder and CEO of Carbon, and Co-founder of Liquidia Technologies, Bioabsorbable Vascular Solutions, and Micell. Dr. DeSimone is also a former member of NCSSM's Foundation Board of Directors and an inaugural inductee (2017) of the NC STEM Hall of Fame. Right: Sreekar Mantena, BSS Chief Editor; Corinne Miller, BSS Essay Contest Winner; Isabella Li, BSS Chief Editor; and Dr. Jonathan Bennett, BSS Faculty Advisor.

What are the things you love most about the scientific community? What are the things you like least?

I love the esprit de corps around trying new things, trying to improve the health and well-being of society. I love the motivation that comes from realizing you can make something better that impacts lives. Innovation is really important, and we’ve got lots of problems. I love the utilitarian aspects of research. Probably one of my favorite lines is from Melinda Gates, Bill Gates’s wife: “Science enables our caring to matter.” I found that really inspirational; it becomes a toolbox to really help drive and motivate you, especially about helping people and improving their lives. What don’t I like? You know, I think, on the research side, it has become very competitive to get research funds. And I think it’s leading to outcomes that we as a society and we as a nation need to be very careful about. In the biological sciences, where researchers mostly get funding from the National Institutes of Health, the average age at which a person gets their very first grant is 43. You know, I’m over the hill and I’m 52. When Carolina hired me as an assistant professor, I was 25. I look at all the people here in Silicon Valley, where I’m at now — imagine if nobody got any funding until they were 43.
It’s a problem, and I don’t like the fact that a lot of young people are being cut out of the system; they’re not moving forward. When you look at the diversity and balance of funds going to people of different backgrounds, it’s not balanced and reflective of society. The number of people from underrepresented

groups in sciences who get funding is low, and it’s perpetuating the situation. We need to realize that diversity is a fundamental tenet of innovation, and that funding is not being spread around in an effective way. Just look at university faculty, who are not representative of the student population. These things all contribute to what I think is a problem, and it’s something I'd like for us to be aware of.

Can you tell us about CLIP (Continuous Liquid Interface Production), the basis of your newest company, Carbon?

It’s an amazing breakthrough that I think of now as a software-controlled chemical reaction to grow parts. We use light and oxygen in combination to grow parts. I think of light as our chisel. Patterned light is solidifying a polymer in very selected areas and in a very selected volume fraction. Oxygen inhibits the chemical reaction that light triggers — we pioneered that. It allows us to print 100 to 1,000 times faster than traditional 3D printing with really exquisite complexity and surface structure. And on top of that, we’ve invented some great materials. These materials have the properties to be durable for a wide range of applications, from running shoes to medical devices to car parts, and it all comes together with a piece of hardware that is built from the ground up to be completely controllable with software. When I grew up, most of a car was controlled by a human.

You pushed a brake, and that pulled a cable, and it applied pressure to the back of the car. Or you had a steering wheel that connected to the wheels that you could turn. Now, if you drive a Tesla, it’s more like fly-by-wire. There are electronics sitting in the back of the wheels that go to the brakes; everything is software controlled and done with code. Our printer is designed in the same way. Everything about the printer is software controlled. So that, combined with light and oxygen, makes it the very first truly digital fabrication technique for three-dimensional printing.

What are the benefits of 3D printed objects and materials over traditional parts in manufacturing? How do you anticipate manufacturing will change over the coming years due to advances such as CLIP in 3D printing?

You guys are too young to know this, but when you think about making copies of a paper (like exams and things that you wrote), it wasn’t too long ago that there was something called a mimeograph machine or a copy machine. There was a master template, and people worked long and hard to get a master template, and then they would make lots of copies of those pages. Today, you have a digital laser printer, and writing and printing have all gone digital. It’s changed the way we work and collaborate. You guys, when you write something, you use Google Docs or Microsoft Word, and you pass around versions, and you edit it, and you collaborate. When you want to make copies, you make just the number of copies you need. People don’t print lots of copies of books and magazines anymore and store them in warehouses. They are made on demand, and you make the amount you need, where you need it, when you need it. It’s changed the supply chain and disrupted everything. When it comes to 3D polymer parts, whether it’s a running shoe or car parts or medical devices, I would argue that we’re still in the mimeograph age.
No digital fabrication technique has emerged with the quality and unit economics necessary to produce real parts. 3D printing has been touted as that digital technique, but traditional 3D printing does not scale in the quality or speed necessary for true production applications. CLIP, we believe, is the first example of a truly scalable, economically viable digital fabrication technique that will usher in a new era of what people can make, how they design and engineer products, and how those products are ultimately delivered to customers. I think it’s ushering in a really profound new age of digital fabrication that’s going to have a profound impact on how people design products and what products people design. I think it’s going to disrupt supply chains, and I think it’s going to speed up the economy by allowing companies to go faster and make things

they could never make in the past.

You have founded multiple startup companies beyond Carbon. What has been your experience of working in industry compared to working in research at a university?

All of my startups prior to this current one, I did as a faculty member. UNC-Chapel Hill has very good, very clear policies and procedures that allow a faculty member to start and launch companies, often with students. They have really good conflict-of-interest management policies and procedures. It feels very intrusive, it’s thorough, but it benefits the students, the faculty member, and the institution. After doing that for 25 years, starting several companies as a faculty member, and graduating 80 Ph.D.s in my career, half of whom are women and others underrepresented in science, when I had this new invention, I decided to step away from my academic posts and move into being CEO of Carbon. It’s very different and very similar in so many ways. In a nutshell, we have a really good esprit de corps here. I have a team that wants to make a difference in society. We work at the intersection of hardware engineering, software engineering, and molecular science. I compete, for example, with software people at Facebook and Google. I think we compete well for those employees, who get to choose where they want to work, because we have a really good purpose. Engineers love to solve hard problems. We’ve got to change the way people design, engineer, make, and deliver customer products. A lot of software people now are being asked to write better algorithms to push ads on you. That’s not that engaging if your craft in life is software. Who likes getting lots of ads? No one. That’s just not a rewarding career. I love the purpose-driven aspects of our company; that’s a lot like a university. There’s a lot more pressure on me than I ever had at any university. I’ve never worked so hard in my life.
Being a faculty member is like belonging to a treasured society, one that I really enjoyed. I worked hard as a faculty member, no question about it, but I didn’t quite have the pressure on me that I have with 250 people working for me. When I come in to work in the morning and leave at night, I think about car payments and house payments and college tuitions. It’s a lot of pressure that I didn’t quite have at the university.

Many students at NCSSM are interested in research and entrepreneurship. Did you always know you were interested in these fields, and what advice do you have for students looking to pursue them in the future?

No, I was not. The word entrepreneurship was not in my

vocabulary for the longest time. As a researcher in both undergraduate and graduate school, I was very utilitarian, in the Thomas Jefferson notion of doing research that can impact people’s lives. I’ve always loved that. I never thought about that in the context of being a business person and trying to drive it myself. The idea of being an entrepreneur was probably born out of frustration. If you are an innovator and you are completely reliant on third parties to bring your innovations to life, that can often go sideways for a myriad of reasons that are outside your control. One of the most important is what I would call entrenched interests. If you had an idea, a better way of doing something, and you licensed it to an existing company that had competing technologies, your new idea could get canned simply because they did not want to make the investment or because they had already paid for a plant and wanted to be more cash efficient. It has nothing to do with your technology. What I love about being an entrepreneur is that you get to make your ideas happen with singular focus.

Early in the academic year, there was a school-wide discussion on the merits of lesser-known colleges over big-name universities. Having attended a small liberal arts college near where you lived, what was it like for you to attend such a school, and how do you think it impacted your success down the road?

In my case, Ursinus College was basically in my neighborhood growing up, outside of Philadelphia. My father was born in Italy. He was a tailor and didn’t go to college himself, so I was first-generation to go to college. We didn’t have the means for me to attend and live at Ursinus College; my family couldn’t afford to send me there. So I worked two jobs while going to school and living at home. I got a great scholarship for being a “townie,” which I really benefited from. I’m a big believer in the power of a liberal arts education to change lives. And I see it firsthand; I’m living it firsthand.
I think there are some big universities that have a small-college, liberal arts feel — UNC Chapel Hill is a great example of that. I think it’s a matter of how people fit and what their local circumstances are. I don’t draw the distinction too much between big schools and little schools, because I think there are pros and cons to both. Obviously, for graduate school, you have to go to a pretty big place to get the full range of research in science and engineering. One of the challenges with public universities is that, because of financial constraints, we are ushering students through these schools, more focused on throughput than on each student. The great part about not going to a public school was experimenting with classes that you may or may not be good at, or may or may not enjoy. Now it’s getting harder to drop or change classes at some of the big

public schools, because they are constrained by getting you through quickly.

What led you to pursue research over other careers with your degree?

So I fell in love with research as an undergraduate. I loved the hands-on aspect of it, I loved the methodical aspects of it, and I was good at it. I loved it because it made the classroom experience that much richer. There’s nothing like doing something in a lab that’s related to what you’re doing in class. It brought it to life for me, and I really enjoyed that.

You mentioned that there is a struggle for young scientists to get funding. What can young scientists do to make themselves more likely to get funding?

That’s a terrific question. There is a business phrase that strategy is all about being different, and I think the same goes in research. Different ideas, compelling different ideas, are what I would try to focus on, as opposed to a me-too or a me-three. Work in areas that no one else is working in, so that people say, “Well, how do you do that?” Bridging fields is one of the most fertile grounds for doing something new. When I was growing up academically, there was a metaphor contrasting an I-shaped person and a T-shaped person. An I-shaped person was very monolithic and deep in a particular subject. A T-shaped person was also deep in a particular subject but had the agility to collaborate with others. I think the more appropriate metaphor today has gone beyond T-shaped and includes π-shaped or comb-shaped: people who are deep in multiple subjects. I think that’s a higher calling than just being a T-shaped person. T-shaped people and I-shaped people often collaborate via a common language, and I think a common language can dumb down certain topics. A π-shaped person is more multilingual, and being multilingual is a higher calling than a common language.
I’m attracted to people who are polymaths, deep in multiple subjects, who understand these subjects at their core and are able to see the connections and do something that is very different and differentiated.

A lot of students at NCSSM do research and have experienced failure. How do you cope and move forward?

In many ways, it certainly is part of the pathfinding process to find out what’s not possible and to still move forward. Failure is really all about finding the edge of possibilities. You don’t know where the edge is until you get past the edge. Having a good knack for understanding that whole process, and thinking about it as a way of probing to find the edge, is how I think about failures. It’s just intrinsic to what we do.

What are you interested in outside of work?

I love spending time with family. In North Carolina, we had a wonderful home in Chapel Hill and a wonderful place down at Holden Beach, and we enjoyed that many weekends. Things are a lot more expensive in California, although we have a wonderful place. I do a daily exercise routine; I have to, just to keep my sanity. Going mountain bike riding in California is addicting. I also have a 13-month-old granddaughter who takes up a lot of my spare time, and experiencing that with the little one has been a lot of fun. And I’ve been able to go to two Final Fours in a row at UNC-Chapel Hill, and I just got back from a Super Bowl game this weekend.


BROAD STREET SCIENTIFIC The North Carolina School of Science and Mathematics Journal of Student STEM Research VOLUME 7 | 2017-2018

Broad Street Scientific 2018  