Spring 2018: Mapping Shifting Landscapes


Carolina Scientific
Spring 2018 | Volume 10 | Issue 2

Mapping Shifting Landscapes
UPLIFT IN THE APPALACHIANS
Full story on page 43


Carolina Scientific

Mission Statement:

Founded in Spring 2008, Carolina Scientific serves to educate undergraduates by focusing on the exciting innovations in science and current research that are taking place at UNC-Chapel Hill. Carolina Scientific strives to provide a way for students to discover and express their knowledge of new scientific advances, to encourage students to explore and report on the latest scientific research at UNC-Chapel Hill, and to educate and inform readers while promoting interest in science and research.

Letter from the Editors: At Carolina Scientific, we believe that scientific discoveries should be accessible to all. In this era of rapidly advancing technology, our ability to speak to new audiences has reached new heights. We’re excited to bring you Carolina Scientific’s Spring 2018 issue, and hope that over our ten years, we’ve succeeded in our mission to share our passion for science and research with a broad audience. In this edition, explore diverse topics, from the exciting advancements in virtual reality (page 14) to the effects of binge drinking on the adolescent mind (page 18). Enjoy! - Aakash Mehta and Ami Shiddapur

on the cover Geologists at UNC are piecing together the story of the unique topography of the Appalachian Mountains. Their work seeks to uncover the mystery of the range’s unexpectedly high elevations. Full story on page 43. Illustration by Meredith Emery.

carolina_scientific@unc.edu carolinascientific.org facebook.com/CarolinaScientific Twitter: @uncsci


Executive Board
Editors-in-Chief: Aakash Mehta, Ami Shiddapur
Managing Editor: Lynde Wangler
Design Editors: Esther Kwon, Julianne Yuziuk, Akshay Sankar
Associate Editors: Hannah Jaggers, Janet Yan, Ricky Chen, Sara Edwards
Copy Editor: Adesh Ranganna
Treasurer: Elizabeth Smith
Publicity & Fundraising Chair: Ricky Chen
Online Content Manager: Marwan Harwani
Faculty Advisor: Gidi Shemer, Ph.D.

Contributors
Staff Writers: Aneesh Agarwal, Maximillian Bazil, Dylan Brown, Elizabeth Chen, Carly Dinga, April Ferido, Patrick Gorman, Jason Guo, Harrison Jacobs, Rhea Jaisinghani, Rayyanoor Jawad, Aubrey Knier, Zhi-Wei Lin, Janie Oberhauser, Ami Patel, Grace Pearsall, Kevin Ruoff, Emily Schein, Anthony Schinelli, Andrew Se, Shreya Shah, Sidharth Sirdeshmukh, Sophie Troyer
Illustrators: Caroline Allen, Zoha Durrani, Meredith Emery, Candice Greene, Maddy Howell, Olivia Novak, Lizzie Satkowiak, Laura Wiser
Copy Staff: Sierra Archibald, Anna Arslan, Sara Bernate, Peter Cheng, Haley Clapper, Summer Epps, Briana Fletcher, Candice Greene, Jeremiah Hsu, Paige Jacky, Kieran Patel, Alex Payne, Hannah Rendulich, Natalie Siegel, Melanie Stewart, Zarin Tabassum, Wenzhong Wang, Wilfred Wong, Sarah Zhang
Designers: Coco Chang, Paige Jacky, Sage Snider, Marques Wilson


contents

Technology and Innovation
4 Precision Medicine: The AI Revolution · Aneesh Agarwal
6 Going with the Flow · Zhi-Wei Lin
8 We are Star Stuff · Kevin Ruoff
10 Printing Better Smiles · Shreya Shah

Molecular Biology
26 Death of a Middleman · Dylan Brown
28 A Tiny Organism With Huge Potential · Jason Guo
30 The Future of Cancer Research · Anthony Schinelli
32 What This Pest Can Do May Tick You Off · Sophie Troyer

Neuroscience
12 Learning to Balance with Virtual Reality · Patrick Gorman
14 A New Take on Linguistic Relativity
16 Lights...Camera...Neurons!
18 The Physical Imprints of Addiction
20 Discovering Life Within Death

Health and Medicine
34 A Joint Effort
36 Radiation Therapy on a Microscopic Scale
38 Drugs, Genetics, and Policy
40 Novel Methods to Reduce HIV Transmission

(Neuroscience and Health and Medicine writers: Maximillian Bazil, Elizabeth Chen, Harrison Jacobs, Rhea Jaisinghani, Rayyanoor Jawad, Ami Patel, Emily Schein)

Education
22 Educating the Educators
24 Creative Classrooms: Using Robots to Enhance Creative Engagement
(Writers: April Ferido, Andrew Se, Sidharth Sirdeshmukh)

Ecology
43 Appalachian Anomaly · Grace Pearsall
46 Oysters: Reducing Global Warming · Carly Dinga
48 The History of Life · Aubrey Knier
50 What's in Your Water? · Janie Oberhauser


technology and innovation

Figure 1. IBM Watson Computer. Image courtesy of Wikimedia Commons.

Precision Medicine: The AI Revolution

By Aneesh Agarwal

The Canon of Medicine, an encyclopedia composed by the Persian physician Avicenna at the turn of the first millennium CE, is perhaps one of the first examples of personalized medicine being used to treat disease. Avicenna considered elements such as skeletal structure and physical body characteristics when selecting therapies for his patients. While the specifics of this ancient methodology are not supported by modern medical research, it presents an interesting parallel to today’s evolving field of precision medicine: the approach to tailored disease treatment taking into account an individual’s specific genetic and molecular profile. Dr. William Kim, physician-scientist at the UNC Lineberger Comprehensive Cancer Center, says, “There is clear heterogeneity; not every bladder or colorectal cancer has the same genomic alterations. By sequencing patients’ tumors, we are hopeful that in

some cases we can actually find what’s driving the cancer and, more importantly, try to match patient mutations with a specific therapy.”1

Precision medicine has seen tremendous growth in the past decade, much of it stemming from the identification of a kinase mutation in chronic myelogenous leukemia (CML), a rare blood cancer. This mutation in CML was matched to a kinase inhibitor drug, imatinib, which effectively treated the cancer by blocking the overexpressed kinase pathway. “This caused a paradigm shift in the way cancer was thought about,” commented Dr. Kim.1 Mutations could be directly targeted; it was a matter of identifying and tailoring care to combat those specific biomarkers.

While the concept of targeting mutations may appear elementary, the process of finding new molecular sites is a challenge—one that Dr. Kim and his lab are tackling by exploring high-frequency mutations in tumor suppressor genes of bladder and kidney cancer. Mutations in these genes cause a loss of the ability to protect cells in the bladder and kidney from becoming cancerous. Identifying actionable mutations is essential, since blocking a pathway activated by a mutated tumor suppressor gene could slow or stop cancer growth. As more targetable mutations are identified by Dr. Kim’s lab and those of other scientists worldwide, physicians gain more knowledge to treat their patients’ specific cancers with greater precision. Each cancer has a unique molecular landscape that can harbor hundreds of mutations, making each discovery a valuable tool in the battle against cancer.

Figure 1. Number of mutations in a spectrum of bladder, colorectal, and urachal cancers. Image courtesy of Dr. William Kim.

A challenge arose, however, in translating new information derived from lab research into knowledge that is useful for clinical care. Dr. Kim recognized the need for a more advanced, centralized knowledge base. Along with others at the Lineberger Comprehensive Cancer Center, he helped form the Molecular Tumor Board (MTB), composed of a group of oncologists, pathologists, and other scientists. These experts analyzed patient cases to determine the most effective treatments based on the mutations present. The Clinical Committee for Genomic Research (CCGR) was also established, consisting of oncologists and ethicists, which set the rules for physicians addressing gene mutations present in their patients.

It seemed like the perfect solution to the problem, but an issue quickly became apparent. Dr. Kim explains, “When we actually instituted the MTB we realized that there was a lot more uncertainty than was apparent at the beginning.”1 The MTB was able to work on some 20 cases weekly. With thousands of new, relevant research papers being published weekly, it is virtually impossible to know about every mutation and every study. “That data source is just so big, and trials are opening and closing on a weekly basis,” said Dr. Kim.1 He further explained, “We were searching for ways to do this better. That’s what led us to thinking about artificial intelligence and IBM Watson, where they have the computing power to ingest both scientific articles and website data, and really be able to output a very comprehensive report.”1 The collaboration with IBM allows the vast amount of literature to be interpreted in real time, using Watson’s advanced cognitive computing.

Figure 2. Copy number of DNA at each genomic coordinate across the genome. Image courtesy of Dr. William Kim.

In a study conducted by Dr. Kim and the CCGR, genomic sequencing data from just over 1,000 patient cases was uploaded to the Watson system.2 Watson determined relevant mutations for each case and, if applicable, generated a list of potential therapeutic options for those mutations based on the latest findings across the scientific literature. Next, it generated a report with the list of actionable mutations and targeted drugs for the CCGR to evaluate. In 32% of cases, new, actionable mutations were identified.2 Not only did Watson identify the same actionable mutations that the CCGR did, but it also identified new ways to guide treatment, all in less than three minutes per case. “That was definitive proof that there was value in running Watson on those cases,” says Dr. Kim.2 Using the artificial intelligence capabilities of IBM Watson was a groundbreaking advancement in precision medicine. A physician can now be provided with supplemental information from Watson to create the optimal treatment plan for specific cases in a more rapid, updated, and efficient manner.

Dr. William Kim
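The step Watson automates here, checking each sequenced mutation against a curated knowledge base of targeted therapies, can be sketched in miniature. This is a toy illustration only: the gene and drug names below are hypothetical placeholders (not clinical data, and not Watson's actual knowledge base), and the real system reasons over the literature rather than a fixed lookup table.

```python
# Toy sketch of actionable-mutation matching (illustrative only).
# All gene/drug names are hypothetical placeholders, not clinical data.

# Curated knowledge base: (gene, alteration) -> candidate targeted drugs.
KNOWLEDGE_BASE = {
    ("KINASE_GENE", "activating_mutation"): ["kinase_inhibitor"],  # cf. the CML/imatinib story
    ("TSG_A", "loss_of_function"): ["pathway_inhibitor_1"],
    ("TSG_B", "loss_of_function"): ["pathway_inhibitor_2"],
}

def match_therapies(patient_mutations):
    """Return candidate therapies for each actionable mutation in a case."""
    report = {}
    for mutation in patient_mutations:
        drugs = KNOWLEDGE_BASE.get(mutation)
        if drugs:  # a mutation is "actionable" only if a targeted option exists
            report[mutation] = list(drugs)
    return report

# One simulated case: one actionable hit, one variant of unknown significance.
case = [("KINASE_GENE", "activating_mutation"), ("GENE_X", "unknown_variant")]
print(match_therapies(case))
```

The hard part, as Dr. Kim notes, is that the knowledge base itself changes weekly as trials open and close; Watson's contribution is keeping that lookup current by ingesting the literature at machine speed.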

The implications of combining cancer research done in the lab with literature and artificial intelligence are enormous. Precision medicine as a field is accelerating quickly due to advancements from the work of Dr. Kim and other researchers. The future is boundless, and there is still much to learn before the ideal vision of precision medicine can be attained.

Regarding what lies ahead, Dr. Kim explains that “the biggest thing that I see value in is really taking advantage of the Big Data Revolution. And by big data, I mean everything. Whether a patient smokes or not, is male or female—potentially every sort of data point. Combining that with tumor genomics and really beginning to develop multi-dimensional models will give good predictive value to whether drugs will work or not.”1 In addition to data-driven medicine, he looks towards precision immunotherapy as another step forward for precision medicine. This will harness the immune system’s power to fight cancer.

So why is this work important to Dr. Kim? “I love what I do; it’s been a huge motivation seeing people in the clinic and knowing that we really need to make strides against cancer. It’s been incredibly rewarding and an honor to be able to do both patient care and research.”1 He finds the challenge of understanding and targeting mutations to be especially invigorating. Targeted therapy solved the issue of chemotherapy’s cytotoxicity—the harm it caused not only to cancerous cells, but also to healthy ones. Advancements through research in precision medicine are critical to improving patient outcomes and providing better cancer care.3 According to Dr. Kim, “Many people compare cancer to war; it’s a battle. Chemotherapy is like carpet bombing, essentially just dropping a ton of bombs and hopefully killing soldiers. Unfortunately, you end up killing soldiers as well as civilians.
Precision oncology is more like a drone strike, where you’re specifically targeting the enemies that are soldiers and not civilians.”1 The movement towards more comprehensive precision medicine through targeted molecular research and artificial intelligence computing is happening now, and it holds the key to medicine’s future.

References

1. Interview with William Kim, M.D. 02/12/18.
2. Patel, N.M.; Michelini, V.V.; Snell, J.M.; Balu, S.; Hoyle, A.P.; Parker, J.S.; et al. The Oncologist. 2018, 23(2), 179-185.
3. Hayes, D.N.; Kim, W.Y. J. Clin. Invest. 2015, 125(2), 462-468.



technology and innovation

Illustration by Meredith Emery

GOING WITH THE FLOW BY ZHI-WEI LIN

Chemists perform traditional methods for polymer synthesis in multi-step batches, similar to the process of baking bread. From the initial preparation of making a dough to the final stage of baking, each step of the process is performed separately in a batch. While batch production has been the standard practice in academia and industry for the past few decades, it is low throughput and often produces wasteful intermediates before the final product. Unlike the mass production of bread, many batch productions are not suitable for scale-up, the process of transforming a laboratory experiment into an industrial product at large scale. Many factors contribute to the failure of scale-up, including cost, duration, feasibility and safety. This failure often limits the utility of a newly synthesized polymer either for further studies or real-life applications. Many chemists today are criticizing the traditional production method in favor of a new direction known as continuous flow chemistry, which bridges the different stages of batch production into a single, continuous process. Among these chemists is Professor Frank Leibfarth of UNC-Chapel Hill’s Chemistry Department, who is approaching traditional polymer production with a new perspective. This new perspective is rooted in the convergence of organic, continuous-flow, and polymer chemistries with the goal of sustainable, high-throughput technology in material science and biotechnology.

Figure 1. This image demonstrates the potential of Flow-IEG for different uses. Image courtesy of Dr. Frank Leibfarth.

What sets Dr. Leibfarth apart from most chemists is his inclusion of iterative exponential growth (IEG) into the continuous flow system. IEG was a technique first pioneered by Dr. Whiting and colleagues in the 1980s for the synthesis of plastics in batch production.1 This technique, while exponentially increasing the polymer size by doubling the reagent at each iteration, is not widely employed due to its laborious procedures and inability to sustain scale-up. Yet, to Dr. Leibfarth and his former colleagues at the Massachusetts Institute of Technology, this technique became vital to their mission of creating a sustainable automated synthesis system. In essence, Dr. Leibfarth and colleagues married the IEG technique with continuous-flow chemistry, coining the term Flow-IEG, to overcome the challenges of traditional IEG batch production.

In 2015, Dr. Leibfarth and colleagues at M.I.T. demonstrated the ability of the Flow-IEG system to produce “sequence- and architecturally-defined macromolecules” in under 10 minutes.2 These macromolecules, like proteins, come in pre-determined configurations and are often used in proof-of-concept studies to demonstrate the capability of the system to selectively produce the desired product. In addition to validating the Flow-IEG system, Dr. Leibfarth and colleagues further demonstrated its ability to produce over 60 grams of desired product per day, a significant improvement over batch production. This is important because there is no validated system to mass-produce desired products for purposes ranging from drug discovery to the development of new-age materials that are sustainable, bio-compatible and bio-degradable. In addition to quantity, the system should also allow for the time- and cost-effective production of complex materials while being accessible, user-friendly and scalable for different labs or industries. Flow-IEG is a promising system with the ability to meet all of these challenging demands.

Dr. Leibfarth and his team at UNC continue to make significant improvements to their flow system. Recently, they demonstrated the ability to determine reactivity ratios of comonomers, the precursors to copolymers.3 Copolymers are a class of polymers widely used in a range of commercial products, such as car tires. The ability to determine the reactivity ratios of comonomers is essential in predicting and fine-tuning the structures of copolymers. What Dr. Leibfarth and his team have demonstrated is their ability to streamline a challenging process in traditional batch production.

Now, instead of multiple tedious steps to bake bread, imagine an automated system, i.e. the flow system, that allows you to bake bread at a much faster rate. Not only will you be able to produce multiple loaves of bread per day, you will also have the ability to precisely control their flavor, texture, temperature, and shape. In this system, very few ingredients are wasted while many loaves can be produced.
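The appeal of IEG is easy to see arithmetically: each coupling joins two identical chains, so the degree of polymerization doubles at every iteration. A minimal sketch of that growth law (illustrative only, not the Leibfarth group's process-control code):

```python
# Iterative exponential growth (IEG), reduced to its arithmetic:
# each coupling step joins two identical chains, doubling the length.

def ieg_length(n_couplings, start_length=1):
    """Chain length (in monomer units) after n doubling couplings."""
    return start_length * 2 ** n_couplings

# Ten couplings turn a single unit into a 1024-mer; adding one unit at a
# time would instead require 1023 separate coupling steps.
for n in (1, 5, 10):
    print(n, ieg_length(n))
```

That exponential curve is exactly why automating each iteration in flow, rather than isolating intermediates batch by batch, pays off so quickly.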


Figure 2. A close-up of the current version of the Flow-IEG system. Rather than traditional glassware, Flow-IEG relies on a set of tubing, a pump and a heater. Image courtesy of Zhi-Wei Lin.

What will you do with all of the excess bread? One option is to try out different recipes. Analogously, that is exactly what Dr. Leibfarth and his team are doing. In particular, they are applying their flow system to different areas of science, including mapping hypoxia in the body with polymer nanoparticles. Hypoxia occurs when there is a deficiency of oxygen at the tissue level and is associated with fatal complications. Dr. Leibfarth and his team are interested in synthesizing tiny polymer nanoparticles that can be engineered to track oxygen deficiency in the body and light up upon activation for imaging. Using high-throughput flow chemistry, the team can adjust the properties of the polymer nanoparticles to make them more efficient, safer and better suited for bio-imaging. Mapping hypoxia, however, is just the tip of the iceberg of what polymer synthesis in continuous flow can offer. Dr. Leibfarth and his team are actively synthesizing and designing new-age materials to replace their traditional counterparts.

Dr. Leibfarth envisions a day in which polymer synthesis in continuous flow becomes user-friendly and programmable for the general public. He hopes that non-experts in chemical synthesis can have access to a system, much like a personal computer, that can automatically produce polymers upon request. There are still many challenges in continuous-flow chemistry, but this does not stop Dr. Leibfarth and his team. They are going with the flow, streamlining the obstacles along the way.

References

1. Raynter, O.I.; Simmonds, D.J.; Whiting, M.C. The synthesis of long-chain unbranched aliphatic compounds by molecular doubling. J. Chem. Soc., Chem. Commun. 1982, 1165-1166.
2. Leibfarth, F.A.; Johnson, J.A.; Jamison, T.F. Scalable synthesis of sequence-defined, unimolecular polymers by Flow-IEG. Proc. Natl. Acad. Sci. USA 2015, 112, 10617-10622.
3. Reis, M.H.; Davidson, C.L.G.; Leibfarth, F.A. Continuous-flow chemistry for the determination of comonomer reactivity ratios. Polym. Chem. 2018, 9, 1728-1734.
4. Interview with Frank A. Leibfarth, Ph.D. 03/01/18.

Dr. Frank Leibfarth and graduate student Marcus



technology and innovation

Image courtesy of Creative Commons

WE ARE STAR STUFF

BY KEVIN RUOFF

Life on Earth, by scientists’ best estimates, has been around for four billion years, but the universe itself is estimated to be over thirteen billion years old. This begs the question: what happened during the nine billion years before life came about? Those nine billion years were preparation for what would eventually come: humans. The early universe consisted of only the smallest of elements, namely hydrogen and helium. In order to produce heavier elements like carbon (the backbone of life), a star must have gone through an entire life cycle and died in a massive explosion (a supernova) to create the high-energy, high-temperature conditions required to fuse smaller elements into larger ones. This would mean that all of the heavier elements come from stars, and thus humans must also. Just as Carl Sagan famously said: “The nitrogen in our DNA, the calcium in our teeth, the iron in our blood, the carbon in our apple pies were made in the interiors of collapsing stars. We are made of star stuff.”1

Dr. Arthur Champagne

Arthur Champagne is a physics professor at UNC-Chapel Hill who studies where humans (and everything Carl Sagan mentioned) came from. He explains: “I am interested in the history of stars and the galaxy and how the solar system came about.”2 He and other researchers at the Triangle Universities Nuclear Laboratory (TUNL) study the nuclear reactions in stars and determine what kinds of reactions produce the heavier elements that are necessary for life. Fusion reactions in stars like the sun combine hydrogen atoms to make helium, but in hotter stars different reactions occur between bigger elements to make the heavier elements in the periodic table. Each element can be formed by only certain reactions in certain stars, and Dr. Champagne explores these different reactions. Exploring the history of stars, determining how the solar system came about, and searching for the mechanisms





that create carbon, copper, iron, sodium, and the rest of the elements means researching what types of nuclear reactions occur in what types of stars and how they work. Nuclear reactions do not just cause devastating effects in the form of bombs. They can also provide a lot of information about phenomena that scientists currently do not understand. Dr. Champagne explains that from the nuclear reactions his team models and experiments with, “the elements that get produced as byproducts tell us a lot about our history, where the elements came from, and about stars and the galaxy.”2

Figure 1. Laboratory for Experimental Nuclear Astrophysics (LENA). Photo courtesy of the Triangle University Labs.

Nuclear reactions are modeled and predicted using computer software and then put to the test using the particle accelerators at TUNL. TUNL has three particle accelerators at its disposal, but the one specifically targeted toward answering astrophysics questions is the Laboratory for Experimental Nuclear Astrophysics (LENA), the most intense low-energy accelerator in the world.3 While powerful enough to measure nuclear reactions, LENA, built by undergraduate students, can fit in a typical professor’s office.2 Experimenters working at LENA set up the parameters (i.e., type of particle beam, energy, temperature) for a specific nuclear reaction after using computer simulations to predict what they think will be the result of the reaction, and then measure what actually occurs. Using LENA, Dr. Champagne and his team were able to determine that the nuclear fusion reaction of hydrogen and nitrogen, which regulates energy production for all stars at some point in their lives, is about two times slower than originally thought. Something as profound as this has changed previous predictions for the ages of galaxy clusters by a billion years! Another project of Dr.
Champagne’s is trying to determine the reaction responsible for creating radioactive aluminum, which is found in meteorites as well as in the plane of the galaxy. Determining the reaction that produces this aluminum could have great effects on our models for how the Milky Way was formed.

Working with nuclear reactions may not be as dangerous as some might think. Graduate students as well as undergraduates are entrusted with running experiments. According to Dr. Champagne, there is more radiation from the walls of the lab than is created in the nuclear reactions he experiments with. A reaction of nuclear-bomb proportions is not required for these experiments, which makes them much safer. The greatest danger faced is the high voltages required to operate the machinery. These facilities, as described by Dr. Champagne, are very safe places with a great culture of people. The TUNL community consists of 20 faculty members, 40 graduate students, and 25 undergraduate students from UNC, North Carolina State University, Duke University, and North Carolina Central University. This small, intimate culture is what Dr. Champagne enjoys about his work. His passion also comes from the satisfaction of making discoveries: it is very difficult to measure the reactions they work with, and when a model they created and predicted is confirmed to be correct by experiments, it is very exciting.

It is surreal to think that every particle in the human body was once at the center of a star in a distant galaxy. It took a long time to put humans together, and it all started with the explosion of a star that spread those particles across the universe to eventually find their way to Earth and into every human being. It is also surreal to think that everything else comes from the interior of stars too! As Carl Sagan said, “If you wish to make an apple pie from scratch, you must first invent the universe.”1


References

1. Cott, Jonathon. Rolling Stone: The Cosmos: An Interview with Carl Sagan. https://www.rollingstone.com/culture/features/the-cosmos-19801225 (accessed February 25, 2018).
2. Interview with Arthur Champagne, Ph.D. 01/31/2018.
3. Department of Physics and Astronomy. UNC Nuclear Astrophysics. https://research.physics.unc.edu/nuclearastro/Welcome.html (accessed February 25, 2018).



technology and innovation

Printing Better Smiles

By Shreya Shah

Illustration by Zoha Durrani

For many of us, the painful experience of wearing braces is known all too well. Not being able to chew gum. Food getting stuck in your teeth. The tightening of the wires every two months. Shiny metal glaring while taking selfies. A typical two-year journey with braces begins with a trip to the orthodontist’s office, where a mold of the teeth is made to determine the placement of the metal brackets. The material used to make this mold has been reported by some patients to have such a foul taste that it triggers their gag reflex. Alginate, the chemical used to create these molds, which gives them their characteristically bad taste, is found in the cell walls of brown algae. Aside from sounding and tasting unpleasant, it can also be costly and delay the start of a patient’s orthodontic treatment. Even after two years of wearing braces, patients often find that their teeth are not completely straight and question whether the cost and the process were really worth it.

Figure 1. Dental brackets connected by orthodontic wire. Image courtesy of Creative Commons.

Dr. Ching-Chang Ko, a professor of orthodontics in the UNC-Chapel Hill School of Dentistry, has a potential solution to make the process of getting braces easier and to enhance the overall treatment. Inspired by his passions for engineering and dentistry, he has actively sought to remove these common problems. His dedication to solving these problems and creating innovation can be seen through his countless hours of work in the field of dentistry. He finds his profession as an orthodontist especially rewarding because he gets to work with younger people; to him, to have a profession means to truly love what you do (Figure 2).


Figure 2. Dr. Ching-Chang Ko. Image courtesy of Shreya Shah.

Dr. Ko and his mentee, Dr. Christina Jackson, are feeding 3D scans of teeth into their newly invented 3D printer, which produces the brackets of braces: the tiny squares of metal that hold the orthodontic wire on every tooth. 3D printing can significantly decrease the time it takes for patients to start their treatment, since it uses the scanned images to immediately print dental brackets. This deviates from the current norm, in which unpleasant dental molds are made to determine the correct placement of the brackets. Another benefit of this technology is that 3D-printed brackets are highly customizable; a bracket can be made for each tooth based on the patient’s tooth orientation and size.1

Currently, orthodontists place brackets on the teeth and make minor adjustments as needed (Figure 1). These brackets are not a one-size-fits-all appliance, however, and can be a factor in imperfect dentition and longer overall treatment time. In other words, today’s dental brackets are a reason why most patients’ teeth are not perfectly straight after braces and why their treatment takes longer. Computer-assisted dental appliances have been shown to reduce the time and increase the precision of the treatment itself.2 Routine 3D scanning can also easily adjust to minor changes that may occur during the treatment.

This solution has not come easily, however. Dr. Ko mentions that his greatest challenge was finding a printer with a high enough resolution to accurately fabricate precise prescriptions of tiny brackets (Figure 3). Luckily, Dr. Ko and Dr. Jackson were able to find these printers at Research Triangle Park, where they created their first prototype. In the future, Dr. Ko sees the need to make 3D

printers available in orthodontists’ offices that can accurately print tiny brackets on a massive scale. Adapting to the new technology of personal 3D dental printers has become more necessary and desirable. In the words of Dr. Ko: “As the world in the last 10 years has started becoming more digital, 3D technology has become more of a valuable tool to the field of medicine and dentistry.”3

According to Dr. Ko, the greatest beauty of this research lies in the fact that 3D printing poses no ethical concerns or any apparent disadvantages. It is simply a more innovative, practical way to create braces with greater precision. Many orthodontists in the area with whom Dr. Ko has been talking are considering using this technology once the scanning technology and brackets are patent-protected and FDA-approved, and once the individual brackets move beyond the prototype stage.

With this in mind, Dr. Ko predicts that it will be relatively easy to adapt to this new technology within the next three or four years. He has even jokingly mentioned that he may use this new technology on his own teeth!

What’s next? These patent-pending 3D orthodontic systems are only the start of the possibilities that 3D printing has to offer. Dr. Ko has focused primarily on creating custom metal braces, but 3D scanning can also enhance the accuracy of the placement of bracket fillings for Invisalign attachments. Beyond the field of dentistry, this research has many potential applications, such as using similar technology to create prosthetics for the femur with greater precision and less cost. There is a great deal of innovation yet to come out of the Ko Lab, but for now we can revel in its newest innovation—the printing of better smiles.

Figure 3. Individual dental brackets currently used by orthodontists. Image courtesy of Creative Commons.

References

1. Boswell, Kayla. UNC School of Dentistry Bringing 3-D Innovation to Braces. http://www.dailytarheel.com/article/2018/01/brackets-0129 (accessed February 19, 2018).
2. Brown, M.W.; Koroluk, L.; Ko, C.C.; Zhang, K.; Chen, M.; Nguyen, T. Am. J. Orthod. Dentofac. Orthop. 2015, 148(6), 1067-1074.
3. Interview with Ching-Chang Ko, Ph.D. 02/19/18.

11


neuroscience

Image courtesy of Creative Commons.

Learning to Balance

with Virtual Reality

By Patrick Gorman

Figure 1. An EEG headset used to detect neurological reactions to certain locomotive responses. Image courtesy of Dr. Jason Franz.

What if you could use visual input to manipulate how someone walks? For years, Dr. Jason Franz in the Biomechanics Lab at UNC-Chapel Hill has been investigating this very question. His team has found a correlation between reliance on visual input and postural control, with differences in reliance depending on age. Younger people are able to maintain their balance more effectively than older people for a multitude of reasons, ranging from sensory input to control of the muscles that maintain the body's posture. While this might seem obvious, the difference observed during experiments is particularly striking, which leads to a number of applicable methods for solving problems related to posture and balance control.1 As it pertains to visual input, if that input can be manipulated, say with virtual reality (VR), then related problems can be negated through external manipulation. The connection between vision and motor control is not the most surprising news, especially since most people can experience this phenomenon for themselves. For example, try balancing on one foot. For some, it is easier than for others, but generally, you are able to do so without great difficulty. Now, close your eyes and try to maintain your balance. It might be possible, but it requires a great deal more effort to keep upright. Younger people have sensors on the bottom of their feet in addition to visual inputs, which give them heightened somatosensory feedback and control.1 However, as Franz puts it, "in old age or in neurodegenerative disease, those are the first sensors to go. They become very unreliable."1 In response, the brain looks to vision as a primary source of input to gauge any corrections that need to be made to maintain walking. Dr. Franz and his lab suggested reverse engineering this mechanism so that visual input could be used to manipulate balance and control, especially in older people.2 The initial experiment in Dr. Franz's lab found exactly what you would expect: by presenting someone with visual perturbations, locomotive responses, especially lateral ones, are

12


observed to control posture. They tested the normal walking pace of their subjects, measuring the width of their steps and their sacral trajectory as a baseline. This was done by placing the subjects on a treadmill with a regular projection of a hallway in front of them, moving as if they were walking normally. Then, the subjects walked with perturbations in the projection of the hallway.2 They found that the visual stimulation of a fall did result in a greater locomotive response in older people, but the greater magnitude of this response was the most surprising factor. Dr. Franz found that while standing, there was no discernible difference between young people and older people, suggesting that only when older people were subjected to the perturbations were changes in their walking and balance patterns significant.1 These findings amplified the importance of visual input to muscle control and opened up a number of opportunities for new studies and experiments. One of these studies specifically concerned the magnitude of the response of certain muscles, setting a baseline for how different muscles react to different levels of perceived visual alterations. Through the use of electromyography (EMG) and electroencephalography (EEG), the reactions to the perturbations could be detected on a neuromuscular level.3 Pertaining to balance control in response to visual input, certain muscles reacted more strongly to the changes than others, indicating which muscles were being used most for balance control while walking. Additionally, the strength with which certain muscles responded scaled with the size of the perturbation, which indicates not only correlation but also a level of control of the body in response to perturbations.1 Further tests that determine how to use the VR simulation could yield more applicable solutions to balance problems and ways to intentionally alter locomotive functions.
A number of follow-up experiments have evolved from Dr. Franz's initial experiment. He says that the problem with elderly people and falls is that "most conventional balance assessments do a really good job of telling you you're at risk of falls after you've fallen."1 This has led to a line of research that uses proactive design to find preventive measures that stop falls in elderly people before they occur. This is especially necessary because older people with a history of falling have a greater tendency to fall again. Additionally, Dr. Franz's lab has started investigating subjects with multiple sclerosis (MS) to see how they vary from the base model of people without the disorder.1 By testing people who have damage to the link between the sensory extremities and the sensory cortex due to MS, the altered effect of visual perturbations can be further analyzed.4 Ultimately, both of these lines of research could collectively work towards using VR perturbations as a basis for rehabilitation. If the perturbations can be used to detect changes in postural control and locomotive movement, and the response of the body to a perturbation scales with a related magnitude, then perhaps a rehabilitation technique can be created to respond to altered movement patterns. This is certainly a prospect of Dr. Franz's lab, and completing this challenge would alter the lives of many as they grow older.

Figure 2. A visualization of the virtual reality hallway used in the experiments to provide visual perturbations. Image courtesy of Dr. Jason Franz.

References

1. Interview with Jason Franz, Ph.D. 02/08/18.
2. Franz, J.R.; Francis, C.A.; Allen, M.S.; O'Conner, S.M.; Thelen, D.G. Hum. Mov. Sci. 2015, 40, 381-392.
3. Stokes, H.E.; Thompson, J.D.; Franz, J.R. Sci. Rep. 2017, 7, 1-9.
4. National MS Society. https://www.nationalmssociety.org/ (accessed February 22, 2018).

13


Image courtesy of Creative Commons


A new take on Linguistic Relativity

By Rhea Jaisinghani

Human beings are able to speak and understand each other at a level of complexity beyond all other organisms. Linguistic capabilities affect our ability to perform day-to-day tasks such as ordering takeout, helping others with directions, and comprehending literature, songs, and poetry. These tasks somewhat obviously require the use of language, but what we do not automatically think about are the more complex human capabilities that could possibly be shaped by the language we speak. According to the Sapir-Whorf hypothesis, the language a person speaks influences their perception of the world. Cameron Doyle, a doctoral candidate in Psychology and Neuroscience at UNC-Chapel Hill, and Dr. Kristen Lindquist, assistant professor at UNC and director of the Carolina Affective Science Lab, delved into a similar question—does the language of emotion influence our memory for facial actions?1 Do we understand facial actions based on the labels we give emotions, like the words "angry" and "fear," as opposed to an understanding of the facial action itself? Ms. Doyle and Dr. Lindquist decided to explore these thought-provoking questions. They hypothesized that "emotion language biases perceptual memory for facial actions, causing people to remember target facial actions as appearing more similar to past

perceptions of category exemplars than they actually were."1 Essentially, if the visualization of an English emotion word is presented to people in addition to a face displaying that emotion, individuals are more likely to remember that face than individuals not presented with the emotion word. Since prior knowledge in general helps shape our perception of our surroundings, prior language knowledge must help shape our ability to perceive others' emotions. In an interview, Cameron Doyle described this as "not a bottom up process, but more of a top down conceptual process."2 Top down processing occurs when our brain makes use of prior knowledge to react to external stimuli.2 In this case, our brains would make use of stored linguistic knowledge in order to process and understand different facial emotions. Ms. Doyle and Dr. Lindquist conducted three studies. In the first two studies, the research team made use of novel, alien-like faces with emotions that human beings were not necessarily able to label with English terms (Figure 1).2 In study three, however, they tested whether familiar facial actions would produce the same results—making the information more applicable to human beings.2 The goal of the third study was to "train people to have a specific prior knowledge (in this case, specific emotions) and see if that knowledge biases later facial recognition."2 The study included data

14




Figure 1. Photo of "alien" faces and diagram of each phase within the study. Image courtesy of Cameron Doyle and Dr. Kristen Lindquist.

Figure 2. Graph of the data from the test phase of the study. Image courtesy of Cameron Doyle and Dr. Kristen Lindquist.

from 91 undergraduate students, 44 women and 47 men.1 Participants were divided into two groups—the verbal condition and the control group—and were asked to complete three phases: learning, target, and test (Figure 1). During the learning phase, participants in the verbal condition group were shown one of two faces (showcasing anger or fear) and were asked to press either an "anger" button or a "fear" button. This process was repeated multiple times in order to "build a storage of prior information."2 In the target phase, these participants were shown slightly different faces with the same labels in addition to a name for the face shown (such as Zanu or Bill), so they were presented with a slightly different angry and fearful face. The control group of participants was shown the same faces, but was instead asked to label them not by emotion words, but by how far apart each face's eyes were. This way, they were not maintaining this "cache" of prior emotion-label knowledge. When the verbal condition group entered the test phase, they were presented with the face from the learning phase, the face from the target phase, and a 50/50 morph of the two, and were asked to recall the second face. The results supported Ms. Doyle and Dr. Lindquist's original hypothesis; subjects who used the original "anger" or "fear" labels were more likely to incorrectly identify the face from the target phase. Instead of choosing the correct face, these participants were more likely to choose the face from the first phase—the one with which they associated the English labels "anger" and "fear" (Figure 2). What does this all mean? These results show that having a label for an emotion "brings online a mental simulation" that impairs your recall.2 In this case, that "simulation" was the mental image of the face associated with the emotion label—the faces shown in the learning phase. Further implications of the theory are enticing.
How could the Sapir-Whorf hypothesis be applied? Are the study's findings related to cultural differences in perception? In addition to commenting on the "cultural basis" of future implications, Ms. Doyle mentioned her team's interest in whether impairing language abilities impairs emotion perception. To test this in the near future, the team would use something like transcranial magnetic stimulation, a process that allows researchers to temporarily deactivate an area of the brain. If impairing language abilities were also seen to impair emotion perception, the original hypothesis regarding perceiving emotions and emotion labels would be further reinforced. It is interesting to think about whether our ability to speak influences our ability to understand facial emotions. Does that mean mute individuals struggle to understand how their friends and family feel? Further, it is often said that body language conveys more than the spoken word. However, this study raises the question: is language necessary for body language to be effective in the first place?

Illustration by Lizzie Satkowiak

References

1. Doyle, C.M.; Lindquist, K.A. J. Exp. Psychol. Gen. 2018, 147, 62-73.
2. Interview with Cameron M. Doyle, M.A. 02/08/18.

15



Image courtesy of Creative Commons

Lights... Camera... Neurons!

BY RAYYANOOR JAWAD

Ready to start the weekend, your squad heads out for a night of drinking, dancing, and more drinking! While you may be having fun, your brain is undergoing major changes in response to the negative effects of alcohol. According to a national survey, about 60 percent of college students between the ages of 18 and 22 reported drinking in the past month, and about two-thirds of them reported binge drinking.1 Binge drinking is defined by the National Institute on Alcohol Abuse and Alcoholism (NIAAA) as a pattern of drinking that brings one's blood alcohol concentration (BAC) to 0.08 grams percent or above.2 It constitutes the majority of drinking among high school and college students. Why is this the case? Researchers in alcohol studies have found that a combination of low sensitivity to alcohol sedation, high risk-taking behavior, and social reward seeking contributes to high rates of binge drinking in adolescents. Although some consequences of binge drinking are known—such as its link to liver disease and violence—less is known about the physiological implications for different parts of the brain, and how those impairments establish behavioral changes. Dr. Donita Robinson, Associate Professor in the Bowles Center for Alcohol Studies at the UNC School of Medicine and member of the

Neurobiology of Adolescent Drinking in Adulthood (NADIA) Consortium, examines motivational behaviors and their underlying neural circuits. The Robinson Lab explores questions such as: How can addictive drugs such as alcohol and nicotine alter a person’s brain in such a way that they become biased to alcohol-related stimuli? How easily can a person change and adapt their behavior to new contexts (flexibility), and when does drug exposure lead to less flexibility? Dr. Robinson first became interested in neuroscience while taking a neurobiology course at the University of Texas at Austin and conducting undergraduate research in a lab studying functional recovery after brain damage. Specifically, she studied how the basal ganglia and the cortex communicate with each other. These two parts of the brain are significant in alcohol studies, as they are most important in developing behavioral responses. Dr. Robinson still studies the same brain regions; however, she now analyzes motivated behavior instead of general behavior. According to Dr. Robinson, what interests her the most about her research lies in the fundamentals: “It is the basic idea of seeing what the brain is doing while the animal is making decisions. That is why we do what we do. For example, when we see a cell fire to a cue or when we measure dopamine release... I think that is very cool!” 3 In order to study how binge drinking influences motivated behavior, learning behavior, and neural activity, Dr. Robinson exposes male and female rats to alcohol during adolescence, then observes brain activity in the pre-frontal cortex and the basal ganglia in adulthood. Most studies in the Robinson lab involve Pavlovian conditioned stimuli, in which a

16


neutral stimulus is coupled with a stimulus known to give a response. Rats are trained to associate a light cue with receiving a reward that is initially sucrose, but eventually replaced with ethanol. Rats that interact with the cue itself are called "sign-tracking animals," and those that go directly to the trough in anticipation of the reward are called "goal-tracking animals." Although it may seem easy to assign these labels to individual rats, some rats exhibit both sign- and goal-tracking while others can shift their approach, indicating different learning behaviors. After observing conditioned responses to the reward (alcohol), further measures can be assessed. Recently, Dr. Robinson collaborated with Dr. Ian Shih of the UNC Center for Animal MRI to reveal that adolescent alcohol exposure reduced functional connectivity between prefrontal regions and the basal ganglia. Dr. Robinson said that exposure to binge-drinking levels of alcohol during adolescence is sufficient to produce some of the changes in brain activity that we see in people with alcohol-use disorder. This particular study showed that the animal model was producing the same brain changes that are observed in people. However, a major limitation in human studies is the uncertainty of whether drinking habits in people arise out of a pre-existing phenotype, or whether repeated drinking establishes certain phenotypic patterns. Looking forward, Dr. Robinson plans to continue collaborating with other labs to study more aspects of behavioral flexibility based on binge exposure and how the prefrontal cortex and basal ganglia interact to cause these changes. One of these labs is the Boettiger lab, which focuses

neuroscience

on elucidating the neurobiological mechanisms associated with addiction in the "CEO" of the brain. The Robinson and Boettiger labs expect to find out more about the relationship between alcohol and brain inflexibility based on sex and age using more advanced genetic tools such as DREADDs, which are engineered neuronal receptors that can be expressed and activated in specific neurons. Further down the research timeline, Dr. Robinson anticipates the development of therapeutic techniques involving the activation and inactivation of specific brain regions that are accessible in humans and have proven successful in animal models. This can eventually inform clinical therapies for people who wish to reduce drinking or even reverse the negative effects of drinking. The ultimate goal is to cure alcoholism! Dr. Robinson's research is at the core of alcohol studies, concentrating on how changing neural circuits affects motivated behavior and behavioral flexibility. Through the efforts of neuroscientists such as Dr. Robinson to gain insight into the effects of alcohol on brain function and development, the discovery of effective therapeutic practices is increasingly within reach.

17

References

1. National Institute on Alcohol Abuse and Alcoholism. "College Drinking." Turning Discovery Into Health (2015).
2. U.S. Department of Health & Human Services: National Institutes of Health. "Underage Drinking." Alcohol Alert (2006).
3. Interview with Donita Robinson, Ph.D. 02/16/18.

Illustrations by Candice Greene



Illustration by Olivia Novak

THE PHYSICAL IMPRINTS OF ADDICTION

BY EMILY SCHEIN

Addiction has been a timeless issue in our society. While there are countless drugs and therapies to help people ward off cravings and relapses, many people find themselves falling back into bad habits and returning to the dangerous drug cycle. Dr. Kathryn Reissner, Assistant Professor of Psychology and Neuroscience at UNC-Chapel Hill, looks to solve the addiction problem. Dr. Reissner's lab studies the cellular mechanisms of drug reward and addiction to cocaine. Her lab is primarily interested in how long-term changes induced by drugs mediate craving and relapse to use.1 Dr. Reissner also focuses on astrocytes, a commonly overlooked cell type. Astrocytes are a type of glia, one of the two types of cells that make up the brain (the other being neurons). While many scientists focus on neurons, since these are the cells that communicate, glia far outnumber neurons and play an important role in maintaining many facets of brain function. Dr. Reissner studies how miscommunication between astrocytes and neurons contributes to the long-lasting effects of cocaine.1 The rat self-administration model allows Dr. Reissner to observe the motivational aspects of seeking cocaine, to interrupt the behavioral process, and to develop candidate therapies to help prevent relapse.1 The procedure involves placing a catheter, connected to a cocaine pump, in the rat's jugular vein. The animals learn to press a lever, which then gives them an intravenous infusion of cocaine. The rats learn that pressing the lever is rewarding, which allows Dr. Reissner's lab to observe and adjust the rats' behavior. Additionally, the rat can control when and how much drug it receives, under careful monitoring for safety. After a rat completes the behavioral portion of an experiment (self-administration for several days or weeks, often followed by an abstinence period), Dr. Reissner takes a closer look at the rat's brain to view any long-lasting

18


cellular changes. To view the cellular mechanisms of the brain, researchers use fluorescent proteins to label cells and perform high-resolution imaging. Then, researchers use microscopes to view expression of fluorescent proteins in cells of interest (Figure 1). In this way, Dr. Reissner can study structural components of individual cells and communication between astrocytes and neurons. The brain communicates via neurotransmitters, which are released from one neuron and travel across a small gap, called a synapse, to another neuron. Astrocytes aid in maintaining the levels of neurotransmitters.1 When thinking about drugs, the neurotransmitter that comes to most people's minds is dopamine, known to cause pleasure. Many people associate addiction and drugs with excess dopamine. Dr. Reissner clarified the role that dopamine plays in drug use: "It is believed that excess dopamine mediates short-term effects of drugs. As a result of these changed dopamine levels, adaptations occur in glutamate signaling, such as disruptions in glutamate homeostasis."1 Therefore, short-term use of drugs causes changes in dopamine levels, but Dr. Reissner and her lab study the more permanent changes in glutamate levels. This characterization provides a new angle from which Dr. Reissner can approach and better understand addiction and relapse. Imbalances in glutamate homeostasis, or the synaptic levels compared to extra-synaptic levels, occur within the brain's reward circuitry after long-term drug use. "We believe that that is related to the structural changes that we see in astrocytes," explained Reissner.
"This contributes to disruptions in signaling which may actually mediate relapse to drug use."1 Research shows that there is a significant reduction in astrocyte volume and surface area, along with down-regulation of glutamate transporters in astrocytes, following prolonged withdrawal from cocaine self-administration.2 The Reissner Lab works to look deeper into these findings and to decipher the cellular mechanisms and functional consequences. Dr. Reissner spoke excitedly about recent advancements in her lab: "A year ago, we found that astrocytes within the nucleus accumbens are smaller and communicate less with neurons after withdrawal from cocaine use."1 The Reissner Lab discovered that this finding does not extend to other brain regions; it is only seen in the nucleus accumbens, an important part of the brain's reward circuitry.1 This is an important step in narrowing down brain areas affected by addiction. More recently, the Reissner Lab has found that animals treated with D-serine, a naturally occurring amino acid, show reduced behavioral measures of drug craving and seeking.1 This observation implies that by replenishing a naturally occurring substance, it is possible to reduce behavior leading to relapse. These findings provide physical evidence of the lasting effects of drug use on the brain and suggest that a stronger understanding of glutamate's cellular role could yield a viable target for therapeutic intervention. The Reissner Lab has made considerable contributions to finding better therapeutic interventions for addiction. While it is still unclear exactly how all of the pieces of the puzzle fall into place, Dr. Reissner has made significant progress in the field. By narrowing down which brain region to focus on and seeing the lasting physiological and chemical alterations in the brain, Dr. Reissner is close to fitting everything into place. This development provides promising results for the near future. Like all researchers, the Reissner Lab has faced its fair share of setbacks. Because astrocytes are not widely studied, the tools available to study them are not as extensive as the tools to study neurons. However, Dr. Reissner views these obstacles as opportunities to learn. "It gives us the opportunity to think creatively," Dr. Reissner commented.1 Dr. Reissner and her team think of ways to perform the experiments they want with the technology available. This consideration also gives the Reissner Lab the chance to develop better tools and finer-tuned microscopic images to study astrocytes. Dr. Reissner has been studying addiction for almost 10 years, five of them at UNC. Dr. Reissner fell in love with the quality of research at UNC, along with the exceptional people, which means a lot coming from a Duke graduate. Dr. Reissner is driven by the human element of her studies and her deep compassion for the people struggling with addiction. The ultimate goal of Reissner's research is to develop effective treatments that aid in treating addiction and preventing relapse. The Reissner Lab works at directly investigating therapeutic treatments and moving their research toward clinical trials.1 Dr. Reissner speaks very optimistically about contributing to helping those with addiction and, in her own words, "taming the beast."1

Figure 1. Fluorescently labeled astrocytes under the microscope. The astrocytes (red) fluoresce green once calcium binds to the fluorescent protein. The blue is a DAPI stain that labels the nuclei of all cells. Image courtesy of Dr. Kathryn Reissner.

References

1. Interview with Kathryn Reissner, Ph.D. 02/08/18.
2. Scofield, M.D.; et al. Biol. Psychiat. 2016, 80(3), 207-215.

19



Discovering Life Within Death By Maximilian Bazil

Images from Figure 1, courtesy of Dr. Mohanish Deshmukh.

Could proteins associated with death be the key to unlocking how we live? This question is central to the Deshmukh lab's mission. Characterizing the multifunctional properties of proteins could break the conventional molds of neuroscience to create a new, more flexible archetype. The human genome is composed of only approximately 20,000 genes, and yet it executes a far greater quantity of functions. "We have far fewer genes than we thought we did...we have to make do with very few proteins for multiple functions," said Dr. Deshmukh.1 A prominent example of this concept in biology is the well-documented mechanism of cytochrome c, which functions both in the mitochondria for the synthesis of adenosine triphosphate (ATP) through oxidative phosphorylation and in the cytoplasm to induce apoptosis (regulated cell death). The discovery of cytochrome c's adaptability led investigators, like Dr. Mohanish Deshmukh and his team, to look for other proteins with this capability. In fact, they found some that are involved in synaptic pruning, neuroregulation, and other complex processes in the brain.2 Proteins known as caspase proteases have been associated with apoptosis for decades, earning the title "Cell Death Proteins" across 18 distinct versions (called homologs) in mammals. These proteins are typically expressed as inactive enzymes and are tightly regulated; a subset even initiates an inflammation-specific form of cell death called pyroptosis. Initiator caspases are activated by cell surface receptors of the Fas and Tumor Necrosis Factor variety. Upon activation of this ligand-induced cascade, the formation of a death-inducing signaling complex (DISC) commences. In this way, initiator caspases activate effector caspases, and apoptosis occurs.

Another pro-apoptotic cascade can occur through a mitochondrial pathway that originates from the detection of intracellular stress, which brings about mitochondrial outer membrane permeabilization (MOMP) and the release of proteins such as cytochrome c, ultimately leading to apoptosis. It seems as though a process this sensitive to cell death would be difficult to apply to neurons, but evolution has allowed for powerful neuronal control of the caspase protease pathway. Dr. Deshmukh and his team have a track record of identifying novel apoptotic and anti-apoptotic pathways in both developing and mature neurons. In the past, they have studied a multitude of candidates with such capabilities, including different microRNA families, members of the B cell lymphoma-2 (Bcl-2) family, and even caspase recruitment proteins, which provided results that pointed away from the anti-apoptotic mechanism they had begun to study. While they were not the first to identify non-apoptotic functions of caspases in neurons, the Deshmukh lab appreciates the tightly regulated nature of these functions and intends to expand upon them further.3 The question for them is not how to describe the mechanism by which caspase proteases act, but the mechanism behind their tight regulation and restriction. The inspiration to pursue this research,

20




Figure 1. Left: Non-apoptotic role of caspases in axon and dendrite pruning, a visualization of the supported pruning mechanism. Right: Mechanism of neurite outgrowth, guidance, and branching due to caspase protease activity. Image courtesy of Dr. Mohanish Deshmukh.

for Dr. Deshmukh and graduate student Emilie Hollville, was the discovery, approximately 20 years ago, that the B cell lymphoma-2 (Bcl-2) family, which had typically been characterized by apoptotic functions, also had anti-apoptotic members. It was interesting to them to study how these Bcl family proteins could be so close to each other in structure and function from a molecular standpoint, yet so vastly different at a larger scale. The commonality is that members of both families are able to trigger the release of caspases.4 It is not known why these systems operate in such a way, but the Deshmukh lab intends to figure it out by applying novel models of their own. Many laboratories expand upon the Drosophila (fruit fly) model for its simplicity, but a key factor in the development of new and more complex information about caspases lies within the mouse model.5 The Deshmukh lab studies pruning of neurons both in vivo (inside a living organism) and in vitro (outside a living organism). Pruning of axons and dendrites is necessary due to the presence of superfluous connections. Pruning allows for the selective elimination of unwanted axons, synapses, or dendrites without killing the parent neuron in the process. The role of caspases in this process has been supported across the biomedical literature; pruning is induced by neurotrophic factor deprivation (Figure 1, Left). A lack of Nerve Growth Factor (NGF) induces death via apoptosis through the release of caspases. The Deshmukh lab recognized that once neurons matured, they were capable of preventing cell death and apoptosis. However, when literature came to light that caspases were involved in pruning, they were taken aback.
“You have to be quite careful when you activate caspases, if you’re a neuron, about making sure you don’t end up dying,” Dr. Deshmukh offered, as he explained just how impressive it was that neurons could regulate caspases so tightly as to not destroy entire neural circuits.6 The

Deshmukh lab ran with research conducted by peers showing that neurons can even activate caspase pathways for neurite outgrowth, guidance, and branching (Figure 1, right). Dr. Deshmukh’s use of microfluidic chambers allows him to expose different parts of the neuron to different stimuli, such as depriving the axon of NGF, to understand the mechanisms of regulation and restriction. The microfluidic chambers, developed by Anne Taylor, a colleague in the biomedical engineering department, also allow for spatial control over caspase activation. Through this targeted approach, Dr. Deshmukh believes that studying caspases can help prevent neurodegenerative diseases, such as Alzheimer’s, and reduce the effects of stroke. Dr. Deshmukh and his team have taken the first steps toward one day going beyond even mouse models to tackle the complexity of caspase studies. Caspases represent a novel and burgeoning theme in the biological sciences, demonstrating that proteins with strikingly similar structures can execute a variety of functions. With so many avenues for further investigation, Dr. Deshmukh’s lab will continue to explore this new frontier in neuroscience research.

References

1. Interview with Mohanish Deshmukh, Ph.D. 02/27/18.
2. Hollville, E.; Deshmukh, M. Seminars in Cell & Developmental Biology 2017.
3. Annis, R.P.; et al. The FEBS Journal 2016, 283, 4569-4582.
4. Knight, E.R.W.; et al. Oncogene 2014, 34, 394-402.
5. Nakamura, A.; et al. Journal of Neuroscience 2016, 36, 5448-5461.
6. Geden, M.J.; Deshmukh, M. Current Opinion in Neurobiology 2016, 39, 108-115.



education

Image courtesy of Creative Commons

EDUCATING the EDUCATORS BY HARRISON JACOBS

Research is investigation or experimentation aimed at the discovery and interpretation of facts, or at the revision of accepted theories in light of new knowledge.2 Oftentimes, people think that research is confined to pure scientific discovery, as with cancer, nanoparticles, and superconductors. But research is not limited to the physical entities that make up the natural world; humanities-based research, for example, focuses on the human experience. For Dr. Duane Deardorff and the Physics Education Research Group at UNC-Chapel Hill, research entails studying the ways in which students learn best in courses taught in the Department of Physics. With his colleagues Dr. Alice Churukian, Dr. Colin Wallace, and Dr. Dan Young, Dr. Deardorff studies the factors that make a difference in student learning, specifically “how students learn, how [educators] teach, [and] how to match those” in a way that optimizes student growth and understanding of the material covered in class.1

Dr. Deardorff and the professors in the Physics Education Research Group analyze student performance in a variety of ways. In introductory physics courses, the group uses concept surveys, which measure students’ learning gains over the semester. Instructors administer a concept survey on the first day of class and compare the results with scores on the same survey given on the last day; because the survey is identical in both cases, it shows how much a student has learned over the semester given their initial understanding of the material. The group then creates graphs relating growth in knowledge to parameters such as time elapsed in the course. The switch from traditional lecture-based learning to hands-on learning just about doubled the gains in performance on the concept survey, from 20% to 40%. To optimize learning, students are often grouped with peers of varying physics backgrounds so that experienced students can teach their less experienced counterparts. While some would assume that only the inexperienced students benefit from this arrangement, in reality the more experienced students also solidify their understanding of the course material when they teach it.

The techniques used by the Physics Education Research Group are manifold. Aside from concept surveys, the group uses a practicum, a part of the physics exam that stresses proper laboratory procedure; Dr. Deardorff emphasizes that the laboratory component of the course helps students understand concepts through active experimentation. Through national conferences and interactions with colleagues at other institutions, the group can identify the lab procedures most successful in promoting student understanding. Additionally, qualitative surveys give the group feedback on different teaching styles and allow comparisons between teaching styles and student performance.

When asked whether the results from these techniques were satisfactory, Dr. Deardorff said that they were promising but had room for improvement. While he would like everyone to earn an A in the course, he admits that students enter the class with a wide range of abilities. Currently, Dr. Deardorff says, the initiatives have focused mainly on introductory courses; however, the group is now starting to work on upper-level courses. Implementing improved teaching strategies in these courses will take at least a couple of years, but the results should slowly but surely show. Ultimately, physics education research is a cyclical process in which teaching informs the research and the research informs teaching. Student performance should continue to improve as long as peer institutions collaborate and professors remain invested in student success.
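The pre/post concept-survey comparison described above can be sketched in a few lines of Python. The normalized-gain formula here is the standard “Hake gain” widely used in physics education research; the class scores are illustrative values chosen to echo the 20%-versus-40% figures, not the UNC group’s actual data.

```python
# Toy sketch of a pre/post concept-survey comparison.
# The normalized gain g = (post - pre) / (100 - pre) is the standard
# "Hake gain" from physics education research; the scores below are
# illustrative, not data from the UNC group.

def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Fraction of the possible improvement actually achieved."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

def class_gain(pre_scores, post_scores):
    """Average the class's pre and post scores, then compute the gain."""
    pre_avg = sum(pre_scores) / len(pre_scores)
    post_avg = sum(post_scores) / len(post_scores)
    return normalized_gain(pre_avg, post_avg)

# Hypothetical classes with the same first-day average but different
# teaching styles (lecture-based vs. hands-on).
lecture_pre, lecture_post = [40, 45, 50], [52, 57, 62]
studio_pre, studio_post = [40, 45, 50], [64, 69, 74]

print(round(class_gain(lecture_pre, lecture_post), 2))  # 0.22
print(round(class_gain(studio_pre, studio_post), 2))    # 0.44
```

Averaging the class before computing the gain, as here, is one common convention; averaging each student’s individual gain is another.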

References

1. Interview with Duane Deardorff, Ph.D. 02/2/18.
2. Research. https://www.merriam-webster.com/dictionary/research (accessed February 20, 2018).

Figure 1. Physics studio group collaborating on an assignment. Image courtesy of Dr. Duane Deardorff.




education

Images courtesy of Creative Commons

Creative Classrooms: Using Robots to Enhance Creative Engagement By Ami Patel

Many students in colleges and schools across the nation are used to learning in a lecture-style environment. In lecture-style classes, the instructor “lectures” at the students, who passively try to absorb the information by taking notes or by just listening. It looks like this: the teacher speaks on a topic with little to no interaction with the students, and the students sit and stare with minimal effort. The key element missing from this sort of environment is actual learning, through engagement, evaluation, and the processing of ideas.

Dr. Keith Sawyer, a Morgan Distinguished Professor in Educational Innovations at UNC-Chapel Hill, strives to reinvent classroom teaching by basing learning on a standard of creativity. Much of his research concerns the studio model of teaching and learning, in which professors guide students to lead and create. “The central concept of the studio model is the creative process, including three groups of emerging ideas: learning outcomes associated with the creative process, project assignments that scaffold mastery of the creative process, and classroom practices that guide students through the



creative process.”1 After studying 38 schools across the nation, Dr. Sawyer found that the studio model not only improves students’ ability to move through the creative process but also increases their learning outcomes on projects.1 One way Dr. Sawyer aims to reinvent the modern-day classroom is by implementing the studio model in his own teaching. In his Education 390 course, Special Topics in Education, Dr. Sawyer uses technology in the form of various robots to engage students in hands-on learning. For example, the Sphero is a spherical robot that can move in various directions and at various speeds and is controlled through a smartphone application. Dr. Sawyer challenges his students to jump into using the robots with little instruction, giving them the freedom to learn and think creatively as they proceed. A student currently enrolled in Dr. Sawyer’s course, William Sweet, says that “our professor allows us to work together and innovate with other creators in a way that we are able to help build up each other’s visions and ideas.” Additionally, classmate Emma Fiore notes that “this is the first class I have taken that challenges me to learn as I go on my own, unlike other STEM classes where there has been no push to truly engage in lecture.”2

Dr. Sawyer encourages students to be creative outside of the classroom as well. Each week, a guest client from a company or school explains to Dr. Sawyer’s students a problem the organization is facing. The students are matched with the client whose needs best fit their interests and then serve as consultants as part of their final project, putting the concepts and skills they have learned into practice. Each project relates to a program that promotes creativity; clients have included Kidzu Children’s Museum, CCEE (for makerspace installations in elementary schools), and Morehead Planetarium.


Figure 1. Robotic prototype built by students. Image courtesy of Wikimedia Commons.

Dr. Sawyer mentions that “there are many great student resources available on campus that many people do not know about, but should be utilized to their advantage.”3 He promotes such resources by assigning students to visit and engage with places such as the BeAM makerspaces located across UNC’s campus and the virtual reality simulations in the House Undergraduate Library. The BeAM makerspaces are multi-faceted, offering 3D printers, laser cutters, wood labs, sewing machines, and more. These resources can support personal projects, business prototyping, or classroom assignments; whatever the purpose, the spaces allow students to learn hands-on and stretch their creative thought processes. Overall, although the maker-based environment is a newer approach to teaching and may be difficult to apply across disciplines, Dr. Sawyer is “hopeful that educators continue to push for more creative learning spaces, so students can thrive on higher levels of engaged creative thinking.”3

References

1. Sawyer, K. J. Learn. Sci. 2017, 27, 137-181.
2. Interview with William Sweet and Emma Fiore. 02/25/2018.
3. Interview with Robert K. Sawyer, Ph.D. 02/22/2018.



molecular biology

Illustration by Laura Wiser

Death of a Middleman: Uncovering RNA’s Importance By Dylan Brown

Every part of the human body is constructed from instructions provided by DNA, the poster child of the genetic system. But DNA is somewhat shy: it never leaves the protective bubble of the cell nucleus. To transmit genetic information, the cell instead relies on RNA, a closely related molecule formed by copying relevant pieces of DNA. RNA carries this all-important information from the nucleus to cellular machinery that decodes it and uses it to construct thousands of different proteins, which then carry out vital cellular functions. Surprisingly, of the roughly 3 billion individual pieces of information that DNA encodes (collectively known as the genome), only about 2% ever gets copied and translated into protein. Accordingly, biologists and geneticists initially dismissed much of the other 98% as “junk DNA,” an evolutionary leftover with no discernible function. However, researchers then discovered something unexpected: while the cell translates only 2% of its DNA into protein, it makes RNA copies of between 70% and 80% of it. Living things rarely undergo processes as complex as copying DNA for nothing, so these non-protein-coding RNA regions had to serve some purpose. Suddenly, RNA was no longer a boring middleman, nor were the non-protein-coding segments of DNA mere “junk.” Efforts ensued to discover the purpose of these vast swaths of RNA molecules.

Figure 1: 3D representation of the tertiary structure of RNA. Image courtesy of Dr. Kevin Weeks.




UNC-Chapel Hill chemistry professor Dr. Kevin Weeks and his laboratory group have spent years studying the role and structure of these RNA molecules and have shown that they play an important role in the control of genetic information. As Dr. Weeks explains, “they govern how things are expressed in certain cells, or the developmental plan of the cell. So, RNA molecules are really important for regulation. At the end of the day, what matters isn’t how many genes you have, but how they’re regulated.”1 The lab’s research shows that a surprising amount of the regulatory capacity of these molecules comes from their structure, or how they arrange themselves in 3D space. While the DNA template takes the form of a long, coiled strand, RNA can bend, twist, and form complex, looping patterns. These structural variations allow RNA molecules to carry out a variety of functions; the same molecule, rearranged, can act in a completely different manner.1

RNA molecules are far too small for their structures to be observed directly, however. To get around this, Dr. Weeks and his research group created what he calls “molecular microscopes.” These innovative methods, called SHAPE technology, use a surprisingly simple chemical reaction to label different regions along the length of an RNA molecule in terms of their “stiffness” or “floppiness.” Essentially, regions that are folded are more rigid, while those that remain loose or loopy are more mobile. This discrepancy is quantifiable, and a computer program developed by the lab uses these data to visualize the overall 3D structure of the molecule.1 Understanding this structure is the first step toward understanding the behavior of RNA, and SHAPE technology is now widely used by other researchers investigating genome regulation. One particularly relevant application of Dr. Weeks’ research involves viruses that use RNA rather than DNA to store and transmit their genetic information.
Viruses inject this RNA into host cells, effectively hijacking them. Some, like the common cold virus, are relatively benign; others, such as HIV, are extremely dangerous. All of these viruses are difficult to treat medically, as storing genetic information as RNA allows them to evolve extremely quickly. Scientists originally underestimated the complexity and importance of their RNA structures. As Dr. Weeks puts it, “now it’s clear that everyone thinks just the opposite: that RNA viruses have lots of internal structures, and that many of these structures are likely very important.”1 A better understanding of these structures could lead to the creation of new, RNA-based antiviral treatments, and Dr. Weeks and his lab have contributed to research on several viruses, including an effort that mapped the structure of the entire HIV-1 genome.2

Working in Dr. Weeks’ lab is Amanda Osta, one of a number of undergraduates who conduct their own research into RNA structure as part of the Undergraduate Transcriptome Project. Osta works with Satellite Tobacco Mosaic Virus (STMV), a small pathogen with a short RNA genome that serves as a model for many similar RNA viruses. Working toward uncovering the 3D structure of STMV’s genome, she says, is essential to understanding how the virus works and thus how to counter its effects. “3D interaction can affect gene expression,” she explains, and “that’s how future technology might be able to counteract RNA viruses—altering some sort of 3D structure once it’s known.”1

Dr. Weeks likewise sees tremendous future potential for these technologies and the discoveries that have followed. While it remains unproven, it is possible that researchers could tackle even currently untreatable diseases caused by “undruggable” proteins: new treatment methods could counteract the RNA before the protein is even created. The work of Dr. Weeks’ lab is thus transforming the way we look at viruses, disease, and treatment. “It may be that in ten to fifteen years,” he said, “a good number of the drugs that people take will target RNA. And if that’s true, it will probably be partially due to our technology.”1 Asked how he felt about that, he chuckled and said, “It’s really kind of fun.”
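The “stiffness versus floppiness” labeling at the heart of SHAPE can be illustrated with a toy sketch. The sequence and per-nucleotide reactivity values below are invented, and the 0.40/0.85 cutoffs are a convention commonly seen in the SHAPE literature; none of this is the Weeks lab’s actual data or software.

```python
# Toy illustration of SHAPE-style labeling: each nucleotide gets a
# chemical reactivity score; high reactivity ~ flexible ("floppy",
# likely an unpaired loop), low reactivity ~ constrained ("stiff",
# likely base-paired). Cutoffs of 0.40/0.85 follow a common
# convention in the SHAPE literature; the sequence and values
# below are made up for illustration.

def classify(reactivity: float) -> str:
    if reactivity < 0.40:
        return "constrained"   # likely paired / structured
    elif reactivity <= 0.85:
        return "intermediate"
    return "flexible"          # likely unpaired loop

sequence = "GGCAUCGAAAGCUGCC"
reactivities = [0.1, 0.05, 0.2, 0.3, 0.15, 0.25, 0.9, 1.2,
                1.1, 0.95, 0.2, 0.1, 0.3, 0.6, 0.1, 0.05]

# Print a per-nucleotide profile; the run of high values mid-sequence
# would be read as a flexible loop flanked by structured regions.
for base, r in zip(sequence, reactivities):
    print(f"{base}  {r:5.2f}  {classify(r)}")
```

A real SHAPE pipeline feeds such reactivities into structure-prediction software as folding constraints; this sketch shows only the labeling step.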

Figure 2: 2D representation of the 3D structure of the HIV-1 genome, determined using SHAPE technology. Image courtesy of Dr. Kevin Weeks.

References

1. Interview with Kevin Weeks, Ph.D., and Amanda Osta. 02/19/18.
2. Weeks, K.M.; et al. Nature 2009, 460, 711-716.



molecular biology

Yeast: A Tiny Organism with Huge Potential BY JASON GUO

Image Courtesy of Creative Commons

Despite their seemingly simple structure, yeast may have more in common with humans than you might think. Not only do humans and yeast share a common ancestor from a billion years ago, but they also share hundreds of genes with the same functions. Genes are the basic functional units of inheritance and are composed of DNA, which carries the genetic information passed down from our parents. In humans, this comes in the form of forty-six total chromosomes, half from each parent, distributed through a process involving chromosomal segregation. The mechanism by which these chromosomes segregate during cellular division is essential to the functionality of the daughter cells. Dr. Kerry Bloom, professor of Biology at UNC-Chapel Hill, has spent the past few decades researching essential elements of the chromosomal segregation process during cell division in yeast.

Mitotic chromosomal segregation begins with the formation of the spindle, whose poles move to opposite ends of the cell. The spindle microtubules then interact with the chromosomes, attaching at their cores, also known as kinetochores, and separating the sister chromatids toward their respective poles. The cell later divides into two daughter cells with identical genetic information (Figure 1). Although many internal regulators control this process, errors in spindle orientation or chromosomal segregation are possible. Such errors in segregation, known as nondisjunction, can lead to further problems in cell division. For instance, if a paired chromosome fails to separate, the resulting error in chromosome number can produce conditions such as trisomy 21 (Down syndrome) or monosomy X (Turner syndrome).

Figure 1. Stages of mitosis. Image courtesy of Jay Reimer, CC-BY-SA 2.0.

When he first began, Dr. Bloom focused solely on protein function rather than on proteins’ interactions with other parts of the biological system involved in chromosomal segregation. According to Dr. Bloom, “The field has matured over the years […] only recently have we started looking at protein interaction with the DNA. It stemmed from a strong collaboration from people in computer science and physics.”1 Dr. Bloom’s lab studies a microtubule-based motor protein known as dynein, which in mitosis is involved in processes such as centrosome separation and spindle organization. The Bloom Lab has isolated the dynein gene from yeast and found that dynein is dispensable for cellular growth (Figure 2). Dr. Bloom discovered that, in yeast, although dynein assists in establishing proper spindle orientation during cell division, it does not play an essential role in spindle assembly or chromosome movement.2 In addition to dynein, the Bloom Lab researches the roles of other microtubule-based motor proteins in chromosomal and spindle dynamics.

Figure 2. Crystal structure of dynein on a microtubule. Image courtesy of Spudich, J.A., Science, 2011.

Aside from studying motor proteins, the Bloom Lab is interested in building artificial chromosomes in yeast. According to Dr. Bloom, “What it means is essentially designing chromosomes […] so we build them and label them with fluorophores and we ask how they behave. Do they get stretched? If they get stretched, how often do they get stretched? Does it coil up and look like a spot?”1 Artificial chromosomes can also serve as gene therapy vectors: because they can encode a desired genetic product and are able to replicate and segregate autonomously, they can be transferred into cells that lack certain genetic information to make up for the lost function. Dr. Bloom hopes that one day his research with yeast will have biomedical implications.

Although Dr. Bloom’s lab appears to be focused on wet-lab research, the team also uses many computational tools for data analysis and interpretation. “We can ask questions in a biological system and in a computer simulation. Then, what we try to do is to match the computer simulation to mimic the biology.”1 The Bloom Lab includes computational biologists who assist in analyzing results using programs such as Matlab and Python. Dr. Bloom hopes to explore more of the computational side and to integrate computation, physics, and biology into studies of biological systems.

At first glance, yeast may seem insignificant. To think it could be your distant cousin is almost unfathomable, but this tiny organism plays an essential role as a model for understanding the chromosome, with major applications in the fundamentals of cell cycle regulation and cancer. “It is a very exciting time to be in biology these days—exciting in terms of seeing the change in medicine to personalized medicine and the tools we have available.”1 The next time you open a package of baker’s yeast, maybe you will not view those grainy specks as insignificant anymore. Rather, you may appreciate the breakthroughs they have made possible in science.

References

1. Interview with Kerry Bloom, Ph.D. 02/12/18.
2. Li, Y.; et al. Proc. Natl. Acad. Sci. 1993, 90, 10096-10100.
3. Spudich, J.A. Science 2011.
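The “match the computer simulation to mimic the biology” idea can be illustrated with a toy Monte Carlo sketch: simulate cell divisions under a candidate nondisjunction probability, then keep the candidate whose simulated error rate best matches an observed rate. The probabilities and the observed rate below are invented for illustration; this is not the Bloom Lab’s model or code.

```python
# Toy sketch of matching a simulation to an observation: each division
# produces an aneuploid daughter (wrong chromosome count) with
# probability p; we scan candidate values of p for the one whose
# simulated rate is closest to a hypothetical measured rate.
import random

def simulate_aneuploidy_rate(p: float, n_divisions: int, seed: int = 0) -> float:
    """Fraction of simulated divisions that mis-segregate."""
    rng = random.Random(seed)  # seeded for reproducibility
    errors = sum(1 for _ in range(n_divisions) if rng.random() < p)
    return errors / n_divisions

def fit_error_probability(observed_rate: float, candidates, n_divisions=100_000):
    """Pick the candidate p whose simulated rate best matches the observation."""
    return min(candidates,
               key=lambda p: abs(simulate_aneuploidy_rate(p, n_divisions) - observed_rate))

observed = 0.012  # hypothetical measured mis-segregation frequency
best_p = fit_error_probability(observed, [0.001, 0.005, 0.01, 0.02, 0.05])
print(best_p)  # 0.01 for these illustrative numbers
```

Real simulations in this area model polymer mechanics and spindle forces rather than a single coin flip, but the fitting loop — simulate, compare, adjust — has the same shape.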




molecular biology

THE FUTURE OF CANCER RESEARCH BY ANTHONY SCHINELLI

Image courtesy of Creative Commons

As I hustled over to Chapman Hall at 7:30 in the morning for our interview, barely awake, Dr. Nancy Allbritton had long since woken up. As a Distinguished Professor and researcher at UNC-Chapel Hill, Dr. Allbritton spends every day, from 5:00 am until long after dark, conducting research and analyzing data to better understand the minute differences and signaling patterns between individual cells. The purpose of this, Dr. Allbritton says, “is for tumor diagnostics, determining how many drugs the patient needs to start out with, [so] you could track them over time and see how their tumors are changing and evolving their signaling networks.”1 In a nutshell, her research centers on building devices and programs that allow clinicians and scientists to see and do things within a patient’s cells that were never before thought possible.

Dr. Allbritton’s research involves loading “reporter” molecules into cells (Figure 1). Their status and use within the cell are tracked over time from within cellular arrays (Figure 2) to better understand fundamental processes within cells and how certain errors in these processes can lead to diseases such as cancer.2,3 Over time, Dr. Allbritton hopes that these reporter molecules and substrates will become more diverse in type, as well as more accessible to clinicians and other scientists for use in their own diagnostics and research.

Figure 1. A reporter molecule of the type Dr. Allbritton uses to monitor cellular activity. Image courtesy of Dr. Nancy Allbritton.

The goal is to help develop more specialized therapies by better adapting each patient’s treatment to their own immune system and their particular cells. Under current cancer treatment strategies, doctors use chemicals and medicines that target large swaths of cancerous cells in the hopes of destroying most, if not all, of the tumor. While this strategy can push cancer into remission, it does not address the underlying issues that caused the cancer in the first place. As a result, many patients see their cancer return after several years, leading to even more treatment and longer recovery periods. Dr. Allbritton’s work centers on changing that reality. Within the next decade or so, she says, the programs and substrates her group is developing will be on the market for clinicians and hospitals to use.

Despite recent media attention surrounding the epigenome and its potential relationship to cancer-causing genes, Dr. Allbritton says that in the future that type of cancer research will be overshadowed by research on intercellular communication and cellular analysis. In her words: “You get an idea [based on the DNA], but not a great idea. So what we want to do is to measure downstream of the cell’s signaling activity, which is driving what the cell’s doing. That’s a much better predictor of how a cell’s going to behave than if you look all the way back at the DNA.”1

Figure 2. The type of cellular array in which Dr. Allbritton places cells to monitor and make measurements. Image courtesy of Dr. Nancy Allbritton.

One thing that makes Dr. Allbritton’s work so interesting and exciting is how confident and enthusiastic she is about it, even given the tremendous uncertainty that comes with scientific research. Researchers often spend valuable time and resources on projects that are rejected by companies and governments or, in some cases, prematurely halted for lack of funding. In Dr. Allbritton’s case, her research projects may last anywhere from 10 to 15 years, and given their applicability to daily consumer life, their success also depends heavily on the acceptance of the products and results by governments and health organizations. Despite all of this, Dr. Allbritton is neither deterred nor nervous about the success of her research. Based on the findings so far, she is confident not only that this research will reach completion, but also that it will thrive in the consumer market and help make a major difference in the lives of many cancer patients, both in the United States and abroad.

In life, you sometimes come across people whom you recognize immediately as movers and shakers in the world. Dr. Allbritton is one of them. Her skills are being used to their fullest in her current research on intercellular communication, enabling her to truly change lives. Her diligence and enthusiasm toward her work were palpable throughout the interview; her work is a part of who she is, and her enthusiasm for science and research is downright contagious. With the future of cancer research on the line, Dr. Allbritton continues to rise to the challenge, and the fruits of her labor stand to change cancer research and patient survival rates as we know them. The insights from her research will enhance doctors’ ability to make accurate cancer diagnoses and will provide more effective, patient-specific treatment options.

References

1. Interview with Nancy Allbritton, M.D., Ph.D. 2/6/2018.
2. Allbritton, N. Research Highlights. http://www.chem.unc.edu/people/faculty/allbritton/index.html?display=highlights (accessed March 13, 2018).
3. Lantz, R. Global Reporter Molecule Market. http://ereports.asia/wp-content/uploads/2018/01/Global-ReporterMolecule-Market.gif (accessed April 8, 2018).



molecular biology

Image courtesy of Wikimedia Commons.

WHAT THIS PEST CAN DO MAY TICK YOU OFF BY SOPHIE TROYER

Picture this: you are hiking in the woods behind your house, or maybe you simply bump against a tree on campus. You are spending time outside, as recommended by many physicians. Later, you wake up in the middle of the night itching, with raised splotches all over your body, or maybe with nausea and cramps. You are scared; this has never happened before. Perhaps you have never had an allergy, or you know how to avoid your allergens and are surprised by the out-of-the-blue reaction. This is the reality for people who, after a tick bite, develop a red meat allergy. The allergy is strange for several reasons: people can acquire it just by being bitten while outside, the allergic reactions are delayed, and research points to a sugar rather than a protein as the allergen.

Dr. Scott Commins, an Associate Professor of Medicine at UNC-Chapel Hill, stresses the importance of researching this allergy to help suffering patients. He says, “It’s a relatively new disease, so I find it satisfying to give answers to patients who have been dealing with a mysterious illness. For some of them, [there have been] decades of issues [during which] they didn’t understand what the origin of the problem was.”1 Since there is no cure for allergies, part of the research process is figuring out how to treat them. The first step is to recognize that a food allergy is occurring; patients can then have peace of mind in knowing how to avoid allergic episodes. A simple blood test can determine whether a patient has a red meat allergy.1

Allergies can be annoying at best and life-threatening at worst. Patients with a food allergy can experience a range of symptoms including swelling, hives, nausea, and lightheadedness.1 Anaphylaxis, a severe allergic reaction that includes these symptoms along with trouble breathing, can be fatal. When the immune system encounters an allergen, it thinks it has identified



a potentially dangerous agent and produces Immunoglobulin E (IgE) antibodies.2 IgE stimulates cells to release histamine, a chemical that causes swelling, hives, and other symptoms of an allergic reaction. People have different allergies because their immune systems identify different allergens as dangerous.

Several factors make the red meat allergy unique. Allergens are normally proteins, and this is part of what sets the red meat allergy apart: people with this allergy are reacting to a sugar in the meat. Though the symptoms of a meat allergy are the same as those of other food allergies, patients can experience anaphylaxis 3-6 hours after they eat, instead of the usual window of minutes to two hours.1 Another strange difference is that people acquire the red meat allergy after a tick bite rather than having it from birth.1

The particular sugar that causes the reaction is galactose-alpha-1,3-galactose (alpha-gal).1 The allergen was first noticed at UNC’s Lineberger Comprehensive Cancer Center, where Dr. Bert O’Neil observed a number of anaphylactic reactions to the cancer drug cetuximab and recognized that the allergy was regional.1,3 These reactions were later discovered to be in response to alpha-gal, since cetuximab is decorated with the alpha-gal sugar.1 The regional link is explained by the prevalence of wooded areas where ticks can bite people. Researchers linked the cetuximab and alpha-gal allergies to red meat allergies, and then to tick bites, by looking at factors that patients with the allergy have in common.1 To confirm this experimentally, mouse models were used to show the immune response after a tick bite.1 Why does an alpha-gal allergy indicate a red meat allergy and vice versa?
The common denominator between meat from pigs, cows, and sheep and the specific red meat allergic reaction is alpha-gal.1 Alpha-gal is found on the cells and tissues of lower mammals, including the animals eaten as red meat. Humans, however, do not have this sugar on their proteins, and neither do birds or reptiles.1 Since many North Carolinians get tick bites, the state is classified as a high-risk area. Furthermore, the research can benefit our understanding of how food allergies work in general. As Dr. Commins says: “The significance of investigating these reactions comes not only from the obvious importance of understanding a life-threatening form of food allergy, but also in defining a totally new mechanism for reactions related to an important food substance [non-primate mammalian meat].”1 This can have implications for the field of food allergies at large, because learning about a specific allergy mechanism can point to how a person develops allergies in general. Research is especially important due to the increasing prevalence of food allergies.2 No one really knows why this is, as allergies do not appear to be evolutionarily favorable. IgE antibodies are linked to fighting parasites, but in regions where parasitic diseases are less common, they do little apart from causing allergic reactions.4 So much about this process remains unknown, which is what makes it particularly fascinating.

Dr. Commins’ future research will focus on the reason for delayed reactions to red meat, but he cannot study this without giving patients the allergen, which puts them at risk.1 Therefore, researchers need to figure out if patients can be desensitized to the allergen.1 Desensitization is a large part of allergy treatment today, and it most commonly involves allergy shots. Allergy shots expose the patient to small amounts of the allergen, in the hope that repeated exposure, increasing over time, will build up a tolerance. An option that Dr. Commins is looking into is using a nanoparticle to place the alpha-gal sugars under the tongue or in patches on the skin.1

Figure 1. Lone star ticks in a petri dish. Image courtesy of Dr. Scott Commins.

Dr. Commins hopes his research will help to provide guidance and treatment options. Since patients will likely not know that they are having a reaction to red meat, healthcare workers must properly diagnose the allergy to prevent future anaphylaxis.1 Additionally, research on what the tick bite does to the immune system could lead to a treatment that intervenes before the allergy is established.1 Another treatment option could be to isolate the cells that make the IgE antibody and disrupt them.1 The red meat allergy is becoming more prevalent globally. Since the allergy is spread by ticks, Dr. Commins thinks that it could be eradicated if people stopped getting tick bites.1 Ultimately, the allergy can be treated easily if it is identified early and if precautions are taken to prevent its spread via tick bites.

References

1. Interview with Scott Commins, MD, PhD. 02/20/18.
2. Allergic Reactions. https://www.aaaai.org/conditions-and-treatments/library/at-a-glance/allergic-reactions (accessed February 23, 2018).
3. Smith, J. A Mysterious Allergy Afflicts The South. http://endeavors.unc.edu/win2008/regional_allergy.php (accessed February 23, 2018).
4. Oettgen, H. J Allergy Clin Immunol. 2016, 137, 1631-1645.



health and medicine

A JOINT EFFORT: DETECTING POST-TRAUMATIC OSTEOARTHRITIS RISK

Illustration by Maddy Howell

BY ELIZABETH CHEN

If you hear the word ‘osteoarthritis’ and assume it only affects the elderly, you would be wrong. Osteoarthritis is a debilitating condition that can cause irreversible damage to many structures within the knee joint, including cartilage important for cushioning and joint movement. Osteoarthritis is one of the top five leading causes of disability worldwide.1 Many people develop osteoarthritis following an injury to

Figure 1. Capturing gait biomechanical data with a motion capture system (VICON). Image courtesy of Dr. Brian Pietrosimone.

their knee. If you sustain an injury to the ligaments in the knee—for example, the anterior cruciate ligament—you have a 1 in 3 chance of developing osteoarthritis within ten years of the injury.2 Because the majority of people who suffer traumatic knee injuries are between the ages of 15 and 24, the development of osteoarthritis after an injury, called post-traumatic osteoarthritis, often occurs in younger people.3 This results in more years of life with a disability.2 To make matters worse, there is no cure for knee osteoarthritis, and experts do not fully understand the best practices for preventing it following a knee injury. These are some of the issues that Dr. Brian Pietrosimone is trying to tackle with his research at the Sports Medicine Research Laboratory in the Exercise and Sport Science Department. Currently, researchers approach osteoarthritis from either a biomechanical or a biological standpoint. Biomechanists traditionally study what forces are put upon the body during movement. Biologists, however, tend to focus on inflammation and factors that change the metabolism for joint



tissues following injury. Dr. Pietrosimone notes that the two sides develop parallel and often independent hypotheses about what causes osteoarthritis, but neither fully explains how knee osteoarthritis develops following injury. Dr. Pietrosimone’s research looks at the intersection of the two: how do biomechanics change the biology of the tissues?1

One example of this interplay focuses on how people walk, or their gait, after an anterior cruciate ligament (ACL) reconstruction. Studying gait, in addition to biological processes, is essential to understanding what causes osteoarthritis. Dr. Pietrosimone has found that, following injury, the slightest adjustments in walking patterns can have a profound effect on patient outcomes. Some people take 10,000 steps over the course of a day, and the knee joint has to handle forces of as much as 261% of the person’s body weight.4 Putting too much force on the knee was long thought to be the primary biomechanical cause of cartilage breakdown. Conversely, Dr. Pietrosimone has found that those who do not put enough force on their joints, known as underloading, exhibit long-lasting and harmful biological changes in their joints following early injury.1 His research discovered that people who sustain a traumatic knee injury tend to underload the knee joint because they walk more slowly as a protective mechanism. Dr. Pietrosimone collected blood samples and biomechanical data, using a 7-camera three-dimensional motion capture system, following ACL injury and then 6 months after reconstructive surgery.2 In the early stages after injury, people with slower walking speeds exhibited higher concentrations of cartilage-breakdown biomarkers.2 Linking walking speed to cartilage breakdown may help develop clinical methods of predicting early-onset osteoarthritis. Dr. Pietrosimone says that discovering that underloading tissues can be just as devastating as overloading in the development of osteoarthritis is one of the most interesting recent findings. “The most exciting thing is that everyone, including me, expected certain results—we’re changing the assumptions that people were making for the past decade.”

His newest research uses a new magnetic resonance imaging (MRI) technology called T1rho. This novel mode of imaging captures not only the three-dimensional shape of the cartilage but also the specific percentage of cartilage density. Healthy cartilage is dense with cartilage proteins called proteoglycans, whereas unhealthy tissue has reduced proteoglycan density. Traditional MRI often incorrectly identifies cartilage as robust and healthy, as the technique can only measure the shape of the cartilage; degenerated cartilage can present with the same thickness and shape as healthy cartilage when in actuality there is little proteoglycan density. Dr. Pietrosimone’s research using T1rho found that unhealthy cartilage composition is associated with worse patient-reported outcomes such as pain, functionality of daily living, and ability to participate in sports.

One of Dr. Pietrosimone’s main research objectives is the early detection of degenerating cartilage composition after injury, because by the time patients typically seek treatment for osteoarthritis, irreversible changes have already occurred in their tissue. When someone suffers a traumatic knee injury, they are at a much higher risk of developing a chronic disabling condition like arthritis for the rest of their life. Twenty years after an ACL injury, approximately 50% of post-injury individuals presented with osteoarthritis in their knee. Dr. Pietrosimone wants to disrupt the misconception that osteoarthritis will only affect you at age 60; if you have had a traumatic knee injury, osteoarthritis can affect you even at age 30.1

Dr. Pietrosimone’s innovative approach to studying post-traumatic osteoarthritis development, comprehensively understanding both biomechanical and biological variables, will be instrumental in future research on the disease. He predicts that his research will most likely transition into intervention and rehabilitative studies to develop guidelines that alleviate poor biomechanics and prevent damaging biological changes to cartilage. His research has created an original platform to help monitor changes in joint health early after injury, predict osteoarthritis onset, and guide preventative therapies.
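To give a rough sense of scale for the loading figures cited above, a back-of-the-envelope calculation can help. Only the 10,000 steps per day and the 261%-of-body-weight peak force come from the article; the walker’s body mass below is a made-up example value, not data from Dr. Pietrosimone’s studies.

```python
# Illustrative arithmetic only: peak knee-joint load for a
# hypothetical 70 kg walker, using the article's figures of
# up to 261% of body weight per step and 10,000 steps per day.

G = 9.81                      # gravitational acceleration, m/s^2
body_mass_kg = 70             # hypothetical walker (assumption)
peak_load_factor = 2.61       # 261% of body weight (from article)

peak_force_n = body_mass_kg * G * peak_load_factor
print(f"peak force per step: {peak_force_n:.0f} N")   # ~1792 N

steps_per_day = 10_000
# Summing the peak across every step overstates the true cumulative
# impulse, but it conveys the repetitive-loading scale the joint faces:
daily_total_kn = steps_per_day * peak_force_n / 1000
print(f"daily peak-load total: {daily_total_kn:.0f} kN")
```

The point of the calculation is simply that the knee absorbs forces around a quarter of a ton-force thousands of times a day, which is why both chronic overloading and protective underloading can alter joint biology.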

Figure 2. Compositional MRI (T1rho) of knee joint with unhealthy cartilage density. Image courtesy of Dr. Brian Pietrosimone.

References

1. Interview with Brian Pietrosimone, Ph.D. 02/10/18.
2. Pietrosimone, B.G; Loeser, R.F; Blackburn, J.T; et al. J Orthop Res. 2017, 35(10), 2288-2297.
3. Pietrosimone, B.G; Blackburn, J.T; Harkey, M.S; et al. Arthritis Care Res (Hoboken). 2016, 68(6), 793-800.
4. Pietrosimone, B.G; Blackburn, J.T; Harkey, M.S; et al. Am J Sports Med. 2016, 44(2), 425-432.
5. Pietrosimone, B.G; Blackburn, J.T; Golightly, Y.M; et al. J Athl Train. 2016, 51(2).



Images courtesy of Creative Commons.

health and medicine

Radiation Therapy on a Microscopic Scale

BY APRIL FERIDO

New technological innovations in radiation therapy can make a promising impact in medicine. Dr. Sha Chang, Professor and Director of Medical Physics Research at UNC-Chapel Hill, has been collaborating with Dr. Otto Zhou, Professor of Physics and Astronomy at UNC, to build an alternative compact system for existing radiation therapy treatments. This nanotechnology application is called microbeam radiation therapy, a novel system that aims to drastically improve the treatment’s therapeutic ratio—the ratio of cancer control to normal tissue damage caused by the treatment. Her research integrates biology, physics, and medicine to create a leading, low-toxicity, and cost-effective method for treating cancer patients.

Radiation therapy uses ionizing radiation, such as high-energy x-rays, which can not only kill cancer cells but also harm the body. Radiation therapy aims to shrink tumors by delivering radiation through a treatment machine. The radiation can either damage DNA directly or create free radicals nearby that eventually break DNA strands. The body’s natural response is to inhibit the growth of these affected cells, but radiation indiscriminately targets normal, non-cancerous structures in the same way.

Figure 1. Patient being prepared for radiation therapy. Image courtesy of National Cancer Institute (CC-SA-4.0).

Animals are often used as test subjects for radiation therapy. In one study, Dr. Chang and her collaborators demonstrated the feasibility of the nanotechnology-based microbeam technology on a mouse’s brain. “There are two microbeam radiation lines going through, as shown by the DNA double strand break repair biomarker,” said Dr. Chang.1 The microbeam radiation technology precisely targeted the tumor inside the mouse’s brain. By contrast, the macroscopically wide beam used in conventional radiation therapy carries a higher risk of damaging the non-cancerous structures next to the tumor. The microbeam has the potential to reduce treatment complications due to normal tissue damage without compromising tumor control. The controlled process results in a higher therapeutic ratio. Dr. Chang adds, “Currently, we like to see high radiation doses covering a tumor in a way that is similar to a glove fitting your hand.”1

Similar issues can be found in surgical procedures and chemotherapy treatments, which damage surrounding areas other than the target, known as critical structures. “When the toxic drugs are injected, they are going to go everywhere in the body. In the kidney, liver, and the heart, so the toxicity determines how much chemotherapy the patient needs,” remarks Dr. Chang.1 A new way to deliver these toxic anti-cancer drugs is to conceal the drug inside a liposome, an artificially constructed vesicle with a phospholipid membrane. Liposomes function as vehicles for distributing pharmaceutical drugs in the body by unraveling their membranes. Usually, liposomes circulate throughout the body without damaging normal cells. However, when a liposome reaches a solid tumor, it can unravel and release its toxins into the surroundings. This method is not perfect, though, because the liposome drug packages are larger than the toxic drug particles and thus have trouble entering solid tumors. In collaboration with Dr. Bill Zamboni of Pharmacology, Dr. Chang found that microbeam radiation may improve tumor uptake of anti-cancer drugs delivered through liposomes.

Figure 2. Schema of a liposome showing a phospholipid bilayer surrounding an aqueous interior and excluding an aqueous exterior environment. Image courtesy of Creative Commons.

It is clear that imaging plays a crucial role in identifying tumors in radiation therapy. For the past twenty years, technological innovations in both imaging and radiation treatment have been geared towards geometric and dosimetric accuracy, two goals that ensure correct radiation dosage and coverage. One of Dr. Chang’s research projects involves formulating three-dimensional versions of these cancerous structures by improving the geometry of the images. These three-dimensional renderings of tumors are built using CT, MRI, and other imaging techniques. CT, short for computerized tomography, compiles several 2D cross sections to create a 3D image using ionizing radiation. Another imaging technique, MRI, uses radio waves instead of ionizing radiation to produce images on a computer monitor. Dr. Chang’s research considers how existing imaging technologies can be improved so that tumors are identified more quickly and treated more precisely.

Dr. Chang’s research centers on how radiation therapy can be improved in terms of effectiveness, toxicity, and cost. By integrating these three factors, Dr. Chang aims to increase accessibility to health services in radiation therapy. Dr. Chang and her collaborators are also trying to understand the working mechanism of microbeam radiation, which is still largely unknown. It is an attempt to create a more effective and less toxic method of radiation therapy without sacrificing quality of care. With all of these technological innovations racing against high rates of cancer diagnoses, Dr. Chang remarks that “we need to broaden our horizons and really ask the question and explore: how can we make radiation more effective in cancer control?”1

References

1. Interview with Sha Chang, Ph.D. 02/09/2018.
2. Grotzer, M.A; Schultke, E; Brauer-Krisch, E; Laissue, J.A. Physica Medica 2015, 31, 564–567.



health and medicine

DRUGS, GENETICS, AND POLICY: Using Molecular Epidemiology to Fight Malaria

Figure 1. A common Anopheles mosquito, through which malaria-carrying Plasmodium parasites infect their host. Image courtesy of James Gathany/CDC.

If you’re like most people, you’ll avoid staying outside for too long on warm summer nights, knowing that within minutes, dozens of mosquitos will have feasted on your blood. The arrival of warmer weather inevitably brings with it those aggravating mini-vampires. While mosquitos may be no more than pesky summertime insects for many of us in the United States, they pose a much larger threat to people living in tropical and equatorial regions through the transmission of malaria. With nearly half of the world’s population at risk of being infected, it is not hard to understand malaria’s reputation as one of the world’s oldest and most deadly diseases. According to the most recent World Health Organization (WHO) report, from 2017, the worldwide incidence of malaria currently stands at 216 million cases per year, along with 445,000 fatalities.1 At the same time, the disease is highly region-specific, with Africa and Asia carrying almost 97 percent of the world’s malaria burden.2 Despite such numbers, one essential fact about the disease is often forgotten: malaria is preventable and treatable.3 The tools necessary to eradicate malaria within a generation are theoretically available. However, doing so requires studying the disease’s clearly discernible components in addition to its inconspicuous inner workings.

Dr. Jessica Lin, an assistant professor in the UNC-Chapel Hill School of Medicine’s Division of Infectious Diseases, is a co-founder and core investigator of UNC’s Infectious Diseases, Epidemiology, and Ecology Lab (IDEEL). She conducts molecular and genetic epidemiology research on malaria transmission and drug resistance in Southeast Asia, collaborating continuously with investigators in countries like Cambodia, Thailand, and Indonesia to document malaria development and its trends in those regions. Although researchers like her have been making steady progress toward the eradication of the disease over the past few decades, there is still a long way to go before the world is free from its devastating costs on human life.

With malaria at such a high incidence across different geographic regions and subpopulations, researchers face the challenging task of tackling the infectious disease from different angles, whether through the development of medical interventions and immunization techniques, more adequate drug policy, or even simple preventative public health measures, like mosquito nets and insecticides. Controlling malaria successfully at the medical level largely depends on the use of anti-malarial treatments and effective drug policy. For more than a decade, the WHO has continuously sponsored the use of artemisinin-based combination therapies (ACTs) as the mainstay treatment for mosquito-transmitted malaria caused by the parasite Plasmodium falciparum. Artemisinin is a derivative of the sweet wormwood plant, Artemisia annua, used in ancient Chinese herbal medicine as a therapy for malarial fevers. Recently, the compound has been found to be effective against certain malarial parasites.4 The idea of ACTs is to combine clinically derived artemisinin-based compounds with other anti-malarial drugs to create effective, fast-acting treatments for malaria. Artemisinin-based compounds have been combined with partner drugs, such as mefloquine, piperaquine, amodiaquine, and others, to create ACTs that have led the pharmacological fight against malaria.5 As with any anti-parasitic drug, one of the main challenges with ACTs is finding new drug combinations


that are simultaneously effective and uncompromised by drug resistance. As Dr. Lin further explained, “one of the interesting cases now is that the parasites that develop resistance to piperaquine ACTs become more sensitive to mefloquine and other partner drugs, so there’s a sort of back-and-forth relationship with ACT resistance as they are being used in therapy.”6 Maintaining the efficacy of ACTs for the future requires researchers to constantly monitor the success of anti-malarial drugs in different combinations with populations in different regions.

At the same time, the expansion and development of ACTs remains a top priority. Researchers have recently created the first triple-combination ACT, in which three drugs—artemisinin, piperaquine, and mefloquine—were combined rather than the conventional two-part ACT. Researchers may be able to link certain genetic patterns in patients with the success or failure of this new type of ACT, potentially giving them the power to target certain subpopulations. “Our leading question [about new ACTs] is: are there parasites resistant to even triple combination drugs, and what are their genetic markers? If there are parasites resistant to [triple combination drugs] and you have their corresponding genetic markers, then you can look for those markers and know where to deploy these drugs and where you need to be looking at alternatives.”6 In essence, understanding the success or failure of specific drugs and their combinations in response to parasites with known genetic markers can help indicate regions in which a specified drug would be more or less effective, which would ultimately increase drug intervention efficiency and overall efficacy. In one recent example, the failure of a piperaquine-combined ACT, like dihydroartemisinin-piperaquine, has spurred the Cambodia Ministry of Health to switch to mefloquine-combined ACTs in western Cambodia, but is also prompting the search for new drug combinations. Such developments are all happening in high-risk areas, with work from researchers like Dr. Lin.

While drug resistance research only consists of a fraction of the work that Dr. Lin is involved in, understanding and combating the entirety of the malaria crisis requires research that reveals the often unseen and unpublicized components of disease transmission. The discussion of malaria eradication should not only be confined to anti-malarial treatment. Dr. Lin is currently starting a research project in Africa which will study the risk of malaria transmission from populations with “silent” forms of malaria infection. These populations are infected with strains of malaria but have no tangible symptoms. “We’re about to start an infectious reservoir study in Tanzania. One of the big things is that there is a much larger asymptomatic reservoir of people who don’t even know that they’re infected, so they’re never going to seek treatment. So, one of the questions is how much of that reservoir is actually contributing to malaria transmission?”6 Dr. Lin’s study will involve recruiting a large number of asymptomatically infected people to see how well they infect mosquitos and to analyze other determinants that affect this component of malaria transmission. Since every person who is infected with malaria may not be equally infectious toward other people, understanding the molecular and genetic components of the asymptomatic transmission process may allow researchers to determine what kinds of asymptomatic people actually contribute to malaria transmission and to find ways to target those populations. By simultaneously combating malaria through anti-malarial drug research and studying the underlying components of asymptomatic malaria transmission, Dr. Lin’s research is slowly making the goal of worldwide malaria eradication a reality.

Figure 2. Worldwide incidence map of malaria, showing especially high rates of incidence in Africa and Southeast Asia. Image courtesy of Cibulskis et al. [CC-BY-SA 4.0].

Figure 3. Plasmodium falciparum, the deadliest species of malaria parasite, in the blood of an infected host. Image courtesy of Osaro Erhabor [CC-BY-SA 3.0].

References

1. World Health Organization. Regional and Global Trends in Malaria Cases and Deaths. In World Malaria Report 2017; World Health Organization: Geneva, 2017.
2. Malaria Fact Sheet. http://www.who.int/mediacentre/factsheets/fs094/en/ (accessed March 1st, 2018).
3. Malaria Treatment. http://www.who.int/malaria/areas/treatment/overview/en/ (accessed March 1st, 2018).
4. Drug Record: Artemisinin Derivatives. https://livertox.nih.gov/ArtemisininDerivatives.htm (accessed March 1st, 2018).
5. Artemisinin-based Combination Therapy. http://www.malariaconsortium.org/pages/112.htm (accessed March 1st, 2018).
6. Interview with Jessica T. Lin, MD. 02/26/18.



health and medicine

MAKING IT COUNT: NOVEL METHODS TO REDUCE HIV TRANSMISSION

BY SIDHARTH SIRDESHMUKH

Image courtesy of Flickr Creative Commons

Contemporary methods for combating HIV transmission have shifted from a strictly drug-driven approach towards outreach and social support due to insufficient resources. Southeast Asian municipalities, which are densely populated and where drug use is rampant, are often used as model environments for understanding and developing novel methods for reducing HIV infection rates. The CDC estimated that about 1.2 million Americans were living with HIV in 2016.1 Globally, UNAIDS reported 36.7 million people were carrying the infection in 2016.2 Left untreated, an initial HIV infection takes 5-10 years to progress to AIDS, which presents a monumental challenge for healthcare systems.3 Highly active antiretroviral therapy (HAART) drugs are the most effective current forms of treatment for HIV infection. They also help prevent new cases when used in treatment-as-prevention (TasP) and pre-exposure prophylaxis (PrEP) medication regimens. Even as new pharmaceutical interventions continue to develop, problems with distribution of and adherence to treatments in resource-constrained regions pose a significant challenge. In regions such as Southern Africa and South Asia, health authorities and global HIV-combating organizations are simply not able to provide drugs to everyone.⁵ Strategies to leverage existing resources in low- and middle-income countries (LMIC) in order to implement these HIV prevention breakthroughs have not been well addressed. This is a problem that Dr. Kumi Smith, a postdoctoral fellow at the Institute for Global Health and Infectious Diseases at UNC-Chapel Hill, has been studying since 2009.⁶ Her work in this field began in rural China, where she worked to identify causal relationships between the distance patients travel to meet with their doctors and resultant HIV suppression, and to identify other limitations of the Chinese HIV-prevention infrastructure.⁷ Her recent research focuses on the problem of identifying which patients should be selected to receive TasP and PrEP interventions in countries where resources are limited. Her work was conducted in Thai Nguyen, Vietnam, an ideal location for studying this issue given its dense population, widespread injection drug use, and lack of medical resources for those infected with HIV. Dr. Smith’s latest paper, for the Journal of Acquired Immune Deficiency Syndromes (JAIDS), was conducted with a population of patients in Northern Vietnam. An HIV baseline study conducted in that locale reported that 34% of people who inject drugs (PWIDs) were infected. Her analysis synthesized information about HIV infection and population contact patterns in this community – together with what is known about how easily HIV is transmitted through unsafe injection – to come up with predictions of the expected number of new cases of HIV in the future. This valuable information can educate prevention services personnel about which PWIDs should be prioritized for treatment.


Figure 1. Frequency of equipment sharing among PWIDs over a 3-month period. Image courtesy of Dr. Kumi Smith.

Figure 2. Combined drug-sharing and HIV-incidence data as an indication of the number of new cases expected to arise. Image courtesy of Dr. Kumi Smith.


Figure 3. Comparison between new infections expected by age (solid line) and contact within age groups (dotted line). Image courtesy of Dr. Kumi Smith.

Dr. Smith’s research not only identified the type of PWID at risk of future HIV infection, but also identified the age groups from which the most new infections originated. The researchers involved in the study at the UNC Project-Vietnam site recruited a cohort of 1,674 PWIDs with HIV, whom they interviewed about their drug-sharing partners over the previous 3 months.⁹ This method, known as egocentric sampling, focuses on interviewing subjects to establish a larger group of “alters” who may have been in contact with a particular transmitter of HIV. It also allows researchers to explore interactions beyond the participants in the study, which adds to the scope and accuracy of the data networks developed this way.10 Additionally, it is less expensive than the larger big-data approaches that may be used in wealthier countries such as the U.S. However, Dr. Smith recognizes that asking subjects to identify their partners from memory is tricky and that measurement error must be accounted for when using egocentric sampling. “One drawback is that we’re relying on people’s recall, which isn’t perfect. For instance, we know that older people are usually able to more accurately guess the ages of people younger than them, but the reverse is not always true.”

These data were used to establish mixing matrices to see which combinations of HIV prevalence and equipment sharing would yield the highest rates of infection. The UNC researchers found that sharing is assortative, meaning that most drug-equipment sharing occurs between members of the population who are the same age. Dr. Smith was able to develop a series of matrices and graphs to visualize sharing among injection drug users, which has not been done before.⁹ This is apparent in Figure 1, which maps the frequency of equipment sharing among PWIDs over a 3-month period; the darkened cells in the center of the figure, in a diagonal pattern, represent the highest rates of sharing. Figure 2 combines data on drug sharing and HIV incidence to indicate the number of new cases expected to arise in the next time period as a result of equipment sharing between particular age groups. Figure 3 makes a comparison between new

infections expected by age (solid line) and what happens with contact within the age groups (dotted line). In other words, the solid line represents what Dr. Smith would have known from reviewing the baseline study alone. This solid line tells UNC Project-Vietnam teams in Thai Nguyen to prioritize PrEP administration for PWIDs between ages 30-34. The dotted line indicates that PWIDs between ages 25-34 should be given TasP before other members of the population, to minimize HIV transmission via viral suppression.⁸ Armed with these findings, prevention teams are able to ensure that limited resources are administered to certain age groups first, maximizing the prevention of HIV transmission. Furthermore, by establishing matrices using the frequency of equipment sharing among PWIDs in the Thai Nguyen sample population and the age groups where drug sharing takes place, Dr. Smith was able to provide an accurate visual model for future outbreak patterns. “Through the creative use of matrices, which people haven’t really done with this kind of population, we were able to infer something that can be really helpful at a program level.”⁹

“I’m very interested in finding a way to bring this hi-tech stuff to resource-constrained settings, in a reasonable and sustainable way, to make the biggest impact.”⁹ Dr. Smith hopes to see teams working in Vietnam and similar areas of the world use the conclusions from her study to administer PrEP and TasP efficiently and effectively. Even in the event of a potential national outbreak of HIV in the US, Dr. Smith is optimistic about the application of her research in combating rampant transmission.⁹ “There’s no reason why this kind of tool can’t be used in other populations. We know that mixing is happening; this helps to map matrices that can tell us who the subgroups are [for targeting intervention].” Dr. Smith’s research in Vietnam exemplifies her broader interests in influencing public policy and in finding ways to positively impact health outcomes on national and global scales.⁹

References

1. HIV and AIDS in the United States of America (USA). www.avert.org/professionals/hiv-around-world/western-central-europe-north-america/usa.
2. Global Statistics. www.hiv.gov/hiv-basics/overview/data-and-trends/global-statistics (accessed February 6, 2018).
3. Act Against AIDS. www.cdc.gov/actagainstaids/basics/whatishiv.html (accessed February 10, 2018).
4. History of HIV and AIDS Overview. www.avert.org/professionals/history-hiv-aids/overview (accessed December 5, 2017).
5. Overview. www.unicef.org/esaro/5482_HIV_AIDS.html (accessed February 6, 2018).
6. Researchers Identify Patterns of HIV Risk Among People Who Inject Drugs in Vietnam. (accessed February 6, 2018).
7. Smith, M.K.; Miller, W.C.; Liu, H.; Ning, C.; He, W.; Cohen, M.S.; et al. PLoS ONE. 2017, 12(5).
8. Smith, M.K.; Graham, M.; Latkin, C.A.; Go, V.L. J Acquir Immune Defic Syndr. 2018.
9. Interview with Kumi Smith, Ph.D. 02/13/18.
10. McCranie, A. Unit 6: Egocentric Research. ICPSR, University of Michigan, Ann Arbor, 2015.



ecology


APPALACHIAN ANOMALY
Understanding Recent Uplift in the Appalachian Mountains
By Gracie Pearsall

Image courtesy of Wikimedia Commons.

At some point in our lives, we have all stepped outside and wondered why the earth looks the way it does. Why does the beach look different from the Piedmont? Is Chapel Hill indeed a hill? According to Dr. Kevin Stewart, an associate professor in UNC-Chapel Hill’s Department of Geological Sciences, the answers lie in the rocks. Through mineral composition, texture, and features such as cracks or layers, rocks record evidence of nearly every process they have experienced. Geologists are like detectives, piecing together ancient events and environments from evidence in the rock record. One puzzling mystery in contemporary geology is the uniquely high topography of the Southern Appalachian Mountains. Dr. Stewart and his team of graduate students have a compelling new explanation for this unusually high elevation: a recent uplift of the crust underneath the Southern Appalachians.1

The Appalachian Mountains formed around 300 million years ago (Ma) when Africa collided with North America, forming the supercontinent Pangea. Upon formation, the Appalachians were the size of the present-day Himalayas, and they have since eroded toward their current size. Nevertheless, the elevation in some portions of the Southern Appalachians is still much higher than expected, despite the fact that the Appalachians have been eroding for 300 Ma.2

Dr. Kevin Stewart

Figure 1. Map of the Appalachian Mountains region showing the outcrop belts of the Sedimentary Appalachians and the Crystalline Appalachians. Image courtesy of USGS [Public Domain].

The band of unexpectedly high elevation cuts northwest across the Appalachians (Figure 1). From south to north, the high topography starts in the metamorphic rocks of the Blue Ridge Province of North Carolina, continues into the folded sedimentary rocks of the Valley and Ridge Province, and ends in the soft sedimentary rocks of the Appalachian Plateau Province in West Virginia. The conventional wisdom is that these regions stand high because they are made of hard rock, which is difficult to erode. Yet Dr. Stewart realized that this notion completely fails in the Appalachian Plateau, which is composed of much softer sedimentary rock.1 The shortcomings of the rock-composition explanation inspired Dr. Stewart to head into the field in search of a better one.

While in the field, Dr. Stewart and his team found deposits of coarse-grained sediments in the Piedmont and Coastal Plain. The age and texture of the sediment indicate that around 15 Ma, rivers began to flow faster and the Appalachians shed a large amount of sediment. The team also found young fault zones, which are areas where huge blocks

of the crust have cracked and moved relative to each other, and are associated with large-scale tectonic stress.4 Both the coarse sediments and the young fault zones can be explained by uplift: elevation of the crust driven by tectonic forces. The presence of these features led Dr. Stewart to believe that, contrary to prevailing wisdom, the Southern Appalachians were uplifted around 15 Ma after being quiet for millions of years. The evidence for uplift comes mainly from these hallmark surface expressions. The rivers began to flow faster and deposited huge amounts of sediment because uplift steepened their beds, and the fault zones are cracks in the crust that formed to accommodate the rise in elevation.

But what caused this uplift? The answer, according to Dr. Stewart’s research, lies deep below the surface of the Earth. The Earth is stratified into three main layers: the outermost crust, the mantle, and the core. The crust is rigid, cold, and home to our continents and ocean basins. The mantle, on the other hand, is less rigid; because of the high pressures and temperatures within the Earth, the mantle can flow very slowly. It is in the mantle, 200 kilometers below the base of the crust under the Appalachians, that Dr. Stewart and his graduate student, Jesse Hill, found an explanation. Based on seismic data, it appears that a dense block of rock was once attached to the bottom of the crust under the Appalachians, holding the crust down like a weight. The data show that this dense package of rock detached from the bottom of the crust and sank into the mantle, and hot, buoyant rock flowed in to take its place. This flow caused the Southern Appalachians to pop up.

Research Assistant Professor Berk Biryol, a colleague of Dr. Stewart’s in the Department of Geological Sciences, has used seismic imaging techniques to obtain an image, or model, of the mantle and to search for evidence of the dense package of rock. These images are created by analyzing the velocity of seismic waves as they move through the mantle. Variations in color represent velocity anomalies, which can be attributed to changes in the structure, composition, or density of the rock: warm colors represent hot, buoyant areas, and cool colors represent cold, dense areas. Biryol’s seismic images (Figures 2-4) reveal a dense blob of rock (outlined in black) that appears to have once been attached to the bottom of the crust under the Appalachians.

“Understanding Earth’s history and how it works is fundamental to any question that also pertains to the environmental nature of this planet.”

Figure 2. Seismic image of the mantle 200 km below the base of the Appalachians. Warm colors are hot, buoyant areas; cool colors are cold, dense areas. Image courtesy of Biryol et al. (2016).

Figure 3. Seismic image of the mantle overlaid with a topographic map of the Appalachians; the band of high-relief areas is outlined in black. Image courtesy of Biryol et al. (2016).

The high topography of the Appalachians almost exactly matches this dense blob of rock below the crust (Figure 3), shown in blue and green and outlined in black. Dr. Stewart and his team think that the blob in the image is the same one that dropped off the crust and triggered uplift around 15 Ma (Figure 4).

Currently, Dr. Stewart and colleagues in the Department of Geological Sciences are working toward a more accurate and complete seismic picture of the mantle under the Appalachians. They are also going into the field to look for more manifestations of uplift; common field methods include geologic mapping, collecting rock samples, analyzing the chemistry of the samples, and radiometric dating of rocks. In particular, the team is looking for more active fault zones that could have accommodated recent uplift, and for ancient river deposits showing that rivers in the Blue Ridge changed course relatively recently because of the uplift. Dr. Stewart believes this uplift may still be happening, which would mean the Southern Appalachians are tectonically unstable and could potentially experience earthquakes.

When asked why people should care about geologic research and Earth science, Dr. Stewart said, “We all live on planet Earth and, for the foreseeable future, we are all going to continue to live on planet Earth. Understanding Earth’s history and how it works is fundamental to any question that also pertains to the environmental nature of this planet.”1

Figure 4. Seismic image overlaid with an outline of the high-topography regions of the Southern Appalachians, which correspond to the dense blob. Image courtesy of Biryol et al. (2016).

References

1. Interview with Kevin G. Stewart, Ph.D. 02/19/18.
2. Stewart, K.G.; Roberson, M. Exploring the Geology of the Carolinas: A Field Guide to Favorite Places from Chimney Rock to Charleston. University of North Carolina Press: Chapel Hill, 2007.
3. Stewart, K.G.; Hill, J.S. A newly discovered fault zone near Boone, North Carolina and Cenozoic topographic rejuvenation of the Southern Appalachian Mountains. Geological Society of America Abstracts with Programs. 2016, 48(7).
4. Stewart, K.G.; Hill, J.S. Post-orogenic uplift of the Southern Appalachians. Geological Society of America Abstracts with Programs. 2016, 48(7).




Illustration by Laura Wiser

Oysters: The Popular Appetizer Capable of Reducing Global Warming
By Carly Dinga

When perusing the menu at a nice restaurant, the last thing on a person’s mind is the ecological impact of the food they are about to order. Yet in addition to being a popular appetizer, oysters serve unique functions such as improving water quality and removing carbon dioxide from the oceans. Given the many benefits attributed to oyster reefs, it remains a mystery why 90-99% of North Carolina’s reefs have been lost with so little conservation effort (Figure 1).1 To combat this, Dr. Joel Fodrie has centered his research on the dynamics of reef conservation, and he is optimistic that natural processes may hold the key to sustainable oyster reef development.

Oysters get far more attention when served at a restaurant, but the truth is that they perform valuable services under the ocean. Recent research indicates that many of the ecosystem services performed by oyster reefs have high monetary value. Part of the environmental concern for the welfare of reefs comes from their ability to remove nitrogen and phosphorus from the ocean. It is widely understood that excessive amounts of nitrogen and phosphorus can spark algal blooms that starve the surrounding ecosystems of dissolved oxygen, creating “dead zones” incapable of sustaining life. Oyster reefs help mitigate this issue by functioning much like wastewater treatment plants, only they operate free of charge.1 Taking advantage of oysters’ natural ability to purify water, scientists have begun researching how oyster reefs might be used in place of wastewater treatment plants. In addition to reducing water pollution, oyster reefs act as valuable carbon sinks (places for carbon dioxide to reside), effectively removing it from the atmosphere.1 As the issue of climate change attracts attention, the ability of carbon sinks to remove carbon dioxide from the air is crucial to mitigating the impact of greenhouse gases on climate change.

Dr. Joel Fodrie

Figure 1. Oyster reef off the coast of North Carolina. Image courtesy of Dr. Joel Fodrie.

Although vegetated habitats, like oyster reefs, cover a measly 0.5% of the ocean floor, they account for approximately 50% of oceanic carbon sequestration (the removal of CO2 from the atmosphere to be held in liquid or solid form).1 A recent study conducted by Dr. Fodrie and fellow scientists determined that the amount of carbon an oyster reef is capable of holding depends largely on its location.2 The information gathered in this study will help focus restoration efforts on the locations most beneficial for reef survival and ecosystem services. Figures 2 and 3 show an example of this artificial construction in an estuary, giving the oysters an ideal location to establish a reef. Artificial construction of reefs is also useful in reducing the impact of the dredging that accompanies modern fishing practices. Dredging removes much of the naturally occurring substrate that oyster larvae require to establish reefs. Without a place to settle, larvae float freely through the ocean with little chance of producing a reef.1 At this point, one must weigh the costs of removing oysters from the ocean against the benefits they can provide when left underwater.




Figure 2. Exposed test reef. This picture was taken during low tide, when the reef was exposed to air. Image courtesy of Dr. Joel Fodrie.

Given the valuable role that oysters play in marine ecosystems, Dr. Fodrie’s research has prioritized exploring ways to preserve these valuable ecosystems. Rather than investigating entirely new approaches to conservation, many of the strategies implemented by Dr. Fodrie’s team take inspiration from naturally occurring processes. Recently, Dr. Fodrie has focused his research on testing the survival rates of oysters at different tidal levels, in hopes of understanding the effects that location and depth have on a reef’s success.2 Most current conservation efforts concentrate on areas that support commercial harvest but neglect the ecological needs of the oyster population. On the North Carolina coast, some restored reefs are succeeding in the rehabilitation process while others, located less than 100 meters away, have failed miserably. The contrasting success rates highlight a gap in the understanding of reef ecology that Dr. Fodrie hopes to fill.

Taking a first step toward a better understanding, Dr. Fodrie’s team constructed 32 test reefs in four different zones surrounding Middle Marsh, NC, with the intent of testing the effects of air exposure on the reefs. The zones placed reefs at four different depths: the two shallowest left the reefs exposed to air during low tide, while those at the two deeper levels remained inundated. The team collected three samples from each reef throughout the first year to learn about patterns of oyster settlement, density, and general reef survival.2

Figure 3. A submerged reef built for the research study. Image courtesy of Dr. Joel Fodrie.


This study concluded that the depth of a reef affects its size and density, indicating that reefs at mid to low depths have the most success and can most efficiently perform ecosystem services. By the end of the study, a clear pattern had emerged of high to low oyster densities from the shallowest to the deepest reefs. Additionally, the team observed three times more gastropods (predators of oysters) at the deepest levels than in shallower water, making deep water a difficult environment for oysters. The information uncovered through this study will help inform conservationists of the best locations for rehabilitating oyster reefs.2 Currently, most conservation efforts are inefficient and misguided, often resulting in wasted resources; Dr. Fodrie’s data helps focus efforts on the locations with the most potential.

Moving forward, Dr. Fodrie hopes to continue investigating the natural patterns of oysters to further focus conservation efforts. This process will be complicated as external factors, including climate change, harvest pressure, and reduced water quality, continue to fluctuate and impact the structure of the reefs. Furthermore, the development of ecological models capable of illustrating the responses of oysters to these factors will be necessary to maximize conservation.1 Another promising avenue of research centers on mobile reefs. This solution uses a frame that provides the larvae with their required substrate, so that the entire reef can be moved to its ideal location. The larvae could begin growing in a salty, marine environment before the reef is walked farther into the estuary, where they can escape predators.1 While this research requires a lengthy time commitment, it is clear that everyone has a stake in the disappearance of oysters, and this is not an ecosystem that we can afford to lose.

Figure 4. Research team takes readings of different test reefs in Middle Marsh, NC to learn about patterns of oyster settlement. Image courtesy of Dr. Joel Fodrie.

References

1. Interview with F. Joel Fodrie, Ph.D. 02/15/18.
2. Fodrie, F.J.; Rodriguez, F.B.; Gittman, R.K.; Grabowski, J.H.; Lindquist, N.L.; Peterson, C.H.; Piehler, M.F.; Ridge, J.T. Proc. R. Soc. B. 2017, 284.



Illustration by Caroline Allen


The History of Life: Looking Through the Lens of Plant Fossils
By Aubrey Knier

Fossils are like puzzle pieces that, once put together, can unlock the history of our world. They are ancient traces that reveal to us the greatest story of all: life. Fossils have taught us about evolutionary relationships, geological change, environmental processes, and ecological functions. They have helped us uncover each disastrous and wonderful event that has shaped our world into what it is today. From fossils, we gain a better understanding of our place on this earth. However, no single fossil can do this alone; fossils of all varieties must be pieced together to tell this story. Among them, one constant has carried the tale onward: plants.

Dr. Patricia Gensel, UNC-Chapel Hill’s first and only paleobotanist, has always found something intrinsically fascinating about plants. Through studying plant fossils, she combines her interest in plants with the historical and geological components that naturally accompany fossils. She has dedicated her life’s work to researching ancient plant specimens, the interactions among them, and their relationships with one another. She has witnessed and intimately studied the captivating world of plants and all of their critical applications to the field of paleontology.

Plants are tougher than animals. There have been five mass extinction events throughout geologic history, and roughly 60% of plants survived these events, compared to only 10% of animals.1

Dr. Patricia Gensel

Plant fossils have been a key player in understanding these mass extinction events, as they tell us more of the story than animal fossils do. Dr. Gensel explains, “Animal fossils say that there’s something very bad that has happened here, but just how bad is answered by the plants.” In fact, studies tracking how quickly plants return after glaciation events (intervals within an ice age characterized by colder temperatures and glacier advances) can provide extensive knowledge about how these events reshape vegetation and the earth’s environments.1

Plants are also sensitive to environmental change. We can learn about the climatic fluctuations that occurred millions of years ago just by looking at the leaves of a plant fossil. Leaves are preserved relatively well, thanks to the cuticle that covers their exterior.1 Under this cuticle are specialized cells called stomata, whose density can be used to determine the CO2 levels present in the atmosphere during the time that the plant was living. Information about CO2 levels can in turn help predict the oxygen levels, temperature, and moisture of the ancient environment.1

Additionally, plant fossils reveal a great deal about the coevolution of plants and animals and their interactions through geologic history. Plants are a food source for many animals, and we can learn which animal species coexisted with a specific plant from the holes in its leaves or the bites on its stems. Scientists have even been able to pinpoint the first signs of animal herbivory by studying such feeding damage on plant fossil specimens. Furthermore, paleontologists have found evidence in these fossils of symbiotic (mutually beneficial) and parasitic relationships between plants and animals. This evidence can be related to the plant-animal relationships that we see around us today.

Throughout her years of research, Dr. Gensel has always had a specific interest in spores.
Spores are tiny units of asexual reproduction in plants that are adapted for dispersal





and survival.2 They begin the life cycle of a new organism. Spores are produced in enclosed structures within plants called sporangia. In nearly all land plants, sporangia produce genetically distinct haploid spores at the site of meiosis. For nearly 100 years, the study of spores and sporangia has been an extremely useful tool for understanding the timing and place of evolutionary events, the species present at certain points in history, and the dating of rocks.

Figure 1. Left: Light micrograph of a spore from a basal vascular plant, Renalia. Middle: Spores extracted from the sporangia of Leclercqia cf. complexa, an Early Devonian lycopod. Right: Spores extracted from the sporangia of Leclercqia cf. complexa, an Early Devonian lycopod. Images courtesy of Dr. Patricia Gensel.

Much of Dr. Gensel’s work focuses on connecting spores to their parent plants. We know from the living plants we observe today that each species produces a certain type of spore specific to its taxonomic unit, genus, or family. Therefore, the plants that were present in a specific location can be identified if the host plants of the uncovered fossil spores are known. One can also determine when a certain plant type arose from the spore record itself. For instance, if it is known that Y-shaped spores come from vascular plants, one could look at how far back Y-shaped spores appear in the record to infer when vascular plants came to be.1 At higher levels, spores are extremely useful in determining when particular events occurred in history. Because spores are so small, they are far more likely to be preserved than the larger structures needed to interpret a mega-fossil. The abundance and successful preservation of spores alone have therefore elevated our understanding of plant fossils in particular.

Particular preservation modes have been essential to the conservation of plant spores throughout history. For instance, coal is formed from plant matter that has been compressed over time; scientists can use acids to digest the coal and extract the spores inside it, which allows for an analysis of stratigraphy, the order and position of rock layers. Another mode occurs when calcium-rich rivers permineralize plants (a process of fossilization in which mineral deposits form internal casts of organisms) with intact sporangia. From this permineralization, scientists can determine the specific plants that were preserved and trace in detail which plants were living at that time. This information has countless applications in determining factors such as which animals coexisted with the plants, or what type of climate they lived in.1

Plant fossils play a critical role in the scientific community’s knowledge of Earth’s history, and this history is the key to both the present and the future. The information provided by plant fossils enriches our understanding of current biological processes, as well as those we can expect in the future.

Figure 2. A Devonian fossil of Sawdonia ornata. Image courtesy of Dr. Patricia Gensel.


References

1. Interview with Patricia Gensel, Ph.D. 02/16/18.
2. Gilbert, S.F. An Overview of Plant Development. In Developmental Biology, 6th ed.; Sinauer Associates: Sunderland, MA, 2000.



Illustration by Meredith Emery

WHAT’S IN YOUR WATER?
By Janie Oberhauser

Thick, scum-laden layers of algae strangle a stagnant pond. A host of warning signs litters a deserted waterfront, while newspaper headlines expose another outbreak of foodborne illness in bold, black lettering. Such are the commercial, economic, and public health dangers that microorganisms pose in our day-to-day lives. Climate change and current demographic trends have only exacerbated the problems associated with fecal contamination (which brings dangerous viruses and bacteria with it), and it all relates back to the state of our water.1

Dr. Rachel Noble

Today, over 50 percent of the growing global population lives within 100 miles of a coastline, placing pressure on the fragile ecosystems at the water’s edge. As sea levels rise, groundwater has less room to accommodate storm water, which increases the risk of contamination by sewage and septic systems compromised by heavy rains. Nutrients from increased runoff can also spark rapid algae growth (Figure 1). Warmer temperatures spawn breeding grounds for pathogenic and toxic microorganisms, including strains of Vibrio bacteria, such as the one that causes cholera, that infect humans and animals (Figure 3). The combination of these ecological stressors can lead to a loss of biodiversity, causing food chains to crumble, and the human race is an integral part of these food chains.1

Figure 1. This “starry galaxy” is a fluorescent photograph of the viruses and bacteria that live naturally in a seawater sample. Smaller dots are viruses; larger, elongated shapes are bacteria. Image courtesy of Dr. Rachel Noble.

Research into methods to detect and contain waterborne pathogens has become increasingly important. Dr. Rachel Noble, a professor at UNC-Chapel Hill’s Institute of Marine Sciences and director of the undergraduate programs for the Institute for the Environment Morehead City Field Site, serves as one of the frontrunners in the field. One of her lab’s primary focuses lies in the creation of molecular diagnostic kits designed to revolutionize the way we perceive and react to contamination.1

“Right now, water quality control is 99 percent reactive,” Dr. Noble says. “We know what is dangerous, and we wait for a certain concentration to throw up a red flag.”1 This reactivity currently cripples our understanding of water purity in our daily lives; moreover, it is rooted in the extensive time required to test a single water sample, during which pathogen concentrations can shift extensively. According to a paper published by the Noble lab, this waiting period can stretch on as long as “18 to 96 hours, limiting protection of public health.”2 Oftentimes, by the time contamination is detected and action is taken, the risk has expired and a significant population has already been exposed. As the issue of water quality became more and more relevant to today’s society, Dr. Noble explains, a new, proactive approach became necessary.1

Figure 2. A QPCR assay plate. Image courtesy of Dr. Rachel Noble.

The Noble lab’s rapid molecular diagnostic kits may have accomplished just that. The diagnostic kits achieve the same results as traditional water testing methods in a much shorter timespan by eliminating “the need for a lengthy incubation step.”2 Instead, the delay is cut down to as little as one to two hours through the use of a procedure known as quantitative polymerase chain reaction (QPCR). QPCR works in a manner similar to the traditional PCR commonly used in molecular biology labs (Figure 2), but instead of tracking pathogen growth or metabolic activity, QPCR detects, marks, and amplifies specific viral and bacterial DNA. Not only is this method much faster, it also makes it much easier for scientists to analyze the relative quantities of pathogenic material present in a given sample.2 Results of a 2010 comparison of QPCR to current water quality testing methods revealed that although QPCR and culture-based methods produced the same findings 88% of the time, QPCR may have picked up on additional bacterial fragments left out of a traditional reading, leading researchers to question the over-reactivity of the new method. The new technique would also pose a challenge to individuals unfamiliar with molecular analysis.2 However, while more research into the automation and universal usability of the QPCR process is needed, recent developments have shown that the Noble lab’s molecular diagnostic kits may be applicable to much more than determining the cleanliness of our water.

The pathogenic hazards of the earth and sea are oftentimes one and the same. Because it is difficult to measure the risk of contaminated produce, the FDA has recently proposed that the Clean Water Act’s standards for purity also carry over to the food quality industry in the form of the Food Safety Modernization Act (FSMA).1 This legislation, and the desire to develop faster means of measuring the pathogen levels of water, led Dr. Noble and her lab to partner with a biotechnology company called BioGX, a global leader in the creation of rapid molecular testing kits. Together, they hope to construct a kit that can test for E. coli, Salmonella, and Listeria, three leading causes of foodborne illness, all within the same short time frame.3 Dr. Noble says that her lab found the transition to the food industry a smooth one, given her experience exploring the issue of shellfish and seafood contamination. “Coastal waters have sediment,” she explains, “and the methods that allow us to get rid of dirt from water gave us the experience needed to approach food safety with a deeper understanding of the pathogens involved.”1 The Noble lab is currently working with BioGX to make their QPCR kits more widely available and accessible to those involved in the food and water quality industries.3,4 So the next time you munch on a salad, spend a day at the beach, or wade in a creek after a long hike, you might have QPCR and the Noble lab to thank.

Figure 3. Algae and bacterial growth in stormwater runoff to the ocean. Image courtesy of Dr. Rachel Noble.

References

1. Interview with Rachel Noble, Ph.D. 02/08/18.
2. Noble, R.T.; Blackwood, D.A.; Griffith, J.F.; McGee, C.D.; Weisberg, S.B. AEM. 2010, 76, 7437-7443.
3. Joyner, A. Dr. Rachel Noble works to develop rapid molecular testing kits to ensure food safety. http://noble.web.unc.edu/2016/11/18/dr-rachel-noble-works-to-develop-rapid-molecular-testing-kits-to-ensure-food-safety/ (accessed February 18, 2018).
4. Vibrio pathogens and their impact on public health and shellfish harvesting. http://noble.web.unc.edu/projects/vibrio-pathogens-and-their-impact-on-public-health-and-shellfish-harvesting/ (accessed February 18, 2018).



“To confine our attention to terrestrial matters would be to limit the human spirit.” - Stephen Hawking (1942-2018)

Image by Ildar Sagdejev, [CC-BY-SA-3.0].


Spring 2018 Volume 10 | Issue 2

This publication was funded at least in part by Student Fees, which were appropriated and disbursed by the Student Government at UNC-Chapel Hill, as well as by the Carolina Parents Council.


