

Note from the Editorial Board

Dear Reader,

As scientific progress advances, it is worthwhile to take a deep breath and reflect on how far we have come in solving the increasingly complex problems that nature has thrown our way. Biotechnology has taken on new meaning and new potential. While traditional biotechnology allowed us to bake bread and breed domestic animals, modern biotechnology is making the impossible possible.

Scientists are finding more and more ways to unravel the intricacies of cancer, a disease that seemed invincible less than a decade ago. Kristen Flint '14 describes the potential of engineered viral nanoparticles in curing cancer. Derek Racine '13 speaks with Dr. Steven Fiering, professor of microbiology and immunology at Dartmouth Medical School, about cancer nanotherapy and its potential role in curing ovarian cancer.

It is incredible how scientific advances have allowed us to manipulate the delicate plumbing of the circulatory system to treat heart disease, the leading cause of death in the world. Amir Ishaq Khan '14 compares carotid endarterectomy and carotid artery stenting in treating stroke, while Julie Shabto '14 discusses the use of bioabsorbable coronary stents in place of contemporary metallic stents, which can be harmful in the long run.

Advancements in optoelectronics are restoring vision to the blind, a feat described by Andrew Zureick '13. Hannah Payne '11 explains how recent progress in optogenetics has allowed a better understanding of the brain. Elise Wilkes '12 describes methods for sourcing environmental organic pollutants, specifically polycyclic aromatic hydrocarbons.

As we better understand not only the workings of the human body but also those of our smaller fellow creatures, we unearth ever more ways to protect ourselves. Aaron Koenig '14 discusses intercellular communication in bacteria, while Diana Pechter '12 poses an interesting idea on how to prevent disease on college campuses. Yoon Kim '13 expresses gratitude for biotechnological advances in the form of poetry.

As each new scientific development solves a previous problem, new problems arise that need to be solved. Daniel Lee '13 discusses the ethical questions raised by recent advancements in in vitro fertilization. Mike Mantell '13 describes the potential hidden risk that comes with cellular phones.

This issue of the DUJS also brings you original undergraduate research: Christina Mai '12, Rebecca Rapf '12, Elise Wilkes '12, and Karla Zurita '12 report on the average selenium content in a typical can of tuna in the marketplace, and Elizabeth Parker '12 assesses the utility of microsatellites for assigning maternity in Anolis sagrei lizards.

We hope you enjoy reading this issue of the DUJS, and that it provokes you to consider the implications of science beyond the walls of the classroom and laboratory.

Sincerely,
The DUJS Editorial Board


The Dartmouth Undergraduate Journal of Science aims to increase scientific awareness within the Dartmouth community by providing an interdisciplinary forum for sharing undergraduate research and enriching scientific knowledge.

EDITORIAL BOARD
President: Jay Dalton '12
Editor-in-Chief: Shu Pang '12
Managing Editors: Daniel Lee '13, Andrew Zureick '13, Jaya Batra '13
Assistant Managing Editors: W. Kyle Heppenstall '13, Derek Racine '14
Layout and Design Editor: Diana Lim '11
Online Content Editor: Marietta Smith '12
Public Relations Officer: Victoria Yu '12
Secretary: Aravind Viswanathan '12
Event Coordinator: Jaya Batra '13

DESIGN STAFF
Shaun Akhtar '12, Clinton F. Grable '14, Diana Lim '11, Hazel Shapiro '13, Andrew Zureick '13

STAFF WRITERS
Prashasti Agrawal '13, Dylan Assael '14, Kristen Flint '14, Brenna Gibbons '12, Clinton Grable '14, Cristina Herren '12, Yoon Kim '13, Amir Khan '14, Aaron Koenig '14, Daniel Lee '13, Kellie MacPhee '14, Mike Mantell '13, Aditi Misra '14, Hannah Payne '11, Diana Pechter '12, Derek Racine '14, Archana Ramanujam '14, Gareth Roberg-Clark '14, Julie Shabto '14, Ian Stewart '14, Danny Wong '14, Andrew Zureick '13

Faculty Advisors
Alex Barnett - Mathematics, William Lotko - Engineering, Marcelo Gleiser - Physics/Astronomy, Gordon Gribble - Chemistry, Carey Heckman - Philosophy, Richard Kremer - History, Roger Sloboda - Biology, Leslie Sonder - Earth Sciences, Megan Steven - Psychology

Special Thanks
Dean of Faculty, Associate Dean of Sciences, Thayer School of Engineering, Provost's Office, R.C. Brayshaw & Company, Private Donations, The Hewlett Presidential Venture Fund, Women in Science Project

DUJS@Dartmouth.EDU
Dartmouth College, Hinman Box 6225, Hanover, NH 03755
(603) 646-9894
http://dujs.dartmouth.edu
Copyright © 2010 The Trustees of Dartmouth College



In this Issue...

4  DUJS Science News (Andrew Zureick '13 and Shu Pang '12)
6  Viral Nanoparticles: A Cure for Cancer? (Kristen Flint '14)
8  Interview with Steven Fiering, Professor of Microbiology & Immunology, Dartmouth Medical School (Derek Racine '14)
10  Stent or Patch: Analysis of Carotid Stenting and Endarterectomy in Combating Stroke Due to Carotid Stenosis (Amir Khan '14)
14  Bioabsorbable Coronary Stents (Julie Shabto '14)
17  Optoelectronics and Retinal Prosthesis: The Revival of Vision (Andrew Zureick '13)
19  Enlightenment: Optogenetic Tools for Understanding the Brain (Hannah Payne '11)
23  In Vitro Fertilization (Daniel Lee '13)
25  The Primal Conversation: Intercellular Communication in Bacteria (Aaron Koenig '14)
28  Disease Prevention on College Campuses (Diana Pechter '12)
30  Cellular Phones: With Great Technology Comes Great Risk (Mike Mantell '13)
32  A Tribute to Biotechnology: A Poem (Yoon Kim '13)
33  Isotopic and Molecular Methods for Sourcing Environmental PAHs: A Review (Elise Wilkes '12)
37  Selenium in Tuna: White versus Light and Water versus Oil Packing (Christina Mai '12, Rebecca Rapf '12, Elise Wilkes '12, and Karla Zurita '12)
41  Assessing the Utility of Microsatellites for Assigning Maternity in a Wild Population of Anolis sagrei Lizards (Elizabeth Parker '12)

Visit us online at dujs.dartmouth.edu



News

DUJS Science News

See dujs.dartmouth.edu for more information

Compiled by Andrew Zureick ‘13 & Shu Pang ‘12

GENETICS
Gene Therapy of Multiple Myeloma in Mice Shows Signs of Success

Researchers from the Dartmouth Medical School recently conducted an investigation into the effects of using genetically modified T-cells to combat multiple myeloma. Amorette Barber, Kenneth Meehan, and Charles Sentman found that this gene therapy was successful in treating myeloma in mice. Multiple myeloma is a form of cancer that affects specific white blood cells in the bone marrow. This study, conducted last year, was recently published in Nature. As part of the treatment, T-cells, a type of white blood cell, were genetically modified to incorporate the protein chNKG2D. This protein is the binding site for a ligand expressed by human myeloma tumor cells, which means that the T-cells can target the myeloma cells specifically. In addition, upon contact, the T-cells containing chNKG2D can produce particular cytokines that kill human myeloma cells. This model worked in mice: the injected T-cells found and targeted the tumors present in the rodents, to great effect. Compared to a single dose of regular, wild-type T-cells, a single dose of the modified version ensured longer survival for half the sample tested, and a double dose led to tumor-free survival in all mice.

Image retrieved from http://upload.wikimedia.org/wikipedia/commons/2/24/Red_White_Blood_cells.jpg (Accessed 31 Jan 2011).

T-cells (right) were genetically modified to incorporate the protein chNKG2D.

In addition, the mice developed a protective memory response to the antigens produced by these specific myeloma cells, meaning their immune systems recognize this particular type of myeloma and a relapse is therefore unlikely. It was also found that the introduction of these modified T-cells into the bodies of the mice did not require lymphodepletion, a process in which the number of T-cells and lymphocytes present in the body is reduced. This process normally accompanies T-cell infusions because it prolongs the life of the new T-cells. However, the chNKG2D T-cells do not live very long, which potentially reduces the need for lymphodepletion. This short lifespan also suggests that the T-cells induce anti-tumor immune responses in the host system, essentially "teaching" the body how to respond to this specific form of myeloma.

TECHNOLOGY
Lippman Offers Prime Explanation of Spam

Richard P. Lippman, a researcher at MIT's Lincoln Laboratory, recently spoke at the Thayer School of Engineering. He addressed the issues surrounding the use of intellectually evolving machines to protect computers from spam, a significant online threat. Lippman noted that there is great potential for using such machines in a security capacity, as they can "automate decisions" and "adapt to frequent changes" in spammers' attacks. However, such machines, according to Lippman, are all too easily spooked. Lippman detailed the general manner by which such machines can be fooled. Spammers, or internet adversaries, can either directly manipulate the features of such machines to produce a desired outcome, or they can more insidiously reconfigure a defending machine's "training data" and open the floodgates to a torrent of at-

tacks. Because both methods can have deleterious consequences for a computer, an "arms race" between the attackers and the defenders who create and maintain these machines has naturally ensued. Lippman described a cyclical relationship between the nature of attacks and the framework of defense. According to Lippman, spam has undergone a significant transformation since its early beginnings. Starting out as text only, spam then became pictures, then pictures and text, and finally a synthesis of complicated designs, pictures, and nonsensical words difficult for a computer to recognize. Each successive class of spam worked to fool protecting machines into believing that it was harmless when it obviously was not. Taking advantage of "social engineering," a process by which the user mistakenly trusts the spam and clicks on it, spam can then compromise and infect the user's computer and possibly other systems connected to it.
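The training-data poisoning Lippman describes can be illustrated with a toy example. The Python sketch below is hypothetical and is not drawn from Lippman's talk or any real spam filter: it trains a minimal bag-of-words naive Bayes classifier on invented messages, then shows how an adversary who slips mislabeled "ham" messages into the training set dilutes the evidence against spam vocabulary.

from collections import Counter
import math

def train(messages):
    # messages: list of (text, label) pairs with label "spam" or "ham"
    counts = {"spam": Counter(), "ham": Counter()}
    totals = {"spam": 0, "ham": 0}
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def spam_score(text, counts, totals):
    # Naive Bayes log-odds that the message is spam (uniform priors,
    # add-one smoothing); a positive score means "looks like spam."
    vocab = set(counts["spam"]) | set(counts["ham"])
    score = 0.0
    for word in text.lower().split():
        p_spam = (counts["spam"][word] + 1) / (totals["spam"] + len(vocab))
        p_ham = (counts["ham"][word] + 1) / (totals["ham"] + len(vocab))
        score += math.log(p_spam / p_ham)
    return score

clean_training = [
    ("cheap pills buy now", "spam"),
    ("win money fast", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

# The adversary injects mislabeled "ham" messages that reuse spam vocabulary,
# diluting the evidence that words like "buy" and "cheap" indicate spam.
poisoned_training = clean_training + [
    ("buy agenda now team", "ham"),
    ("cheap pills lunch meeting", "ham"),
] * 5

attack_message = "buy cheap pills now"
print("clean model score:   ", round(spam_score(attack_message, *train(clean_training)), 2))
print("poisoned model score:", round(spam_score(attack_message, *train(poisoned_training)), 2))

Run on these toy inputs, the clean model gives the attack message a positive (spam-like) score, while the poisoned model scores it below zero, treating it as harmless, which is the failure mode Lippman warns about.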

NEUROSCIENCE
Ingestion of Common Chemicals Leads to Delayed Neurological Effects

Ingestion of substances such as antifreeze and brake fluid has long been known to cause immediate symptoms such as seizures and vomiting, but it has been recently shown that patients who survive poisoning often develop neurological problems later on as well. In a research paper by Nandi J. Reddy, Madhuri Sudini, and Lionel D. Lewis at the Dartmouth-Hitchcock Medical Center, ethylene glycol, diethylene glycol, and methanol were investigated for their capacity to cause delayed nervous system damage. Ethylene glycol is an ingredient in antifreeze, which often leads to death when consumed due to its



Image by Diana Lim ‘11, DUJS Staff

Structure of ethylene glycol.

high toxicity. It acts as a central nervous system depressant and has typically been associated with three distinct phases of poisoning. In the first stage, during the hours immediately after ingestion, patients often develop seizures and are at a high risk of death. In the second stage, after about twelve hours, respiratory symptoms begin. In the third stage, two to three days later, kidney failure develops. The paper reviewed past cases of ethylene glycol ingestion and discovered that there is a fourth stage of poisoning, which starts five days to several weeks after ingestion. This delayed stage is characterized by damage to the seventh cranial nerve, which is responsible for facial movement as well as taste. This damage causes impaired speech and difficulty swallowing, symptoms remarkably similar to those of Parkinson's disease. However, whereas damage from ethylene glycol poisoning is mostly localized to the face, Parkinson's disease causes damage to other areas of the body as well. Diethylene glycol and methanol poisonings share an initial stage of high mortality followed by damage to the seventh cranial nerve. Their pathologies are also similar: in the body, these compounds are broken down by enzymes, forming intermediates that cause the symptoms to develop. Regardless of their metabolism, each of the three compounds has a similar effect on the nervous system. When asked for comment, author Lionel D. Lewis stressed that it is still unclear why the seventh cranial nerve is the most affected. The region of the brain usually responsible for delayed nervous system damage is the basal ganglia, clusters of cells that control voluntary motor function.

ENVIRONMENTAL SCIENCES
Going Green: Leadership, Climate Change, Innovation

"The earth is warming and mankind is most probably responsible for what is happening," asserted Peter Darbee, the C.E.O. of Pacific Gas and Electric Corporation (PG&E Corp.), this past fall at a Jones Seminar at the Dartmouth Thayer School of Engineering. He discussed topics such as climate change, innovation, and leadership in relation to environmental sustainability. According to Darbee, excessive carbon emissions are the most detrimental of all the problems facing the environment. His goal is to have one third of California's power supply be a product of renewable energy sources by 2020. These include nuclear and solar power, two fields that have seen significant technological advances. Darbee, with regard to his company's use of nuclear power, explained that while there may be controversy surrounding its safety, the newer generation of nuclear reactors is much safer. These devices use convection currents rather than pumps to circulate coolant, making power plants more secure. Additionally, recent innovation with regard to solar energy could prove to be an "immense breakthrough," according to Darbee. PG&E Corp. is currently under contract with Solaren Corp., which has agreed to establish a satellite that will collect solar energy in outer space, convert the energy into microwaves, and beam the waves to the Earth. No longer will the process of capturing solar energy be dependent on the time of day or be interrupted by atmospheric and weather conditions. These recent innovations and their applications to increasing sustainability have earned PG&E the title of the seventh most innovative company in the United States. As a leader in the energy industry, Darbee has been "working every day to try to reduce [his company's] emissions" and encouraging other companies in the industry to follow suit with progressive ideas to improve environmental conservation.

BIOCHEMISTRY
Think Zinc

Recently, Amanda Bird, molecular genetics professor at Ohio State University, spoke at Dartmouth's Biology Cramer Seminar about her research on zinc regulation and the role of non-coding RNAs in zinc-binding protein production. She detailed the biological importance of zinc. Zinc is an essential nutrient for the body, and 7% of the human genome encodes various zinc-binding proteins, including ribosomes and RNA and DNA polymerases. In cases of zinc deficiency, humans show very serious symptoms including dermatitis, hair loss, diarrhea, growth retardation, and an impaired immune response. Too much zinc can be toxic to the body as well; symptoms include respiratory and gastrointestinal disorders, as well as impaired copper and iron uptake. Zinc homeostasis is therefore key to health. Bird's lab uses eukaryotic yeast models such as Schizosaccharomyces pombe to study zinc-responsive gene expression at the cellular level. A family of proteins of particular interest to Bird's lab is the alcohol dehydrogenases, which bind one or more atoms of zinc to function. Bird specifically studies the cytoplasmic Adh1 and mitochondrial Adh4, which bind two zinc atoms and one zinc atom, respectively, as shown by x-ray crystallography. They function in oxidizing ethanol to acetaldehyde during alcohol metabolism, and catalyze the reverse reaction during glucose fermentation. Bird found that Adh1 expression is decreased in zinc-deficient conditions, and production of certain non-coding RNAs (ncRNAs) is increased. She concluded that ncRNAs can be regulated at a transcriptional level in response to nutrient limitation, specifically zinc. Bird's research continues to shed more light on zinc, a metal essential to health.

Image retrieved from http://upload.wikimedia.org/wikipedia/commons/2/27/Zinc-sample.jpg (Accessed 31 Jan 2011).

Zinc is an essential nutrient for the body.


Biology

Viral Nanoparticles: A Cure for Cancer?
Kristen L. Flint '14

For centuries, medical researchers and doctors around the world have raced to cure cancer, and they have had some success. Their treatment methods have included surgery, radiation, chemotherapy, hormone therapy, and biological therapy (1). With these treatments, they have helped millions of people go into remission. However, the problem with today's treatment methods is their side effects. Cancer treatments leave patients fatigued, weak, and nauseous, with flu-like symptoms. Additionally, because the treatments target healthy cells in addition to tumor cells, patients also suffer from hair loss and irritated skin. Furthermore, drug efficacy is low, so patients need large amounts of the toxic treatment in order to receive any benefit from it. Fortunately, a revolutionary form of drug delivery is being developed: scientists are engineering viral nanoparticles, such as the Cowpea mosaic virus and the canine parvovirus, to help cure cancer.

How Viral Nanoparticles Work

Viral nanoparticles are emptied virus capsids that can carry drugs directly to cancer cells to kill them. Scientists have engineered viral nanoparticles from plant viruses, insect viruses, and animal viruses (2). They avoid using human viruses in order to minimize the chance of the virus interacting with human proteins and causing toxic side effects, infection, and immune response. Scientists mostly work with plant viruses because they are the easiest to produce in large quantities (2). Plant viruses are also ideal because they can self-assemble around a nanoparticle in vitro and hold approximately 10 cubic nanometers of particles (3). Therefore, many molecules of cancer drugs can fit in plant viral nanoparticles. Many researchers have worked with the Cowpea mosaic virus, a viral nanoparticle about 30 nanometers in diameter created from a plant virus.

Image retrieved from http://www.cgl.ucsf.edu/Research/virus/capsids/1ny7-5A-large.jpg (Accessed 29 Jan 2011).

Capsid image of the Cowpea mosaic virus.

Keith Saunders, a researcher at the John Innes Centre, reported the first Cowpea mosaic virus particles generated through proteolytic processing. Saunders used plant cells to create Cowpea mosaic virus nanoparticles that were empty of RNA, meaning that the particles would be unable to infect organisms. Saunders also found that creating the virus particles in plant cells posed no danger to the structure of the capsid. This would provide more opportunities to create mutations that allow for changes in the protein coating, which would in turn expand the possible uses of nanoparticles (4).

One of the major benefits of using viral nanoparticles in a drug delivery system is that molecules can easily be attached to the nanoparticles' surfaces to enable the virus particles to bind only to cancer cells, rather than the surrounding cells. Pratik Singh, a researcher at The Scripps Research Institute, studied the canine parvovirus and tumor specificity in viral nanoparticles. Singh found that the canine parvovirus targets the transferrin receptor on human cells, even though it is a canine virus (2). In humans, transferrin is released during cell growth, so tumor cells have high transferrin receptor expression. The increased expression in tumor cells attracts the canine parvovirus.



If the canine parvovirus is filled with a drug that kills cancer cells, it would become a tumor-specific drug delivery system (2). Tumor cells express increased levels of other substances too, such as integrins. If viral nanoparticles are coated in a substance that bonds with integrins, they would be as tumor-specific as the canine parvovirus (3). Once the nanoparticles attach to the tumor cells, they can release whatever drug they contain and kill only the tumor cells. Alternatively, an imaging agent attached to or encased within the nanoparticles would allow scientists and doctors to image the tumor after the nanoparticle has attached to it (5). Another way to draw the nanoparticles to the cancer cells is by attaching iron oxide to the viruses and using magnets to attract them to the tumors. Alfred Martinez-Morales attached iron oxide nanoparticles to the Cowpea mosaic virus and found that the groupings of iron oxide nanoparticles had large magnetic dipoles and increased magnetic field strength (6). These characteristics would allow the Cowpea mosaic virus to be drawn to tumors by an external magnetic device, thus facilitating imaging and targeted drug delivery.

Challenges

While viral nanoparticles are useful in their specificity, there are some problems with using them for drug delivery. As viruses are made of proteins, the human immune system will attack the viral nanoparticles, even though the viruses that scientists are experimenting with now are non-human viruses. Thus, the viral nanoparticles cannot be used repeatedly. Researchers at The Scripps Research Institute are currently looking into ways around the immune system's response by coating the viral nanoparticles in a special polymer substance to mask their protein composition (2). Another problem, which has not been as extensively researched, is the toxicity of the viral nanoparticles once they are in the body. Most human viruses have been found to be highly toxic when used as viral nanoparticles. However, when Singh investigated the toxicity of the Cowpea mosaic virus in mice, he found it to be safe and non-toxic. Non-human viruses tend

to have lower toxicity when used as nanoparticles in cancer treatment (7). However, viral nanoparticles of either type that have iron oxide attached to them have a much higher chance of being toxic to the body. Iron oxide, a substance that is not biodegradable, cannot leave the body unless the particles are extremely tiny, under five nanometers in diameter (3). More research is needed on how to break up the iron oxide particles that might be attached to a plant virus so that the particles can leave the body.

Conclusion

Viral nanoparticles could revolutionize cancer treatment, acting not only as a safer, more specific form of cancer treatment, but also as a new imaging tool. The nanoparticles could create a type of drug delivery that is extremely tumor-specific with greatly reduced side effects. The viral nanoparticles would be more soluble and have higher drug efficacy than current treatments. The ease with which molecules can be attached to the viral nanoparticles and in turn fuse the nanoparticles to cancer cells is one factor that makes the nanoparticles tumor-specific. In the future, viral nanoparticles used in this form of cancer treatment could allow cancer patients to continue to lead relatively normal lives. The patients would no longer have to suffer the humiliation of hair loss or long bouts of fatigue that prevent them from doing what they love. Rather, the viral nanoparticles would take medication straight to the tumors and kill only the cancer cells, leaving the surrounding cells healthy.

References
1. What You Need To Know About Cancer, National Cancer Institute (2006). Available at http://www.cancer.gov/cancertopics/wyntk/cancer/page9 (11 November 2010).
2. P. Singh, G. Destito, A. Schneemann, M. Manchester, J. Nanobiotechnology 4 (2006).
3. S. Franzen, S. Lommel, Nanomedicine 4, 575-588 (2009).
4. K. Saunders, F. Sainsbury, G. Lomonossoff, Virology 393, 329-337 (2009).
5. A. Aljabali, F. Sainsbury, G. Lomonossoff, D. Evans, Small 6, 818-821 (2010).
6. A. Martinez-Morales et al., Adv. Mater. 20, 4816-4820 (2008).
7. P. Singh et al., J Control Release 120, 41-50 (2007).


Interview

Steven Fiering

Professor of Microbiology and Immunology, Dartmouth Medical School
Derek Racine '13

Image retrieved from http://ocan.org/yahoo_site_admin/assets/images/ ribbon1.276204504_std.jpg (Accessed 1 Feb 2011).

Professor Fiering's work focuses on the ovarian cancer model.

The DUJS talked to Steven Fiering, associate professor in the departments of Microbiology and Immunology, and Genetics at Dartmouth Medical School, to gain insight into his research and the use of biotechnology in scientific investigation.

How have you been involved in developing mouse models for research?

One of the things we do here is that we run a transgenic mouse facility. We make genetically modified mice for people as part of their research projects. In that way I work with a lot of people around the campus and people from other institutions. And we provide them with services that are difficult to develop for a single lab. So, that gets me into a lot of interactions, which I really like.

How have you used transgenic mice in your own research?

My personal research is on developing new approaches to detect

and treat cancer. Getting back to the transgenic mice, I work somewhat on making new models of cancer. The current types of models that are most interesting are models in which there is a specific way to manipulate the genome in a particular tissue at a particular time and in that way we can generate the cancer. For example, we have been working with collaborators and we have developed a mouse that has a deletable p53 that can be deleted with the Cre enzyme. At the same time, you can also activate the transcription of a mutant K-Ras with Cre. So we activate an oncogene, the mutant K-Ras, and we delete a tumor suppressor gene. We are interested in ovarian cancer. We do it in the ovary by surgically exposing the ovary and putting in an adenovirus that expresses Cre. So, it infects the cells, expresses Cre, but doesn’t replicate and the cells have the genetic change.

How well does your model reflect real ovarian cancer?

Those are the kinds of questions we ask ourselves when we are trying to develop a mouse model. So, the thing that you want is that you want it to be pathologically as similar as possible. A pathologist looks at it and says it has these characteristics, which are consistent with this type of cancer. And as much as possible you want it to have the genetic changes that are most common in the type of cancer you are working with. A lot of models don’t have that. Some models put things like SV40T antigen which is an oncogene, but it comes from a monkey virus and basically there are no cancers in humans from SV40T antigens. So, that’s a lower value model. In ovarian cancer, 60% of cancers have the p53 deletion.

What has been the focus of your personal research here?

We mostly work on ovarian cancer, though not exclusively. The first concept is that when there is a tumor, it invariably suppresses the immune system. The reality is that most tumors are recognizable; the ones we deal with clinically are those that are immunosuppressive. The idea is that the phagocytic cells support the tumor. They probably support it in two ways. One way is that they suppress the immune response because they are immunosuppressive. The other way is that they support blood vessel growth. People working in cancer nanotherapy are going to great lengths to get their nanoparticles past the phagocytes and into the tumor. They put all kinds of coverings on them to try and get them past the phagocytes. And they don't do so well. The phagocytes are too good at finding things and eating them. So what we're thinking is that we can use this tendency to take up nanoparticles to instead manipulate the phagocytes. When you break the immunosuppression even for a brief period of time, the immunostimulatory system kicks in and you start getting an anti-tumor response. We are working with these iron nanoparticles. If you put them into a strong alternating magnetic field, with the right characteristics, they can get really hot. That magnetic field can go into the body. You sort of have a two part system for heating these particles. One is where the particles are in the body and the other is where you are putting the field. So you get another level of control as compared to just putting something into the body. That's one idea. The second idea is to try to detect the cancer. It is particularly important for ovarian cancer because ovarian cancer is 100% curable if you catch it early. You take out the ovary and you're done. But it is very rarely caught early because there are no symptoms. If you had a test, a screening test, you could somehow reveal a problem here, but we are talking about detecting microscopic tumors.


What are you detecting the iron with?

So basically he (colleague John Weaver, professor of Radiology at the Dartmouth Medical School) said that he applies a magnetic field and because the particles are magnetized they disturb the field. And he can recognize the distortion in the field and he can do it very sensitively.

What is the pathology of ovarian cancer?

Stage 1 is when it is still in the ovary. Stage 2 is when it spreads from the ovary to other adjoining reproductive tissues like the uterus. Stage 3 is when it’s disseminated around the peritoneum. Stage 4 is when it has left the peritoneum and now it’s in the lungs, in the bones, in the brain. Those are the stagings for ovarian cancer. Stage 1 has great outcomes; hardly anybody ever dies from Stage 1. For Stage 3 and Stage 4, 60% of the women are going to die of it. It’s not a good outcome and it has not been getting any better in the last four years. It’s all about whether you can catch it early.

How has the economic downturn affected research funding?

The money situation is frustrating especially because there has been so much money invested and now we’re at the point where things are moving so fast and now the money flow is starting to trickle. It is really frustrating. The things that used to take a year now you can do in two days. I go to seminars all the time. This is amazing stuff. You go to seminars and you’re like wow that’s great. Things are working. The healthcare system is getting a lot better as far as just the ability to deal with many different healthcare problems. If the world were to increase the research money by fivefold, the changes would be so rapid.

What would be your advice to someone interested in science research?

I guess my advice to somebody is if you love it, then do it. And don't worry about it. Don't try and

micromanage it. Just put your heart into it. Do the best job you can. Go to the best places that will take you. Work really hard. And just expect things to work out. You have to be able to fail. You have to be able to deal with people telling you about why this won't work and why they're not going to give you the money and that they're not going to publish your paper. Once your skin is thick, you can put up with your own lab failures and put up with the reviews you're inevitably going to get.

What do you look for in people who want to join the lab?

If people want to make progress, they have to be wide open and they also have to get along, which is actually not trivial. You have to find people that you just are comfortable with and that are on the same page as you are and that from my perspective are just interested, really interested, in doing science. They are not worried about who is going to get credit for this or that. They are not worried about all the details. They’re just saying “Let’s just get the science done.” And then they have to be of good enough spirit to work out the details later. That’s how I like to do it. If you collaborate with someone you feel is able to be your friend, then yeah we’ll work it out. Yeah we might have to compromise. Somebody has to be last author. Someone has to be first author. But as long as you communicate and your goals are basically I just want to get some science done, things can work out really well. When people are like oh no, I want to control this, or oh no, that’s mine and I want this and that, it just gets to be a headache.

Do you find that after a technique is developed, it takes a long time to get to the lab?

Less and less time. It goes faster and faster. But yeah, things always take a while. When I was a kid, I remember asking my mother, “How do living things work? Will we ever understand how living things work?” This was probably 1957 or something and it wasn’t like she knew a lot about the science. I mean the double helix was solved in ’53. But she basically said “no, no I’m sure we’ll never really understand how living things work.” And yet here we are; we have immense details. We can change any base in the mouse. We can induce oncogenes. We can turn them off; we can turn them on. They’re making 50 different combinations of some drug. It is just stunning. The tools are amazing. The tools are just amazing.

How important do you think it is to be interdisciplinary in science?

That’s the buzz word. To me a great example is this detection thing. We got involved in this cancer and nanotechnology center for excellence. We had the hard science guys and the biologists and we actually got it funded which was great. We just talked about it. And there it was.



Health

Stent or Patch?

Analysis of Carotid Stenting and Endarterectomy in Combating Stroke Due to Carotid Stenosis
Amir Khan '14

Perhaps one of the most severe medical emergencies afflicting the world today is stroke. Strokes can result in paralysis, impaired movement, memory loss, pain, behavioral change, or even death (1). Worldwide, approximately 15 million people suffer strokes annually; of these, five million become disabled and five million do not survive. In the United States, stroke is the third leading cause of death, with about 795,000 people suffering strokes annually; nearly 137,000 do not survive. Although the incidence of stroke continues to decline in the developed world, increased life expectancy keeps the rate quite high (2). While bleeding from vessels in the brain causes some strokes, about 90 percent of strokes are ischemic strokes. Ischemic strokes occur when there is severe damage to brain tissue because of a lack of blood supply to the brain. Brain cells are deprived of oxygen, leading to widespread cell death. Ischemic strokes are classified into two categories: embolic and thrombotic strokes. Embolic strokes result from the breakage of a piece of a blood clot, called an embolus, which travels up narrow brain arteries and blocks blood flow. Thrombotic strokes result from atherosclerosis, the buildup of fatty deposits called plaque, and the resultant blockage of arterial flow supplying blood to the brain (Fig. 1) (3). While strokes can occur without warning, transient ischemic attacks

Image retrieved from www.texasheart.org (Accessed 29 Jan 2011).

Fig. 1: Stenotic artery with atherosclerotic plaque.

Image retrieved from 20th U.S. edition of Gray's Anatomy of the Human Body (1918).

Fig. 2: Diagram of the carotid artery, showing the common, internal, and external carotid arteries.

(TIA) can sometimes precede the occurrence of a major stroke. TIAs are “mini-strokes” that occur because of lack of oxygen, or ischemia, in the brain but do not result in cell death. Because the ischemia is transient, neurons are able to regain function, and thus symptoms of TIAs resolve quickly. Although their consequences are short-lived, TIAs are grave forebodings of permanent ischemic damage to the brain. Indeed, people who suffer TIAs are at high risk for future stroke (3). TIAs often occur secondary to atherosclerotic narrowing, or stenosis, of the internal carotid artery. The common carotid artery sends oxygenated blood to the head and neck and branches into the external and internal carotid arteries (Fig. 2 & 3). The

external carotid artery supplies blood to the neck and face, while the internal carotid artery supplies blood to the brain. Because it is the source of blood for the brain, stenosis in the internal carotid can bear severe consequences, such as stroke or even death (4). Different amounts of stenosis carry different degrees of risk for stroke. Some stenosis may be asymptomatic, while other cases can cause symptoms such as TIA, vision loss, or loss of limb function (1, 3, 5). Importantly, the display of symptoms does not necessarily correlate with the degree of stenosis— some patients may be symptomatic with only 60% stenosis, while others remain asymptomatic despite having 80% stenosis. Nonetheless, symptomatic patients carry a higher risk for stroke than asymptomatic patients (5).

Image courtesy of Ed Uthman, MD.

Fig. 3: Diseased carotid artery.

Standard Treatment

Since its first successful use in 1953 by Michael DeBakey, carotid endarterectomy (CEA) has been the preferred treatment for carotid stenosis (6). This invasive surgical procedure consists of an incision on the neck of the anesthetized patient, after which the atherosclerotic plaque build-up in the internal carotid is removed. The surgeon then uses the technology of a bioprosthetic patch to replace the removed atherosclerotic window in the internal carotid, thus restoring smooth blood flow to the brain (Fig. 4) (7).

Image retrieved from www.anexxmed.com (Accessed 29 Jan 2011).

Fig. 4: Biosynthetic patch in endarterectomy.

However, since its first use in 1994 and approval by the FDA in 2004, an alternative, minimally invasive procedure called carotid artery stenting (CAS) has risen in use (Fig. 5). CAS involves the insertion of a catheter into the femoral artery, up through the aorta, and into the common carotid; the physician then inserts a wire via the catheter to reach the atherosclerotic internal carotid artery. Since there is a risk of dislodging pieces of plaque during CAS, physicians use a biotechnological device called the embolic protection device (Fig. 6). This device is located on the tip of the wire, upstream from the diseased area, and serves to capture the microemboli that may break off during the procedure. Next, physicians perform an angioplasty by inserting a balloon catheter, which inflates and presses the plaque against the artery walls. Finally, the physician withdraws the balloon and then inserts a well-known biotechnological tool: the bare-metal stent (Fig. 7). This mesh-like stent serves to keep the artery open. The physician may choose to perform another angioplasty to ensure maximum blood flow, but the procedure is concluded afterwards, with the protection device and catheters removed and the stent left intact for long-term blood flow (8).

Image courtesy of the National Institutes of Health.

Fig. 5: Schematic of carotid stenting.

Image retrieved from www.musc.edu (Accessed 29 Jan 2011).

Fig. 6: Embolic protection device used in stenting.

Image retrieved from www.musc.edu (Accessed 29 Jan 2011).

Fig. 7: Bare-metal carotid stent.

It is important to note that, when either stenting or endarterectomy is performed, it is done in conjunction with medical management. Medical management of carotid stenosis consists of the therapeutic use of lipid-lowering drugs such as statins, and antiplatelet drugs such as aspirin or Plavix. Statins serve to inhibit cholesterol synthesis, thus fighting the atherosclerotic build-up in the internal carotid, while antiplatelet drugs serve to inhibit the formation of blood clots by decreasing the clumping of platelets (9). Nonetheless, because endarterectomy and stenting use different biotechnologies, many physicians have their own opinions about which is better in combating carotid stenosis. As a result, studies have been conducted around the world in order to compare CEA and CAS. Therefore, it is important to analyze these studies in order to understand the benefits and risks of each procedure and to determine if one procedure is superior.

Trials Suggesting CEA Superiority

Over the past two decades, several trials have provided data that suggest endarterectomy as the safer and more efficient choice, depending on the kind of patient. The North American Symptomatic Carotid Endarterectomy Trial (NASCET), a nonrandomized study published in 1999, was a landmark trial that first proved the efficacy of CEA. NASCET studied the effect of endarterectomy in patients with high to moderate symptomatic carotid stenosis. In this 10-year study, the results suggested that endarterectomy was beneficial for patients suffering from carotid stenosis. The overall rate of perioperative (around the time of the procedure) stroke and death was 6.5%, but postoperatively, this risk of stroke or death was significantly reduced to 2.0% at 90 days. With respect to long-term benefits, the trial showed that, even


eight years after CEA, the risk of any stroke was 29.4%. The NASCET study showed that CEA is safe and effective in preventing stroke in the short term and long term for patients with moderate to severe symptomatic stenosis (10). Although NASCET proved the effectiveness of CEA, trials were needed to directly compare CEA and CAS. The Stent-Protected Angioplasty versus Carotid Endarterectomy (SPACE) trial, published in 2008, was a key trial in the debate, and suggested the superiority of CEA. The SPACE trial studied patients with severe symptomatic carotid stenosis. The results supported endarterectomy over stenting in terms of stroke or death 30 days after each procedure; however, there was no significant difference in stroke or death two years after each procedure. The SPACE trial did suggest that patient age might affect choice of procedure, with a lower periprocedural rate of stroke or death for CAS patients under the age of 68. Thus, the data suggested a higher efficiency of stenting for younger patients but higher efficiency of endarterectomy for older patients. However, the SPACE trial also revealed a significant rate of restenosis (later recurrence of stenosis) in the stenting patients compared to the endarterectomy patients. This high rate of restenosis severely compromised the efficiency of carotid stenting in the study. Thus, the SPACE trial supported the superiority of CEA over CAS for the treatment of symptomatic patients (11). The Endarterectomy Versus Angioplasty in Patients with Symptomatic Severe Carotid Stenosis (EVA-3S) trial, a randomized study published in 2006, also suggested the superiority of CEA over CAS for symptomatic patients. Interestingly, EVA-3S began as an attempt to prove the non-inferiority of stenting; however, the trial was ended prematurely when data highly supported CEA over CAS. The trial did show some disadvantages for CEA, such as cranial nerve injury and a longer hospital stay, since endarterectomy is a more invasive procedure compared to stenting. Nonetheless, regarding prevention of stroke, CEA fared far better in both the short and long term. The 30-day rate of stroke or death was lower for CEA (3.9%) than for CAS (9.6%), and at six months there was a

significantly higher incidence of stroke or death after CAS. Therefore, although conducted to prove the non-inferiority of carotid stenting, the EVA-3S trial showed the superiority of endarterectomy for symptomatic patients (12).

Trials Suggesting CAS Non-Inferiority

While data comparing CEA and CAS are abundant, they remain conflicting. Although the aforementioned trials supported CEA as the superior treatment for carotid stenosis, a number of other trials suggested CAS is just as effective. The Endovascular versus Surgical Treatment in Patients with Carotid Stenosis in the Carotid and Vertebral Artery Transluminal Angioplasty Study (CAVATAS), a randomized trial published in 2001, first opened the debate over alternative treatments for carotid stenosis. CAVATAS studied balloon angioplasty alone (74% of angioplasty patients) or angioplasty with stenting (26% of angioplasty patients) compared to carotid endarterectomy in symptomatic and asymptomatic patients suitable for either procedure. The trial found that 30 days after the procedure, both angioplasty and CEA carried similar risks of disabling stroke or death. Furthermore, CEA again carried significantly higher incidences of cranial nerve damage. Additionally, three years after the procedure, there was no difference in the rate of stroke for each treatment; however, angioplasty carried a higher rate of restenosis. This high restenosis rate, similar to that seen with stenting in the SPACE trial, posed an issue for angioplasty. Thus, CAVATAS did not encourage widespread use of angioplasty but encouraged more studies to determine its efficacy (14). The Stenting and Angioplasty with Protection in Patients at High Risk for Endarterectomy (SAPPHIRE) trial, published in 2004, attempted to show that as an invasive procedure, endarterectomy might be less desirable for patients who are at high surgical risk. By including such patients in the study, SAPPHIRE asserted that the minimally invasive CAS with embolic protection was a suitable substitute for endarterectomy. The combined composite rate of death, stroke, or myocardial infarction

(MI) was significantly lower for CAS compared to CEA (5.8% vs. 12.6%). The rates of TIA and hematoma were similar in both groups; however, the rate of cranial nerve damage was significantly higher in the CEA group. Therefore, this study suggested that CAS is an acceptable substitute for CEA but, like CAVATAS, emphasized the need for a large, comprehensive trial (15). This large, comprehensive trial arrived with the 2010 publication of the Stenting versus Endarterectomy for Treatment of Carotid-Artery Stenosis trial (CREST). The CREST study was conducted across North America to compare embolic-protected stenting with endarterectomy in patients suffering from either symptomatic or asymptomatic stenosis. It found that the composite rate of stroke, death, or MI was similar for each procedure (7.2% CAS vs. 6.8% CEA). However, once broken down, the individual rates of stroke and MI were strikingly different. The rate of periprocedural stroke was significantly lower in CEA patients (2.3% CEA vs. 4.1% CAS) while the rate of periprocedural MI was significantly lower in CAS patients (1.1% CAS vs. 2.3% CEA). Nonetheless, both procedures proved efficient in long-term prevention of stroke, and the rates of stroke four years after each procedure were not significantly different (2.0% CAS vs. 2.4% CEA). As the needed study that encompassed both symptomatic and asymptomatic patients with use of either CEA or embolic-protected CAS, the CREST trial showed that the four-year rate of stroke was similar for both methods, and that CAS yielded lower incidence of periprocedural MI while CEA yielded lower incidence of periprocedural stroke. After considering the low long-term risk of stroke for both procedures, the CREST trial argued that CAS is a suitable alternative to CEA (16).

Implications

With the CREST trial, a significant issue undercutting the results is the relative impact of a stroke versus an MI on a person's life. A stroke is noted to have more consequences on quality of life than an MI, thus adding to the hesitation of performing CAS, which carries a higher rate of periprocedural



stroke (17). Furthermore, a look at two different meta-analyses of CAS vs. CEA trials gives some interesting input on the issue. A meta-analysis from early 2010, covering 11 trials but not CREST, stated the superiority of CEA over CAS in terms of periprocedural risk of stroke but found no significant difference in terms of long-term risk of stroke. However, the analysis did state that endarterectomy carried a clearly higher risk of cranial nerve damage and MI (18). Another meta-analysis that covered 13 trials, including CREST, supported the idea of higher risk of cranial nerve damage and MI with endarterectomy; however, this meta-analysis stated that the data suggested that CAS carried a higher risk of stroke for both short-term and long-term outcomes (19). Many different factors potentially affect the choice of either CEA or CAS. For example, regional factors may play a role. With the results of the SAPPHIRE and CREST trials, CAS has gained more credibility as a procedure in America, and such credibility is crucial in the issue of whether stenting will fall under Medicare coverage (15-17). However, as the SPACE trial was conducted in Germany and EVA-3S in France, Europe is a bit more hesitant in its use of carotid stenting. Since both of those trials suggested the superiority of the traditional endarterectomy, Europeans tend to favor CEA over stenting (10-12, 14). Economic factors may also play a role. Studies have shown that the stenting procedure costs more and uses more resources compared to endarterectomy (20). However, studies also have shown that well-performed stenting can result in shorter hospital stays and less impact on the patient's daily activities (21). Details regarding the patient's stenosis also impact the choice between CAS and CEA. Both SPACE and CREST have shown that younger patients, below 68, benefit more from CAS while older patients, above 68, benefit more from CEA, thus making age a potential factor in the procedure of choice (11, 16). Additionally, some physicians assert that CEA is more efficient for symptomatic stenosis, with data from SPACE and EVA-3S supporting them (11-13). However, the CREST trial included both asymptomatic and symptomatic patients and asserted the non-inferiority of CAS for

either group (16). Such arguments may make the procedural choice a controversial decision for the physician.

Conclusion

In light of such data, both patients and physicians must remain updated on developing procedures in order to ensure treatment highly specific to the individual. As the baby boomers grow old, many of us may soon face the day when one of our relatives suffers from carotid stenosis. By studying the impact and efficacy of various procedures and their biotechnologies in past research, we can work with physicians to ensure our loved ones receive the most suitable treatment available. New research must continue for the treatment of carotid stenosis, and we must keep up with the progress of treatment in the medical community. This will help both physicians and patients make the best choice for treatment of carotid artery stenosis, and it may even prevent a stroke from taking another life.

References
1. What is Stroke? Available at http://www.stroke.org/site/PageServer?pagename=STROKE
Stroke Statistics. Available at http://www.strokecenter.org/patients/stats.htm
Stroke (2010). Available at http://www.mayoclinic.com/health/stroke/DS00150/DSECTION=causes
2. Carotid Artery Stenosis. Available at http://www.americanheart.org/presenter.jhtml?identifier=4497
3. Carotid Artery Disease. Available at http://www.mayoclinic.org/carotid-artery-disease/treatment.html
4. Legacy of Leadership. Available at http://www.debakeydepartmentofsurgery.org/home/content.cfm?content_id=287
5. L. L. Culvert, Carotid Endarterectomy. Available at http://www.surgeryencyclopedia.com/A-Ce/Carotid-Endarterectomy.html
Carotid Angiography and Stenting. Available at http://my.clevelandclinic.org/heart/services/tests/procedures/carotidstent.aspx
6. A. Klein, C. G. Solomon, M. B. Hamel, N Engl. J Med. 358, e23 (2008).
G. G. Ferguson et al., Stroke. 30, 1751-1758 (1999).
7. H. H. Eckstein et al., The Lancet Neurology. 7, 893-902 (2008).
8. J. Mas et al., The Lancet Neurology. 7, 885-892 (2008).
9. J. Ederle et al., The Lancet. 375, 985-997 (2010).
10. M. M. Brown et al., The Lancet. 357, 1729-1737 (2001).
11. J. S. Yadav et al., N Engl. J Med. 351, 1493-1501 (2004).
12. T. G. Brott et al., N Engl. J Med. 363, 11-23 (2010).
13. S. M. Davis, G. A. Donnan, N Engl. J Med. 363, 80-82 (July 2010).
14. P. Meier et al., BMJ. 340, c467 (2010).
15. S. Bangalore et al., Arch Neurol. 262 (2010).
16. M. M. Brown, R. L. Featherstone, L. H. Bonati, The Lancet. 376, 327-328 (2010).
17. D. Cohen, Costs and cost-effectiveness of carotid stenting vs. endarterectomy for patients at increased surgical risk: results from the SAPPHIRE trial. Catheter Cardiovasc. Interv. (2010).


ENGINEERING

Bioabsorbable Coronary Stents
Julie Shabto '14

Heart disease is the leading cause of death in the United States. One of the most common medical interventions performed today is the percutaneous coronary intervention (PCI), which opens clogged or damaged coronary arteries (1). Since its development in 1977, PCI has been a widely used alternative to coronary artery bypass grafting (CABG), and it relieves patients of coronary arterial blockage 90-95% of the time (2, 3). One form of PCI, balloon angioplasty, improves blood flow by inserting a balloon into the affected artery and inflating it in order to compress any plaque present and prop open the artery. A more permanent form of PCI is coronary stenting, which involves placing a tiny tube-like metal structure, the stent, in the affected artery. Stenting can be employed following angioplasty in order to prevent restenosis, the re-narrowing of an artery, or it can be performed in one step, in which the artery is opened and the stent is implanted (4, 5). Coronary stents have nearly eliminated the problem of abrupt occlusion (where the vessel closes), which occurs in 5% of patients when balloon angioplasty alone is performed. Coronary stents have also reduced the incidence of restenosis by more than 50% (3). The development of drug-eluting stents (DES) has further enhanced post-operative outcomes. DESs are coated in polymeric material that releases drugs locally. These polymers completely degrade by the time the drug has been released, but the metallic stent remains (6). DESs are used to prevent restenosis that may occur after PCI (7). In fact, with DES, the restenosis rate is under 10% (3). The permanence of metallic stents is not necessarily ideal, however. Current metallic stents can induce late thrombosis and thickening of the inner lining of the artery as a response to arterial wall injury. The long-term effects of metallic stents in human coronary arteries are still unknown (1).

Image courtesy of the Centers for Disease Control and Prevention.

Chest X-ray of individual with congestive heart failure.

Permanent stents may interfere with normal vessel function, and stents implanted in children must be designed to last literally a lifetime (9). Furthermore, metallic stents that remain in coronary arteries can present difficulties in later treatment. For example, metallic stents can hinder assessment of the coronary arteries through computed tomography (CT) and magnetic resonance imaging (MRI), as well as block important side branches of the artery (1, 8).

The Ideal Stent

The significant complications involved with metallic and polymer-coated stents call for further development in the treatment of clogged or damaged coronary arteries. An ideal stent would do its job and then disappear. Furthermore, the ideal stent would be made of biocompatible material to prevent vessel irritation and would have adequate radial force to prevent collapsing as a

result of any injury responses that occur following implantation (8). With the stent gone after it does its job, late thrombosis is unlikely to occur and the stent would not interfere with CT or MRI evaluations (6, 11). A bioabsorbable or biodegradable stent satisfies all the requirements for an ideal stent. In addition to the clinical benefits of bioabsorbable stents, these stents may prove to be the patient-preferred option. Patients have expressed that they would rather have an effective temporary implant as opposed to a permanent prosthesis that may require surgical removal (7). Moreover, a disappearing stent would promote the restoration of the previously clogged or damaged artery to a "healthy artery," one that can endure the pressures of a normal artery (11). A more speculative hypothesis about bioabsorbable stents is that they could be used to prevent further buildup of plaque in arteries; instead of waiting for necessary PCI, a bioabsorbable stent could be implanted in the patient so that, by the time the stent fully degrades, the plaque would have regressed (11). Thus, bioabsorbable stents seem to be a viable alternative to permanent coronary stents, but a thorough analysis of different stent models and clinical studies is necessary.

Different Bioabsorbable Stent Models: The Key Players

The development of bioabsorbable stents has become a hot topic in the medical device industry (10). The models of bioabsorbable stents currently being developed are made of either polymers or corrodible metal alloys.

Polymeric Stents

There are several polymeric bioabsorbable stents that have been tested. The Igaki-Tamai coronary stent and the bioabsorbable everolimus-eluting coronary stent (BVS) both use poly-L-lactic acid (PLLA). Other bioabsorbable polymeric stents include one developed by Bioabsorbable Therapeutics and the REVA Medical stent.

Igaki-Tamai

The Igaki-Tamai stent was the first bioabsorbable stent to be implanted in humans. In the initial 6-month study reported by Igaki and Tamai, 15 patients electively underwent stent implantation; 25 stents were successfully implanted at 19 sites in the 15 patients. The stent is made of PLLA, has a thickness of 0.17 mm, has a zigzag helical coil pattern, and is balloon-expandable (1). The study showed PLLA to be safe in human coronary arteries: no stent thrombosis and no major cardiac event occurred within the first 6 months, meaning that there were no deaths, heart attacks, or coronary artery bypass surgeries. Full degradation took 18-24 months. Furthermore, at about 36 months, lumen size had increased (12). Although the trial enrolled only a small number of patients, the team viewed the initial 6-month results as promising. However, the Igaki-Tamai stent lacked a drug coating, and as attention shifted to drug-coated bioabsorbable stents, its development was halted.

BVS stent

The BVS everolimus-eluting bioabsorbable PLLA stent is the first bioabsorbable stent to show clinical and imaging outcomes similar to those following metallic DES implantation. The BVS stent has a polymer coating that contains and controls the release of the drug everolimus, which stops cells from reproducing by decreasing blood supply to the cells (7). The BVS stent was tested by Abbott in the ABSORB trial, an open-label study in which 30 patients received stent implants. In the study, 80% of the drug was released by the 30-day follow-up, and the drug constrained any excessive healing response (7). The stent had a thickness of 150 μm (13). Blood vessel lumen diameter decreased, and restenosis rates were higher than expected.

Image retrieved from http://archiv.ethlife.ethz.ch/images/magnesiumstent-l.jpg (Accessed 29 Jan 2011).

Magnesium stents have potential advantages over polymeric stents in terms of higher radial strength.

These initial results led to speculation that absorption of the stent may have occurred too quickly (11). After one year of follow-up, however, there was no stent thrombosis, and only one patient had experienced a heart attack (13). Full absorption of the stent took a relatively slow 18 months (8). One remarkable finding of the ABSORB trial was that, between 6 months and 2 years, there was an enlargement in lumen size (7). The increase in lumen size was due to a decrease in plaque size without a change in vessel size (14). This enlargement suggested that after stent absorption, a vessel could potentially become a healthy vessel again. The Abbott BVS stent is currently the bioabsorbable stent furthest along in clinical development and may in fact be cleared for sale in Europe (10).

Bioabsorbable Therapeutics

The polymeric stent developed by Bioabsorbable Therapeutics (BTI) is coated with sirolimus, a drug that suppresses the body’s immune system. Both the base polymer and coating polymer of the stent are made up of bonds between salicylic acid molecules. These bonds are hydrolyzed during absorption, resulting in the release of salicylic acid, an anti-inflammatory drug that could potentially prevent restenosis.

The BTI stent has a thickness of 200 μm and is balloon-expandable. In the first human clinical trial, WHISPER, the stent was implanted in 40 patients. Full absorption was expected within 6 to 12 months, but significant thickening of the inner lining of the artery occurred. Thus, further development of the BTI stent is necessary.

REVA Medical

The REVA Medical stent, evaluated in the REVA Endovascular Study of a Bioresorbable Coronary Stent (RESORB), is coated with paclitaxel, a drug that inhibits cell division (11). The stent is balloon-expandable and is set into place by sliding and locking parts rather than by deforming the material, which gives the stent more radial strength (8). The stent has a thickness of 150 μm (15). The RESORB trial, which began in 2007, enrolled 27 patients. At the 30-day follow-up, two patients had experienced a heart attack and one needed another PCI (7). In 2008, between 4 and 6 months after implantation, there was a higher-than-anticipated occurrence of repeat PCI, driven mainly by reduced stent diameter (7). Thus, the REVA Medical model is not flawless.



Metal alloy stents

Metal alloy bioabsorbable stents perform similarly to permanent metallic stents. So far, two bioabsorbable metal alloys have been proposed for this application: iron and magnesium. However, neither of these stents is coated with drugs.

Bioabsorbable magnesium stent

Magnesium stents have potential advantages over polymeric stents in terms of higher radial strength, due to their metallic nature, and biocompatibility, as magnesium is a naturally occurring element in the body (8). The first metallic bioabsorbable stent implanted in humans was studied in the PROGRESS-AMS trial with 63 patients (8). This stent has a thickness of 165 μm and is balloon-expandable (7). In the trial, absorption of the magnesium stent in humans was rapid, and mechanical support lasted only days or weeks, which is too short to prevent restenosis (6-7, 11). During the first four months, major adverse cardiac events were recorded in 15 of the patients (24%), and additional PCIs were needed after initial implantations (8). After one year, 45% of the patients had undergone additional PCI. The magnesium stent can be safely degraded within 4 months, but the high restenosis rate raises concerns (7).

Bioabsorbable iron stent

Iron is an essential component of a variety of enzymes, making iron-based alloys a favorable material for bioabsorbable stents. M. Peuster et al. performed experimental studies with bioabsorbable iron stents. The experimental iron stent has a thickness of 100-120 μm and is balloon-expandable. The researchers implanted stents made of 41 mg of pure iron, an amount equivalent to the monthly oral intake of iron for a human, into the descending aortas of New Zealand white rabbits. During the 6 to 18 months of follow-up, there was no reported thrombosis or any other significant inflammatory injury response. However, the animals experienced destruction of the internal elastic membrane of the arteries, and degradation products from the stent accumulated, resulting in significant alteration of the artery wall (9).

Conclusion

While studies of bioabsorbable stents have shown promising results, small trial sizes and tightly controlled trial conditions leave many unconvinced. Currently, there are no bioabsorbable stents commercially available. The bioabsorbable stents developed so far are also unlikely to make their way into broad, randomized patient populations (11). The stents are much larger and bulkier than current permanent stents (10). This difference in size could pose a problem with the jagged, calcified plaque protruding into the lumen that many patients have; the calcium might catch on the device, preventing its proper functioning (11). Polymeric biodegradable stents have demonstrated several limitations, and the long-term effects of the products of full polymer absorption are unclear (6, 16). The polymer that both the Igaki-Tamai and BVS stents use, PLLA, withstands 1,000 mmHg of crush pressure and maintains radial strength for approximately one month. Compared with metallic stents, this radial strength is lower and may result in early recoil post-implantation (8). Meanwhile, metal alloy stents do not yet seem biocompatible enough for use in practice. While bioabsorbable stents may potentially be the ideal stent, further clinical studies and development are necessary. The right balance between absorption and radial strength must be found, and inflammation must be prevented at the same time. Once this balance is achieved, though, biodegradable stents may have far-reaching implications for the treatment and prevention of blocked and damaged arteries.

References

1. G. A. Stouffer, J. W. Todd, Percutaneous Coronary Intervention (3 November 2010). Available at http://emedicine.medscape.com/article/161446-overview (14 November 2010).
2. H. Tamai, K. Igaki et al., Initial and 6-month results of biodegradable poly-L-lactic acid coronary stents in humans, Circulation 102, 399-404 (2000).
3. D. Kulick, Coronary Balloon Angioplasty and Stents (Percutaneous Coronary Intervention, PCI) (2010). Available at http://www.medicinenet.com/coronary_angioplasty/article.htm (14 November 2010).
4. R. S. Schwartz, D. R. Holmes, E. J. Topol, Journal of American Cardiology 20, 1284-1293 (1992).
5. J. Matson, How do coronary stents work? (12 February 2010). Available at http://www.scientificamerican.com/blog/post.cfm?id=how-do-coronary-stents-work-2010-02-12 (14 November 2010).
6. R. Waksman, Biodegradable Stents: They Do Their Job and Disappear: Why Bioabsorbable Stents?, Journal of American Cardiology 18, 70-74 (2006).
7. J. A. Ormiston, P. W. Serruys, Bioabsorbable Coronary Stents, Circulation Cardiovascular Intervention 2, 255-260 (2009).
8. R. Bonan, A. Asgar, Biodegradable Stents: Where Are We in 2009?, US Cardiology 6, 81-84 (2009).
9. M. Peuster et al., A novel approach to temporary stenting: degradable cardiovascular stents produced from corrodible metal: results 6-18 months after implantation into New Zealand white rabbits, Heart 86, 563-569 (2001).
10. B. Glenn, Cleveland Clinic spinoff raises $8.5M for biodegradable stents (22 September 2010). Available at http://www.medcitynews.com/2010/09/cleveland-clinic-spinoff-raises-8-5m-for-biodegradable-stents/ (14 November 2010).
11. M. O'Riordan, Now you see me, now you don't: The bioabsorbable stent in clinical practice (8 November 2010). Available at http://www.theheart.org/article/1144463.do (14 November 2010).
12. Editorial Staff, Biodegradable stents could be the ideal stent (10 March 2009). Available at http://www.cardiovascularbusiness.com/index.php?option=com_articles&view=article&id=16567:biodegradable-stents-could-be-the-ideal-stent (15 November 2010).
13. J. A. Ormiston et al., A bioabsorbable everolimus-eluting coronary stent system for patients with single de-novo coronary artery lesions (ABSORB): a prospective open-label trial, The Lancet 371, 899-907 (2008).
14. P. W. Serruys et al., A bioabsorbable everolimus-eluting coronary stent system (ABSORB): 2-year outcomes and results from multiple imaging methods, The Lancet 373, 897-910 (2009).
15. X. Ma et al., Drug-eluting stents, International Journal of Clinical and Experimental Medicine 3, 192-201 (2010).
16. J. Kahn, Bioabsorbable Stents, Journal of Interventional Cardiology 20, 564-565 (2007).


Engineering

Optoelectronics and Retinal Prosthesis
The Revival of Vision
Andrew Zureick '13

What was once mere fantasy has evolved into a biotechnological revolution. Groundbreaking research in the field of vision restoration has brought hope to those who are unable to see. While vision impairment, caused by a wide range of conditions including cataracts, degenerative diseases, and accidents, impacts quality of life, research in optoelectronics and retinal prostheses continues to progress in a quest to restore eyesight.

The Human Eye: How Do We See?

Light passes through many layers during the transmission of an image to the brain for visualization (Fig. 1). The cornea, a thin layer on the surface of the eye, protects the pupil and iris; the iris uses two muscles to regulate the size of the pupil; and the pupil controls how much light passes through (1). The lens behind the pupil focuses light into a narrow beam onto the back of the eye, the retina. The retina is composed of ten distinct layers of cells, including photoreceptors (rods and cones), ganglion cells, bipolar cells, and nerve fibers. Cones, found primarily in the center of the retina (the "fovea"), are essential for color vision and high-resolution vision, while rods, distributed over a much wider range of the retina, are essential for scotopic (dark-adapted) vision and peripheral vision (2). At the very back of the eye is the optic nerve, which connects the retina to the brain via a series of electrical impulses.

Some Causes of Vision Impairment

Cataracts

The lens of the eye often becomes cloudier with age. Because the lens is essential for focusing light, one will perceive a blurry image as a result.

Image retrieved from http://www.aao.org/theeyeshaveit/anatomy/section-retina.cfm (Accessed 27 Jan 2011).

Fig. 1: Structure of the eye, with a cross section of the retina

This condition, known as a cataract, results not only from aging but also from other eye problems, injuries, or radiation, or it may even be present from birth (3).

Macular Degeneration

The risk of degenerative diseases that affect retinal cells increases with age. Age-related macular degeneration (ARMD), the loss of cells in the macula (near the center of the retina), affects millions of people; symptoms start with loss of fine vision but can lead to declining central vision and, in many cases, ultimately legal blindness (4).

Retinitis Pigmentosa

A genetic disorder that primarily affects photoreceptors in the retina, retinitis pigmentosa (RP) leads to incurable blindness. Symptoms include decreased night vision, decreased peripheral vision, and, in more severe stages, decreased central vision (5). Randomized trials have shown that increased vitamin A intake helps slow photoreceptor degeneration, in which the cells undergo apoptosis or necrosis, but too much vitamin A may result in liver damage (6, 7).

Artificial Eyes

In some severe cases, an eye must be removed because of either a retinoblastoma, a cancerous tumor in the eye, or other significant damage. In the past, an artificial eye could be put in place of the enucleated eye, but this would simply be a non-functional placeholder. This practice dates back as early as ancient Egypt, when eyes were replaced with "precious stones, bronze, copper, or gold," as confirmed by findings in tombs (8). The practice evolved during the 16th and 17th centuries, when eyes needing to be removed were replaced with glass. In more recent times, custom prosthetics have become the standard replacement. These are commonly made with either acrylic or cryolite glass; care is taken to make the product look similar to the existing eye, specifically with regard to the iris pigment (9). While this is indeed a solution as far as aesthetics are concerned, the more beneficial device would be not just an optical prosthesis but a visual prosthesis, one that both replaces the eye and restores vision. Retinal stimulation by electrodes helps restore partial vision in cases where photoreceptors or other parts of the retina are damaged, as will be explored in the next section.

Optoelectronics

Electrical stimulation of the retina and other technological approaches have become increasingly active areas of research in vision restoration. It is possible to use a series of energized electrodes to transmit information to the brain through neurons in the eye. These multielectrode devices target the retina, which communicates with the visual cortex; arrays ranging from only 16 electrodes to over 1,000 electrodes have been studied, and in these cases the subject perceives not an image but rather a series of dots (10, 11). As shown in Fig. 2, the microelectrode array is routed to a video camera on the outside that senses light.


Two different varieties of retinal implants are currently in clinical trials to determine their safety and effectiveness: subretinal implants and epiretinal implants (12-15). Both rely on the fact that, even during the degeneration of cells in ARMD or RP, the neural network of the retina stays intact. In other words, the light-sensing photoreceptors do not function, but the rest of the visual system still can (16). In the first type of retinal prosthesis, subretinal, the implant is placed beneath the retina to essentially "replace" photoreceptors (13). In the second type, epiretinal, the implant is placed on the surface of the retina and functions with healthy nerve (ganglion and bipolar) cells. A small video camera captures a light signal and converts the data into an electric signal through a microprocessor, which is transduced across these nerve cells, through the optic nerve, and ultimately to the brain for the creation of an image (14-15, 17). At the same time, the device must be engineered so that it does not disturb the rest of the tissue in the eye and remains stable in the saline environment of the vitreous (18). The US Food and Drug Administration has not yet approved these devices and methods, but clinical trials continue to test how well they function. Fig. 3, adapted from an article in the Journal of Vision, shows a simulation of the image created by an electrode array. In short, the electrodes stimulate enough nerve cells in the eye that basic shapes can be outlined, and the clarity of the image reflects the quality of the device. The device worn by the subject is connected to a computer, which processes the image; each phosphene, or spot of light produced by electrical stimulation, is rendered into the software (7).
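To make the camera-to-electrode pipeline more concrete, the short sketch below simulates how a grayscale camera frame might be reduced to a coarse grid of phosphene intensities, in the spirit of the simulation shown in Fig. 3. It is purely illustrative: the grid size, brightness quantization, and function names are assumptions made for this sketch, not parameters of any actual implant software.

```python
import numpy as np

def render_phosphenes(image, grid=(16, 16), levels=4):
    """Illustrative sketch: downsample a grayscale camera frame to a small
    grid of phosphene intensities, mimicking the coarse 'series of dots' a
    retinal-prosthesis user might perceive. Grid size and intensity levels
    are assumptions for illustration, not properties of any real device."""
    h, w = image.shape
    gh, gw = grid
    out = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            # Average the block of pixels that maps onto this electrode.
            block = image[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw]
            out[i, j] = block.mean()
    # Quantize to a few brightness levels, since each electrode can evoke
    # only a limited number of distinguishable phosphene intensities.
    return np.round(out / 255 * (levels - 1)) / (levels - 1)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (480, 640)).astype(float)  # stand-in camera frame
    print(render_phosphenes(frame).shape)  # (16, 16) grid of phosphene intensities
```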


Image retrieved from (17), which was adapted with permission from the Department of Energy newsletter, 5 January 2008. Originally printed in IEEE Engineering in Medicine and Biology, 24:15 (2005).

Fig. 2: A schematic representation of a microelectrode array in a sub-retinal implant.

People who have tried using this technology find it extremely useful. Though there are only a small number of electrodes compared to the millions of photoreceptors, users can make out a general sense of their surroundings. "They can differentiate a cup from a plate, they know where the door is in their home, and they can tell where the tables are," according to Dr. Mark Humayun of the Doheny Eye Institute at the University of Southern California (18). Furthermore, the brain can take over and fill in some of the missing pieces of information, especially when memory is taken into consideration (19).

Conclusion

Vision, an essential element of quality of life, brings color to the world and adds multiple dimensions to everything that surrounds us. We already have many different surgical procedures for treating ocular diseases such as cataracts, glaucoma, and myopia (nearsightedness), so why not ways of restoring the retina? Only further research and clinical trials will determine, with greater certainty, whether these methods prove effective.

Image retrieved from J. J. van Rheede, C. Kennard, S. L. Hicks, J. Vision. 1, 1–15 (2010).

Fig. 3: A simulation of the image seen by a subject using a retinal prosthesis device.

References

1. M. Bear, B. Connors, M. Paradiso, Neuroscience: Exploring the Brain (Lippincott Williams & Wilkins, ed. 3, 2007).
2. Neuroscience for Kids - Retina. Available at http://faculty.washington.edu/chudler/retina.html (15 January 2011).
3. Facts about Cataracts, National Eye Institute (2010). Available at http://www.nei.nih.gov/health/cataract/cataract_facts.asp#1a (15 January 2011).
4. Macular Degeneration: MedlinePlus (2010). Available at http://www.nlm.nih.gov/medlineplus/maculardegeneration.html (15 January 2011).
5. Retinitis Pigmentosa: MedlinePlus Medical Encyclopedia (2010). Available at http://www.nlm.nih.gov/medlineplus/ency/article/001029.htm (15 January 2011).
6. National Eye Institute, Update on Vitamin A as a Treatment for Retinitis Pigmentosa (2008). Available at http://www.nei.nih.gov/news/statements/pigmentosa.asp (15 January 2011).
7. Drug-induced liver disease symptoms, causes, and treatments (2011). Available at http://www.medicinenet.com/drug_induced_liver_disease/page8.htm (15 January 2011).
8. B. S. Deacon, Orbital Implants and Ocular Prostheses: A Comprehensive Review, J. Ophthalmol. Med. Tech. 4(2) (2008).
9. I. I. Artopoulou, P. C. Montgomery, P. J. Wesley, J. C. Lemon, J. Prosthet. Dent. 95, 327-330 (2006).
10. J. J. van Rheede, C. Kennard, S. L. Hicks, J. Vision. 1, 1-15 (2010).
11. E. Zrenner, "Restoring neuroretinal function by subretinal microphotodiode arrays" (2007). Speech delivered at ARVO, Fort Lauderdale, USA.
12. H. G. Sachs, V. Gabel, Graef. Arch. Clin. Exp. 242, 717-723 (2004).
13. S. Klauke et al., Invest. Ophthal. Vis. Sci., in press (2010).
14. G. Roessler et al., Invest. Ophthal. Vis. Sci. 50, 3003-3008 (2009).
15. H. Benav, "Restoration of Useful Vision up to Letter Recognition Capabilities Using Subretinal Microphotodiodes" (2010). Speech delivered at the 32nd Annual International Conference of the IEEE EMBS, Buenos Aires, Argentina, 31 Aug 2010.
16. The Dept. of Energy Artificial Retina project. Available at http://www.youtube.com/watch?v=iUz1ScDKslk (14 January 2011).
17. G. J. Chader, J. Weiland, M. S. Humayun, "Artificial vision: needs, functioning, and testing of a retinal electronic prosthesis" (2009). Available at http://www.thecaliforniaproject.org/pdf/chader/Artificial%20vision.pdf (15 January 2011).
18. V. Kandagor, "Spatial Characterization of Electric Potentials Generated by Pulsed Microelectrode Arrays" (2010). Speech delivered at the 32nd Annual International Conference of the IEEE EMBS, Buenos Aires, Argentina, 31 Aug 2010.
19. M. Lipner, Setting sights on artificial vision (2010). Available at http://www.eyeworld.org/article.php?sid=4742&strict=&morphologic=&query=retina (15 January 2011).


Neurology

Enlightenment

Optogenetic Tools for Understanding the Brain
Hannah Payne '11

Neuroscientists are scrambling to play with the new toys of optogenetic technology, but with the explosion of popular science articles and even videos of light-controlled dancing mice (1), it is important to step back and evaluate how this technology can be most effectively used to solve meaningful problems in neuroscience. Optogenetics serves as a remote control of neural activity: groups of cells are genetically encoded to produce light-sensitive proteins, allowing a high level of control over neural activity in specific populations of neurons. This review will focus on optogenetic actuators, which drive activity using light, as opposed to sensors, which report activity, beginning with a brief overview of optogenetic technology. The review will then synthesize recent progress that optogenetics has allowed in teasing apart the role of specific subpopulations and patterns of activity in network dynamics, and ultimately how complex behaviors emerge from these elements. Finally, future applications for both neuroscientific research and human disease will be discussed.

The Optogenetic Toolbox

The capabilities of optogenetic technology for controlling activity have expanded tremendously since Karl Deisseroth's group first demonstrated the efficacy of channelrhodopsin-2 (ChR2) in 2005 (2). ChR2 was taken from the green alga Chlamydomonas reinhardtii and successfully used to drive activity in mammalian neurons, and it is still the most commonly used optogenetic tool, although improved engineered versions such as ChETA and ChIEF will likely replace it in the future (3, 4). Similar to the opsins found in the retina, ChR2 responds to light in the presence of the commonly occurring cofactor all-trans retinal.

However, instead of triggering a G-protein-coupled signaling cascade as opsins in mammalian retinas do, ChR2 responds to 460 nm blue light by opening the ion channel at the core of its seven transmembrane domains, allowing positive ions to pass through (Fig. 1). The result is a sensitive and rapid depolarization of the cell in response to light, allowing neuronal spiking to be reliably elicited (2). Furthermore, some optogenetic tools can decrease neuronal activity, which is not easily accomplished with conventional stimulating electrodes. For example, halorhodopsin (Halo) is a light-activated chloride pump isolated from the archaebacterium Natronomonas pharaonis (5). Upon exposure to yellow light (560 nm), the influx of chloride ions hyperpolarizes the cell and prevents action potentials. Due to the different wavelengths of activation for ChR2 and halorhodopsin, it is possible to express both proteins in the same neuron, allowing for bidirectional control of activity at the millisecond timescale (6). The main labs involved in pioneering optogenetic technology, those of Karl Deisseroth at Stanford University and Ed Boyden at the Massachusetts Institute of Technology, are attempting to make the process as transparent as possible by providing detailed protocols and genetic sequences freely online (openoptogenetics.org, optogenetics.org, and syntheticneurobiology.org). Together they have distributed plasmids containing the ChR2 gene to hundreds of other laboratories. Overall, the process can be divided into three steps: expressing the gene in the desired cells, delivering light to control activity, and recording output (behavioral and/or electrophysiological) (7). Gene expression is a well-established procedure, achieved using transgenic animals, viral infection, or electroporation (7). Light is typically delivered using fiber optic cables, which in rodents or monkeys are attached to the head, while smaller organisms (nematodes, Xenopus, or Drosophila) are simply exposed to full-field light. Behavioral output can be observed with the use of flexible fiber optic cables, an approach that has produced dramatic demonstrations of the direct link between neural activity and behavior (8).

Multielectrode arrays or single electrodes can record extracellular signals from both single cells and population activity (local field potentials); patch-clamping methods are used to record with great precision from single neurons; and optogenetic voltage or calcium sensors that can report activity are an ongoing area of development.
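The light-to-spiking logic described above can be illustrated with a toy simulation: a leaky integrate-and-fire neuron carrying a depolarizing "ChR2-like" current gated by blue light and a hyperpolarizing "Halo-like" current gated by yellow light. This is only a sketch; the neuron model, parameter values, and function names are illustrative assumptions, not measured properties of the real opsins.

```python
import numpy as np

def simulate_lif(light_blue, light_yellow, dt=0.1, tau=10.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
                 g_chr2=1.2, g_halo=1.0):
    """Toy leaky integrate-and-fire neuron: blue light drives a depolarizing
    'ChR2-like' current, yellow light a hyperpolarizing 'Halo-like' current.
    All parameters are illustrative assumptions, not measured values."""
    v = v_rest
    spike_times = []
    for i in range(len(light_blue)):
        i_photo = g_chr2 * light_blue[i] - g_halo * light_yellow[i]
        # Leaky integration toward rest plus the net light-gated current.
        v += dt * (-(v - v_rest) / tau + i_photo)
        if v >= v_thresh:
            spike_times.append(i * dt)  # spike time in ms
            v = v_reset                 # reset after the spike
    return spike_times

# A 40 Hz train of 5 ms blue pulses reliably drives roughly one spike per
# pulse; making light_yellow nonzero would suppress spiking instead.
t_ms = np.arange(0, 500, 0.1)
blue = ((t_ms % 25) < 5).astype(float) * 4.0
yellow = np.zeros_like(t_ms)
print(len(simulate_lif(blue, yellow)))  # expect ~20 spikes in 500 ms
```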

Pros and Cons

Light-years ahead

Optogenetics has several key advantages over previous methods of controlling neuronal activity, such as electrical stimulation or neurotransmitter uncaging. Optogenetics is less invasive than electrical stimulation, since light can penetrate several millimeters into brain tissue (7). Neurotransmitter uncaging, in which special caged particles containing glutamate or other neurotransmitters are injected into the brain, also requires invasive injection procedures, which may be avoided by using transgenic optogenetic lines (7). Compared to both methods, the optogenetic response time is faster and spikes are more reliable (2). Additionally, the specificity of genetic encoding via promoters for specific cell types makes this technique extremely powerful for distinguishing functions of different neural populations.

Limitations

A main limitation of optogenetic probes is their low sensitivity: compared to human photoreceptors, light-activated proteins are nearly blind. In a photoreceptor-lacking retina expressing ChR2, about 10^15 photons cm^-2 s^-1 are needed to produce a response, compared to only 10^6 photons cm^-2 s^-1 for rod photoreceptors, a billion-fold increase in the light required (9).


For most experimental applications, this is not a problem, as bright light sources are readily available; however, one must consider that the visual system might be inadvertently activated in small organisms such as fruit flies and tadpoles. Perhaps the most salient limitation, though, is the difficulty of translating the power of optogenetics into therapeutic approaches to human disease. There are many technical and ethical difficulties in introducing a foreign gene into human cells, let alone into the brain. Virus particles may cause adverse reactions, and precise expression of the gene may be hard to control. Also, fiber optics would have to be permanently affixed to the head, potentially causing infection and discomfort and requiring the patient to carry heavy batteries. Finally, the implications of having such direct control over human neural activity must be very carefully considered. At present, optogenetics remains an extremely powerful tool for purposes of scientific research.

Illuminating Neural Networks

Currently, optogenetic technology is being applied somewhat haphazardly to a great variety of problems (1, 8). While these "proof-of-principle" demonstrations are useful, they do not necessarily constitute experiments. Overall, neuroscience seeks to understand the brain from the most basic molecular level up through complex human behavior, and optogenetics holds the most promise for understanding the brain at the intermediate level of neural networks. First, optogenetics can be used to manipulate the activity of individual neurons, or small subpopulations of neurons, while the effects on the activity of the entire local circuit are observed. Second, optogenetics can be used to manipulate activity with extreme temporal precision, allowing the function of patterns of activity to be studied. Third, these elements can then be combined to connect neuronal activity to complex brain functions such as perception, learning, and memory.

Subpopulation contributions to network dynamics

A main strength of optogenetics is that it allows detailed genetic specificity. Different promoters can target very specific types of neurons. Furthermore, electroporation of DNA, in which a brief electric current opens temporary pores in the cell membrane, allows for labeling of just one or a few cells. These techniques allow neural network activity to be understood at the most basic functional level. For example, researchers led by Michael Hausser at University College London recently used ChR2 to investigate the role of individual somatostatin interneurons in visual processing. Somatostatin inhibitory neurons make up roughly 15% of the interneurons in the cortex and synapse primarily on pyramidal cell apical dendrites, but their function is unknown (10). The researchers expressed ChR2 in two to five somatostatin neurons in mouse visual cortex. They then recorded from neighboring pyramidal cells while stimulating the visual system with a series of moving visual stimuli. As expected, activation of the inhibitory somatostatin neurons reduced the response of some neighboring pyramidal cells, but surprisingly, some pyramidal cells actually increased their firing when the somatostatin neurons were activated. Although the mechanism for the paradoxical enhancement of some pyramidal cells is not known, it could be due either to inhibition of other neighboring interneurons or to direct integration properties of the pyramidal cell dendrites (10). It is tempting to think that interneurons might be wired to affect pyramidal cells differently depending on whether they convey information from the center or the surround of the pyramidal cell's receptive field.

The role of patterned activity in network dynamics

In addition to genetic specificity, the temporal precision of optogenetics is also invaluable. The ability to control exactly when a neuron fires or does not fire, with millisecond precision, is beginning to allow researchers to dissect the role that a specific observed pattern of activity, such as oscillation at a specific frequency, has on the overall performance of the network. The experiment described above used a regular train of light pulses at 40 Hz to drive the somatostatin inhibitory neurons, but no other frequency was tested (10).

This frequency was likely chosen because oscillations between 30-60 Hz, known as the gamma frequency band, are commonly observed in the cortex, especially during activity and memory retrieval (11). The Hausser lab examined the effect of changing the oscillation frequency of both excitatory and inhibitory neuron populations in V1 cortex in another recent abstract (12). Interestingly, regardless of the frequency enforced in either subpopulation by ChR2 activation (5, 25, 40, and 75 Hz), local field potentials recorded from layer 2/3 showed enhanced oscillations around the intrinsic frequency of the network (20-35 Hz). Oscillations were increased more when the animals were under anesthesia than when they were awake and moving on a treadmill, which is somewhat puzzling since gamma oscillations are typically linked with active behavior (11). A final interesting finding from this study was that when optogenetic stimulation was applied in concert with visual stimulation, oscillation power was enhanced in an additive manner (12). Overall, this study demonstrates the power of ChR2 to reveal a new wealth of information about the effects of patterned activity on network dynamics. However, there is still disagreement about exactly how these gamma oscillations arise. In a study published in Nature last year, both fast-spiking interneurons and excitatory pyramidal cells were controlled with ChR2 in the mouse somatosensory barrel cortex (13). Like Havenith et al., the group found that regardless of the imposed frequency of fast-spiking interneurons, local field potentials were enhanced within the gamma range. However, activation of pyramidal cells only increased lower-frequency oscillations (Fig. 2). This cell-type-specific dissociation in network oscillatory activity is an interesting property that may be important in brain disorders with deficits in gamma oscillation, and optogenetics will be instrumental in working out the details.




Image retrieved from http://www.nature.com/nature/journal/v459/n7247/images/nature08002-f3.2.jpg (Accessed 29 Jan 2011).

Fig. 2: a. Example of natural gamma oscillations phase shifted by blue light pulse. b,c. Local field potential enhancement at 40 Hz was only driven by fast-spiking (FS) interneuron oscillation, not regular-spiking (RS) pyramidal cells.

Connecting Neuronal Activity to Brain Function


While the above studies begin to reveal the precise connection of individual neurons and activity patterns to the dynamics of the surrounding network, optogenetics can also link network elements directly to higher-order brain function.


Perception

Gamma oscillations driven by inhibitory neurons have been theorized to control the gain of sensory input, to act as an attentional mechanism (14), and to influence working memory (11). Failure of gamma oscillation is associated with brain disorders such as autism (15) and schizophrenia (16, 17). Cardin et al. therefore investigated the role of the precise timing of oscillatory activity in the neural coding of whisker stimulation (13). Using a mouse expressing ChR2 in fast-spiking interneurons, they applied blue light at 40 Hz to induce gamma oscillations phase-locked to the light pulses. They then stimulated a whisker at various time points (Fig. 3a). Remarkably, the precision of pyramidal cell responses increased when the whisker was stimulated at specific phases of the oscillation (Fig. 3b-f). The authors conclude that sensory transmission is decreased during the peak of interneuron inhibitory neurotransmitter release, leading to temporal sharpening of the response during inhibitory neuron oscillation (14). Optogenetics has also been used in simpler experiments to link neuronal activity to perception. For example, stimulation of auditory cortex using nonspecifically expressed ChR2 shifted responses towards the preferred octave of the activated region in a behavioral tone discrimination task (18).

Learning and memory

Optogenetics is also beginning to reveal novel aspects of learning and memory. For example, a recent optogenetic study found intriguingly different results when a light-activated activity block was compared to a pharmacological block (19). When tetrodotoxin (TTX) was injected to inactivate the CA1 region of the hippocampus, a time-dependent effect on memory retrieval in a contextual fear conditioning task was found: when injected before training, or right before testing one day after training, TTX blocked memory of the feared context. But when TTX was injected just prior to testing after 28 days, both control and TTX-injected rats still froze in response to the context, corroborating the generally accepted hypothesis that CA1 of the hippocampus is involved in the formation and consolidation of memories, but not long-term storage. However, injection of pharmacological agents necessarily takes time on the order of 30-60 minutes, a relatively long time when one considers that new memories can be formed on the scale of minutes and even seconds. Therefore, the researchers employed the light-activated chloride pump halorhodopsin (Halo) to rapidly and reversibly block activity in CA1 (19). Halo was expressed in excitatory neurons in CA1 and effectively blocked activity when activated by light. Turning the light on during training, or during testing one day after training, impaired recall of the context. However, unlike TTX, activation of Halo during testing 28 days later also significantly blocked the fear response.

Activating Halo 30 minutes before the 28-day testing phase did not cause any impairment in memory recall, and auditory cued-fear responses were not affected by light activation in CA1 at any stage of testing or training. Together, these novel results may indicate that the hippocampus does in fact store long-term memories, but may not be necessary for long-term memory recall if the brain has sufficient time to recruit alternate mechanisms (19).

Future Experimental Directions

Overall, the sampling of findings above, although incomplete, provides a broad picture of how optogenetics is beginning to reveal the neural mechanisms underlying complex brain functions. By linking individual neurons and specific patterns of activity to network dynamics, and then linking these elements to complex tasks such as perception or learning and memory, optogenetics should make it possible to understand the brain in unprecedented detail. Many other brain functions are promising candidates for optogenetic research. In particular, Robert Wurtz has suggested that optogenetic perturbation of neuronal activity will be useful in correlating neural activity to behavior in order to dissect the mechanisms of visual attention and suppression during saccades (20). Similarly, studies of motivation have thus far relied on recording neural activity in different brain regions in response to behavior (21); by expressing optogenetic controls in dopaminergic or serotonergic systems and directly stimulating or inactivating those neurons, the reverse experiment could be conducted, in which neural activity might be shown to cause specific motivational behaviors. The genetic specificity of light-controlled neurons is especially promising for studying the balance of excitation and inhibition. Excitatory/inhibitory (E/I) balance changes drastically during development, from mainly excitatory in childhood to roughly equal in adulthood. This shift in E/I balance is essential for proper timing of the "critical period" during which experience-dependent plasticity can shape the developing nervous system (22).


Image retrieved from http://www.nature.com/nature/journal/v459/n7247/images/ nature08002-f4.2.jpg (Accessed 29 Jan 2011).

Fig. 3: a. A whisker was stimulated at five different points during the light-induced oscillation. b. Baseline response histogram. c. Responses with gamma oscillation. d. Spike count was reduced for some stimulation phases. e. Some spike latencies increased. f. Spike precision was increased in a phase-dependent manner.

Monocular deprivation during this critical period causes a cortical bias towards the contralateral eye that is normally irreversible. However, some manipulations, such as demyelination or decreasing cortical inhibition using pharmacology, can return the brain to a plastic state (22). Optogenetics would allow more precise investigation of how E/I balance regulates the boundaries of the critical period.

Treatment of Disease

Since optogenetics has already been used to selectively activate or suppress specific populations of excitatory or inhibitory neurons, it could conceivably be used to correct E/I balance in certain brain disorders. For example, schizophrenics have decreased myelin and too much excitatory activity; in a way, the brain is developmentally immature (22). Increasing activity in inhibitory neurons, or alternatively suppressing activity in excitatory neurons, might help correct the balance.

Increasing inhibition may also help restore normal gamma oscillations, which are dysfunctional in schizophrenics (16, 17). Conversely, if the critical period has passed without the opportunity for normal experience, as it has for patients who only gain sight after childhood (23), a shift in the excitatory direction might aid the development of a functional visual system even late in life. Other potential therapeutic approaches include cell-type-specific versions of deep-brain stimulation, with potential for treating Parkinson's disease, epilepsy, severe depression and mood disorders, sleep disorders, and phobias. Additionally, some groups have begun engineering retinas with ChR2 to replace damaged photoreceptors (24, 25). Perhaps if multiple opsins were expressed, such as ChR2 plus a red-shifted version, or ChR2 plus Halo (6), then functionally distinct populations of neurons in the retina could be activated differentially to provide more biologically realistic inputs. However, although the possibilities for therapeutic approaches are limited only by the imagination, any use of optogenetics in humans is light-years away due to the difficulty of introducing genes into the human brain and the ethics of directly controlling neural activity.

Conclusion In a way, the timing of optogenetic’s entrance on the stage of neuroscience is ideal. Other techniques have shown how environmental stimuli and behavioral outputs correlate with patterns of neuronal activity; the next step is to directly manipulate activity with genetic and temporal precision to elucidate the neural basis of perception, behavior, and everything in between. References

References

1. Anonymous, "Optogenetics and mouse (with music)" (25 May 2010). YouTube. Available at http://www.youtube.com/watch?v=obJjXRyDcYE (5 December 2010).
2. E. S. Boyden, F. Zhang, E. Bamberg, G. Nagel, K. Deisseroth, Nature Neuroscience 8, 1263-1268 (2005).
3. J. Lin, Program No. 314.4. 2010 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience. Online. (2010).
4. L. A. Gunaydin et al., Nature Neuroscience 13, 387-392 (2010).
5. B. Schobert, J. K. Lanyi, The Journal of Biological Chemistry 257, 10306-10313 (1982).
6. X. Han, E. S. Boyden, PLoS ONE 2, 299 (2007).
7. F. Zhang et al., Nature Protocols 5(3), 439-456 (2010).
8. M. Rizzi et al., Program No. 388.8. 2009 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience. Online. (2009).
9. E. Ivanova, Z. H. Pan, Mol. Vis. 15, 1680-1689 (2009).
10. J. C. Cottam, S. L. Smith, M. Hausser, Program No. 450.21. 2010 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience. Online. (2010).
11. M. W. Howard et al., Cereb. Cortex 13, 1369-1374 (2003).
12. M. N. Havenith, H. Langeslag, J. Cottam, M. Hausser, Program No. 673.13. 2010 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience. Online. (2010).
13. J. A. Cardin et al., Nature 459, 663-667 (2009).
14. U. Knoblich, C. I. Moore, Program No. 673.14. 2010 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience. Online. (2010).
15. E. V. Orekhova et al., Biol. Psychiatry 62, 1022-1029 (2007).
16. K. M. Spencer, M. A. Niznikiewicz, M. E. Shenton, R. W. McCarley, Biol. Psychiatry 63, 744-747 (2008).
17. P. J. Uhlhaas, C. Haenschel, D. Nikolic, W. Singer, Schizophr. Bull. 34, 927-943 (2008).
18. P. Znamenskiy, A. M. Zador, Program No. 805.13. 2010 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience. Online. (2010).
19. I. Goshen et al., Program No. 412.3. 2010 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience. Online. (2010).
20. R. Wurtz, Peter and Patricia Gruber Lecture: Brain Circuits for Active Vision. 2010 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience. Online. (2010).
21. O. Hikosaka, Program No. 218: Presidential Special Lecture. 2010 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience. Online. (2010).
22. T. Hensch, Program No. 213.2. Molecular brakes on plasticity. 2010 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience. Online. (2010).
23. P. Sinha, Program No. 422: Presidential Special Lecture. 2010 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience. Online. (2010).
24. P. S. Lagali et al., Nature Neuroscience 11, 667-675 (2008).
25. S. Thyagarajan et al., The Journal of Neuroscience 30(26), 8745-8758 (2010).


Biology

In Vitro Fertilization
Daniel Lee '13

In vitro fertilization (IVF) is a medical procedure in which a human egg is fertilized outside of the body and the resulting embryo is then transferred into the womb (1). Approximately four million babies have been born by IVF since it was first introduced in the 1980s (1). Despite its prevalence, ethical questions have recently been raised about the use of IVF (2). Specifically, ethicists are concerned with the potential misuse of "pre-implantation genetic diagnosis," or PGD (2). In the early 1990s, scientists developed PGD as a way of genetically screening embryos for inherited diseases (2). The ability to test for non-essential traits, such as hair color, has emerged in recent years. This technology is the source of controversy: just what should IVF clients be allowed to select for in their fertilized embryos? While the characteristics that can be selected for today are limited, and primarily cosmetic, the future may bring about new choices with greater ethical and demographic implications. The chief purpose of PGD remains to screen for genetic diseases. Technicians remove a cell from a fertilized three-day-old embryo and then analyze its DNA for inherited diseases (2). Embryos that would certainly produce children with diseases such as Huntington's chorea or cystic fibrosis can be kept from being implanted into the womb (2). To date, screening can identify approximately 130 different inherited diseases, and additional diseases are being added as understanding of the human genome grows.

Gender Selection

The ability to identify physical traits in embryos is an extension of disease screening. The most common trait screened for is gender. Though Western culture does not necessarily favor one sex over another, many Asian countries, such as China and India, place a cultural premium on boys (3). As a result, PGD sex screening has become prevalent in these countries (3).

Image retrieved from http://upload.wikimedia.org/wikipedia/commons/0/07/DNA.jpg (Accessed 1 Feb 2011).

Ethicists are concerned with the potential misuse of “pre-implantation genetic diagnosis.”

The result is an imbalance of men and women. In some states of India, the female-to-male ratio is as skewed as 810 females to 1,000 males, and in some areas of China, as disproportionate as 677 females to 1,000 males (3). For this reason, medical experts strongly oppose sex screening. Not only is gender selection perceived as unethical, but serious demographic dangers underlie it: population growth could be significantly slowed, and crimes against women, such as marriage trafficking, could increase (3). Several countries have consequently banned sex screening, including the United Kingdom, Canada, Japan, and Australia (3). While none of these nations has a strong cultural bias for a particular gender, strong opinions and the recognition of the potential dangers of gender imbalance have kept sex screening out of these countries. However, the United States has not banned sex screening. According to a 2006 survey by the Genetics and Public Policy Center at Johns Hopkins University, 42% of 137 PGD clinics in the U.S. allow clients to select for gender (5).

This is largely because most gender screens in the U.S. have been performed for the purpose of family balancing (5). For instance, couples with several sons may select for a daughter, and vice versa. Given these cultural differences and the relatively small demand for sex screening in the U.S., many medical experts accept sex selection as a matter of convenience rather than as an ethical or demographic dilemma. In moderation, sex screening does not pose a danger.

Cosmetic Selection

PGD screens for cosmetic purposes are not as benign as gender selection. For a time, the LA Fertility Institutes, an IVF clinic in California, promised clients "a pre-selected choice of gender, eye color, hair color and complexion, along with screening for potentially lethal diseases" (5). This claim was backed by a medical report describing a more precise way of extracting DNA from embryos.



Image courtesy of RWJMS IVF Laboratory

Sperm injection into oocyte.

In 2008, Dr. William Kearns, director of the Shady Grove Center for pre-implantation genetics, proposed a method of amplifying the small amount of DNA collected from fertilized embryos. A PGD screen typically does not harvest enough genetic material to test adequately for many phenotypic traits. In his clinical reports, however, Kearns stated that he was able to identify the genes responsible for hair, eye, and skin pigmentation in 80% of his samples. Despite the science, the LA Fertility Institutes was not able to produce any such designer babies. Great public backlash followed the Institutes' announcement. Among many conservative voices, the Pope condemned the method and "the obsessive search for the perfect child [that inspired it]" (6). The public outcry was seconded by medical ethicists, who reasoned that the ability to choose a particular eye or hair color, while relatively benign on its own, could lead to the development of other screens with more dangerous consequences (3). In a survey of 999 people who sought counseling for potential genetic screening, 10% said they would screen for athletic ability, 10% for improved height, and 13% for superior intelligence (6). The demographic risks associated with these types of selection are great.

According to Kari Stefansson of deCODE, a genetics research group, access to such screening could "decrease human diversity and that's very dangerous for the gene pool" (5). Social concerns are also relevant. "If we're going to produce children who are claimed to be superior because of their particular genes, we risk introducing new sources of discrimination," stated Marcy Darnovsky, associate executive director of the Center for Genetics and Society, a nonprofit public interest group in Oakland, California (5). Following this public and scientific opposition, the Institutes cancelled their selection program only two months after beginning to advertise it (6).

Conclusion

While the technology to pick and choose traits from raw genetic data is not yet available, medical ethicists are already campaigning for its ban. Scientists agree that the healthy development of genetically modified babies would not justify the use of PGD for cosmetic purposes (4). The morality of genetic screening may be debatable, but the potential demographic dangers will remain and will continue to limit the use of PGD screening.

References

1. In vitro fertilization (IVF): MedlinePlus Medical Encyclopedia (2010). Available at http://www.nlm.nih.gov (10 November 2010).
2. Reproductive Genetics Institute: PGD Experts (2009). Available at http://www.reproductivegenetics.com (13 November 2010).
3. B. Kaberi, Pre-Implantation Genetic Diagnosis and Sex Selection (2007). Available at http://www.ivf.net/ivf/pre-implantation-genetic-diagnosis-and-sex-selection-o2906.html (28 November 2010).
4. A. Malpani, D. Modi, Preimplantation sex selection for family balancing in India, Hum. Reprod. (2002). Available at http://humrep.oxfordjournals.org/content/17/1/11.full (28 November 2010).
5. G. Naik, A Baby, Please. Blond, Freckles -- Hold the Colic, WSJ.com (2009). Available at http://online.wsj.com/article/SB123439771603075099.html (28 November 2010).
6. M. Anissimov, The Great Designer Baby Controversy of '09, h+ Magazine (2009). Available at http://www.hplusmagazine.com/articles/bio/great-designer-baby-controversy-%E2%80%9909 (29 November 2010).



Biology

The Primal Conversation
Intercellular Communication in Bacteria
Aaron Koenig '14

Historically, conventional thinking in evolutionary biology has drawn a bright line separating multicellular from unicellular organisms. Only the cells of animals, plants, and fungi were thought capable of achieving the synchrony required to function as a collective. Bacteria, in contrast, were perceived as blind, deaf, and mute, with each cell single-mindedly focusing on its own metabolism and reproduction. However, recent discoveries in the field of bacterial cell-cell signaling have exposed the ubiquity of multicellular behavior in bacterial populations, dramatically altering our understanding of the microbial world.

Introduction to Quorum Sensing

The first complete descriptions of cell-cell signaling systems used by microbes to regulate gene expression were obtained from studies of the marine bacteria Vibrio fischeri and Vibrio harveyi over 30 years ago (1). The genes studied encode light-producing luciferase enzymes and are up-regulated in response to increases in population density (1). Underlying this unusual behavior is a pheromone, or autoinducer, from the family of acyl-homoserine lactones (AHL) (2). Every Vibrio cell contains the gene luxI, whose protein product, known as a synthase, synthesizes AHL (2). As the cells divide, increasing their population density, the concentration of AHL also increases (2). An intracellular receptor, encoded by the gene luxR, binds AHL when the concentration of the pheromone reaches a specific threshold (2). The LuxR-AHL complex acts as a transcriptional activator, stimulating production of the luciferase enzymes and of LuxI, which produces the original autoinducer (2). Further increases in the concentration of LuxI establish a positive feedback loop, leading to population-wide production of light (2).
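The threshold-and-feedback logic of the LuxI/LuxR circuit can be sketched in a toy simulation. The model below is deliberately simplified: every rate constant, threshold, and function name is an illustrative assumption rather than a measured value, and the step change in production rate stands in for the LuxR-AHL transcriptional activation described above.

```python
def simulate_quorum(hours=24.0, dt=0.01, n0=1e6, n_max=1e9, k_growth=0.7,
                    basal=3.0, induced=30.0, threshold=10.0, decay=0.2):
    """Toy model of LuxI/LuxR-style quorum sensing. Cell density n grows
    logistically; cells make autoinducer (AHL) at a low basal rate, and once
    AHL passes a threshold, the production rate jumps, standing in for
    LuxR-AHL activation of luxI and closing the positive feedback loop.
    All rate constants are illustrative assumptions, not measured values."""
    n, ahl, t_on = n0, 0.0, None
    for step in range(int(hours / dt)):
        rate = induced if ahl >= threshold else basal   # feedback switch
        n += dt * k_growth * n * (1 - n / n_max)        # logistic growth
        ahl += dt * (rate * (n / n_max) - decay * ahl)  # synthesis vs. loss
        if t_on is None and ahl >= threshold:
            t_on = step * dt  # time when the population "turns on"
    return n, ahl, t_on

final_n, final_ahl, t_on = simulate_quorum()
print(f"AHL crosses the threshold at ~{t_on:.1f} h; AHL at 24 h ~ {final_ahl:.0f}")
```

Run with these illustrative defaults, the autoinducer stays far below threshold while the culture is dilute and only crosses it once the population nears carrying capacity, at which point the feedback drives a sharp, population-wide switch, the qualitative behavior described above.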

Image retreved from http://upload.wikimedia.org/wikipedia/commons/c/cf/Quorum_sensing_diagram.png (Accessed 29 Jan 2011).

Schematic diagram of quorum sensing.

In some respects, the mechanism of cell-cell signaling used by Vibrio cells to assess population density resembles a democratic system of government. Each cell casts a vote through its contribution of pheromones. When cells detect the presence of a sufficient concentration of pheromone, analogous to the recognition of a quorum, the group takes action. A nonscientist friend of Stephen Winans, a researcher at Cornell University studying the role of bacterial signaling in plant crown gall disease, serendipitously devised the term “quorum sensing” to describe multicellular bacterial behavior (3). As described by microbiologists, quorum sensing refers to the ability of bacteria to detect population density and coordinate corresponding shifts in gene expression patterns (4).

Multicellular Bacterial Behaviors Regulated by Quorum Sensing

Transcriptional regulatory proteins activated by binding to pheromones enhance expression of genes whose promoters contain a specific sequence element recognized by the protein-pheromone complex.

While the synthases of pheromones and their associated receptors originated early in bacterial evolutionary history (the LuxI/R system of the vibrios has homologues in evolutionarily distant human and plant pathogens), even closely related quorum sensing systems can be repurposed to control a variety of behaviors (5). Behaviors activated in density-dependent response pathways include bioluminescence, symbiotic plant root nodule formation by nitrogen fixers, bacterial mating, antibiotic production, biofilm formation, and extracellular DNA uptake in response to harsh conditions (6). In all of these cases, bacteria only benefit from engaging in these activities collectively at high cell densities. Vibrio fischeri, for example, engages in a partnership with the Hawaiian bobtail squid by producing light when the bacteria grow in the nutrient-rich environment of the squid's specialized light-producing organs. This relationship provides direct benefits for the squid, which can escape the notice of predators on moonlit nights by counter-illuminating themselves.


notice of predators on moonlit nights by counter-illuminating themselves. By using the light produced by their complement of V. fischeri, the squid can eliminate telltale shadows on the seabed. Light production only confers a selective advantage for bacteria when they are growing within the confines of the squid. Free-floating populations of V. fischeri do not express high levels of luciferase. Using the LuxI/R quorum sensing system, the bacteria are able to avoid the individual fitness cost of luciferase synthesis in the open ocean. While many bacterial species, including V. fischeri, do no harm to the hosts they colonize, quorum sensing also aids bacterial pathogens. Highly developed immune systems, like those of humans, are adept at preventing infectious disease; however, bacteria are able to avoid a host immune response by suppressing expression of virulence factors until enough cells are present to overrun the host (7). Interestingly, not all pathogens conform to this model of virulence regulation by quorum sensing. In Vibrio cholerae, the etiological agent of cholera, a two-pheromone quorum-sensing network arranged in parallel represses virulence factor production at high population densities, but permits their expression at low densities (2). More background

on the pathogenesis of cholera reveals the evolutionary rationale for this idiosyncrasy, as it is the diarrhea resulting from cholera that allows V. cholerae to spread (2). Concurrent inhibition of biofilm formation permits bacteria to detach from their intestinal hideout and “go with the flow” (2). Biofilm formation, involving the creation of multicellular aggregates, is an important intermediate step in the progression of bacterial infection. Although significant variation exists in the structure and composition of biofilms, most bacterial species create biofilms through the secretion of exopolysaccharides, the primary constituent of the extracellular matrix that binds cells into a biofilm (8). Bacteria in biofilms are more resistant to antibiotics and the host immune system, properties useful for maintaining chronic infections (9). It is estimated that biofilms participate in 65 percent of human bacterial infections, making biofilm formation and maintenance the focus of extensive research (8). Perhaps not unexpectedly, quorum-sensing signals have been shown to make important contributions to biofilm development in many species of microorganisms (8).

Image courtesy of Nick Hobgood.

Vibrio fischeri has a symbiotic relationship with the bobtail squid.

Disrupting Quorum Sensing Systems

By employing intercellular signaling, bacteria enjoy a seemingly limitless ability to adapt to the vagaries of their environment. Our body's defenses are all too often unable to adequately respond to the challenges posed by a cooperative group of bacteria producing virulence factors or encasing themselves in protective biofilms. Fortunately, this model of host-bacterial interaction, in which bacteria always have the upper hand, is only partially correct. When fighting infections by Pseudomonas aeruginosa, whose virulence depends on quorum sensing, the human body is able to use its paraoxonase family of organophosphate-hydrolyzing enzymes to degrade the acyl homoserine lactone pheromone produced by the bacteria (6). Inhibition of quorum sensing, or "quorum quenching," has also been observed in barley and fungi, suggesting that quorum quenching is an evolutionarily successful strategy for combating bacterial infection (10). In an era dominated by multi-drug resistant bacteria, alternatives to antibiotics active on a broad class of microorganisms are in high demand. It is a sobering thought that Penicillium fungi, from which the "miracle" drug penicillin was derived, produce small molecules that inhibit quorum sensing (6). These molecules mimic the pheromones of specific bacterial species, competitively binding to receptors of the true pheromone to prohibit activation of the quorum-sensing pathway (6). Apparently, even the organisms responsible for sparking the mass production of antibiotics cannot rely solely on that class of biomolecules for protection. Biotechnology may yet restore Penicillium fungi to the forefront of anti-microbial warfare due to recent advances in methods of disrupting quorum sensing. The quorum sensing systems of gram-negative bacteria use acyl homoserine lactones as primary pheromones. Despite the potential for cross-species signaling, most pheromone-receptor systems are used for intraspecies communication, specific to a single bacterial species or strain (11). Exceptions do exist, however. For example, the synthase of AI-2, a pheromone



Image courtesy of MethoxyRoxy

Neurons are well known for communicating via synaptic connections.

structurally unrelated to the acyl homoserine lactones, is encoded in the genomes of nearly 50 percent of all fully sequenced bacteria (2). A recent study by Belgian researchers affiliated with Ghent University found that the expression of genes regulated by quorum sensing in multiple Vibrio species could be reduced by AI-2 quorum sensing inhibitors that bind to the LuxPQ receptor of AI-2 (12). Libraries of candidate quorum-sensing inhibitors can be rapidly screened using lethal genes linked to pheromone-dependent promoters, assaying the success of each inhibitor by locating bacteria that grow (13). A group of British researchers has taken a different approach to block bacterial communication, designing a chemically inert polymer that traps the pheromones of V. fischeri (14). The ultimate goal of quorum-inhibition research is to produce a broad-spectrum inhibitor, which could act on a single pheromone produced and recognized by multiple bacterial species, as in the case of AI-2.

Emerging Applications

In the study of bacterial quorum sensing, much emphasis has been placed on applying emerging findings on the structure of intercellular signaling pathways to the prevention and treatment of bacterial infections. The potential of quorum sensing, however, also stems from its unique combination of bacteria with simple systems

for intercellular signaling. From a bioengineering perspective, quorum-sensing bacteria create tantalizing possibilities for gene circuit design, in which well-defined chemical inputs are linked to desired outputs (6). One astonishing multicellular machine has already been developed: a micro-scale clock driven by the oscillating fluorescence output of recombinant bacteria (15). The gene circuit driving this behavior contains components from the LuxI/R pheromone-receptor pair of the vibrios, but links their activation to the expression of green fluorescent protein and an enzyme that quenches the LuxI/R pheromone signal (15). Coupling the transcriptional activator to its own repressor leads to the establishment of tunable, periodic fluorescence (15). Removal of excess extracellular AHL via a microfluidics system is required for the maintenance of oscillation, which demonstrates, as in the V. fischeri – squid symbiosis, the importance of environment to the long-term stability of quorum sensing behaviors (15).

Conclusions

From bacterial biofilms to networks of neurons, life depends on cellular communication as the basis of its continued survival. Although the methods of intercellular communication employed by plants, animals, and fungi are far more complex than those of bacteria, the existence of quorum-sensing systems indicates that precursors of multicellularity penetrate further into the tree of life than we previously expected. Moreover, the regulatory networks associated with quorum sensing are hardly archaic. Humans consist of roughly one trillion cells, far outnumbered by the ten trillion bacterial cells estimated to reside on or around our body (16). Quorum sensing systems mediate interactions between our body and its associated microbes that enrich our lives tremendously by aiding us in digestion, protection against environmental hazards, and vitamin synthesis (16). Conversely, quorum sensing is used by invasive pathogens to maximize harm to our body, often overshadowing the positive contributions of our microbiome. The challenge of fully

elucidating quorum sensing will continue to stimulate research into bacterial communication in the near future. It is likely that bacteria have been communicating using pheromones for billions of years. For the first time in history, when bacteria talk, the world listens.

References
1. M. B. Miller, B. L. Bassler, Annu. Rev. Microbiol. 55, 165-199 (2001).
2. C. M. Waters, B. L. Bassler, Annu. Rev. Cell Dev. Biol. 21, 319-346 (2005).
3. C. D. Nadell, J. B. Xavier, S. A. Levin, K. R. Foster, PLoS Biology 6, 0171-0179 (2008).
4. G. M. Dunny, S. C. Winans, Cell-Cell Signaling in Bacteria (ASM Press, Washington, D.C., 1999), pp. 1-5.
5. E. Lerat, N. A. Moran, Mol. Biol. Evol. 21, 903-913 (2004).
6. S. Choudhary, C. Schmidt-Dannert, Appl. Microbiol. Biotechnol. 86, 1267-1279 (2010).
7. V. E. Wagner, J. G. Frelinger, R. K. Barth, B. H. Iglewski, Trends Microbiol. 14, 55-58 (2006).
8. D. G. Cvitkovitch, Y. Li, R. P. Ellen, J. Clin. Invest. 112, 1626-1632 (2003).
9. Y. Irie, M. R. Parsek, Curr. Top. Microbiol. Immunol. 322, 67-84 (2008).
10. S. Uroz, Y. Dessaux, P. Oger, ChemBioChem 10, 205-216 (2009).
11. S. M. Rollins, R. Schuch, Virulence 1, 57-59 (2010).
12. G. Brackman et al., Microbiology 155, 4114-4122 (2009).
13. S. Kjelleberg, D. McDougald, T. Bovbjerg Rasmussen, M. Givskov, Chemical Communication among Bacteria (ASM Press, Washington, D.C., 2008), pp. 393-416.
14. E. V. Piletska et al., Biomacromolecules 11, 975-980 (2010).
15. T. Danino, O. Mondragon-Palomino, L. Tsimring, J. Hasty, Nature 463, 326-330 (2010).
16. B. Bassler, "Bonnie Bassler on how bacteria 'talk'," video recording.



Health

Disease Prevention on College Campuses
Diana Pechter '12


Image retrieved from http://upload.wikimedia.org/wikipedia/commons/1/1b/OCD_handwash.jpg (Accessed 29 Jan 2011).

Handwashing is believed to be the key component to minimizing mass transmission of contagious illnesses on campus.

Given the crowded, communal lifestyles of college students, it is not surprising that germs spread quite rapidly. A simple cold or flu virus can infect an entire campus in a matter of weeks (1). That is why institutional health facilities are constantly looking for more effective ways to stop the spread of infectious diseases. An ad for Purell, a popular brand of hand sanitizer gel, shows a mother and daughter pushing a grimy shopping cart and suggests that they can touch bacteria-ridden items and then be purified by the germ-killing power of Purell. But does hand sanitizer really live up to the hype? If not, what is the best way to prevent the spread of disease on a college campus?

Disease Transmission

Infectious diseases are caused by infective agents such as bacteria, viruses, fungi, and parasites (2). Infective agents can spread through either direct-contact transmission, which involves the physical transfer of bacteria from a colonized individual to a susceptible host, or indirect-contact transmission, which involves contact of a susceptible host with a contaminated object, such as a public doorknob, water fountain, or computer terminal (3).

Hand-Washing is Key

Some health professionals believe that the key to minimizing the mass transmission of these illnesses lies in the propagation of information through hygiene campaigns and the availability of hand sanitizers across campuses. The Centers for Disease Control and Prevention (CDC) asserts that the simplest, most effective method of disease prevention is hand washing, defined as vigorous and brief rubbing together of all surfaces of soap-lathered hands, followed by rinsing under a stream of water (1). Research has shown that, in addition to hand washing, the use of hand sanitizers can provide a convenient supplement. Though they do not remove soil or organic material, these products, such as Purell, can kill microorganisms through disinfectant action. In a study by White et al. (2005), a campaign to increase hand hygiene practices, coupled with the introduction of an alcohol-based antibacterial gel and reinforced by messages to continue washing and sanitizing, was successful in decreasing the incidence of upper respiratory infections on a college campus. Other studies have reported similar decreases in illness with a heightened awareness of hand hygiene.

Increase Compliance: Change the Way Students Think

However, the answer is not so simple. A significant decrease in sickness does not equate to total elimination. Even with constant reminders and messages, students often evade hand-washing standards and continue spreading germs in food courts, fraternity basements, and other frequently visited campus locations. So the question becomes: how can we change the way students think about hygiene, and how can hand hygiene



campaigns be made more effective among a student population? Social perceptions could play a major role in why students heed or ignore health tips.

Re-Defining What is Gross

The concept of “disgustingness” may contribute to student beliefs about what constitutes healthy lifestyle habits. If something disgusts us, we have a motivation to avoid it. By using this as a social tool, college students may learn to respond appropriately to stimuli that may negatively impact their health. Feelings of disgust tend to trigger disgust responses such as a distinctive facial expression, bodily withdrawal, and nausea (4). This response of revulsion contains a cognitive component, which involves an impression that an object has been contaminated. College students have been shown to reject a liked beverage after a sterilized cockroach has been immersed in it (4). Though a simple awareness of an object is not enough to elicit a disgust response, the response will surface when an object is judged to fall under a certain description, including the object's nature, origin, or history (5). Rozin (1986) suggests that three routes of acquisition exist to establish disgust triggers: contamination (an object is disgusting because it comes in contact with something disgusting),

generalization (an object is disgusting because it is similar to something disgusting), and evaluative conditioning (an object is disgusting because it is paired with something disgusting through conditioning).

Effective Campaign Uses Relevant Threat

How can these disgust triggers be used to encourage health consciousness among college students? A hand-washing study conducted at a mid-sized university found that common threats used in hand-washing campaigns, such as spreading germs and getting sick, were not relevant enough to cause a behavior change in the target audience (6). However, emphasizing the “grossness” of not washing hands, such as urine and feces on hands, produced the greatest behavior change (7). Thus, the campaign was more successful when it took into account the most relevant threat for its student audience.

Spreading Germs by Food Sharing: Impact of Social Dictates

One common germ-spreading mechanism on college campuses involves the transfer and sharing of food and drink. The prevalence of food sharing could make it a target of a health campaign.

Image courtesy of the Centers for Disease Control and Prevention.

However, the practice of food sharing may be embedded in social convention and thus difficult to prevent. If someone asks you for a bite of your sandwich, how would you respond? If the person is a close friend, she might feel insulted if you deny her this privilege, since it could imply that you do not feel close enough to her to share. Sharing is personal. Husbands and wives share germs frequently through kissing and other acts of intimacy. Brothers and sisters use the same living spaces and tend to exchange germs freely. Society appears to deem familial bonds worthy of germ sharing. But the rules of friendly germ spreading are not so easily defined. If a stranger started chatting with you on a bus and asked to have a sip of your drink, would you think that this was appropriate? If a college campus health campaign focused on the disgustingness of food and drink sharing, this could be an important step toward stopping the spread of disease.

References
1. “Handwashing: Clean hands save lives” (2010). Centers for Disease Control and Prevention. Available at http://www.cdc.gov/handwashing/ (17 January 2011).
2. J. Steckelberg, What's the difference between a bacterial infection and a viral infection? (2009). Mayo Foundation for Medical Education and Research. Available at http://www.mayoclinic.com/health/infectious-disease/AN00652 (17 January 2011).
3. K. Pyrek, Breaking the chain of infection (2002). Available at http://www.infectioncontroltoday.com/articles/2002/07/breaking-the-chain-of-infection.aspx (17 January 2011).
4. C. Knapp, De-moralizing disgustingness. Philosophy and Phenomenological Research 66, 253-278 (2003).
5. P. Rozin, L. Millman, C. Nemeroff, Operation of the Laws of Sympathetic Magic in Disgust and Other Domains. Journal of Personality and Social Psychology 40, 703-712 (1986).
6. R. Botta, K. Dunker, K. Fenson-Hood, et al., Using a relevant threat, EPPM and interpersonal communication to change handwashing behaviors on campus. Journal of Communication in Healthcare 1, 373-381 (2008).
7. C. Sadler, Do you really need hand sanitizer? (2009). Available at http://www.cbc.ca/consumer/story/2009/11/09/consumer-handsanitizer.html#ixzz0fdH46wBc (17 January 2011).

Influenza is one of the most common viruses transmitted from student to student around campus.



Technology

Cellular Phones

With Great Technology Comes Great Risk
Mike Mantell '13

In the year 2040, cell phones have essentially taken over the world. Citizens have their cell phones glued to their ears more often than not. As a result of the constant radiation, 95% of the entire world develops brain tumors and dies. The only survivors are the Amish, the very young children, and the homeless. Is this the plot of a bad science fiction movie, or an exaggerated yet grounded concern? Scientists cannot seem to agree. Scientists can agree, however, that brain tumors are a serious issue, causing nearly 13,000 American deaths per year. A brain tumor is an abnormal and uncontrolled growth of cells in the cranial region. These tumors can be benign, which at most would cause nearby brain regions to shift. The shifting of a brain region can affect its function, potentially causing visual or cognitive impairment. In the worst case, a brain tumor is malignant, causing cancer and possibly death (1). Brain tumors can arise for different reasons, one of which is radiation.

Radiation

All cell phones emit radiation. There are two kinds of radiation: ionizing and non-ionizing. Ionizing radiation is high-frequency and has enough energy to remove electrons and thus ionize atoms. Enough ionization can damage a cell's DNA and cause the growth of tumors and cancer (4). Non-ionizing radiation does not have enough energy to damage DNA. Among electromagnetic waves, the ionizing types are gamma rays, x-rays, and some UV rays. Visible light, infrared rays, microwaves, and radio waves are all non-ionizing radiation. Cell phones emit radio waves, among the lowest-frequency electromagnetic waves, which are not expected to induce brain cancer or damage DNA. The United States has even legally set a precautionary maximum radiation rate for cell phones (3). Many scientists argue that the radiation emitted by cell phones is simply too weak to have any sort of impact on humans.


Image courtesy of Thomas Steiner, photographer.

There is much debate over the effects of prolonged cell phone usage.

The legal maximum radiation emitted by a cell phone is less than 1% of that of a kitchen microwave (3). If microwaves don't even give off ionizing radiation, then how could a cell phone ever be dangerous? Other scientists, however, argue that while cell phones emit feeble frequencies, years of constant exposure to radiation can add up and lead to health issues.
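A rough sense of why radio waves sit so far below the ionizing regime comes from comparing photon energies. The back-of-the-envelope calculation below is not from the article; the 1.9 GHz carrier frequency and the roughly 4 eV bond energy are assumed, typical values.

```python
# Energy of one radio-frequency photon versus the few electron-volts needed
# to break a chemical bond or ionize an atom.  The frequency and bond energy
# here are assumed, order-of-magnitude values.
PLANCK_J_S = 6.626e-34            # Planck constant (J*s)
J_PER_EV = 1.602e-19              # joules per electron-volt

phone_photon_eV = PLANCK_J_S * 1.9e9 / J_PER_EV   # ~8e-6 eV for a 1.9 GHz photon
bond_energy_eV = 4.0                               # rough single-bond energy

print("RF photon energy: %.1e eV" % phone_photon_eV)
print("Shortfall relative to a ~4 eV bond: factor of %.0e" % (bond_energy_eV / phone_photon_eV))
```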

Studies Disproven

The National Cancer Institute does not classify cell phone usage as a cause of brain tumors. The NCI sponsored a study, spearheaded by Dr. Peter Inskip and Dr. Martha Linet. Inskip and Linet acquired 782 patients with brain tumors, including tumors of the lining of the brain and of the nerve that connects the brain to the ear. Inskip and Linet also collected 799 patients of the same age, sex, and race who did not have brain tumors. The scientists recorded and compared the cell phone usage of the two groups. Inskip and Linet scrutinized the data and found absolutely no correlation between the tumors and any sort of cell phone usage. Where users held the phone, how often users talked on the phone, the average length of conversations: none of these factors shared a correlation with brain tumors (2).

A counter argument

The Hardell group, a team of Swedish researchers, would strongly disagree with this conclusion. The researchers administered a questionnaire about cell phone usage to 905 subjects with malignant brain tumors, 1,254 with benign brain tumors, and 2,162 control subjects with no tumors. The Hardell group found a correlation between tumors and cell phone usage when the phone was used on the same side of the head as the tumor. They found a heightened risk of brain tumors in those who had been using phones for over ten years, and especially in subjects under the age of 20. The Swedish conclusion: long-term cell phone use does cause brain tumors (5). How could two seemingly similar studies result in such starkly different conclusions? Some critics claim that the NCI study used a faulty subject group, arguing that most patients' tumors were not near where the patients had held their cell phones. This would mean that the subject group had brain tumors completely unrelated to cell phone use, invalidating the study. Some critics attempted to debunk the Swedish study by claiming that Swedes are more susceptible to brain tumors than other populations, and that the study is therefore not applicable to most Americans. Whichever side of the battlefield you choose, there is an arsenal of studies against you. The reason that there are no good answers in this debate is that results cannot simply be deduced from testing lab



mice. It is understood that cell phone radio waves have weak frequencies, but there is no precedent or test for twenty years of exposure to any kind of wave. Imagine commanding a gerbil to seriously injure a human being. All the gerbil is allowed to do is headbutt the human in the stomach. Basic Newtonian physics will tell you that the force a gerbil can produce with a skull-bash is far too low to cause any noticeable damage to human flesh. Now, imagine that the gerbil will be headbutting the human in the stomach for twenty straight years. Will the human be injured after twenty years? It is now clear that one cannot even attempt to answer this question without entering a realm of speculation. So looking back to tumors, we know that cell phones may not yet have caused tumors, but we do not know whether they one day will. In such a long-term situation, we are the lab rats. As the NCI and the Hardell group did, we can try to look at present-day tumors and trace them back to previous cell phone habits. But these kinds of studies are not conclusive. Consider the relationship between tobacco use and lung cancer. We know that smoking directly causes lung cancer; however, if we looked at a sample of people who had been


smoking for ten years or less, we would be able to establish no such relationship. Tobacco use mostly does not cause a noticeable increase in cancer risk until 15 to 35 years later (6). Then how can we make such judgments about cell phones and brain cancer after only a decade? We can't. We cannot make such sweeping generalizations about the long-term effects of cell phones yet. We simply have not been using them long enough to make these claims accurately. But there are a few known facts about cell phone radiation. For example, the Specific Absorption Rate of radio waves is much higher in children's brains than in those of adults.

Conclusion

While the long-term correlation between brain tumors and cell phone use is indeterminate, there are still certain precautions that can be taken. Children should not use cell phones except for emergencies. Earpieces should be used whenever possible, and phones with the lowest Specific Absorption Rate (a measure of cell phone radiation) should be selected. When it comes down to it, I would rather be cautious than dead. So if you do act carefully with your cell phone use, then who knows, maybe

in the year 2040 you will be one of the 5% of remaining humans responsible for repopulating the Earth. I will also be one of the 5%. You can thank me then.

References
1. Brain Tumor. Available at http://www.cancer.gov/cancertopics/types/brain (27 May 2010).
2. G. Kolata, “Two Studies Report No Links to Cancer In Cell Phones' Use” (2000). Available at http://proquest.umi.com/pqdlink?Ver=1%Exp=05-25-015&FMT=7%DID=65378888&RQT=309&cfc=1 (26 May 2010).
3. N. Lee, Cell Phone Radiation Levels. Available at http://reviews.cnet.com/cell-phone-radiation-levels (26 May 2010).
4. Radiation Exposure and Cancer (2010). Available at http://www.cancer.org/docroot/ped/content/ped_1_3x_radiation_exposure_and_cancer.asp (27 May 2010).
5. L. Hardell, M. Carlberg, International Journal of Oncology 35, 5-17 (2010).
6. Advice from University of Pittsburgh | Cell Phone Dangers and Hazards. Available at http://cell-phone-dangers.com/research/cellPhonePrecautionPittsburgh.html (27 May 2010).



Poetry

A Tribute to Biotechnology
Yoon Kim '13

Biotechnology is nothing new
There's Mesopotamia's beer breweries
Mayan fermented cacao and Viking fondue
All so delicious and all still in use.
But in medicine, biotech may fall out of style:
Mangosteen extract to calm your fever,
Medieval leeching to balance your bile.
Undiscouraged, biotechnicians examine closer
Zipping down to something essential in our juices,
Finding out of what we're composed, and who's our composer?
Stripping down to the structure of DNA and what it all produces
“Let's reconfigure A, T, G and C” they're all agreeing
we'll create everything -- a better cheese
a better goldfish, a better human being.

Image retrieved from http://upload.wikimedia.org/wikipedia/commons/c/c3/DNA_Furchen.png (Accessed 31 Jan 2011).




Ecology

Isotopic and Molecular Methods Sourcing Environmental PAHs: A Review
Elise Wilkes '12

Polycyclic aromatic hydrocarbons (PAHs) are organic pollutants that accumulate in the environment as a result of both natural and human processes. The molecular and isotopic signatures of these compounds vary depending on production conditions, and can be exploited to trace PAH contaminants in the environment to a particular source or responsible party. Environmental forensics investigations relating to PAHs are often motivated by environmental remediation or litigation efforts and depend heavily on geochemical principles. The field has recently benefitted from the application of compound-specific stable isotope analysis (CSIA) to the source apportionment of PAHs. Although many advances have been made with this strategy since it was introduced sixteen years ago, further research is needed to overcome deficiencies in the database of isotopic signatures, technological limitations, and a lack of standardized methods.

Introduction

Polycyclic aromatic hydrocarbons (PAHs) are planar, high molecular weight organic compounds of environmental concern due to their suspected mutagenic and carcinogenic properties. PAHs are composed of two or more fused aromatic rings, formed during the incomplete combustion of biomass and fossil fuels or during the slow conversion of organic matter into petroleum (1, 2). PAHs formed from these two processes can be described as pyrogenic or petrogenic, respectively (1, 3, 4). Although natural processes occasionally deposit these contaminants, the deposition rate of PAHs into environmental reservoirs has been greatly accelerated in recent years due to human industrial activities and fossil fuel consumption (5). A variety of environmental forensics and geochemistry techniques have emerged for studying organic pollutants over the past few decades in response to growing concerns about human impact. These techniques exploit the unique molecular or isotopic compositions of PAHs that arise from different production processes in order to provide insight into the sources of contaminants on a local or larger scale. Source apportionment techniques are of particular interest because they can be used for the purposes of environmental remediation, prevention of future contamination, and evidence in litigation (6, 7). Molecular compositions of PAH mixtures are frequently determined using gas chromatography mass spectrometry (GC-MS) and gas chromatography with a flame ionization detector (GC-FID), in addition to other techniques (8). These analytical methods are capable of quantifying concentrations and ratios of particular PAH compounds. Molecular signatures can be inconclusive in the absence of other information, however, so molecular analyses are often paired with a newer technique called

Image retrieved from http://upload.wikimedia.org/wikipedia/commons/c/c0/Polycyclic_Aromatic_Hydrocarbons.png (Accessed 29 Jan 2011).

Three examples of polycyclic aromatic hydrocarbons: benz[e]acephenanthrylene, pyrene, and dibenz[a,h]anthracene.

compound-specific stable isotope analysis (CSIA). CSIA has become an increasingly common and trusted analytical method for PAH source apportionment over the past sixteen years. It exploits the isotopic rather than the molecular signature of PAH compounds, a signature which tends to be less subject to interference by weathering processes (9). The aim of this review is to provide an overview and analysis of the current state of source apportionment techniques as they relate to PAHs from a broad range of sources. The focus will be on the geochemical principles and methods employed in CSIA, as well as its complementary molecular methods. Both strategies will then be analyzed in terms of known applications and limitations. Future research should be directed toward overcoming the field's present shortcomings, which include deficiencies in the database of known isotopic fingerprints as well as technological and methodological limitations.

Isotope Geochemistry Principles Employed

CSIA is a technique that generates isotope ratio data. The main ratio of interest is that of 13C to 12C. This ratio is reported using delta notation (Eq. 1), which gives the permil (‰) deviation of the isotope ratio of a sample from that of a standard:

\[
\delta^{13}\mathrm{C}_{\mathrm{sample}} = \left[ \frac{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{sample}}}{(^{13}\mathrm{C}/^{12}\mathrm{C})_{\mathrm{standard}}} - 1 \right] \times 10^{3} \quad (\text{‰, VPDB}) \tag{1}
\]
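As a quick numerical illustration of Eq. (1), the short sketch below converts a measured 13C/12C ratio into a δ13C value. The VPDB reference ratio given here is an approximate, commonly quoted value, and the sample ratio is a made-up example rather than a measurement from any study cited in this review.

```python
# Illustrative helper for Eq. (1): permil deviation of a sample's 13C/12C
# ratio from the reference standard.  R_VPDB is an approximate value and is
# an assumption, not a number taken from this article.
R_VPDB = 0.0111802   # approximate 13C/12C ratio of the VPDB standard

def delta13C(r_sample, r_standard=R_VPDB):
    """Return delta-13C in permil, following Eq. (1)."""
    return (r_sample / r_standard - 1.0) * 1.0e3

# Hypothetical sample whose ratio is 2.7% lighter than VPDB:
r_sample = R_VPDB * (1.0 - 0.027)
print(round(delta13C(r_sample), 1))   # -27.0 permil, within the C3-plant range quoted below
```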

The Vienna Peedee belemnite standard (VPDB) is the most commonly used standard for this type


of analysis, defining 0‰ on the δ-scale (5, 10). The primary geochemical concept underlying CSIA involves kinetic isotope effects. These effects determine which isotopes are preferentially incorporated into PAHs during formation or into their organic precursors during photosynthesis. Kinetic effects alter the isotope ratios of the resulting PAHs at each major stage and pathway to formation, as described below. These situations collectively demonstrate that the isotopic and molecular signatures of PAHs are determined both by the isotopic composition of the precursor compounds and by formation conditions (3).

Assimilation of CO2

A kinetic isotope effect alters the isotopic composition of precursor organic materials as a consequence of CO2 assimilation by autotrophs through either a C3 or C4 photosynthetic pathway. Both photosynthetic pathways discriminate against 13C, but to different extents: C3 plants assimilate heavier isotopes more slowly than C4 plants do. Thus, C3 plants have isotope values ranging from -22 to -30‰, while C4 plant values range from -10 to -18‰ (10). The resulting isotope ratios are reflected to varying degrees in petrogenic PAHs made from plant matter and pyrogenic PAHs formed from the incomplete combustion of plant-derived fuels.

Petrogenic PAHs

Crude oil and coal are produced by the thermal maturation of organic material of marine or terrestrial origin (3). PAHs also form during these processes and can be found in crude oil and refined petroleum products. Crude oil typically contains between 0.2 and 7% total PAHs; refined petroleum products such as diesel fuels and gasoline contain a combination of these parent crude oil PAHs and trace amounts of PAHs formed during refining processes (4). The isotopic compositions of different petroleum products will vary from one another because they originate from different sources of crude oil (8).

Pyrogenic PAHs

Pyrogenic PAHs are formed during the incomplete combustion of fuels. Organic compounds are first cracked into smaller, unstable hydrocarbon fragments during pyrolysis and then undergo a series of radical reaction pathways involving carbon-carbon bond formation, cyclisation, and ring fusion to form more stable aromatic compounds (3, 5). As these radical reactions occur, 12C is preferentially incorporated into bonds over 13C in accordance with a normal kinetic isotope effect. Thus, PAHs get progressively depleted in 13C as the number of rings in the molecular structure increases (11). Kinetic isotope effects, rather than equilibrium isotope effects, predominate in PAH formation because these processes occur rapidly and do not achieve equilibrium.

Degradation

In contrast to formation pathways that generate pyrogenic and petrogenic PAHs, degradation pathways through weathering and other natural processes do not significantly affect the isotopic signature of PAHs. O'Malley et al. found

that the isotope ratios of PAHs are preserved during processes such as volatilization and photolytic and microbial degradation (i.e., the isotopic signature of PAHs is conservative) (9). These conditions make it possible to use isotopic fingerprints to implicate a source in the creation of PAHs because the fingerprint can be assumed to remain constant.

Molecular Characteristics Exploited

Petrogenic compounds can be distinguished from pyrogenic compounds based on different molecular “fingerprints.” The aromatic rings of petrogenic PAHs frequently contain alkyl substituents. Alkylated PAHs are more abundant than the parent PAH compounds in petrogenic mixtures, whereas alkylated PAHs are far less abundant than the unalkylated parent compounds in pyrogenic mixtures (4). Additionally, low molecular weight compounds are more common in petrogenic PAHs while high molecular weight compounds are more common in pyrogenic PAHs (2). Examples of commonly analyzed PAH compounds for molecular analyses are depicted in Fig. 1.

Methods

Isotopic Methods

Compound-specific stable isotope analysis pairs a gas chromatography separation method with an isotope-ratio mass spectrometer (GC-C-IRMS, Fig. 2) to yield the isotopic ratios of individual compounds in a heterogeneous sample. The gas chromatograph separates organic components from one another in complex mixtures and is attached to a combustion furnace which combusts the organic components into CO2 (5). The CO2 will have a mass of 45 or 44 depending on whether it contains 13C or 12C. The CO2 then passes continuously through an isotope ratio mass spectrometer where the isotope ratios of the compounds are determined by comparison with the 45:44 mass-to-charge ratio of reference CO2 (5, 10, 12). Purification procedures vary from laboratory to laboratory but usually include an extraction step and a column chromatography separation procedure. These procedures must maintain the isotopic integrity of the PAH samples in order to be useful for source apportionment. Dichloromethane is often used for the extraction, and a silica gel column is typically used for the column chromatography step (11, 13). The column purification step is necessary to separate the aliphatic fraction from the PAH fraction because the two would otherwise coelute during GC-C-IRMS analysis (7). Kim et al. recommended that additional high-performance liquid chromatography (HPLC) and thin layer chromatography (TLC) purification steps be employed as well, as detailed in Fig. 3. The authors reported no isotopic fractionation as a consequence of the purification procedures, even in cases of low yields (7). O'Malley et al. also reported a sample processing strategy involving extraction and column purification that did not alter the isotopic signature of standard compounds, even when less than 50% of the starting material was recovered (9). In addition to the δ13C analysis, hydrogen stable isotopes can be used to further elucidate the sources of PAHs.


Sun et al. reported δD values in conjunction with δ13C values, and found that the combination allowed a much greater ability to differentiate among PAHs derived from petrol, jet fuel, and different coal conversion processes than using δ13C alone. Sun et al. further noted that deuterium enrichment takes place simultaneously with 13C depletion. This deuterium enrichment is consistent with expectations based on PAH formation mechanisms, which typically involve dehydration steps. Dehydration allows lighter hydrogen isotopes to preferentially leave the molecular structure because C-H bonds are weaker than C-D bonds (11).

Molecular Methods

Two other gas chromatography-based techniques predate GC-C-IRMS technology and are useful for source apportionment of PAHs. GC-MS and GC-FID can be used to determine concentrations and patterns of particular PAH compounds in a mixture. GC-FID reveals the relative amounts and presence of PAH compounds; GC-MS provides similar information and also includes a mass spectrum (8). Ratios of fluoranthene/pyrene and anthracene/phenanthrene are two commonly examined ratios in environmental forensics investigations (14). Ratios such as these can be compared to extensive chemical fingerprint databases to provide a preliminary test as to whether PAHs are pyrogenic or petrogenic. Typical analyses examine the relative numbers of alkylated versus parent PAHs and low versus high molecular weight PAHs in a mixture (4, 8).

Investigated Sources and Applications of CSIA

CSIA, often used in conjunction with molecular methods, has been used successfully to apportion PAHs in air, water, sediment, and soil samples to a broad range of sources and production conditions. O'Malley et al. first demonstrated that CSIA could be used for source apportionment by showing that PAHs emitted by wood burning exhibit a different isotopic signature than those found in car soot (9). O'Malley et al. noted that low molecular weight PAHs enriched in 13C are characteristic of pyrogenic mixtures, and high molecular weight PAHs depleted in 13C are characteristic of petrogenic mixtures, providing a precedent for using isotopic data to source environmental PAHs (9, 15). McRae et al. further demonstrated the utility of CSIA by reporting that PAHs generated by coal and biomass pyrolysis, and in diesel particulates, contained substantially different δ13C values (5). These three sources are irresolvable without CSIA, as demonstrated by high performance liquid chromatography (HPLC) data reprinted in Fig. 4, followed by a more successful isotope analysis in Fig. 5. A separate study found differences in δ13C of atmospheric PAHs which allowed the authors to conclude that automotive exhaust contributed the most to atmospheric PAHs in Beijing, while coal combustion was the major contributor to air in Chongqing and Hangzhou (16). CSIA has also been applied to tar identification (11). Environmental PAHs with extremely low, variable 13C compositions were linked to biodegradation, suggesting that microbially generated PAHs tend to be

more depleted in 13C than those derived from other processes (13). Analysis of δ13C has also been proposed as a means of studying paleo-fire activity to learn about climate-biosphere interactions because it has been demonstrated that C3- and C4-derived PAHs formed by combustion are isotopically distinct. O'Malley et al. discovered that PAHs formed from biomass burning during forest fires largely retain the isotopic composition of the original plant material (17). In addition to research investigating which types of sources can be distinguished using isotope ratios, many successful investigations relating to particular environmental sites have been published. For example, CSIA was used to attribute PAHs in sediments from St. John's Harbor in Newfoundland to a primarily wood-burning source, rather than to crankcase oil or other petroleum products (9). In another study, molecular methods attributed PAHs in an urban estuary in Virginia to wood-treatment facilities, while CSIA revealed an additional contribution by coal transport, a source that had not been revealed by previous techniques or been anticipated by the authors (2). δ13C measurements have also been used to analyze product versus source PAHs in order to learn about the mechanisms of PAH formation. These mechanisms can provide insight into sources that utilize different reaction conditions. For example, it was demonstrated that PAHs formed from different coal conversion processes could be differentiated due to a progressive enrichment of 12C accompanying higher temperatures of formation (18). This research indicated that the isotopic values of PAHs from coal are likely a function of the extent of ring growth required to form PAHs during processing: mild processes such as low-temperature carbonization yield two- or three-ring PAHs with alkyl substituents and isotopic signatures similar to those of the parent coals, while high-temperature carbonization, gasification, and combustion exhibit distinct ranges of -25 to -27‰, -27 to -29‰, and -29 to -31‰, respectively, as ring condensation increases (18).

Limitations

As CSIA becomes an increasingly common source apportionment tool, several considerations need to be addressed. There is currently a lack of standardized methods for CSIA with respect to PAHs. Purification procedures are continuously being modified and are inconsistent across studies. For example, differences in purification procedures were recently listed as a possible explanation for disagreements between two studies seeking to source PAHs derived from creosote wood preservatives (2). Furthermore, although GC-C-IRMS is a fairly sensitive instrument, it requires at least 10 mg/L of an individual PAH for each injected sample, which is a relatively high concentration for natural samples. New techniques need to be developed to improve CSIA for the analysis of environmental samples with low concentrations of organic pollutants, such as particulate matter for air pollution studies. One promising advance within the past year to circumvent this problem was the development of a large-volume temperature-programmable injector technique for GC-C-IRMS analysis of PAHs, to be used in place of the more common splitless


injector method. This technique was demonstrated to measure samples with concentrations as low as 0.07 mg/L (19). Although a vast number of δ13C values for pyrogenic compounds have been published over the past sixteen years, two other source apportionment ratios of potential utility have been neglected. δ13C values for petrogenic PAHs are relatively limited in the literature (3). Furthermore, although Sun et al. revealed that analyzing PAH δD values in combination with δ13C values appears to be a promising strategy for differentiating similar sources, δD data are not yet commonly measured or published (11). This is an area where future research should be directed to expand the capability of CSIA as an environmental forensics tool. CSIA of PAHs can be inconclusive on its own when isotope ranges from different sources are similar. To overcome this limitation, many studies combine carbon CSIA with other molecular analyses of chemical fingerprints, including alkylated ratios, isomer ratios, low molecular weight to high molecular weight ratios, or a statistical analysis called principal component analysis (2, 8, 9). Source apportionment using chemical fingerprints faces even greater limitations than CSIA, however, and is not always reliable on its own. Molecular signatures are far more subject to interference from weathering than isotope ratios. For example, one study found that after only eighty days of weathering, parent PAHs predominate over alkylated species, meaning that the two types of compounds weather at different rates (13). This change could lead to an incorrect conclusion in some cases that a mixture of PAHs was pyrogenic rather than petrogenic. Moreover, the molecular characteristics for many potential sources are not unique, further limiting the value of chemical fingerprinting independent of CSIA (17). The reliability of GC-FID and GC-MS methods for measuring molecular signatures also decreases when PAHs originate from multiple sources (8).

Conclusion

PAHs can be found virtually everywhere in the environment and often demonstrate human pollution. Source apportionment of PAHs using molecular methods has been common practice for many years, used for the identification of responsible parties in mystery oil spills or environmental remediation efforts, among other applications. The field of environmental forensics was revolutionized sixteen years ago by the application of compound-specific stable isotope analysis to these investigations, using relatively new GC-C-IRMS technology. CSIA has proven valuable in yielding PAH source apportionment information because isotope ratios tend to remain more constant over time and provide more information than molecular techniques alone. δ13C data have already been reported for a wide variety of PAH sources and reaction conditions, but certain isotope ratios of potential use have been neglected. These include δ13C values of petrogenic sources and δD values for PAHs from all sources. Furthermore, standardized methods still need to be established for the field, and sensitivity and concentration limits could benefit from future technology research. Moving forward, it appears that a combination of molecular and isotopic techniques, rather than sole reliance on one over the other, provides the greatest assurance of apportioning the correct source.

Acknowledgments

Special thanks to Professor Mukul Sharma of the Department of Earth Sciences, Hannah Hallock, and the students of EARS 62/162 for their continued guidance and support. This paper was written as part of EARS 62/162 (Geochemistry).

Figure 1: Examples of common PAHs. For all other figures, please refer to the following: Figure 2: T.C. Schmidt et al., Anal. Bioanal. Chem. 378, 283-300 (2004). Figure 3: M. Kim, M.C. Kennicutt II, Y. Qian, Environ. Sci. Technol. 39, 6770-6776 (2005). Figures 4 & 5: C. McRae et al., Anal. Commun. 33, 331-333 (1996).

References
1. J.M. Neff, Polycyclic aromatic hydrocarbons in the aquatic environment. Sources, fates and biological effects (Applied Science Publishers Ltd., London, 1979).
2. S.E. Walker et al., Org. Geochem. 36, 619-632 (2005).
3. T.A. Abrajano Jr. et al., Treatise on Geochemistry 9, 1-50 (2007).
4. J.M. Neff, S.A. Stout, D.G. Gunstert, Integr. Environ. Assess. Manag. 1, 22-33 (2005).
5. C. McRae et al., Anal. Commun. 33, 331-333 (1996).
6. T.C. Schmidt et al., Anal. Bioanal. Chem. 378, 283-300 (2004).
7. M. Kim, M.C. Kennicutt II, Y. Qian, Environ. Sci. Technol. 39, 6770-6776 (2005).
8. D.L. Saber, D. Mauro, T. Sirivedhin, J. Ind. Microbiol. Biotechnol. 32, 665-668 (2005).
9. V.P. O'Malley, T.A. Abrajano, J. Hellou, Org. Geochem. 21, 809-822 (1994).
10. R.P. Philp, Environ. Chem. Lett. 5, 57-66 (2007).
11. C. Sun, M. Cooper, C.E. Snape, Rapid Commun. Mass Spectrom. 17, 2611-2613 (2003).
12. Z. Muccio, G.P. Jackson, Analyst 134, 213-222 (2009).
13. C. McRae et al., Environ. Sci. Technol. 34, 4684-4686 (2000).
14. D. Kim et al., Chemosphere 76, 1075-1081 (2009).
15. V.P. O'Malley, T.A. Abrajano Jr., J. Hellou, Environ. Sci. Technol. 30, 634-639 (1996).
16. T. Okuda, H. Kumata, H. Naraoka, H. Takada, Org. Geochem. 33, 1737-1745 (2002).
17. V.P. O'Malley, R.A. Burke, W.S. Schlotzhauer, Org. Geochem. 27, 567-581 (1997).
18. C. McRae et al., Org. Geochem. 30, 881-889 (1999).
19. A. Mikolajczuk, B. Geypens, M. Berglund, P. Taylor, Rapid Commun. Mass Spectrom. 23, 2421-2427 (2009).



Chemistry

Selenium in Tuna

White Versus Light and Water Versus Oil Packing
Christina Mai '12, Rebecca Rapf '12, Elise Wilkes '12, and Karla Zurita '12

Canned tuna fish is one of the most commonly consumed types of seafood in the United States as well as a prominent source of dietary selenium. Thirty-two samples of canned tuna were analyzed using fluorescence spectrophotometry to determine whether the type of tuna or packaging liquid significantly affects selenium concentrations. Light tuna showed a slightly greater overall mean concentration of selenium (149.5 ppb) than white tuna (145.1 ppb). Tuna packed in water, regardless of type of tuna, had a higher overall mean concentration of selenium (156.9 ppb) than tuna packed in soybean oil (137.6 ppb). These differences between categories, however, were not statistically significant. In order to get a sense of the average selenium content in a typical can of tuna in the marketplace, the data were collapsed across categories, giving a calculated 12.5 micrograms per can.

Introduction

Selenium is a trace mineral in human diets essential for the proper functioning of the immune system. It is incorporated into selenoproteins and selenium-dependent enzymes, which facilitate antioxidant defense, muscle function, and thyroid hormone production (1). Selenium is also essential for the synthesis of the selenoprotein glutathione peroxidase (2). Furthermore, epidemiological studies spanning fifty years indicate an anticarcinogenic effect of selenium against many forms of cancer (3). Despite its importance as a micronutrient, selenium intake must be monitored because a narrow window separates selenium deficiency from toxicity. According to the Institute of Medicine, the recommended dietary intake of selenium is 55 micrograms per day for adults, and the tolerable upper intake level is 400 micrograms per day (4). Deficiency can make the body more susceptible to disease, whereas high blood levels of selenium can cause symptoms of selenosis, such as hair loss, abnormal functioning of the nervous system, and gastrointestinal upsets (2). Fish is one of the primary sources of dietary selenium in the United States. The most commonly consumed fish in the United States is canned tuna, according to a report by the National Oceanic and Atmospheric Administration, making it a particularly relevant source of selenium to study (5). Knowledge of the selenium content of different brands and species of canned tuna may therefore assist in achieving a diet that avoids either extreme in selenium intake. An additional motivation for considering the selenium content of canned tuna is that tuna bioaccumulates the toxic heavy metal mercury, and selenium reduces vulnerability to mercury toxicity in humans (6). The mechanism of selenium's protective effect against methylmercury is unknown, but its effect on inorganic mercury toxicity is likely due to in vivo formation of mercuric selenide (7).

These considerations of selenium’s role in the overall health of humans motivated our project to quantify the amount of selenium in leading brands of canned tuna fish using fluorescence spectrophotometry. Because many types of canned tuna are available to consumers, our project examines variations in selenium among different species of tuna readily available in grocery stores, and between the packaging of tuna in water versus oil.

Materials and Methods

Source of canned tuna samples

The types of canned tuna used in this experiment were Bumblebee Chunk Light Tuna in Water, Bumblebee Albacore Tuna in Water, Bumblebee Albacore Tuna in Oil, Chicken of the Sea Chunk Light Tuna in Oil, and StarKist Chunk Light Tuna in Water. Albacore tuna is also referred to as white tuna, and light tuna is a combination of skipjack and yellowfin species. The oil packing used was soybean oil.

Chemicals and other materials

All chemicals and materials were provided by the Department of Chemistry at Dartmouth College, and included concentrated nitric acid, concentrated hydrochloric acid, cyclohexane, 2,3- DAN (2,3-diaminonaphthalene), selenium dioxide, filter paper, hydroxylammonium chloride, pipettes, amber bottles, ethylenediaminetetraacetic acid (EDTA), separatory funnels, screw-cap Erlenmeyer flasks, pH meter, 1100W Sharp Carousel microwave, micropipets, volumetric flasks, graduated cylinders, glass funnels, cuvettes, hot plate, fluorimeter (Shimadzu RF-1501 Spectrofluorophotometer), and high pressure Teflon microwave bomb. The procedure for this work was adapted from a recently published paper by Sheffield and Nahir (8).

Preparing a standard curve

A stock solution of Se(IV) was prepared by pipetting 1 mL of selenium dioxide into a 50 mL volumetric flask and diluting with nanopure water to yield a 14 ppm selenium solution. A 1.4 ppm solution was prepared by adding 1 mL of the 14 ppm selenium solution to a 125 mL Erlenmeyer flask and treating it with 10 mL of 2.5% hydroxylammonium chloride, 0.1% EDTA (EDTA solution), and 20 mL of water. The pH was adjusted to 2 ± 0.2 using HCl. Then 5 mL of 0.1% 2,3-diaminonaphthalene solution (DAN) were added to the solution. After heating in a 50 °C water bath for 30 minutes, 10 mL of cyclohexane were added, which diluted the concentration of the stock solution to 1.4 ppm. Because 0.5 mL of 14 ppm stock solution were added to the Erlenmeyer flask before the EDTA solution was added, the


dilution yielded a 0.7 ppm solution. Serial dilutions were performed to derive concentrations of 0.35, 0.175, 0.0875, and 0.0091 ppm. Measurements were taken on the fluorimeter at these six concentrations to create a standard curve.
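The standard concentrations above follow from simple dilution arithmetic (C1V1 = C2V2). The helper below is a generic sketch of that calculation and does not attempt to reproduce the exact volume bookkeeping of the procedure described above; the numbers in the example calls come from the twofold dilution steps listed in the text.

```python
# Generic dilution arithmetic (C1 * V1 = C2 * V2); a convenience sketch only.
def diluted_concentration(c_stock_ppm, v_stock_mL, v_total_mL):
    """Concentration after diluting v_stock_mL of stock to a total volume v_total_mL."""
    return c_stock_ppm * v_stock_mL / v_total_mL

def serial_twofold(c_start_ppm, steps):
    """Successive 1:1 dilutions, e.g. 0.7 -> 0.35 -> 0.175 -> 0.0875 ppm."""
    series = [c_start_ppm]
    for _ in range(steps):
        series.append(series[-1] / 2.0)
    return series

print(serial_twofold(0.7, 3))                    # [0.7, 0.35, 0.175, 0.0875]
print(diluted_concentration(14.0, 1.0, 10.0))    # 1.4 ppm from 1 mL of 14 ppm stock in 10 mL total
```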

Preparations of reagents and solutions: DAN

To make 0.1 % DAN solution, 25 mg of 2,3-diaminonaphthalene were transferred to a 125 mL separatory funnel. 25 mL of 0.1 M HCl were added, and the flask was capped and shaken for about 10 minutes until a fairly homogenous, cloudy solution formed. Next, 10 mL of cyclohexane was added, and the flask was shaken for one minute. The aqueous layer was removed into a clean beaker, the organic layer was discarded, and the separatory funnel was cleaned with nanopure water. This extraction was repeated, and the aqueous layer was collected in an amber bottle through gravity filtration.

Preparations of reagents and solutions: EDTA Solution

based on an internal standard, a standard curve was created to give a basis from which the concentration of a sample could be calculated from the measured intensity. Eleven standards of various concentrations were prepared, ranging from 9.1 ppb to 1400 ppb Se(IV). In addition, three blank samples were prepared in the same manner as the standards but omitting the addition of Se(IV) to establish a baseline. These data were first used in a linear regression that used intensity to predict concentration, F(1,12)=3570, p < 0.001. While this linear relationship was strong (R2=.997), the original Sheffield procedure reported a quadratic standard curve because of the self-absorption that occurs at higher concentrations (8). Therefore, the data was fitted with a quadratic curve, F(2,11)=19422, p < 0.001, which showed a better fit (R2=1.00). The standard curve is shown in Fig. 1, and the equation of the curve was Concentration = -4.484 +2.229(Intensity) + 0.001(Intensity)2.

To make the EDTA solution, 2.5 g of hydroxylammonium chloride and 0.1 g of EDTA were added to a 100 mL volumetric flask. Nanopure water was added, and the contents were mixed to create a homogenous solution.

Digestion of samples and derivatization of selenium

Each can of tuna was opened and drained. Samples of tuna were taken from the center of the can and blotted with filter paper to remove excess liquid. A 0.5 g sample of tuna was placed into the Teflon cup of a microwave bomb apparatus containing 2.5 mL of concentrated nitric acid. The bomb was placed into an 1100W microwave oven and irradiated for 25 seconds at 40% power. After heating, the bomb was placed in an ice bath for 25 minutes. To reduce any Se(VI) to Se(IV), the solution was transferred into a 125 mL Erlenmeyer flask using a Pasteur pipet, and 4 mL of HCl was added and heated on a hot plate. After heating, 10 mL of EDTA solution and 20 mL of water were added to the flask. The pH of the solution was adjusted to 2 ± 0.2 by adding concentrated ammonium hydroxide and HCl as needed. Then, 5 mL of 0.1% DAN were added to the flask and placed into a 50 °C water bath for 30 minutes. After warming, 20 mL of nanopure water were added to the solution and 10 mL of cyclohexane were added to extract the fluorescent selenium compound, piazselenol. Roughly 3 mL of the extracted solution were placed into a 1 cm square cuvette for fluorimetric analysis.

Fluorimetric analysis

A three-dimensional scan was performed on each standard and sample using a fluorimeter. The excitation wavelength was set to 378 nm, and emission was scanned between 450 and 650 nm, with the selenium peak expected at 518 nm.

Results

Determination of standard curve

Because fluorimeters lack a means of calibration based on an internal standard, a standard curve was created to give a basis from which the concentration of a sample could be calculated from its measured intensity. Eleven standards of various concentrations were prepared, ranging from 9.1 ppb to 1400 ppb Se(IV). In addition, three blank samples were prepared in the same manner as the standards but omitting the addition of Se(IV) to establish a baseline. These data were first fit with a linear regression using intensity to predict concentration, F(1,12) = 3570, p < 0.001. While this linear relationship was strong (R² = 0.997), the original Sheffield procedure reported a quadratic standard curve because of the self-absorption that occurs at higher concentrations (8). Therefore, the data were also fitted with a quadratic curve, F(2,11) = 19422, p < 0.001, which showed a better fit (R² = 1.00). The standard curve is shown in Fig. 1, and the equation of the curve was Concentration = -4.484 + 2.229(Intensity) + 0.001(Intensity)².
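Fitting such a curve is straightforward in any numerical environment. The short sketch below (not the authors' code) repeats the quadratic least-squares fit in Python; the intensity values are hypothetical placeholders paired with the six standard concentrations listed in the methods section.

import numpy as np

# Hypothetical fluorescence intensities (arbitrary units) for the known Se(IV) standards (ppb)
intensity = np.array([4.1, 38.0, 76.5, 153.0, 310.0, 622.0])
concentration = np.array([9.1, 87.5, 175.0, 350.0, 700.0, 1400.0])

# Least-squares fit of concentration as a quadratic function of intensity
coeffs = np.polyfit(intensity, concentration, deg=2)   # [a2, a1, a0]
curve = np.poly1d(coeffs)

# Predict the concentration of an unknown from its measured intensity
print(curve(120.0))   # ppb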


Fig. 1: Standard curve.

Experimental results

For experimental samples, it was necessary to correct for the mass of each sample before intensities could be compared. The measured intensity for each sample was divided by its mass, giving the working equation Concentration = -4.484 + 2.229(Intensity/Mass) + 0.001(Intensity/Mass)². It should be noted that the concentrations and intensities of the experimental tuna samples were low enough that self-absorption was not a concern; the maximum concentration was found to be 232 ppb, well below the point at which self-absorption becomes a factor. Data from 32 samples of canned tuna were collected. The samples fell into one of four categories: white tuna in water, white tuna in oil, light tuna in water, and light tuna in oil. Light tuna had a slightly greater mean selenium concentration (149.5 ppb) than white tuna (145.1 ppb). Tuna packed in water, regardless of type, contained higher concentrations of selenium on average than tuna packed in soybean oil. The mean concentration of each category, calculated from the standard curve equation applied to the mass-normalized intensities, is given in Table 1.
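A minimal sketch of this mass correction is shown below; the helper name, intensity reading, and sample mass are hypothetical, while the coefficients are those of the reported standard-curve equation.

def concentration_ppb(intensity, mass_g):
    # Reported standard-curve equation applied to the mass-normalized intensity
    x = intensity / mass_g
    return -4.484 + 2.229 * x + 0.001 * x ** 2

# Hypothetical sample: intensity of 35.0 (arbitrary units) from a 0.5 g portion of tuna
print(concentration_ppb(35.0, 0.5))   # ppb, i.e. ng Se per g of tuna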


Type    Packing   Mean (ppb)   Std. Deviation   n
white   water     152.7        48.29             8
white   oil       137.4        33.21             8
white   Total     145.1        40.81            16
light   water     161.1        32.94             8
light   oil       137.8        45.36             8
light   Total     149.5        40.15            16
Total   water     156.9        40.17            16
Total   oil       137.6        38.40            16
Total   Total     147.3        39.89            32

Table 1. Descriptive statistics. Dependent variable is the calculated selenium concentration.

Statistical analyses

A 2×2 ANOVA was run with type of tuna (white versus light) and packing liquid (water versus oil) as factors, with eight samples in each cell. There was no statistically significant effect of type of tuna, F(1,28) = 0.093, p > 0.10. There was also no significant effect of packing liquid, F(1,28) = 1.820, p > 0.10, and there was no significant interaction between the two factors, F(1,28) = 0.079, p > 0.10.
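For readers who want to reproduce this kind of analysis, the sketch below (assumed, not the authors' script) runs a two-factor ANOVA with statsmodels; the data frame is filled with randomly generated stand-in values rather than the 32 measured concentrations.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Build a balanced 2x2 design (8 cans per cell) with placeholder concentrations
rng = np.random.default_rng(0)
cells = [("white", "water"), ("white", "oil"), ("light", "water"), ("light", "oil")]
rows = [(t, p, rng.normal(145, 40)) for t, p in cells for _ in range(8)]
df = pd.DataFrame(rows, columns=["type", "packing", "conc"])

# Two-way ANOVA with interaction: conc ~ type * packing
model = ols("conc ~ C(type) * C(packing)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F and p for each factor and the interaction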

Discussion

Based on the overall mean of the 32 samples across all categories, the average amount of selenium in the recommended 56 g serving of tuna was 8.25 ± 2.23 µg, and the amount in a 3-ounce can of tuna was 12.52 ± 3.39 µg. Selenium is toxic when consumed at rates greater than 400 µg per day, an intake that would only be attained by eating roughly 32 cans of tuna. The National Institutes of Health reported 63 µg of selenium in 3 ounces of tuna, a value that is higher than the experimental results (9). The experimental methods were different, and the higher reported value may stem from the use of dried samples, as opposed to samples that had merely been patted dry (10, 11). The slightly higher mean selenium concentration in tuna packed in water than in tuna packed in oil was unexpected because the packing oil was soybean oil, and soybeans are known to contain selenium. Thus, it was expected that tuna packed in oil would contain higher levels of selenium (12). This expectation was also supported by the nutritional information on the Bumblebee brand tuna cans, which claimed that tuna in water contains 50% of the recommended daily value of selenium while tuna in oil contains 60% of the daily value (13). These discrepancies may be attributed to several confounds and limitations, which were explored through various post hoc analyses. The first possible confound was brand. Bumblebee tuna was used for the majority of samples, but both Chicken of the Sea and StarKist samples were used as well. Bumblebee produces all four categories of tuna that were tested, but local grocery stores did not stock Bumblebee light tuna in oil, so Chicken of the Sea tuna was used for that category. The light tuna in water category also contained one Chicken of the Sea sample and two StarKist samples, in addition to the Bumblebee samples.
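For reference, the arithmetic behind the per-serving and toxicity figures quoted at the start of this discussion is sketched below, using the overall mean from Table 1; the 85 g mass assumed for a 3-ounce can is an approximation introduced here.

mean_conc_ppb = 147.3                          # overall mean, ng Se per g of tuna (Table 1)
per_serving_ug = mean_conc_ppb * 56 / 1000     # 56 g serving  -> ~8.25 µg
per_can_ug     = mean_conc_ppb * 85 / 1000     # 3 oz (~85 g)  -> ~12.5 µg
cans_to_toxic  = 400 / per_can_ug              # 400 µg/day threshold -> ~32 cans
print(per_serving_ug, per_can_ug, cans_to_toxic)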

Fig. 2: Age of DAN.

A t-test comparing Bumblebee tuna samples to the other brands did not reveal a significant difference between brands, t(30) = 0.295, p > 0.10. The three brands analyzed are all leading brands in a marketplace regulated by the FDA, and as such, major variations in quality between brands were not expected. An additional confounding variable was the age of the 2,3-DAN solution used as a derivatizing agent. The original Sheffield procedure, from which this procedure was adapted, says of DAN: "for best results, it should be prepared immediately before analysis" (8). When preparing the standard curve, standards were made using DAN of various ages, and fluorimetric analyses revealed very little degradation in the quality of the results obtained, even with 13-day-old DAN (Fig. 2). Based on these observations, DAN up to five days old was used in the analysis of experimental samples. A t-test comparing fresh DAN to day-old DAN (categories which account for 23 of the 32 samples) found no significant difference in results, t(21) = 0.17, p > 0.50. DAN, therefore, need not be made up fresh every day, as long as it is used within one or two days and stored cold in a dark bottle. Another issue encountered in the course of data collection was variation in coloration after digestion. The majority of tuna samples digested into a bright, emerald green solution, but a few digested samples appeared as a thicker, yellow solution with an almost soupy consistency. This difference in coloration is surprising because the purpose of the nitric acid digestion is merely to dissolve the tuna, so it is unclear why a color change occurs. To add to the complexity, the green samples tended to give noticeably better results, with a clean selenium peak at the expected 518 nm. The yellow samples, by contrast, tended to have a large peak at 484 nm in addition to the peak at 518 nm. The intensity at 484 nm per mass of sample is a significant predictor of the intensity at 518 nm per mass of sample, F(1,30) = 5.34, p = 0.03. One possible explanation is that the yellow samples were only partially digested compared to the green samples, but this explanation cannot be confirmed because the tuna appeared dissolved in all cases and the procedure was consistent from sample to sample.
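The brand and DAN-age comparisons above are ordinary two-sample t-tests; a minimal sketch (assumed, not the authors' script) is shown below, with placeholder arrays standing in for the mass-normalized concentrations of Bumblebee versus other-brand samples.

import numpy as np
from scipy import stats

bumblebee = np.array([152.0, 140.3, 138.7, 149.9, 160.2])   # placeholder ppb values
others    = np.array([147.5, 139.1, 155.0])

t, p = stats.ttest_ind(bumblebee, others, equal_var=True)
print(t, p)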

Possible confounding variables aside, a power analysis was run using G*Power to determine the number of samples needed to obtain a reasonably powerful (power = 0.8), statistically significant (p = 0.05) main effect of packing liquid, as hypothesized to exist. This analysis found that 123 samples would be needed to achieve statistically significant results. This corresponds well with a similar study of methylmercury in canned tuna, which used samples from 168 cans and found variations in methylmercury concentration of over 10% within a single can (14).
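A comparable calculation can be run in Python, as sketched below; this is not the G*Power session used here, and because the effect size is estimated from the Table 1 means and pooled standard deviation, the required sample size it returns will not match the reported 123 exactly.

from statsmodels.stats.power import FTestAnovaPower

mean_water, mean_oil, pooled_sd = 156.9, 137.6, 39.9            # from Table 1
effect_size_f = abs(mean_water - mean_oil) / (2 * pooled_sd)    # Cohen's f for two equal groups

# Solve for the total number of samples needed at alpha = 0.05 and power = 0.8
n_total = FTestAnovaPower().solve_power(
    effect_size=effect_size_f, alpha=0.05, power=0.8, k_groups=2
)
print(n_total)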






Conclusion

Selenium levels in canned tuna were determined to be independent of the liquid in which the tuna was packed. Although selenium concentrations were higher in the samples packed in water, the difference was not statistically significant. Digestion by nitric acid led to extrapolated levels of selenium that did not match those listed on the nutrition labels. Regardless of the sample type, neither white nor light tuna approaches toxic levels of selenium intake unless ingested at abnormally high levels. Thus, given the role of selenium as an essential micronutrient, consuming selenium-containing foods such as tuna in moderation is important.

Acknowledgements

This work was supported by the Dartmouth College Department of Chemistry. We would like to thank professors Gordon Gribble and Dean Wilcox for their support and encouragement throughout this project. Special thanks to Charles Ciambra for his support in the laboratory. We would also like to thank Lisa Sprute for her consultation on power analyses.

References
1. M. P. Rayman, Br. J. Nutr. 92, 557 (2004).
2. L. Koller, J. Exon, Can. J. Vet. Res. 50, 297 (1986).
3. M. Navarro-Alarcon, C. Cabrera-Vique, Sci. Total Environ. 400, 115 (2008).
4. Dietary Reference Intakes: Vitamin C, Vitamin E, Selenium, and Carotenoids (National Academy Press, Washington, DC, 2000).
5. Press Release: Shrimp overtakes canned tuna as top US seafood (National Oceanic and Atmospheric Administration, 2002).
6. N. V. C. Ralston, L. J. Raymond, Toxicology, in press, corrected proof.
7. D. Yang, Y. Chen, J. M. Gunn, N. Belzile, Env. Rev. 16, 71 (2008).
8. M. C. Sheffield, T. M. Nahir, J. Chem. Educ. 79, 1345 (2002).
9. Dietary Supplement Fact Sheet: Selenium, http://ods.od.nih.gov/factsheets/selenium.asp
10. U.S. Department of Agriculture, Agricultural Research Service, USDA National Nutrient Database for Standard Reference, Release 16 (2003), http://www.ars.usda.gov/main/site_main.htm?modecode=12-35-45-00
11. JOAC 63, 485 (1980).
12. G. N. Schrauzer, J. Am. Coll. Nutr. 20, 1 (2001).
13. Bumble Bee® - Nutrition and Size Information, http://www.bumblebee.com/Products/Individual/?Product_ID=13
14. J. Burger, M. Gochfeld, Environ. Res. 96, 239 (2004).



ECOLOGY

Assessing the Utility of Microsatellites for Assigning Maternity In a Wild Population of Anolis sagrei Lizards Elizabeth Parker ‘12

To study selection in the wild, one must be able to measure both survival and reproductive success in situ. Brown anole lizards (Anolis sagrei) are an ideal model species for studying natural selection in the wild (i.e., survival), but studies of sexual selection in the wild (i.e., mating success) require genetic techniques for assigning paternity. Previous studies have assigned paternity using microsatellite genetic markers, but only when maternity was already known. In the present study, each of 29 dams and their collective 103 progeny were genotyped at seven microsatellite loci and the program CERVUS was used to estimate maternity assuming no prior knowledge of parentage. By comparing these maternity assignments to the true dam for each offspring, we determined that this methodology yields 87% success in maternity identification. Most incorrect assignments of maternity occurred when multiple dams matched an offspring at all loci, suggesting that including additional loci might improve success rates. The application of this methodology to wild populations of anoles of unknown parentage should allow for successful maternity assignments in situ, thereby alleviating some of the difficulties associated with studying sexual selection in the wild.

Introduction

An animal's fitness is related to both survival and reproductive success. To study selection in the wild, one must therefore measure both natural and sexual selection, ideally by measuring survival, fecundity, and mating success in situ. Many studies have measured selection in the wild (1-4); however, only a handful have measured total fitness, defined as including survival, mating success, and fecundity (5-8). These studies indicate that focusing on individual components of fitness often yields misleading conclusions about the total strength and form of selection. Thus, understanding how survival, fecundity, and mating success interact is an important step towards understanding how selection shapes wild populations. Brown anole lizards are an ideal model species for studying natural selection in the wild due to their abundance, ease of capture, high site fidelity, and relatively short lifespan (9-11). These features allow for relatively straightforward measurements of natural selection arising from differential survival. Measuring fecundity and mating success in situ, however, presents several challenges. First, female anoles repeatedly lay single eggs at 1-2 week intervals throughout a breeding season that can last for up to six months (12-14). This makes it difficult to measure the total annual fecundity of individual females without transferring them to captivity and thereby defeating the goal of measuring selection in situ. Second, female anoles typically mate and produce offspring with more than one male (15).

This also makes it difficult to determine the true reproductive success of wild males without performing genetic paternity analyses. Genetic analyses can be used to estimate the reproductive success of adults in a population by assigning parentage and then counting the number of viable progeny each adult produces. Methodologies using microsatellite genetic markers for paternity analyses have proven successful in brown anoles (16-18); however, these previous studies have ensured that the maternal genetic contribution and, hence, maternal identity, was known a priori by taking females into captivity following mating and collecting their offspring. While transferring females into captivity allows one to identify maternity with certainty, it also alters subsequent mating dynamics and prevents simultaneous studies of natural selection. If neither parent is known ahead of time, assigning paternity from these same microsatellite genetic markers may be significantly more difficult depending on the genetic structure of the population at these particular loci. Ideally, both natural and sexual selection can be studied in the wild. Here, I test a methodology that aims to reduce the challenges associated with studying sexual selection in the wild via the use of microsatellites to assess maternity with no prior information on parentage. If maternal identity can be assigned with confidence, this will facilitate estimates of female fecundity and male reproductive success in future work. In anticipation of these future studies, I genotyped eight microsatellite loci for each of 329 tissue samples that were collected from an entire island population of brown anole dams and sires (Regatta Point, Great Exuma, The Bahamas) and for 318 of their captive-bred progeny. My specific goal in the present study was to use a subset of this large data set to test our ability to determine maternity when assuming no prior knowledge of parentage. I did this by comparing maximum-likelihood estimates of the most likely dam for 103 individual progeny hatched in captivity from 29 individual dams against the true identity of the dam, which was known with certainty.

Materials and Methods

Sampling of adults in the wild

At the beginning of the reproductive season (May 2010), all adult male and female brown anoles were captured from an isolated island population on Regatta Point, near Georgetown, Great Exuma, Bahamas (23°30’N, 75°45’W). A tissue sample (2 mm, tail tip) was collected from each individual and stored at -20 °F. Snout-vent length (SVL, nearest mm) and body mass (nearest 0.1 g) were measured for each individual using a ruler and a 10-g Pesola spring scale. A subset of adult females (n = 92) from the south end of the island were transported to a captive breeding facility at Dartmouth College so that their progeny could be collected as they hatched. The remainder of the females and all adult males were released at their site of capture.


Collection of progeny in captivity

Gravid females were individually housed in 10-gallon glass cages in the breeding facility at Dartmouth College. Each cage contained a potted plant into which females oviposited their eggs. Plants were located directly under a 40-W incandescent bulb for warmth, and all cages were situated under two Repti Glo 5.0 fluorescent bulbs for UVB light. Females were fed an ad libitum diet of crickets (Acheta domestica) that were dusted weekly with vitamin and mineral supplements (Repta-Vitamin, Fluker Farms, Port Allen, LA). Cages and plants were watered daily. Previous studies have shown that brown anoles store sperm for several months and repeatedly lay single eggs at 11-day intervals in captivity (15, 12, 13). Although the females in this study mated only in the wild prior to capture, they produced an average of 3.46 offspring (range 0-8) over a period of four months following capture. All cages were searched on a weekly basis and new hatchlings were sexed, measured for SVL (nearest 0.5 mm) and mass (nearest 0.2 g), and transplanted to new cages. A tissue sample (2 mm, tail tip) was collected from each hatchling and stored at -20°F.

DNA extraction

I used a sterile scalpel to obtain a thin slice of tissue from each tail sample. Each tissue sample was placed in a 200 µl strip tube containing 150 µl of 5% Chelex in purified water and 1.0 µl of Proteinase-K. DNA was extracted by incubating samples for 180 minutes at 55 °C and then for 10 minutes at 99 °C on a thermocycler. Samples were then centrifuged for 15 minutes at 3000 rpm and 30 µl of the supernatant was collected and stored at -20 °F until PCR amplification.

Amplification of microsatellite loci

Each individual hatchling, dam, and candidate sire was genotyped at eight microsatellite loci: AAAG-70, AAAG-68, AAAG-91, AAAG-61, AAGG-38, AAAG-77, AAAG-76, and AAAG-94 (19). I performed PCR reactions using a total volume of 10 µl with 1 µl template DNA, 1 µl 10x Buffer, 0.6 µl MgCl2, 0.8 µl dNTPs, 0.25 µl of each primer (forward and reverse), and 0.06 µl of Taq polymerase. PCR cycles consisted of an initial denaturation step at 94 °C for 5 min, followed by 29 or 35 cycles of 45 sec at 94 °C (Table 1), 1 min at primer-specific annealing temperatures (Ta, Table 1), and 1 min at 72 °C, followed by a final extension for 5 min at 72 °C. See Table 1 for details on the PCR conditions and number of cycles for each locus. All PCRs were performed on a DNAEngine Thermal Cycler (Bio-Rad).

Locus      Ta (°C)   # of cycles
Pool 1
AAAG-70    56        29
AAAG-68    56        35
AAAG-91    54        35
AAAG-61    55        35
Pool 2
AAGG-38    44        35
AAAG-77    55        35
AAAG-76    54        35
AAAG-94    55        35

Table 1: PCR conditions and sequencing pools. Ta = annealing temperature.

Sequencing and microsatellite analysis

Loci were pooled into two sets for genotyping (Table 1) on an ABI 3730 Genetic Analyzer (Applied Biosystems) using ABI multiplex dye-labeled primers. All genotypes were scored by visual inspection of electropherogram traces using GeneMapper software (version 3.7) against a GeneScan 500 LIZ size standard (Applied Biosystems). One locus (AAAG-76) was difficult to score and was therefore conservatively omitted from subsequent parentage analyses, which were, as a result, based on a total of seven loci. For the parentage analyses described below, only those progeny and dams that were successfully genotyped at five or more of these seven loci were included.

Maternity analysis

I used the computer program CERVUS (version 3.0; 20) to estimate allele frequencies at each locus (Fig. 1) and to conduct the parentage analysis. I first confirmed that levels of observed heterozygosity for each locus were similar to expected levels of heterozygosity using the Hardy-Weinberg equilibrium test as implemented in CERVUS. I also confirmed that the frequency of null alleles was less than 5% for each locus, as recommended for inclusion in parentage analyses using CERVUS. I then ran a simulation analysis on the genotype data set to determine confidence levels at both 80% and 95% for the assignment of maternity. I did this by simulating 10,000 offspring genotypes and assuming that the proportion of sampled dams was 100% and that the proportions of loci that were typed and mistyped were 98% and 1%, respectively. I then used CERVUS to assign maternity by comparing the genotype of each offspring to those of all 29 potential dams in the population using a maximum-likelihood analysis.
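To make the genotype comparison concrete, the sketch below counts mismatching loci between an offspring and each candidate dam. This is only an exclusion-style illustration with hypothetical genotypes, not the maximum-likelihood scoring that CERVUS actually performs (which also weights matches by allele frequency).

def mismatches(offspring, dam):
    """Count loci at which the offspring shares no allele with the candidate dam."""
    count = 0
    for locus, off_alleles in offspring.items():
        dam_alleles = dam.get(locus)
        if dam_alleles is None:                     # locus not typed in this dam; skip it
            continue
        if not set(off_alleles) & set(dam_alleles):
            count += 1
    return count

# Hypothetical genotypes (allele sizes in base pairs) at two of the loci used here
offspring = {"AAAG-70": (182, 190), "AAAG-68": (144, 152)}
candidate_dams = {
    "dam_01": {"AAAG-70": (182, 186), "AAAG-68": (148, 152)},
    "dam_02": {"AAAG-70": (178, 194), "AAAG-68": (144, 160)},
}

# Rank candidate dams by number of mismatching loci (fewest first)
ranked = sorted(candidate_dams, key=lambda d: mismatches(offspring, candidate_dams[d]))
print(ranked)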

Results

CERVUS assigned maternity to all but two of the 103 progeny included in our analysis. Of the 101 progeny for which CERVUS assigned maternity, 90 were correctly assigned when compared to the known dams. Thus, maternity was successfully assigned in 90 of 103 progeny (87%) using these seven microsatellite loci (Fig. 2a). To gain further insight into the sources of error in maternity assignment, I separately assessed the success of CERVUS when assigning maternity at the 95% and 80% confidence levels. CERVUS assigned maternity with 95% confidence for 77 progeny, of which 74 (96%) were correct when compared against the known dams (Fig. 2a). Inspection of the three individual progeny that were incorrectly assigned at the 95% confidence level revealed clear mismatches between the genotypes of known dams and those of their progeny. This indicates either an error in genotyping or in recording which progeny came from which dam, rather than an error due strictly to insufficient genetic variation at these seven loci. CERVUS assigned maternity with 80% confidence for 24 progeny, of which 16 (67%) were correct (Fig. 2a). The eight incorrect assignments all occurred in situations where multiple candidate dams matched the progeny at each locus (Fig. 2b).


To assess the ability of CERVUS to identify maternity in situations where multiple candidate dams matched the progeny, I performed a second set of analyses on the subset of progeny for which multiple candidate dams matched at each locus (Fig. 2b). This situation occurred in 43 of the 103 progeny (42%). Of these 43 cases, CERVUS accurately identified maternity 35 times (81%). When CERVUS assigned maternity with 95% confidence in these situations, it was correct in all 21 assignments. By contrast, when CERVUS assigned maternity with 80% confidence in these situations, it was correct in only 14 of 22 cases (64%). In each of these eight cases of incorrect assignment, the incorrect designation can be explained by CERVUS taking allele frequencies into account and favoring an incorrect dam that shared rare alleles with the progeny over the true dam that passed on the common allele.

Discussion

Our analysis indicates that the methodology presented here determines maternity with 87% success using seven microsatellite loci and no information about paternal genotypes. However, with only seven loci used for this population, there was a relatively high frequency of situations in which at least two dams matched the progeny at all loci (43/103 = 42%). Although these seven loci were sufficient to accurately resolve maternity in 81% of these cases (35/43), this nonetheless represents a considerable source of error in the measurements. In particular, assignments made at the 80% confidence level in these situations were incorrect in 36% of cases (8/22). To reduce the frequency of wrong assignments, one would have to discard all cases in which multiple candidate dams match the progeny at all seven loci and the match is made with only 80% confidence. Under these criteria, 21% of the present data set would be considered unreliable. However, the rest of the data set would be identified with 95 to 100% confidence, and the only errors would be due to genotyping errors. Thus, if the goal is to have full confidence in maternity, our analysis suggests that 21% of the data set should be considered unreliable - namely, those situations in which multiple potential dams match at all seven loci and maternity is assigned with only 80% confidence. This methodology, therefore, allows 74% of the data to be successfully assigned with >95% confidence, with the only error due to inaccurate genotyping. Our data suggest that, for a reasonable fraction of a population, one can estimate maternity and paternity when both are unknown using these seven microsatellite loci. Future use of the methodology presented here has the potential to allow for in situ measurements of sexual selection via the assignment of maternity using these seven microsatellite loci.

Fig. 1: Frequency of alleles by locus.




Fig. 2: Accuracy of CERVUS maternity calls.

Acknowledgements

Thank you to R. M. Cox for his mentorship and help with analysis, R. Calsbeek for suggestions about experimental design and manuscript clarifications, and M. C. Duryea for help with PCR, sequencing, GeneMapper analysis, and troubleshooting. Thanks to J. McLaughlan and K. Pinson for assistance collecting progeny tissue samples. I conducted this research as a recipient of the James O. Freedman Presidential Scholar Research Assistantship.

References
1. R. M. Cox, R. Calsbeek, Am. Nat. 173, 176-187 (2009).
2. H. E. Hoekstra et al., Proc. Natl. Acad. Sci. U.S.A. 98, 9157-9160 (2001).
3. J. G. Kingsolver, Am. Nat. 147, 296-306 (1996).
4. A. M. Siepielski, J. D. DiBattista, J. A. Evans, S. M. Carlson, Differences in the temporal dynamics of phenotypic selection among fitness components in the wild (2010). <www.rspb.royalsocietypublishing.org/content/early/2010/11/01/rspb.2010.1973.full?si=b224b7c4-e55a-4fb0-b716-81147fddc3fe> (22 November 2010).
5. A. V. Badyaev, Trends Ecol. Evol. 17, 369-378 (2002).
6. A. V. Badyaev, T. E. Martin, Evolution 54, 987-997 (2000).
7. D. J. Fairbairn, R. F. Preziosi, Am. Nat. 144, 101-118 (1994).
8. J. W. McGlothlin, P. G. Parker, V. Nolan Jr., E. D. Ketterson, Evolution 59, 658-671 (2005).
9. R. Calsbeek, R. M. Cox, Nature 465, 613-616 (2010).
10. R. Calsbeek, T. B. Smith, Evolution 61, 1052-1061 (2007).
11. R. M. Cox, R. Calsbeek, Evolution 64, 798-809 (2010b).
12. R. M. Cox, R. Calsbeek, Evolution 64, 1321-1330 (2010c).
13. R. M. Cox et al., Funct. Ecol. 24, 1262-1269 (2010).
14. R. Andrews, A. S. Rand, Ecology 55, 1317-1327 (1974).
15. R. Calsbeek, C. Bonneaud, Evolution 62, 1137-1148 (2008).
16. R. Calsbeek et al., Evol. Ecol. 9, 495-503 (2007).
17. R. M. Cox, R. Calsbeek, Science 328, 92-94 (2010a).
18. R. M. Cox, M. C. Duryea, M. Najarro, R. Calsbeek, Evolution, in press (2010).
19. C. Bardeleben, V. Palchevskiy, R. Calsbeek, R. K. Wayne, Mol. Ecol. 4, 176-178 (2004).
20. S. T. Kalinowski, M. L. Taper, T. C. Marshall, Mol. Ecol. 16, 1099-1106 (2007).



Article Submission

DUJS

What are we looking for?

The DUJS is open to all types of submissions. We examine each article to see what it potentially contributes to the Journal and our goals. Our aim is to attract an audience diverse in both its scientific background and interest. To this end, articles generally fall into one of the following categories:

Research

This type of article parallels those found in professional journals. An abstract is expected in addition to clearly defined sections of problem statement, experiment, data analysis and concluding remarks. The intended audience can be expected to have interest and general knowledge of that particular discipline.

Review

A review article is typically geared towards a more general audience, and explores an area of scientific study (e.g. methods of cloning sheep, a summary of options for the Grand Unified Theory). It does not require any sort of personal experimentation by the author. A good example could be a research paper written for class.

Features (Reflection/Letter/Essay or Editorial)

Such an article may resemble a popular science article or an editorial, examining the interplay between science and society. These articles are aimed at a general audience and should include explanations of concepts that a basic science background may not provide.

Guidelines:

1. The length of the article must be 3,000 words or less.
2. If it is a review or a research paper, the article must be validated by a member of the faculty. This statement can be sent via email to the DUJS account.
3. Any co-authors of the paper must approve of the submission to the DUJS. It is your responsibility to contact the co-authors.
4. Any references and citations used must follow the Science Magazine format.
5. If you have chemical structures in your article, please take note of the American Chemical Society (ACS)'s specifications on the diagrams.

For more examples of these details and specifications, please see our website: http://dujs.dartmouth.edu
For information on citing and references, please see: http://dujs.dartmouth.edu/dujs-styleguide
Specifically, please see Science Magazine's website on references: http://www.sciencemag.org/feature/contribinfo/prep/res/refs.shtml




DUJS Submission Form

Statement from student submitting the article:

Name: __________________

Year: ______

Faculty Advisor: _____________________ E-mail: __________________ Phone: __________________ Department the research was performed in: __________________ Title of the submitted article: ______________________________ Length of the article: ____________ Program which funded/supported the research (please check the appropriate line): __ The Women in Science Program (WISP)

__ Presidential Scholar

__ Dartmouth Class (e.g. Chem 63) - please list class ______________________ __ Thesis Research

__ Other (please specify): ______________________

Statement from the Faculty Advisor:

Student: ________________________ Article title: _________________________

I give permission for this article to be published in the Dartmouth Undergraduate Journal of Science:

Signature: _____________________________ Date: ______________________________

Note: The Dartmouth Undergraduate Journal of Science is copyrighted, and articles cannot be reproduced without the permission of the journal. Please answer the following questions about the article in question. When you are finished, send this form to HB 6225 or blitz it to "DUJS."

1. Please comment on the quality of the research presented:

2. Please comment on the quality of the product:

3. Please check the most appropriate choice, based on your overall opinion of the submission:


__ I strongly endorse this article for publication

__ I endorse this article for publication

__ I neither endorse nor oppose the publication of this article

__ I oppose the publication of this article



