
It’s Not Science Fiction

[Cover illustration: 1. Laser Beam; 2. Targeted Area; 3. Mind Control; Experimentation Subject. Section tags: physics + math, engineering, health, biology]


What is Innovation?

Princeton Innovation is an undergraduate science organization dedicated to making science more accessible to all students at Princeton. We publish articles highlighting science news on campus as well as groundbreaking research across the world. In addition to our articles, we host many events on campus to raise the level of science engagement and knowledge. You can read more articles and learn about our organization online at

Staff: Jason Choe, Eugene Tang, Cissy Chen, Aditya Trivedi, Abrar Choudhury, Stephen Cognetta, Amelia Warshaw, Helen Yao, Bridget Zakrzewski, Eugene Lee, Kevin Zhang, Mizzi Gomes, Sarah McGuire, Summer Ramsay-Burrough, Linda Song, Soumya Sudhakar, Arunraj Balaji, Dhruv Bansal, Bennett McIntosh, Julia Metzger, Zoe Sims, Eric Li, Nancy Song, Stacey Park, Sarah Santucci, Jennifer Lee, Erika Davidoff, Jessica Vo, Min Ji Suh

Featured Interviewees:
Prof. Peter Meyers (PHY)
Joel Finkelstein *17 (PSY)
Sheng Zhou ‘14 (CHM)

Contact us at
Read our tweets at




Contents

A Bright Idea: Controlling Rats with Lasers
Local Food: Actually May Not Be That Great
Dog-Human Bond: Intertwined Genetic Fate
Chemical Weapons: Don’t Try This at Home
And the Senior Thesis
DarkSide: Discovering the Universe’s Best Kept Secret



[Infographic: approximate neuron counts by species – fruit fly: 100,000; mouse: 75,000,000; human: 100,000,000,000]



All brains are made of neurons. Nematodes have neurons, honeybees have neurons, frogs have neurons – and humans, of course, have neurons. The number of neurons increases as we move from species to species in this list, in parallel with organisms becoming more complex and processing larger amounts of information. There are 302 neurons in the nematode C. elegans, 100,000 in the fruit fly, one million in the honeybee, and 16 million in the frog. These are large numbers, yes, but still quite manageable. We can wrap our heads around these. Then we come to a bit of a leap with the human brain: 100 billion neurons. 100,000,000,000 neurons.

The neuron is a complex structure, consisting of many parts. The soma, or cell body, extends in one direction into tree-like dendrites whose branches form connections with other neurons. These connections feed sensory information into the neuron, where it is compiled into a single signal and sent along a long, skinny extension of the cell membrane called the axon. The signal hops down the axon, in a process called saltatory conduction, made possible by the movement of ions across the membrane, like electricity. The newly sent signal, in turn, may be passed to another neuron close by, or to a neuron in a different region of the brain, or perhaps to an organ in the body.

When we think on the scale of a single neuron, the process seems quite straightforward. Consider now, though, the 100 billion neurons in the human brain, each complete with clumps of ions flying frantically across membranes and signals getting re-sparked at nodes along the axon – then repeating the process again and again, thousands of times per second per neuron. The complexity is astounding. Attempting to target one neuron and observe its effect is thus understandably difficult. In this dense network of interconnected cells, it’s hard to electrically stimulate just one cell by the traditional method – dropping an electrode down into the brain. Inevitably, the neighboring cells clustered around the electrode also receive excitatory stimuli. How, then, do we isolate and explore particular neuronal pathways in the brain? The fascinating technology of optogenetics is a powerful new tool that allows researchers a large degree of specificity. According to graduate student Joel Finkelstein, “we inject viruses into brains so we can control them with lasers.”
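For a sense of scale, the neuron counts quoted above can be compared directly; a quick sketch (the counts are the article’s approximate figures):

```python
# Approximate neuron counts quoted in the article.
neuron_counts = {
    "nematode (C. elegans)": 302,
    "fruit fly": 100_000,
    "honeybee": 1_000_000,
    "frog": 16_000_000,
    "human": 100_000_000_000,
}

# How many times larger each brain is than the nematode's.
for species, count in neuron_counts.items():
    ratio = count / neuron_counts["nematode (C. elegans)"]
    print(f"{species}: {count:,} neurons (~{ratio:,.0f}x C. elegans)")
```

The jump from frog to human is roughly four orders of magnitude, which is why picking out a single neuron is so hard.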


Optogenetics improves on the traditional method of electrical stimulation because of the specificity it permits: stimulation can be restricted to only those neurons that researchers have intentionally, genetically modified. By injecting a virus into certain neurons, scientists can change the genes of these cells to create protein channels that are sensitive to a particular wavelength of light. Shine a laser with that exact wavelength at the brain, and the channel opens, ions rush in, and electricity flows: that neuron – and only that neuron, not the whole neighborhood – fires as a result.

DEMYSTIFYING ADDICTION

Finkelstein uses optogenetics in his work in Assistant Professor Ilana Witten’s lab to study reward and motivation in the mouse brain, with specific studies focusing on the influence of addictive drugs including methamphetamine, cocaine, alcohol, and various opiates. All of these drugs, in one way or another, have a common effect: they push dopamine levels in the brain higher and higher with each subsequent instance of drug use. Due to the plasticity of the brain’s reward systems, Joel explains, these drugs “hijack circuitry in a way that is completely unfair”: the high levels of excitatory dopamine become consolidated into an entrenched memory and drive motivation for the drug far beyond physiological boundaries. Joel endeavors to elucidate the processes in the brain that drive this behavior – in short, what drives a meth addict to do anything to get that next fix?

“WE CAN MAP CONNECTIONS AMONG BILLIONS OF NEURONS.”

A typical addiction study for Joel usually begins by measuring drug-seeking behavior through CPP, or conditioned place preference. A mouse slightly addicted to a drug – cocaine, for example – is placed in a double-chamber setup where saline is provided in one chamber in the morning and cocaine in the other during the afternoon. After a day of acclimatization that serves as a control, Joel quantifies the behavior of the individual by observing how it navigates between the two rooms. In the beginning stages of addiction, one would expect the mouse to venture back and forth between the saline and drug chambers, but by the end of the process, the mouse would likely wait in the drug chamber, eagerly anticipating the afternoon placement of the drug. These experiments provide insight into the overall behavior of the mouse over the varying stages of the addiction process. Onto this framework, then, the tool of optogenetics can be used to isolate and study how different nuclei throughout the brain contribute to the components of drug-seeking behavior: encoding the memory for the drug, instantiating the drug-seeking behavior, and organizing an executive plan for obtaining the drug.

REAL-TIME INFORMATION

With this new tool, the door for neuroscience exploration has been thrown wide open. By studying model organisms like mice, we can take steps toward elucidating the functions of the human brain’s 100 billion neurons and determining the ways in which they connect. One approach pairs optogenetics with an imaging mechanism, as Joel does in his study of addiction behavior. He uses GCaMP, a fluorescent calcium indicator, to see the spikes in calcium levels that reveal connections between different neurons. In this way, he can see the pathways evoked by stimulating certain neurons: “the tapestry of activity that dynamically encodes specific behaviors.” With optogenetics, Joel can extract real-time information about how the brain’s circuitry operates during different stages of addiction – he can trace what is happening in mouse brains as they seek cocaine, as they consume meth, and ultimately, as their pathways become hijacked by these addictive substances.

Besides addiction, optogenetics allows researchers to study other behaviors as well. Take a behavior as ubiquitous and essential as eating. We can, in fact, control eating in mice – a finding with potentially large ramifications for humans, especially in a society as obsessed with body image and dieting as ours. It turns out that sensations of fullness and hunger are encoded in the firing of neurons in a region of the brain called the lateral hypothalamus. Experiments have genetically modified neurons in this region of the mouse brain so they develop many of the light-sensitive channels discussed above. In a chamber filled with cheese – a choice meal for any mouse – these mice will munch until they’re full, and then happily roam about. Shine the laser and stimulate the targeted neurons, though, and they immediately dash toward the cheese and begin snacking. Turn off the laser, and they drop the cheese. On, off, on, off; snack, rest, snack, rest.

With optogenetics, we can observe in live animals the direct causality between certain neurons and specific behaviors, and begin to map out the connections formed among these billions of neurons. And in addition to facilitating our ability to learn about the function of the brain and its 100 billion neurons – a seemingly never-ending task – optogenetics also has possible medical applications. With the ability, now, to stimulate specific neurons, we can begin to counteract the negative effects caused by age, disease, and accidents. Already, research has revealed that using optogenetics to stimulate cells in the retina may be able to restore vision in humans with eye diseases like macular degeneration.

PLAYING IN THE SYMPHONY

The possibilities for future application are even more exciting. We can begin to explore the vast number of pathways existing in the human brain – the numerous synapses possible among one hundred billion neurons – and then develop mechanisms to fix them when they start to go awry. Finkelstein may have characterized the potential power of this new tool best when he said: “with optogenetics, instead of just banging on the keys of the piano, we’re now listening to the symphony we’re playing in.”




Local Food Actually May Not Be That Great





Beneath the clear plastic window of a Princeton dining hall napkin dispenser, an information placard advertises “food for thought.” Fifty-nine percent of Princeton’s food comes from local producers, the placard reads. It says the “Princeton Greening Dining” initiative supports local growers, and, we might quickly assume, these efforts make Princeton more environmentally friendly. This emphasis on local eating reflects the larger national trend of the local-food movement. In the public consciousness, “local” food is classified alongside concepts like “green,” “natural,” and “non-GMO” – one-dimensional labels that provide consumers with straightforward solutions. In the fast-paced whir of the mainstream-media news cycle, the nuances of true “sustainability” are lost to a simplistic symbology of “green.”

Yet is it really that simple? Why are we so convinced that local food is better for the environment? What does “local” even mean? And what does all this have to do with Princeton?

Proponents of the local food movement – often termed “locavores” – say that shorter distances between farms and consumers mean fewer carbon dioxide emissions from food transportation, more money circulating through local communities, and more direct connections between food producers and consumers. These are valid points, but they are not the whole story. A far less publicized alternative counters the local trend, asserting that some local foods are actually bad for both society and the environment. According to this perspective, the local food movement oversimplifies complex



environmental and economic factors. In a 2008 study published in the journal Environmental Science & Technology, Christopher Weber and H. Scott Matthews concluded that “food miles,” the distance food is transported from farm to table, account for just 11% of the greenhouse gas emissions from food production. Food miles are part of the problem, then, but other factors play a much larger role. For example, the production and application of nitrogen fertilizer releases nitrous oxide, an extremely potent greenhouse gas. These fertilizer-related emissions account for 75% of farms’ greenhouse gas emissions, according to an analysis by Jonathan Hillier et al., published in the International Journal of Agricultural Sustainability. Following this logic, it is much more important to use farming methods that reduce nitrogen fertilization (crop rotation is one such practice) than to focus on food miles.

by Zoe Sims

Alternatively, instead of prioritizing how our food is transported – food miles – and produced – fertilizer use – it may be even more effective to reconsider the types of


food we choose. Carnegie-Mellon researchers Weber and Matthews conclude that “shifting less than one day per week’s worth of calories from red meat and dairy products to chicken, fish, eggs, or a vegetable-based diet achieves more greenhouse gas reduction than buying all locally sourced food.” What we eat is more important than where it comes from. And the “local” label, while convenient, is relatively arbitrary compared to the conscious choices that we as consumers can make to reduce the impact our food has on the environment.

The pastoral community gardens hailed by the local food movement are also controversial. Locavores embrace the integration of rural and urban landscapes; however, numerous studies have shown that city residents actually have lower greenhouse gas emissions per capita than their suburban and rural counterparts. A 2008 study by researchers Glaeser and Kahn found cities’ emissions, in general, to be “significantly lower” than suburbs’. In 2009, a report in the journal Environment and Urbanization concluded that most cities’ per capita emissions are below the national average. Why? Cities’ higher population densities make a number of environmentally-friendly practices, such as mass transportation and recycling programs, feasible and efficient. While suburban and rural dwellers often drive long distances to work, school, and shopping centers, cities provide these things within a smaller area and offer public transportation between them. Additionally, city dwellings are


[Chart: sources of farm greenhouse gas emissions – fertilizer-related emissions vs. other causes – and the maximum savings in emissions from purchasing local food. At its best, locally grown produce can reduce only a small portion of total greenhouse emissions, potentially at the cost of other inefficiencies, such as increased fertilizer use.]

smaller, which means they require less energy for heating and cooling than suburban homes. It is also more energy-efficient to heat a single large apartment building than a number of separate homes. Finally, cities use space efficiently, decreasing suburban and urban sprawl and leaving more room for natural ecosystems. Moving food producers closer to consumers would de-densify these efficient urban centers, increasing the amount of space we use, fuel we need, and carbon we emit. It would bring us closer to our food, but farther from the sustainable future we seek.

And while we’ve established that food miles are not hugely significant, accounting for barely a tenth of emissions from food production, urban centralization can make food distribution chains even more environmentally efficient. Imagine a truck brings non-local produce to a New York grocery store – say it’s a 400-mile trip. Once this food is in the city, a hundred New Yorkers can walk down the block to buy their groceries, for a total of 400 “food miles.” By contrast, if 100 rural locavores drive an average of just 10 miles to buy “local” produce, their total – 1,000 miles of driving – results in a much higher carbon footprint.

Furthermore, the competitive advantages of growing in different locations are environmental as well as economic. Steve Sexton, an economist writing for the book-turned-blog Freakonomics: The Hidden Side of Everything, demonstrates this with a model “in which each state that presently produces a crop commercially must grow a share proportional to its population relative to all producers of the crop” – effectively imagining a future in which, for every food a state can produce locally, it produces enough to feed all its residents. The result? Using corn as an example, such a system would demand a 27% increase in acreage, a 35% increase in fertilizer use, a 23% increase in chemical demand, and a 23% increase in fuel use. Farms are located in regions that optimize their production efficiency, yielding environmental advantages that would offset increased transportation miles.

Part of the issue is that “local” exists by degrees, not absolutes. Princeton Dining Services defines “local” as anywhere within 250 miles of the University, a radius that extends as far afield as Maryland and Massachusetts. On Dining Services’ Web site, which displays its “local” suppliers on a regional map, small Princeton establishments like Terhune Orchards and The Bent Spoon are counted alongside massive franchises: Frito-Lay Inc. for “local” chips; Tyson Foods, one of the largest meat producers in the US, for “local” chicken; and Aunt Jemima – not the friendly neighborhood waffle-woman, but the leading national brand – for “local” waffles and syrup. Furthermore, this definition completely ignores producers’ ingredient sources. It’s not hard to imagine that most “local” processors’ actual food inputs come from well beyond the Princeton region – from Frito-Lay’s corn to the Bent Spoon’s vanilla. So while it is technically true that 60% of Princeton’s food is purchased from local vendors, this may just mean that the final stage of processing or packaging occurs nearby, while the actual food has traveled vast distances.

This definition of “local” is a far cry from many diners’ and locavores’ idyllic conceptions, in which we would all revert to a time before herbicides and E. coli outbreaks, when we lived closer to our food and to the earth. Yet there is a reason why commercial, industrial food production has overridden this pastoral model. The methods of the past cannot possibly support a human population that the UN expects will reach 9.5 billion by the year 2050. Moreover, the non-local economy gives us what we want: modern luxuries like waffles, year-round bananas, and affordable food. It keeps us nourished and allows us the luxury of pursuing higher education instead of plowing fields all day. Our task is not to step backwards, but to improve modern production processes for a realistic



sustainable future. Dining Services’ sustainability campaign is truly admirable, and its intentions are well-placed. In fact, Princeton’s local food purchasing, while economically motivated, is environmentally rational: it makes sense to buy chicken and waffles from New Jersey, since it’s just as feasible to produce them here as anywhere else, but oranges from Florida, because that’s where they grow best. There is nothing wrong with choosing local producers on a product-by-product basis if it is economically comparable to do so. However, we cannot automatically equate local purchasing with sustainability, and Dining Services’ emphasis on local food is misplaced. This feel-good local focus becomes deeply problematic if we allow ourselves to be lulled by the napkin-dispenser palliatives, convinced that our food is “green,” and free ourselves from the obligation to think critically.

If we really care – as I hope we do – about the environmental impacts of our food, we should take real action, grounded in science. As Weber and Matthews’ study indicates, reducing our meat and dairy consumption has much more potential to shrink Princeton’s culinary carbon footprint than local purchasing. Rather than pursuing an ill-defined, environmentally ambiguous “local,” Dining Services should serve less red meat, enact Meatless Mondays, or even just start with Beefless Mondays. What’s more, knowing Princeton’s local food purchasing rate runs the risk of making us passive consumers of the semi-local food we are served. Instead, it is imperative that we recognize the real consequences of our food choices, a realization which allows us to become active participants in Dining Services’ sustainability campaign. We don’t have to wait for the dining halls to make more environmentally-friendly purchasing decisions, because we can vote with our plates, three times a day, by decreasing demand for certain foods. We can simply skip the hamburger, any and every day we like.
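The truck-versus-cars food-miles comparison made earlier in the piece is simple arithmetic; a minimal sketch (the trip counts and distances are the article’s illustrative numbers, not real data):

```python
def total_food_miles(trips: int, miles_per_trip: float) -> float:
    """Total vehicle-miles spent moving food to (or by) consumers."""
    return trips * miles_per_trip

# One delivery truck hauls produce 400 miles into the city;
# a hundred New Yorkers then walk to the store.
city_miles = total_food_miles(trips=1, miles_per_trip=400)

# 100 rural locavores each drive 10 miles to buy "local" produce.
rural_miles = total_food_miles(trips=100, miles_per_trip=10)

print(city_miles)   # 400
print(rural_miles)  # 1000
```

The point is only that per-consumer vehicle-miles, not farm-to-store distance, drive the transport footprint.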
It is important that we resist the unverified allure of the term “local.” Our opinions, and Dining Services’ purchasing decisions, must be driven by data and critical analysis, not by the common knowledge perpetuated by media hype, and certainly not by a green graphic pasted on a napkin dispenser. The message on those napkin dispensers should be a springboard for conversations – conversations with more nuance and sounder science than we normally grant them. Only in doing so will we shift the local-food debate toward solutions for the real world.









[Infographic: the increases in acreage, fertilizer, chemical, and fuel use that would result if states grew all the corn they consumed. Certain foods grow best in specific regions of the nation.]







The term “local” is not standardized. Princeton Dining Services defines “local” as anywhere within 250 miles of the University.

Eating less meat has more of an impact than local purchasing. If you’re concerned about the environment, try eating a vegetarian meal.





Dogs come in all shapes and sizes, but they all manage to fit perfectly in our hearts. No other animal except perhaps the cat rivals the sheer amount of affection, love, and care humans have for dogs, and for good reason. Dogs return the love unconditionally and unabashedly, faithfully playing, protecting, and listening to us their entire lives. The intensity of our bond with dogs raises the question: how did dogs and humans learn to love each other? This question remains shrouded in mystery, as current scientific analysis struggles to characterize the complexity and depth of the relationship.

Humans first domesticated dogs an estimated 18,000 to 32,000 years ago. The huge range of possibilities comes from shaky scientific data: the first domestication was so long ago that only fossil evidence can be used to determine the original point where the first dog-like organism came into being. Fossil aging tests operate remarkably precisely on a geological scale, pinpointing an age with an accuracy of a couple thousand years out of billions of years, but are frustratingly imprecise for pinpointing exact dates. The development of DNA analysis, however, is leading to more precise answers about where and how dogs first distinguished themselves from wolves. A study published in November 2013 in Science compared the DNA of modern dogs to the DNA of European wolves from the time frame when dogs were first estimated to have been domesticated, 19,000

years to 32,000 years ago. In essence, they compared DNA sequences: the more similar two species’ DNA, the more closely related the species. The DNA was extracted from the remnants of mitochondria in fossils, since this cell component, the powerhouse of every human and dog cell, remains stable and abundant for thousands of years after fossilization. The authors found that the modern dog is more closely related to ancient European wolves than to current wolves, indicating that the modern dog and the modern wolf evolved in two different directions from the ancient European wolf. Other scientists dispute this, arguing that the fossil record only reflects the split of dogs and wolves once they physically changed, and that genetically, they may have split as long as 100,000 years ago. The science remains unclear on the exact date of the split.

The cause of the split is shrouded in mystery as well. A study published in Nature early last year compared the DNA of modern dogs to that of modern wolves, and found a startling difference: modern dogs can digest starch while modern wolves cannot. The study authors suggest that this may have been a key stimulus in separating dogs from wolves: dogs that could digest starch would have an easier time living around human settlements, feeding off the remnants of bread and other similarly starchy foods. Yet other evolutionary biologists think this trait may have developed along the way, and that dogs were first attracted to discarded carcasses left after humans hunted large game. Still others think that wolves and dogs began to diverge genetically on their own at first, and that humans simply accelerated the process of separation.

The evidence that humans and dogs induced each other to evolve – that they coevolved – is much clearer, however. A classic study found that dogs show greater social cognitive ability

than any other species, including chimpanzees. They outperformed every other species on a series of tests that mainly focused on commanding the dog with gestures. Another classic study found that strong dog loyalty comes from the powerful social structures wolves form in packs, where adhering to a hierarchical order is essential for survival. Reams of studies have associated rounder features, floppier ears, and bigger eyes in dogs with domestication. A now-famous long-term study with foxes found that similar changes occurred in domesticated foxes over many generations.

Recent studies illuminate new insights. One particularly powerful study scanned both dog and human brains while they listened to sounds of dogs and humans progressing through various emotions. The study found that dogs and humans process voices and emotions in stunningly similar ways, doing both in the temporal lobe. No other non-primate processes emotions in its temporal lobe. As a result, dogs have an easier time than any other species shifting their vocal processing to include human commands, explaining the ability of a dog to accurately read human emotions and follow commands. Another study found that, in brain scans, dogs’ reward centers lit up when they saw human gestures associated with positive actions, such as feeding dogs treats, and had no reaction to other gestures.

Future research is focused on two major areas: the original split of the dog from the wolf, which, as described earlier, remains cloudy, and the impact dogs have had on humans. It is well known that in domesticating other animals, humans evolve as well. A clear example can be

found in cows: humans developed tolerance to lactose, the sugar in milk, as cow domestication spread. Researchers are now looking at the ways dogs may have impacted humans. A new, controversial theory argues that humans learned to hunt from wolves, while another, more accepted theory argues that dogs served as guards for communities, allowing humans to become more sedentary. Little DNA analysis exists, however,


but a host of new studies are underway. Dogs and humans share a long, storied history, but the details of the relationship remain murky. The subject remains an area of active research, but much light has already been shed on why we love dogs and why dogs love us. One thing remains clear, though: dogs have been a part of the human story for almost as long as we have been around, and they will continue to be for thousands of years to come.



As reports of Syria’s extensive use of sarin gas in the suburbs of Damascus trickled out of the war-torn country a couple of years ago, the world wondered in shock what kind of a country would launch chemical weapons against its own civilians. For many, the thought of chemical weapons conjures up images of WWI-era clouds drifting across the no-man’s land between trenches, but governments have used such weapons more recently, and closer to home, than you might think – in this country and in this decade. Remember the clouds of tear gas drifting over the Occupy protests? Or consider the regular use by ordinary citizens of chemicals marketed as non-lethal self-defense. Mace, tear gas, and similar agents are clearly chemicals used as weapons but, unlike the other weapons I’ve mentioned, are classified as non-lethal – and thus permissible for use by security forces – under the Chemical Weapons Convention. Where does the difference lie? The answer lies in the physiological effects of each gas. Nerve agents such as sarin and VX are fatal at low doses because of their over-stimulating effect on the human nervous system, while the sleeping gas BZ has the opposite effect, in a way that is less lethal but still insidious and incapacitating. Meanwhile, pepper spray and other tear gases use a variety of irritating chemicals to cause intense pain and lachrymation (eye-watering) – thus the general designation “tear gas.”

NERVE AGENTS: SARIN, VX, AND OTHERS

As the electrical signals which control our movement and senses pass through our nervous system, they come to synapses, the gaps between individual nerve cells, which would ordinarily halt these impulses. In order to pass the impulse to the next nerve cell or to the muscle at the end of the line, the first cell releases chemicals called neurotransmitters, which trigger the next cell to continue the impulse. Nerve agents prevent the breakdown of one of these neurotransmitters, acetylcholine, by attacking the protein which breaks it down in the synapse. The result is that these impulses never stop, and the victim loses control over muscles. Thus a nerve agent’s victim will convulse painfully and eventually die of asphyxiation as their diaphragm fails to function.


SLEEPING GAS: BZ

The sleeping gas BZ, which was allegedly stockpiled by Iraq in the run-up to the first Persian Gulf War, and allegedly used by the Syrian government during its civil war, has nearly the opposite effect. Rather than preventing the breakdown of acetylcholine, BZ blocks its reception by the cell on the far side of the synapse, blocking the same impulses that nerve agents put into overdrive. With fewer nerve impulses reaching their destination, the victim becomes confused and disoriented, and often exhibits lapses in judgment, including compulsively stripping. Even though the patient has trouble sensing and focusing on their environment, they still try to make sense of what few clues they have, resulting in hallucinations, which can be shared among several victims. Because the effects of BZ oppose those of nerve agents, BZ-like drugs have, with limited success, been used to counter nerve-agent poisoning.

MUSTARD GAS

Mustard gases are not, as their name implies, produced from the mustard plant, but are so named because they smell strongly of that famous condiment. Unlike the condiment, though, they are potent poisons which can be absorbed through the skin or the lungs. Once inside a cell, the molecule attaches one of its chlorine atoms to guanine, one of the four bases that make up the “ladder” of DNA. Though this isn’t in itself toxic, a protein which scans DNA for defects notices the damage, triggering the cell to kill itself from within. Because mustard agents cause the death of individual cells, similar molecules are used in chemotherapy, targeting cancer cells specifically.

TEAR GAS

In contrast to the fatal agents mentioned above, tear gas is a broad class of chemicals which merely irritate the skin. Capsaicin, the active chemical in both pepper spray and red peppers, triggers the same nerve cells which sense pain from heat, acid, or friction. Since these compounds directly target pain pathways, they can incapacitate the victim with few of the permanent harmful effects seen in nerve agents and mustard gas; it is for this reason that everyone from Ukrainian police to private citizens uses these compounds as less-lethal weapons. Tear gas is not, however, non-lethal. The irritation from the gas can constrict airways, especially when the victim is asthmatic, is in a poorly-ventilated area, or already has their airway constrained (for instance, if someone is restraining them by holding them around the chest), leading to suffocation. For this reason, the Nobel Prize-winning organization Physicians for Human Rights advocates that tear gas be reclassified as lethal under the Chemical Weapons Convention.

As governments and terrorists alike continue to search for ways to control or harm many people at once, chemical weapons will continue to be deployed around the world, from terrorist-targeted subways to urban protests to desert battlefields; research, then, will continue into how to stop them – or how to make them more potent.


An Interview with Sheng Zhou ’14

“[Orgo] makes everything seem so easy,” says Sheng Zhou, a senior in the chemistry department. Though many would beg to differ with such a statement, Zhou explains that in CHM 303/304B, students face the theoretical rather than the practical: they don’t need to carry out the reactions in a laboratory. For his senior thesis, Zhou is trying to synthesize ten different compounds from one molecule.

COMPLEX CHEMISTRY Upon first glance, the reactions all appear to be simple substitutions; but in the lab, it’s not so simple. “Sometimes, the reactions don’t work, and you don’t know why.” Zhou recounts the numerous difficulties of conducting novel research: the frustrations, the unexpected products, the low yields. “There will be many weeks when you don’t get results. Other times, one mistake can cause a week’s worth of progress to go down the drain.” However, no matter how grueling the research for his senior thesis is, Zhou always remembers that what he is doing has the potential to make an enormous difference in the world. For him, “research is in the service of people.”



Conducted by Nancy Song
THE RESEARCH The compound Zhou is working with is artemisinin, a molecule that has been used to treat the disease responsible for over half of all human deaths since the Stone Age: malaria. Malaria is caused by the parasite Plasmodium. Throughout its life cycle, Plasmodium releases free radicals within human cells that attack other cells, creating a never-ending cycle. Artemisinin neutralizes these free radicals, stopping the cycle and providing a “cure” for malaria. So why haven’t we heard of such a cure? First, artemisinin has a very short half-life; it is metabolized (broken down) so quickly in the human body that it does not have enough time to neutralize all the radicals formed. Furthermore, there have been suggestions that artemisinin is neurotoxic, potentially adversely affecting brain development.

“ONE PILL CURE TO MALARIA”

GLOBAL IMPACT Zhou aims to combat both of these problems through his senior thesis research. By changing one of the substituents (chains of atoms) on artemisinin, Zhou changes the way the new compound reacts within the human body. He has designed ten different products and has successfully created four of them so far. After synthesizing the molecules, he has to test them. Ultimately, Zhou hopes to synthesize a molecule that can function as a “one pill cure to malaria” by maximizing bioavailability and activity. The molecule would be stable within the body and take longer to metabolize, allowing it to neutralize most of the free radicals with only one dose. However, the new molecule must also preserve its activity, that is, how well it neutralizes the free radicals.
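The half-life problem is easy to see with a toy first-order elimination model: a drug that is metabolized quickly leaves almost nothing behind to keep neutralizing radicals, while a longer-lived analogue stays active for hours. The half-life values below are illustrative placeholders, not artemisinin’s measured pharmacokinetics.

```python
# Sketch: first-order (exponential) drug elimination.
def remaining_fraction(t_hours: float, half_life_hours: float) -> float:
    """Fraction of a drug dose still present after t_hours,
    assuming simple first-order metabolism."""
    return 0.5 ** (t_hours / half_life_hours)

# Illustrative half-lives only -- not real pharmacokinetic data.
for label, half_life in [("rapidly metabolized", 1.0), ("longer-lived analogue", 12.0)]:
    frac = remaining_fraction(8.0, half_life)
    print(f"{label}: {frac:.1%} of dose left after 8 hours")
```

With a one-hour half-life, under half a percent of the dose survives eight hours; with a twelve-hour half-life, well over half does, which is the intuition behind engineering a slower-metabolizing molecule.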


[Figure: Zhou changes a substituent (G) on artemisinin; the body quickly breaks down normal artemisinin, while the modified molecule is designed to be a stronger molecule with less neurotoxicity.]



Unfortunately, the senior thesis deadline might not allow Zhou to carry out his entire vision. Nevertheless, he feels that he has gained a great deal from the past few months alone. “[The senior thesis] keeps you humble. It teaches you endurance and perseverance. You’re doing research over the entire school year, not just ten weeks like in most internships.” But his most important takeaway was not about his research: “The senior thesis teaches you a lot about yourself. It’s a journey.” Though intimidating (it makes orgo look easy), the senior thesis not only proves rewarding and worthwhile for its authors, but can also be the start of something groundbreaking.



Designed by Sarah Santucci







You might liken it to searching for the proverbial needle in a haystack, but such a cliché does no justice to the modern hunt for dark matter – a multinational effort to find evidence for particles that have evaded detection since science was in its infant stages. I recently had the chance to meet with Professor Peter Meyers of the physics department to discuss this epic search, with specific respect to a project he is currently involved in – DarkSide. Before I could even consider delving into the details of a liquid-argon-based WIMP detector, I first needed some fundamental questions answered. Namely, what exactly is dark matter? And, given that it has remained hidden throughout human history, are we in a position to guarantee its existence? Meyers explained pedagogically: Dark matter is an observational fact; the idea has been around since at least the nineteen-forties. The theory is founded upon the concept of uniform circular

motion caused by a force of gravitational attraction. Consider a star at the edge of a galaxy. Given a few key pieces of information, such as the mass contained within its orbit and its distance from the galactic center, we can calculate the velocity with which it orbits the galaxy’s center of mass. Since we have fairly accurate figures for the masses of the bodies we have observed in various galaxies, we can calculate the velocities with which stars at the edges of galaxies should orbit. We can also simply measure their actual velocities. They differ from the calculated values. Significantly. The amount of mass we had originally

thought to be in any given galaxy is not nearly enough to sustain the velocities that stars in the galaxy actually exhibit. Perhaps there is a different kind of force at play. Or maybe gravity acts differently at fantastically large distances. Those two ideas aren’t very likely, however: there is a dearth of reasonable theories supporting them. There simply has to be more mass in the universe than we had initially considered. Thus, we have reached the definition of dark matter: the stuff that, though largely unnoticed, plays a crucial role in ensuring that cosmic bookkeeping doesn’t defy the known laws of physics. What else do we know about dark matter? The sheer quantity of the stuff must be on an astronomical scale. By most estimates, there exists about ten times more dark matter than conventional matter in the universe. What exactly is dark matter made of, then, for it to be so prevalent yet largely unnoticed? Certainly, this isn’t a case of planet-sized chunks of rock

Weakly Interacting Massive Particles (WIMPs) pass through conventional matter, interacting only occasionally and virtually imperceptibly with it.

that aren’t accounted for. If dark matter were visible, we would have been able to see it. Experts are fairly certain that we are dealing with something beyond typical protons, neutrons, and electrons. In short, we’ve got a new kind of fundamental particle on our hands. By definition, these particles exert a force of gravity on other particles in the universe and thus, by Newton’s third law, are affected by the gravitational forces exerted on them by surrounding bits of matter. Calculations suggest that each particle must have around one hundred times the mass of a proton. We can also figure out just about how densely distributed these particles are. Meyers cupped his hands and explained, “There may be a few right here, right now,” if only for a moment.

“We’ve got a new kind of fundamental particle on our hands.”

When matter and dark matter interact with each other, they typically do so in the form of an elastic collision. Physicists postulate that the particle interacts “weakly” with matter – in a fashion similar to neutrinos, but even weaker. Thus, we arrive at the Weakly Interacting Massive Particle (WIMP) theory of dark matter – Meyers’ area of expertise. According to the theory, WIMPs make their way through the universe, typically passing right through matter they come across and interacting only occasionally and virtually imperceptibly with conventional matter – in an event that can be likened to an elastic collision, occurring in only a small percentage of matter-dark matter rendezvous.

So that’s the theory. We need to verify it. And verification, of course, relies on observation. Every so often, dark matter may bounce off a particle of matter. Such an interaction is detectable, even on the atomic level. This kind of interaction is what Meyers and fellow professors from across the nation are trying to record. So how hard could it be? Well, the scale of such collisions is minuscule, as has been discussed. The more significant hurdle is the relatively low frequency with which these events occur – according to Meyers’ calculations, something on the order of once every year for a detector of a scale resembling his.

If we want to record what happens when a WIMP collides with something, we need a target – it’s got to have something to hit. In the case of the DarkSide-50 detector, a 50-kilogram device buried about 1.5 kilometers beneath a mountain at the Gran Sasso National Laboratory in Italy, the target material of choice is liquid argon. In short, the detector waits for a WIMP-argon interaction and then measures the results of this collision. But that is easier said than done. The true difficulty lies in dealing with false positives: radiation tends to produce results similar to those of interaction events. Cosmic rays penetrating the earth’s atmosphere would pose a major problem – to prevent this, the detector itself is buried deep beneath a mountain, within a tank of pure water. Argon, the detection medium, produces about one radioactive decay per second per kilogram. A trace element found in the air we breathe, argon can be harvested from the atmosphere. However, atmospheric argon is, quite simply, not good enough – it has been exposed to cosmic radiation, so its rate of decay is catastrophic in magnitude. The consequence is five billion decays in the DarkSide-50 detector over the course of three years – five billion readings that essentially cover up authentic WIMP-argon collisions. Hoping to eliminate this source of radioactive pollution, the team has decided that the detector will soon run on argon harvested from deep underground, where cosmic rays are absent. Considerable thought is given even to the material the detector itself is made of: radioactive purity on the order of one part per trillion is the ultimate goal. The detector was assembled in 2013 and has been running in its preliminary phase since October of last year. As this calibration run continues, the goal for the program is reducing background interference, which threatens to contaminate any and all results. Meyers notes that the DarkSide project is a relative newcomer

to the hunt for dark matter. The field is currently defined by several competing theories and detection methods. It appears as though this kind of competition allows the field to improve upon itself. So what exactly would the detection of dark matter mean for the scientific community in a broader sense? What new technologies can we expect? Meyers pensively replied, “There will be no poetry written by dark matter.” It just sits there. Mere detection requires intensive effort, let alone production or utilization. However, Meyers believes that the project’s significance is derived from the scale of the question, in a philosophical sense. Despite centuries of scientific progress, we have yet to definitively prove the modicum of understanding we currently have about a substance that constitutes

“There will be no poetry written by dark matter.”


a vast majority of the universe. Ultimately, proof is required for any given theory about dark matter to leave the realm of speculation and enter that of scientific fact. Validation of this theory offers unprecedented insight into the big bang itself. The effects of dark matter have propagated onward from the very beginning of time – in the way that galaxies are formed, and subsequently in the way that we were formed. But there’s another pressing question at hand. Is science topping out? Are there no more questions it is capable of answering? Physicists like Meyers are expending every effort to demonstrate that this is not the case. The hunt for neutrinos lasted thirty long years, and its relatively recent success dispelled the doubters of decades past. Given five to ten years, Peter Meyers believes that history will repeat itself as a new kind of fundamental particle is confirmed – one whose remarkable ability to remain hidden over the years is paralleled only by the significance of the role it plays in the way the universe works.
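Meyers’ background figures are simple to check. The sketch below reproduces the article’s back-of-envelope arithmetic, using the numbers given in the text: an activity of roughly one decay per second per kilogram for atmospheric argon, a 50-kilogram target, and a three-year run.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 seconds

def total_decays(activity_bq_per_kg: float, mass_kg: float, years: float) -> float:
    """Total radioactive decays in a target of the given mass over a run,
    at a constant activity (decays per second per kilogram)."""
    return activity_bq_per_kg * mass_kg * years * SECONDS_PER_YEAR

# Atmospheric argon at ~1 decay/s/kg in a 50 kg detector over 3 years:
background = total_decays(1.0, 50.0, 3.0)
print(f"{background:.2e} decays")  # ~4.7e9, the article's "five billion"
```

Against an expected WIMP signal of roughly one event per year, five billion background decays make plain why the collaboration wants low-radioactivity argon from deep underground.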


ACKNOWLEDGEMENTS Without our contributors, Innovation would not be possible. A special thanks to the following groups, departments, schools, people, and programs:

MIRTHE Andlinger Center for the Humanities PRISM Princeton Science and Tech Council Chemical and Biological Engineering Princeton Writing Program Keller Center Professor Saini Professor Stock Psychology Professor Gmachl Chemistry Ecological and Evolutionary Biology If interested in being a sponsor, email for more information.

Innovation Magazine - Winter 2015 - Princeton Journal of Science and Technology  

Showcasing Princeton's cutting edge research at the frontiers of science, engineering, and technology.
