
FALL 2016 | Berkeley Scientific Journal



EDITOR’S NOTE

In the age of instant communication and connection, one would hope that scientific information would be more easily accessed and absorbed. However, the opposite seems to hold true. With many of us on the Berkeley campus still reeling from the 2016 election, it can be disheartening to see just how easy it is to spread misinformation and for our country’s leaders to reject scientific theories and methods. Moving forward, scientists must fight not only with facts but with stories and pictures, with more ethos and pathos. It is clear that BSJ’s mission is more critical than ever: to educate young scientists and engineers in written and graphical communication, and to lead those specializing in the humanities to apply their skills in elucidating scientific concepts.

We are succeeding grandly in this mission. In Fall 2016, the BSJ team grew to over 45 Berkeley undergraduate students, one of our largest teams in recent years. This semester, we also redesigned the layout and graphics of the journal from scratch. We hope you enjoy this visually stunning issue!

Editor-in-Chief: Alexander Powers
Managing Editor: Harshika Chowdhary
Features Editors: Rachel Lew, Aarohi Bhargava-Shah
Interviews Editors: Georgia Kirn, Yana Petri
Research Editor: Akshara Challa
Blog Editor: Neel Jain
Finance Director: Navi Hansra
Features Writers: Aaya Abdelshafy, Allison Chan, Daniel Jiao, Diana Nguyen, Michelle Verghese, Leandra Padayachee, Katherine Liu
Interview Team: Catrin Bailey, Collin Neale, Daniel Yoon, Elena Slobodyanyuk, Sasinan Sangteerasintop, Jordan Wong
Research Team: Annalise Kamegawa, Avani Vaid, Cameron Mandley, Edna Stewart, Ellis Vavra


Katie Sanko Moonsuk Jang Naomi Sales Phuong Tran Yizhen Zhang Julia Peng

Laura Zhu Robert Maxwell Jiarui Liu Soohan Woo Vidyun Bais Yu Luo Tianshu Zhao

Harrison Ramsay Hyeonji Shim Jenny Kim Rebecca Chan


The Berkeley Scientific Journal proudly presents its Fall issue: Chronos. Fleeting or enduring, time is central to scientific inquiry, and its pace depends on the scale at which it is considered. This concept is captured by the evolutionary biology debate over gradualism vs. punctuated equilibrium. Do new species arise through gradualism, the slow accumulation of small changes, or through punctuated equilibrium, periods of stasis followed by rapid change? Both gradualism and punctuated equilibrium proved relevant for explaining speciation; however, gradual changes are only apparent on the scale of hundreds of thousands of years, as a species transforms into a new species, whereas punctuated equilibrium is visible on shorter time scales, when species quickly diverge. The moral of the story is that time, a crucial measure, can be perceived as ephemeral or everlasting depending on the topic being studied. In this issue, you will see how the scales of time advance our scientific knowledge of the present, past, and future. From the ominously fast spread of the Zika virus to the use of bacteria to undo decades of damage from plastic waste, the articles in this issue shed light on the importance of time in scientific exploration.

Harshika Chowdhary Managing Editor Alexander Powers Editor-in-Chief



Zika: The Formidable Speed of Viral Spread – Michelle Verghese
The Future of Bacteria Cleaning Our Plastic Waste – Allison Chan
The Future of Medicine: 3D-Printed Organs – Julia Peng
From One Life to Another: The Stress That Defies Time – Leandra Padayachee
Altered Perceptions: How Substances Influence Our Perception of Time – Aaya Abdelshafy
Antibiotics: From Modern Medicine to Global Risk – Yizhen Zhang
The Mechanics of Timekeeping – Katherine Liu
The Evolution of Intelligence in Corvids and Apes – Katie Sanko
Frozen in Time – Phuong Tran
Food Insecurity and Global Warming – Naomi Sales
Can We Look into the Past? – Moonsuk Jang
The Fresh New Future of Preservatives – Diana Nguyen
Aging and Immortality – Daniel Jiao


by Interviews Team


Chemistry Professor Christopher Chang: Transition Metals in Cell Signaling
Chemical Engineering Professor David Schaffer: Gene Therapy
Physics Professor Richard Muller: A New Perspective on Time

Research

Fourier Transform Infrared Analysis of Surface Ion Traps – William Tokumaru, Ishan Talukdar, & Hartmut Haeffner

Anthocyanin and Glucosinolate Nutrients: An Exploration of the Molecular Basis and Impact of Colorful Phytochemicals on Human Health – Akshara Sree Challa & Jnana Aditya Challa

Bidirectional Cross-Modal Influence on Emotion Ratings of Auditory and Visual Stimuli – Harrison James Ramsey





In 1947, deep inside the Zika Forest of Uganda, a group of scientists studying yellow fever happened upon something unexpected: a Rhesus monkey they were studying had developed a fever. The disease ultimately isolated from the monkey’s serum, however, was certainly not yellow fever: it was an undocumented virus that had been transmitted by a mosquito bite. A few years later, 12 unique strains of this virus were isolated from mosquitoes in the tree canopy of the forest, and the virus was named after the forest in which it was discovered. Fifty years went by, and few cases of Zika in humans were reported. Then suddenly, in 2007, Yap Island in Micronesia was struck by an outbreak that ultimately affected 75% of its residents. The perpetrator was identified as the same virus that had been discovered in the Ugandan forest. It proved to be acute and non-deadly in humans, causing only mild illness and no deaths or hospitalizations.3 By 2015, Zika had reached Brazil, which was preparing to host the 2016 Summer Olympics. By 2016, Zika was identified in the United States, and the World Health Organization officially declared a Global Health Emergency over what had become a worldwide viral epidemic. But how did Zika get here? How did a seemingly docile virus in the depths of an African forest circle the globe in a matter of nine years? And why would an outbreak of a non-deadly virus prompt a global health warning?

HOW DID A SEEMINGLY DOCILE VIRUS IN THE DEPTHS OF AN AFRICAN FOREST CIRCLE THE GLOBE IN NINE YEARS?

Zika is indeed both a non-deadly and acute virus; for the most part, the symptoms are not severe. Once infected, a patient will experience 3-12 days of fever,


red eyes, joint pain, headache, and a rash. In some cases, depending on the patient, the virus may even present asymptomatically. The Zika virus does not appear to be very threatening, but the same cannot be said for the complications that can result post-recovery: microcephaly and Guillain-Barré syndrome. Microcephaly is a neurological condition and birth defect in which an infant’s brain is significantly underdeveloped, resulting in a head size that is smaller than normal. The first suggestion that microcephaly may be linked to Zika came from Slovenia, where a woman suspected of having Zika gave birth to a child with intrauterine growth retardation and a reduced head circumference. A brain sample from the infant showed traces of Zika virus. Since then, the sudden increase in microcephaly cases in Brazil, in conjunction with its outbreak of Zika, has led scientists to believe that the virus can damage the developing neurons of the fetus, which could lead

to microcephaly.7 Guillain-Barré syndrome, on the other hand, is an illness with an underlying autoimmune mechanism, in which the immune system attacks the peripheral nervous system, resulting in muscle weakness and, in some cases, paralysis. Twenty percent of patients are left with a severe disability from GBS, but most experience symptoms for only a few weeks to months. Only a small percentage of those infected actually develop GBS; at the same time, countries such as Brazil, El Salvador, Colombia, and Venezuela have all seen increases in the number of GBS patients during the Zika epidemic.8 These complications are quite dangerous, and their association with Zika, though not officially confirmed, is strong. Furthermore, Zika’s life-changing complications, coupled with its rapid and sudden emergence, are concerning to scientists. Even more concerning is the adaptability of both the virus and its vector. The Zika virus falls under the umbrella term “arbovirus”: an RNA virus that is transmitted by arthropods, namely mosquitoes and ticks. In fact, Zika’s path toward the Western Hemisphere mimics the arrival of three other arboviruses: dengue, West Nile virus, and chikungunya. In general, arboviruses are quick to adapt, but their adaptation is accelerated by human travel

and urban crowding. In this way, viruses that seem contained to one region, such as a forest, can emerge quickly and unexpectedly. It is suspected that viruses get from the forest to the city through the encroachment of forested habitats by people looking for either housing or adventure, as well as through increased air travel.5 Additionally, Zika is spread by a mosquito genus known as Aedes, and more specifically by the species Aedes aegypti and Aedes albopictus. The Aedes mosquito is notoriously adaptable and able to spread not only Zika, but also dengue, chikungunya, and yellow fever. In fact, it is suspected that Zika initially reached the Americas because the Aedes mosquito traveled on a sailing ship from Africa and was able to adapt to the tropical climates of South and Central America.6 The possibility certainly exists that the Aedes species could adapt to colder temperatures, or could spread other arboviruses to the Western Hemisphere. Zika is still very relevant, and some even say it may become an endemic disease: a disease that is native to a certain location. It is becoming a real possibility that Zika will become endemic to the United States. As of October 2016, the Centers for Disease Control and Prevention upgraded its health advisory for Florida, stating that there is risk of local transmission. It is also advising pregnant women to take caution

The mosquito tower in Zika Forest

“Zika’s life-changing complications coupled with its rapid and sudden emergence are concerning to scientists.”

when traveling to parts of the country, especially those where local transmission is taking place.1 Another issue that has come to light, specifically in the Florida cases, is documented post-recovery sexual transmission. The virus was found in a man’s semen two weeks after his recovery, raising questions such as how long to wait after recovering from Zika before trying to have a child.2 Without our effort and interference, there is great potential for the virus to continue to rapidly adapt and spread. Luckily, there are already efforts underway, and research that can still be done, to control the virus. Vector control is an important part of viral protection; it involves attempting to reduce our contact with the vector of a virus. Simply limiting exposure to mosquitoes by using bed nets and eliminating standing water can greatly reduce the chances of disease introduction. More aggressive and widespread mosquito control using insecticides is underway, including aerial spraying in Miami, but it is proving to be quite challenging.9 A Zika vaccine is certainly on the table; the first vaccine began human trials in June 2016, and fifteen other vaccine candidates are still in development. However, the sporadic and unpredictable nature of Zika makes it inefficient and expensive to vaccinate large populations preemptively. At the same time, waiting to vaccinate patients after an outbreak has begun might be too late to effectively halt transmission.4 A solution may lie in linking arbovirus trends. We can study the arboviruses that share a common trend of expanding after previously being restricted to remote areas. This could lead to the development of a vaccine platform that is adaptable to work for a




Brain scans of an infant with microcephaly


range of newly emerging viruses. Some argue that a broad-spectrum antiviral would be more efficient than a vaccine specific to one virus. However, others say that a vaccine platform has the potential to make Zika more virulent; research has shown that small numbers of antibodies to dengue, for example, can allow the Zika virus to infect macrophages in the bloodstream. It is therefore challenging to design a vaccine that is cost-effective, easy to implement, and works as intended in a variety of patients. Apart from vector control and vaccine development, there is still more we can do to aid the effort against Zika. It is vital that people are educated about the risks of transmission so that they know to seek medical attention when necessary. We need to improve our diagnostic testing mechanisms for Zika so that they are not only more specific but also more fit for use in rural, remote areas. In terms of epidemiology, we can study the adaptation of Zika, specifically the differences between the African strain and the American strain, to learn more about the ways in which Zika may continue to adapt. Medically, we can find a way to identify the Zika virus in the fetus during pregnancy. We can protect the blood supply so that the disease is not spread unintentionally by blood transfusion. Most importantly, we should not disregard Zika simply because the infection is acute; the complications have proved to be detrimental, and the virus has proved its ability to spread rapidly and unexpectedly. We have more than enough reason to continue studying the virus and its effects, work toward a functional vaccine, protect ourselves, and be prepared.


1. Allen, G. (2016, October 26). Zika may be in the U.S. to stay. NPR.
2. Chang, C., Ortiz, K., Ansari, A., & Gershwin, M. E. (2016). The Zika outbreak of the 21st century. Journal of Autoimmunity, 68, 1-13.
3. Duffy, M. R., Chen, T., Hancock, W. T., Powers, A. M., Kool, J. L., Lanciotti, R. S., . . . Hayes, E. B. (2009). Zika virus outbreak on Yap Island, Federated States of Micronesia. New England Journal of Medicine, 360(24), 2536-2543.
4. Dyer, O. (2016). Trials of Zika vaccine are set to begin in North America. BMJ, i3588.
5. Fauci, A. S., & Morens, D. M. (2016). Zika virus in the Americas — yet another arbovirus threat. New England Journal of Medicine, 374(7), 601-604.
6. Imperato, P. J. (2016). The convergence of a virus, mosquitoes, and human travel in globalizing the Zika epidemic. Journal of Community Health, 41(3), 674-679.
7. Mlakar, J., et al. (2016, March 10). Zika virus associated with microcephaly. New England Journal of Medicine.
8. Smith, D. W., & Mackenzie, J. (2016). Zika virus and Guillain-Barré syndrome: Another viral cause to add to the list. The Lancet, 387(10027), 1486-1488.
9. Stawicki, S. P., et al. (2016). The emergence of Zika virus as a global health security threat: A review and a consensus statement of the INDUSEM Joint Working Group (JWG). Journal of Global Infectious Diseases, 8(1), 3.






Everything has a beginning and an end. An apple core thrown into the dirt can be consumed by a worm, which then excretes nutritious waste for new plants to grow on. However, man-made items such as plastics, Styrofoam, rubber, and aluminum defy this natural cycle that allows growth in our ecosystem. What happens to the plastic water bottle you might have used the last time you went hiking? While an apple can be recycled into new material within two months, a plastic bottle can take more than 400 years to decompose. Plastic waste chokes the normal cycles of our ecosystem, and it is critical to find ways to flush the durable material out without harming the environment even more. We may think of plastic as cheap, lightweight, and disposable, like a to-go package: plastic spoon, fork, container, cup, and bag. But it doesn’t just “disappear” after being thrown away. Instead, much of it collects in the ocean.

IN 2015, IT WAS ESTIMATED THAT MORE THAN 5 TRILLION PLASTIC PIECES WEIGHING OVER 268,940 TONS WERE AFLOAT AT SEA, not including the larger plastic debris.4 How did it get there? One of the most common ways that marine debris travels from land to water is by being swept through storm drains during rain storms. Rivers and waterways also wash trash into the bay. While we do not see this vast quantity of used plastic on land, most of it collects in our oceans. Most critically, the plastic in the ocean endangers marine life because it is both a choking hazard and toxic. Marine animals such as sea turtles, mammals, seabirds, and crustaceans are vulnerable to entanglement, which can lead to death. Plastics, especially polyvinyl chloride, or PVC, are toxic to our health and the environment.9 PVC releases mercury, dioxins, and phthalates, which could lead to life-long health

threats, such as cancer and damage to the immune or reproductive system.3 Plastics take a long time to decompose, but what about biodegradable plastics? Contrary to what one might expect,

BIODEGRADABLE PLASTICS MAY NOT ACTUALLY BIODEGRADE QUICKLY, because most of them end up in the ocean. Even in favorable environments, such as soil with bacteria, fungi, or hot temperatures, biodegradable plastic bags are only half-decomposed after 389 days.2 Degradation of biodegradable plastics takes approximately 3 years underwater, since underwater conditions differ from those on land. In addition, heavier plastics that sink cannot be broken down by UV light. While there may be ways to remediate the harm done, sea debris continues to increase, making cleanup programs insufficient.



IN ORDER TO STOP PLASTIC WASTE FROM ACCUMULATING, THERE MUST BE A CHANGE IN THE SOCIAL UNDERSTANDING OF PLASTIC WASTE. One poll found that “biodegradable plastic” is more likely to be littered, since people assume it is acceptable to litter something that will degrade.8 Educating people and changing the mindset of treating plastic as a temporary item would help keep plastics from entering the ocean. Ultimately, we need to find a way to stop plastic littering, and integrating recycling into social behavior is the key. Meanwhile, there have been new ideas to help clean the large plastic soup, either by reducing carbon dioxide release or by using less material or energy to degrade plastic. Although a workable solution may be a long while away,

THERE IS A LEAD IN SOLVING THE PROBLEM: PLASTIC-EATING BACTERIA. In March 2016, a Japanese research team found a bacterium that could completely degrade polyethylene terephthalate, or PET, within 6 weeks. This plastic is found in water bottles, clothing, and packaging, and is known to be highly resistant to biodegradation. No organisms had been found to biodegrade PET prior to this discovery. Out of a variety of microbes, one was responsible for PET degradation: Ideonella sakaiensis.5 It damaged PET film extensively and almost completely degraded it after 6 weeks at 30ºC. When researchers sequenced the genome of this bacterium to find the main contributors to the PET hydrolytic activity, they found an enzyme this bacterium secretes: a PETase. This enzyme generates an intermediate, MHET, which is taken back up by the cell and hydrolyzed by a second enzyme. This second enzyme, an MHET hydrolase, converts MHET into two environmentally benign monomers: terephthalic acid and ethylene glycol. The organism then uses these monomers to facilitate its growth.

SEM images of I. sakaiensis cells grown on PET film for 60 hours. Arrowheads in the left panel indicate contact points of cell appendages and the PET film surface. Magnifications are shown in the right panel. | Science

Despite breakthrough discoveries
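The two-enzyme pathway described above can be summarized as a simplified reaction scheme (polymer stoichiometry omitted):

```latex
\begin{align*}
\text{PET} + \text{H}_2\text{O} &\xrightarrow{\text{PETase}} \text{MHET} \\
\text{MHET} + \text{H}_2\text{O} &\xrightarrow{\text{MHETase}} \text{terephthalic acid} + \text{ethylene glycol}
\end{align*}
```

Both steps are hydrolysis reactions; the two resulting monomers then serve as the bacterium’s carbon source.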

such as these, it is still not clear how plastic-eating bacteria can contribute to our ocean problem unless we modify them, or create a system that makes them self-sustaining, efficient, environmentally friendly, and less costly than current methods.1 Current recycling methods include using chemicals and heating to more than 700ºF. Heating releases the chemicals in the plastic into the environment, and it requires proper disposal of the ashes. An improvement could be made to the bacteria by engineering them or their enzymes.6 Molecular biotechnology allows researchers to transfer units of genetic information between organisms.

The I. sakaiensis bacterium discovered by Yoshida et al. (5) can attach to PET. It produces two hydrolytic enzymes (PETase and MHETase) that catalyze the degradation of the PET fibers to form the starting monomers. The monomers are then used as a carbon source by the bacterium.



One possible implementation of using bacteria to clean plastic waste is a controllable bioremediation system.7 One suggested bioremediation system in Indonesia involved the use of a whole-cell biocatalyst that can degrade the PET component in plastic waste. In this devised plastic degradation system, researchers could use synthetic bacteria to degrade plastic. Because plastic products are harmful to bacteria, they could design the bacteria to use the waste as an energy source instead and continue growing. There would therefore be no need to input more material, as the bacteria are self-sustaining, unlike chemical pools or furnaces. Plastic is a critical harm to our ecosystem. Production of plastic harms workers, the slow degradation of plastic in the ocean risks sea animals’ lives, and current methods of recycling plastic can be improved to reduce energy costs. Recent discoveries of plastic-degrading bacteria are a promising lead for cleaning up the plastic in the ocean. Bacteria can sustain themselves on plastic, and they can also be modified to produce less harmful waste. While modifications to the bacteria can make a better system to degrade the plastic, there has to be more active participation in properly recycling plastic so that less of it is washed to sea.

[1] Kale, S. K., Deshmukh, A. G., Dudhare, M. S., & Patil, V. B. (2015). Microbial degradation of plastic: A review. Journal of Biochemical Technology, 6(1), 952-961.
[2] Li, W. C., Tse, H. F., & Fok, L. (2016). Plastic waste in the marine environment: A review of sources, occurrence and effects. Science of the Total Environment, 566-567, 333-349.
[3] Pivnenko, K., Eriksen, M. K., Martín-Fernández, J. A., Eriksson, E., & Astrup, T. F. (2016). Recycling of plastic waste: Presence of phthalates in plastics from households and industry. Waste Management, 54, 44-52.
[4] Eriksen, M., Lebreton, L. C. M., Carson, H. S., Thiel, M., Moore, C. J., et al. (2014). Plastic pollution in the world’s oceans: More than 5 trillion plastic pieces weighing over 250,000 tons afloat at sea. PLoS ONE, 9(12), e111913.
[5] Yoshida, S., Hiraga, K., Takehana, T., Taniguchi, I., Yamaji, H., Maeda, Y., Toyohara, K., Miyamoto, K., Kimura, Y., & Oda, K. (2016). A bacterium that degrades and assimilates poly(ethylene terephthalate). Science, 1196-1199.
[6] thesalt/2016/03/10/469972237/this-plastic-eating-bacterium-might-help-deal-with-waste-one-day
[7] php/procicgrc/article/view/86
[8] Shah, A. A., Hasan, F., Hameed, A., & Ahmed, S. (2008). Biological degradation of plastics: A comprehensive review. Biotechnology Advances, 26(3), 246-265.
[9] Koelmans, A. A., Bakir, A., Burton, G. A., & Janssen, C. R. (2016). Microplastic as a vector for chemicals in the aquatic environment: Critical review and model-supported reinterpretation of empirical studies. Environmental Science & Technology, 50(7), 3315-3326.

Small round-shaped bacteria and diatoms (tiny algae) are seen on a 5-mm-long plastic in waters off the island of Tasmania, Australia. | AFP-JIJI




Dr. Christopher Chang is a Professor of Chemistry and Molecular and Cell Biology at the University of California, Berkeley. He is also an investigator at the Howard Hughes Medical Institute. Professor Chang’s laboratory focuses on the design and synthesis of chemical tools for molecular imaging, chemoproteomics, and optogenetics. In this interview, we talk about the detection of redox-active transition metals such as copper and iron by fluorescent probes and discuss their role in biological systems and disease.

Professor Christopher Chang [Source: UC Berkeley College of Chemistry]

BSJ: How did you first get involved in research in the fields of Bioinorganic Chemistry and Chemical Biology?

CC: Bioinorganic Chemistry is a relatively new field of Chemistry, as it’s in between the classic fields of Inorganic Chemistry and Molecular and Cell Biology. I did undergraduate research in Inorganic Chemistry and then graduate research in Energy Science and Inorganic Chemistry. As a postdoc, I started to get more interested in the Chemistry/Biology interface. The research group we started at Berkeley pretty much became an amalgamation of the different sorts of chemistry experiences that I had up to that point.

BSJ: Redox-active transition metals have been largely thought of as static cofactors. What has inspired your interest in specifically studying cell signaling of labile transition metals?

CC: We view the periodic table as Nature’s Rosetta Stone. When you look at everything




around you, it’s made up of combinations of different elements, and so is life. At slower time scales, you sustain life through metabolism; at faster time scales, you transfer information through signaling. We got interested in these faster time scales because it turns out that most people study long-term effects. You have to start from somewhere, however, because elements cannot be created or destroyed – only arranged in different combinations. We thus wanted to study the fastest and earliest time points of signaling

to understand how it occurs. The term “labile” refers to something weakly bound, exchangeable, something that would move quickly. It turns out to be a view of a continuum of elements and how they mix together.

Recognition- and reactivity-based approaches for metal detection [Source: "Recognition- and Reactivity-Based Fluorescent Probes for Studying Transition Metal Signaling in Living Systems"1]

BSJ: Your laboratory has pursued two general strategies for labile transition metal detection, “recognition” and “reactivity.” What do these strategies encompass on a molecular level?

CC: The recognition approach is sort of the traditional approach. It goes back to the lock-and-key chemistry of enzymes. The element plays the role of the key and the probe plays the role of the lock. The idea of recognition is developing the right sets of locks to detect select element keys. The reactivity approach is sort of a corollary to recognition – there are lots of keys that can fit in various locks. In the reactivity approach, the binding also causes some sort of chemical change. This gives you two filters of selectivity: not only binding, but also a reaction. Depending on the situation, we try one or the other.

BSJ: Your research group has developed novel fluorescent probes for studying the signaling roles of labile transition metals. What criteria must these probes meet to be suitable for imaging in living organisms?

CC: The most important thing is selectivity, because you want to distinguish different elements from each other. Selectivity is really the biggest challenge, because biology is very heterogeneous and very complicated. A human cell is different from a mouse or plant or yeast cell, or from a bacterial cell in your microbiome. Even within us, your brain is different from your liver; it’s different from your skin, from your kidney, from your heart. And so the probe must be selective for a given context (e.g. specific cell type). The second important thing would be readout, or visual response from the probe, because we do imaging where changes in color relate to signaling.

“We design probes based on hard-soft acid-base theory, as well as shape selectivity or preferences for particular oxidation states”

BSJ: Several probes designed in your laboratory have been copper- or iron-specific. How do the probes differentiate between different oxidation states of these metals?

CC: That challenge makes the part of the periodic table we studied more difficult. We not only have elements, but also elements of different forms, because they can attain multiple oxidation states. Our research goes back to the very fundamental principles of Inorganic Chemistry. We design probes based on hard-soft acid-base theory, as well as shape selectivity or preferences for particular oxidation states. For example, we can discriminate between Cu(I) and Cu(II) (note that Cu(I) is softer than Cu(II)), and so we can change a receptor or the reactivity group on the probe to suit the desired oxidation state.




BSJ: Why has your laboratory been so interested in studying copper? What role does this metal play in diseases like Menkes and Wilson's?

CC: One of the reasons why we study copper is that it is one example of a transition metal that is very abundant; also, copper and iron are the two major elements in biology that change oxidation states. The diseases that you mention turn out to be mainly genetic and are directly related to copper dysregulation. Patients with Menkes disease are copper deficient and have a genetic mutation centered in a specific protein. Wilson's disease, which runs in families, is the inability to pump copper out of the liver. Copper builds up in the liver and can't get to other parts of the body. Wilson’s and Menkes diseases are models for more complicated neurodegenerative diseases like Alzheimer’s, Parkinson’s, and Huntington’s.

BSJ: We noted that you have recently been involved in a project aimed at creating a diagnostic tool for monitoring copper levels in biological fluids (such as blood). Could you please tell us a little bit more about this work?

CC: This work is a collaboration with Jeffrey Long's group.3 Jeff is developing porous materials for energy applications like carbon capture. What we decided to do is see if we could use them for biological diagnostics or for environmental applications. The first concept was making a sponge, a selective sponge for copper that you could dip into bio-fluids to selectively remove or take up copper in that material. The “divide-and-conquer strategy” refers to having a sponge to remove copper, or being able to separate copper out in situ, before adding any sort of imaging indicator. The indicator doesn't have to go directly into the bio-fluid; the colorimetric assay can be performed on the bench top or, hopefully, in some take-home test kit.

BSJ: Using the Copper Fluor-3 sensor, you have recently shown that copper also plays an important role in neuronal function – it is a modulator of spontaneous activity in the brain. What are the advantages of this sensor?

CC: I would say that the one advantage of that sensor is that it allowed us to go from cells to tissue for the first time. It was really important for the neurobiological studies, because you could isolate cells from brains and then make synaptic connections. The problem is, if they're just in a dish, then those connections aren't natural. What you would really like to be able to do is dissect tissue where natural connections are made. That was the advantage of those types of probes. It allowed us to see what copper was doing in a circuit that was natural and intact.

Using the fluorescent probe Copper Fluor-3, Professor Chang and his research group showed that copper signaling is essential to the health of the human brain2 [Source: Lawrence Berkeley National Laboratory]


Berkeley Scientific Journal | FALL 2016

BSJ: What are some of the future directions of your research?

CC: What we’re interested in right now is looking at combinations of elements and how those give rise to behavior. A lesson we’ve learned is that a metal can serve as a signal, as well as a static cofactor, and it is just a matter of timescale. We have a whole region of time that we have analyzed in a basic way. One area that we are looking at is the brain and how metal signaling can dampen or control brain activity. An important question we’ve been led to is whether signaling controls certain behaviors. We’re looking at sleep behavior, at regulation of fear, and at fight-or-flight responses. A paper we published this summer looks at your ability to burn fat, and shows that copper is necessary for proper fat burning. The idea is that you are what you eat, and transition metal signaling is controlling how much energy you burn. So the future direction of our research is investigating how signaling gives rise to behavior, rather than diseases, per se.

“One area that we are looking at is the brain and how metal signaling can dampen or control brain activity”

BSJ: We noticed that iron is another metal that your research group has been closely investigating. Why is it important to monitor labile iron pools? How does iron affect our health?

CC: Because it turns out that there is a certain amount of iron you need in your body. It is well known that you need it for respiration, electron transfer, oxygen binding and transport, as well as lots of metabolic types of oxidation, such as metabolizing food and drugs in your body. But it turns out that “static” iron doesn’t account for all the iron that exists in your body. And so there’s this other pool, called the labile pool, which has an unknown function. However, it is known to exist, because the metabolic proteins alone can’t account for all the iron needed by the body. So one of the challenges is to actually see the labile pool, to see it changing over time.

BSJ: Probe metal-detection technology is now actively used in research. Does this discipline face any lingering limitations?

CC: Yes. It’s a relatively new discipline, because it takes from a lot of different areas and isn’t a classic field. I would call it “molecular sensing and imaging,” because there is organic chemistry, inorganic chemistry, materials chemistry, chemical biology, and analytical chemistry. The limitations are that we don’t really know how to selectively bind to or react with all the elements. There are over a hundred elements across the periodic table, and, so far, we only know how to work with fewer than a dozen really well. Rather than a limitation, however, I would call this more of an opportunity, because there are all sorts of things - elements, length scales, tissues, animals, plants - to study and learn about. You could even do environmental sensing, such as in an ocean, or a lake, or the atmosphere.

BSJ: Thank you very much for your time!

References
1. Aron, A. T.; Ramos-Torres, K. M.; Cotruvo, J. A.; Chang, C. J. Acc. Chem. Res. 2015, 48(8), 2434-42.
2. Aron, A. T.; Ramos-Torres, K. M.; Cotruvo, J. A.; Chang, C. J. Proc. Natl. Acad. Sci. U.S.A. 2014, 111(46), 16280-5.
3. Lee, S.; Barin, G.; Ackerman, C. M.; Muchenditsi, A.; Xu, J.; Reimer, J. A.; Lutsenko, S.; Long, J. R.; Chang, C. J. J. Am. Chem. Soc. 2016, 138, 7603-9.






Three years--that’s how long New Yorker Tim McCabe has been waiting for a kidney transplant. Diagnosed with ulcerative colitis as a teenager, McCabe has been suffering from deteriorating kidneys ever since. Each day, McCabe waits by the phone, anticipating news of an available kidney, and each day he is met with disappointment. Having left his job as a highway inspector due to his declining physical condition, McCabe now spends his days confined by nonstop dialysis treatments, hoping to survive long enough to watch his two sons grow up.1 This is the unfortunate reality for many Americans awaiting organ transplants, as there is a seemingly perpetual organ shortage crisis in the U.S. In the last ten years, the number of patients requiring a transplant has more than doubled, yet the actual number of transplants performed has remained stagnant. There are currently over 119,000 people awaiting an organ transplant, but in 2015, only 30,970 transplants were performed, with the wait time for each transplant averaging 3 to 5 years. Every 10 minutes, another person is added to the waiting list, and every day, 22 people die waiting for a transplant.6 So what can we do to solve this issue? What if there were a way to make a new organ on demand, eliminating the need for donor compatibility and resolving the waiting list crisis? The solution may lie in 3D bioprinting, the manufacturing of new tissues and organs using 3D printing technology. This would involve taking a sample of a patient’s cells and using those cells to ‘print’ a new organ by depositing cells and biomaterial layer by layer, creating a tissue structure identical to that of natural human tissue. Over the years, researchers have developed and improved upon methods of printing vital human tissue, and this engineering technology has now advanced to a point where systematic organ printing may be on the horizon.


The general 3D tissue printing process that researchers have been using does not deviate much from traditional 3D printing; the major difference is that the printer deposits cellular biomaterial instead of synthetic material. The printing process involves three major steps: preprocessing, or the development of the computer blueprint; processing, or the depositing of biomaterial; and post-processing, or tissue maturation and conditioning. In the processing stage, there are currently three main approaches to depositing biomaterial: inkjet, microextrusion, and laser-assisted printing. Thermal inkjet printers heat the printhead electrically, forcing droplets of material out of the nozzle. Microextrusion printers use pneumatic (operated by air pressure), piston, or screw dispensing systems to extrude beads of material from the nozzle. Laser-assisted printers use laser-induced pressure to propel cell materials onto a collector. Each printer type has its own set of advantages and disadvantages. For instance, microextrusion printers have limitations on material crosslinking (molecular bonding in the material), while inkjet printers have limitations on material viscosity.3 Compared with non-biological 3D printing, 3D bioprinting does involve some additional complexities, including cell types, growth factors, and the sensitivities of living cells. Due to these complexities, many studies of bioprinting use only a limited range of materials, mostly involving collagen, hyaluronic acid, alginate, and modified acrylates.4 The material used in the printing process needs to be easily manipulated by the machine while maintaining its cellular functions and providing support for the overall structure. But current bioprinting technology already

has the capacity to revolutionize modern medicine as we know it, as there have been records of its potential for success. In 2011, six-month-old infant Kaiba Gionfriddo suddenly stopped breathing due to a collapsed windpipe. After the life-threatening attack recurred for several weeks, doctors and researchers at the University of Michigan, Ann Arbor harnessed the power of advanced engineering technology and 3D printed a tracheal apparatus--a tubular device that wrapped around the infant’s tracheal tube to keep the airways open. Constructed via inkjet printer from biomaterials compatible with the infant’s body, the tube was successfully integrated into the infant’s respiratory system, where it was eventually dissolved and reabsorbed by the body.5 Of course, this is only an example of a success in printing and implanting human tissue, as opposed to an actual organ.

3D bioprinting of a kidney prototype

When it comes to printing a structure as complex as an organ, there are a number of additional factors to consider, including growth of cells, complex cell structure, and oxygen delivery. Organs are large structures, so billions of cells must be grown at a time; these cells not only assemble in multiple layers, but they also interact with each other. In addition, because the organs need to be supplied with oxygen before implantation into the body, an oxygen supply system must be developed for each individual organ. But perhaps the most critical challenge in organ printing is the integration of a vascular system, or the assembly of blood vessels to enable nutrient and gas exchange.4 For Dr. Anthony Atala, director of the Wake Forest Institute for Regenerative Medicine at Wake Forest University, and his team of biomedical researchers, this posed the perfect challenge. Having already successfully printed bladders, cartilage, skin, and urine tubes that were implanted into patients, these researchers are currently working on printing an actual kidney. The first step of the proposed printing process would be to take a biopsy of the organ in question. Cells from the biopsy with regenerative potential would be isolated, multiplied, kept in a nutrient-rich mixture, and transferred to a printer cartridge. A separate cartridge would then be filled with structural biomaterial. When the “print” button was pressed, the biomaterial would deposit layer by layer to create the structure, and the cells would be embedded between each layer. To resolve the issue of vascularization, a new fabrication technique would have to be implemented, involving the printing of multiple branched channels of blood




Successfully 3D printed bone, ear, and kidney prototypes, from the Wake Forest Institute for Regenerative Medicine

“The ultimate goal is to design a printing system utilizing the patient’s own cells”


vessels. Bioreactors, which help preserve the state of the tissue during the vascularization process, would maintain tissue viability and “buy” time for vessel integration and blood transfusion in this post-processing stage. Under the right physiological conditions, the cells would perform as they would in a real organ. Although the researchers have successfully printed multiple 3D kidney prototypes using synthetically grown kidney cells, the process for printing an actual kidney is still in its developmental stages. For these scientists, the ultimate goal is to design a printing system utilizing the patient’s own cells, so that donor compatibility would not be an issue and rejection medications would no longer be needed.4 Even though 3D organ printing has very real implications for transforming the field of medicine, as of right now, only a small portion of the 3D printing industry’s investment has been allocated toward biology and medicine. 3D printing has become hugely popular over the years. It’s a 700 million dollar industry, but only 11 million dollars are currently invested in medical applications. However, over the next decade or so, 3D printing is projected to become an 8.9 billion dollar industry, with 1.9 billion dollars invested in medical applications.2 With this emerging interest in 3D bioprinting for medicine, perhaps one day, made-to-order kidneys, bones, and hearts will be available for all.


1. Bernstein, F., Liu, Y., & McCann, S. (Directors). (2016). Waiting List [Motion picture]. United States.
2. Kang, H., Lee, S. J., Ko, I. K., Kengla, C., Yoo, J. J., & Atala, A. (2016). A 3D bioprinting system to produce human-scale tissue constructs with structural integrity. Nature Biotechnology, 34(3), 312-319. doi:10.1038/nbt.3413
3. Kolesky, D. B., Truby, R. L., Gladman, A. S., Busbee, T. A., Homan, K. A., & Lewis, J. A. (2014). 3D bioprinting of vascularized, heterogeneous cell-laden tissue constructs. Advanced Materials, 26(19), 3124-3130. doi:10.1002/adma.201305506
4. Murphy, S. V., & Atala, A. (2014). 3D bioprinting of tissues and organs. Nature Biotechnology, 32(8), 773-785. doi:10.1038/nbt.2958
5. “New Study Shows How Babies’ Lives Were Saved by 3D Printing (with Video).” University of Michigan. N.p., n.d. Web. 05 Nov. 2016.
6. Organ shortage crisis: Problems and possible solutions. Transplantation Proceedings, 40(1), 34-38. doi:10.1016/j.transproceed.2007.11.067




With accumulating academic, financial, and personal responsibilities, no college student is a stranger to the idea of stress - that feeling of overwhelming anxiety constantly nagging the mind. However, could there be a chance that this “stress” was passed down from our parents, or even grandparents? The idea of gene modification from environmental influences has been studied numerous times, yet the proposal that such alterations can be passed down to succeeding generations is an entirely novel concept. Recent research with mice and human subjects reveals that when negatively influenced by a certain external trauma, both species can genetically transfer these sensitivities to their offspring. Yet how can a parent’s “fear” be biologically passed down? According to conventionally accepted scientific understanding, the genetic sequences contained in DNA are the only way to transmit biological information across generations. Random DNA mutations, when beneficial, enable organisms to adapt to changing conditions, but this process typically occurs slowly over many generations.1 However, epigenetics, the study of heritable changes in gene expression, does not involve changes to the underlying DNA sequence. Epigenetic change occurs through the addition of methyl (CH3) groups to certain locations on the DNA molecule, which in turn silences parts of the gene in a particular pattern and leads to a specific modification of the gene’s phenotype. The

DNA code itself is not changing; it is which parts of the DNA code are expressed that undergoes alteration. This process is called DNA methylation, and it can ultimately be influenced by several factors including age, disease, and the environment.4 Although the epigenetic mechanism of DNA methylation has been scientifically proven, the idea of transmitting these alterations from generation to generation is still debated, especially in subjects as complex as humans. One controlled experiment with mice, however, found that when a mouse learns to become afraid of a certain odor, its pups will be more sensitive to that odor, even though the pups will have never encountered it themselves.3 Researchers Kerry Ressler and Brian Dias studied epigenetic inheritance by training laboratory mice to fear the scent of acetophenone through the pairing of odor exposure with electric shocks. This Pavlovian fear conditioning ultimately increased sensitivity in the mice’s olfactory bulb; however, the researchers also discovered that the naïve adult offspring of the sensitized mice inherited the same behavioral sensitivity to the smell.2 According to Dias, “[t]he inheritance takes place even if the mice are conceived by in vitro fertilization, and the sensitivity even appears in the second generation (grandchildren). This indicates that somehow, information about the experience connected with the odor is being transmitted via the sperm or eggs.”3 The researchers proposed that DNA methylation explains the inherited sensitivity - the acetophenone-sensing gene of sperm cells had fewer methylation marks, which could have led to greater expression of the odor-receptor gene in the mice’s offspring. However, the research leaves some questions unanswered. For example, the reversibility of the effect is unknown: if sensitized parents later learn not to fear an odor, it is not clear whether their pups would still show the sensitivity. Another limiting factor is that the epigenetic research involved only the smell receptors. What about the other senses - sound, taste, and so on?3 This research is relatively new, so we can only wait and see how Dias and Ressler continue their trans-generational research. Testing generational epigenetic inheritance with laboratory mice seems to be the best method for probing the theory of epigenetic inheritance, due to the experiment’s extremely controlled environment. Moreover, attempting this research on human patients remains very controversial, since controlled studies are neither feasible nor ethical - people are constantly shaped by social interaction as well as biological inheritance, so separating the two would prove more than difficult. Nonetheless, in a recent 2015 experiment, researcher Rachel Yehuda studied survivors of

the Holocaust as well as their offspring to test the transmission of stress through epigenetic mechanisms. Yehuda conducted research both on Holocaust victims struggling with post-traumatic stress disorder (PTSD) and on their unaffected children, and ultimately found that the traumatic exposure to the Holocaust had an effect on FK506-binding protein 5 (FKBP5) methylation in both generations, a correlation not found in the control group and their children.5 FKBP5 is an immunophilin protein, which means it plays a role in regulating the immune system. This gene has been linked to major depressive disorder, since it interacts with the hypothalamic-pituitary-adrenal (HPA) axis. The HPA axis is a complex biological mechanism that controls the body’s reaction to stress, and is strongly linked to the neurophysiology of depression.7 According to Yehuda, the FKBP5 methylation in Holocaust parents was found specifically on bin 3/site 6, a site on the gene associated with psychological childhood stress - an alteration most likely due to the trauma of the Holocaust. In contrast with the parents, however, the offspring showed methylation on bin 2/sites 3 to 5, a location on the gene typically associated with childhood physical and sexual abuse.6 Despite these subjects having no history of such abuse, the FKBP5 methylation nevertheless occurred, prompting Yehuda to attribute these outcomes to transgenerational epigenetic inheritance. The researcher ultimately reported in her experimental conclusion that “the findings suggest the possibility of site specificity to environmental influences, as sites in bins 3 and 2 were differently associated with parental trauma and the offspring’s own childhood trauma, respectively.”6 Although her research encompasses only a limited number of subjects and can be criticized ethically, Yehuda has presented

Image of laboratory mouse with her offspring. Researchers Kerry Ressler and Brian Dias studied epigenetic inheritance by training laboratory mice and observing biological effects on the subsequent pups.



Image of children during the Holocaust. Researcher Rachel Yehuda studied both survivors of the Holocaust and their offspring to test the transmission of stress from epigenetic mechanisms.

the scientific community with the radical idea of trans-generational stress inheritance, not just in animals but in people as well. In the end, these mouse and human experiments offer the scientific community crucial data for future experimentation. Even Ressler claims that “[k]nowing how the experiences of parents influence their descendants helps us to understand psychiatric disorders that may have a trans-generational basis, and possibly to design therapeutic strategies.”3 But although these experiments suggest DNA methylation as the source of generational stress inheritance, more research must be done to fully understand the molecular mechanism behind such results. With further studies, we may finally be able to delineate the boundary between environmental influence and biological influence, and potentially develop preventative treatment for psychiatric patients in the future.

“Knowing how the experiences of parents influence their descendants helps us to understand psychiatric disorders that may have a trans-generational basis”

References 1. Callaway, E. (2013, December 01). Fearful memories haunt mouse descendants. Retrieved September 25, 2016 2. Dias, B. G., Maddox, S., Klengel, T., & Ressler, K. J. (2014, December 24). Epigenetic mechanisms underlying learning and the inheritance of learned behaviors. Retrieved October 02, 2016 3. Eastman, Q. (2013, December 02). Mice can inherit learned sensitivity to a smell. Retrieved October 01, 2016 4. Epigenetics: Fundamentals. (2013, July 30). Retrieved October 07, 2016 5. FKBP5 Gene. (n.d.). Retrieved October 29, 2016 6. Kellermann, N. (2015, October 12). Epigenetic transgenerational transmission of Holocaust trauma: A Review. Retrieved October 08, 2016 7. Yehuda, R. (2015, August 12). Holocaust Exposure Induced Intergenerational Effects on FKBP5 Methylation. Retrieved October 01, 2016

Image Sources 1. lab_mice_pups04_5909.jpg




Dr. David Schaffer is a Professor of Chemical and Biomolecular Engineering, Bioengineering, and Neuroscience at the University of California, Berkeley. Professor Schaffer is interested in stem cell bioengineering, gene delivery systems, molecular virology, and their applications to biomedical problems. In this interview, we talk about the role of adeno-associated viruses in gene therapy and discuss its molecular basis and directed evolution approaches. Professor David Schaffer [Source: The Schaffer Lab]

BSJ: How did you first get involved in research in Chemical and Biomolecular Engineering?

DS: Well, I come from a family with a medical background, both on the basic sciences side and the clinical side. I was always interested in problems related to human health. I like molecules and I like thinking about problems quantitatively. So, if you put that all together - math, molecules, application towards healthcare - at the time, it was Chemical and Biomolecular Engineering. These days, I think that this research takes place in both the CBE department and the Bioengineering department, and, reflecting that, I have an appointment in both.



BSJ: What has inspired your interest specifically in gene therapy?

DS: Well, I began to work in gene therapy during graduate school (that was a number of years ago; I probably shouldn’t tell you, as it is going to date me), so I have been working in the field for over 20 years. At that time, there was a lot of excitement: people were talking about sequencing the human genome, and the Human Genome Project was getting underway. The genes that cause haemophilia B, haemophilia A, cystic fibrosis, muscular dystrophy, and Huntington’s disease were getting cloned and sequenced, and it brought up the idea that DNA could be used as a medicine to treat diseases. I thought that this would be revolutionary and got really excited, because up until that point many of

Scheme of AAV integration into the human genome [Source: Genome Engineering Using Adeno-associated Virus 1]

these diseases were simply untreatable. The big challenge that emerged in the field was that, in many situations, you could identify down to the base pair the sequence of DNA that you needed to deliver to treat the disease; however, delivering a sufficient amount of DNA to enough cells was a problem. That’s where we have really set our sights in the past couple of decades.

BSJ: Some of your publications focus on the use of adeno-associated viruses (AAVs) for gene therapy. What has made AAV such a highly promising gene delivery vector?

DS: Several things. One is that it is harmless. All of us have been previously infected with a natural version of this virus and never even noticed, because it doesn’t cause human disease. It is a safe virus, one that is somewhat stealthy as far as the immune system goes - it doesn’t cause major inflammatory responses. A second thing is that it has some level of gene delivery efficiency in a number of different cell types. Compared to other viruses that lack the ability to infect, for example, a muscle cell or a neuron, this virus is more efficient.

BSJ: What is the mechanism by which AAVs infect cells and integrate into the genome?

DS: You can think of an AAV as a ball of protein surrounding a string of DNA.

“We are going to create better gene delivery vehicles and ... translate this towards clinical development”

The surface of that ball of protein is what interacts with the body. For example, let’s say you would like an AAV to go to the liver and deliver a therapeutic piece of DNA. Let’s say you want to deliver a gene encoding factor 9 (a blood clotting protein that’s missing in patients who have haemophilia B). If you inject AAV into the bloodstream, it will penetrate deep into the liver tissue, recognise the surface of a hepatocyte (a liver cell), bind it, get internalized, traffic to the nucleus, and then uncoat. The virus naturally evolved over millions of years and accumulated those properties over time. We are taking advantage of natural evolution - we ironically view this virus as a gift of nature - and trying to improve the efficiency with which it carries DNA inside cells. You asked the final question: how does the DNA make it into the nucleus? Recombinant AAV is a non-integrating virus, so it lacks the ability to insert itself into the genome. As long as the cell is not dividing, the virus will persist for many years. We like this fact, because integration could potentially damage the genome.

BSJ: What’s the role of the helper virus when AAV is used?

DS: You need a helper virus in order to get AAV to replicate. If you are trying to manufacture the


virus in culture to produce it for therapeutic use, you need it to make many copies of the virus. AAV is called an “adeno-associated” virus because it was originally isolated as a contaminant in an adenoviral stock, and it requires the presence of an adenovirus to be able to replicate.

BSJ: So what makes the efficiency of AAVs slightly higher than that of other vectors?

DS: Well, it’s evolution. It is a respiratory virus, so it evolved primarily within the lung. We don’t fully understand the evolutionary history of the virus, but we’re given a particle that has the ability to make it into cells



at a reasonable level. You obviously wouldn’t want to use something like wild-type HIV, for example, as a delivery vehicle. HIV is highly specific to infecting T-cells and macrophages and maybe a couple of other cell types. The evolutionary forces that drove HIV into being the virus it is today are very specific to those cells, and we couldn’t use HIV to infect liver or neurons. But AAV was given to us by nature as something that had a reasonable level of infectivity on a broad range of cells.

BSJ: How can directed evolution be employed to engineer vectors with enhanced properties? When are directed evolution approaches2 more useful than rational design?

DS: The answer to the second question has so far been “always.” This is our contribution to the field; we invented the concept of applying directed evolution to make better gene delivery vehicles, to make better AAVs. The idea is that nature created AAV over tens of millions of years for its own purposes - it’s a relatively successful respiratory virus. If you showed AAV a neuron or a photoreceptor or a muscle stem cell (any number of therapeutically important target cells), most of the time the virus would say: “What the heck is this? I have never been evolutionarily rewarded in my lifetime for the ability to deliver DNA to this cell.” In one sense, evolution has given us this virus as a good starting point, but we need to improve on it and make it much more successful for our applications, because nature never evolved it for our convenience to use as a medicine. But evolution in general is a very powerful engine for creating novel and useful biological function. So, what we’ve been doing is accelerated artificial evolution in the laboratory. Evolution has two steps: create a very large and diverse gene pool, and then select the fittest. We create enormous gene pools, on the order of 100 million viruses, and we select the best ones for their ability to infect a neuron or a photoreceptor or a muscle stem cell, or whatever the target cell is for the particular disease we want to treat. Directed evolution, as we’ve developed it, is a very effective and powerful approach to create highly optimized versions of AAV for gene delivery to any cell or tissue target in the body. People had been doing rational design previously, but the challenge there is that, like I mentioned, this ball of protein has been endowed by nature with the

Co-founded by Professor Schaffer, the Berkeley company 4D Molecular Therapeutics develops and commercializes transformative gene therapeutic products [Source: http://www.4dmoleculartherapeutics]



ability to interact with the bloodstream, endothelium, tissue, cell surface, endosome, cytosol, and nucleus. It’s a really complicated delivery pathway, and if this ball of protein is not very good at getting into that neuron, that’s all we know - it doesn’t make it into the neuron. We don’t know which of these steps along the way is responsible, mechanistically, for the poor delivery efficiency. To enable rational design to work, we would need to know the molecular mechanisms of that full pathway. Rational design requires a lot of information to be able to design something that’s actually going to work. Evolution functions in the near absence of mechanistic information, so it’s much faster and more efficient, and we can always reverse-engineer the final product after the fact and understand mechanistically why it worked, and therefore what the nature of the problem was to begin with. But it’s always nice to be approaching mechanism while having the solution in hand.

BSJ: What are some of the obstacles associated with using AAV technology? For example, how has your group addressed the body’s immune response to the virus or penetration of dense tissues?

DS: Going back to that list of potential barriers again: interaction with components of the bloodstream, interaction with the tissue, getting deep into the tissue, being able to target delivery to the desired cell type, and then very efficiently infecting that target cell. Each one of those steps has been found, in different situations, to be a rate-limiting step. In our very first publication, we dealt with that very first step, which is essentially the fact that all of us have been infected with this virus naturally. We have high concentrations of antibodies, which are our body’s initial natural defence system against viruses. These pre-existing antibodies will neutralize natural versions of the virus, because our body doesn’t know the difference between a natural virus and a therapeutic virus and is going to reject both. As a result, in most clinical trials today, patients who have antibodies against the natural version of AAV used in that trial are excluded from the trial. We have been evolving and engineering new versions of the virus that are resistant to the majority of antibodies in the human population. We’re going to be able to enroll a significantly higher fraction of people in trials, and ultimately a higher fraction of potential patients will benefit from the therapy. Another example, from just the past two weeks: we’ve had papers coming out dealing

with the infection step – that last step where the virus needs to make it very efficiently into the target cell. In one paper, we created a version that’s about 300-fold better on infecting the airway epithelium and lung and, in another paper, a version that’s 100-fold better in infecting neurons. How can AAV-mediated gene therapy be BSJused to treat neurological disorders such as Parkinson’s disease and ALS? Initially, where I think the field has been DSfocused and should be focused in the past 5-10 :


years has been on rare monogenic diseases, where you can point at the gene and the mutation within the gene that's responsible for the disease. And then it becomes a straightforward hypothesis: if this gene is broken and if I deliver enough of the replacement gene, I should fix the issue. That's where the field is focused primarily right now: haemophilia B, haemophilia A, retinitis pigmentosa, Leber congenital amaurosis (LCA), muscular dystrophy… All of these are situations in which a gene is broken and you need to supply a replacement gene. If these begin to work, in other words, if our vectors get good enough, then we could take on riskier disease targets. Only then could we start going after Parkinson's disease or Alzheimer's or congestive heart failure or type II diabetes. We feel that now we should focus on these rare monogenic diseases, where we know exactly where the problem lies, and then build up momentum to take on tougher disease targets.

BSJ: How has the discovery of CRISPR-Cas9 impacted research on AAVs?

DS: CRISPR-Cas9 is an incredibly enabling capability. I will give you a couple of examples. In


situations where a gene is broken (it is a recessive disorder) and the replacement gene is small enough to fit inside the AAV, we probably don't need genome editing, as in haemophilia B and LCA. In situations where a gene has gained a function (an autosomal dominant disease), like Huntington's, your job is to knock out a gene that has acquired, due to a mutation, a pathological function and is causing the disease. CRISPR-Cas9 can then go in and edit the genome to knock out that disease-causing gene. A third category is situations where it is a recessive disease but the replacement gene is too big to fit inside an AAV. Then you can potentially use Cas9 and homologous recombination to fix the genome. Cas9 is incredibly enabling for genetic therapies, but it, like other cargos, needs a delivery vehicle. I think that it is synergistic: if

FALL 2016 | Berkeley Scientific Journal


we end up creating the optimized vehicles and here is this terrific cargo, then a new generation of molecular medicines can be created.

BSJ: You have briefly mentioned this before, but how effective are AAVs in clinical trials?

DS: There is actually an approved gene therapy in Europe that's based upon AAV. In the United


States, there is a treatment for a blinding disorder called Leber congenital amaurosis type II, for which a company has completed a phase 3 clinical trial. There is going to be a BLA (Biologics License Application) to the FDA, which seeks approval to market a drug. That BLA will be filed next year. Hopefully, based on very positive results in the phase 3 trial, that's going to lead to the very first approval of a gene therapy in the United States. Earlier, there have been several clinical trials with positive results for haemophilia B and some diseases within the nervous system, like spinal muscular atrophy. These are situations where the natural versions of the virus are just good enough to start showing efficacy and, in some cases, it is not a complete rescue but a partial rescue, so we feel that if we could create delivery vehicles that are 10- or 100-fold better, then we could start going after tougher disease targets.

BSJ: What future steps do you plan to take in your research?

DS: Well, several things. The university is an incredible incubator for innovative technologies.


We are going to continue to create better vehicles, better ways of making delivery vehicles, and better cargos within our lab here. At the same time, we feel that we should also be translating this toward clinical development. The goal is to get the technology into as many patients, trials, and products as we possibly can. Clinical development takes place within the private sector, so several years ago I co-founded a company at Berkeley called 4D Molecular Therapeutics, and we are taking this technology and getting it into clinical development both within the company as well as in partnership with other companies like Pfizer and Roche Pharma.



BSJ: Thank you very much for your time!






Do you ever feel as though you are stuck in time, or as if there are just never enough hours in a day? Our sense of time is unlike any other sense we experience. It is physically intangible and yet is sustained throughout our lives from the moment we are born, unless a brain disorder presents itself. Unlike our other senses, we do not have an organ that is specifically dedicated to time perception.6 Instead, this elusive awareness is ruled by a complex interplay of variable representations.6 For example, the brain's cerebellum and basal ganglia (BG) are known to be heavily involved in this interplay. The cerebellum is responsible for precise representations of time and temporal reproductions (on the order of milliseconds), as well as for our internal clock, which governs temporal discrimination tasks (separate from our sleep cycle).10, 20 The BG, on the other hand, is linked with our internal pacemaker, which keeps track of time relative to regular rhythmic intervals

(over millisecond-to-second durations).6, 17 The structure contains the greatest number of dopamine (DA) neurons in the brain.16 DA is a neurotransmitter that plays an active role in our reward system.9 However, it also strongly influences time perception by affecting the internal pacemaker: DA agonists (which stimulate its action) slow our perception of time down, while antagonists (which inhibit its action) speed it up.6, 13 Some of the most widely used illicit and non-illicit substances, such as caffeine, alcohol, marijuana, and "magic mushrooms," influence these complex representations and produce significant alterations in our perception of time. With these alterations, our awareness of the world is disturbed, thus tying into the great mystery that is consciousness.
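The pacemaker-accumulator idea above lends itself to a toy numerical sketch (purely illustrative; the function name, rates, and numbers are assumptions, not taken from the cited studies): pulses accumulate at a rate the drug scales, while the brain reads the count out at the baseline rate, so a sped-up pacemaker makes a fixed real interval feel longer.

```python
# Toy pacemaker-accumulator model of time perception (illustrative only).
# A drug that speeds up the internal pacemaker makes a fixed real interval
# "feel" longer (overestimation); slowing the pacemaker does the opposite.

def perceived_seconds(real_seconds, baseline_hz=10.0, drug_rate_factor=1.0):
    """Pulses accumulate at baseline_hz * drug_rate_factor; the brain
    reads out the duration assuming the baseline rate."""
    pulses = real_seconds * baseline_hz * drug_rate_factor
    return pulses / baseline_hz

# A 10-second interval under three hypothetical pacemaker states:
print(perceived_seconds(10))                        # → 10.0 (normal)
print(perceived_seconds(10, drug_rate_factor=1.5))  # → 15.0 (faster clock: time drags)
print(perceived_seconds(10, drug_rate_factor=0.5))  # → 5.0  (slower clock: time flies)
```

The single `drug_rate_factor` knob is, of course, a cartoon of what agonists and antagonists do to the pacemaker, but it captures the direction of the distortions described above.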


Let us begin with the drug that most of us have tried at least once to enhance our attention and help us tackle our busy schedules: caffeine. Although it is rarely treated as such, caffeine is indeed a psychoactive drug, regularly consumed by about 80% of the US population, primarily in the form of coffee.12 A previous study conducted on rats showed that low doses of caffeine may slow down our perception of time, while high doses may have the opposite effect.4 This may be due to how caffeine interacts with adenosine receptors in the central nervous system, which includes the brain and spinal cord. It primarily acts as an adenosine antagonist, which indirectly enhances DA production.19, 21 Since caffeine does not directly act on our DA receptors, it makes sense that its effects on our time perception are inconsistent. McClellan Stine et al. further examined such effects by inquiring about several participants' caffeine consumption and testing their ability to correctly assess specific durations of time.12 They witnessed a similar U-shaped pattern wherein only moderate doses of caffeine



(approximately one 6 oz. cup of coffee) yield accurate time perception. The greatest amount of inaccuracy occurs with larger time intervals.4


While caffeine may be the most popular non-illicit substance, marijuana holds a similar rank in the realm of illicit substances.11 Tetrahydrocannabinol (THC) is the primary psychoactive ingredient in marijuana, and it works by attaching to the brain's cannabinoid receptors.8 Like caffeine, THC indirectly acts upon DA transmission.3 By blocking GABA (our main inhibitory neurotransmitter), THC disinhibits dopaminergic neurons, and DA concentrates in the brain's striatum.3, 15, 18 Just as with high doses of caffeine, infrequent marijuana users tend to estimate time as passing more quickly than it actually has, but they also experience a slowing down of time when asked to produce time intervals between 2 seconds and 3 minutes.11, 14, 18 This means that THC causes an overestimation of external time while our internal time seems to be passing more


"In contrast with caffeine and marijuana, frequent alcohol usage does not dampen its impact on our perception of time."

slowly, which is likely due to an increase in the speed of our cerebellum’s internal clock.11, 14, 18 These results are in tune with the inaccurate time perceptions produced by caffeine at longer intervals.


In the same manner as caffeine and THC, alcohol (ethanol) also indirectly yields an increase in DA activity.1, 3 It primarily does so by directly affecting the GABA system (like THC), whose neurons then extend to our reward pathway and stimulate DA release.1 However, the pleasure we get from alcohol likely stems from released endorphins (our feel-good opioid hormones) rather than from DA.1 In contrast with caffeine and marijuana, frequent alcohol usage does not dampen its impact on our perception of time.7 Alcohol dependency is linked with impulsive behavior, which in turn is linked with a faster internal clock.2 As with infrequent THC consumption, this fast-paced cerebellar internal clock results in an overestimation of time intervals.7 A study conducted by Lapp et al. explored how healthy men's expectations of their alcohol consumption altered their subjective sense of time, and found that the subjects perceived time as passing more quickly in order to compensate for the presumed effects of alcohol.7 As with all aforementioned studies, this effect mostly occurred with longer intervals of time.7


Thus far, we have only looked at how dopaminergic substances influence how we perceive time—but increasing evidence has shown that drugs working within the serotonergic system (involves the neurotransmitter serotonin, which influences our mood) also alter our sense


Figure A: Caffeine, THC, Ethanol, and Psilocin molecules, respectively

“Now, in addition to the cerebellum and BG, we can see that the PFC also plays a role in our time perception.”

of time; "magic mushrooms" are one example. These fungi contain psilocybin, an inactive precursor to psilocin, the culprit behind their hallucinogenic effects.5 In contrast with the aforementioned time dilation effects, psilocin seems to slow time down when binding to serotonin receptors.5 This serotonergic activity is linked with the brain's prefrontal cortex (PFC).22 Now, in addition to the cerebellum and BG, we can see that the PFC also plays a role in our time perception.22 Wittmann et al. explored how various doses of psilocybin impacted how healthy college students perceive time.22 For time durations longer than 2-3 seconds, the subjects experienced time as passing more slowly than it truly was; with shorter time durations, however, their sense of time was accurate.22 This was true of both temporal reproduction and synchronization tasks.22 These results show us that even serotonergic substances only truly impact our time perception when it comes to protracted durations.


Taking all of these findings into account, we can see that these substances all influence our time perception for longer durations. Such alterations could lead to harmful limitations in anticipatory planning.22 It is crucial for us to perceive longer durations of time accurately, as this sort of temporal processing is involved in several daily tasks, including driving, using tools, and anything that relies on our working memory.18 It is interesting to note that although these substances vary in terms of their legality, they all may produce disadvantageous effects, often through similar neural systems. Although these effects are reduced with regular caffeine and marijuana use, regular consumption may cause other unwanted side effects that obstruct our engagement with our environment. Despite being regarded

as pleasurable (often owing to DA transmission), caffeine, THC, alcohol, and psilocybin all negatively impact our conscious awareness of the world.

REFERENCES

1. Alcohol and dopamine. (2012).
2. Cangemi, S., Giorgi, I., Bonfiglio, N. S., Renati, R., & Vittadini, G. (2010). 32,
3. Dubuc, B. (n.d.). The Brain from Top to Bottom [Scholarly project]. In The Brain.
4. Fry, T. (2014). Caffeine and Human Perception of Time.
5. How Psilocybin Works: Addition by Subtraction. Psychedelic Frontier. (2013, May 15).
6. Ivry, R. B., & Spencer, R. M. (2004).
7. Lapp, W. M., (1994). 55(1), 96-112.
8. Learn About Marijuana: Factsheets: Cannabinoids. (2013, June).
9. "Marijuana and Dopamine: The Science Behind It." Leaf Science. N.p., 10 May 2014. Web. 3 Nov. 2016.
10. Mastin, Luke. "Biopsychology." Exactly What Is Time. N.p., 2016. Web.
11. Mathew, R. J., (1998). Cerebellar activity and disturbed time sense after THC. Brain Research, 797(2), 183-189.
12. McClellan Stine, (2002). Evidence for a relationship between daily caffeine consumption and accuracy of time estimation. Human Psychopharmacology: Clinical and Experimental, 17(7), 361-367.
13. Meck, W. H. (2005). Neuropsychology of timing and time perception. Brain and Cognition, 58(1), 1-8.
14. Ogden, R., & Montgomery, C. (2012). High time. PSYCHOLOGIST, 25(8), 590-592.
15. Oleson, E. B., & Cheer, J. F. (2012). A brain on cannabinoids: the role of dopamine release in reward seeking. Cold Spring Harbor Perspectives in Medicine, 2(8), a012229.
16. Perez-Costas, E., Melendez-Ferro, M., & Roberts, R. C. (2010). 113(2), 287–302.
17. Rhythm and the Perception of Time. (2011, March 10). Retrieved November 4, 2016.
18. Sewell, R. A., Schnakenberg, A., Elander, J., Radhakrishnan, R., Williams, A., Skosnik, P. D., … D'Souza, D. C. (2013). 226(2), 401–413.
19. Solinas, M., 22(15), 6321-6324.
20. Teixeira, S., (2013). 12(5), 567-582.
21. Volkow, N. D., 5(4), e549.
22. Wittmann, M., (2007). 14(2), 225-232.






You have probably heard this story before, but you probably don't know how the story evolved. In 1928, a messy scientist forgot to check his bacteria cultures for a few days and came back to find a mold growing in them. The key discovery was that no bacteria were growing around the mold. That scientist, Alexander Fleming, then worked with two other scientists to develop a drug from the mold to inhibit and kill bacterial growth. The finding was so invaluable that the three scientists won the Nobel Prize in Medicine in 1945. That drug is penicillin. Penicillin, a group of antibiotics, revolutionized the world. Antibiotics were the first effective drugs used to treat infections caused by bacteria, and they immediately played a vital role in saving millions of people around the world. People who would have died from a fever or something as small as an infected animal bite survived. Today, however, the negative


effects of antibiotic use, or shall we say misuse, grow clearer and clearer. When antibiotics are overconsumed, they will not be effective when needed; this is known as antibiotic resistance. Thus, with improper antibiotic usage over time, the opinion of antibiotics and antibiotic research has shifted dramatically. In September, the United Nations held a meeting declaring antibiotic resistance "the greatest and most urgent global risk." This is only the fourth time the United Nations has held a meeting regarding a health issue; the last time was in 2014, regarding Ebola. The significance of the antibiotic problem is so great that Margaret Chan, the World Health Organization's Director-General, asserted at the conference that antibiotic resistance "poses a fundamental threat to human health, development, and security." If world leaders are now scrambling to control this major problem, how did such a miracle


medicine go so wrong? When antibiotics first rolled out in the 1940s, the effects were immediate. Within two decades, life expectancy increased by over 7%, approximately eight years1. As expected, the increase in life expectancy was followed by a great population boom. In America alone, the population grew by 33% in the 1940s, according to the United States Census. This shows that antibiotics have an overwhelmingly positive value in improving the quality of life. However, this clear benefit also led to a curse. To support growing populations, antibiotics began to be used in the food industry, which began the path to antibiotic abuse. Logically speaking, a larger population requires more food to support it. Thus, to meet the growing society's demand for meat, research on antibiotics' effects on animals began to develop. Soon

"Poses a fundamental threat to human health, development, and security"

enough, antibiotics were discovered to increase the life span and weight of first rats and hamsters, then farm animals such as cows and chickens5. Antibiotics were found to help animals become healthier, live longer, and grow faster. In the short term, this was a blessing for the meat industry: more animals could be raised in the same space, faster. This resulted in incredibly low meat prices, something we still have today. Meanwhile, parallel research into the threat of antibiotic resistance grew. The first report of resistance to antibiotics came in

the 1950s in Japan, when antibiotics were used to treat a woman with a diarrheal disease caused by bacteria, to no effect2. The first defined strain of antibiotic resistance was methicillin-resistant Staphylococcus aureus, commonly known as MRSA. Interestingly enough, this risk was predicted from the very beginning. The discoverer of antibiotics, Fleming himself, had warned, "The time may come when penicillin can be bought by anyone in the shops. Then there is the danger that the ignorant man may easily underdose himself." He said this in his Nobel Prize acceptance speech in 1945, when the world listened. Like most advice

Colonies of bacteria are grown on an agar plate. Only colonies of bacteria that are resistant to the antibiotics on the plate grow while the rest were killed off. This principle is very important in biological research.

and warnings, little action was taken. From the very beginning, Fleming had pinpointed the danger in antibiotics: underdosing. Antibiotics kill both the good bacteria that protect the body and the disease-causing bacteria. If antibiotics are consumed at low doses, some of the drug-resistant bacteria will remain, thrive, and take over. Even if the disease-causing bacteria seem suppressed at first, the surviving resistant strains can render future antibiotic use fruitless. Therefore, if antibiotics are misused, one needs a higher dose or a new class of antibiotics every time a bacterial infection occurs. And that is just the "easy" part, because the resistant bacteria will also grow stronger and stronger. In a perfect world, we might be able to consume however much antibiotic is needed, or to create ever-stronger antibiotics to combat all the bacteria and their future evolutions. Realistically, however, the human body is not strong enough, medicine is not advanced enough to achieve this, and the process would be too inefficient. Thus far, the largest misuse of antibiotics remains in the food industry. Livestock are fed antibiotics in the hope that they get fatter and grow faster. As consumers, we also ingest the antibiotics that were fed to these animals, indirectly building up our own tolerance. In 1969, the Swann report was published, raising awareness of the dangers of antibiotic use in the food industry for growth promotion4. Only in 2006 did the European Union ban the use of antibiotics to fatten up livestock. By 2013, 46



the entire world. Many people argue that it marks the beginning of "modern medicine." After decades of improper usage, opinions of antibiotics became more gray as people became more aware of the negative effects. Currently, much more research needs to be done to better understand ways to treat bacterial infections, and nations still need legislation to prevent antibiotic misuse. Looking back, antibiotics are similar to almost any cutting-edge technology, such as cell phones: while it changes all of our lives and it is impossible to imagine life without, it comes at a cost.

References

Poultry is most commonly raised on large amounts of antibiotics. This allows producers to raise more chickens in the same confined space; additionally, the chickens are larger in size and grow significantly faster.

Compared to 30 years ago, it is much harder to get an antibiotic prescription.

different countries had passed regulations controlling the use of antibiotics in meat production. Considering the current food industry, unfortunately, it is significantly easier to regulate the use of antibiotics than to ban them completely. The primary problem is cost: if antibiotic use were eliminated, animal product costs would increase dramatically, hurting the rest of the population3. Since antibiotic resistance affects the entire world and is a major public health issue, doctors around the world are being educated about it. Compared to 30 years ago, it is a lot harder to get an antibiotic prescription from the doctor today, which shows how our society is adapting and improving. Doctors are warier of the other effects of the medications they prescribe, and patients take the extra step to do more research. We, as a society, are always moving forward and adapting. Even though the view of antibiotics is no longer as black and white as before, antibiotics still play a significant role in medicine. At its inception, antibiotics quickly shook









1. Kinsella, K. G. Changes in life expectancy 1900-1990. Am J Clin Nutr. 1992 Jun;55(6 Suppl):1196S-1202S. PMID: 1590256.
2. Levy, S. B. Microbial resistance to antibiotics: an evolving and persistent problem. Lancet. 1982 Jul 10;2(8289):83-8. PMID: 6123819.
3. Schwarz, S., Kehrenberg, C., & Walsh, T. (2001). Use of antimicrobial agents in veterinary medicine and food animal production. International Journal of Antimicrobial Agents, 17(6), 431-437. doi:10.1016/s0924-8579(01)00297-7
4. Soulsby, L. Antimicrobials and animal health: a fascinating nexus. J Antimicrob Chemother. 2007 Aug;60 Suppl 1:i77-8. Erratum in: J Antimicrob Chemother. 2007 Nov;60(5):1184. PMID: 17656389.
5. Sperling, G. A., Loosli, J. K., Lupien, P., & McCay, C. M. Effect of sulfamerazine and exercise on life span of rats and hamsters. Gerontology. 1978;24(3):220-4. PMID: 620944.


Dr. Richard Muller is a Professor of Physics at the University of California, Berkeley. His many interests include particle physics, geophysics, and astrophysics. He has studied the extinction of the dinosaurs, the effect of the planets on ice ages, the beginning and end of the universe, and the nature of time. He teaches the class Physics for Future Presidents.

BSJ: How did you get involved in physics, specifically astrophysics?

RM: My interest in physics dated back to high school; I loved all sciences, and I found biology to be too

difficult and I didn't really get excited by chemistry; all the things I loved in science turned out to be physics... Now, I consider myself to be an engineer as well as a physicist, but back then, physics looked like the field that held the answers to the most interesting questions.

BSJ: You've done a lot of research in glacial cycles. If you were first a physicist, how did you come to correlate the two?

RM: In a very backwards way. It came about because my closest associate, my mentor, Louie Alvarez,

had gotten involved in geology through his son Walter. They had addressed the question of "What killed the dinosaurs?" I got involved in that too to some extent; I wrote some papers on the subject, and at the end of their work, my thought was, "What a surprise this was, that something

from space, astronomy, could cause such a big impact on things on the Earth. What else could there be?" I started thinking about the Ice Ages: could those be caused by astronomy, maybe by the impacts of asteroids and comets? I looked into it and discovered there was an astronomical theory that explained the cycles of the Ice Ages in terms of changes in the planetary positions. It was a widely accepted theory, but as I read into it, I realized this theory must be wrong. Typically when I read something new, if it makes a lot of sense, that's great; but when I looked at it critically, rather than accepting authority, as scientists are taught, I found severe flaws in the theory. That's when I started playing with the theory, trying some alternatives, finding an approach that worked much better than the standard theory, and publishing papers on that. That's


what led me into geomagnetic reversals. Louie Alvarez died in 1988 after he had done this fantastic work; he had a Nobel prize for working in elementary particles in physics, but I think the piece of work that he will be most remembered for will be the discovery of the cause of the extinction of the dinosaurs.

BSJ: You heavily studied glacial cycles that have shifted from 21,000-year cycles to 100,000-year cycles; what are the environmental implications of this shift?

RM: We know that for the last million years we've had an Ice Age every

hundred thousand years; an Ice Age typically lasts 80-90,000 years, so we've mostly been in an Ice Age. For the last 15,000 years we've been in an interglacial, the warm period in between, and that's when all of civilization developed, not surprisingly. These short periods of good weather are when we developed civilization. There

was no civilization before 15,000 years ago. It warmed up suddenly, we developed farming, and farming meant people could create more food; not everybody had to be working all the time, which meant they could have physics professors and things which would normally be considered a waste of time because they didn't produce food. And a new Ice Age is due any millennium now. That got me into global warming.

BSJ: You've argued that the phase stability of glacial cycles has no relation to quantum mechanics, so it has to be due to astronomical forces.

RM: It has such a regular 100,000-year cycle; the only thing in our

environment that has that kind of regularity is astronomy. It's not perfectly regular, but extremely regular: no other mechanism has ever been proposed, other than astronomical, that could lead to such regularity. But I would say that the

astronomical cause of the previous ice ages is firmly established. Exactly how astronomy does that is still not on firm ground in my mind. A lot of people say it is, but their theory, the Milankovitch theory, is demonstrably wrong. Some people in this field say, "OK, well, what's your alternative?" Well, none of the alternatives work either, and they say, "Therefore we accept the Milankovitch theory." Well, that's crazy; if something is wrong, it is wrong. We do know it's related to cycles of the Earth, but how, we don't fully understand.
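Muller's regularity argument is essentially spectral: a strictly periodic driver leaves a sharp peak in a power spectrum. A minimal sketch (synthetic data with an assumed 100 kyr sine plus noise, not the real climate record) shows how such a peak is recovered from a noisy time series:

```python
import numpy as np

# Synthetic "climate" record: a 100,000-year cycle buried in noise
# (illustrative only, not real proxy data). A Fourier periodogram
# recovers the dominant period.

rng = np.random.default_rng(0)
t = np.arange(0, 1_000_000, 1000)          # 1 Myr, sampled every 1 kyr
signal = np.sin(2 * np.pi * t / 100_000) + rng.normal(0, 0.5, t.size)

power = np.abs(np.fft.rfft(signal)) ** 2   # periodogram
freqs = np.fft.rfftfreq(t.size, d=1000)    # cycles per year

peak = freqs[1:][np.argmax(power[1:])]     # skip the zero-frequency term
print(f"dominant period ≈ {1 / peak:,.0f} years")  # expect ~100,000 years
```

Random wiggles spread their power across all frequencies, but a clock-like astronomical forcing concentrates its power in one bin, which is why the regularity points to astronomy rather than, say, internal climate noise.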

BSJ: Explain how you researched cycles of fossil diversity and related glacial activity with fossil diversity, theorizing that there are periodic passages of our solar system through the Milky Way every 62 million years which lead to the extinction of species on Earth.

RM: The 62-million-year cycle in my mind is not explained. There have been several proposed explanations, including passing through the Milky Way... We postulate that there might be a dust region, but that theory doesn't really work either…

BSJ: Recently you published the book 'Now: The Physics of Time.' Did any of your prior research lead you to study the creation of time, or was it purely personal interest?

RM: I had been involved in two major projects studying the big bang.

Muller is the director of the Berkeley Earth Surface Temperature project, which he founded alongside his daughter, Elizabeth Muller.



The first one was a study of the microwaves of the big bang, in which I had proposed we measure the microwaves from different directions. I had a much more sensitive way of doing it than anybody had done before, and I thought we would be able to see that variation; in fact we did, and thanks to that experiment I became

a professor at Berkeley, and my student who was working on that experiment got a Nobel Prize. That project was very successful and had to do with what was going on at the very early moments of the big bang: what can we tell by looking at these microwaves? There are a bunch of interesting theories there, and they all relate to the big bang being a unique event in which space and possibly time were created... My best guess, pure speculation, is that time did not exist before the big bang, that space did not exist before the big bang, that matter did not exist before the big bang, and that there was literally nothing, not even empty space. My next experiment had to do with whether the universe would expand forever. Having worked on the big bang, I wanted to look at the other end. I worked on that project for 15 years; my student took over, and he won a Nobel Prize. That had to do with the end of time. I had told my class that, with Saul working on that project, I believed that within three or four years we would know the answer to whether time would go on forever or whether we would eventually stop with a big crunch. Two years later, I told my class: I can tell you now, time is going to go on forever. The discovery made by Saul and the people working in that group was that the universe would expand forever. So yes, my own research led me into two projects that led to a study of the two most interesting aspects of time: the very, very beginning and the possible end.

BSJ: You theorize that time is expanding because space is expanding, and therefore time travel is not possible because the future doesn't exist yet. Are there proactive ways to experimentally test this, or do you have to wait for certain events, such as black holes colliding?


RM: If you look at the previous theories of time, and I've looked at them all, the field has a very bad history of failure to propose tests. Of all the theories I've looked at, nobody has ever proposed a test. This goes back to 1928, when Eddington said that the arrow of time depends on entropy, and he didn't propose a test. All sorts of other people have adopted his theory and elaborated on it; nobody has ever proposed a test. I actually came up with this idea while writing the book. When I came up with the idea that time was due to the creation of new space, I knew I had to find a test or else it wasn't up to my standards of science – standards that I wish everybody else shared, but not everybody does these days. I came up with two tests. One is the study of gravity waves that would be produced by the big bang, which would indicate whether or not new time was being created. The second was that as the universe expands, the dark energy that accelerates that expansion should also accelerate time. The trouble with those tests is that I couldn't think of any way we would be able to do those measurements within my own lifetime, which is not very satisfying. After the book was finished, along came this discovery at LIGO of two black holes colliding. Suddenly I got very excited, because based on my understanding of general relativity, I realized that this might actually be a test. I quickly worked out the numbers, and it turned out that it was testable. So I looked at the LIGO experiment and I found they just barely didn't have enough accuracy. So I got a hold of the LIGO data and I reanalyzed

it – if I could only squeeze out a little more accuracy out of their data, then I could see if my theory is right or wrong. And I could not; it turned out that the analysis had been done in the best possible way. I'm very good at data analysis – it's one of the skills I have; I know how to take data and analyze it in a way that is optimal – and to my disappointment they had already done that. So that experiment barely misses being able to test the theory. But I realized that they turned on and within six months they saw this event. Odds are they'll see another event in the next year or two that will be just as big, and if it's closer – pretty good odds that'll happen – then we'll be able to test the theory, because the signal will be stronger. So when you say, "Is there something we can do?" – yeah, we can watch what happens at LIGO. I've talked to all the people on LIGO; they're going to look for this, and I expect to get a phone call one day, and it'll probably go something like this: "Hi Rich! Guess what? We have a new event and it's 10 times stronger than the old one!" And I'll say, "Wow, that's just what I wanted! So, what about my theory?" Then they'll say, "The bad news is that your theory is wrong." And I'll go, "Oh darn!" – or maybe I'll use a stronger word. But I will be proud that at least I had a theory that is falsifiable! And nobody else has done that! Now people will argue against me; they'll say, "Well, I think your theory is this." How do you test it? You can't. Gah, that's not a theory! We should not accept theories in science that are not testable, just because they feel good. Or maybe they'll say, "Well, you were exactly right." And then what an accomplishment that would be – to come up with a theory of time, brand new, one of the most fundamental things in human experience, and with this theory make a prediction that turns out to be right. Wow, that'll be a great achievement! And I'm optimistic that in the next few years I'll find out one way or the other; I'll see which way it turns out to be.

"It has such a regular 100,000 year cycle; the only thing in our environment that has that kind of regularity is astronomy"

Dr. Muller discusses his new book "Now: The Physics of Time," which details his theory on the flow of time [Source: UC Berkeley News]

BSJ: In regards to the experimental methods, how has the measurement of cosmic microwave uniformity and the discovery of dark energy facilitated your new theory and understanding of time?

RM: Well, it hasn't facilitated it yet. What it did was allow me two predictions, neither of which I know how to test. The dark energy, for example, says that the universe is not only expanding, but it's expanding faster and faster. In my theory, if the universe is expanding faster and faster, then time is accelerating too. So can we tell if time in the past was going slower than it is now? We can do this by looking at distant galaxies and seeing how rapidly things are happening. For example, spectral lines come from oscillating atoms; you look at a line, and it should have a lower frequency – it should be redshifted. So we could observe it that way. The problem is that it is redshifted, and we attribute that to the fact that the galaxy is moving away from us. Well, maybe it's not moving away from us as fast; maybe part of that redshift is actually due to this new effect. How do you separate those two ideas? And right now I don't have any good way of



separating it. So that test is a good test in principle, but not one that I know how to implement until we find an independent way to measure the recession velocity. I've thought of some ways to measure that, but none of them are practical. Maybe twenty years from now they will be. The other is the Big Bang: we believe (it's not yet definitively proven) that during the early Big Bang there was a thing called inflation, when the universe was expanding much, much faster than now. If that's true, then there should have been a great deal of acceleration of time during that period. If that's the case, then the calculations people have done for gravity wave emission during that time were wrong, and we have to redo those calculations taking into account the acceleration of time. I haven't done those calculations yet. I don't know whether I'll do it or someone else will, but it needs to be done. Then there is some hope that the gravity waves from that inflation era will be observed with current experiments. There was a report that they had been observed a few years ago, and that report turned out to be false. They had to retract their observations a year later, because what they had been seeing was background interference from a layer of dust. So they had not seen it. But with better experiments we can see it, and we can better compare the two theories to see which one turns out to be right. But the cleanest, simplest test is this LIGO test with two colliding black holes. That's what I'm most excited about. But I emphasize: theories are worthless in my mind if they don't make predictions that allow you to prove them right or wrong. Usually they're proven wrong. If you survive a lot of tests where people are trying to prove you wrong and they fail, then the theory becomes part of our understanding.

BSJ: If your idea is that space and time are so intertwined that one cannot create one without the other, why is it that we can go back in space but we can't go back in time?

RM: You can move in any direction in space, yet time moves forward in a way that affects our lives and over which we have no control. That really is the question – why is that? So in addressing

that question, I have to address another principle of physics: the principle of causality – the idea that one thing causes another. When I decide I am going to drop this pen on the table, it hits the table, but I have to drop it; I had free will. I could decide to do that or I could decide not to. Now causality in physics is a separate principle from relativity. It's something that stands outside the other laws – conservation of energy and so on; we have many laws in physics – but causality is yet another law. Some people deny it. Some people say I had no choice but to drop it the first time and not the second time, and the reason is that all these molecules are hitting me and I am just responding to the past; I have no choice over what I do. I cannot do anything except what the past gets me to do. I remember when I named my daughter Elizabeth – I learned years later that it was the most popular name of the year. So maybe I don't really have free will. It turns out that causality – which claims that the past determines the future – is no longer a law of physics. That has been an innovation of the 20th century, and it's often not stated that way; it is something I spent a lot of time in the book explaining, because it is absolutely true and yet not widely appreciated. The old philosophers like Schopenhauer and Nietzsche argued that free will is only an illusion, and that is repeated today by scientists who should know better – by major physicists who should know better; they repeat the same thing! When Schopenhauer said that, people widely believed that physics was completely deterministic, that the past completely determines the future. We know now that is not the case – at least, that is not the current theory of quantum physics. Quantum physics says that identical things will explode at different times, even though they are identical.
That means the past doesn't determine the future. That is a substantial part of quantum physics as it is today. So given that, the argument that so many otherwise smart people have made – that logically, we know that we don't have free will... Dawkins has made a career out of this, Richard Dawkins; he writes wonderful books, but then he writes this nonsense as if he doesn't understand quantum physics, and maybe he doesn't. But it's nonsense. The argument against free will is just not scientifically valid, and it is based on assumptions that we know aren't true. So here's a way of thinking about the answer to your question. We do exist in time; we existed in the past. The past has all been determined, and you can sort of go back in time – it's called memory – but what you can't do is change things back in time. You can't go back and say, "Oh, I wish I hadn't said that," and not say it, because all that has been determined. The only time when things are not determined is when we get to exercise our free will. What I am saying, in a sense, is drifting away from physics. The physics is the causality, but the question I am addressing is: why is the moment now – the title of my book – why is "now" so important to us as humans? And the answer is: because it is the only time we can exercise our free will. So here I am drifting away; some people wish I had just stayed with physics in the book. And yet, I think, in opening up this question of free will for the first time in over a hundred years, there are deep philosophical and religious implications – that we can make decisions that are not based on the

Gravitational waves may be able to provide new perspectives on the concept of time



history of what came before. We can base them on something nonphysical – what I like to term "empathy" – that we care about other people. There's nothing in the past that makes us care about other people. Dawkins would say, oh, we don't really have a choice, it's our genes that are telling us – I really like Richard Dawkins; in his book "The Selfish Gene" he argues that everything we do, even the things that look altruistic, is being done because we are saving someone who shares the genes we have, and we want the genes to survive. Okay, that's a nice theory; he never proposes a test for it, of course, and we are all supposed to be persuaded by the fact that it sounds so plausible – but it doesn't sound plausible to me. He proclaims that atheism is self-evident and logical, which is another one of these things where I go, "Where did you get that?" He just comes out and makes it up and says, "This is science." No, this is not science! So the reason we can't go back in time – well, we can go back in time, it's called memory – but what we can't do is change things in the past, because we cannot exercise our free will in the past. We can only exercise our free will right now.

BSJ: To you, what is the greatest importance of understanding the creation of time, and why did you choose to direct your study toward it?

RM: I don't think we choose in science what to direct our study towards. I think we choose things because they are fascinating, because they are of fundamental importance to our own understanding of ourselves. Why did Luis Alvarez, the physicist, decide to work in geology for the first time in his life? Because the question of why the dinosaurs died, he felt, was a really fundamental issue. And it is. It changed our view of evolution. We used to think, before Alvarez, that evolution was simply survival of the fittest – creatures competing with each other, and that's it, and may the best creature win. With his discovery, we now know we're not just fighting each other; we're fighting for survival against catastrophe. And so there is great advantage to flexibility, to being able to be more complex, to being able to survive in new circumstances.


It gives a whole different meaning to what we are, why we are here, why we're surviving, how we will survive in the future. So I think ultimately what drives science and scientists is to get a better understanding of the reality and meaning of life. And there's nothing more fundamental to life than time. And I thought that I knew enough about time, enough about relativity theory, enough about what other people were saying, that I could contribute. The key in all of the work I've done, every contribution I've made, has always been looking at something very closely and saying: wait a minute, this does not make sense; there is something wrong here. Actually, everything I've done that turned out to be important started with my looking at what came first and saying, "No, this doesn't make sense. There's something fundamentally mistaken in the prior work." And that's where I was when I started recently thinking about time. There were things that had bothered me for years, and I thought, I'm just going to write a book about that. And as I wrote the book, I started thinking really fundamentally, putting together the arguments for why the old picture of time was no good. I got much more depth, and I started thinking of things from cosmology that also showed the old theory was wrong. And I put all this stuff together – bringing in quantum physics, putting in black hole evaporation. As I put all these things together, I began to see the larger picture – that it did make sense if time was created at the Big Bang along with space, and all these other things fit together with that insight. When I started writing the book, I didn't claim to have any powerful conclusions. And it wound up that what I really needed to do was to put together everything that I knew, to address all of the quandaries, to figure out where everybody else was wrong. And then I realized there was something that explained everything. That's how the theory came about.

BSJ: What are the future directions of your research and of the field?



RM: Well, I would like to improve the theory. Right now, I'll give an example: when Einstein first predicted that light was deflected by the sun, he did this in a paper in which he calculated, using the equivalence principle, that light should be deflected by the sun. A couple of years later, he worked out a complete theory with the complete equations. These equations showed that indeed light was deflected by the sun, producing a deflection twice as large as the original calculation showed. So I feel that we're at the stage of that original theory. We have come up with the fact that when space is created, time is created, but I haven't yet modified the equations of relativity to take that into account and come up with a full theory. That's something I would like to do. I'd also like to come up with other predictions that could be tested in other realms; ideally it would be a laboratory experiment. When Einstein did his original work on relativity theory, one of the things he predicted was that gravity would cause the frequency of a light beam to change. And he didn't know of anywhere that could be tested. But a few decades later, there were two ways it was tested, and both verified it. One was by looking at light coming from a white dwarf star, which has such intense gravity that the frequency of the atoms was actually changed – time being slowed down by the intense gravity – and that could actually be observed. The other was a laboratory experiment in which Pound and Rebka took gamma ray photons at the top of a tower, had them fall down, and measured their frequency very precisely when they reached the ground; they could see the Einstein prediction clearly too. So there may be things lurking that could be done; I'd like to find more of those.

FOURIER TRANSFORM INFRARED ANALYSIS OF SURFACE ION TRAPS

Abstract: This study examines gold and copper-aluminum surfaces in air and vacuum via Fourier-transform infrared spectroscopy to learn more about the causes of electric surface noise in ion traps. The FTIR spectra show traces of contamination, including unique fingerprints in the range of 4000 to 600 cm-1. We study how the spectra change when the surfaces are exposed to controlled hydrocarbon contamination. We also study whether procedures commonly used in ion trapping, such as baking for establishing ultra-high vacuum and exposure to blue and ultraviolet radiation, alter the surface. Spectra of individual surfaces are found to differ more from each other than they do under these procedures. This work is based on the first author's honors thesis.



Quantum computing has exciting applications ranging from clever solutions for optimization problems to superior implementations of existing algorithms.[1] It should enable cracking of traditional cryptography, while likewise protecting information. One of the most promising ways to create a universal quantum computer is using trapped ions.[1] While quantum computing with trapped ions is well advanced, surface noise prevents progress towards smaller and more compact systems which would allow for faster and more flexible quantum computing architectures.[2, 3] Ion “qubits” are controlled by manipulating electronic and motional states of ions with lasers. Unfortunately, electric field noise limits the coherence of multi-qubit operations by destroying the quantum information in the quantum bus connecting the respective qubits.[2] After eliminating technical noise, the remaining noise is too large to be explained by Johnson noise and unlikely to be a systematic or a measurement error. Instead, the noise has been reduced by cooling the electrodes to cryogenic temperatures, by increasing the distance of the ions from the surfaces, and by surface treatment with energetic Argon ions. Concentrated laser radiation has been reported to both decrease and increase heating rates. [4-10] In particular, surface treatment experiments suggest that surface contaminants play an important role in the observed

noise.[6] Indeed, Auger electron spectroscopy shows that carbon and oxygen are present before treatment and are removed by the energetic Argon ions.[7, 8] However, the experiments in Ref. [8] also report that subsequent re-contamination from molecules present in Ultra-High Vacuum (UHV) does not make the noise reappear. Carbon contamination may be either in the form of atomic carbon or of hydrocarbons deposited on the electrodes at some point during the fabrication, assembly, or baking procedures. The studies in this paper aim at understanding the noise mechanisms to help plan future experiments. They also apply to numerous other fields studying electric noise near surfaces, including engineering nanoelectronics and superconducting electronics, detection of Casimir forces in quantum field theory, tests of general relativity, and hybrid quantum devices.[11-15]

Organic materials present in air tend to adsorb onto metal and form nanostructures to reduce the free energy between surfaces and the environment.[16] A common model for these contaminants is based on Self-Assembled Monolayers (SAMs).[17] These molecular structures are formed from exposure to fluids and gases with organic content and organize spontaneously into crystalline-like adatom structures. They commonly appear on gold and copper surfaces, both of which are common trap electrode materials.[18-27] Many SAMs are "alkanethiols,"



characterized by single-bond hydrocarbon chains with sulfur heads.[28] They can be removed by plasma cleaning.[29] These observations make them a candidate for the contamination leading to the elevated electric field noise observed in ion traps. This study investigates whether or not certain ion trap procedures can change the structure of organic surface contamination.
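Whether a procedure "changes" the contamination ultimately comes down to whether the before/after spectra differ by more than the scatter of repeated scans. One possible way to quantify that decision is sketched below with synthetic data; the threshold factor and the Gaussian test spectra are illustrative assumptions, not the analysis actually performed in this study.

```python
import numpy as np

def rms_change(spec_a, spec_b):
    """Root-mean-square difference between two baseline-matched spectra."""
    a = np.asarray(spec_a, dtype=float)
    b = np.asarray(spec_b, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def procedure_had_effect(before, after, repeats, factor=3.0):
    """Flag a surface change only if before/after spectra differ by more
    than `factor` times the typical scatter of back-to-back repeat scans."""
    scatter = np.mean([rms_change(r1, r2)
                       for r1, r2 in zip(repeats[:-1], repeats[1:])])
    return rms_change(before, after) > factor * scatter

# Synthetic spectra: one broad absorption feature plus scan-to-scan noise.
rng = np.random.default_rng(0)
k = np.linspace(600, 4000, 1000)                   # wavenumber axis, cm^-1
base = np.exp(-((k - 2900) / 40.0) ** 2)
repeats = [base + rng.normal(0, 0.01, k.size) for _ in range(5)]
before = base + rng.normal(0, 0.01, k.size)
after = base + rng.normal(0, 0.01, k.size)          # e.g. a post-baking scan

print(procedure_had_effect(before, after, repeats))  # prints False
```

The design choice is to calibrate the decision threshold from the instrument's own repeatability rather than an absolute number, which matches how the comparisons in this study are framed.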


Auger spectroscopy identifies carbon and oxygen as the main contaminants.[7, 8] However, Auger spectroscopy cannot detect hydrogen, nor is it very sensitive to the chemical structure of the surface. Hence, it is unknown in which form carbon and oxygen are bound to the electrode surface. One method of characterizing the chemical composition is Fourier Transform Infrared Spectroscopy (FTIR). Molecular bonds vibrate at specific frequencies, which correspond to particular vibrational energies, typically in the infrared (IR) regime. Analyzing an absorption spectrum of a sample excited by IR light allows one to identify specific bonds and functional groups of the adsorbed molecules and thereby construct a description of the molecular structure of the sample.[30] The primary tool of our experiment is a Fourier Transform Infrared Spectrometer (Bruker Tensor 27 FTIR). It produces a spectrally broad IR beam, spectrally filtered by a scanning interferometer. Outside the FTIR, the beam reflects off of a plane mirror, followed by a parabolic mirror for focusing (see Fig. 1). It then passes through an automated wire-grid polarizer, which sets the S or P linear polarization of the beam, before arriving at the entry IR viewport of the vacuum chamber containing the sample. Upon entry, it grazes off of the trap

surface at approximately 10° and exits through the second IR viewport to meet the second parabolic mirror, which directs the light onto the detector. Both parabolic mirrors have 30 cm foci, with the sample positioned at the focus of both. Finally, the beam reflects off of a plane mirror and enters a Mercury Cadmium Telluride (MCT) detector, focused by a final parabolic mirror. Performing a Fourier transformation of the detector signal yields the desired spectrum. The goal of the measurements is to learn more about the chemical composition of adsorbates on the trap electrode surfaces, in particular under realistic conditions for ion trapping. Before ions can be trapped, vacuum chambers containing the trap are baked, typically to 200 °C, to desorb water and other surface contamination so that UHV can be achieved. Furthermore, ions are typically cooled with near-ultraviolet (UV) light, and hence the traps are often exposed to blue and ultraviolet radiation. This study mimics the steps taken to prepare the traps after manufacturing to understand whether and how these steps affect the trap surface.
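The last step in this chain – turning the interferogram recorded by the detector into a spectrum – is a Fourier transform. A minimal numerical sketch follows; the synthetic interferogram, the sampling step, and the two line positions are illustrative assumptions, not the Bruker instrument's actual parameters.

```python
import numpy as np

def interferogram_to_spectrum(interferogram, step_cm):
    """Recover a magnitude spectrum from an FTIR interferogram.

    The interferogram is the detector signal sampled at equal steps of
    optical path difference; step_cm is that step in cm, so the FFT
    frequency axis comes out directly in wavenumbers (cm^-1).
    """
    spectrum = np.abs(np.fft.rfft(interferogram))
    wavenumbers = np.fft.rfftfreq(len(interferogram), d=step_cm)
    return wavenumbers, spectrum

# Synthetic interferogram: two cosine components standing in for
# spectral lines at 1200 and 2900 cm^-1 (hypothetical positions).
step = 1.0e-4                          # 1 um sampling step, in cm
x = np.arange(4000) * step             # optical path difference
signal = np.cos(2 * np.pi * 1200 * x) + 0.5 * np.cos(2 * np.pi * 2900 * x)

k, spec = interferogram_to_spectrum(signal, step)
print(round(k[np.argmax(spec)]))       # strongest line recovered near 1200
```

A real instrument also applies apodization and phase correction before the transform; the sketch keeps only the core Fourier step.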


Infrared spectroscopy identifies and characterizes molecules by measuring how light interacts with the molecular bonds. The specific molecular vibrations, stretches, rocks, wags, pinches, and other changes to relative atomic orientations manifest themselves as spectral features. Consider Fig. 2: The primary region of interest in most FTIR spectra ranges from 4000 to 2800 cm-1, where most single hydrogen vibrational bonds are located. Common signals here include features of water, alcohol hydroxyls, alkanes, alkenes, and alkynes. The next region from 2800 to

Figure 1. The optical setup displaying the FTIR and external instruments.



Figure 2. Common types of chemical bonds by region in IR spectra. As this has yet to be referenced to a background, the absorption is measured in arbitrary units.

2000 cm-1 includes triple bonds, as well as the carbon dioxide doublet absorption due to the air between optical elements. Signals from 2000 to 1300 cm-1 include double bonds, most of which include at least one carbon, though single hydrogen bond stretches can be found here as well. Finally, from 1300 to 400 cm-1, we choose to define the "fingerprint region," which is best analyzed in terms of its overall shape because it is challenging to resolve bonds that can range from complex motions of alkanes to single bonds of uncommon elements. Counterintuitively, even FTIR spectra of ideal surfaces show a wide variety of signals. To begin, the IR beam is not necessarily spectrally flat. Furthermore, the light passing through air can be absorbed by molecules present in ambient air. Similarly, the light passes through various optical elements which may contain contaminants. Hence, it is important to reference the taken spectra to a background to ensure that the actual signals stem from the sample. One method would be to compare the spectra of the contaminated surface under study to those of an ideal, perfectly clean one. Apart from the difficulties in preparing a truly clean sample, this method becomes difficult if the sample is in vacuum, as alternating between samples is not straightforward. Another method for discerning surface absorption from other effects is to use the fact that most contaminants bound to the surface are aligned perpendicular with respect to the surface. In particular, Grazing Angle Polarization Modulation (PM) can take advantage of this by alternating between perpendicular (S) and parallel (P) components, using one signal as the background for the

Figure 3. S polarized (violet), P polarized (burgundy), and difference over sum (blue) of a typical spectrum. The normalized difference of these two polarizations is insensitive to atmospheric noise and noise from other non-polarizing sources. In this case, the only prominent feature is the signal from the silicon dioxide.

other when collecting spectra.[31] In particular, the fact that P polarized light interacts more strongly with the surface layer than S polarized light allows one to differentiate between surface-bonded molecules and the molecules in air, which do not possess a specific orientation. Since switching between polarizations can be very fast, this method also allows one to efficiently remove background fluctuations.[32, 33] Figure 3 displays the absorption spectra AP and AS of P and S polarized light, respectively, as well as their normalized difference as a function of the wavenumber k: (AP(k) - AS(k)) / (AP(k) + AS(k)). Most of the signals are common to both polarizations, leaving only small differences except in the fingerprint region. In theory, the spectrum should be a flat line with absorption peaks dropping down. The drop in intensity at either end of the spectrum is due to limited infrared transmission through the optics. Spectra can also drift during and between scans due to the motion of equipment or changes in detected intensity caused by environmental factors. The interferometer mirror can also take imperfect step sizes or experience tilting due to hardware limitations. The amplitude of most of the spectra vanishes below 600 cm-1, so that region is not included in the analysis below. The following specific features are also present in Fig. 3: From 4000 to 3400 cm-1 there are peaks that typically correspond to water vapor in the air and condensation on our optics. From 2400 to 2200 cm-1, a pair of strong peaks corresponds to the instrument's environmental carbon dioxide.[34] This carbon dioxide signal in the doublet region is typically the signal most prone to fluctuation, due to its relative strength and the volatility of the environment around the setup. Retaking background scans frequently limits the impact of this atmospheric noise. The entire double bond region here is clouded by water and carbon dioxide signals as well.
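The normalized difference-over-sum is simple to compute once both polarization spectra share a wavenumber grid. A sketch with synthetic spectra – the line positions, widths, and amplitudes are illustrative assumptions, not measured values:

```python
import numpy as np

def pm_signal(a_p, a_s):
    """Polarization-modulation signal: (AP - AS) / (AP + AS).

    Common-mode contributions (atmospheric absorption, source drift)
    appear in both polarizations and largely cancel, while signals from
    surface-bound, oriented molecules survive in the difference.
    """
    a_p = np.asarray(a_p, dtype=float)
    a_s = np.asarray(a_s, dtype=float)
    return (a_p - a_s) / (a_p + a_s)

# Synthetic example: a shared atmospheric CO2 line near 2350 cm^-1 plus
# a surface alkane feature near 2900 cm^-1 seen mainly by P polarization.
k = np.linspace(600, 4000, 3401)               # wavenumber axis, cm^-1
co2 = 0.3 * np.exp(-((k - 2350) / 20.0) ** 2)  # common-mode background line
alkane = 0.1 * np.exp(-((k - 2900) / 15.0) ** 2)
a_s = 1.0 + co2                                 # S barely probes the surface
a_p = 1.0 + co2 + alkane                        # P picks up the surface layer

d = pm_signal(a_p, a_s)
print(k[np.argmax(d)])                          # surface feature dominates
```

Note how the CO2 line, present identically in both polarizations, cancels in the numerator, leaving the surface feature as the only prominent peak – the same behavior described for Fig. 3.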
Due to the complex nature of the fingerprint region, one can typically at best use it to recognize that surfaces are contaminated and distinguish between them as each con-

FALL 2016 | Berkeley Scientific Journal


self should never get directly contacted to other tools except those for wire-bonding used to connect the trap electrically. All tools in close proximity are cleaned with isopropanol and distilled water. Despite all care, we expect the sample surfaces to be contaminaed. As stated in Ref. [35]: “any surface that has been exposed to the atmosphere will have a covering of adsorbents at least several monolayers thick. Additionally, trap electrode mate-

Figure 4. Difference over sum spectra of a surface (blue) subjected to water (red) and wiped dry (green). The primary change to observe is present around 3500 cm 1, corresponding to the addition of hydroxide bonds. The signals around 2900 and 1400 cm 1 indicate that molecules with alkane bonds are now present at the sur-face. The overall baseline shift and the carbon dioxide signal around 2400 cm 1 are not as meaningful,

tamination will have a different fingerprint.[30] Finally, it is possible to make out many narrow peaks clustered around 1700 cm-1 such as the carbon double bond. When performing experiments, one compares the spectra before and after a specific attempt to modify the surface to check if there is any substantial change to the contamination. While it may not be always possible to determine exactly what the contamination is, one still can comment on its molecular structure by identifying particular bonds with high precision. For example, Fig. 3 displays the effects of applying tap water to a surface and then removing it. This surface was open to the air, not in vacuum. We see a substantial water peak as well as signals in the fingerprint region around 1350 cm-1. Most prominently, alkane bonds remain around 2900 and 1400 cm-1 while all other signs of the water are removed. The carbon dioxide peaks around 2400 cm-1 fluctuate as part of the environmental noise.


Each trap surface begins as a glass substrate. The trap electrodes are produced by evaporating first a titanium adhesion layer on the substrate, followed by evaporating either gold or a copper-aluminum alloy to a thickness of 500-1000 nm. The traps are handled in a clean room only and stored in plastic containers. After they are brought into the measurement laboratory, the traps are mounted inside a vacuum chamber with clamps. Any adhesive used is UHV-safe to avoid outgassing. During the initial pumping, heaters bake the chamber to about 200Co. Extreme care is taken to keep all parts which go inside the vacuum chamber oil-free, including multi-step ultrasonic cleaning and wearing nitrile gloves while handling them. The trap surface it-


Berkeley Scientific Journal | FALL 2016

Figure 5. All surfaces tested are uniquely dirty. The upper figure displays the complete spectra. All spectra exhibit some degree of noise around 2400 cm 1 due to fluctua-tion in carbon dioxide content in the air. The signals of individual samples differ most in the fingerprint region. This lower figure displays the the region from 2800 to 3100 cm 1 of the same data. The spectra are baseline corrected again to the new bounds for clarity. All sur-faces include alkane contamination as evidenced by the peaks around 2925 and 2975 cm 1, except for the Cu-Al surface, which has peaks too small to

rials that react with oxygen will have a native oxide layer.” Fig. 5 verifies the existence of substances on numerous surfaces. In addition to the fingerprint regions shown in Fig. 5a, some alkane bonds and the aforementioned instrument imperfections are relevant to the analysis. Of the single bonds displayed in Fig. 5b, most are alkanes, which would be expected for alkanethiol contamination, especially on metal surfaces.[20, 36-38] There is a distinct lack of alkene, alkyne, carboxyl, and other common bonds in any spectra. When hydroxyl bonds are present, they are most likely to be water on the surface in addition to typical parasitic water signals from our instruments and optics. Despite their relative intensity, the bonds of the fingerprint region are too numerous and close together to be resolvable. Some fingerprints include stretches consistent with alkane “wag” and rock,” while others have a broad signal centered around 1200 cm-1 that is associated with silicon dioxide peaks from the glass substrate or possibly plasmon resonance from the metal coating.[39, 40]

The surface used for the first test is gold and is subject to cleaning with hydrocarbon solvents (isopropanol and acetone) that are commonly used to clean parts destined for ultra-high vacuum environments. The surface for the second test is also gold and is subject to baking. The surface for the third test is an alloy of copper and aluminum and is subject to blue laser, blue LED, and UV radiation. In the first test on a gold surface, we observe that applying standard cleaning isopropanol changes the spectra drastically, but almost all of its changes are undone when it is removed with a wipe. This is the same result as with tap water, except that isopropanol reduces the alkane signature left by the water (around 2900 and 1400 cm-1) rather than increasing it. We also observe that the liquid isopropanol temporarily obscures the 1200 cm-1 silicon dioxide signal in the fingerprint region, providing additional evidence that the signal is characteristic of the surface. The spectra shown in Fig. 6 demonstrate how clean the initial surface is compared to one covered in isopropanol, how clean it is after the isopropanol is dried off, and how persistent the initial spectrum of the trap is. These tests reproduce easily, and every repetition had the same effect. In fact, after the alkane tests for this trap were complete, all types of intentional contamination, from contact with skin to applied dust, could be removed with isopropanol, returning the spectrum to its initial values. This suggests that cleaning with isopropanol does not substantially alter the chemical composition of the initial surface contamination on traps in a way that can be observed here.


Figure 6. Initial surface (green), isopropanol on surface (teal), and wiped-clean surface (red). Notice how the isopropanol adds hydrocarbon and hydroxyl signals around 3000 cm-1 and obscures the silicon dioxide fingerprint around 1200 cm-1.


We also test whether the baking procedures commonly required for ultra-high vacuum have an observable effect on the surfaces. Baking is typically used to enable ultra-high vacuum conditions; however, it is known to modify reflectance and can alter surface structures.[41, 42] We find that the bakeout procedure does not significantly alter the spectra when compared to the difference between individual but nominally clean surfaces. Furthermore, as shown in Fig. 7, three sets of week-long exposures to 200 degrees Celsius during baking produce similar results. The last procedure studied here investigates whether exposure to ultraviolet light can alter the surface. Because we use copper-aluminum traps in our laboratory extensively and

WILLIAM TOKUMARU William is a recently graduated physics major from Southern California. His research comes from work with Professor Häffner in the Department of Physics. He began working there after an interesting lab tour and worked on numerous projects before this one. He and Professor Häffner realized that there was a need for studying anomalous heating, so they acquired an FTIR spectrometer for this project.

FALL 2016 | Berkeley Scientific Journal


Figure 7. Initial surface in air (violet), after a week of baking (red), after two weeks (green), and after three weeks (blue). The relative vertical placement of each spectrum is arbitrary, and the vertical axis has been inverted relative to previous figures for clarity. Notice that the differences between spectra in this figure are far less significant than the differences between those depicted in Figs. 5 and 6.

because their FTIR spectra are cleaner than those for gold, we use copper-aluminum traps for this test. In the first of these investigations, we expose the traps to laser light near 397 nm, commonly used in our laboratory, with a total power of 130 microwatts directed through a viewport onto the surface. The beam is focused to a spot approximately half a centimeter in diameter. The initial and final spectra are as similar as the typical variation between spectra run back-to-back; hence, we find no evidence that the laser caused any change. The second investigation uses a 365-400 nm blue-UV LED flashlight with 7 mW of power. Even after shining it on the trap for over 12 hours, there is no effect. The third investigation uses a 290 nm UV LED at 800 microwatts, attached inside the chamber about 2 millimeters above the surface of the trap. The LED is driven only for tens of minutes at a time in order to avoid damaging it. Again, there is no effect, even for in-situ UV radiation. It is quite possible that our equipment and methods are not sensitive enough to detect changes to the spectra. Meaningful signals may be obscured by water and carbon dioxide fluctuations in air. Additionally, some surface signals may be due to surface roughness and polarization-dependent Fabry-Perot interference effects rather than hydrocarbons. However, the observed signals, particularly in the fingerprint region, are many orders of magnitude stronger than the background noise.


This study investigates the effectiveness of PM-FTIR methods for detecting contamination of the surfaces of planar ion traps in air and vacuum. Different surfaces carry different contamination after the manufacturing process, possibly deposited upon brief exposure to air. Isopropanol cleaning, baking, and blue-UV radiation produce no noticeable changes to the trap spectra beyond the deviation between individual trap spectra of

Berkeley Scientific Journal | FALL 2016

nominally the same surface. It is possible that these three treatments cause changes to surface contamination at levels below the sensitivity of our PM-FTIR measurements. Future tests may include the effect of different surface temperatures, as well as annealing and argon treatment of the surfaces.[7, 8] Furthermore, a photoelastic modulator could be added to collect data with faster polarization modulation, minimizing environmental noise by retaining the background on a kHz scale. Finally, FTIR spectroscopy of the surface of a planar trap can be combined with measurements of the motional heating of ions trapped above it, in order to correlate the detected contamination with heating rates.


We would like to thank Maya Lewin-Berlin and Sonke Moeller for fabricating surfaces; Shunlin Wang of Bruker for hardware assistance; and Sepehr Ebadi and Henning Kaufmann for optical design and testing. This research was partially funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Office grant W911NF-10-1-0284. William Tokumaru was supported by the Rose Hills Summer Undergraduate Research Fellowship program. Finally, we would like to thank Crystal Noel, Dr. Michael Martin, and Professor Robert Corn for assistance in editing the manuscript.

REFERENCES

[1] H. Häffner, C. F. Roos, and R. Blatt. Quantum computing with trapped ions. Physics Reports, 469(4):155–203, 2008.

[2] D. J. Wineland, C. Monroe, W. M. Itano, D. Leibfried, B. E. King, and D. M. Meekhof. Experimental issues in coherent quantum state manipulation of trapped atomic ions. J. Res. Natl. Inst. Stand. Technol., 103:259, 1998.

[3] D. Kielpinski. Entanglement and decoherence in a trapped-ion quantum register. n/a, December 2001.

[4] L. Deslauriers, P. C. Haljan, P. J. Lee, K.-A. Brickman, B. B. Blinov, M. J. Madsen, and C. Monroe. Zero-point cooling and low heating of trapped 111Cd+ ions. Phys. Rev. A, 70:043408, Oct 2004.

[5] Jaroslaw Labaziewicz, Yufei Ge, Paul Antohi, David Leibrandt, Kenneth R. Brown, and Isaac L. Chuang. Suppression of heating rates in cryogenic surface-electrode ion traps. Phys. Rev. Lett., 100:013001, Jan 2008.

[6] M. Brownnutt, M. Kumph, P. Rabl, and R. Blatt. Ion-trap measurements of electric-field noise near surfaces. Rev. Mod. Phys., 87:1419–1482, Dec 2015.

[7] D. A. Hite, Y. Colombe, A. C. Wilson, K. R. Brown, U. Warring, R. Jördens, J. D. Jost, K. S. McKay, D. P. Pappas, D. Leibfried, and D. J. Wineland. 100-fold reduction of electric-field noise in an ion trap cleaned with in situ argon ion-beam bombardment. Phys. Rev. Lett., 109:103001, Sep 2012.

[8] N. Daniilidis, S. Gerber, G. Bolloten, M. Ramm, A. Ransford, E. Ulin-Avila, I. Talukdar, and H. Häffner. Surface noise analysis using a single-ion sensor. Phys. Rev. B, 89:245435, Jun 2014.

[9] D. T. C. Allcock, L. Guidoni, T. P. Harty, C. J. Ballance, M. G. Blain, A. M. Steane, and D. M. Lucas. Reduction of heating rate in a microfabricated ion trap by pulsed-laser cleaning. New Journal of Physics, 13(12):123023, 2011.

[10] Shannon X. Wang et al. Laser-induced charging of microfabricated ion traps. Journal of Applied Physics, 110(10):104901, 2011.

[11] E. Simoen, M. G. Cano de Andrade, M. Aoulaiche, N. Collaert, and C. Claeys. Low-frequency noise assessment of the transport mechanisms in SiGe channel bulk FinFETs. IEEE Trans. Electron Devices, 59(1272), 2012.

[12] Ikuo Suemune, Tatsushi Akazaki, Kazunori Tanaka, Masafumi Jo, Katsuhiro Uesugi, Michiaki Endo, Hidekazu Kumano, Eiichi Hanamura, Hideaki Takayanagi, Masamichi Yamanishi, and Hirofumi Kan. Superconductor-based quantum dot light-emitting diodes: Role of Cooper pairs in generating entangled photon pairs. Japanese Journal of Applied Physics, 45(12R):9264, 2006.

[13] W. J. Kim, M. Brown-Hayes, D. A. R. Dalvit, J. H. Brownell, and R. Onofrio. Anomalies in electrostatic calibrations for the measurement of the Casimir force in a sphere-plane geometry. Phys. Rev. A, 78:020101, Aug 2008.

[14] C. W. F. Everitt, D. B. DeBra, B. W. Parkinson, J. P. Turneaure, J. W. Conklin, M. I. Heifetz, G. M. Keiser, A. S. Silbergleit, T. Holmes, J. Kolodziejczak, M. Al-Meshari, J. C. Mester, B. Muhlfelder, V. G. Solomonik, K. Stahl, P. W. Worden, W. Bencze, S. Buchman, B. Clarke, A. Al-Jadaan, H. Al-Jibreen, J. Li, J. A. Lipa, J. M. Lockhart, B. Al-Suwaidan, M. Taber, and S. Wang. Gravity Probe B: Final results of a space experiment to test general relativity. Phys. Rev. Lett., 106:221101, May 2011.

[15] Nikos Daniilidis and Hartmut Häffner. Quantum interfaces between atomic and solid-state systems. Annual Review of Condensed Matter Physics, 4(1):83–112, 2013.

[16] A. W. Adamson and A. P. Gast. Physical Chemistry of Surfaces, 6th ed. Wiley-Interscience, New York, pages 1–190, 1997.

[17] J. C. Love, L. A. Estroff, J. K. Kriebel, R. G. Nuzzo, and G. M. Whitesides. Self-assembled monolayers of thiolates on metals as a form of nanotechnology. Chem. Rev., 2005.

[18] G. E. Poirier and E. D. Pylant. The self-assembly mechanism of alkanethiols on Au(111). Science (Washington, D.C.), 272(1145), 1996.

[19] Ralph G. Nuzzo and David L. Allara. Adsorption of bifunctional organic disulfides on gold surfaces. J. Am. Chem. Soc., 105:4481–4483, 1983.

[20] Marc D. Porter, Thomas B. Bright, David L. Allara, and Christopher E. D. Chidsey. Spontaneously organized molecular assemblies for structural characterization of n-alkyl thiol monolayers on gold by optical ellipsometry, infrared spectroscopy, and electrochemistry. Journal of the American Chemical Society, 109(12):3559–3568, 1987.

[21] L. H. Dubois and R. G. Nuzzo. Synthesis, structure, and properties of model organic surfaces. Annual Review of Physical Chemistry, 43(1):437–463, 1992.

[22] Colin D. Bain, Joe Evall, and George M. Whitesides. Formation of monolayers by the coadsorption of thiols on gold: variation in the head group, tail group, and solvent. Journal of the American Chemical Society, 111(18):7155–7164, 1989.

[23] Colin D. Bain and George M. Whitesides. Depth sensitivity of wetting: Monolayers of ω-mercapto ethers on gold. Science (Washington, D.C.), 240(62), 1988.

[24] Hans A. Biebuyck, Colin D. Bain, and George M. Whitesides. Comparison of organic monolayers on polycrystalline gold spontaneously assembled from solutions containing dialkyl disulfides or alkanethiols. Langmuir, 10(6):1825–1831, 1994.

[25] Paul E. Laibinis, George M. Whitesides, David L. Allara, Yu Tai Tao, Atul N. Parikh, and Ralph G. Nuzzo. Comparison of the structures and wetting properties of self-assembled monolayers of n-alkanethiols on the coinage metal surfaces, copper, silver, and gold. Journal of the American Chemical Society, 113(19):7152–7167, 1991.

[26] L. H. Dubois, B. R. Zegarski, and R. G. Nuzzo. n/a. J. Chem. Phys., 98:678–688, 1993.

[27] Lawrence Pranger, Alex Goldstein, and Rina Tannenbaum. Competitive self-assembly of symmetrical, difunctional molecules on ambient copper surfaces. Langmuir, 21(12):5396–5404, 2005. PMID: 15924468.

[28] L. Strong and G. M. Whitesides. The structures of self-assembled monolayer films of organosulfur compounds adsorbed on gold single crystals: Electron diffraction studies. Langmuir, 4:546–558, 1988.

[29] Kevin Raiber, Andreas Terfort, Carsten Benndorf, Norman Krings, and Hans-Henning Strehblow. Removal of self-assembled monolayers of alkanethiolates on gold by plasma cleaning. Surface Science, 595(1–3):56–63, 2005.

[30] J. Coates. Interpretation of Infrared Spectra, A Practical Approach. In R. A. Meyers (Ed.), Encyclopedia of Analytical Chemistry, pages 10815–10837. John Wiley and Sons Ltd, Chichester, 2000.

[31] Thierry Buffeteau, Bernard Desbat, Eve Pere, and Jean Marie Turlet. Double Beam FTIR Reflection Spectroscopy on Monolayers. In Progress in Fourier Transform Spectroscopy: Proceedings of the 10th International Conference, August 27 – September 1, 1995, Budapest, Hungary, pages 627–629. Springer Vienna, Vienna, 1997.

[32] Robert Corn. Rapid-scan polarization-modulated Fourier transform infrared reflection absorption spectroscopy. Hinds Instruments, Inc. Spring, pages 1–4, 1996.

[33] T. Buffeteau, B. Desbat, and J. M. Turlet. n/a. Mikrochim. Acta [Wien], II:23–26, 1998.

[34] George Stanley. Bruker Tensor 27 FT-IR and OPUS data collection program, v. 1.1. Radboud University Nijmegen, page 20, 1998.

[35] A. W. Czanderna, C. J. Powell, and T. E. Madey. Specimen Handling, Preparation, and Treatments in Surface Characterization. Kluwer Academic Publishers, New York, Boston, Dordrecht, London, Moscow, 2002.

[36] N. J. Geddes, E. M. Paschinger, D. N. Furlong, F. Caruso, C. L. Hoffmann, and J. F. Rabolt. Surface chemical activation of quartz crystal microbalance gold electrodes—analysis by frequency changes, contact angle measurements and grazing angle FTIR. Thin Solid Films, 260(2):192–199, 1995.

[37] Kien Cuong Nguyen. Quantitative analysis of COOH-terminated alkanethiol SAMs on gold nanoparticle surfaces. Advances in Natural Sciences: Nanoscience and Nanotechnology, 3(4):045008, 2012.

[38] Robert V. Duevel and Robert M. Corn. Amide and ester surface attachment reactions for alkanethiol monolayers at gold surfaces as studied by polarization modulation Fourier transform infrared spectroscopy. Analytical Chemistry, 64(4):337–342, 1992.

[39] Ellis R. Lippincott, Alvin Van Valkenburg, Charles E. Weir, and Elmer N. Bunting. Infrared studies on polymorphs of silicon dioxide and germanium dioxide. Journal of Research of the National Bureau of Standards, 61(1):61–70, 1958.

[40] Claire E. Jordon, Brian L. Frey, Steven Kornguth, and Robert M. Corn. Characterization of poly-L-lysine adsorption onto alkanethiol-modified gold surfaces with polarization modulation Fourier transform infrared spectroscopy and surface plasmon resonance measurements. Langmuir, 10(10):3642–3648, 1994.

[41] H. E. Bennett, M. Silver, and E. J. Ashley. Infrared reflectance of aluminum evaporated in ultra-high vacuum. J. Opt. Soc. Am., 53(9):1089–1095, Sep 1963.

[42] D. G. Fedak and N. A. Gjostein. A low energy electron diffraction study of the (100), (110) and (111) surfaces of gold. Acta Metallurgica, 15(5):827–840, 1967.







Since the earliest civilizations, humans have kept time in one form or another, whether through water clocks, sundials, hourglasses, or candle clocks. Though primitive, these early clocks were the building blocks of modern timekeeping technology. Yet even though time is such an essential part of our lives, many people do not understand the mechanics underlying clock function. Archaeological evidence shows that the Egyptians and Babylonians began measuring time 5,000 years ago. They started by recording the length of a day, following the sun across the sky and noting the phases of the moon.1 The Egyptians also created calendars of 12 months with 30 days each; these calendars even included 5 extra days every year to approximate the solar year. The next form of time measurement came with the invention of the sundial. The sundial, which has been invented independently


by all major cultures, works by indicating the time of day through the length and direction of a shadow cast by the sun's light. Because such devices cannot work at night, the sundial's counterpart, the water clock, was created to tell time after dark. The water clock is a basin that lets water drip from a small hole near its base. Lines drawn inside the basin walls denote sections of time, so as the water level dropped, it gradually revealed the lines above the water, thereby indicating the time.1 Alongside the sundial and water clock, the earliest timekeepers include the hourglass and the candle clock. We often see these in films and animation to evoke an archaic setting. Yet even these seemingly familiar and simplistic clocks are quite impressive, keeping time accurately before the physics of water flow and planetary motion were understood. The candle clock works similarly to the water clock in that the wax of a candle is


melted down, and the height of the candle at different moments measures how much time has passed.1 More modern forms of timekeeping include pendulums, pocket watches, and classroom clocks. These are still relatively simple compared to today's digital clocks and beyond, but they are equally interesting and important. Pendulums are distinguished in function from clocks and pocket watches in that they have fewer small components to aid them in telling time. The main parts of a pendulum clock are the rod and weight, which together swing side to side in an oscillating motion.2 Specific internal configurations, not visible from the outside, maintain the same oscillating rate. Even so, a pendulum clock will eventually lag, so a clockmaker or clock owner will occasionally need to reconfigure its cogs so it reads time accurately. Pocket watches and clocks differ from pendulums. Clock mechanics

“Time is a way to track the irreversible occurrences in our lives, from deaths, to food eaten, to water spilled”

are interesting in that clock pieces can be arranged in many different configurations, called escapements. These escapements differ in efficiency, energy conversion, and accuracy.6 Two escapements worth noting are the Verge Escapement and the Grasshopper Escapement.2 The Verge Escapement is likely the oldest clock escapement and consists of a crown-shaped wheel (the escape wheel) that turns vertically with its 'teeth' protruding to one side. These teeth push a pallet (a rod with parts that can be pushed by the crown-wheel's teeth), causing the pallet's rod (the verge) to rotate in a single direction. As the crown-wheel continues to rotate, it pushes the pallet through many cycles, and each cycle translates into a small clockwise movement of the hands on a clock.2
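The steady oscillation rate that pendulum clocks depend on follows from a simple relationship: for small swings, a pendulum's period depends only on its length and gravity, T = 2π√(L/g). A minimal sketch of that formula (the function name and example length are ours, not from the article):

```python
import math

def pendulum_period(length_m: float, g: float = 9.81) -> float:
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# A "seconds pendulum" (one swing per second, i.e., a 2 s full period)
# needs a length of roughly one meter:
print(f"{pendulum_period(0.994):.3f} s")
```

This is why a roughly meter-long pendulum beats once per second, the traditional choice for longcase clocks, and why a clock drifts when its pendulum's effective length changes.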

Similar yet different from the Verge Escapement is the Grasshopper Escapement. This escapement has mostly the same configuration as the Verge Escapement, only with the wheels and pallets turned on their side: it runs horizontally. Its name hints at its appearance: the pallet's position and motion relative to the escape wheel resemble the leg of a grasshopper, as well as the coupling rods on a train, connecting the wheels and rotating in place.2 All of these clock escapements eventually require repair because their efficiency decreases over time. With so many small metal parts pushing against each other, friction is inevitable, and it is the main source of energy inefficiency in clocks.6 More modern forms of timekeeping include the atomic clock and the quantum clock. The clocks described so far rely on visible physical motion to read time, but atomic and quantum clocks use ions, atoms, and radiation waves that are not visible to measure time precisely. Inside an atomic clock, atoms of a specified element, such as cesium, are pushed through a tube to an area where they are exposed to radio waves of a specified frequency. The energy from these radio waves causes the cesium atoms to resonate and change their energy state. A cesium detector at the end of the tube registers every cesium atom that reaches it, and the clock ticks off seconds once the radio-wave frequency that maximizes the number of resonating atoms striking the detector is found.5 A quantum clock is a specialized type of

atomic clock: instead of using a cloud of atoms, it uses single ions to absorb the radiation, records the frequency, and ticks off seconds based on the frequency of registered ions.3 It is important to consider how these devices shape the way we perceive time. To most people, time is a commonplace thing with no special meaning beyond the fact that it runs our daily lives. What many people haven't considered is that time is mostly imagined. Aristotle once described time as a measure or number of some sort of motion. In this view, time is not an independent entity and cannot exist separate from other things in the world.4 Time is directly related to the objects in our lives and exists solely to measure their motion. This notion takes time to digest, but it fundamentally makes sense. Time is a way to track the irreversible occurrences in our lives, from deaths, to food eaten, to water spilled. But this raises the question: why do we bother to measure time so intricately with high-tech clocks and other precise machines if time isn't 'really' there? Though there is no clear answer, and the matter is subject to personal views, a reason shared by many is that time creates a sense of order in society and daily life. Without a schedule to follow, people might wander aimlessly all day long. They wouldn't value their lives because they wouldn't realize time was passing so quickly. Days would melt together, and we would never have a sense of purpose if we didn't keep time.
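The cesium resonance behind the atomic clock is in fact precise enough that it now defines the SI second: exactly 9,192,631,770 oscillations of the radiation driving cesium-133's hyperfine transition equal one second. Counting cycles is then simple arithmetic (the function name below is ours):

```python
CESIUM_HZ = 9_192_631_770  # SI definition: Cs-133 hyperfine transition cycles per second

def cycles_to_seconds(cycles: int) -> float:
    """Elapsed time measured by counting cycles of the cesium standard."""
    return cycles / CESIUM_HZ

# Counting one full day's worth of cycles:
day_cycles = CESIUM_HZ * 86_400
print(cycles_to_seconds(day_cycles))  # 86400.0
```

In other words, an atomic clock does not watch anything move; it counts an invisible, extraordinarily stable oscillation.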



Atomic clock in a laboratory


1. Andrewes, W. J. (2006, February 1). A Chronicle of Timekeeping. Retrieved October 10, 2016, from https://www.
2. Du, R., & Xie, L. (2012). A Brief Review of the Mechanics of Watch and Clock. History of Mechanism and Machine Science: The Mechanics of Mechanical Watches and Clocks, 5-45. doi:10.1007/978-3-642-29308-5_2
3. Erker, P., Mitchison, M. T., Silva, R., Woods, M. P., Brunner, N., & Huber, M. (n.d.). Autonomous quantum clocks: How thermodynamics limits our ability to measure time. Retrieved October 10, 2016, from pdf.
4. Gale, R. M. (1967). The Philosophy of Time: A Collection of Essays. Garden City, NY: Anchor Books. Retrieved October 28, 2016, from books?hl=en&lr=&id=rWChCwAAQBAJ&oi=fnd&pg=PA1&dq=philosophy of time&ots=2jU1n9LCw8&sig=M6cvkd4up4vmThgCpfG7e27kC-A#v=onepage&q&f=false.
5. Gessner, W. (2015, November). Ideal Quantum Clocks and Operator Time. Retrieved October 10, 2016, from https:// pdf.
6. Headrick, M. V. (1997). Clock and Watch Escapement Mechanics. 1-87. Retrieved October 3, 2016, from http://www.

Image Sources
Banner Image:
Sundial Image:
Verge Escapement Image: https://blog.
Atomic Clock Image: http://www.urania.be/sites/default/files/Tijd%20en%20seconden%20huidige%20atoomklok%20atomuhr%20FOCS%201.png




Kitty, a New Caledonian crow, became famous when National Geographic highlighted her ability to solve a puzzle that bewildered many five-year-old children.5 In order to obtain a small piece of meat floating in a tube of water, Kitty placed rocks in a seemingly unrelated tube, causing the water to rise and the meat to float to the top.5 Kitty and her fellow crows possess remarkable cognitive ability, prompting many to wonder what could be behind the evolution of such intelligence. Animal intelligence is tricky to define, despite the efforts of numerous researchers over the years; a quantitative measure of animal intelligence has yet to be agreed upon. It is often assumed that as brain size (both absolute and relative) increases, so does intelligence. However, both of these measures have been ruled out by basic research. For example, cetaceans (whales and dolphins) have a larger absolute brain size than humans,10 yet humans are considered far more intelligent. Similarly, the shrew's brain makes up 10% of its body mass while a human's makes up only 2%. Generally, the definition of intelligence involves the performance of complex behaviors and the use of novel solutions to problems.10 Humans are not alone in possessing great intelligence; in fact, advanced cognition has evolved in many taxa.9 While the intelligence of primates such as chimpanzees has been widely publicized, it has also been found that birds in the corvid group (crows, jays, ravens, etc.) have cognitive abilities comparable to apes.4 Examples of higher cognition in both primates and corvids include object permanence (memory for objects that cannot be seen by the organism),4

the delay of gratification (control of impulsivity),4 mental time travel (memory for past events and planning for future events),4 and tool making.1,13 Although the brains of these organisms are structured differently,4 their intellectual abilities are remarkably similar,4 thus making the evolution of intelligence in these groups particularly fascinating. The evolution of intelligence in crows and apes can be described as convergent evolution: the development of similar traits in organisms that are not closely related. Although mammals and birds share a common ancestor with all vertebrates, approximately 300 million years separate them from their closest common relative, indicating that their advanced cognition must have evolved separately.4 Currently, there are several



hypotheses for the factors behind this convergent evolution. These factors mainly fall under the broad categories of dietary and social. Apes rely on a diet of tropical fruit, and one hypothesis behind their advanced cognition is centered on this diet. Many plants only bear ripe fruit at certain times of the year, and these plants were widely dispersed throughout the habitats of early primates.11 Because these primates were often required to travel large distances to forage for food, larger brains and more complex cognition allowed the primates to travel the most energy-efficient routes.11 Corvids, however, do not rely on ripe fruit. Instead, many corvids "cache" food, and cognitive evolution would have aided their common ancestor in remembering the locations of its caches.3 Furthermore, corvids such as the Western scrub jay know when the food in their cache is going to spoil and become inedible.3 Similarly, many corvids steal from the caches of other birds and employ complex strategies to prevent their own caches from being stolen.7 Certain species, including the scrub jay, will not cache food if they detect another jay nearby.7 They have also been known to move caches if they

Along with competition, hypotheses behind the intelligence of these groups also revolve around cooperation.


believe a competitor may have witnessed them hiding food.7 Along with dietary quandaries, the apes' common ancestor faced many social challenges that promoted the evolution of intelligence.13 Firstly, primates that live in groups are often subject to competition amongst one another. As apes are polygamous, males are often in competition for mating rights.13 Many primates keep up numerous relationships with others in their species. For example, male chimpanzees will compete for "alpha" status and therefore mating rights, requiring the formation of complex relationships with many individuals.13 Meanwhile, females often collaborate to protect their young from violent males.13 Contrastingly, many corvids are

monogamous and do not experience as much competition for mates as apes do.13 Therefore, it is unlikely that competition for mates encouraged cognitive evolution in corvids.13 In this way, the differences in mating represent an example of a hypothesis that applies to one of the taxonomic groups but not the other.13 Along with competition, hypotheses behind the intelligence of these groups also revolve around cooperation within these species. Social learning has great evolutionary benefits, as individuals that can learn from others expend less energy and time than those learning by themselves.13 Both corvids and apes have been shown to learn from watching others, indicating that social learning is important to their survival.13 Such social complexity, along with the capacity for social learning, requires a large amount of cognitive ability, thus encouraging selection for intelligence. Interestingly, the brain structures of birds and mammals differ. Mammal brains contain a structure known as the neocortex, which was considered responsible for many mammals' advanced cognition.4 Since birds do not possess a neocortex, it was long thought that intelligence in birds was impossible.4 However, as experimental evidence that corvid birds are capable of cognitive feats comparable to apes accumulated, researchers realized that the neocortex is not a requirement

for advanced intelligence.4 In conclusion, the convergent evolution of intelligence in corvid birds and apes reveals that this area of study is complex and requires more research. The surprising similarities between apes and corvids indicate that the convergent evolution of intelligence may be worth investigating in taxa that have previously been dismissed in terms of intelligence. Furthermore, more research should be done on the brains of corvid birds in order to determine the physical source of their complex behavior. These results could be applicable to other species. All in all, the evolution of intelligence is a fascinating subject that is far more complex than we currently understand.


1) Alex A. S. Weir, Jackie Chappell, and Alex Kacelnik (2002). Shaping of Hooks in New Caledonian Crows. Science, 297, 981.
2) Emery, Nathan J. (2006). Cognitive ornithology: the evolution of avian intelligence. Philosophical Transactions of the Royal Society B, 361, 23-43.
3) Emery, Nathan J., and Clayton, Nicola S. (2004). The Mentality of Crows: Convergent Evolution of Intelligence in Corvids and Apes. Science, 306, 1903-1907.
4) Güntürkün, O., & Bugnyar, T. (2016). Cognition without Cortex. Trends in Cognitive Sciences, 20(4), 291-303. doi:10.1016/j.tics.2016.02.001
5) Langin, K. (2014, July 24). Are Crows Smarter Than Children? Retrieved November 15, 2016.
6) Lefebvre, L., Reader, S. M., & Sol, D. (2004). Brains, Innovations and Evolution in Birds and Primates. Brain, Behavior and Evolution, 63(4), 233-246. doi:10.1159/000076784
7) Macphail, Euan. and Bolhuis,




“WITH THE USE OF CRYOPRESERVATION, DE-EXTINCTION MAY BECOME A POSSIBILITY IN THE NEAR FUTURE.” Should we bring back extinct species? Many species in the wild face the possibility of extinction. However, thanks to developments in genetic technology, extinction does not have to be permanent. It may be possible to resurrect some of the species we have lost and even to prevent endangered species from ever becoming extinct. For many years, scientists have explored the possibility of “de-extinction”1 and alternative methods of repopulating endangered species. With the use of cryopreservation, de-extinction may become a possibility in the near future. Cryopreservation is the process in which certain cells, such as eggs and sperm, are preserved at very low temperatures and then extracted at a later time when needed. Storing cells at very low temperatures extends their lifetime outside their original


hosts. In a way, cryopreservation allows us to freeze time: we can preserve the eggs and sperm of endangered species and thaw them in the future for in vitro fertilization (IVF). IVF is the process of creating embryos by fertilizing eggs with sperm outside the womb and then transferring the embryo into a recipient. Scientists therefore hope that cryopreservation is the solution to saving endangered species and potentially bringing back extinct ones. An endangered species is a group of organisms at risk of extinction for one or more of the following reasons: destruction of its habitat, an increase in predators, and/or unsustainable breeding due to a small remaining population. Over the years, scientists and researchers have explored numerous techniques in hopes of reintroducing endangered and even extinct animals


back into the wild. Most endangered species are kept in captivity to breed with the remaining members of their population. Because those populations are small, mates are often close relatives, a practice known as inbreeding. Unfortunately, inbreeding carries potential problems: increased reproductive failure leading to fewer offspring, genetically undesirable individuals, and the spread of damaging genes. This is not to say that inbreeding does not occur naturally; some species inbreed on their own for their own survival. Inbreeding is a double-edged sword: it can produce excellent-quality animals, but in excess it can make matters worse for the species. Thus, scientists have turned to cryopreservation for solutions. The survival of some endangered species relies heavily on “frozen

zoos.”3 The Frozen Zoo at San Diego’s Institute for Conservation Research was founded in 1972 as a storage facility for skin-cell samples from rare and endangered species. When the first samples were collected and frozen, genetic technology was still in its infancy, so no one knew exactly how the samples would be used. “The Frozen Zoo was a wonderful idea. They just thought: ‘Well, something might happen, so we should preserve some samples for the future,’” says Dr. Jeanne Loring, who leads the Scripps team working with the frozen samples.3 “This is the first time that there has been something that we can do.”3 Cells preserved in frozen zoos can be added to the gene pool, increasing the chances of healthy reproduction and ultimately allowing zoos not to rely on forced breeding. “If we could use animals that were already dead – even from 20 years ago – to generate sperm and eggs then we can use those individuals to create greater genetic diversity. I see it as being possible. I see no scientific barrier,” Loring says.3 Despite some exceptions, many frozen cells of certain species proved unsuitable for the

long-term preservation of undamaged DNA. Although some species’ cells freeze poorly, others’ work remarkably well. Scientists have successfully conducted cryopreservation on Asian elephants. The worldwide population of the Asian elephant (Elephas maximus) is estimated at around 50,000–70,000, of which approximately 15,000 are in captivity. Unfortunately, the wild population is not reproducing at a rate sufficient to maintain itself, and many scientists predict that the Asian elephant will become extinct within the next few decades if fertility rates continue to decline. At the Hannover Zoo in Germany, thirty ejaculates were collected from six Asian elephants and one African elephant, a close living relative of the Asian elephant. Semen-freezing experiments were conducted on ten ejaculates from one bull; attempts to collect from the other Asian elephant bulls yielded samples that were not suitable for freezing or could not be frozen at the time of collection.1 The ten semen samples were evaluated and then processed for freezing with various cryoprotectants to determine the best freezing technique for Asian elephant semen. The seven

Frozen DNA sample being extracted from storage

“Extinction does not have to be permanent.” different cryoprotectant formulations tested were based on: ethylene glycol, propylene glycol, trehalose, egg yolk, glycerol, and glycerol combined with Me2SO. In the study, the scientists concluded that glycerol was the best cryoprotectant for freezing sperm cells; according to the study, glycerol “increases the intracellular osmolarity and by that decrease cellular dehydration and shrinkage”.1 Still, even with the best cryopreservation techniques, post-thaw survival rates for sperm cells are only about 50%. As a consequence, fertility from artificial insemination (AI) is worse than that with fresh semen in most cases. It is important to recognize the negative effects of cryopreservation, as the technique is still quite imperfect. Several of the cellular organelles of sperm are enveloped by one or more membranes, and membranes are particularly vulnerable to damage during cryopreservation. Sperm membranes affected by cryopreservation include the plasma membrane, the outer acrosomal membrane, and the mitochondrial membranes.4 The sperm cell passes through at least two distinct phases during freezing and thawing: the first relates to the effects of changing temperature, and the second arises from the formation and dissolution of ice.4 First, the sperm experiences cold shock, the extreme sensitivity to sudden cooling exhibited by spermatozoa. The cell membrane becomes fragile in the cold, and the chances of survival



decreases. Second, ice crystals form in the cells and can rupture the membranes upon thawing. The severity of these effects varies among species but depends in all cases on the rate of cooling. Thus, cryopreservation is not entirely reliable for assisted animal reproduction, owing to membrane vulnerability. In addition, the cell membrane is not the only vulnerability during freezing; there is also the possibility of contamination. The cells are kept frozen in liquid nitrogen storage, and liquid nitrogen can act as a carrier for viruses, bacteria, and fungi.2 If the liquid nitrogen is contaminated, the cells will most likely be contaminated as well. Cryopreservation appears to be successful in some instances. However, in order to bring back extinct species, scientists need to find methods that work across species so that precious DNA is not wasted when sperm cells are lost. Unfortunately, cryopreservation has not been proven to have long-lasting effects in all mammals, although fish, reptiles, and many other egg-laying species have been farmed and successfully reintroduced into the wild. Nonetheless, cryopreservation appears to open many doors for the future of de-extinction and the survival of endangered species. However, there is still much to learn about cryopreservation before we can move on to the possibility of resurrecting extinct species.

Scientist holding a frozen DNA sample in the Frozen Zoo at San Diego’s Institute for Conservation Research.

REFERENCES


[1] Behr, Brita., Hermes, Robert., Hildebrandt, Thomas B., Knieriem, Andreas., Kruse, Jürgen and Saragusty, Joseph. Successful cryopreservation of Asian elephant (Elephas maximus) spermatozoa. Animal Reproduction Science, 2008.
[2] Burder, David W. Issues in Contamination and Temperature Variation in the Cryopreservation of Animal Cells and Tissues. BT&C, Inc.
[3] Schultz, David. Should we bring extinct species back from the dead? Science Magazine, 2016.
[4] Watson, P.F. Recent Developments and Concepts in the Cryopreservation of Spermatozoa and the Assessment of their Post-thawing Function. Department of Veterinary Basic Sciences, Royal Veterinary College, 1995.

IMAGE SOURCES
[1]–[3] Google Images




Food insecurity is rarely reported as a result of climate change; oftentimes it is attributed instead to state and governmental malfunction and to poverty. Research shows that the increase in global mean temperature and in extreme weather events drives biogeographic range shifts that move crops poleward. It also shows a positive correlation between the altered latitudinal range of crops and pest distribution. The biogeographic range shift of crops can lead to trait shifts and an expanded biogeographic range in pests, eventually enabling them to thrive in a wide range of environmental conditions. The proliferation of pests threatens food security by decreasing food production and, accordingly, food accessibility. These variables are vital to understanding the biological implications of climate change alongside the other drivers of food insecurity. The United Nations Food and Agriculture Organization estimates that about 795 million of the world’s 7.3 billion people suffered from chronic undernourishment in 2014–2016. Almost all of the hungry, 780 million, live in developing countries, representing one in eight of the population of those countries.1

Global food security is threatened by the spread of pests and disease pathogens, and climate change plays a significant role in the biological side of this problem. Research shows that yields of the top four crops—maize, rice, wheat, and soybean—which currently supply nearly two-thirds of global agricultural calories, are increasing at rates of only 1.6%, 1.0%, 0.9%, and 1.3% per year, respectively. This increase in production is not enough to reach the required rate of 2.4% per year needed to meet the demands of a growing population by 2050.2 Moreover, most of the people currently affected by or at risk of food insecurity live in developing countries.3 These countries also have the lowest incomes and are the hardest hit by climate change.
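As a rough check of these figures, the observed per-year rates can be projected forward and compared with the 2.4% target. The sketch below treats the rates as non-compounding percentages of a base-year yield, consistent with the cited study's framing; the 2008 base year is our assumption, not a figure from the article.

```python
# Illustrative sketch: project yield growth to 2050 at the observed rates
# versus the ~2.4%/yr rate estimated as needed to double production.
observed = {"maize": 1.6, "rice": 1.0, "wheat": 0.9, "soybean": 1.3}  # % per year
required = 2.4                                                        # % per year
years = 2050 - 2008  # growing seasons from the assumed 2008 base year

for crop, rate in observed.items():
    factor = 1 + rate / 100 * years  # non-compounding growth factor by 2050
    print(f"{crop}: x{factor:.2f} of base-year production by 2050")

print(f"required: x{1 + required / 100 * years:.2f} (roughly a doubling)")
```

At these rates even maize, the fastest-growing crop, reaches only about 1.7 times its base-year production by 2050, well short of a doubling.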


Global mean temperature has risen by roughly 0.13 °C per decade since the 1950s, an overall rise of just under 1 °C today relative to pre-industrial norms. Future emissions, even under best-case scenarios, are predicted to add another 1 °C in the next three decades. Along with changes in mean temperature, there has been an increase in warm temperature extremes and a simultaneous reduction in cold extremes.4 Climate change also brings stronger and more frequent extreme weather events and changes in the lengths of crop growing seasons, and it has been shown to induce biological changes in weeds and pests. The inflation of food prices and the decrease in food production can be attributed to many different factors, but the strongest correlations point to climate change. Increasing temperature can stress crops, making it more favorable for pests and weeds to thrive. Climate change can drive biogeographic range shifts as plants adapt to and compensate for longer summers and earlier winters—both of which affect food production. A biogeographic range shift is the expansion or contraction of a species’ area through the movement or disappearance of individuals. Climate change can cause biogeographic range shifts by inducing interspecific interactions, short-term climate extremes, and



changes in temperature and precipitation.5

IMPACT OF CLIMATE CHANGE
The most critical time for many pests is winter, because low-temperature extremes can significantly increase mortality, reducing population levels in the following season. The warming caused by climate change allows insects to increase fecundity and shorten generation times.6 In addition, more generations per year can accelerate species evolution. Together, shorter generation times and faster insect evolution can increase the severity of insect herbivory, with detrimental impacts on crops and food production.

The increase in the fecundity and generations per year of pests can result in increased pest herbivory. Better-adapted pests can expand their biogeographic ranges together with shifts in their hosts’ ranges driven by climate change. Elevated temperature can increase a host crop’s susceptibility to certain pathogens that are not normally pathogenic at lower temperatures. Further, an increase in pest biomass can pose a risk to young and developing crops, and the uncontrolled growth of these pests could trigger new outbreaks that damage growing and existing crops. In addition, insects are often migratory, which makes them well adapted to exploiting new territories and resources; this behavior, together with the expanding biogeographic range of their hosts, increases their capacity for outbreaks and herbivory. Increasing temperature also improves the availability of breeding areas and resources.7 For example, Sambaraju et al. (2012) demonstrated that warming temperatures drove an increase in outbreaks of bark beetle infestations in forests in California and Nevada. Between 1997 and 2010, more than 5 million hectares of pine trees in the western US died from infestations of bark beetles, most notably mountain pine beetles (D. ponderosae) and spruce beetles (D. rufipennis)—more than were killed by forest fires. The study suggests that warming summer and winter temperatures are the main cause of this outbreak.

Low Food Supply, Accessibility, and Malnutrition
Increasing temperature may result in the re-emergence of pathogens, the introduction of pests to new biogeographic ranges, and pest adaptation, all of which may lower crop yield and food production. The production yield of the top three crops decreases as mean global temperature increases,8 and average crop yields are projected to continue declining. For the major crops (wheat, rice, and maize) in tropical and temperate regions, climate change without adaptation (such as sustainable soil management and irrigation access) will negatively impact production at local temperature increases of 2 °C or more above late-20th-century levels, although individual locations may benefit.9

Food prices are expected to continue to rise as global food production declines and the world struggles to keep pace with rising demand and a growing population. Climate change has contributed to the inconsistencies and changes in agriculture by inducing biological changes in crops and pests. If these changes are not mitigated and actions to slow the effects of climate change are not taken seriously, food prices will continue to increase.10


1. WFP. (2015) Hunger Statistics. Retrieved from hunger/stats
2. Deepak, R.K. (2013) Yield Trends Are Insufficient to Double Global Crop Production by 2050. Public Library of Science, 8, 1-9.
3. IPCC. (2014) Climate Change 2014: Synthesis Report.
4. Gourdji, S.M., Sibley, A.M. & Lobell, D.B. (2013) Global crop exposure to critical high temperatures in the reproductive period: Historical trends and future projections. Environmental Research Letters, 8.
5. Rosenzweig, C. (2011) Assessing agricultural risks of climate change in the 21st century in a global gridded crop model intercomparison. Proceedings of the National Academy of Sciences of the United States of America, 111, 3268-3273.
6. Delucia, E.H. et al. (2012) Climate Change: Resetting Plant-Insect Interactions. Plant Physiology, 160, 1677-1685.
7. Porter, J.R. et al. (2014) Food security and food production systems. In Climate change 2014: impacts, adaptation, and vulnerability. Part A: global and sectoral aspects, 485–533.
8. Challinor, A.J. et al. (2014) A meta-analysis of crop yield under climate change and adaptation. Nature Climate Change, 4, 287-291.
9. Porter, J.R., Cochrane, K., Howden, et al. (2014) Food security and food production systems.

Maps of observed rates of percent yield changes per year. Global map of current percentage rates of changes in (a) maize, (b) rice, (c) wheat, and (d) soybean yields. Red areas show where yields are declining whereas the fluorescent green areas show where rates of yield increase – if sustained – would double production by 2050.



Abstract: Can eating foods of an assortment of colors help one stay healthy? In this study, a randomized controlled trial evaluated the impact of a colorful diet on 8 healthy human adults (ages 20–60) with similar demographic and dietary backgrounds. One daily meal of each volunteer in the intervention group was substituted with a handpicked ration containing all colors of the rainbow, the Rainbow Diet Pack (RDP). Fruits and vegetables were chosen based on the distinctive molecular structure and chemical composition of the most prevalent phytonutrient(s) in each. RDP was administered daily to the intervention group (n=5) over a 10-wk intervention period. Weight, waist circumference, hand grip strength, and stress levels were measured. Analyses revealed that eating raspberries, oranges, carrots, broccoli, blueberries, and bananas balanced stress levels and led to weight loss, but did not impact hand-grip strength, demonstrating the healthy outcomes of a colorful diet.




That one should eat healthy foods to stay healthy is not up for debate. Numerous studies show how particular foods individually affect human health, but none thus far, to our knowledge, has investigated the combined impact of a specific diet on the human body as a whole.1-5 It is critical to understand which kinds of foods we should eat and how their collective consumption affects our bodies. According to Dr. Thomas J. Carlson, a distinguished pediatrician and ethnobotany researcher, choosing foods from every color of the rainbow is the key to good health.6 Each fruit and vegetable gets its natural color from the chemical composition of its characteristic phytonutrient(s).19 Interestingly, the presence of a given molecule in one fruit or vegetable does not necessarily produce the same color in another type of fresh produce. For instance, although the rich red color of most red fruits and vegetables derives naturally from the phytonutrient lycopene, most berries, such as strawberries and raspberries, contain no lycopene. Instead, they contain brightly colored chemicals called anthocyanins, which plants make during the ripening season by joining a molecule of sugar to a molecule of their colorless “anthocyanidin”

precursors.7 Anthocyanins are abundant in raspberries, which are high in dietary fiber and vitamin C and have a low glycemic index because they contain 6% fiber and only 4% sugar by total weight.8 The fiber in the fruit, when consumed, helps lower levels of low-density lipoprotein (LDL), the ‘unhealthy’ cholesterol, enhancing heart function and potentially inducing weight loss. The exact pigment that anthocyanins produce depends partly on the acidity or alkalinity of different plants’ tissues. Because of the relatively high pH of the tissues of blueberry plants, these chemicals turn blue during the ripening of the fruit.7 Recent research in the Journal of Nutrition suggests that the abundant antioxidants in wild blueberries help slow the development of disorders such as Alzheimer’s dementia and cognitive loss.9 A class of antioxidants found selectively in yellow and orange foods are the cryptoxanthins. In a study conducted by Bovier et al., the beta form of these carotenoids, combined with other nutrients such as lutein and zeaxanthin found in carrots, oranges, and corn, was shown to improve visual processing speed with regular consumption in young healthy subjects.10, 18



While green produce mainly derives its pigmentation from chlorophyll, its white counterparts get their natural color from anthoxanthins, flavonoid pigments with antioxidant properties. Among green fruits and vegetables, broccoli stands apart as the most nutritious because of the special combination in which its three glucosinolate phytonutrients (glucoraphanin, gluconasturtiian, and glucobrassicin) occur. This “dynamic trio” yields isothiocyanates (ITCs), the detox-regulating molecules in broccoli that enhance vitamin A in the form of beta-carotene.11 Many recent studies claim that the antioxidant ITCs not only regulate metabolism and cholesterol levels when consumed but also act as cancer-chemopreventive phytochemicals.12-13 In the white-produce family, bananas rank with broccoli in health value. Japanese scientists report that the high amounts of vitamin B6, manganese, potassium, and fiber in ripe bananas can help prevent high blood pressure, protect against atherosclerosis, and improve immunity in regular eaters.14 Despite an enormous amount of scientific knowledge and evidence for the beneficial effects of single fruits, vegetables, or phytonutrients on human health, no study so far, to our knowledge, has conclusively linked these claims to the whole human body. This offers the opportunity to test the combined

impact of eating a colorful diet on humans through a systematic study. The purpose of our investigation is to take a more holistic approach to studying how the human body is affected by a diet composed of all the colors of the rainbow. In other words, in addition to exploring each foodstuff’s role in improving health, we want to analyze the outcome of regularly incorporating a whole pack of colorful foods into one’s meals. Consequently, this study can serve to reveal the effect, if any, of a continued and rigorous diet consisting of all colors of the rainbow on the physical and mental health of a randomized sample of adults in a demographically comparable community.


The current study is a small-scale secondary application of some of the methods used in a previously conducted study reported elsewhere.1 The primary study used a randomized controlled trial to compare the effect of daily consumption of probiotic (PY) versus low-fat conventional (LY) yogurt on weight loss in healthy obese women; the outcomes tested were changes in anthropometric measurements (waist circumference and body weight). In our study, we measured hand grip strength and stress levels in addition to some of the parameters tested in the primary study. We created a Rainbow Diet Pack (RDP) consisting of the following fruits and vegetables in the respective quantities: raspberry (3), orange (1), baby carrots (4), corn (1/2 cob), broccoli floret (3), blueberry (5), and banana (1). Each kind of fresh produce was chosen based on the specific nutritional facts and molecular composition of its phytonutrients (see introduction for details). As per the personal choice of its members, the intervention group (n=5) received daily administration of RDP during a 10-wk intervention period. Measurements were taken of both the study (n=5) and control (n=3) groups twice: at baseline and at the end of the intervention period. Our study design was in accordance with the Declaration of Helsinki. Participants

The external appearance of fruits and vegetables offers insight into the pigments they carry.



Figure 1. Consort diagram

Twenty-four (24) healthy adult volunteers (ages 20–60) who belonged to the same demographic identity and had similar dietary backgrounds were recruited by word of mouth from the local community of the student investigators and screened for health. A total of eight (8) were chosen to participate. Individuals were eligible for the study if they were nonsmokers, free of known disease, not allergic to items in the RDP, not taking medications, and identified as healthy according to the following criteria: body mass index (BMI) between 18.5 and 24.9 kg/m2 and a self-report of no diseases/illnesses in the previous 6 months.

Randomization
A computerized random number generator was used to assign participants to either the control or the intervention group. At the end of the baseline screening, a message containing the participants’ number assignments was sent to them via email. Participants and the student investigator were aware of group assignment during the intervention phase; participants were not aware of the other participants who had agreed to be in the study. Before analysis, the primary investigator received an anonymized data set and was no longer aware of group assignment after data collection; no data can be traced back to an individual participant.

Control Group
Participants allocated to the control group received standard advising and were allowed to continue their diet ad libitum; they were not asked to consume the RDP. Pre- and post-intervention measurements were taken for members of the control group.

Intervention Group
Participants allocated to the intervention group received standardized nutritional support. One of the

daily meals of the volunteers in this group was substituted with a ration consisting of all colors of the rainbow in the form of the Rainbow Diet Pack (RDP), which the intervention group was asked to consume daily. A serving of RDP contains about 521 calories, 15 g protein, 110.6 g carbohydrate, 95 mg sodium, 38 g sugar, and 4 g fat.

Monitoring Adherence
Interviews to recall daily adherence to the RDP were conducted by telephone about once per 4-wk period. Participants also had to keep a written record of their RDP observance.

Dietary Intake
Dietary intake was recorded daily in a diary by members of the intervention group during the 10-wk intervention period.

Outcome Parameters
All measurements were made at baseline and 10 weeks after the start of the intervention period.

Anthropometric Measurements
Waist circumference and body weight were taken in the traditional way, using a measuring tape and weighing scale, and recorded at each measurement period for all partici-



About the Author

About the Author

Class of 2016 Computer Science

Class of 2018 Molecular Environmental Biology Evolutionary Genetics Lab

Jnana Aditya Challa
Jnana Aditya Challa graduated from UC Berkeley in Spring 2016 with a B.A. in Computer Science. During his time at Cal, he was heavily involved with the American Red Cross and The Berkeley Project and worked as a Computer Science TA. His primary interests include community service, basketball, cricket, biking, hiking, app development, and computer repair. He is currently studying the effects of drugs such as donepezil and L-DOPA on spatial working memory in the human brain.

pants. Baseline height was used for subsequent calculations.

Hand Grip Strength
Hand grip strength (kg) was measured using a hydraulic hand dynamometer (Baseline, Fabrication Enterprises, Inc., Elmsford, NY).15 Participants were asked to perform two force trials with their non-dominant hand in a standing position or, if that was not possible, from a seated position at individual comfort level. The highest value was used. Participants were asked to keep track of their personalized data.

Stress Levels
Both the Social Readjustment Rating Scale (SRRS) questionnaire and the Hassles and Uplifts Test were emailed to all participants to assess their stress levels.16,17 Participants recorded their responses during both measurement periods. These tests include questions, which the participants were comfortable answering, about personal health, relationship status, monetary commitments, and other potential stressors.
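The computerized allocation described under Randomization can be sketched as follows. This is a hypothetical reconstruction, not the investigators' actual script; the function name, participant IDs, and seed are illustrative, while the 5:3 split matches the study.

```python
import random

def allocate(participant_ids, n_intervention, seed=None):
    """Randomly split participants into intervention and control groups."""
    rng = random.Random(seed)      # seeded only so the example is reproducible
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    return shuffled[:n_intervention], shuffled[n_intervention:]

# Eight eligible participants, five assigned to the RDP (intervention) group.
intervention, control = allocate(range(1, 9), n_intervention=5, seed=42)
print("intervention:", sorted(intervention))
print("control:", sorted(control))
```

In practice the seed would be omitted (or kept hidden from investigators) so that assignments are unpredictable at enrollment.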


Baseline Characteristics
Of the 24 individuals interested in participating in the study, 16 were excluded because they did not meet the health criteria. The remaining 8 subjects gave written consent; 5 were randomly allocated to the intervention group, which was required to consume the RDP, and 3 to the control group, which continued its diet ad libitum. The RDP subjects completed the 10-wk intervention program (62.5% of the randomly assigned population, Figure 1). After starting the intervention, no subjects dropped out. At baseline, there were no statistically significant differences in physical characteristics between the groups or



Akshara Sree Challa

Akshara Sree Challa is currently a junior at Cal from Albany, California. She is an intended Molecular Environmental Biology major with an emphasis on Human Health and plans to pursue a career in medicine. She works closely with various hospitals across the Bay Area as a volunteer and intern. She is fascinated by Nutrition and Gene Therapy and is currently involved with research at the Evolutionary Genetics Lab. In her spare

between those who completed or did not complete the study once recruited (Table 1). At baseline, 2 of 3 (66.7%) participants in the control group and 1 of 5 (20%) in the intervention group had a BMI less than 20 kg/m2; ten weeks after the intervention, these values varied slightly.

Monitoring Adherence
Results on intake of the RDP are shown in Table 2. Protein and vitamin A, C, and D intake levels were significantly higher in the intervention group than in the control group owing to the contents of the RDP. Adherence to the RDP was 100% according to the written records of each participant. All participants in the intervention group consumed the RDP, with a mean intake of 1 meal per day (target 1/day). We contacted 62.5% of participants by telephone at least every other week, with a mean of 5.8 contacts (target 6 contacts per participant).

Body Weight, BMI, and Waist Circumference
Body weight, BMI, and waist circumference at baseline are presented in Table 1. Ten weeks after the program, body weight had decreased to 59.43 ± 11.57 kg in the intervention group and 63 ± 13.72 kg in the control group. As shown in Table 2, there was significant weight reduction in the intervention group after 10 wk of study (the mean difference was around -11.15 kg in the RDP group versus around -2.89 kg in the control group). BMI reduction in each group was in the expected direction, with significant effects over 10 wk for both groups. In both groups, waist circumference had decreased after 10 wk of intervention: the decline was around -5.06 cm in the RDP group but only around -0.6 cm in the ad libitum diet group, a significant difference for the inter-

vention group when compared to the control group after 10 wk of the intervention (Table 2).

Hand Grip Strength
Hand grip strength did not change significantly from baseline in either the intervention or the control group; the mean increase was around 1.0 ± 6.7 kg in both groups.

Stress Levels
Stress levels changed significantly in both groups. The mean SRRS score increased by around 1 point in the intervention group and decreased by 2 points in the control group (Table 2). The hassles-to-uplifts ratio increased significantly in the RDP group, by 0.11 units; there was no significant change in the ratio for the control group.
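The group-level figures reported here (e.g., around -11.15 kg in the RDP group) are mean within-subject changes from baseline. A minimal sketch of that calculation, using hypothetical per-subject weights since the individual data are not published:

```python
def mean_change(baseline, post):
    """Mean within-subject change (post - baseline) for one group."""
    assert len(baseline) == len(post), "paired measurements required"
    diffs = [p - b for b, p in zip(baseline, post)]
    return sum(diffs) / len(diffs)

# Hypothetical weights (kg) for the five RDP subjects, baseline vs. week 10.
rdp_baseline = [72.0, 81.5, 60.2, 55.4, 84.0]
rdp_post     = [61.0, 70.1, 49.9, 44.8, 71.6]

print(f"RDP mean weight change: {mean_change(rdp_baseline, rdp_post):+.2f} kg")
```

The same function applies to waist circumference, SRRS scores, or any other paired pre/post measurement in Table 2.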


Discussion

The aim of this study was to assess the effects of eating a diet consisting of all the colors of the rainbow, in the form of an RDP once a day, on weight loss, stress levels, and other indexes of health in healthy volunteers during a 10-wk intervention program. We found that consumption of the RDP as lunch may result in positive changes in waist circumference, weight loss, and stress levels as measured during the program, despite no significant differences in hand grip strength being observed between the study and control groups. In spite of evidence for the beneficial effects of eating varied, naturally colorful produce on obesity and health, to our knowledge this was the first randomized controlled trial to investigate the effect of consuming the RDP as a whole on weight loss and stress levels in healthy human subjects. Overall, a decline in anthropometric measurements and cardiometabolic risk factors, including body weight and stress levels, was observed, to a degree that would be expected with an energy-restricted diet intervention (20). Total weight and waist circumference decreased to a significantly greater extent in the RDP group than in the control group. Nevertheless, future long-term trials are required to provide evidence-based recommendations regarding the beneficial effects of RDP on further body profiles. Finally, regarding the effects of RDP on hand grip strength: despite similar changes in HGS in both groups, the study group showed slightly greater improvements in strength than the control group over 10 wk. Further comprehensive RCTs are necessary to establish a quantifiable effect of RDP consumption on outcomes such as carbohydrate absorption, given the statistical differences that have been seen. There were some drawbacks to this study. Although the sample size of eight was enough to detect statistically significant effects on the fundamental outcomes, this number was not representative of the general population as a whole, particularly because it did not include individuals from dissimilar demographic and dietary backgrounds. Furthermore, the study was of a relatively short duration (10 wk). Longer-term studies, with continued consumption of the rainbow diet over a longer duration, are required to establish whether the effects can be sustained.


Conclusion

This study confirms and adds to the knowledge that a colorful diet can induce a positive body profile, with healthy weight loss and balanced stress levels, in healthy adults. The association between nutrition and physical and mental health in humans is therefore direct. Ultimately, it is crucial to maintain a colorful diet for at least one portion of daily meals to improve overall nutritional and physical status.


Acknowledgments

Thanks to all the volunteers and donors without whose help this project could not have been possible. We would also like to thank Dr. Thomas Carlson for inspiring us to explore the topics of Medical Ethnobotany and Anatomy. Special thanks to Professor Kurt Spreyer for his immense encouragement and help.

Table 1. Baseline Characteristics of Participants†

Table 2. Anthropometric and psychological measurements in RDP group and Ad Libitum diet group before and after the 10-wk intervention†


References

1. Madjd, Ameneh et al. "Comparison of the Effect of Daily Consumption of Probiotic Compared with Low-fat Conventional Yogurt on Weight Loss in Healthy Obese Women following an Energy-restricted Diet: A Randomized Controlled Trial." Am J Clin Nutr, American Society for Nutrition (2016). Web. 2 Jan. 2016.
2. Lam, Patrick et al. "Effects of Consuming Dietary Fructose versus Glucose on De Novo Lipogenesis in Overweight and Obese Human Subjects." Berkeley Scientific Journal 15.2 (2011).
3. Neelemaat, F. "Short-Term Oral Nutritional Intervention with Protein and Vitamin D Decreases Falls in Malnourished Older Adults." Journal of the American Geriatrics Society 60: 691–699.
4. Russell MK. "Functional Assessment of Nutrition Status." Nutr Clin Pract. 2015;2:211-18.
5. Leenders, M. et al. (2015). "Subtypes of fruit and vegetables, variety in consumption and risk of colon and rectal cancer in the European Prospective Investigation into Cancer and Nutrition." Int. J. Cancer 137: 2705–2714.
6. Carlson, T.J. (Director) (2015, November 4). Medical Ethnobotany (IB 117) Lecture: Eye Health, Alzheimer's Dementia, Antioxidants.
7. "Why Are Strawberries Red." Strawberries For Strawberry Lovers. N.p., n.d. Web. 14 Dec. 2015.
8. "Nutrient data for raw raspberries, USDA Nutrient Database, SR-21." Conde Nast. 2014.
9. Kay, Colin D., and Bruce J. Holub. "The Effect of Wild Blueberry (Vaccinium Angustifolium) Consumption on Postprandial Serum Antioxidant Status in Human Subjects." British Journal of Nutrition 88 (2002): 389-97.
10. Bovier, Emily R., and Billy R. Hammond. "A Randomized Placebo-controlled Study on the Effects of Lutein and Zeaxanthin on Visual Processing Speed in Young Healthy Subjects." Archives of Biochemistry and Biophysics, 15 Apr. 2015. Web. 1 Jan. 2016.
11. "Broccoli." The George Mateljan Foundation, n.d. Web. 15 Jan. 2016.
12. Zhang Y. "Allyl isothiocyanate as a cancer chemopreventive phytochemical." Mol Nutr Food Res. 2010 Jan;54(1):127-35.
13. Thompson CA, Habermann TM, Wang AH, et al. "Antioxidant intake from fruits, vegetables and other sources and risk of non-Hodgkin's lymphoma: the Iowa Women's Health Study." Int J Cancer. 2010 Feb 15;126(4):992-1003.
14. Iwasawa, Haruyo, and Masatoshi Yamazaki. "Differences in Biological Response Modifier-like Activities According to the Strain and Maturity of Bananas." Food Sci. Technol. Res 15.3 (2009): 275-82. Web. 1 Jan. 2016.
15. "Dynamometer." AccessScience. Web. 20 Dec. 2015.
16. Holmes, T. H. and Rahe, R. H. "The social readjustment rating scale." Journal of Psychosomatic Research 11(2) (1967): 213-21.
17. DeLongis, A., Folkman, S., & Lazarus, R. (1988). "The impact of daily stress on health and mood: Psychological social resources as mediators." Journal of Personality and Social Psychology 54: 486–495.
18. "In Style This Summer: Color-Coded Eating." Feathers Fringe. N.p., 13 July 2011. Web. 5 Jan. 2016.
19. "Research.ncsu." N.p., n.d. Web. 10 Jan. 2016.
20. Kelley GA, Kelley KS, Roberts S, Haskell W. "Combined effects of aerobic exercise and diet on lipids and lipoproteins in overweight and obese adults: a meta-analysis." J Obes 2012;2012:985902.




February 11, 2016. LIGO scientists officially announced the first direct detection of gravitational waves. It instantly became one of the hottest topics of the day, and people expressed their excitement: some because they understood the significance of the discovery, and some because they wanted to blend in. To better understand the excitement, it is important to know what gravitational waves are, and why detecting them is a remarkable achievement.

WHAT ARE GRAVITATIONAL WAVES?

Gravitational waves are, of course, waves, but where did the "gravitational" part come from? To know why these waves are specifically called "gravitational" waves, we first need to look at some features of gravity. According to Einstein, gravity can be explained as curved space: "Gravitational waves are ripples in the fabric of spacetime."1 Just as heavy objects can bend the space around them, the propagation of gravitational waves creates ripples, or distortion, in the spacetime fabric. Gravitational waves can be generated by any accelerating object with mass, which means everyday objects, such as cars, can generate gravitational waves. However, generating gravitational waves strong enough to be detected requires far more energy than that. The gravitational waves scientists detected at LIGO were caused by two black holes, with masses of 29 and 36 suns, colliding with each other 1.3 billion years ago. Three suns' worth of mass turned into pure energy, which was radiated as gravitational waves. Kip Thorne, a physicist at the California Institute of Technology, said, "It is by far the most powerful explosion humans have ever detected except for the Big Bang."2 Another distinctive feature of gravitational waves is that they can travel freely. Most waves, like sound waves or ocean waves, require a medium to propagate; without a proper medium, there cannot be waves. Electromagnetic waves, on the other hand, do not require any medium,

and that is why they are about the only tool we can use to study the universe. Using electromagnetic waves alone, scientists have unraveled many mysteries about the universe, but the approach has its limits. We know from daily experience that light can easily be blocked. Light can also be bent, distorted, or even trapped by strong gravity. Some of the most interesting objects, such as black holes, cannot be studied using electromagnetic waves: since light cannot escape from inside a black hole, it is impossible to study what is going on inside one using electromagnetic waves. Gravitational waves, however, barely interact with matter, which means they lose far less information as they travel across the universe. Gravitational waves can in principle be absorbed by enough mass with dissipative forces, but in practice this is essentially impossible.3

HISTORY OF THE SEARCH FOR GRAVITATIONAL WAVES

Ever since their existence was predicted by



Einstein in the early 20th century, there have been numerous attempts to confirm the existence of gravitational waves. In 1969, Maryland physicist Joseph Weber used two aluminum cylinders to detect gravitational waves.4 Weber's idea was that when gravitational waves passed a cylinder, the cylinder would resonate with them. Just as the LIGO detectors are located at two different places in the US, he placed his two cylinders at two different locations to rule out false signals arising from other sources: since the Weber bars were far apart, they would not pick up noise from the same local source. The bars picked up identical signals on multiple occasions, and Weber concluded that the coincident detections he made at two different locations were due to gravitational waves.5 However, as other scientists failed to reproduce Weber's results, his experiment was questioned and, eventually, rejected. Weber's experiment did not have

enough precision to detect gravitational waves, which is not surprising considering simply how much "bigger" the LIGO detectors are, and his instruments targeted only a narrow bandwidth of frequencies: unless incoming gravitational waves have a frequency close to aluminum's resonance frequency, the aluminum bars cannot resonate with them. Although Weber was not successful at detecting gravitational waves, his experiment kick-started the search for them. In 1974, two physicists, Joseph Taylor, Jr. and Russell Hulse, discovered two neutron stars orbiting each other. Taylor and Hulse knew that this discovery presented a great opportunity to test Einstein's general theory of relativity. The stars were observed for over 30 years and found to be orbiting faster and faster while getting closer to each other over time, which means they are losing energy. The observations agreed with the value calculated theoretically from Einstein's theory. They did not detect gravitational waves directly,

iLIGO vs aLIGO suspension comparison



“It will also present a chance for us to learn about the earliest stage of the universe”

but they provided indirect proof of the existence of gravitational waves. They were awarded the Nobel Prize "for the discovery of a new type of pulsar, a discovery that has opened up new possibilities for the study of gravitation."6 After Taylor and Hulse's discovery, many projects set out to make a direct observation of gravitational waves. Finally, construction began on the LIGO detectors in the 1990s. After the initial LIGO was built, it operated for 9 years without any success in detecting gravitational waves. However, from the initial LIGO, scientists learned how to operate, maintain, and improve such a detector. From 2008 to 2015, scientists used what they had learned from iLIGO to upgrade the detector. One of the main tasks was to improve its suspension system, which is crucial for reducing undesired noise and thus increasing sensitivity. Advanced LIGO (aLIGO) was 10 times more sensitive than iLIGO, and almost as soon as it was turned on, it put an end to a century-long search for gravitational waves.7 Gravitational wave detection was by no means an overnight achievement; it is the result of 100 years of dedicated work. To achieve this seemingly impossible goal, 620 million dollars were spent to build a detector with 4-kilometer-long tunnels.8 After seeing "620 million dollars," it is hard not to ask why. Why would anyone want to spend that much money on this project? Is it worth it? The short answer is yes.

WHY IS IT IMPORTANT?

Many people are skeptical about pouring millions of dollars into science programs because they believe that whatever discoveries scientists make will not contribute to real life at all. It is true that the knowledge

humanity can learn from such science projects is priceless, but if a project costs 600 million dollars, there had better be some practicality to it. Not all scientific discoveries come with obvious practical value, and even when they hold great possibilities, we often fail to recognize them right away. "It's of no use whatsoever." This was Hertz's own answer when asked about the practical use of electromagnetic waves. Hertz knew his discovery was important as experimental evidence proving Maxwell's ideas, but he could not see the great possibilities and value of his own discovery.9 Obviously, no one could have come up with the idea of the cell phone or Wi-Fi then. As of now, it is simply not possible to fully understand what gravitational waves can offer humanity in the future. One immediate benefit is that gravitational waves can be a new tool to help us uncover many mysteries. Just as telescopes and microscopes allowed us to see worlds too far away or too small for our naked eyes, gravitational waves will greatly broaden our understanding of the world. In one way, gravitational waves can be compared to x-rays. As mentioned earlier, there are many obstacles that make it harder, or impossible, to observe our universe using electromagnetic waves: they can be absorbed and blocked by dust, or trapped by black holes before they reach us. Gravitational waves, on the other hand, cannot be completely blocked by dust, stars, or even black holes. Just as we can use x-rays to see through some obstacles, we can, with the help of gravitational waves, see what we could not see before. The true importance of gravitational waves lies in the field of the unknown. Janna Levin, an astrophysicist

at Barnard College of Columbia University, wondered, "Are there things out there that we've never even wrapped our heads around with telescopes?"10 Our knowledge of the universe is very limited, and there is far more that we don't know than we do. Gravitational waves might offer a chance to learn something we have never even imagined.

WHAT'S NEXT

Humanity has taken its first step into a new era, but there is much more left to be done. Gravitational waves won't just magically tell scientists all about the universe. Scientists have merely gotten their hands on a new tool, and now it's time to think about how to use it. We can emphasize some unique features of gravitational waves by comparing them to electromagnetic waves, but, perhaps, it is more natural to compare them to sound waves. Like sound waves, and unlike light, gravitational waves spread out in all directions, which means we cannot know exactly where one is coming from using a single detector. To pinpoint where gravitational waves originate, we need to use multiple detectors at different locations to triangulate the signal. This also means that gravitational wave detectors can pick up waves coming from any direction, though not necessarily with the same sensitivity: the detectors do not need to point toward a source to detect incoming signals. It is interesting to detect gravitational waves and study the waves themselves to learn what information they contain, but, after all, what we really want is to find out where the signals originate and study their sources. To do so, we need more detectors. Two LIGO detectors are simply not enough

to tell us exactly where a signal is coming from. Many projects have already been set in motion to construct additional detectors, and not all of them will be on Earth. The European Space Agency's (ESA) Evolved Laser Interferometer Space Antenna (eLISA) project aims to build a gravitational wave detector in space. Three satellites, located 1 million kilometers apart and connected by two laser arms, will orbit the Earth while perfectly maintaining their formation. On top of that, eLISA will be free from many external noise sources, which will allow much better precision. With its improved capability, eLISA will be able to cover a much broader range of frequencies.11 After the first successful detection of gravitational waves, it certainly seems that more and more resources are being poured into the search. That is because the final goal was never just to confirm the existence of gravitational waves: more advanced detectors such as eLISA can serve as telescopes for observing the most interesting events in the universe. Another significant possibility is that gravitational waves may provide an answer to one of the most important questions of all time: how did it all begin? It is human nature to wonder where we come from, and many interesting answers to that question have been offered. Unfortunately, there is no way to recreate the Big Bang in a lab, and the best way to study what happened then is to look at what it left behind. It was only about 380,000 years after the Big Bang that the universe became "transparent" enough for light to travel freely. Gravitational waves, however, could travel through the hot and dense early universe. According to current theory, the universe experienced an accelerated expansion for the tiniest fraction of a second, and this would have created gravitational waves that left imprints on the Cosmic Microwave Background.12 There have been many projects to find imprints of these primordial gravitational waves. In 2014, the Background Imaging of Cosmic Extragalactic Polarization (BICEP2) experiment found a "curly" pattern of light polarization called B-modes, which was believed to be a pattern left on the CMB by gravitational waves that "squeezed and stretched" space. However, follow-up measurements found that the signal detected by BICEP2 came from cosmic dust, whose contribution was much higher than the BICEP2 team had originally expected from the data available at the time. Even though the team failed to find evidence of inflation, this does not disprove inflation itself. "The gravitational wave signal could still be there, and the search is definitely on," said Brendan Crill, a member of both the BICEP2 and Planck teams from JPL.13 The imprints of gravitational waves on the CMB, if found, will provide evidence for inflation theory, and they will also present a chance for us to learn about the earliest stage of the universe. Gravitational waves literally offer a "glimpse into the past." Since primordial gravitational waves have wavelengths comparable to the size of the universe, it won't be possible for us to detect them directly, and as the BICEP2 experiments show, it will be extremely challenging to find any sign of gravitational waves created about 14 billion years ago. It might even seem impossible, but so was detecting gravitational waves when their existence was first predicted.14
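The triangulation idea mentioned earlier, that arrival-time differences between detectors constrain a source's direction, can be sketched with a toy calculation. This is not LIGO's actual localization pipeline; the detector baseline and the roughly 7 ms inter-site delay quoted for GW150914 are approximate figures used only for illustration:

```python
import math

C = 299_792_458.0          # speed of light, m/s
BASELINE_M = 3_002_000.0   # approximate Hanford-Livingston separation, m

def source_angle_deg(delay_s):
    """Angle between the detector baseline and the source direction
    implied by a time-of-arrival difference. One delay fixes only this
    angle (a ring of possible directions on the sky), which is why more
    detectors are needed to pinpoint a source."""
    cos_theta = delay_s * C / BASELINE_M
    return math.degrees(math.acos(cos_theta))

# Maximum possible delay: a wave travelling right along the baseline.
max_delay = BASELINE_M / C
print(f"{max_delay * 1000:.1f} ms")            # 10.0 ms
print(f"{source_angle_deg(0.0069):.0f} deg")   # ~46 deg for a 6.9 ms delay
```

Each added detector shrinks the ring of candidate directions, which is why the text stresses that two LIGO sites alone cannot localize a source well.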

A rendered image of eLISA satellites.



REFERENCES
1. What are Gravitational Waves? Retrieved November 07, 2016
2. Cho, A. (2016, February 31). Retrieved November 07, 2016
3. LIGO & Gravitational waves. Retrieved November 7, 2016
4. Worland, J. (2016, February 11). Retrieved November 07, 2016
5. Focus: A Fleeting Detection of Gravitational Waves. (2005, December 22). Retrieved November 07, 2016
6. Press Release: The 1993 Nobel Prize in Physics. (1993, October 13). Retrieved November 07, 2016
7. About aLIGO. Retrieved November 07, 2016
8. Castelvecchi, D. (2015, September 15). Retrieved November 07, 2016
9. Luenberger, D. G. (2006). Information science
10. Billings, L. (2016, February 12). Retrieved November 07, 2016
11. LISA Gravitational Wave Observatory. Retrieved November 07, 2016
12. Planck: Gravitational waves remain elusive. (2015, January 30). Retrieved November 07, 2016
13. Greicius, T. (Ed.). (2015, January 30). Retrieved November 07, 2016
14. King, A. (2016, February 18). Retrieved November 07, 2016

BERKELEY SCIENTIFIC JOURNAL Visit our science blog for more great content! Find us at





Food is an essential part of life. Unfortunately for us, it does not last forever. Before refrigeration, people had to find ways to store food for times of low crop yield, for travel across great distances, and for times of famine. Honey, salt, sugar, spices, pepper, onion, garlic, and ginger were used throughout history as natural ways to protect food from rotting and to make food that was about to go bad easier and safer to swallow. In fact, keeping food preserved spurred an entire economy in the 14th century, as well as European access to the Americas, through the spice trade. Since then, the Food and Drug Administration (FDA) in the United States has approved thousands of preservatives, from acacia to zoalene, for use in commercial food products. However, with recent consumer cautiousness towards artificial preservatives, there is a movement towards a more natural approach to food preservation. What makes a preservative work is the inhibition of the growth of microbes and fungi, as well as protection against damaging free radicals, which can cause cancer and heart disease and speed up the aging process, according to Louis & Parke.1 Preservatives therefore walk a delicate line: killing the organisms that cause spoilage without damaging or killing us. A major part of drawing that line is FDA-regulated concentrations of, and restrictions on, certain substances to help maintain standards of limited toxicity.


While the FDA has approved certain preservatives for human consumption, that doesn't mean they are completely free of negative side effects. Sulfur dioxide is legally used in many products, such as dried fruits, fruit juices, and some meats. It keeps fruits from discoloring and has antimicrobial properties. Though it has been used since ancient times as a preservative, studies have shown some apparent issues with this gas, which can be toxic at standard atmospheric conditions. Individuals with asthma should be careful, as sulfur dioxide could aggravate their symptoms.2 However, it is still FDA approved at concentrations lower than 0.05% in foods.


"The Food and Drug Administration (FDA) in the United States has approved thousands of preservatives from acacia to zoalene..."

Nitrates are common in the food preservation industry as well, particularly with processed meats. They help keep cold cuts and cured meats fresh by inhibiting the enzymes of molds. Nitrite toxicity is due to tissue death from lack of oxygen: nitrates convert haemoglobin, an oxygen-carrying protein in the blood, to a defective form known as methaemoglobin. Concerns have been raised about their use, as infants are particularly sensitive and nitrates are also used in mashed vegetable baby food. There are also concerns about nitrates being carcinogenic. Studies have shown that the "interaction of nitrite with a variety of nitrogenous compounds, including secondary amines, either in the food matrix



Figure 1: Apricots treated with sulfur dioxide on the right, compared to those untreated on the left.

or in the digestive tract," can result in the creation of nitrosamines within the body, which lead to cancer.3 In response to consumers' concerns about these and other chemical preservatives, there has been a spike in research into using bio-based extracts to preserve foods naturally. Scientists are looking at substances like those found in fruit peels and the walls of fungi to help keep foods from spoiling. Though some studies look purely at the effects of utilizing these plant extracts on their own, some are taking baby steps towards total usage, looking at ways to combine the preservatives of today with newer methods so that currently approved levels can be decreased and foods made safer. Areca nuts are one of the most addictive substances in the world. Combined with their medicinal properties and popularity as a leisure food, this makes them high in demand in South and East Asia. The areca nut is known for its "antiparasitic effects, effects on digestive, nervous and cardiovascular systems," among others.4 However, due to the temperatures of the region where they are grown and their sensitivity to cold owing to their tropical nature, it is difficult to keep areca nuts fresh enough to preserve the properties for which they are so highly valued.

However, as discussed previously, sulfur dioxide can cause issues for those who have difficulty breathing due to pre-existing asthma. Chitosan may be a natural alternative. Commonly found in the exoskeletons of crustaceans as well as in fungal walls, it can activate genes and enzymes in plants that defend against decay. Food can be further fortified with a chitosan film on the fruit surface, decreasing the amount of oxygen and carbon dioxide exchanged and preventing fruit respiration and rot.5 Chitosan thus offers both outer and inner defenses to protect the freshness of the fruit.

Though chitosan offers a natural and effective way to keep our foods fresh, there is still a use and need for sulfur dioxide. Chitosan helps keep the fruit safe from damage from the chemical conditions of the environment, but sulfur dioxide plays a large role in inhibiting microbial growth. Both are needed in order for the nuts to be fresh and safe to eat. Both decrease the rate of decay significantly on their own, with chitosan prolonging shelf life slightly more than sulfur dioxide alone. Combined, the two treatments can maintain quality for forty days, almost twice as long as untreated areca nuts. Moreover, there is a benefit to using chitosan in conjunction with sulfur dioxide, as opposed to sulfur dioxide alone as is traditionally done: when using both treatments, the final sulfur dioxide content was 9.3-12.5 mg/kg lower than the upper limit of the approved concentration of 50 mg/kg.6 Other alternatives to sulfur dioxide have also been explored in combination with chitosan to further increase shelf life. Citric acid or licorice extract can be used like sulfur dioxide to offer the same antimicrobial effects; both are methods that achieve the same goal as the potentially dangerous substances currently being used.7 Besides chemicals and other animal- and plant-derived extracts, changes in atmosphere can also be used in conjunction with chitosan to help increase shelf life: packaging with varying levels of carbon dioxide, nitrogen, and oxygen helps keep the quality of foods.8 And while chitosan is a well-studied substance in food preservation research, other substances also show the ability to protect foods; pomegranate peel extracts are other additives undergoing research.

WHAT NOW?

Biopreservatives are just beginning to be explored, and through continued research there will likely be more and more natural extracts able to offer similar or other effects to keep the quality of our foods fresh at a time when food demand for the ever-growing population is on the rise. There is a ways to go until we can make the conversion to a strictly natural biopreservative industry. The FDA has officially recognized some biopreservatives in its list of allowed preservatives, like allspice and spearmint. Hopefully soon we'll be able to pick up an orange or a fish fillet that has been treated with natural additives.

REFERENCES
1. Louis & Parke 1998
2. Prabhakar & Mallika 2014
3. Phillips, 1971; Winton, Tardiff & McCabe, 1971
4. Nadkarni 2012; Duke 2013
5. Rath and Supachitra 2015
6. Zhang, J., Li, X., & Wang, W. 2016
7. Qui, X., Chen, S., & Yang, Q. 201
8. Reale, A., Tremonte, P., Succi, M., Renzo, T., Capilongo V., Luca, T., Pannella, G., Rosato, M. P., Nicolaia, I., Coppola, R., & Sorrentino, Elena 2011



To ensure availability outside of the harvest season, microbial growth needs to be controlled. Growth is commonly measured in colony-forming units (CFU) per gram: the number of bacterial colonies that can be grown on a plate from the sample in question, determined through a total plate count. Sulfur dioxide is commonly used to preserve the nuts, as it is with other produce such as dried mangos.
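The plate-count arithmetic behind a CFU/g figure is straightforward; the sketch below is a simplified illustration with made-up numbers (real protocols average multiple plates and track each dilution step explicitly):

```python
def cfu_per_gram(colonies, total_dilution, plated_volume_ml):
    """Estimate CFU per gram of food from a single plate count.

    colonies:          colonies counted on the plate
    total_dilution:    overall dilution of the original food homogenate,
                       e.g. 1e-5 for a 1:10 homogenate diluted 1:1000 more
    plated_volume_ml:  volume spread on the plate, in mL
    """
    return colonies / (total_dilution * plated_volume_ml)

# Example: 45 colonies grown from 0.1 mL of a 1e-5 dilution.
print(cfu_per_gram(45, 1e-5, 0.1))  # ~4.5e7 CFU/g
```

Counts are usually only considered reliable when the plate holds roughly 25 to 250 colonies, which is why serial dilutions are plated in the first place.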

Figure 2: Could chitosan, like that found in mushrooms and shrimp, be the solution?






The oldest verified person lived to 122 years old and died in 1997, but even today we have people claiming to be even older, such as Mbah Gotho, who states he is 145 years old. The current worldwide average life expectancy is 71 years, but the variety of ages we live to is enormous. We still have much to learn about what leads to these differences in life span, which leads us to question what aging is. Numerous studies have investigated the various contributors to aging, such as telomerase, mitochondrial signaling, sensory signaling, diet, and microRNAs. Telomerase is an RNA-dependent DNA polymerase that lengthens the ends of chromosomes by adding base pairs. The ends of each of the arms of a chromosome are called telomeres, and they protect the cell from losing important coding regions on chromosomes. Base pairs are lost from the chromosome ends upon each division, which slowly shortens the telomere. This continues until a point at which the cell can no longer divide, called cellular senescence. The cell is then able to kill itself by the apoptotic pathway when the critical length is reached. In the end, all telomeres are controlled by telomerase activity and by erosion during cell division, and this is one of the reasons why organisms such as humans currently cannot live forever. The idea that telomere shortening and apoptosis are important in the regulation of tumor suppression is strongly supported. Investigations into the telomeres of different organisms, such as lobsters, can provide greater insight into their role and functions in aging. Lobsters are known to have indeterminate growth, meaning that they continue to grow until they die. This is unlike humans, who have determinate growth and mostly stop growing after puberty. We can then think about chromosomal shortening and ask why lobsters are able to replicate their cells continuously but humans are not.
The answer is the lobster's telomerase, which is continuously produced even after maturation of the tissues, unlike in human cells, which mostly exhibit no telomerase activity after differentiation. Telomerase is only one of many factors that play a role in aging, however, so even though lobsters can theoretically keep dividing their cells continuously, they usually live 30-50 years depending on sex. The fact that lobsters do not live forever, or for unreasonably long, shows that other factors, such as mutations, diseases, and diet, play huge roles in determining life span.


Berkeley Scientific Journal | FALL 2016

Studies of telomerase and genetics are often extremely difficult to perform on humans, but fortunately animal models can provide tremendous information. We can look at simpler organisms such as Daphnia pulex (clone RW20) and Daphnia pulicaria (clone Lake XVI-11), also known as water fleas. TRAP assays were used to compare telomerase activity in these two species at set points in their lives in an equal and controlled environment. The results showed that D. pulicaria exhibited a steady decrease in telomerase activity as it aged, while D. pulex maintained its telomerase activity, with a significant increase from 1 week to 2 weeks of age and smaller increases after that. Yet D. pulex (RW20) had a median lifespan of only 16 days, compared to an average lifespan of 79 days for D. pulicaria. Further testing showed that at around week 1 both species had similar telomere lengths, but after the first week D. pulicaria exhibited significant telomere shortening while D. pulex exhibited none. This is the opposite of what one would expect, because D. pulicaria lives around 5 times longer than D. pulex yet shows telomere shortening while D. pulex does not. This study is a significant verification that telomerase is only a small part of the process of aging and of determining life span. One last thing to examine is not an organism itself but a cellular anomaly in humans: cancer. Cancer cells in particular need telomerase activity because of their continuous division and growth, which eventually leads to a person's death. Telomere shortening in normal cells helps prevent tumors by limiting the number of times a cell can divide should mutations leading toward tumors occur. It is well known that one of the greatest indicators of serious malignant tumors is the reactivation of telomerase.
Studies have also indicated that cancer cells usually have telomeres of the same length as, or shorter than, those of the surrounding tissue, indicating a lack of overexpression of telomerase. There are cases in which cancers use overactive telomerase and produce longer telomeres, but this applies to less than 10% of cancers.

Figure 1: Daphnia life spans

Telomeres of normal length or shorter are observed in around 90% of cancer cell cases. Telomerase is a huge factor in the cancer cell life cycle and is now a target for anti-cancer therapeutic drugs, since most human somatic cells do not produce telomerase. We can see that telomerase plays a significant part in aging in many ways, though in some instances less than one might expect. Further telomerase research could enable many different breakthroughs, from extended life spans to cancer treatments. There are many other factors to consider when we look at aging, and perhaps telomerase is not the current limiting factor; nevertheless, it can inform our understanding of other diseases, as reduced telomerase activity is seen in many genetically inherited and early-onset diseases. When we look at telomerase activity within organisms, we see less influence than one might expect. It is possible that telomeres are already designed efficiently enough, and are long enough, that they are rarely the cause of disease and death, but this is another area of research that needs to be explored. The overall impact of telomerase research is at this point enormous, given its vast applications.

References
1. Schumpert, C., Nelson, J., Kim, E., Dudycha, J. L., & Patel, R. C. (2015). Telomerase activity and telomere length in Daphnia. PLoS ONE, 10(5), e0127196. doi:10.1371/journal.pone.0127196
2. Klapper, W., Kühne, K., Singh, K. K., Heidorn, K., Parwaresch, R., & Krupp, G. (1998). Longevity of lobsters is linked to ubiquitous telomerase expression. FEBS Letters, 439. doi:10.1016/S0014-5793(98)01357-X
3. Blagosklonny, M. V. (2012). Answering the ultimate question "What is the proximal cause of aging?" Aging (Albany NY), 4(12), 861-877.
4. Artan, M., Hwang, A. B., Lee, S. V., & Nam, H. G. (2015). Meeting report: International Symposium on the Genetics of Aging and Life History II. Aging (Albany NY), 7(6), 362-369.
5. Cong, Y.-S., Wright, W. E., & Shay, J. W. (2002). Human telomerase and its regulation. Microbiology and Molecular Biology Reviews, 66(3), 407-425. doi:10.1128/MMBR.66.3.407-425.2002
6. Gomes, N. M. V., Shay, J. W., & Wright, W. E. (2010). Telomere biology in Metazoa. FEBS Letters, 584(17), 3741-3751. doi:10.1016/j.febslet.2010.07.031

FALL 2016 | Berkeley Scientific Journal


Bidirectional Cross-Modal Influence on Emotion Ratings of Auditory and Visual Stimuli

Abstract: Previous research concerning cross-modal influences on emotional perception has focused primarily on how auditory stimuli affect emotional responses to visual stimuli. The present study examines whether such effects are bidirectional. Different participants were tested in one of these two directions of influence, using a slider-bar rating task to judge the emotionality (sadness/happiness) of stimuli in an attended modality (auditory or visual). Stimuli were presented in auditory and visual pairs, with instructions to ignore stimuli from the irrelevant modality. All stimuli (auditory and visual) had been previously categorized as sad, ambiguous, or happy. Results showed that ratings depended primarily on the emotional categories of stimuli in the attended modality (auditory or visual). In addition, participants were subject to smaller cross-modal influences from the unattended modality in both the auditory and visual attentional conditions. Bidirectional influences were thus observed, showing that perceptual influence is not limited to a single cross-modal direction.



Auditory and visual information are frequently combined to produce unique effects in many types of entertainment and performance. The simultaneous presentation of music and film, in particular, is a very powerful combination that is used to elicit emotion in cinema. The ability of music to influence emotion has been well documented in the literature.1 Similar evidence documents the effectiveness of film segments in evoking emotions within and across participants.2 While these studies individually highlight the capabilities of music and film to manipulate emotion, it is important to consider the specific effects that may result when auditory and visual stimuli are experienced together. Multiple studies have combined auditory and visual information in order to assess the effects of such cross-modality pairings on various parameters of emotion and cognition.3-9 For present purposes, it will be necessary to recognize the distinction between experienced emotion and the perception of stimulus emotionality. With regard to the perception of emotionality in stimuli, fewer studies have illustrated the possibility of a cross-modal influence.11,12 While research on this topic is less plentiful, it does begin to elucidate the phenomenon of interest. Until recently, research on the topic of cross-modal influence has focused primarily on the auditory-to-visual direction. This is not surprising, as combinations of auditory and visual information are frequently found in television, movies, and theatrical performances, as mentioned earlier, where the primary focus is on visually occurring activity. It is therefore natural to ask whether and why the addition of music matters in these examples. The present study addresses whether these sorts of phenomena can also occur in the opposite direction. The handful of studies that have investigated the visual-to-auditory direction of influence seem to focus less on the topics of experienced affect and stimulus emotionality that have been studied in the auditory-to-visual direction, showing instead a greater emphasis on the perception of other stimulus characteristics.13-17

In summary, while much of the research in the auditory-to-visual direction appears to focus mainly on the experience of affect, some studies have instead chosen to explore the perception of stimulus emotionality. When comparing the two directions of influence, it is apparent that the visual-to-auditory direction as a whole has been less explored. From reviewing the available research in this direction, the lack of emphasis on the topic of affect in general is clear. There are few studies of experienced affect, and even fewer suggesting cross-modal effects on the perception of stimulus emotionality. While some of these studies do explore participant ratings of auditory stimuli, few ask whether the perceived emotionality of auditory stimuli can be altered by the presence of emotional visual information.

Given this review of the current literature, we hypothesized that emotionality ratings of both auditory (musical excerpts) and visual (images) stimuli would be influenced by their respective cross-modal stimuli. We also predicted that these effects would be greater when stimuli in the attended modality were ambiguous and those in the unattended modality were unambiguous (happy or sad) than for other possible combinations. We identified "ambiguous" stimuli by finding those that maximized the value of a unique difference score calculation, reasoning that such stimuli could be judged as either somewhat happy or somewhat sad. To the best of our knowledge, no studies to date have investigated bidirectional cross-modal influence on emotionality ratings of both auditory and visual stimuli with ambiguous stimuli operationalized in this way.


I. Participants
Fifty student participants (32 females, 17 males, and 1 participant who declined to report gender) were selected from the Research Participation Program (RPP) through the Psychology Department at the University of California, Berkeley. Students received one credit for participating. Participants were between eighteen and sixty-one years of age, with the average age in the early twenties (M = 22.88, SD = 6.95). One participant declined to report their age and was not included in the age range. Participants were primarily Asian, with thirty-three identifying as such. Ten participants identified as Caucasian, three as Hispanic/Latino, two as both Caucasian and Asian, one as both African-American and Hispanic/Latino, and one as an ethnicity not specified. Four participants reported neurological conditions, with three of those four also reporting the type of medication prescribed for the listed condition. Although English was not the only native language reported, all fifty participants spoke English fluently and had no difficulty understanding the instructions for the experiment.

Figure 1. Ambiguous, sad, and happy images such as those shown above were used in combination with short audio segments for each individual trial.

II. Materials
The auditory and visual stimuli used in this experiment were chosen through the review of numerous university stimulus sets that had been made publicly available online. The auditory stimuli were selected from a set obtained from the University of Jyväskylä,23 and the visual stimuli from a set obtained from Cornell University.24 The auditory stimuli consisted of film score segments spanning a wide range of genres, such as romance, horror, comedy, and drama, that had been rated on levels of valence, energy, and several emotions. The visual stimuli consisted of a broad range of static photographic images, including, but not limited to, people, animals, nature, inanimate objects, and various social situations. The stimuli in both the auditory and visual sets had all been previously rated on numerous emotions, including sadness and happiness. The same selection criteria were used for the auditory and visual stimulus sets to define three conditions: sad, ambiguous, and happy. For the sad and happy stimuli, this involved computing a difference score between the value listed for the intended emotion (either sadness or happiness) and the value for the unintended emotion (the opposite emotion). For ambiguous stimuli, the absolute value of the difference score was subtracted from the sum of the happy and sad values. This simple calculation yielded an ambiguity score for each stimulus.
Stimuli belonging to the highest 30 difference scores in each emotionality category of both modalities were identified. However, due to high content similarity among the top 30 stimuli in the visual sad emotionality category, the highest 95 difference scores were used for that category instead. The 15 stimuli in each category that were judged to be the most different from each other were selected by the experimenter as final stimuli. The final stimulus set used in the experiment thus consisted of 90 stimuli: 45 auditory and 45 visual, with each emotion category in both modalities containing 15 stimuli. The top three stimuli in both the happy and sad emotion categories from each of the two modalities were used in an instructional anchoring task designed to teach participants how to navigate the experiment. All auditory stimuli were shortened to 10 seconds, with the first and last 2 seconds of each excerpt fading in and out, respectively. All images were cropped to the same size (8x8 inches, 72 pixels/inch). It is important to note that, although the same method of stimulus selection was used for both the auditory and visual stimulus sets, different numerical scales had been used in the original sets, as they had not been collected from the same source.
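The selection arithmetic is simple enough to state directly. A minimal sketch (the function names are ours, not from the original stimulus sets):

```python
def difference_score(intended, unintended):
    """Unambiguity of a sad or happy stimulus: how strongly the
    intended emotion rating dominates the opposite emotion rating."""
    return intended - unintended

def ambiguity_score(happy, sad):
    """(happy + sad) - |happy - sad|. Highest when both ratings are
    substantial and close together; algebraically it equals
    2 * min(happy, sad)."""
    return (happy + sad) - abs(happy - sad)

# A stimulus rated happy=4, sad=4 is maximally ambiguous (score 8),
# while one rated happy=7, sad=1 is not (score 2).
```

The identity with 2 * min(happy, sad) makes the intuition explicit: an ambiguous stimulus must carry a meaningful amount of *both* emotions, not merely a small difference between two near-zero ratings.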



Figure 2. The 45 trials that each participant experienced consisted of 5 random pairings in each of the 9 combinations of auditory and visual stimulus emotionality categories.
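The pairing scheme in Figure 2 amounts to sampling without replacement within each of the 9 cells and then shuffling the presentation order. A sketch of how such a trial list could be generated (our own illustrative code, not the original experiment software):

```python
import random

EMOTIONS = ("sad", "ambiguous", "happy")

def build_trials(auditory, visual, per_cell=5, seed=None):
    """auditory/visual map each emotion category to 15 stimulus IDs.
    Returns 45 (auditory, visual) pairs: per_cell random pairings in
    each of the 9 category combinations, drawn without replacement,
    shuffled into a unique presentation order per participant."""
    rng = random.Random(seed)
    # shuffled working copies, so the caller's pools are untouched
    aud = {e: rng.sample(ids, len(ids)) for e, ids in auditory.items()}
    vis = {e: rng.sample(ids, len(ids)) for e, ids in visual.items()}
    trials = [(aud[a].pop(), vis[v].pop())
              for a in EMOTIONS for v in EMOTIONS
              for _ in range(per_cell)]
    rng.shuffle(trials)  # randomized order for each participant
    return trials
```

With 15 stimuli per category, each category's pool is drawn exactly 3 x 5 = 15 times, so every stimulus appears exactly once across the 45 trials, matching the design described in the text.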

III. Design
Three independent variables were included in the experimental design: the modality of the to-be-rated attended stimuli (auditory or visual) to which participants were assigned, the emotionality category of the auditory stimulus (sad, ambiguous, or happy), and the emotionality category of the visual stimulus (sad, ambiguous, or happy). The first variable (attended and rated modality) was a between-subjects variable, and the second and third variables were within-subjects variables. The dependent variable consisted of the ratings that participants made of stimuli in their attended modality, which depended on the rating condition to which the participant was assigned. Figure 2 depicts the categories of emotionality pairings presented to participants and the number of trials of each such pairing. The pairings of auditory and visual stimuli were chosen at random (without replacement) for each participant on each trial. This design and randomization were used for both rating conditions.

IV. Procedure
Participants were assigned to either the auditory or visual rating condition in an alternating pattern. Instructions had been tailored appropriately for each of the two rating conditions and were read aloud to the participant by the experimenter. Participants were told that they would be presented with a series of stimuli; the modality of these stimuli depended on the rating condition of the participant (auditory or visual). A rating scale was shown below the instructions that consisted of a single horizontal line positioned between two smaller vertical tick marks on either side of the scale, which indicated the left and right extremities. The left end of the scale was labeled "Sad", and the right end was labeled "Happy". Participants were told that this rating scale would appear on each trial, ten seconds after presentation of the stimulus, and that it was to be used to record their judgment about the emotionality of the attended stimulus in that trial. Participants were also told that in each trial, a stimulus from another modality (the modality of the other rating condition) would accompany the stimulus that they had been asked to rate. In each rating condition, participants were asked to ignore these additional stimuli, but to do so in a way that would not prevent them from experiencing the stimuli (i.e., without covering their ears or closing their eyes). Attended and unattended stimulus modalities were consistent across trials for all participants in a given rating condition. Participants were asked to use the full scale when making their ratings throughout the course of the experiment. The experiment itself consisted of 45 trials with a single auditory-visual pairing in each trial. All participants were exposed to the entire collection of stimuli in both modalities; however, the order of stimulus presentation was randomized for each participant. This resulted in participants experiencing unique sequences of auditory and visual combinations.

Results
Average ratings of the sadness/happiness of attended stimuli are plotted in Figure 3 for the auditory and visual modalities. A mixed factorial design (2x3x3) ANOVA revealed significant main effects of emotionality condition in both the auditory (F(2, 423) = 89.0, p < 0.0001) and visual rating conditions (F(2, 423) = 92.6, p < 0.0001). A significant interaction between rating condition and auditory stimulus emotionality was found (F(2, 423) = 261.90, p < 0.0001). Similarly, a significant interaction between rating condition and visual stimulus emotionality was found (F(2, 423) = 276.16, p < 0.0001). The interaction between auditory stimulus emotionality and visual stimulus emotionality was not significant (F(4, 423) = 0.33, p = 0.855), nor was the three-way interaction between rating condition, auditory stimulus emotionality, and visual stimulus emotionality (F(4, 423) = 0.32, p = 0.87).

As expected, when participants were in the auditory rating condition, the emotional category of the musical selections had a significant effect on their ratings (F(2, 207) = 130.77, p < 0.0001). Happy music was rated as reliably happier than ambiguous music, and ambiguous music was rated as reliably happier than sad music. Similarly, in the visual rating condition, the emotional category of the photographic images had a significant effect on the ratings that participants assigned to them (F(2, 207) = 185.43, p < 0.0001). Again, happy images were rated as reliably happier than ambiguous images, and ambiguous images were rated as reliably happier than sad images.

We now turn to the contextual effects of the emotional condition of the unattended stimuli on ratings of the



Figure 3. The graphs above display the means of each possible combination of auditory and visual emotionality in both the Auditory and Visual rating conditions.

attended stimuli. A pairwise comparison with Bonferroni correction between the effects of happy versus sad images on ratings of the happiness of the musical selections revealed a small but significant difference (t(49) = 3.18, p = 0.003): happy images increased happiness ratings of the music and sad images decreased them. An analogous pairwise comparison with Bonferroni correction between happy and sad musical excerpts on happiness ratings of the photographic images also revealed a significant difference (t(49) = 2.67, p = 0.01): happy music increased happiness ratings of the images and sad music decreased them. These effects show that unattended cross-modal stimuli do indeed influence people's judgments of the emotionality of auditory and visual stimuli in both directions, as predicted. Somewhat surprisingly, there was no indication that ratings of the ambiguous stimuli were more strongly affected than those of the happy and sad stimuli, since the curves plotted in Figure 3 are largely parallel. Looking at the graph in Figure 3, it seems there might be an effect in the auditory rating condition, such that sad images decrease ratings of ambiguous auditory stimuli more readily than ambiguous images do. However, no significant difference was found between these two emotionality categories in affecting participant ratings of ambiguous auditory stimuli (t(24) = 1.14, p = 0.27). Interestingly, there seemed to be an opposite effect in the visual rating condition. The one case in which the curves deviate from parallelism arises in the visual rating condition, where the ambiguous musical excerpts seemed to produce somewhat lower ratings of the ambiguous images than would be expected if the effects were purely additive (i.e., if the curves were fully parallel), and also somewhat higher ratings of the happy images than would be expected under pure additivity. However, the results revealed no significant difference between happy and ambiguous musical excerpts on emotionality ratings of ambiguous visual stimuli (t(24) = 1.95, p = 0.06), though this p-value approached significance. In the visual rating condition, there also appeared to be a ceiling effect of happy versus ambiguous musical excerpts on the emotionality ratings of happy images; there was no significant difference between the happy and ambiguous audio (t(24) = 0.13, p = 0.90).
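The pairwise comparisons reported above are paired t-tests with a Bonferroni-adjusted criterion. Using only the standard library, the core computation looks roughly like this (illustrative only; the original analysis software is not specified in the text):

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """Paired t statistic: mean of the per-participant differences
    divided by the standard error of those differences (df = n - 1)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

def bonferroni(p, n_comparisons):
    """Bonferroni correction: scale the p-value by the number of
    comparisons performed, capping the result at 1."""
    return min(p * n_comparisons, 1.0)
```

The correction simply trades the per-test alpha for a family-wise one, which is why marginal uncorrected p-values (like the 0.06 reported above) become harder to interpret as the number of comparisons grows.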


Emotionality ratings of stimuli in both the auditory (musical) and visual (image) conditions primarily depended on the emotionality of stimuli in the attended modality. That is, participants in the auditory rating condition tended to make their ratings based largely on the emotion of the attended auditory modality (i.e., the musical excerpts), and participants in the visual rating condition tended to make their ratings based largely on the emotion of the attended visual modality (i.e., the images). This was expected, as unambiguous stimuli from both modalities produced high ratings for their respective emotionality. Had this effect not been present, it might have indicated a discrepancy between the original stimulus ratings and their assignment to emotionality categories in the context of this experiment. Also consistent with our predictions, the perception of emotionality in stimuli of both the auditory and visual modalities can be influenced, though to a much lesser extent, by stimuli in the other modality. More specifically, the emotionality ratings that participants assigned to attended stimuli in the auditory rating condition were influenced by



stimuli from the unattended visual modality. Similarly, the emotionality ratings that participants assigned to attended stimuli in the visual rating condition were influenced by stimuli in the unattended auditory modality. However, this finding emerged only when comparing the effects of happy versus sad unattended stimuli on attended stimulus ratings. Our findings replicate previous research in the sense that an auditory-to-visual direction of influence on emotionality ratings of visual information has already been shown. However, our findings also extend previous research by showing that this effect can occur with visual stimuli containing general types of information, such as scenes, nature, and animals. Previous studies examining visual emotionality have generally used more limited types of stimuli, as the majority of research in this direction has focused on film and how it is influenced by music, where human characters serve as foundational elements in the construction of plot lines. However, human characters are not the only elements in film, or in life in general, and it is important to investigate whether the perceived emotionality of other visual components in film and images can also be influenced by stimuli from another modality. By using images that were not confined to a single form (e.g., human faces), we feel that we were able to more accurately represent influences that may occur in everyday life. Our cross-modal finding also expands the more limited body of research showing a visual-to-auditory direction of influence. While previous studies have shown influences from visual stimuli on auditory stimuli, few have focused on the perceived emotionality of the auditory stimuli themselves.
A single study22 did find evidence for bidirectional cross-modal influences on stimulus emotionality ratings, but it used images of faces and single-sentence vocal recordings as visual and auditory stimuli, respectively. It remains a unique finding, as research in the visual-to-auditory direction related to stimulus emotionality has been extremely limited. However, the use of single-sentence audio recordings seems insufficient to represent visual-to-auditory effects in general. It is therefore important to study how other types of visual information can shift the perception of emotionality in more general types of auditory information, such as music. We aimed to determine whether such bidirectional cross-modal influences on emotionality ratings generalize to musical and pictorial stimuli, and the present results establish such effects. An interesting effect was noted across rating conditions. It appears that the emotionality ratings of ambiguous auditory stimuli in the auditory rating condition were more heavily influenced by sad images than by ambiguous images. In the visual rating condition, the ratings that participants assigned to ambiguous visual stimuli seemed to be altered to a greater extent by happy musical excerpts, but



not nearly as much when presented with ambiguous or sad musical excerpts. While neither of these pairwise comparisons revealed a significant result, the p-values were trending toward significance in both cases; a larger sample size would most likely have yielded significant results in both tests. An interesting observation is that bidirectional cross-modal effects were still shown even though the experimental design explicitly asked participants in both rating conditions to ignore stimuli from the unattended modality. This may indicate that when individuals are not putting effort into ignoring these stimuli, as when viewing a film or performance, these effects would be even larger than shown here.


The sample of participants used in this experiment consisted entirely of UC Berkeley undergraduate students recruited through the Psychology department's RPP system. As a result, most of these students were Psychology majors. It is likely that there are characteristics of this sample that are not representative of the general population. In the future, a larger and more representative sample might be collected through an online recruitment process such as Amazon Mechanical Turk. A second limitation of our experiment involves the high degree of similarity among images in the sad emotional category of the visual modality. For instance, many of the images in the sad emotional category portrayed individuals covering their faces with their hands, crying, looking down, or in other poses evocative of sadness. This theme dominated approximately the upper third of the sad images. Such an observation would be unworthy of mention if it were not for the wide variability of image content in the other emotional categories. Upon evaluating difference scores in this category, we found it necessary to search more possible stimuli to find a set that we felt was of a more general nature and suitable for use in this experiment. In future studies, it would be ideal to incorporate images that reflect sadness in more varied representations. However, we suspect that this is not easy to accomplish.


The phenomenon of bidirectional cross-modal influence on emotional responses supports the idea that music and film have a unique effect when paired, an effect that has most likely been understood by filmmakers for decades. Such effects could be useful in many other sub-areas of entertainment, not necessarily limited to the realm of film and visual performances, as implied by prior findings. Given the lack of research on the visual-to-auditory direction of influence, the findings of this study may be of particular importance to musically based forms of entertainment, especially those with simultaneous visual components, such as music videos.

The use and popularity of the music video has increased steadily in recent years. Essentially, such videos are the building blocks for creating more powerful productions of auditory and visual material. Knowledge of bidirectional cross-modal influence could possibly be used in assisting artists to construct film sequences with the intention of changing the ways in which listeners hear their music. Similarly, these findings could be put to use in developing a music visualizer that is far superior to what is currently available in altering perceived and experienced emotion. Knowledge of bidirectional cross-modal influence may not be entirely restricted to forms of entertainment either. While our study focused specifically on the perceived emotionality of auditory and visual stimuli, participants most likely experienced some level of emotional change throughout the course of the experiment. It is possible that the results of this study could help clinicians in incorporating more effective emotional stimuli into various forms of therapy.


We would like to thank the UC Berkeley Psychology department and all of our RPP participants for their time and effort. We thank research assistants Liang Hao and Sai Ting Chu for their help in running participants, and Liang Hao for his additional help in data analysis.

ABOUT THE AUTHOR Harrison James Ramsay Senior Psychology Major Palmer Lab

Completing my senior honors thesis helped to strengthen my interest in scientific research. However, I soon realized that it would be difficult to answer the types of questions that I found myself asking, if I were to continue my studies within the field of Psychology. I then made the necessary decision to reroute my path towards the direction of Cellular and Molecular Neuroscience. I am currently in my third and final year at UC Berkeley, enrolled in additional science courses that will help me to succeed in the Neuroscience Ph.D. programs that I will be applying to next Fall. I am highly interested in studying learning and memory at the synaptic level, and I would like to secure an industry job after I complete my Ph.D. I also plan on starting a business that specializes in brain health supplements. In my free time, I produce hip-hop and electronic music, promoting myself through the persona, “The Neuroscientist”.

References
1. Krumhansl, C. L. (1997). An exploratory study of musical emotions and psychophysiology. Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale, 51(4), 336.
2. Philippot, P. (1993). Inducing and assessing differentiated emotion-feeling states in the laboratory. Cognition & Emotion, 7(2), 171-193.
3. Thayer, J. F., & Levenson, R. W. (1983). Effects of music on psychophysiological responses to a stressful film. Psychomusicology: A Journal of Research in Music Cognition, 3(1), 44.
4. Baumgartner, T., Esslen, M., & Jäncke, L. (2006a). From emotion perception to emotion experience: Emotions evoked by pictures and classical music. International Journal of Psychophysiology, 60(1), 34-43.
5. Baumgartner, T., Lutz, K., Schmidt, C. F., & Jäncke, L. (2006b). The emotional power of music: How music enhances the feeling of affective pictures. Brain Research, 1075(1), 151-164.
6. Boltz, M. G. (2001). Musical soundtracks as a schematic influence on the cognitive processing of filmed events. Music Perception: An Interdisciplinary Journal, 18(4), 427-454.
7. Bullerjahn, C., & Guldenring, M. (1994). An empirical investigation of effects of film music using qualitative content analysis. Psychomusicology, 13, 99-118.
8. Vitouch, O. (2001). When your ear sets the stage: Musical context effects in film perception. Psychology of Music, 29(1), 70-83.
9. Tan, S. L., Spackman, M. P., & Bezdek, M. A. (2007). Viewers' interpretations of film characters' emotions: Effects of presenting film music before or after a character is shown. Music Perception: An Interdisciplinary Journal, 25(2), 135-152.
11. Jeong, J. W., Diwadkar, V. A., Chugani, C. D., Sinsoongsud, P., Muzik, O., Behen, M. E., ... & Chugani, D. C. (2011). Congruence of happy and sad emotion in music and faces modifies cortical audiovisual activation. NeuroImage, 54(4), 2973-2982.
12. Logeswaran, N., & Bhattacharya, J. (2009). Crossmodal transfer of emotion by music. Neuroscience Letters, 455(2), 129-133.
13. Geringer, J. M., Cassidy, J. W., & Byo, J. L. (1996). Effects of music with video on responses of nonmusic majors: An exploratory study. Journal of Research in Music Education, 44(3), 240-251.
14. Boltz, M. G. (2004). The cognitive processing of film and musical soundtracks. Memory & Cognition, 32(7), 1194-1205.
15. Schutz, M., & Lipscomb, S. (2007). Hearing gestures, seeing music: Vision influences perceived tone duration. Perception, 36(6), 888-897.
16. Saldaña, H. M., & Rosenblum, L. D. (1993). Visual influences on auditory pluck and bow judgments. Perception & Psychophysics, 54(3), 406-416.
17. Boltz, M. G., Ebendorf, B., & Field, B. (2009). Audiovisual interactions: The impact of visual information on music perception and memory. Music Perception: An Interdisciplinary Journal, 27(1), 43-59.
22. De Gelder, B., & Vroomen, J. (2000). The perception of emotions by ear and by eye. Cognition & Emotion, 14(3), 289-311.
23. Eerola, T., & Vuoskoski, J. K. (2011). A comparison of the discrete and dimensional models of emotion in music. Psychology of Music, 39(1), 18-49.
24. Peng, K. C., Chen, T., Sadovnik, A., & Gallagher, A. (n.d.). A mixed bag of emotions: Model, predict, and transfer emotion distributions.
