JOURNYS Issue 5.2


Journal of Youths in Science


Carbon Nanotubes: The Gateway to Nanotechnology

Rebuilding the Bruised Brain

Augmented Reality: The Harbinger of Sixth Sense

The Quest for Immortality (art by Haiwa Wu)


The Journal of Youths in Science (JOURNYS) is the new name of the student-run publication Falconium. It is a burgeoning community of students worldwide, connected through the writing, editing, design, and distribution of a journal that demonstrates the passion and innovation within each one of us.

Torrey Pines High School, San Diego CA
Mt. Carmel High School, San Diego CA
Scripps Ranch High School, San Diego CA
Westview High School, San Diego CA
Del Norte High School, San Diego CA
Cathedral Catholic High School, San Diego CA
Beverly Hills High School, Beverly Hills CA
Alhambra High School, Alhambra CA
Walnut High School, Walnut CA
Lynbrook High School, San Jose CA
Palo Alto High School, Palo Alto CA
Mills High School, Millbrae CA
Lakeside High School, Evans GA
Blue Valley Northwest, Overland Park KS
Olathe East High School, Olathe KS
Wootton High School, Rockville MD
West Chester East High School, West Chester PA
Delhi Public School, New Delhi, India


All submissions are accepted at
Articles should satisfy one of the following categories:

Review: A review is a balanced, informative analysis of a current issue in science that also incorporates the author's insights. It discusses research, concepts, media, policy, or events of a scientific nature. Word count: 750-2000

Original research: This is a documentation of an experiment or survey that you did yourself. You are encouraged to bring in relevant outside knowledge as long as you clearly state your sources. Word count: 1000-2500

Op-Ed: An op-ed is a persuasive article or a statement of opinion. All op-ed articles make one or more claims and support them with evidence. Word count: 750-1500

DIY: A DIY piece introduces a scientific project or procedure that readers can conduct themselves. It should contain clear, thorough instructions accompanied by diagrams and pictures if necessary. Word count: 500-1000

For more information about our submission guidelines, please see




Contact us if you are interested in becoming a new member or starting a chapter, or if you have any questions or comments.
Website:
Email:
Mailing: Torrey Pines High School, Journal of Youths in Science, Attn: Brinn Belyea, 3710 Del Mar Heights Road, San Diego, CA 92130


SPRING 2013 Volume 5 Issue 2

CHEMISTRY
4 Carbon Nanotubes | AUSTIN SU
6 The Workings of Methylmercury | MARIA GINZBURG
8 Plastics: A Blessing and a Curse | NILAY SHAH
10 High-Temperature Superconductors | LILIA TANG


BIOLOGY
The Quest for Immortality | VARUN BHAVE 11
Neuroplasticity: I Can See With My Tongue | HARSHITA NADIMPALLI 13
Rebuilding the Bruised Brain | MAXINDER S. KANWAL 14
Advances in Personalized Medicine | COLLIN DILLINGHAM 16
Controlled Release Using an Oral Drug Delivery System | MATTIE MOUTON-JOHNSTON, DIANE C. FORBES 18


Imagining Numbers and Space: Synesthesia | LILIA TANG 21

PHYSICS 23
The Mathematics of Drafting | FABIAN BOEMER 24
High Speed Rail | ERIC CHEN 24


MATHEMATICS & COMPUTER SCIENCE
Applications of Fourier Series & Transforms | PETER MANOHAR 26
Surgical Applications of a MATLAB-based Electroencephalography Program | VEDANT SINGH, RUJUTA PATIL 28
Augmented Reality | ANJANA SRINIVAS 31



Carbon Nanotubes: The Gateway to Nanotechnology?
by Austin Su, edited by Johnathan Xia

The entire concept of nanotechnology seems far-fetched. From science fiction to pop culture, you probably recognize nanotechnology as fleets of microscopic machines zooming in to fix—or exacerbate—some aspect of humanity. In actuality, nanotechnology is just a set of tools and devices that allows us to manipulate things on a scale of less than 100 nanometers. In this strange nano-realm, regular physics goes right out the window; gravity becomes irrelevant and quantum physics kicks in, leading to new types of materials with unexpected and sometimes revolutionary properties. These unexpected properties, however, may also generate unforeseen difficulties in developing this new technology. We have barely scratched the surface of what nanotechnology can accomplish; however, a certain type of molecule, the carbon nanotube, has the potential to advance nanotechnology dramatically. Through their unique properties and myriad uses, carbon nanotubes are poised to become the building blocks of the technology of tomorrow.

Nanotubes have a distinctive structure. They are formed from a graphite-like material called graphene, thin sheets of carbon atoms bonded together with alternating single and double bonds, rolled at specific angles. The rolling angle and radius of the tubes control their electronic and magnetic properties [1]. There are three main types of tubes. The first, called zigzag nanotubes, has carbon atoms bonded in a zigzag pattern. The blue line shown below traces the zigzag pattern of a string of carbon atoms running the circumference of the nanotube [2]:

The second type, armchair nanotubes, forms when the string of carbon atoms running the circumference of the nanotube forms lines that trace out a pattern resembling the front view of an armchair, as outlined with the red line in the image below. These tubes are the only type that behaves as a metal rather than a semiconductor [2].

The third type, chiral nanotubes, twists along the cylinder axis, like a screw. The image below highlights hexagons of carbon atoms that form an adjacent pattern and twist along the nanotube axis in a right-handed screw [2].
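The three geometries can be captured with the standard chiral-index notation (n, m), which the article does not spell out but which underlies the classification above: zigzag tubes have m = 0, armchair tubes have n = m, and everything else is chiral. A minimal Python sketch, using the textbook tight-binding rule for metallic behavior (the rule itself is not from this article):

```python
def nanotube_type(n, m):
    """Classify a carbon nanotube by its chiral indices (n, m)."""
    if m == 0:
        geometry = "zigzag"
    elif n == m:
        geometry = "armchair"
    else:
        geometry = "chiral"
    # Tight-binding rule: tubes with (n - m) divisible by 3 conduct like metals.
    # Armchair tubes (n == m) always satisfy this, matching the text above.
    behavior = "metallic" if (n - m) % 3 == 0 else "semiconducting"
    return geometry, behavior

print(nanotube_type(10, 10))  # ('armchair', 'metallic')
print(nanotube_type(10, 0))   # ('zigzag', 'semiconducting')
```

Note that the rule is slightly more permissive than the article's simplification: some zigzag and chiral tubes with (n − m) divisible by 3 are also metallic or quasi-metallic.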


reviewed by Dr. Michael J. Sailor

Carbon nanotubes were discovered back in the 1950s, but there is much left to learn about their myriad properties and practical applications. Firstly, nanotubes are extremely strong, due to the carbon-carbon double bonds that stitch them together. Skinny tubes can even be slipped into fatter tubes, creating extremely strong double-walled nanotubes [2]. These complex nanostructures, less than 1/50,000th the width of a human hair, can have a strength one hundred times greater than that of steel [3], leading to applications in automobile, bridge, and aircraft structures, where high strength-to-weight ratios are critically needed. Longer-term prospects for these materials include heavy-duty body armor, bendable computers and televisions, and even a space elevator [3]. However, there remain major drawbacks to using nanotubes as structural components, one being price: a single gram of carbon nanotubes costs over one hundred dollars [3]. Until methods are developed to manufacture longer carbon nanotubes more cheaply, structural applications will only be practical in a very limited set of systems.

The unique electrical properties of carbon nanotubes hold promise for many interesting electronics applications. When nanotubes are configured in certain ways, they can switch between conducting and semiconducting behavior [2]. A computer chip's processing power depends on the number of transistors it has, and transistors made of carbon nanotubes can be packed over one hundred times more densely than conventional silicon transistors [3], offering a wealth of potential for the computer industry. The use of carbon nanotubes in current technology is limited by the Schottky barrier, which limits electron flow between a conductor (i.e., a metal) and a semiconductor [2]. However, recent research has shown that the type of metal used in conjunction with carbon nanotubes greatly affects contact conductivity [2]. Another limiting factor resides in manufacturing: no one has yet figured out how to place nanotubes in specific orientations onto a high-density device structure. If this problem can be solved, carbon nanotubes will become a keystone of the computer industry.

KRISTINA RHIM / GRAPHIC

Carbon nanotubes also have implications for the fields of energy harvesting and energy storage. Today, most photovoltaic solar panels use the semiconductor silicon to convert the sun's energy into electricity. These panels are currently too expensive to be competitive with fossil fuels. Ongoing research is trying to replace silicon in photovoltaic cells with carbon nanotubes, with the hope that the cost of the devices can be reduced significantly [4]. Once we have all this energy, though, how are we going to store it so we can use it when the sun isn't out? The answer may also lie in carbon nanotubes, which make very good capacitors. A capacitor is a device that stores and releases energy repeatedly, much like a rechargeable battery. Capacitors can be charged and discharged millions of times, whereas rechargeable batteries degrade after only a few thousand cycles. This greater cycle life, along with faster charging, makes capacitors an attractive alternative for energy storage. However, they have two serious drawbacks. First, they lose their charge quickly, even when not in use. Second, for a given amount of energy, they are not as lightweight as rechargeable batteries. A capacitor consists of dual layers of conducting materials with an insulator placed between them [5]. The conductors are typically metal foils, and the storage capacity of a capacitor is directly related to the amount of electric charge that can be stored on the surface of the metal foil layers. Because carbon nanotubes are so small, as a whole they have relatively large surface areas. When incorporated into metal foils, the resulting increased surface area yields a capacitor with a much greater charge storage capacity [5], which translates to a lighter device. More importantly, research has shown that a nanotube capacitor loses charge at a significantly slower rate than a conventional capacitor, and can be charged more quickly than a rechargeable battery.
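The surface-area argument can be made concrete with the parallel-plate capacitance formula C = εA/d: capacitance, and hence stored charge at a given voltage, grows linearly with electrode area. In this sketch, the 100x area multiplier for a nanotube coating is purely illustrative, not a figure from the article:

```python
EPS0 = 8.854e-12  # vacuum permittivity, farads per meter

def plate_capacitance(area_m2, gap_m, rel_permittivity=1.0):
    # Parallel-plate model: C = k * eps0 * A / d,
    # so capacitance scales linearly with electrode surface area.
    return rel_permittivity * EPS0 * area_m2 / gap_m

plain_foil = plate_capacitance(1.0, 1e-6)
# Hypothetical nanotube coating giving ~100x the effective surface area:
nanotube_foil = plate_capacitance(100.0, 1e-6)
print(nanotube_foil / plain_foil)  # -> 100.0: storage grows with area
```

The same geometric reasoning is why roughened or porous electrodes are a standard trick in real supercapacitor design.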
These factors have generated a great deal of hope that carbon nanotube "supercapacitors" will someday replace batteries, leading to instantly rechargeable electronics, electric vehicles with greater range and performance, and more reliable renewable energy systems [3].

How about the nanobot, popularly mentioned in sci-fi novels and movies? These carbon-based microscopic robots could perform all sorts of manufacturing jobs from the atomic level upward, creating high-quality goods at a lower price. They could even be used to perform complicated surgical procedures at the microscopic level. Carbon nanotubes have a role in making nanobots a reality, as they can be used to create motors. Researchers have already made nanotube "muscles" that rely on the nanotubes' unique electrical properties [6]. Picture a twisted rope of nanotubes. When this yarn is soaked in an electrolyte (any solution with ions in it—for example, salt water) and attached to a battery, the individual tubes expand and the yarn spins, producing great torque; this makeshift motor is able to spin objects two thousand times heavier than itself at speeds of almost six hundred rotations per second [6]. Another motor formed by nanotubes is much simpler, consisting of a gold rotor mounted on a nanotube shaft; because the nanotube is nearly frictionless, the rotor can potentially reach the frequency of microwaves [7]. This motor has a multitude of applications, from stirring solutions to acting as an oscillator for cell phones [7]. These motors prove that carbon nanotubes are not simply chemical curiosities sitting in test tubes; they can be assembled into useful structures beneficial to the technology of tomorrow.

Nanotechnology has always seemed light-years away; however, even with the technological barriers of today, carbon nanotubes have much potential. Who knows, in the foreseeable future carbon nanotubes might truly permeate every facet of the human race, from the food we eat to the energy we use. With carbon nanotubes, the possibilities are truly endless.

REFERENCES
1. Kibis, Oleg V. Electron Properties of Chiral Carbon Nanotubes. International Max Planck Research School. Max Planck Society, n.d. Web.
2. Yildrim, Tanner. "Interlinking, Band Gap Engineering, Tunable Adsorption and Functionalization of Carbon Nanotubes." N.p., n.d. Web. 17 Dec. 2012.
3. Perlman, Ben. "10 Uses for Carbon Nanotubes." Discovery Channel. Discovery Communications, n.d. Web. 17 Dec. 2012.
4. Klinger, Colin. Carbon Nanotube Solar Cells. PLOS ONE. N.p., n.d. Web. 17 Dec. 2012.
5. Stauffer, Nancy. "Saying Goodbye to Batteries." MIT Energy Research Council: Research Spotlight. Ford-MIT Alliance, n.d. Web. 17 Dec. 2012.
6. Spinks, Geoff. "Show Us Your (Carbon Nanotube Artificial) Muscles!" The Conversation. N.p., 14 Oct. 2011. Web. 17 Dec. 2012.
7. Sanders, Robert. "Physicists Build World's Smallest Motor Using Nanotubes and Etched Silicon." UC Berkeley News. UC Berkeley, 23 July 2003. Web. 17 Dec. 2012.


The Workings of Methylmercury


By Maria Ginzburg | Edited by Ahmad Abbasi | Reviewed by Dr. Kathleen Matthews & Dr. Saswati Hazra

The ancient Chinese believed that mercury could heal various maladies and grant eternal life. China's first emperor, Qin Shi Huang, was killed after ingesting mercury pills thought to guarantee immortality, and was then buried surrounded by a moat of liquid mercury along with his famous terracotta warriors [1]. During the 15th to 18th centuries, men and women plastered themselves, whitened themselves, and, unfortunately, hurt themselves using various poisons such as arsenic, lead, and, of course, mercury [2]. Nowadays, mercury is still an important topic for many — controversies now circle around mercury in anything from dental amalgams and cosmetics to tap water and fish [3].

Today we know what previous generations did not — mercury is a toxic metal. Decades of research have proven that almost all known forms of mercury are dangerous neurotoxins, capable of causing acute and even chronic poisoning [4]. Mercury poisoning primarily affects the kidneys, gastrointestinal tract, liver, and central nervous system, and the toxin is able to easily cross the blood-brain barrier [4, 5]. It is also known to have caused respiratory, immune, and developmental complications in humans and can stop antioxidative processes in the body [4, 6]. For these reasons, mercury has fallen out of favor. Now, it is quite customary to avoid contact with mercury at all costs; many concerned parents and health-conscious individuals make a habit of avoiding the purchase or use of any item that contains mercury. However, what many individuals do not know is why mercury should be avoided. We are aware of the ill effects that might be wrought, but why is it that such a simple substance is so dangerously toxic? How is it transported throughout the body? Why doesn't it leave the body easily? Here is a basic overview:

Unfortunately, the human body cannot significantly impede mercury's transport due to loopholes in its physiology and chemical construction. Mercury has a high affinity to sulfur compounds, which are found in most human organs and proteins, so it can be transported very easily in the body. Mercury simply bonds with these compounds and thus "jumps" from protein to protein and from organ to organ [4, 6].

Most research done on mercury's effect on the human body has concerned the compound methylmercury [4]. Methylmercury is a rather toxic mercury compound found in the environment, known for its neurodevelopmental toxicity in both animals and humans [9]. Because methylmercury is an organic compound, its absorption rate and long-term retention are higher than those of inorganic or elemental mercury, accounting for its ability to cause brain and liver damage upon introduction into the bloodstream [7, 8]. Since the primary source of methylmercury in humans is the consumption of seafood, the FDA and EPA advise pregnant and nursing women, as well as small children, to limit their consumption of fish [10].

Understanding the transport of methylmercury requires exploring the role of glutathione in the process. Glutathione is the primary intracellular antioxidant designed to protect cells from damage, but methylmercury has an affinity to both glutathione and some of its components, which include cysteine residues that contain thiols [8]. Upon ingestion of contaminated food, the body is introduced to methylmercury in the form of the toxin bonded to a thiol, usually the amino acid cysteine [5]. The methylmercury is then absorbed into the bloodstream, which distributes approximately 95% of the ingested amount to the body's tissues, leaving 5% of the toxin to circulate in the blood [4]. According to research, that 95% of the methylmercury is then deposited in the brain and hair. Thankfully, hair is a natural excretion route, so the toxin excreted through the hair is not a danger [11]. Overall, the mercury concentration of the hair on the scalp ends up being about 50 times that of the brain, which itself ends up with a methylmercury concentration of about five times that of the blood [4]. However, methylmercury does not travel only to the brain and hair or remain in the bloodstream; the blood transports methylmercury to the liver as well, where a new path begins.

Inside the liver, methylmercury bonds with reduced glutathione to form a methylmercury-glutathione complex. That complex is then transported out of the liver into the intestines [4]. There, extracellular enzymes break the glutathione down into several


amino acids, including cysteine, which houses the active site for the methylmercury bond. The methylmercury-glutathione complex is now a methylmercury-cysteine complex [6]. This new complex is then mistakenly recognized by the body as the amino acid methionine and remains undetected. The gallbladder transports the complex back into the blood, and the complex cycles back to the tissues, mostly entering the neural complex and then the brain [4]. The processes of exiting through the glutathione pathway and then entering cells as a cysteine complex explain methylmercury's continued mobility in the body. While most methylmercury stays in the body, some excretion of the compound occurs through the feces, albeit in a minimal amount [4, 6].

The substantial health risk of exposure to methylmercury begs the question: is it possible to increase excretion of this toxic compound? A partial solution is provided by chelation, a process by which certain chemical agents remove heavy metals from the body, allowing mercury to be drawn out of the body in the same way that it is drawn in [12]. Recall how mercury can travel through the body using the sulfur contained in the body's proteins and organs. In the same way, sulfurous chelating agents such as dimercaptosuccinic acid (DMSA) can form compounds with mercury that can be excreted by the body [13]. DMSA is currently one of the most common chelating agents and is the US Standard of Care for the treatment of various heavy metal poisonings. Though some alternative practitioners claim chelation is effective in treating various conditions like heart disease and autism, these uses of the technique are not widely recognized by the scientific community; chelation is currently only applicable in the field of heavy metal toxicity [14]. Methylmercury is highly toxic and unfortunately much too accessible in today's modern world.
Even in an industrialized society like that of the United States, methylmercury poisoning is still a possibility; the threat is even worse in less industrialized societies. However, continued research on such toxins ensures that incidents like the 1951 Minamata poisoning in Japan and the 1971 Iraq poison grain disaster will not happen again. Research, awareness, and action are the only things that will eradicate future mercury poisoning.
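The distribution figures cited earlier (roughly 95% of a dose leaving the blood for tissues, 5% circulating; hair about 50 times the brain concentration; brain about 5 times the blood concentration) chain together multiplicatively. A quick sketch, with an arbitrary illustrative dose:

```python
def methylmercury_partition(ingested_ug):
    """Split an ingested dose per the article: ~95% to tissues, ~5% in blood."""
    return {"tissues": 0.95 * ingested_ug, "blood": 0.05 * ingested_ug}

print(methylmercury_partition(100.0))

# Relative concentrations from the article (blood normalized to 1):
blood = 1.0
brain = 5 * blood   # brain ~ 5x blood concentration
hair = 50 * brain   # scalp hair ~ 50x brain concentration
print(hair)         # -> 250.0, i.e. hair ~ 250x blood
```

This ratio is why hair sampling is such a convenient, non-invasive proxy for mercury exposure.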





REFERENCES
[1] Moscowitz, C. "The Secret Tomb of China's 1st Emperor: Will We Ever See Inside?" (2012).
[2] Mapes, D. "Suffering for Beauty Has Ancient Roots." (2008).
[3] "Mercury in the Environment." (2000).
[4] Clarkson, T. W., & Magos, L. The toxicology of mercury and its chemical compounds. Critical Reviews in Toxicology 36, doi:10.1080/10408440600845619 (2006).
[5] Schaefer, J. K., & Morel, F. M. M. High methylation rates of mercury bound to cysteine by Geobacter sulfurreducens. Nature Geoscience 2, http://www.nature.com/ngeo/journal/v2/n2/full/ngeo412.html (2009).
[6] Nordberg, G. F., Fowler, B. A., Nordberg, M., & Friberg, L. T. Handbook on the Toxicology of Metals. (Elsevier Inc., 2007).
[7] Kershaw, T. G., Clarkson, T. W., & Dhahir, P. H. The relationship between blood levels and dose of methylmercury in man. Arch. Environ. Health 35 (1980).
[8] U.S. Environmental Protection Agency. "Mercury."
[9] Carvalho, M. C., Nazari, E. M., Farina, M., & Muller, Y. M. R. Behavioral, morphological, and biochemical changes after in ovo exposure to methylmercury in chicks. Toxicological Sciences 106 (2008).
[10] "'Seeing' Mercury Methylation in Progress." (2009).
[11] "Excretion."
[12] Flora, S. J. S., & Pachauri, V. Chelation in Metal Intoxication. Int. J. Environ. Res. Public Health 7 (2007).
[13] Miller, A. L. Dimercaptosuccinic acid (DMSA), a non-toxic, water-soluble treatment for heavy metal toxicity. Altern. Med. Rev. 3, pubmed/9630737 (1998).
[14] Rasnake, Jarrod. "Chelation Therapy."


Plastics: A Blessing and a Curse
By Nilay Shah | Edited by Kenny Xu | Reviewed by Dr. Hari Khatuya, Dr. Simpson Joseph, Dr. Gang Chen, Dr. Haim Weizman

Boeing's new 787 Dreamliner… Nylon clothing… Quick-dry paint… Bubble gum… [1] What do these seemingly incongruous items have in common? Plastic! All are composed of plastic polymers. Plastic's popularity is due to its strength, durability, resistance to corrosion, and lightweight character. That's not all; it can imitate and replace metal, wood, glass, china, stone, cloth, rubber, jewels, glue, cardboard, varnish, and leather. Plastics are found everywhere, from mundane objects such as clothing, car parts, electrical insulation, and heating insulation to more exclusive articles like the combat helmets used in the military. Plastic has even become instrumental in space exploration [2].

Let's start with the basics. Plastics are made of polymers, large molecules that can consist of hundreds of thousands of atoms, distinguishing plastics from other materials. Monomers, single molecules, join together to construct a polymer chain in a process known as polymerization. Then, the chains connect to form networks — chains linked to each other at various spots — in a complex, mesh-like structure [3].

Plastic has a very long history, reaching all seven continents of the world and creating multibillion-dollar corporations. Hundreds of years before the first synthetic plastics were developed, natural substances such as tree saps and animal horns were used as plastic materials. In 1869, John Wesley Hyatt revolutionized the world with the invention of celluloid, the first synthetic plastic. He stumbled upon celluloid while searching for an inexpensive substitute for the ivory in billiard balls. It consisted of cellulose nitrate "plasticized," or softened, by camphor. The versatility of this new creation led to its use not only in billiard balls but also in eyeglass frames, combs, shirt collars, buttons, dentures, and photographic film. For forty years, celluloid remained the only prominent invention in the plastic industry. However, improving upon recently developed techniques in the field, American chemist Leo Hendrik Baekeland developed phenol-formaldehyde, dubbing it Bakelite, in 1909. It was the first plastic to be made entirely of synthetic ingredients. Baekeland's invention "marked the true beginning of the plastic industry" known today [2].
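The monomer-to-polymer picture described above translates directly into arithmetic: for a simple addition polymer, chain mass is roughly the repeat-unit mass times the degree of polymerization. A sketch using ethylene (the monomer molar mass is a textbook value; the chain length is an arbitrary example, not a figure from the article):

```python
ETHYLENE_MW = 28.05  # g/mol for CH2=CH2, the polyethylene repeat unit

def polymer_mass(repeat_unit_mw, degree_of_polymerization):
    # Addition polymerization: chain mass ~ repeat-unit mass x number of units
    # (end groups contribute negligibly for long chains).
    return repeat_unit_mw * degree_of_polymerization

# Even a modest 10,000-unit polyethylene chain is a huge molecule:
print(polymer_mass(ETHYLENE_MW, 10_000))  # roughly 2.8e5 g/mol
```

This "hundreds of thousands of atoms" scale is exactly what distinguishes polymers from ordinary small molecules.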

LINH LUONG/GRAPHIC

The word "plastic" comes from the Greek word "plastikos," meaning "able to be molded" — a suitable derivation, since plastics' defining characteristic is that they "can be molded into desired forms, drawn into fibers (threads), stretched, and/or bent" (Goodstein). Plastics are made from petroleum (crude oil and natural gas) and can be divided into two categories: thermosets and thermoplastics. Thermosets initially soften when heated, but if heat is applied for an extended period, they set, or harden, and become infusible. Once set, these plastics can withstand temperatures of up to 500° F (260° C) before singeing. This is due to cross-linking, in which the molecules link together between chains. On the other hand, thermoplastics harden only when they are cooled and do not undergo cross-linking. As a result, they will soften or even melt at temperatures around 200° F (93° C). Most types of plastics also contain additives, which are included to attain a certain characteristic in the final product; these additives can make plastics more malleable or flame retardant [2].

Now that we have discussed the popularity of plastics, I would like you to imagine this: a wandering albatross soars over the calm ocean waves, oblivious to the danger that lurks beneath. After many miles, she spots a squid and dives down into the water. Snapping it up, she begins the long journey back to her nest, little knowing that the "food" she carries in her mouth will lead to the death of her chick. This scene is one of many that take place due to the twenty billion pounds of plastic deposited in our oceans annually, resulting in the death of over 1,000,000 seabirds and 100,000 marine animals every year. However, this only affects animals in the ocean, right? Why should humans care? In reality, this negatively affects the entire global ecosystem due to biomagnification. The dilemma arises at the bottom of the food chain. Zooplankton have been known to consume microscopic fragments of plastics, which can leach harmful chemicals like styrene trimer and bisphenol A into their bodies. Moreover, petroleum-based plastics

concentrate other hydrophobic chemicals like PCBs and DDT, reaching levels up to one million times higher than in the surrounding seawater, where such chemicals are normally diluted. As these chemicals travel up the food chain, they become increasingly concentrated, because animals' digestive systems are not capable of breaking them down (Figure 1). Consequently, this affects humans, who are at the top of the food chain. Even in light of these problems, plastics are used so prevalently because petroleum-based plastics have many desirable characteristics. Firstly, they are recyclable — but this comes at a high cost. Plastics can also be burned, with the possibility of harnessing the energy to generate electricity; however, this releases toxic fumes into the environment. Durability is another two-sided coin: while plastics can handle the wear and tear of daily use and will not disintegrate under various weather conditions, once discarded they will stay in the environment for hundreds to thousands of years; they make up 20% of the volume in today's landfills! Lastly, oil prices are constantly on the rise, and this translates to higher manufacturing costs for petroleum-based plastics. This problem seems too vast to solve. Plastics are saturated in our lives; therefore, eradicating them is not a viable solution. For that reason, many researchers have engaged in finding suitable alternatives to petroleum-based plastics: eco-plastics. Most eco-plastics can be categorized as degradable, biodegradable, or compostable. In degradable plastics, the chemical structure changes because of particular environmental conditions, including heat, moisture, and/or UV exposure. Large scraps of plastic are broken into smaller, often microscopic fragments, which can still have a drastic impact on the environment. On the other hand, microorganisms naturally break down the molecular components of biodegradable plastics.
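Biomagnification is multiplicative: each step up the food chain concentrates what the step below accumulated. A toy model of the process (the million-fold plastic figure is from the text; the 10x per trophic level and the starting concentration are assumed, illustrative values):

```python
def biomagnify(base_concentration, factor_per_level, trophic_levels):
    # Each trophic level multiplies the concentration by a roughly constant factor.
    return base_concentration * factor_per_level ** trophic_levels

seawater = 1e-6                    # illustrative PCB concentration, arbitrary units
on_plastic = seawater * 1_000_000  # up to a million-fold on plastic debris (article)
top_predator = biomagnify(on_plastic, 10, 3)  # 3 assumed trophic steps at 10x each
print(top_predator)  # -> 1000.0
```

The compounding is the key point: even a tiny background concentration becomes significant after the plastic-adsorption step and a few trophic transfers.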
However, there isn't a set time for biodegradation, and toxic chemicals may be produced in the process. Lastly, compostable plastics are also biodegradable, further generating carbon dioxide, water, inorganic compounds, and biomass at the same rate as cellulose (paper). In the final product, a compostable plastic should have completely disintegrated without any production of toxic chemicals [4]. The first type of eco-plastic is bio-plastic. Bio-plastics comprise biodegradable plastics and bio-based plastics. Biodegradable plastics, such as petroleum-based oxo-biodegradable plastics, are made from fossil fuels that microorganisms will decompose. They contain a "prodegradant" additive, which can activate and quicken the degradation process. These plastics undergo a reaction with oxygen via daylight, heat, and/or mechanical stress and break down into microscopic fragments. Then, these smaller oxidized molecules undergo biodegradation. Alternatively, bio-based plastics are made from biomass or renewable resources and may or may not be decomposable. Plant-based hydro-biodegradable plastics are both bio-based and biodegradable [5]. Additionally, some companies said to be eco-friendly are devising new types of plastics. ECM Biofilms is developing petroleum-based plastics containing "microbe-attracting pellets" that will result in a faster degradation time in landfills. Novomer is developing a plastic using carbon dioxide and carbon

monoxide (reacted with liquid metal) that would not only eradicate detrimental gases from the air but also be biodegradable [6]. The ubiquitous plastic presents a serious threat to the globe, and scientists have just begun understanding its effects. Developing eco-plastics is a new area of science in which new discoveries are being made every day — one day, the conventional plastic may be no more. Perhaps, after reading this article, you will decide to take the first step and revolutionize your lifestyle, vowing never to use a plastic product again. If not, at least you will have become more conscious of the world around you.

Figure 1 [7]

REFERENCES:
[1] O'Brien, K. "In Praise of Plastic." magazine/articles/2008/09/28/in_praise_of_plastic (2008).
[2] Abdullah, M. G. et al. Plastics. Compton's by Britannica 6, EBchecked/topic/591750/thermosetting-plastic (2009).
[3] Goodstein, M. Plastics and Polymers Science Fair Projects (Enslow Publishers, Inc., Berkeley Heights, 2004).
[4] "Degradable & Biodegradable Bags." degradable_biodegradable_bags.asp (2009).
[5] Tokiwa, Y., Calabia, B. P., Ugwu, C. U., & Aiba, S. Biodegradability of Plastics. Int. J. Mol. Sci. 10, doi:10.3390/ijms10093722 (2009).
[6] Layton, J. "What are eco-plastics?" green-tech/sustainable/eco-plastic.htm (2009).
[7] "E-Learning, E-Tutoring, School Education Support & Online Education, Digital Learning, Smart Learning." (2012).



High-Temperature Superconductors
by Lilia Tang, edited by Kevin Li

reviewed by Dr. Benjamin Grinstein and Dr. Aneesh Manohar

Imagine a computer that never has to be recharged. The computer would never turn off randomly, and no one would ever have to worry about losing work to a low battery. At most, one would only have to plug the computer in once in a while, and it would be ready to run for eternity. Is it possible for an electrical current to flow forever? This is the defining property of superconductors: current flows through them with zero resistance. In 1911, Dutch physicist Heike Kamerlingh Onnes was working with mercury at extremely low temperatures when he discovered that at 4.2 K, all resistance disappeared. The temperature at which the resistance disappears is known as the superconducting or critical temperature, Tc. However, because they operate only at such low temperatures, superconductors do not have many everyday uses [1]. Currently, physicists working with superconductors are trying to develop alloys of metals with higher superconducting temperatures, the highest being 135 K so far under normal pressure.
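The behavior Onnes observed can be stated in a few lines of code: resistance takes its normal value above Tc and is exactly zero below it. A minimal, idealized sketch (the 0.12-ohm normal-state value is an arbitrary placeholder, not a measured figure):

```python
def resistance_ohms(temp_k, tc_k, normal_resistance_ohms):
    """Idealized superconductor: zero resistance strictly below Tc."""
    return 0.0 if temp_k < tc_k else normal_resistance_ohms

print(resistance_ohms(4.0, 4.2, 0.12))    # mercury just below Tc -> 0.0
print(resistance_ohms(300.0, 4.2, 0.12))  # room temperature -> 0.12
```

Real transitions are sharp but not infinitely so; the step function is the textbook idealization of the measured curve.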

KATHERINE LUO / GRAPHIC

But why do superconductors only work at extremely low temperatures? The Bardeen-Cooper-Schrieffer (BCS) theory explains how superconductivity works. Resistance in metals occurs because the electrons passing through them are scattered by imperfections [2]. Electrons passing through the lattice, or the structure of the superconductor, overcome their mutual repulsion to pair up and pass through the material without creating resistance. Quasi-particles called phonons, which are excited elastic organizations of atoms in solids and are also known as lattice vibrations, are necessary for this to work. One may wonder how the electrons can overcome this repulsion. The main reason is that they are part of a lattice of positive atoms. When an electron passes through, the atoms shift slightly towards it, so when another electron comes by, its trajectory changes because of this shift, effectively pulling the two electrons towards each other. This pairing of electrons is called a Cooper pair [3]. Cooper pairs have different properties than single electrons. Single electrons are fermions, which have half-integer spins and must follow the Pauli Exclusion Principle, which implies that two electrons in a single atom cannot have the same quantum numbers [4]. Cooper pairs, on the other hand, are regarded as composite bosons, have integer spins, and are able to condense into the same energy level [3][5]. They have slightly less energy than two free electrons, creating an energy gap. Cooper pair collisions can create slight resistance, but when the thermal energy is lower than the energy gap, the resistance decreases to zero.

However, BCS theory does not explain how high-temperature superconductors work. Generally, at temperatures over about 30 K, the electrons cannot pair up because the entropy (disorder) in the lattice is too high. People still do not completely understand how high-temperature superconductors work; there are many theories, but none of them are entirely correct. Many of the materials that superconduct at high temperatures are not metals but ceramics, and because different ceramic compounds tend to behave differently, there is no universal theory of how they work. However, scientists have developed theories for particular types of high-temperature superconductors [3]. For example, copper oxides are a well-known class of superconductors. Copper is an effective conductor, so why not try a copper oxide, or cuprate, as a superconductor? Yttrium barium copper oxide (YBCO) was the first superconductor to maintain superconductivity above the boiling point of nitrogen (77 K), with a Tc of 90 K, making it a high-temperature superconductor. It has the chemical formula YBa2Cu3O7−x and is created by heating metal carbonates to 1000 K to 1300 K. As seen in Figure 1, YBCO is a crystalline structure with copper oxide planes and chains and with layers of BaCuO3 and YCuO3. The copper oxide planes and chains, however, have empty spots in the lattice where oxygen would sit, allowing for the oxidation of the copper oxide, which enables the compound to superconduct [6]. However, yttrium is not easy to obtain. The main source of yttrium is a type of clay reserve in China, where it is mixed with other heavy rare earth metals. Because it is found impure, expensive processes are required to extract it. Bismuth strontium calcium copper oxide (BSCCO) is more commonly used for electromagnets and motors because it is more accessible and has a higher Tc [7].

Figure 1
Another recently discovered superconductor is magnesium diboride (MgB2), which has a Tc of 39 K. Although this may seem extremely cold compared to superconductors like YBCO, its ingredients are far more readily available than YBCO's. Like YBCO, MgB2 has a crystalline structure: hexagonal layers of boron atoms alternate with layers of magnesium atoms, with each magnesium atom sitting between the centers of the hexagons, as shown in Figure 2. This is quite similar to the simple structure of graphite, shown in Figure 3.

Figure 2

Figure 3
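One practical consequence of these critical temperatures is the choice of coolant: a material whose Tc sits below 77 K needs liquid helium, while one above 77 K can use far cheaper liquid nitrogen. A minimal sketch using the Tc values quoted in this article; the BSCCO value of 110 K is the figure commonly quoted for one BSCCO phase and is an assumption not stated above.

```python
# Which cryogen can hold each superconductor below its Tc?
# Boiling points at normal pressure: liquid helium 4.2 K, liquid nitrogen 77 K.
CRYOGENS = [("liquid helium", 4.2), ("liquid nitrogen", 77.0)]

TC = {"MgB2": 39.0, "YBCO": 90.0, "BSCCO": 110.0}  # kelvin

def cheapest_coolant(tc):
    """Return the warmest (and thus cheapest) cryogen that boils below Tc."""
    usable = [name for name, boiling_point in CRYOGENS if boiling_point < tc]
    return usable[-1] if usable else None

for material, tc in TC.items():
    print(material, "->", cheapest_coolant(tc))
```

This is why crossing the 77 K mark was such a milestone: YBCO and BSCCO can be kept superconducting with inexpensive liquid nitrogen, while MgB2, despite its easy-to-obtain ingredients, still requires helium-range cooling.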

REFERENCES
[1] Whittingham, M. S. "Preparation, Structure, and Properties of a High-Temperature Superconductor." (1995).
[2] Eck, J. "Superconductivity Explained."
[3] Orzel, C. "How Do Superconductors Work." (2010).
[4] Nave, R. "Pauli Exclusion Principle Applications."
[5] Nave, R. "BCS Theory of Superconductivity."
[6] "Yttrium Barium Copper Oxide - YBCO."
[7] "BSCCO." (2012).
[8] Preuss, P. "A Most Unusual Superconductor and How It Works: First-principles calculation explains the strange behavior of magnesium diboride." http://www.lbl.gov/Science-Articles/Archive/MSD-superconductor-Cohen-Louie.html (2002).

The Quest for Immortality
by Varun Bhave, edited by Eric Chen


CHRISTINA BAEK / GRAPHIC

MgB2's superconductivity is not completely explained by BCS theory, but it is strongly related. MgB2 is commonly considered a high-temperature superconductor because it has the highest Tc of any BCS superconductor. BCS theory assumes that the coupling of the lattice responsible for pairing electrons should equal the coupling of a single electron emitting and re-absorbing a phonon [8]. In MgB2, these values differ, so how does it work? The answer lies in the types of sigma and pi bonds in its structure. In some covalent bonds, the electron density is symmetrical about the line connecting the nuclei (the internuclear axis), meaning the internuclear axis passes through the region of the electrons' orbitals; these are called sigma (σ) bonds. When the orbitals are perpendicular to the internuclear axis, the bond is a pi (π) bond. MgB2 and graphite both have strong σ bonds within the planes and weaker π bonds between the planes. However, unlike graphite, MgB2 contains boron, which has fewer valence electrons than carbon. Not all the sigma bonds in the boron layers are filled, so the lattice is weaker and its vibrations are stronger, which results in stronger electron pairing within the planes [8].

Because there are so many different types, high-temperature superconductors remain much of a mystery. We have no universal theory for them because they are generally made up of layers of different atoms, each with its own properties. For example, YBCO's superconductivity depends on the interesting oxidation of its copper oxide layers, but this is relevant only to YBCO's specific structure. The materials used in high-temperature superconductors are also generally hard to obtain, since many require rare earth elements that are difficult to extract and purify.
Even today, critical temperatures are still too low for general use, since high-temperature superconductors must be refrigerated with liquid nitrogen. Hopefully, inexpensive room-temperature superconductors will someday be developed, which would increase the efficiency of many circuits and electrical appliances.

The New York Times reported on May 5, 1933 the death of Li Ching-Yun, a Chinese herbalist and military instructor [1]. Li claimed to have been born in 1736, which would have made him 197 years old; he attributed his longevity to maintaining mental tranquility and to "sit[ting] like a tortoise, walk[ing] sprightly like a pigeon, and sleep[ing] like a dog" [2]. Several years earlier, a professor at Minkuo University had allegedly found records indicating, even more startlingly, that Li was born in 1677 and had received messages of congratulations from the Chinese government on his 150th and 200th birthdays. Many of the oldest men in Li's neighborhood asserted that their grandfathers had known him as children and that he was, even then, a grown man. Numerous other Chinese military and medical references seemingly corroborate Li's existence, career, and longevity. Today, many discount Li's claim as ludicrous; the oldest confirmed human being was Jeanne Calment, a French woman who was 122 when she died in 1997 [3]. However, particularly in industrialized nations, better scientific understanding of the human body and medicine, improved diets, and a higher quality of life have contributed to a dramatic increase in human life expectancy. Indeed, a 2009 study in The Lancet estimated that half of the babies born today in developed nations will live to be 100 years old. The study also analyzed past life expectancy trends; the researchers concluded that life expectancy had been rising since the 1840s and showed no signs of leveling off. In fact, the likelihood of surviving past 80 years old has doubled in both sexes since 1950 [4]. The continuous rise in life expectancy has introduced into scientific discourse the possibility of functionally immortal humans who, absent unnatural death, could survive forever.
This article specifically examines the innovative possibilities of life-extension substances, cryonics, and the more science-fiction-like theories of mind uploading and advanced gene therapy. Apart from basic and well-known life-extension mechanisms, including caloric restriction, specific diets, and certain drug supplements, enzymes like telomerase and the chemical resveratrol have been considered as possible life-extenders. Resveratrol, a substance produced by several plants in response to pathogens, has been shown to significantly retard the aging process in mice [5]. Telomerase, an enzyme that helps maintain protective caps on the ends of chromosomes, is the more promising of the two. Normally, cells divide until they reach the Hayflick limit, at which point division stops. This is because the telomeres at the ends of each cell's DNA shorten with each division until they reach a critical length. With artificially increased amounts of the enzyme, any cell can undergo mitosis indefinitely, preventing the numerous health problems that arise from the death of cells. However, telomerase also has the potential to promote the tumorigenesis at the root of cancer, which is likewise caused by unchecked cell division [6]. Furthermore, growth hormone therapy has been found to improve muscle mass composition, heart health, and bone density without major side effects in animals [7]. While all of these developments have the potential to slow aging, they do not truly ensure biological immortality.

The arguably most developed field in theoretically ensuring immortality is cryonics, the expensive preservation of medically dead humans (either just the brain or the entire cadaver) at extremely low temperatures, based on the premise that healing or revival may be possible later. While current technology cannot resuscitate such individuals, recovery would theoretically be possible with some advanced future technology that would change the current medical definition of death. The largest cryonics institution today, the ALCOR Life Extension Foundation, has administered the procedure on only 115 people to date [8]. Cryonics rests on an accepted assumption: that much of a person's memory, identity, and personality is stored in parts of the brain that do not require continuous brain activity to survive and can be revived after the cessation of neural activity (i.e., legal death) [9].
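The Hayflick mechanism described above works like a simple counter: each division removes a chunk of telomere until the critical length halts mitosis. The sketch below illustrates this; the starting length, loss per division, and critical length are illustrative placeholders, not measured biological values.

```python
# Toy model of telomere shortening and the Hayflick limit.
# All lengths are illustrative placeholder values in base pairs.
START_LENGTH = 10000  # telomere length at the start of the cell line
LOSS_PER_DIV = 100    # base pairs lost with each division
CRITICAL = 4000       # division stops below this length

def divisions_until_senescence(length=START_LENGTH,
                               loss=LOSS_PER_DIV,
                               critical=CRITICAL,
                               telomerase_active=False):
    """Count divisions before the telomere hits the critical length.
    With telomerase active, the caps are rebuilt each cycle and division
    is unbounded (returned here as None to stand in for 'no limit')."""
    if telomerase_active:
        return None  # telomerase restores the lost length; no Hayflick limit
    divisions = 0
    while length - loss >= critical:
        length -= loss
        divisions += 1
    return divisions

print(divisions_until_senescence())  # 60 divisions with these numbers
```

The same sketch also shows the danger the article notes: switching `telomerase_active` on removes the brake entirely, which in a real cell is exactly the unchecked division that underlies cancer.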
Skeptics of cryonics highlight the infeasibility of actually restoring brain activity into a form of meaningful expression in a living human being. Numerous unavoidable factors, like cell damage from whole-body freezing and lack of oxygen between "death" and the cryonics procedure, make revival impossible with current medical knowledge. Proponents of cryonics hope that advanced tissue regeneration and the reversal of oxygen debt, freezing damage, and cryoprotectant toxicity may someday be possible, especially through millions of nanorobots that would restore healthy cell structure and chemistry [10]. More futuristic revivals of cryopreserved brains would incorporate some sort of "mind transfer," whereby the brains would be scanned by a computer and their information integrated with an entirely new body.

There are several even more far-fetched possibilities for immortality, which become increasingly abstract and hypothetical. The first is mind uploading, a hypothetical process of copying and transferring a brain to a non-biological entity, perhaps a computer or, as recently proposed by Russian scientists, a humanoid robot [11]. Such uploading would both "back up" the brain in case the body suffered injury and allow "humans" to exist within a significantly more resilient robotic or virtual-reality form. This would theoretically detach human existence from the limitations of the body, potentially allowing for increased computational capacity and lowering the risks of physically dangerous activities like space travel. However, the copying supercomputers would have to ensure perfect mimicry of the functions of the real brain being transferred. Some research is being done in this field; the brain structures of a fruit fly and a roundworm species have been simulated to some degree [12].
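Projects that simulate brain structures typically build networks of simplified model neurons. As a flavor of what such simulation means at the smallest scale, here is a leaky integrate-and-fire neuron, a standard textbook model; the parameters below are illustrative placeholders and are not taken from any of the projects mentioned above.

```python
# Leaky integrate-and-fire neuron: the membrane voltage leaks toward rest,
# integrates input current, and emits a spike when it crosses threshold.
V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0  # millivolts (illustrative)
TAU, DT = 20.0, 1.0                              # time constant and step, ms

def simulate(input_current, steps):
    """Return the spike times (in steps) for a constant input current,
    with the current expressed in mV per ms for simplicity."""
    v, spikes = V_REST, []
    for t in range(steps):
        # Euler step of dv/dt = (V_rest - v) / tau + I
        v += DT * ((V_REST - v) / TAU + input_current)
        if v >= V_THRESH:
            spikes.append(t)
            v = V_RESET  # fire and reset
    return spikes

print(len(simulate(1.0, 200)))  # number of spikes in 200 ms of drive
```

A whole-brain simulation of the kind imagined for mind uploading would require wiring together billions of such units with realistic connectivity, which is why even the fly and worm results cited above count as only partial progress.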
In addition, IBM and a Swiss university launched the "Blue Brain" initiative in 2005, which aims to simulate parts of mammalian brains; the program has had some success in modeling rat neural pathways [13]. The final theory of immortality discussed here is gene modification, which has already been studied extensively for its application in replacing mutated or deficient genes with normal ones. It has been proposed that ageing could theoretically be almost entirely stopped by preventing the activation of genes that manifest themselves later in life and chemically advance the ageing process. This would ideally "fool" the body into believing itself younger than it actually is [14].

To succeed in the quest for immortality (or something very close to it) would be the crowning achievement of medicine. The science today is rudimentary at best, but developing human biological immortality would help assuage the deeply rooted human desire to avoid dying. However, the dramatic extension of human existence could lead to overpopulation and economic repercussions. Ultimately, philosophers and religious figures will debate the mind-body separation, the ethics of resurrection, and the assumption that death is to be feared. Yet scientists and the public, even 500 years after Spanish explorer Ponce de Leon searched for the "Fountain of Youth," continue to be fascinated by the idea that death can be conquered.

REFERENCES
[1] "Li Ching-Yun Dead; Gave His Age As 197." The New York Times. 6 May 1933. Web.
[2] "Tortoise-Pigeon-Dog." TIME. 15 May 1933. Web.
[3] Whitney, Craig. "Jeanne Calment, World's Elder, Dies at 122." The New York Times. 5 August 1997. Web.
[4] "Half of babies 'will live to 100.'" BBC News. 2 Oct. 2009. Web.
[5] Baur et al. "Resveratrol improves health and survival of mice on a high-calorie diet." Nature, Nov. 2006. Web.
[6] De Magalhães, João Pedro and Olivier Toussaint. Rejuvenation Research. July 2004. pgs. 126-133. Web.
[7] Gustad, Thomas and David Khansari. "Effects of long-term, low-dose growth hormone therapy on immune function and life expectancy of mice." Mechanisms of Ageing and Development (Vol. 57, Issue 1), Jan. 1991. Web.
[8] ALCOR Life Extension Foundation. Web.
[9] Guyton, Arthur. "The Cerebral Cortex and Intellectual Functions of the Brain." Textbook of Medical Physiology (7th ed.). W. B. Saunders Company, 1986. pg. 658.
[10] Freitas, Robert and Ralph Merkle. "A cryopreservation revival scenario using MNT." Cryonics (ALCOR Life Extension Foundation), 2008. Web. http://www.alcor.org/cryonics/cryonics0804.pdf
[11] O'Neil, Lauren. "Human immortality could be possible by 2045, say Russian scientists." CBC News, 31 July 2012. Web.
[12] Erdos, Paul and Ernst Niebur. "Theory of the locomotion of nematodes: Control of the somatic motor neurons by interneurons." Mathematical Biosciences, Nov. 1993. Web.
[13] Herper, Matthew. "IBM Aims To Simulate A Brain." Forbes. 6 June 2005. Web.
[14] Dawkins, Richard. The Selfish Gene. New York: Oxford University Press, 2006. pgs. 41-42.

I Can See with My Tongue
by Harshita Nadimpalli, edited by Emily Sun

There is a multitude of individuals in the world who are born without vision, and countless others suffer terrible accidents that leave them blind. Over time, these people learn to live their lives again as they recover from the trauma and come to depend on their newly heightened senses of smell, hearing, and touch. They learn to read their favorite books using Braille, to identify their dinner from the odors of its various ingredients, and to cross busy intersections using the subtleties of footsteps and the sounds of revving car engines. All of this is possible due to the brain's incredible ability to form new neural connections after being damaged and to develop its own system of reorganization in order to recover [1]. This phenomenon is known as neuroplasticity, and it is thought to be an evolutionary mechanism that compensates for a lost sense by increasing the acuteness of others. In the 1960s, scientists began testing alternate methods that would enable people to become independent of their eyes for visual input [2]. Then a scientist named Paul Bach-y-Rita developed the idea of using the tongue as a surface through which to transmit visual information to the brain. Inspired by a stroke that his father had suffered, Bach-y-Rita constructed a primitive version of the technology he would later develop. He began with a 20-by-20 array of metal rods in the back of an old dentist's chair; people who sat in the chair could see images when the rods, which transmitted electrical impulses, were touched to their backs [3]. This experiment yielded results of great accuracy, proving that the sense of touch could indeed substitute for the function of the eye. Bach-y-Rita then made a switch that would prove groundbreaking: he began to focus on stimulating the touch receptors of the tongue instead of those of the skin.
Although Bach-y-Rita died in 2006, his team of researchers at the University of Wisconsin continues to further the legacy he left behind. The latest equipment they have developed is a small mouthpiece, placed against the tongue, that allows a person to see without their eyes [4].

So how exactly does this system work? The tongue-stimulation device, called the Tongue Display Unit (TDU), has a camera attached to it that functions as the surrogate eye of the device. The system translates the images detected by the camera, which may include colors or movements, into a series of electrical impulses delivered through a small square of 144 electrodes placed against the tongue. These electrical impulses trigger the sensitive touch receptors on the tongue and are then conveyed to the nervous system as neural impulses [4]. These neural impulses are finally perceived as sensory information by the brain and converted to images as a substitute for vision, and the person is able to "see" the figures and pictures the camera is seeing. The whole process works very much like ordinary vision: the tongue, rather than the eye, receives energy in the form of an electrical impulse, and receptors on the tongue, rather than photoreceptors in the eye, transmit the information to the brain. Although many electrical impulses are applied to the tongue, their voltage is far too low to harm a person. Researchers have found that about 50 hours of practice with the TDU are needed for a person to become familiar and comfortable with the device [5]. It is also important to understand why the tongue is a superior receptor to other options, such as the skin on a person's fingertips, for providing an interface for visual input.
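The translation the TDU performs, from a camera frame to a grid of stimulation intensities, can be sketched as simple downsampling. The 12-by-12 grid below matches the 144-electrode square described above, but the mapping itself is a hypothetical illustration, not Wicab's actual signal processing.

```python
# Downsample a grayscale camera frame to a 12x12 grid of stimulation
# levels, one per electrode of a 144-electrode tongue array.
# Hypothetical illustration of the image-to-electrode mapping.
GRID = 12  # 12 x 12 = 144 electrodes

def frame_to_electrodes(frame):
    """frame: 2D list of brightness values (0-255) whose dimensions are
    divisible by GRID. Returns a GRID x GRID list of average brightness
    per cell, standing in for per-electrode stimulation intensity."""
    rows, cols = len(frame), len(frame[0])
    cell_h, cell_w = rows // GRID, cols // GRID
    out = []
    for gr in range(GRID):
        row = []
        for gc in range(GRID):
            cell = [frame[r][c]
                    for r in range(gr * cell_h, (gr + 1) * cell_h)
                    for c in range(gc * cell_w, (gc + 1) * cell_w)]
            row.append(sum(cell) // len(cell))
        out.append(row)
    return out

# A 24x24 frame, bright on the left half and dark on the right:
frame = [[255] * 12 + [0] * 12 for _ in range(24)]
print(frame_to_electrodes(frame)[0])  # six bright cells, then six dark
```

The severe downsampling is the point: 144 "pixels" is a tiny image, which is why users need tens of hours of practice before the brain learns to interpret the patterns as scenes.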
First, the tongue contains an extremely high density of nerve fibers, receptors, and sensors that are incredibly sensitive to touch and far more closely packed than those on the surface of the skin. Second, each individual taste bud has a pore that opens onto the surface of the tongue and enables molecules and ions (from the electrical currents of the TDU, in this case) taken into the mouth to reach the receptor cells inside. Third, the tongue is coated in saliva, a highly conductive fluid that is ideal for carrying electrical impulses, which greatly contributes to the efficiency of the TDU. Furthermore, inside the brain, large parts of the cerebral cortex, which plays a crucial role in sensory input and perception, are devoted to sensory perception from the tongue, so that humans can taste their foods and distinguish edible foods from potentially harmful or spoiled ones [5]. Also, electrical stimulation of the tongue is easier to detect than electrical stimulation of a fingertip; the tongue needs only 3% of the voltage required by the fingertip [6]. As a result, the tongue is an ideal candidate for transmitting visual input to the brain.

The use of the TDU and similar methods is, of course, extremely beneficial to individuals whose vision is impaired, as it allows them to see in a manner they probably never thought possible. However, this technology can also be very useful in many professional fields. For example, Navy SEALs worked with Dr. Bach-y-Rita to create a system based on the TDU that would allow them to see infrared images through their tongues in dark or clouded conditions underwater. Bach-y-Rita also worked with NASA to create sensors that let astronauts feel things outside their space suits while in orbit, and with the air industry to create technology that could alert pilots to other planes or incoming missiles before their eyes register the threat [7]. The TDU has evolved since its conception. The Minnesota-based company Wicab Inc., founded by Bach-y-Rita in 1998, has created a vision device called the BrainPort, which is based on the TDU prototype.
Sale of the BrainPort V100 was approved in the European Union in March 2013, and Wicab has been trying to make the device available to those who cannot afford it. However, the FDA has not yet approved the device, so it is not sold in the United States. The practical functions of this new sensory substitution technology, including its ability to help compensate for the loss of normal eyesight and to aid certain professions, make it a valuable tool in a constantly advancing society.

REFERENCES
[1] Bach-y-Rita, P. & Kercel, S.W. "Sensory substitution and the human-machine interface." Trends in Cognitive Sciences 7, 541-546 (2003).
[2] Bach-y-Rita, P., Danilov, Y., Tyler, M. & Grimm, R. J. "Late human brain plasticity: vestibular substitution with a tongue BrainPort human-machine interface." Plasticidad y Restauración Neurológica 4 (2005).
[3] Bains, S. "Mixed Feelings." Wired 15 (2007).
[4] Weiss, P. "The Seeing Tongue." ScienceNews 160, 140 (2001).
[5] Phillips, J. "The Brain: The Real Secret of Alternate Sensory Technology." ProQuest Discovery Guides (2011).
[6] Bach-y-Rita, P., Kaczmarek, K.A., Tyler, M.E. & Garcia-Lara, J. "Form perception with a 49-point electrotactile stimulus array on the tongue: A technical note." Journal of Rehabilitation Research and Development 35, 427-430 (1998).
[7] Abrams, M. "Can You See With Your Tongue?" Discover Magazine (2003).


REBUILDING the BRUISED BRAIN
Discovery of a Novel Biomaterial in Nanomedicine



group of soldiers clad in camouflaged uniforms prepare to hit Despite costing the U.S. an estimated $76.5 billion per year in direct the road for yet another grueling day in the field. Their mission: track and indirect medical costs [4], treatments for TBIs are largely ineffective down the terrorists who pose a constant threat to the local population. and lead to complications such as secondary injuries via necrosis. A dispatch for the 1st Platoon arrives and the soldiers quickly mount Presently, doctors cannot help a patient recover fully from TBI. Surgeons their equipment, salute the commanding officer, and board the armored can only attempt to reduce further damage by releasing the pressure that vehicle. Moments after they disappear into a cloud of dust, a loud builds up inside the skull through surgery and drainage of excess fluid explosion thunders through the air. As the dust begins to settle, the when the injury is still fresh, as they did for Congresswoman Gabby aftermath of a powerful improvised explosive device (IED) fills the scene. Giffords after she was shot in the head [5]. The most appalling fact about A few human bodies squirm in pain while others lie motionless on the TBI is that, to this day, the Food and Drug Administration has yet to bloody dirt road. One unconscious soldier, with a dreadful injury to his approve a drug that can effectively treat the ailment [3]. head, is rushed to the nearest hospital. The staff in the emergency ward So, what are potential remedies currently available to treat TBI? One knows he will never lead a normal life again; he is the victim of an all adopted therapeutic approach by doctors is to use stem cells to replenish too common traumatic brain injury (TBI). Sadly, every five seconds, the loss of neurons in the brain. Over the past decade, there has been a someone in the world is affected by TBI, an occurrence that exceeds the dramatic surge in stem cell research. 
Stem cells can morph into almost any combined frequency of HIV/AIDS infections, spinal cord injuries, and type of cell in the human body (neurons, blood cells, muscle cells, etc.) multiple sclerosis and breast cancer diagnoses [1]. and provide the first hope for recovery from TBI. Unfortunately, even The brain is one of the most important human organs consisting of though stem cells can replace individual cells perfectly, the brain is not an interconnected cellular network (figure simply “individual cells.” All brain cells 1A). However, injury to this complex structure or “neurons” are intricately connected different from that occurring in other parts with each other via specialized structures of the body. For example, unlike dying skin called “synapses”. When neurons die, the cells that are continuously replaced, damaged multitudes of connections associated brain tissue deteriorates further (figure 1B). with those neurons are also lost. With Typically, the human brain can maintain the loss of connections there is a loss normal function throughout an individual’s of communication; with a loss of life. However, if the cells die prematurely by communication there is a loss of brain external injury, in a process known as necrosis, function. For stem cells to replenish lost they cannot grow back. A month to one year neurons, the connections of the surviving later, this deterioration continues to worsen. neurons need to grow across a vast fluidSoon the site of even mild brain injury expands, filled chasm in the brain and make new leaving a gaping, fluid-filled hole in the head of connections, or synapses. However, they an otherwise normal-looking individual (figure can never fully replace the connections 1C) [2]. belonging to those lost neurons. 
The human brain consists of some 100 Therefore, there is still a need to conduct billion cells called neurons that are interdecade’s worth of research to figure out connected by crisscrossing “highways” on various specifications that go along with CAROLYN CHU/GRAPHIC which electrical signals travel. These signals the use of stem cells to treat TBI. For code for every action and thought we experience. In TBI, neurons die, example, what types of precautions must be taken to prevent the onset often due to edema (swelling of the brain) or ischemia (reduction in of an autoimmune response that kills all of the cells foreign to the body. blood flow to the brain). Permanent brain damage from such an injury A safe and efficacious remedy for TBI continues to elude today’s medical leads to an impairment of numerous vital functions, such as muscle community [6]. control and memory [3]. To get a sense of the damage caused by TBI, After years of research, Jiasong Guo and Ka Kit Leung, at the imagine a vast island, like Manhattan, buzzing with activity to create University of Hong Kong may have crafted a solution to this dire unique products that can be exported to other parts of the world. The problem. Peering into the brain at a molecular scale, they have affected city is analogous to a region of the brain, as both contain units (people change on a macroscopic scale. Through the domain of nanomedicine – or neurons), each responsible for performing a certain function. The a new, promising approach to cure diseases – these two scientists have occurrence of a severe brain injury is the equivalent to the effects of a fabricated a new biomaterial that may present a solution for TBI. Termed magnitude 9 earthquake on the Richter scale. 
At the epicenter and for “self-assembling peptide nanofiber scaffold,” or SAPNS for short, their many miles beyond, buildings crumble, highways tear apart, and bridges newly developed nanobiomaterial assists neurons in recovering after TBI lead to the sea instead of land. Most of the infrastructure on the island is and prevents the formation of a permanent scar in the brain [7]. lost and the people are stranded with no means of communication. The If this approach is successful, a victim suffering from TBI can get an site of brain injury and the surviving cells of a TBI victim suffer a similar injection of SAPNS soon after injury at the locus of brain damage. After dilemma. the injection, nanofibers would spontaneously create the equivalent 12 | JOURNYS | SPRING 2013

of scaffolding onto which glia, “support cells,” can migrate and build the foundation for a permanent bridge. Once bridges are re-created, the scaffolding would disappear, leaving newly formed channels of communication. SAPNS helps surviving neurons get through to the area with damaged tissue and form new connections that adapt to the new microenvironment in the damaged region of the brain. As the surviving brain cells begin to regrow, paralyzed body parts can regain their functionality [8]. SAPNS is not the only nanomedicine that helps in the treatment of lesions; however, it is by far the best to date. The novel biomaterial has five helpful amalgamated properties that are not found in other types of nanomedicine. First, SAPNS has a minimal risk of carrying a biological contaminant present in animal-derived biomaterials, due to its composition of naturally occurring amino acids. Furthermore, SAPNS provides a true three-dimensional structure in which the neurons can grow and migrate to fill the lesion. The use of SAPNS is safe because the body has no autoimmune or tissue inflammatory response to the newly introduced substance. The biomaterial also has immediate hemostatic properties that prevent internal bleeding in the brain. Lastly, an important feature is that SAPNS can be injected into the brain in a liquid form. This makes it compatible with any shaped lesion cavity, whereas most other biomaterials used to repair the central nervous system are either solids or gelatins. Using older biomaterials makes getting the appropriate size and shape problematic, and as a result increases the risk of secondary injuries during transplantation due to improper shape [7]. Although notoriously known to be irreparable after sustaining damage, brain cells can actually regenerate to a limited extent [9]. Drs. Guo and Leung (2009) together with their colleagues capitalized on this ability using SAPNS to promote growth shortly after the trauma (figure 1D). 
To test the effectiveness of SAPNS on TBI, the researchers created similar lesions in two groups of adult laboratory rats and then treated one group with SAPNS and the other with a standard saline solution (control group). Comparing the brains after several weeks of rehabilitation, the researchers reported “saline treatment in the control animals resulted in a large cavity in the injured brain, whereas no cavity of any significant size was found in the SAPNS-treated animals.” They quantified how well SAPNS treated the injury by looking for macrophages, a type of white blood cell. By counting the number of macrophages, the researchers, in essence, obtained an estimate of how extensive the neuronal injury due to TBI was. Macrophages can be thought of as cells that “bury the dead.” When Guo and Leung found fewer amounts of macrophages at the damage site in the SAPNS- versus saline-treated rats, they concluded that there were fewer dead cells to “bury” in the SAPNS-treated brains. The nearly complete recovery (figure 1E) showed that the new nanobiomaterial promoted quick healing of brain tissue after TBI [7]. One highly favorable solution is to combine this technology with stem cell therapy, which has potential for providing a complete recovery [6]. By creating a permissive environment for neurons to migrate across and form new adaptive connections, SAPNS shows great promise in eliminating the number one cause of death in people under the age of 45 [10]. The finding may be as great a triumph as landing on the Moon. Enabling re-creation of just a piece of the most complex matter in the universe translates into a small step forward for the TBI patient, but a giant leap for mankind. REFERENCES [1] Centers for Disease Control and Prevention (CDC). (2010). How many people have TBI?. Retrieved from http:// [2] Morris, R. & Fillenz, M. (2003). Neuroscience: Science of the brain. The British Neuroscience Association. [3] Society for Neuroscience. (2008). 
Brain facts: A primer on the brain and nervous system. [4] Rockswold, G. (2012). Traumatic brain injury. The Minneapolis Medical Research Foundation. Retrieved from [5] Cho, Y. & Borgens, R. B. (2011). Polymer and nano-technology applications for repair and reconstruction of the central nervous system. Experimental Neurology, 233, 126-44. [6] Brodhun, M., Bauer, R., & Patt, S. (2004). Potential stem cell therapy and application in neurotrauma. Experimental and Toxicologic Pathology, 56, 103-12. [7] Guo, J., Leung, K. K. G., Su, H., Yuan, Q., Wang, L., Chu, T.-H., Zhang, W., Pu, J. K. S., Ng, G. K. P. & Wong, W. M. (2009). Self-assembling peptide nanofiber scaffold promotes the reconstruction of acutely injured brain. Nanomedicine: Nanotechnology, Biology and Medicine, 5, 345-51. [8] Ellis-Behnke, R. G., Liang, Y.-X., You, S.-W., Tay, D. K. C., Zhang, S., So, K.-F., & Schneider, G. E. (2006). Nano neuro knitting: peptide nanofiber scaffold for brain repair and axon regeneration with functional return of vision. PNAS, 103, 5054-59. [9] Liang, Y.-X., Cheung, S. W. H., Chan, K. C. W., Wu, E. X., Tay, D. K. C., & Ellis-Behnke, R. G. (2010). CNS regeneration after chronic injury using a self-assembled nanomaterial and MEMRI for real-time in vivo monitoring. Nanomedicine: Nanotechnology, Biology and Medicine, 7, 351-59. [10] Langlois, J. A., Rutland-Brown, W., & Wald, M. M. (2006). The epidemiology and impact of traumatic brain injury: a brief overview. Journal of Head Trauma and Rehabilitation, 21, 375-78.


Figure 1. Schematic representation of TBI and SAPNS-based treatment. (A) Neural network showing cell bodies (filled circles) and their connections. (B) Primary injury due to traumatic brain injury (TBI). (C) Secondary injury around the periphery of the primary injury. (D) Self-assembling peptide nanofiber scaffold (SAPNS) injected at the site of injury. (E) Newly formed connections (red lines) after injection of SAPNS.

SPRING 2013 | JOURNYS | 13

Advances in Personalized Medicine

by Collin Dillingham

edited by Mina Askar

reviewed by Dr. Rudolph Kirchmair

On one episode of the hit TV show “House”, Dr. Gregory House diagnoses a patient with Von Hippel–Lindau disease. This uncommon illness confuses the victim’s senses, but after a few days of careful attention, House saves the patient’s life. Sadly, doctors like him do not exist in real life. However, breakthroughs in the medical field are helping doctors and researchers to pinpoint specific diseases and administer the appropriate treatment more accurately than ever before. Thousands of medical professionals and researchers across the globe work together to expand our collective knowledge of the human body and all of its possible ailments, as well as to develop and utilize many new methods and drugs for curing those in need. Unfortunately, our progress in the medical field is paralleled by an increasingly diversified scope of known diseases. With so many new diseases, the hundreds of different variations of cancer being just one example, medicine must become more personalized to the individual patient in order to be effective. Medical research commonly utilizes lab animals, namely mice, to test new drugs and prepare them before they are implemented in humans. A new form of animal testing has recently made its way into the medical community, and scientists are calling it “Avatar” testing. No, contrary to popular belief, it does not involve putting patients into the minds of large blue creatures. The purpose of avatar testing is to take the diseased cells of a patient and implant them into a mouse. The tumor, or whatever culture is being observed, is then grown, extracted again, cut up, and re-implanted into multiple mice until many mice are living with the same tumor [1]. Test mice are commonly bred with weak immune systems for two reasons: to allow diseases to take hold in them more readily, and so that the tumor implanted into them will cause them to take on a medical state very similar to that of their human patient.
After the batch of mice with the patient’s specific tumor is ready, researchers are able to test numerous different variations and combinations of drugs and treatments in order to find the one that might be the most effective in the human patient.


Avatar testing, although it allows doctors to assign prescriptions that may have been overlooked before, does have its drawbacks. For instance, the process is lengthy, typically taking two to four months to prepare the mice and longer for analysis [1]. On top of the time, something which patients may not have enough of, the process is expensive. For one New Jersey patient who opted for avatar testing to find a treatment for his lung cancer, the testing cost over $25,000 [1]. The financial and time concerns are only part of the risk; animal testing has always been executed knowing that certain treatments that work in animals may not work in humans. The avatar testing process usually gives doctors better insight into what may help their patient, but the results are not foolproof. If the process is streamlined and made available to the public, however, it has the potential to open up a whole new industry within the medical field; entire laboratories and companies could be devoted to growing the avatars custom-fit for every patient, bringing greater accuracy into the science of diagnostics. Personalized approaches to modern medicine are not restricted to drugs and medications; researchers such as Dr. Steven Badylak have been pioneering the field of muscle regeneration for decades and have devised a new way to completely regrow damaged muscle tissue utilizing certain structures found in the organs of pigs [2]. These structures are called “extracellular matrices”, and serve as a roadmap for cell generation within the bodies of living things. The extracellular matrices look like webs that outline the tissues of our organs, and researchers have found that their purpose is to act as a scaffold for cells to grow on, while at the same time only allowing specific types of cells to grow. In essence, the matrix is a road map or blueprint for the cells to follow during growth. Seeing the potential in a flexible, cell-growing membrane, Dr. Badylak discovered that by ridding the matrix of all living tissue and adhering it to damaged muscle, the matrix can guide cell growth and literally rebuild the same muscles that have been damaged [3]. The procedure has been tested and proven to regrow not just ordinary muscles, but also the complex smooth tissues of internal organs, such as the linings of our intestines and arteries [2]. For patients, this means that there is little chance of their bodies rejecting the new tissue, because the majority of the new cells are their own.


If ordinary and comparatively simple muscles can be regrown using these matrices, it is logical to assume that the procedure can be extended to the more complex organs and systems in our bodies as well. This is exactly the direction scientists such as Paolo Macchiarini, a doctor at the Karolinska Institute in Sweden, want to take this field of research. The process that he and other bioengineers have developed is extremely similar to that of Dr. Badylak, using the same types of scaffolds to encourage growth, but the procedure required to form a whole organ is a bit more involved. The most significant difference between the two procedures is that building an artificial organ requires a scaffold for that organ in particular, yet the purpose of the procedure is to create an organ without sacrificing any living tissue. In order to circumvent this problem, Macchiarini had an artificial scaffold created out of porous plastic that could absorb cells and serve the same purpose as an organic one [4]. This plastic structure, while expensive, can be recreated many times without sacrificing living organs. Once the scaffold is created, it is laced with stem cells taken from the patient’s bone marrow and suspended in a nutrient solution. Amazingly, the cells will multiply and the organ can be fully formed in a matter of just days [4]. While expensive, as is the case with most experimental procedures, the benefits of having an organ grown practically from scratch and tailor-made specifically from the patient’s own cells are astronomical; the months that some patients are forced to wait on waiting lists will be drastically reduced if and when this process is streamlined enough to be affordable for the average patient. On top of that, there is little to no risk of the organ failing due to the body rejecting it.

Using these new technologies, many promising advancements have been made that will almost surely carry the medical community into a new era of surgery within the coming decades. Speculative research has even been done by surgeons such as Dr. Tracy Grikscheit, who believes that the next step of scaffold-based tissue engineering is to actually grow the organs inside the patient [5]. She believes that, theoretically, a tiny “seed” scaffold can be placed inside a patient’s abdomen, where it can grow off of the natural blood supply and eventually be extracted and put in place of the failing organ or tissue. This idea is only in its early stages of development, though; simple pieces of organs such as intestines have been grown in mice, but there has been very little testing beyond that [5]. With many new research projects under way around the world, doctors and researchers are constantly improving their ability to diagnose and treat patients based on their specific needs and ailments. Trials and experimentation may take years, and some techniques may not be approved for use in humans any time soon. Nevertheless, more progress is being made and more lives are being saved. REFERENCES [1] Pollack, A. “Seeking Cures, Patients Enlist Mice Stand-Ins”. [2] Fountain, H. “Human Muscle, Regrown on Animal Scaffolding”. [3] Chan, B. P., Leong, K. W. “Scaffolding in tissue engineering: general approaches and tissue-specific considerations”. [4] Fountain, H. “A First: Organs Tailor-Made with Body’s Own Cells”. [5] Fountain, H. “One Day, Growing Spare Parts Inside the Body”.


Controlled Release Using an Oral Drug Delivery System Designed to Improve Treatment of Conditions such as Multiple Sclerosis

BY: Madeline Mouton-Johnston¹, Diane C. Forbes²
EDITED BY: Joy Li

ABSTRACT


A controlled release drug delivery system suitable for oral administration of a range of desirable therapeutic agents was developed. The drug carrier was prepared from alginate beads made by adding an alginic acid solution dropwise into a calcium chloride solution. The resulting carrier spheres were loaded with a model drug and the release behavior was investigated. Such a system could have utility for the treatment of diseases such as multiple sclerosis that would benefit from improved oral delivery therapeutics.

1. INTRODUCTION: Multiple sclerosis is one of the most commonly diagnosed neurological disorders in young adults [1], and it affects approximately 250,000 to 350,000 people in the United States [2]. Multiple sclerosis is a debilitating disease that primarily damages the myelin around axons (nerves) and creates irreversible scarring in the central nervous system. The principal role of myelin is to transmit nerve signals quickly and efficiently throughout the body. Consequently, the destruction of the myelin and the damage to the central nervous system obstruct the transmission of electric signals in the body and can significantly impair motor functions [1]. The damage to the central nervous system caused by multiple sclerosis leads to a number of symptoms, including fatigue, memory loss, and gastrointestinal problems such as bowel problems and bladder dysfunction [3]. There are treatments available to slow the progression of multiple sclerosis, but there is no cure [1]. Most available treatments, such as interferon β-1a (Avonex® or Rebif®), use subcutaneous or intramuscular injections [4]. The newest treatment, natalizumab (Tysabri®), requires intravenous injection [5]. The benefits of these treatments are not without cost to the patient’s quality of life; the injections are painful, and most patients experience flu-like symptoms and general discomfort for almost a full day after an injection [6]. Dosing frequency typically ranges from daily to weekly, depending on the drug [7]. Other routes of administration could alleviate some of the patient discomfort; indeed, an orally delivered therapeutic would eliminate the pain associated with injection [8]. Fingolimod is the first orally administered multiple sclerosis drug [7]; however, the therapeutic is associated with potentially fatal side effects, including a decrease in heart rate [9]. As a result, patients must remain at the doctor’s office for the first six hours following their first dose.
A controlled release system may reduce the side effects immediately post-administration of fingolimod and eliminate the need for six hours of medical observation, thus improving the quality of patients’ lives. 2. PURPOSE The purpose of this study was to investigate the release behavior associated with the use of alginate beads as carriers in a controlled delivery system. A significant challenge of oral delivery is preventing the drug from disintegrating in the stomach so that it can be fully released into the upper small intestine. The stomach is very acidic, with a pH around 2, while the upper small intestine has a pH around 7.4. The development of alginate beads as a drug delivery technique is promising because the material is insoluble at low pH values, remaining intact to protect the therapeutic, while dissolving to release the drug at neutral pH values. The alginate contains anionic sites, which allow the interaction between the alginate anion and the calcium cation to crosslink the chains and form a gel network [10] (see Fig. 1). The crosslinks connect the linear chains to form a well-defined structure. The advantage of using alginate in an oral delivery system is that alginate is biocompatible and biodegradable [11].
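The pH-gated behavior that motivates the alginate carrier can be illustrated with a toy model. The threshold and release rate below are illustrative assumptions chosen for the sketch, not measured properties of alginate:

```python
# Toy model of pH-gated release: the bead stays intact below a pH
# threshold (as in the stomach) and releases its payload linearly
# above it (as in the upper small intestine). All numbers here are
# illustrative assumptions, not measured properties of alginate.

def released_fraction(ph, minutes, threshold=6.0, rate_per_min=0.02):
    """Fraction of the loaded dose released after `minutes` at a given pH."""
    if ph < threshold:  # acidic stomach: carrier intact, drug protected
        return 0.0
    return min(1.0, rate_per_min * minutes)  # neutral pH: release, capped at 1

print(released_fraction(ph=2.0, minutes=60))   # stomach: 0.0
print(released_fraction(ph=7.4, minutes=45))   # intestine: 0.9
```

The point of the sketch is only the gating: however long the bead sits at stomach pH, nothing is released until it reaches the neutral pH of the intestine.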

Fig. 1: Linear alginate chains (a) are joined by divalent calcium cations (b), and the chains stack to form a regular physical network (c).

3. METHODS The following products were purchased from Sigma-Aldrich (St. Louis, MO): alginic acid sodium salt from brown algae, Tartrazine, and Erioglaucine. The calcium chloride dihydrate was purchased from EMD Chemicals Inc. (Gibbstown, NJ). All water used in the study was ultrapure grade. The microplate reader (Synergy HT) was purchased from BioTek Instruments Inc. (Winooski, VT). The synthesis method was adapted from reports in the literature using calcium chloride crosslinking to form alginate beads [12]. A 2% w/v solution of alginic acid in distilled water was prepared by combining 0.4 g of alginic acid with 20 ml of water and mixing overnight to dissolve. A 500 ml sample of 2% w/v calcium chloride solution was then prepared by dissolving 10 g of calcium chloride in 500 ml of ultrapure water. The alginate beads were then created by pushing 5 ml of the alginate solution with a syringe through 18G, 20G, and 30G needles into the calcium chloride solution. The different needle sizes produced beads of different sizes, allowing comparison of release behavior across bead sizes and dye types. The beads were recovered by filtration. Typical beads can be seen in Fig. 2. A 10 mg/ml solution of dye (Erioglaucine or Tartrazine) was prepared, and 20 beads of each size were added to 5 ml of dye solution to soak for 3 days. During this time, the beads were loaded with the dye, which serves as the model drug. Tartrazine and Erioglaucine were selected as two model agents to aid in studying the possible behavior of Fingolimod due to their water solubility and similar molecular weights. Tartrazine is an azo dye primarily used in food coloring. It has a molecular weight of 534.3 g/mol and is used in various lemon-flavored products. Erioglaucine is also a colorant, with a molecular weight of 792.85 g/mol. Both compounds are water soluble, as is Fingolimod, which has a molecular weight of 307.5 g/mol.

Fig. 2: Alginate beads following synthesis, prior to loading with dye. Left to right: 18G, 20G, and 30G needles used to make beads.

1. St. Stephen’s Episcopal High School, Austin, TX
2. Dept. of Chemical Engineering, University of Texas at Austin
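The two solution preparations are straightforward weight/volume-percent calculations. As a quick sketch of the arithmetic (the helper function name is ours, not from the paper):

```python
# Weight/volume percent: grams of solute per 100 ml of solution.
def grams_needed(percent_wv, volume_ml):
    """Grams of solute for a percent_wv (w/v) solution of volume_ml."""
    return percent_wv * volume_ml / 100.0

print(grams_needed(2, 20))    # alginic acid: 0.4 g in 20 ml
print(grams_needed(2, 500))   # calcium chloride: 10.0 g in 500 ml
```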

Immediately prior to the release study, the particles were recovered by filtration (see Fig. 3) and rinsed with approximately 500 ml of ultrapure water, until the rinse water was colorless, to eliminate extra dye on the outside of the particles. Each of the 6 sets of particles (2 colors with 3 sizes each) was then placed into 150 ml of ultrapure water and stirred. Samples of 200 µl were taken at designated times of 1, 2, 3, 5, 7, 10, 15, 20, 25, 30, 45, 60, 75, 90, 120, 150, 180, 240, and 300 minutes.

Fig. 3: Alginate beads following loading with dye, prior to rinsing. The yellow dye (on the left) is Tartrazine and the blue dye (on the right) is Erioglaucine.

4. RESULTS AND DISCUSSION The alginate microspheres prepared by this method were uniform, spherical, and of good quality. The alginate beads could be contained in a gelatin capsule that would dissolve in the stomach in order to aid in oral administration for patients. To investigate the release of the model compounds from the alginate spheres, we obtained the spectra of these compounds. The spectra (see Fig. 4) display the absorbance over a range of wavelengths. Using the wavelength at maximum absorbance to calculate the concentration enables the most accurate determination at low concentrations.


Fig. 5: Calibration curve to calculate the concentration as a function of absorbance.

Calibration curves were used to relate the concentration to the absorbance of the dye with a linear equation in slope/intercept form. The slope/intercept equation was found by adding a linear trendline to the data on the graph, as shown in Fig. 5 for Erioglaucine and Tartrazine; the slope and intercept values are shown in Table 1. The absorbance values on the calibration curve are found by scanning different dilutions of the dye at the wavelength of maximum absorbance determined from the spectrum graph. Absorbance values greater than 2 cannot be measured accurately, so the most accurate determination of concentration requires dilutions that will have absorbance values less than 2.
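The calibration procedure can be sketched in a few lines. The concentrations below are the paper's dilution series, but the absorbance readings are made-up illustrative values, not the study's data:

```python
import numpy as np

# Fit the calibration line c = slope * A + intercept (as in Table 1,
# slope and intercept in mg/ml), then use it to convert a measured
# absorbance into a concentration. Absorbance values are illustrative.
conc = np.array([0.005, 0.01, 0.02, 0.03, 0.05])    # mg/ml dilutions
absorb = np.array([0.11, 0.22, 0.44, 0.66, 1.10])   # hypothetical readings

slope, intercept = np.polyfit(absorb, conc, 1)      # linear trendline

def concentration(a):
    """Concentration (mg/ml) from absorbance via the calibration line."""
    return slope * a + intercept

print(round(concentration(0.55), 4))   # ~ 0.025 mg/ml for this toy data
```

In practice, a point like `concentration(0.55)` would be a release-study sample whose absorbance falls inside the calibrated range.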








Table 1: Slope and intercepts of calibration curves calculated using linear regression. Slope and intercept have units of mg/ml and the absorbance is unitless.

Both calibration curves show increased absorbance with increased concentration. This result is predicted by Beer’s Law (see Eqn. 1), which states that the concentration and the absorbance of a solution are directly proportional to each other [13, 14].

A = ε l c

Eqn.1: Beer’s Law

Where A is absorbance, ε is absorptivity, l is length, and c is concentration. For concentration units of g/ml, the absorptivity ε has units of cm2/g, the length l has units of cm, and the absorbance A is unitless. The release study graphs (Fig. 6 and Fig. 7) show the dye concentration over the duration of the experiment. The concentration was found by using the slope/intercept equation from the calibration curves of the dyes. The results from the release studies indicate that the concentration of the solution increases over a period of time until a certain point when the concentration level plateaus. The plateau is associated with the maximum release of dye (model drug) by the carrier. The plateau is achieved after approximately an hour, with most of the release in the first 45 minutes. The 18G beads have the highest final concentration of all of the bead sizes.
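Beer's law can be checked numerically. The absorptivity used here is a hypothetical round number chosen for illustration, not a measured value for either dye:

```python
# Beer's law: A = epsilon * l * c. Given any two of absorptivity,
# path length, and concentration, plus the absorbance, the remaining
# quantity follows by rearrangement. epsilon here is hypothetical.
def absorbance(epsilon, path_cm, conc_g_per_ml):
    return epsilon * path_cm * conc_g_per_ml        # epsilon in cm2/g

def concentration_from_a(a, epsilon, path_cm):
    return a / (epsilon * path_cm)                  # g/ml

A = absorbance(epsilon=20000.0, path_cm=1.0, conc_g_per_ml=5e-5)
print(A)
print(concentration_from_a(A, 20000.0, 1.0))
```

Doubling the concentration doubles the absorbance, which is the direct proportionality the calibration curves rely on.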

Fig. 6: Concentration of Tartrazine yellow dye in the release solution following release from the alginate beads.

Fig. 4: Spectra for Tartrazine and Erioglaucine, yellow and blue dyes, respectively. Data is normalized (scaled to a maximum of 1) using the maximum absorbance.


Fig. 7: Concentration of Erioglaucine blue dye in the release solution following release from the alginate beads.

The initial release rate is estimated from the slope of the plot of concentration versus time at early times, as shown in Table 2. The initial rate of release for each dye is comparable for all the bead sizes. However, the larger beads released more dye. The bigger beads have the advantage of holding more dye per bead, but the smaller beads have the advantage that they can be more easily packaged in a gelatin capsule for oral administration.

              18G           20G           30G
Tartrazine    1.33 x 10-4   1.48 x 10-5   1.17 x 10-4
Erioglaucine  1.57 x 10-4   1.57 x 10-4   6.57 x 10-5

Table 2: Initial release rate of dye estimated from slope of concentration versus time for the first 20 minutes, reported in units of mg/(ml min).
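The Table 2 estimate, the slope of concentration versus time over the first 20 minutes, can be sketched as follows. The time points are the study's early sampling times, but the concentrations are synthetic stand-ins shaped like the reported curves, not the measured data:

```python
import numpy as np

# Initial release rate = slope of a linear fit to concentration vs.
# time over the first 20 minutes. The concentrations are synthetic
# (fast early release approaching a plateau), not the study's data.
t = np.array([1, 2, 3, 5, 7, 10, 15, 20], dtype=float)  # minutes
c = 1.3e-4 * t * np.exp(-t / 40.0)                      # toy mg/ml values

rate, _ = np.polyfit(t, c, 1)   # slope in mg/(ml min), as in Table 2
print(f"initial release rate ~ {rate:.2e} mg/(ml min)")
```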


The wavelength of maximum absorbance was determined by scanning a sample of dissolved dye across the full range of wavelengths for visible light. A plate reader (Synergy HT, BioTek Instruments Inc.) was used to measure the absorbance values; the Erioglaucine samples were scanned at 630 nm and the Tartrazine samples were scanned at 410 nm. Dilutions of each dye were prepared to make a calibration curve. The 10 mg/ml dye was diluted 1:10 to make a 1 mg/ml solution. Another 1:10 dilution was then made to create a 0.1 mg/ml solution. The 0.1 mg/ml dilution of each dye was used to prepare dilutions of 0.1, 0.05, 0.04, 0.03, 0.025, 0.02, 0.015, 0.01, and 0.005 mg/ml.
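The dilution series follows from C₁V₁ = C₂V₂. The 10 ml working volume below is an assumption made for illustration; the paper does not state the volumes used:

```python
# Serial dilution bookkeeping via C1 * V1 = C2 * V2. The 10 ml final
# volume is an assumed working volume, not stated in the paper.
def stock_volume_ml(c_stock, c_target, v_final_ml):
    """Volume of stock to dilute up to v_final_ml to reach c_target."""
    return c_target * v_final_ml / c_stock

# 10 mg/ml stock -> 1 mg/ml (1:10), then 1 mg/ml -> 0.1 mg/ml (1:10)
print(stock_volume_ml(10.0, 1.0, 10.0))   # 1.0 ml of stock
print(stock_volume_ml(1.0, 0.1, 10.0))    # 1.0 ml of the 1 mg/ml solution

# calibration points prepared from the 0.1 mg/ml solution
for target in (0.05, 0.04, 0.03, 0.025, 0.02, 0.015, 0.01, 0.005):
    print(target, stock_volume_ml(0.1, target, 10.0))
```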


5. CONCLUSIONS: This study demonstrates that alginate beads may be an effective controlled release drug delivery system suitable for oral administration. Our alginate beads release dye over 45 minutes and would provide a more delayed release than dye (or drug) alone. The size of the beads had a more significant impact on the total dye loaded than on the initial release rate. For future studies, it would be valuable to look at the effect of rinsing and storage on the release kinetics of the alginate beads as well as the effect of changing pH conditions.


Acknowledgements: This work was performed in the Graduate Research in High School Hands (GRiH2) Program of the Laboratory of Biomaterials, Drug Delivery and Bionanotechnology of the Departments of Chemical and Biomedical Engineering of the University of Texas at Austin. This program was established in June 2011 by Cody Schoener and William Liechty. Special thanks are expressed to Prof. Nicholas A. Peppas for providing laboratory space, materials, and equipment. This research was supported in part by a grant from the National Science Foundation (CBET-1033746). D.C.F acknowledges support from the National Science Foundation Graduate Research Fellowship Program (DGE-1110007).

REFERENCES: [1] “What is Multiple Sclerosis?”. 5Fsclerosis/whatisms/ (2012). [2] “Multiple Sclerosis: Hope Through Research”. multiple_sclerosis/detail_multiple_sclerosis.htm#203893215 (2012). [3] “What are the Symptoms of MS?”. about%5Fmultiple%5Fsclerosis/symptoms/ (2012). [4] “Interferon beta-1a Subcutaneous Injection”. pubmedhealth/PMH0000249/ (2012). [5] “Natalizumab Injection”. (2012). [6] “MS Injections”. (2012). [7] “Treatments for Multiple Sclerosis (MS)”. about%5Fmultiple%5Fsclerosis/treating/ (2012). [8] N. Kamei, M. Morishita, H. Chiba, N.J. Kavimandan, N.A. Peppas, and K. Takayama, “Complexation Hydrogels for Intestinal Delivery of Interferon beta and Calcitonin”, J. Controlled Release, 134, 98-102 (2009). [9] “Fingolimod”. (2012). [10] A. Augst, H. Kong, and D. Mooney, “Alginate hydrogels as biomaterials.” Macromol. Biosci., 6, 623-633 (2006). [11] H. Tønnesen, and J. Karlsen, “Alginate in drug delivery systems.” Drug. Dev. Ind. Pharm., 28, 621-630 (2002). [12] S. R. Marek, W. L. Liechty, and J. W. Tunnell. “Controlled Drug Delivery from Alginate Spheres in Design-Based Learning Course.” 2012 ASEE Annual Conference, 11 June 2012. [13] Potts, G.E. “Beer’s Law”. beers.htm?referrer=webcluster& (2001). [14] Blauch, D. N. “Spectrophotometry: Beer’s Law”. spectrophotometry/beerslaw.html (2009).


Imagining NUMBERS and SPACE

By: Annie Xu Edited By: An Nguyen Reviewed By: Dr. Jon Lindstrom

The World of Number-form and Spatial-sequence Synesthesia “The [picture] never seems on the flat but in a thick, dark grey atmosphere deepening in certain parts, especially where 1 emerges, and about 20.” “The track is organized around the academic year. The short ends are the summer and Christmas holidays – the summer holiday is slightly longer.” “The figures are about a quarter of an inch in length, and in ordinary type. They are black on a white ground… the picture is invariable.” [1]


Without context, these might seem like descriptions of printed images, but they are all products of the mind. They are experiences of number-form synesthetes, people who involuntarily visualize a map of numerals in precise locations in space when thinking about numbers, and spatial-sequence synesthetes, people who perceive any sort of sequence, such as numbers or letters, in visual arrangements similar to number forms. Synesthesia is a general term describing the condition in which stimulation of a sense or idea triggers stimulation in another part of the brain. Resulting experiences include seeing colors when hearing certain sounds (sound-color), perceiving numbers or letters as having colors (grapheme-color), experiencing a taste upon hearing particular words (lexical-gustatory), and imagining letters, numbers, months, or days of the week as having personalities (ordinal linguistic personification) [2]. The perceptions of synesthetes are distinctive for each individual and usually remain constant throughout one’s lifetime. These experiences seem so intrinsic that many synesthetes go through life unaware that they have a unique condition. As a result, the prevalence of synesthesia remains unclear; estimates have ranged from 1 in 25,000 to 1 in 20, though a widely accepted statistic is 1 in 2,000 [2]. It has also been estimated that females are about 6 times more likely to have synesthesia than males, and researchers have suggested that synesthesia is more common than previously thought, with connections between time, numbers, and space being the more prevalent forms. The relation between perceptions of numbers and space was first documented in the 1880s by the English scientist Sir Francis Galton in his articles “Visualised Numerals” and “The Visions of Sane Persons.” While many people visualize numbers as a one-dimensional mental number line, with zero or negative infinity at one end and infinity at the other, number-form synesthetes see the number “line” with twists and curves, usually unrestricted to a single plane, with each number included occupying a definite position [1]. Another distinguishing characteristic of number-form synesthesia is that synesthetes do not merely imagine a mental arrangement of numbers; the mention or thought of a number induces a vision of the number form, so that the individual actually sees his number form in the space before him. Spatial-sequence synesthesia involves the mental placement of items in a sequence, including numbers, letters, days of the week, months, and years, in explicit locations in space. A closely related form of this condition, time-space synesthesia, occurs when individuals perceive units of time relative to their own bodies. For example, a patient with time-space synesthesia has described her perception of the months as a large 7-shaped figure extending around her waist about a meter from her body [8]. Depending on what time of day or year a time-space synesthete is thinking about, his or her viewpoint of the “mental calendar” will often shift, giving a particular direction or area represented by the past, present, and future [4]. A synesthete describes her visualization of the months as an “oval with myself at the very bottom, Christmas day to be precise… As I move through the year, I am very aware of my place on the oval at the current time, and the direction I am moving in” [8]. These links between cognition of time, space, and vision are sometimes even connected to other sensory qualities, such as color or texture, creating a seemingly surreal multi-sensory synesthetic experience. A variety of theories exist which seek to explain the neuroscience behind synesthesia. One theory claims that a synesthetic brain contains no anatomical differences from a non-synesthetic one, and that a functional difference is responsible for the mingling of senses.
This theory proposes that the inhibition of signals to an area of the brain which processes information from multiple senses is impaired in synesthetes, so that a neural signal may activate two senses together [2]. However, a second and more widely accepted theory holds that synesthesia arises from abnormal connections between sensory regions of the cerebral cortex, so that stimulation in one area will also cause stimulation in another [5]. The regions of the brain that share a connected activation are usually close in proximity. In the case of number-form and spatial-sequence synesthesia, the cross-activation is thought to occur between the parietal lobe, which is responsible for numerical cognition, and the angular gyrus, which controls spatial cognition [6].

Graphic by Rhea Bae: Synesthesia is a mental condition that causes different senses and cognitive processes to be connected.

An example of how a synesthete might perceive the months of the year in colors and spatial arrangements. Source: BBC [4], based on an illustration by Carol Steen

Fig. 1-4 (L-R): Number-form synesthetes see a specific arrangement of numerals when thinking about numbers. These arrangements sometimes include other visual or sensory details, as approximated in these representations of number forms. Sir Francis Galton was one of the first scientists to document this phenomenon. [2] Other researchers have proposed that these types of synesthesia arise from the proximity of regions in the temporal (instead of the parietal) lobe, which plays roles in sequence coding and the representation of visual objects [7]. It has been suggested that all humans are born with these unusual links in the brain, but that the links are usually pruned away during infancy. A single gene mutation in the X-chromosome prevents such pruning from occurring and is thought to be responsible for causing synesthesia [2]. These distinctions in the brains of synesthetes seem to have no effect on other areas of cognition or perception. There are, however, advantages and disadvantages to the mixing of cognition associated with spatial-sequence and number-form synesthesia. Research has suggested a link between spatial-sequence synesthesia and a heightened ability to form memories dealing with dates and time, and that a strong association exists between spatial-sequence synesthesia and hyperthymesia, a condition in which an individual can successfully recall events and times in his own life with extraordinary clarity [8]. People with time-space synesthesia are generally considered to be more adept at remembering the dates of historical events and often plan events in their own lives using their visualized “calendar”. There has also been speculation about a possible connection between number forms and superior abilities to calculate found in individuals with autistic savant syndrome. Daniel Tammet, a savant who holds the European

record for reciting the most digits of pi and can mentally conduct enormous mathematical operations in mere seconds, describes his calculations as arising from the visualization of numbers in his head [3]. Although this mental visualization isn’t exactly number-form synesthesia, it carries significant implications for the use of synesthetic perceptions as mnemonic devices and mathematical aids. Despite these apparent mental advantages, however, some people with the condition feel an impaired ability to think clearly. Many synesthetes experience frustration when dealing with problems that don’t match their visualizations of numbers. Hyperthymesia causes excessive details about one’s life to constantly flood one’s mind, and the visions of numerals associated with these types of synesthesia can be equally distracting.

Nevertheless, the world of synesthesia provides a promising field of study of the brain and the mind. Among the studies at the forefront of research about number-form and spatial-sequence synesthesia are those comparing the effects of these mental experiences to those of nonsynesthetes. For example, a simple demonstration of the connection between numbers and space in the human mind is the spatial-numerical association of response codes (SNARC) effect [7].

22 | JOURNYS | SPRING 2013

Numerous studies have shown that when subjects are asked to classify numbers as even or odd using a button, responses to larger numbers are quicker when made with the right hand, whereas responses to smaller numbers are quicker when made with the left. This association of numbers and space is actually reversed for some groups of people, such as Palestinians, who use writing and number systems that run from right to left. Similarly, when number-form and spatial-sequence synesthetes are given a task involving numbers or sequences, they respond faster when the numbers are presented in a manner corresponding to their visualization [8]. It is unclear what these findings may indicate for more advanced tasks such as calculations, but the experiences of synesthetes are undoubtedly deeply rooted in the basic way they comprehend numbers and space. As a result, further study of these correlations may provide valuable insight into the fundamental workings of human cognition and perception. It is unquestionable that aside from providing synesthetes with a unique and intriguing relationship with abstract concepts, number forms and spatial perceptions make up an informative and captivating world with much in store for scientists to uncover.

REFERENCES
[1] Galton, F. Visualised Numerals. J. Anthropol. Inst. 10, 85-102 (1880).
[2] Jensen, A. Synesthesia. Lethbridge Undergraduate Research Journal 2, (2007).
[3] Kuchment, O. “A Mind That Touches the Past”. sciencenow/2009/12/14-02.html (2009).
[4] Gill, V. “Can you see time?” (2009).
[5] “The Neuropsychology of Synaesthesia”. (2007).
[6] Hubbard, E. M., Piazza, M., Pinel, P. & Dehaene, S. Interactions between number and space in parietal cortex. Nat. Rev. Neurosci. 6, 435-448 (2005).
[7] Eagleman, D. M. The objectification of overlearned sequences: a new view of spatial sequence synesthesia. Cortex 45, http://www.ncbi.nlm.nih.gov/pubmed/19665114 (2009).
[8] “The Cognitive Benefits of Time-Space Synaesthesia”. http://scienceblogs.com/neurophilosophy/2009/11/19/the-cognitive-benefits-of-timespace-synaesthesia (2009).

The Mathematics of Drafting

By Fabian Boemer

Breathtaking speed, utter exhaustion, a final exertion: the last kilometer of a cycling race is a thrilling prospect. Dozens of riders in contention for the victory seemingly give up, slow down, and coast to the finish. Dozens of others, meanwhile, drastically accelerate to finish within tenths of a second of each other. Reaching speeds of up to 67 kilometers per hour (42 miles per hour), the riders are held back, quite literally, only by air resistance. The ultimate winner must time his or her final sprint to perfection, making use of a common technique: drafting. Drafting, also known as slipstreaming, is employed in high-velocity sports such as car racing, bicycle racing, and speed skating, as well as lower-velocity sports such as swimming and running. The concept of drafting is for groups of moving objects, in this case competitors in sports, to reduce the overall effect of drag, the air or fluid resistance force acting to reduce the velocity of an object. Drafting is essential in sports because it reduces the average energy expenditure of the group, whether it be a running pack or a cycling paceline. While any competitive athlete will espouse the benefits of drafting, to understand the scientific basis we must first understand drag. Drag force is given by the drag equation:

F_D = (1/2) ρ v² C_d A,

where F_D is the force of drag, ρ is the density of the fluid, v is the velocity of the object relative to the fluid, C_d is the drag coefficient, and A is the reference area [2]. Inserting values for ρ, v, and C_d (with v expressed in meters per second), it is possible to solve for the drag force experienced by any object. We will consider the drag of a world-class cyclist, a race car driver, and a runner.

First in our analysis is the 2012 Tour de France winner, Bradley Wiggins, who covered the 3496.9 km course in 87 hours, 34 minutes, and 47 seconds, an average of 39.9 kilometers per hour, or 11.1 m/s [3]. Air, the fluid cyclists travel through, has a density of approximately 1.225 kg/m³ [4]. Finally, a racing bike, with a crouched rider in tight clothing, has a drag coefficient of 0.52 and a frontal area of 0.55 m² [5]. Inserting these values into the drag equation, we calculate a drag force of roughly 22 Newtons.

Next, we consider the 2012 Daytona 500 winner, Matt Kenseth, who covered the 200-lap course at an average velocity of 140.256 miles per hour, or 224.41 kilometers per hour (62.3 m/s) [6]. Kenseth’s car, a Ford Fusion, has a drag coefficient of 0.33 [7], and high-end race cars have frontal areas of about 1.8 m² [8]. Again, air has a density of 1.225 kg/m³. These values yield a drag force of roughly 1,400 Newtons.

Lastly, we focus on the 2012 Olympic 1500 m champion, Taoufik Makhloufi, who covered the distance in 3 minutes and 34.08 seconds, an average speed of 25.22 kilometers per hour, or 7.0 m/s [9]. The drag coefficient of a runner is approximately 0.9, with a frontal area of 0.478 m² [10]. Air, still, has a density of 1.225 kg/m³. The drag equation yields a drag force of roughly 13 Newtons.

The optimal runner, rider, or driver will orient him- or herself to minimize the air resistance force. Air resistance reduction in drafting riders has been measured at 44%, in drafting runners at 89% [11], and in racing vehicles at 25% [12]. A 44% air resistance reduction cuts the force Bradley Wiggins exerts against air by about 10 Newtons. Likewise, an 89% reduction cuts the force Taoufik Makhloufi exerts against air by about 11 Newtons. A 25% reduction results in roughly a 350 Newton decrease for Matt Kenseth.

Now we consider the effects of these forces in each competitor’s respective race. Bradley Wiggins, with bike, can be estimated to weigh 80 kilograms, or about 175 pounds. By Newton’s Second Law, the reduced air resistance corresponds to an acceleration of about 0.12 m/s². If Wiggins were able to draft optimally for half of his race, one second at a time, the additional distance covered in each period is given by

d = (1/2) a t²,

where we set the initial velocity to 0. Inserting a = 0.12 m/s², each one-second drafting period gains about 0.06 meters. Over the course of a 315,287-second race, roughly 158,000 such periods would carry Wiggins about 9 kilometers further, some 0.27% of the race course. Matt Kenseth’s car weighs at least 3,400 pounds, about 1,540 kilograms, by NASCAR requirements. A 350 Newton force will accelerate 1,540 kilograms at about 0.23 m/s². Assuming, again, that Kenseth drafts one second at a time for half the race duration of 3.56 hours, we find Kenseth would travel roughly 700 meters further, about 0.1% of the race distance. Finally, Taoufik Makhloufi, an average 70 kg runner, would accelerate at about 0.16 m/s². One-second drafts over half the race duration of 214.08 seconds would carry Makhloufi about 9 meters further, roughly 0.6% of the race distance.

Thus, our rough calculations find the largest relative benefits to drafting in running and cycling. While our model finds drafting in automotive racing less effective than in other sports, actual race practice suggests automotive drafting is among the most important race strategies. Likewise, though drafting has been downplayed in running, it shows significant potential in our preliminary glance. Nevertheless, even with these rough assumptions, we have been able to apply mathematical definitions and physics equations to a commonplace sports phenomenon. Certainly, the underlying physical principles are intriguing well beyond the hours of running, cycling, or driving. The principles of drafting, aerodynamics, and air resistance have immense potential in the fields of aircraft, automobiles, renewable energy, and sailing. Successfully understanding and applying these principles can improve gasoline mileage, generate wind power, and ensure structures are able to cope with wind loads. Heating, ventilation, piping, and urban pollution are just some extensions of fluid dynamics, along with one of the most popular diversions: sports. Indeed, the final thrilling sprint finish employs the concepts of air flow to entertain billions worldwide.

Edited by Kenneth Xu
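The figures above can be reproduced with a short script (a sketch in Python). A common pitfall is that the drag equation requires v in meters per second, so the quoted km/h speeds must be divided by 3.6 before squaring; the car mass (about 1,540 kg, from 3,400 lb) and race duration (about 12,834 s, from 3.56 h) are the derived values used in the text.

```python
# Drag forces and rough drafting gains for the three athletes in the text.
# F_D = 0.5 * rho * v^2 * Cd * A, with v in m/s (hence the /3.6 conversion).

RHO = 1.225  # air density, kg/m^3

def drag_force(v_kmh, cd, area):
    """Drag force in Newtons for a speed given in km/h."""
    v = v_kmh / 3.6  # convert km/h to m/s
    return 0.5 * RHO * v ** 2 * cd * area

def drafting_gain(force_reduction, mass_kg, race_seconds):
    """Extra distance (m) from drafting one second at a time for half the
    race, using d = 0.5 * a * t^2 with t = 1 s, per the simple model above."""
    a = force_reduction / mass_kg        # Newton's second law
    per_period = 0.5 * a * 1.0 ** 2      # metres gained per one-second draft
    return per_period * (race_seconds / 2)

f_bike = drag_force(39.9, 0.52, 0.55)    # cyclist (Wiggins), ~22 N
f_car = drag_force(224.41, 0.33, 1.8)    # stock car (Kenseth), ~1400 N
f_run = drag_force(25.22, 0.9, 0.478)    # runner (Makhloufi), ~13 N

gain_bike = drafting_gain(0.44 * f_bike, 80, 315_287)   # ~9 km over the Tour
gain_car = drafting_gain(0.25 * f_car, 1542, 12_834)    # ~730 m at Daytona
gain_run = drafting_gain(0.89 * f_run, 70, 214.08)      # ~9 m over 1500 m
```

Because drag grows with the square of velocity, the absolute drag force on the stock car dwarfs the others even though its relative drafting gain, as a fraction of race distance, is the smallest.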

REFERENCES
[1] “2012 Tour De France.” Training Peaks, 2012. Web. 18 Jan. 2013.
[2] Benson, Tom. “The Drag Equation.” NASA, 10 Aug. 2010. Web. 18 Apr. 2013.
[3] Westemeyer, Susan. “Tour De France 2012.” Cycling News, 22 July 2012. Web. 18 Jan. 2013. <stage-20/results>
[4] “Density of some common gases.”
[5] Kulju, S. “Aerodynamics of Cycling.”
[6] “Daytona 500.” NASCAR, 2012. Web. 18 Jan. 2013.
[7] “Automobile Drag Coefficient.” Wikipedia. Wikimedia Foundation, 17 Apr. 2012. Web. 18 Jan. 2013. <coefficient>
[8] “Ford.” (2004).
[9] Fazackerley, Karen. “Taoufik Makhloufi Wins Men’s 1500m Gold for Algeria.” BBC News. BBC, 7 Aug. 2012. Web. 18 Jan. 2013. <olympics/18903628>
[10] Pugh, L. “The Influence of Wind Resistance in Running” (1970).
[11] Olds, T. “The mathematics of breaking away and chasing in cycling.”
[12] Browand, F. “Reducing Aerodynamic Drag and Fuel Consumption.” Global Climate and Energy Project (2011).


High Speed Rail

The Future of American Transportation
by Eric Chen

edited by Frank Pan

Whether by foot, by boat, or by horse, humans have always pursued ways to get around. Because humans are social creatures, the ability to travel to different places is an integral part of daily life. With different forms of transportation come trade, economic growth, and the sharing of ideas. In 1804, the invention of the steam train proved the human intellect’s ability to innovate and design new methods for age-old necessities [1]. Recently, a new transportation method has been developed that has the potential to completely change the future of American transportation once again.

High Speed Rail (HSR) is a type of passenger transport that operates at extremely high speeds, as its name indicates. HSR refers to a complex system of modern rail technologies that allows trains to reach higher velocities than standard commercial trains can. These technologies include advanced signaling techniques, dedicated railways, and innovative maintenance, all of which contribute to the overall effectiveness of the system. The minimum speed to qualify as HSR was set by the European Union, which “officially adopted Directive 96/48, which defines high-speed rail as trains capable of reaching speeds of 155 mph on dedicated, high-speed tracks or 125 mph on conventional tracks” [2]. The maximum speed recorded was in Shanghai, with trains reaching an official top speed of 260 mph. This technology is currently used in countries such as France, China, and Japan, but as of now, it has not yet reached the United States. With so many countries seizing the opportunity to develop high-tech transportation systems, it is important to consider whether the US wants to keep pace. This appealing technology has many benefits, but it also has potential drawbacks that leave many uncertain as to whether it would actually be good for the US.

Many advocates for HSR posit that the development of the system is essential to jumpstart the currently stagnant and fragile economy.
reviewed by Mr. Andrew Corman

Analysts assert that the American economy is shifting to what is called a post-industrial knowledge economy, an economy based on creativity and innovative capability instead of manual factory labor [3]. However, the government has not been doing enough to follow this economic trend. Richard Florida, professor at George Mason University, states that the economy is not going to be fixed by fiscal or monetary policies, but rather by deep structural reforms through programs like HSR [3]. Implementation of an HSR system would boost the economy in three main ways: direct job creation, expanded tourism, and spatial agglomeration [2].

The first benefit concerns short-term economic recovery. The development of HSR would require people to develop, design, and construct the system, all of whom would be employed through the government. China employed over 100,000 workers in its construction of a high speed rail line from Beijing to Shanghai [2]. Another benefit would be the attraction of tourists and business travelers. Just as airports bring in visitors and their spending, HSR would pull travelers in and benefit the local economy. A study conducted by the U.S. Conference of Mayors reported that annual revenue would increase by $360 million in metropolitan Los Angeles, $50 million in the Chicago area, and $100 million in Greater Albany [2]. The third benefit, spatial agglomeration, is the linking of business locations by shrinking distances. Greater proximity between major business locations, also called mega-regions, would decrease transit times and increase productivity. Mega-regions currently produce two-thirds of US economic output, so linking them together in a cheaper and more efficient way has the potential to increase economic output even further [4].

The HSR system would also drastically reduce US oil usage because of its primary reliance on electricity. Using less oil would make the US less oil-dependent; with less foreign oil imported, the US would gain the economic benefits of energy independence [5]. Another advantage is that the HSR system would drastically reduce air pollution. According to the World Health Organization, air pollution causes early death in 70,000 people per year in the US alone and 3 million worldwide [5]. The electrically powered HSR is a clean form of transportation that would help take cars off the road, and


airplanes from the sky.

AMY CHEN / GRAPHIC

Although HSR could potentially shrink the automobile and airline industries and thus add to the unemployed population, the long-term economic gains through spatial agglomeration would outweigh the short-term shocks. With people taking this new transit system, carbon emissions from idle cars in traffic and fuel-guzzling planes in the sky would be reduced by 6.1 billion pounds [6]. Given air pollution’s high fatality rate, any attempt to reduce it is highly beneficial to the nation.

As attractive as this transportation system seems, it comes with some drawbacks. Critics of the HSR system say it will be too expensive and problematic. Currently, the economy is fragile, and it can be tipped either way by beneficial or fiscally irresponsible spending. Because of this fragility, many experts are reluctant to jump into the development of a major infrastructure plan. Walter Mead, Professor of Foreign Affairs and Humanities at Bard College, states that the HSR system is too expensive and would plunge the economy into an even deeper recession [7]. He claims that the benefits of the plan are not sufficient to justify the expenses. Considering the current state of the economy, fiscal irresponsibility is not an option. If the economy collapses, the US will lose its economic and military hegemony. A major cause for concern is that periods of major economic and hegemonic decline have been empirically linked to war [8]. Economic crises cause a redistribution of world power and would lead to global uncertainty and miscalculation. Periods of weak economic performance are also statistically linked to increased use of force and terrorism. Bad economic times have been empirically paired with wars, from the American Revolution to the Cold War [8]. With these potential impacts in mind, the reluctance of some individuals becomes

quite understandable.

The national development of a high speed rail system has the potential either to jumpstart the economy or to send it into a downward spiral. Advocates of the plan are extremely enthusiastic, claiming that it is the solution for air pollution, the economy, and much more. On the other hand, many are skeptical about the plan’s implementation, and with the dire impacts of an economic collapse in mind, their claims also seem justified. Both sides are well warranted, but whatever decision policymakers make will have the potential to drastically change the future of America.

REFERENCES
[1] “When Was The Steam Train Invented?” Blurtit. N.p., n.d. Web. 08 Dec. 2012.
[2] Todorovich, P., Schned, D. & Lane, R. High-Speed Rail (2011).
[3] “The Roadmap to a High-Speed Recovery” (2010).
[4] High-speed rail, the knowledge economy and the next growth wave. Journal of Transport Geography 22, 284-285 (2012).
[5] “Air Pollution Fatalities Now Exceed Traffic Fatalities by 3 to 1” (2002).
[6] Dutzik, T. Why Intercity Passenger Rail? (2010).
[7] “The American Interest” (2012).
[8] Economic Integration, Economic Signaling and the Problem of Economic Crises (Emerald Group Publishing, England, 2010).


Applications of Fourier Series and Transforms
By Peter Manohar

For many years Joseph Fourier had tried to develop a function to model the distribution of heat in a metal object in order to solve the heat equation. In 1822, he succeeded and published his research in his Théorie Analytique de la Chaleur, or Analytic Theory of Heat. In this book he showed his method for developing a function that described the distribution of heat throughout a metal plate at any time. He claimed that any periodic function of a single variable could be expressed as a series of sines and cosines of that variable, and used this type of trigonometric series to solve the heat equation. This type of series was later named the Fourier series in his honor. A Fourier series of any integrable periodic function gives a precise representation of the function as an infinite sum of sines and cosines. The Fourier series of an integrable periodic function with a period of 2L is defined as

f(x) = a_0/2 + Σ (n = 1 to ∞) [ a_n cos(nπx/L) + b_n sin(nπx/L) ],

with coefficients

a_n = (1/L) ∫ f(x) cos(nπx/L) dx,   b_n = (1/L) ∫ f(x) sin(nπx/L) dx,

where each integral runs from -L to L.
Edited by Selena Chen

The substitution replacing nx with πnx/L in the trigonometric functions generalizes the series to functions of any period, not just 2π. The values of the constants a_n and b_n can be derived by multiplying both sides of the first equation by cos(mπx/L) or sin(mπx/L) and integrating from -L to L [1]. For example, the square wave is a periodic function that has the shape of a square. It is defined as

S(x) = 1 for 0 < x < L,   S(x) = -1 for -L < x < 0,

and is periodic on 2L. Simply by looking at the graph of the function (above), it seems impossible that it could be represented using only sines and cosines. However, by evaluating the integrals for the constants using the function S(x), their values can be found: a_n = 0 for all n, while b_n = 4/(πn) when n is odd and 0 when n is even. Therefore the Fourier series for the function S(x) is

S(x) = (4/π) [ sin(πx/L) + (1/3) sin(3πx/L) + (1/5) sin(5πx/L) + … ]
Graphing the Fourier series alongside the original function S(x), it is clear that the Fourier series is just another way of expressing the same function. Like other infinite series representations of functions, Fourier series can also approximate the value of a function using partial sums. The nth partial sum of a series is the sum of the first n terms of the series. The Fourier series approximation of a function becomes more accurate as n increases and more partial sums are taken. The picture above contains the first five partial sums for the square wave S(x) [2]. From the picture it is clear that the approximation is much more accurate at the fifth partial sum (purple) than at the first (red). Generally, Fourier series are used to model periodic functions occurring in nature that cannot be expressed as the sum of a finite number of cosine and sine terms.

The Fourier series of a periodic function is used to find its Fourier transform. A Fourier transform re-expresses a function of time as a function of frequency, yielding the amplitude of the wave component at each frequency. The graph of a Fourier transform is the frequency spectrum of a function, with the amplitudes of the waves plotted against frequency. The sum of all the waves described by the Fourier transform (from the frequency and amplitude) is the Fourier series of the original function. Essentially, the Fourier transform of a wave splits it into the individual waves that make up its Fourier series. Fourier transforms are typically used to analyze sound and to produce the frequency spectrum in spectroscopy.

Fourier transforms are used constantly in sound analysis. The human voice emits many sound waves of different frequencies when words are spoken. A microphone takes this noise and converts it into an electric signal that is recorded. However, the raw recording is hard to analyze because it is impossible to identify, by eye, the frequencies and intensities of the individual waves in the sound being emitted. Fourier transforms are especially useful in that they turn this unreadable data into data that can be used. The picture below contains two graphs. The one on the left shows the raw recording of a sound from a microphone, and the one on the right is its Fourier transform. The Fourier transform makes it easy to see the many different sound intensities (loudness) and frequencies (pitches) contained within the original recording [3].

In spectroscopy, Fourier transforms are used to decompose a ray of light containing many wavelengths into individual waves. This can be used to measure the intensity of the waves at a certain frequency without having to measure the intensity of the entire ray at that frequency. In the diagram above, the sound being recorded includes all of the frequencies, and the Fourier transform allows the intensity (amplitude) of each individual wave at a particular frequency to be measured. This is important because the sound can be broken down into the many waves that it contains, without any extra measurements needed. Digital sound is also generated using Fourier series and transforms. Speakers generate sound by emitting waves of different frequencies and amplitudes.
In order for the speaker to generate the sound described by the left image, it would create all of the waves with the frequencies and amplitudes described by the Fourier transform in the right image, as the combined effect of the many waves reproduces the original sound. Essentially, the speaker is emitting the Fourier series of the original wave to reproduce the sound [4]. Moreover, dubstep music is created using a synthesizer that modifies different types of waves. The synthesizer used by the dubstep artist generates waves using Fourier series that are then emitted through a speaker. The waves can be modified by variations of synthesis using Fourier transforms. Three types of synthesis generally used are additive, subtractive, and granular synthesis. Additive synthesis blends together multiple waves at different frequencies to create a sound. Subtractive synthesis removes different harmonic tones and frequencies from the sound prior to its emission from the speaker. Granular synthesis divides a segment of sound into multiple partitions and rearranges or removes them. The synthesizer is used to vary the length of the partitions and their arrangement, creating the larger or smaller breaks in the sound that listeners often hear in dubstep music. Other, more complex types of synthesis are also used to modify sound and create computerized effects within the music, giving it a digital quality [5].

Fourier series and transforms have a myriad of applications in the world. They play an influential role in numerous fields, from common computer applications, such as producing sound from a speaker, to scientific analysis. Even the qualities of dubstep music that make it so unique have a basis in Fourier series. Without them, many of the world’s greatest technological advancements may not have been possible.
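Both ideas, partial sums converging to the square wave and a transform splitting a signal into its component tones, can be checked numerically. The sketch below (Python with NumPy; the two-tone "recording" and its 8000-samples-per-second rate are invented for illustration) sums the first few odd harmonics of the square wave, then uses a fast Fourier transform to recover the frequencies of the synthetic signal:

```python
import numpy as np

# Partial sums of the square-wave series:
# S(x) ~ (4/pi) * sum over odd n of sin(n*pi*x/L)/n
def square_wave_partial_sum(x, L, n_terms):
    total = np.zeros_like(x, dtype=float)
    for k in range(n_terms):
        n = 2 * k + 1  # odd harmonics only (even-n coefficients are zero)
        total += np.sin(n * np.pi * x / L) / n
    return 4.0 / np.pi * total

L = 1.0
x = np.linspace(0.01, 0.99, 200)          # points where S(x) = 1
err5 = np.max(np.abs(square_wave_partial_sum(x, L, 5) - 1.0))
err50 = np.max(np.abs(square_wave_partial_sum(x, L, 50) - 1.0))
# more terms -> a smaller worst-case error away from the jumps

# A synthetic "recording": two tones, at 440 Hz and 1000 Hz
rate = 8000                                # samples per second (assumed)
t = np.arange(rate) / rate                 # one second of samples
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

spectrum = np.abs(np.fft.rfft(signal))     # the frequency spectrum
freqs = np.fft.rfftfreq(len(signal), d=1 / rate)
peaks = freqs[np.argsort(spectrum)[-2:]]   # the two strongest components
```

The two largest spectral peaks land at 440 Hz and 1000 Hz, the tones the signal was built from, which is exactly the "splitting into individual waves" described above.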

REFERENCES
[1] Kaplan, W. Advanced Calculus (Addison-Wesley Press, Cambridge, Massachusetts, 1952).
[2] Weisstein, E. W. “Fourier Series--Square Wave.” (2012).
[3] K, B. “Voice waveform and spectrum.” (2005).
[4] Alm, J. & Walker, J. Time-Frequency Analysis of Musical Instruments 44, (2002).
[5] Price, S. “Granular Synthesis.” (2005).


Surgical Applications of a MATLAB Based Electroencephalography Analysis Program in the Treatment of Various Forms of Epilepsy

by VEDANT SINGH, RUJUTA R. PATIL
under the guidance of ROXANA A. STEFANSCU, R.G. SHIVAKESHAVAN, SACHIN S. TALATHI
edited by RUOCHEN HUANG

ABSTRACT

The central nervous system is prone to a multitude of neurological diseases, with epilepsy being one of the most common neurological disorders in child and adult populations. Epilepsy is marked by recurrent seizures caused by overactive neural activity in multiple regions of the brain. Research was conducted to develop a Graphical User Interface (GUI), using the programming language MATLAB, that assists in the analysis of electroencephalograms (EEGs). The application aims to help scientists and surgeons visualize and analyze EEG seizure data. To produce such a program, several well-known functions such as fread and uigetfile were employed. The program was functional and accomplished its goal. The significance of this research is that it can help doctors determine the appropriate surgical treatment for patients with epilepsy, thereby expediting treatment and improving outcomes for those afflicted.


INTRODUCTION

The brain, an organ of the central nervous system, is prone to disease. One of these diseases is epilepsy, an enigmatic and recondite condition. Common neurological signs of epilepsy include recurrent seizures, which are caused by several factors such as genetics and head trauma. There are several types of epilepsy, but all share a similar neurological process by which seizures originate. Seizures are caused by synchronized, hyperactive neural activity and can be classified into two categories: focal and generalized. Focal seizures are marked by uncontrolled neuron firing that begins in a network in one hemisphere and remains there, whereas generalized seizures are marked by neuron firing that originates in one hemisphere and spreads into the other [7]. Because neural activity can spread from one region of the brain to others, a seizure’s classification can change from focal to generalized; epileptic seizures are accordingly classified as tonic-clonic generalized seizures or as focal seizures. Tonic-clonic generalized seizures are known for their abruptness and begin with the tonic phase, during which the muscles and the larynx contract, heart rate and blood pressure increase, and the pupils dilate. They end with the arrival of the clonic phase, during which respiration problems, unresponsiveness, and loss of bowel control occur. Confusion, headaches, fatigue, and muscle aches ensue. For focal seizures, the neural activity is concentrated in one region of the brain [7]. Focal epilepsies have a high probability of causing permanent or extended memory loss if the targeted region is the temporal lobe. This type of epilepsy is known as Mesial Temporal Lobe Epilepsy and is the focus of the current laboratory research.
One origin of this epilepsy syndrome is head trauma, which can alter the normal neural network, rendering it extremely susceptible to epileptogenesis. Epileptogenesis is the chemical or cellular process by which neurons come to propagate impulses in an irregular pattern. The neural activity is captured by an electroencephalogram, which uses 30 electrodes (placed on the scalp or, in intracranial recordings, beneath the skull) to detect electrical activity in the cortex. This data is then used to determine the type of seizure, the location of its initiation, the episode length, and the strength of the seizure. It can also be used to predict future seizures through analysis of interictal spikes [7]. The aim of this project was to create a MATLAB based program that could assist researchers and neurologists in analyzing electroencephalograms (EEGs). The main challenge was

creating a program that was partially automated, so that the user could load a file through a simple file-select feature, analyze the EEG, and save the time periods during which irregular neural activity occurred into an easily readable text file. To address this problem, the program was coded using GUIDE, a GUI maker within MATLAB, with callback functions to analyze the input data and store the results; if coded correctly, the program would allow for easier viewing and analysis of EEGs. [1,4,7]

METHODS

The program was created in MATLAB using the GUIDE GUI maker, which is part of the MATLAB software. The first component coded was a drop-down menu with time options of 30 seconds, 1 minute, 5 minutes, 30 minutes, 1 hour, 3 hours, and 7 hours. After coding the time options in the drop-down menu, the update button was coded by connecting the “popupmenu1_sel_index” function to the “set” function, which changed the x-axis to the selected time option; the xlimits functions were used to set the time options’ values. After a time option was selected, the “XLim” was set to that option. To import data into the axes, a menu bar with a drop-down “Import Data…” option was created. The code for selecting a file upon clicking “Import Data…” used the function “uigetfile”, which selects and stores the name of the file to be opened. The file identifier “fid” was then set to the output of the function “fopen”, which opened the file previously selected with “uigetfile”. After the pathname and filename were associated with “fid” and the file was opened by “fopen”, the code used the accompanying function “fread”, with parameters set so that the program could read the selected file. The output of “fread” was stored in a variable, which was subsequently graphed using the function “plot”. The viewing panel that displays the graph did not require any special coding, as there was only one viewing display and therefore no conflicting displays for the code to distinguish between. For the scrollbar, the GUI was coded so that it could view the selected time window and scroll through the recording at one-second intervals. Using the slider button maker from GUIDE, the slider was created and coded using the switch function and several case functions.
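The import pipeline described here (uigetfile, then fopen, then fread, then plot) can be sketched outside MATLAB as well. Below is an analogous sketch in Python; the binary format (little-endian 16-bit integer samples) is an assumption for illustration, not the actual file format used by the authors’ GUI:

```python
import os
import tempfile

import numpy as np

# Analogue of the MATLAB fopen/fread steps: read raw binary samples into an
# array ready for plotting. The "<i2" (little-endian int16) sample format is
# a hypothetical choice for this illustration.

def read_eeg_samples(path, dtype="<i2"):
    """Read raw binary samples from a file (the fread step) as floats."""
    with open(path, "rb") as f:              # the fopen step
        return np.frombuffer(f.read(), dtype=dtype).astype(float)

# Demo: write a tiny fake recording to a temporary file, then read it back.
fake = np.array([0, 12, -7, 30, -2], dtype="<i2")
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as tmp:
    tmp.write(fake.tobytes())
    path = tmp.name

samples = read_eeg_samples(path)   # ready to be graphed (the plot step)
os.remove(path)
```

In MATLAB, the same round trip would be the fid = fopen(...) / data = fread(fid, ...) pair; the point of either version is that the raw bytes become a numeric vector the viewer can window and scroll through.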
The slider was then coded so that its initial position was zero and its final position was equal to the value of the time option selected from the drop-down menu. For example, when the 30-second interval was selected, the final position of the slider would be 30 seconds. The final position was denoted by the function “xmax”, and the slider’s motion was coded using the “set”, “gca”, “gcbo”, and “get” functions. The “set” function was used to update the current axes, “gca”, to the “XLim” defined by the drop-down menu, to scroll from the previous scrollbar position, “gcbo”, and to calculate the change in position. The function “xincrements” was used to define the increments between the x-axis values of the selected time option: for 30 seconds the increments were one second, and for 7 hours they were 10 minutes. Finally, to speed up the GUI experience, the GUI was coded so that “CTRL-F” emulated the menu option “Import Data…” and “CTRL-I” launched a popup window with instructions.
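The windowing arithmetic behind the slider can be captured in a few lines (again a sketch, here in Python; the option names and the one-second scroll step follow the description above, while the clamping behavior at the end of the recording is an assumed detail):

```python
# Map a selected time-window option and a slider position to axis limits,
# mirroring the XLim/scrollbar behaviour described in the Methods.
TIME_OPTIONS_S = {                      # drop-down options, in seconds
    "30 seconds": 30, "1 minute": 60, "5 minutes": 300,
    "30 minutes": 1800, "1 hour": 3600, "3 hours": 10800, "7 hours": 25200,
}

def window_limits(option, slider_pos_s, total_s):
    """Return (xmin, xmax) for the viewing axes.

    slider_pos_s advances in one-second steps; the window is clamped so it
    never runs past the end of the recording.
    """
    width = TIME_OPTIONS_S[option]
    xmin = max(0, min(slider_pos_s, total_s - width))
    return xmin, xmin + width

# e.g. a 2-hour (7200 s) recording viewed through the "30 seconds" window
lims_start = window_limits("30 seconds", 0, 7200)      # (0, 30)
lims_mid = window_limits("30 seconds", 100, 7200)      # (100, 130)
lims_end = window_limits("30 seconds", 7300, 7200)     # clamped: (7170, 7200)
```

Each slider callback then amounts to recomputing these two numbers and handing them to the axes, which is exactly what the “set(gca, 'XLim', …)” call does in the MATLAB version.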


RESULTS

The program improved from a basic presenter of EEG data to a more complex and intuitive application that allows for axis changes and viewing options. The drop-down menus are effective and functional in changing the axes. Most importantly, the program runs without glitches at startup or during use, which is indicative of well-written and well-structured code.

DISCUSSION: As the results show, the GUI application was coded correctly and was functional for viewing EEG signals, but it was not able to analyze the data as intended. The GUI, as stated in the introduction, has the capability of assisting scientists and medical professionals in their respective fields by allowing easy viewing of EEG data. Medically, the GUI will likely provide an advantage by increasing the success rates of pre-surgical and post-surgical procedures. For pre-surgical operations, the GUI may allow recordings to be reviewed meticulously before surgery is considered. Common pre-surgical procedures allow for the identification of the functional and structural characteristics of the epileptic focus. Currently, video monitoring of EEGs allows for real-time evaluation of the anatomic location of the epileptic focus, as well as the physical manifestations of abnormal neural activity. Neuro-electrophysiological methods that use implanted electrodes to record the neural activity, in combination with neuroimaging techniques such as MRI scans, have allowed for structural analysis of the lesions that cause the hyperactivity of certain neural networks involved in epileptic seizures. The GUI may enable future EEG data analysis for the pre-surgical task of determining the location and severity of the lesion. Once these analyses of pre-surgical EEGs are completed, surgeons can excise the lesion more accurately. This critical analysis may help surgeons better delineate the lesion and thereby provide more effective and precise care. The GUI may also prove useful in post-surgical evaluation.

However, the GUI is not the most intuitive or complete program for analyzing EEGs. It is limited by the number of time options currently available; additional options such as 2 minutes, 3 minutes, 4 minutes, 10 minutes, 15 minutes, 45 minutes, 2 hours, 6 hours, 8 hours, 9 hours, 10 hours, and 12 hours would allow for further EEG data investigations. The GUI also lacks pause, fast-forward, and rewind buttons; implementing these would give the user more manual control and could thereby lead to further critical analysis of EEG recordings. In addition, the GUI application does not have an automatic feature-detection component for isolating abnormal neural activity in the EEG signal; our attempts so far to develop such a component have not produced reliable results. A further limitation is that the scrollbar does not allow for the inspection of EEG data outside of the time interval selected initially. Finally, if this program were combined with other software packages designed to monitor other vital signs, such as heart rate, blood pressure, pulse oximetry, and respiratory movements, it could provide powerful tools for analyzing and diagnosing various medical conditions. Such a suite of medical monitoring could allow for further analysis and more comprehensive treatment of patients before and after surgery. In conclusion, this research has provided a strong foundation for expansion into a more inclusive and intuitive GUI for analyzing EEG data. Currently, the GUI allows the user to view EEG recordings in a simple manner and to follow the progression of the EEG over time with basic tools; if the improvements mentioned above were implemented, the GUI could become more comprehensive and analytical.


Fig.1: Proper opening of GUI



As Appendix A shows, the GUI opened and displayed all the components that were coded. Appendix B shows that the file-selection component, the function “uigetfile”, worked. Appendix C shows that the “fopen”, “fid”, “fread”, and “plot” functions all worked by correctly displaying the graph on the axes in the GUI. In Appendix D, the GUI was functional in displaying all the possible time options. Appendix E displays the GUI’s capability of changing the x-axis values to the options in the drop-down menu while still displaying the graph created in Appendix C. Finally, Appendix F shows that the scrollbar can move the graph left and right.

In post-surgical evaluation, the GUI may allow for a more accurate long-term assessment of surgical operations and their efficacy. Specifically, the most common post-surgical assessment is to determine whether the correct excision was made and how effective the excision was in treating the hyperactive neuronal networks. Another possible post-surgical application of the GUI is assisting in the determination of possible side effects arising from the surgery.



Fig.2: Select File window functional in GUI
Fig.5: Axes change to selected time option



Fig.3: GUI graphs the previously selected data as best-fit lines


Fig.4: GUI displays all possible time options for axes

30 | JOURNYS | SPRING 2013


Augmented Reality: The Harbinger of the Sixth Sense

By: Anjana Srinivas


Imagine you are on a trip to Paris, intently gazing at the wondrous architecture of the Eiffel Tower. Without even taking out your phone, you frame the structure with a hand gesture to take a quick photo of the building, and then receive a guided tour of the attraction on the palm of your hand. Or picture going to a clothing store, where you find a new shirt that you want to buy. In a frenzied excitement, you rush to the fitting rooms to try it on, only to find that all the rooms are occupied. No worries: you immediately take out your smart phone to scan the shirt, and see a superimposed image of yourself wearing it. Does this sound like a scene from one of those futuristic science fiction movies? Definitely not; this is reality: augmented reality! Augmented reality, or AR, is a technology used to add information or meaning to real objects and places. AR takes the view as seen by the user and enhances it with computer-generated digital graphics and imagery. It helps one see beyond what the eye sees, simulating the possession of a sixth sense. AR technology gives direct or indirect views of a physical, real-world environment, the elements of which are augmented by computer-generated sensory inputs. These sensory inputs, which include sound, video, graphics, or GPS data,

Edited by: Jessica Yu

Reviewed by: Dr. John Allen

HAIWA WU / GRAPHIC

serve to enhance one’s perception of reality. AR traces its origin to military applications introduced in the early nineties through the heads-up display devices (HUDs) used by jet fighters and helicopters to project night vision and target information. Later, it was used at the Boeing factory to assist maintenance engineers by overlaying schematics of the wiring mesh and maintenance instructions on top of the parts being repaired [1]. From its modest beginnings, in which only big companies were able to afford the technology, AR has broken the wall of unfeasibility and inaccessibility. It has now found its way into consumer devices that are in daily use. Consumer devices such as smart phones and tablets contain processors, sensors, and display and input devices that project data into the user’s field of vision, corresponding with the real object or space the user is observing. These devices contain elements that often include cameras and sensors such as accelerometers, GPS, and a solid-state compass, making them suitable AR platforms. The user interface in an AR system usually relies on speech input or gestures from the user’s body movements, as well as external devices such as styluses, pointers, or wands [2].


Today’s AR applications use cameras, GPS location services, motion sensors, graphics, application processors, and the high-speed wireless connectivity found on most smart phones [4]. Even though several AR-related applications are available for smart phones today, none of them provide the desired user experience and, more importantly, none pass the “wow!” test. Therefore, AR is still in its infancy, waiting for a killer application that will propel it to the heights that its proponents claim it will reach.

Since modern smart phones and tablets are popular today and contain the sensors that are necessary for AR technology, many AR applications will use the smart phone or tablet screen as the rendering element. However, the future AR rendering system will depend on AR-enabled sunglasses that are lightweight, comfortable, and visually appealing. While AR technology currently involves superimposing graphics and textual content on live images viewed through cameras on smart phone screens, future AR applications will involve superimposing graphics and text on real-world objects, as researchers at MIT have demonstrated through their “sixth sense” wearable AR device [5]. This application provides exciting possibilities for AR, as it brings digital information out of the confines of an electronic gadget and into the tangible world, allowing interaction with information using natural hand gestures.

The possibilities of AR are enormous, from being able to get instant information on landmarks to more serious applications in the medical field. An example of the former would be a tourist pointing his or her camera at the White House and almost instantly receiving historic and current information about it. Applications in the medical field could enable a doctor to examine a patient and display the patient’s vital signs, such as heartbeat, blood pressure, and medical history, on special AR goggles. AR also has immense potential to revolutionize teaching methods and make learning more informative and interactive. The ability to visualize objects in 3D and manipulate them in real time is a compelling feature that can deliver sophisticated teaching applications. Reading a book, especially for young children, can be made an immersive experience by augmenting text with background sights and sounds; graphically transporting readers and engaging them as real observers in the scene would make reading an even more enjoyable experience for both children and adults.

Future AR applications will require increased processing power from microprocessors that can deliver high performance at low power and cost. Powerful image recognition and graphics technology are also at the heart of enabling seamless and smooth AR applications. Augmented reality becoming a reality and living up to its lofty promises is predicated on advancements in high-speed wireless broadband connections that will enable larger amounts of data to be transferred with the lowest latency. Miniaturization of electronic components, cameras, and sensors could make it possible to embed an entire AR system into an eye lens. This could perhaps be the holy grail of AR, heralding the arrival of the future man-machine. AR has the potential to change the way we view and interact with the world around us, forever changing the nature of our interpersonal interactions and perception of reality.


REFERENCES: [1] Obst, Benjamin. “Augmented Reality.” (2009). [2] Mullins, Robert. “The Next Smart Phone App: Augmented Reality.” (2011). [3] Yuen, S., Yaoyuneyong, G., & Johnson, E. “Augmented reality: An overview and five directions for AR in education.” Journal of Educational Technology Development and Exchange 4 (2011). [4] Mistry, P., & Maes, P. “SixthSense – A Wearable Gestural Interface.” (2009). [5] Mistry, P., & Maes, P. “Unveiling the Sixth Sense.” http://www.ted.com/talks/pattie_maes_demos_the_sixth_sense.html (2009).




STAFF PRESIDENT Angela Zou EDITOR IN CHIEF Apoorva Mylavarapu INTERSCHOOL COMMITTEE MEMBERS Angela Wang (Westview), Michelle Banayan (Beverly Hills), Kenneth Xu (Scripps Ranch) CHAPTER PRESIDENTS Anunay Kulshrestha (Delhi Public School), Deanie Chen (Olathe East), Kevin Li (Palo Alto), Mohammed Alam (Mt. Carmel), Namana Rao (Blue Valley Northwest), Olav Underdal (Del Norte), Radhika Tyagi (Wootton), Rahel Hintza (Cathedral Catholic), Rohit Goyal (West Chester East), Seaton Huang (Lakeside), Tiffany Chen (Lynbrook), Wendy Tang (Mills), William Ton (Alhambra) ASSISTANT EDITOR IN CHIEF Sarah Bhattacharjee (Torrey Pines) MANAGING EDITORS Alvin Wong (Alhambra), Fabian Boemer (Scripps Ranch), Jerry Chen (Del Norte), Samarth Venkatasubramaniam (Palo Alto), Sharon Liou (Westview), Reeny Thomas (Cathedral Catholic) ASSISTANT MANAGING EDITORS Alexandra Vignau (Cathedral Catholic), Spencer Yu (Palo Alto), Vaibhav Jayaraman (Del Norte) VICE PRESIDENTS Allison Zhang (Palo Alto), Andrew Lee (Palo Alto), Danni Wang (Blue Valley Northwest), Devonne Hwang (Alhambra), Emily Veneroth (Cathedral Catholic), Gha Young Lee (Torrey Pines), Jeremy Fu (Palo Alto), Kathryn Li (Palo Alto), Lilia Tang (Palo Alto), Lucy An (Torrey Pines), Melodyanne Cheng (Torrey Pines), Michael Zhang (Westview), Parul Pubbi (Torrey Pines), Rachael Lee (Torrey Pines), Sarah Lee (Torrey Pines), Timothy Han (Alhambra), William Hang (Scripps Ranch) STAFF ADVISOR Mr. Brinn Belyea CHAPTER STAFF ADVISORS Mrs. Cheryl Stock, Mr. Daniel Hyke, Ms. Valeria Draper SCIENTIST REVIEW BOARD Dr. Aaron Beeler, Dr. Akiva S. Cohen, Dr. Amiya Sinha-Hikim, Mr. Andrew Corman, Dr. Aneesh Manohar, Dr. Arye Nehorai, Dr. Benjamin Grinstein, Mr. Brooks Park, Dr. Bruno Tota, Mr. Craig Williams, Mr. Dave Ash, Mr. Dave Main, Mr. David Emmerson, Dr. Dhananjay Pal, Dr. Erika Holzbaur, Dr. Gang Chen, Dr. Gautam Narayan

Sarkar, Dr. Greg J. Bashaw, Dr. Haim Weizman, Dr. Hari Khatuya, Dr. Indrani Sinha-Hikim, Ms. Janet Davis, Dr. Jelle Atema, Dr. Jim Kadonaga, Dr. Jim Saunders, Dr. Jody Jensen, Dr. John Allen, Dr. Jon Lindstrom, Dr. Joseph O’Connor, Ms. Julia Van Cleave, Dr. Karen B. Helle, Dr. Kathleen BoeszeBattaglia, Dr. Kathleen Matthews, Ms. Kathryn Freeman, Ms. Katie Stapko, Dr. Kelly Jordan-Sciutto, Dr. Kendra K. Bence, Dr. Larry G. Sneddon, Ms. Lisa Ann Byrnes, Dr. Maple Fang, Mr. Mark Brubaker, Dr. Michael J. Sailor, Mr. Michael Santos, Dr. Reiner Fischer-Colbrie, Dr. Ricardo Borges, Dr. Rudolph Kirchmair, Dr. Sagartirtha Sarkar, Ms. Sally Nguyen, Ms. Samantha Greenstein, Dr. Saswati Hazra, Dr. Simpson Joseph, Dr. Sunder Mudaliar, Dr. Sushil K. Mahata, Ms. Tania Kim, Dr. Tanya Das, Dr. Tapas Nag, Dr. Thomas Tullius, Ms. Tita Martin, Dr. Todd Lamitina, Dr. Toshinori Hoshi, Ms. Tracy McCabe, Ms. Trish Hovey, Dr. Xin Chen SCIENTIST REVIEW BOARD COORDINATOR Sumana Mahata CONTRIBUTING AUTHORS Anjana Srinivas, Annie Xu, Austin Su, Colin Dillingham, Diane Forbes, Eric Chen, Fabian Boemer, Harshita Nadimpalli, Jessica Yu, Lilia Tang, Maria Ginzburg, Mattie Mouton-Johnston, Maxinder S. Kanwal, Nilay Shah, Peter Manohar, Rujuta Patil, Varun Bhave, Vedant Singh CONTRIBUTING EDITORS Ahmad Abbasi, An Nguyen, Emily Sun, Eric Chen, Frank Pan, Ivan Dang, Jenny Li, Jessica Yu, Jonathan Xia, Joy Li, Kenneth Xu, Kevin Li, Mina Askar, Ruochen Huang, Selena Chen DESIGN EDITORS Apoorva Mylavarapu, Christina Baek, Grace Chen, James Lee, Stephanie Hu GRAPHICS EDITOR Wenyi (Wendy) Zhang ASSISTANT GRAPHICS EDITOR Crystal Li CONTRIBUTING GRAPHIC DESIGNERS Aisiri Murulidhar, Amy Chen, Angela Wu, Bofan Chen, Carolyn Chu, Christina Baek, Crystal Li, Eric Tang, Haiwa Wu, Kaleigh Fleischmann, Katherine Luo, Kristina Rhim, Linh Luong, Mahima Avanti, Meejin Choi, Rebecca Wang, Rhea Bae



