Page 1

Journal of Youths in Science

VOLUME 4 ISSUE I

Quantum Physics

PAGE 5

Quantum Computers PAGE 7

The Winged Keel

PAGE 28

Blind Spots

PAGE 30

issue design daniel liu and heather chang

art by crystal li


ABOUT

The Journal of Youths in Science (JOURNYS) is the new name of the student-run publication Falconium. It is a burgeoning community of students worldwide, connected through the writing, editing, design, and distribution of a journal that demonstrates the passion and innovation within each one of us.

PARTICIPATING CHAPTER SCHOOLS

Torrey Pines High School, San Diego CA
Mt. Carmel High School, San Diego CA
Scripps Ranch High School, San Diego CA
Westview High School, San Diego CA
Beverly Hills High School, Beverly Hills CA
Walnut High School, Walnut CA
Blue Valley Northwest, Overland Park KS

SUBMISSIONS

All submissions are accepted at submit@journys.org. Articles should satisfy one of the following categories:
Review: A review is a balanced, informative analysis of a current issue in science that also incorporates the author's insights. It discusses research, concepts, media, policy, or events of a scientific nature. Word count: 750-2000
Original research: This is a documentation of an experiment or survey that you did yourself. You are encouraged to bring in relevant outside knowledge as long as you clearly state your sources. Word count: 1000-2500
Op-Ed: An op-ed is a persuasive article or a statement of opinion. All op-ed articles make one or more claims and support them with evidence. Word count: 750-1500
DIY: A DIY piece introduces a scientific project or procedure that readers can conduct themselves. It should contain clear, thorough instructions accompanied by diagrams and pictures if necessary. Word count: 500-1000
For more information about our submission guidelines, please see http://journys.org/content/procedures

CONTACT

Contact us if you are interested in becoming a new member or starting a chapter, or if you have any questions or comments.
Website: www.journys.org
Email: info@journys.org
Mailing: Torrey Pines High School, Journal of Youths in Science, Attn: Brinn Belyea, 3710 Del Mar Heights Road, San Diego, CA 92130

SPONSORS

journys 1


A LOOK INSIDE

In this issue, we wish to provide a snapshot of the astonishing diversity and scope of science, from quantum mechanics to sea salt. Several articles feature the work of students who have earned accolades in science fairs and competitions. Mike Wu and Stephen Yu, seniors at Torrey Pines High School, created a program that detects and monitors motion in blind spots to potentially prevent car accidents. Their research received third place in the computer science division at the Intel International Science and Engineering Fair and was also featured at the Greater San Diego Science and Engineering Fair and California State Science Fair. Point Loma High School senior Matthew Morris received a patent for his research on optimizing the design and performance of sailboat keels. He was a finalist in the 2011 International Google Science Fair and an exhibitor at the California State Science Fair and Greater San Diego Science and Engineering Fair. Selena Pasadyn, a senior at Brunswick High School, assessed the impact of educating youth on breast cancer awareness. Her research tentatively suggests that doing so results in increased commitment to screening and prevention later in life. She was a regional finalist in the 2011 Young Epidemiology Scholars Competition. -Angela Zou, Editor-in-Chief

Letter From the President.............................................3

Opinion
Is the Usage of Animals in Laboratory Experimentation Ethical?........4

Technology/Physics
Pondering Reality through Quantum Physics.............................5
Quantum Computers: The Fastest of Them All............................7
Breakthroughs in Aviation.............................................9

Health
Bi-Winning? A Look Inside Charlie Sheen's Brain......................11
Oh, My Opia!.........................................................13
The Once Feared Whoop Strikes Back...................................15

Medicine
Enantiomers: The Twins of Drugs......................................17
Parasitic Worm Treatment.............................................19
A Potential Cure for Parkinson's Disease.............................21

Lifestyle
Made With 100% All Natural Sea Salt..................................23
The Toxic Beauty of Cosmetics........................................25
Van Gogh's Faded Artworks............................................27

Original Research
The Winged Keel......................................................28
Position and Vector Detection of Blind Spot Motion with
Horn-Schunck Optical Flow............................................30
Breast Cancer........................................................32


letter from the president

In Fall 2008, we saw a new element emerge on the periodic table of Torrey Pines High School, an intellectual beacon promoting a love for science: Falconium. Over the past four years, this element has become a thriving community on campus, inspiring dozens of Torrey Pines students to further their interests in science by contributing to the student-run science journal. Now, with a staff of over 100 active members and the release of our eighth issue, I am proud to introduce JOURNYS: the Journal of Youths in Science. This summer, we worked with Alice Fang, founder and former president, to expand the journal to several chapter schools nationwide, promoting Falconium's mission among a more far-reaching community of students. Staff members voted upon a name that would represent not just the Torrey Pines Falcon, but the joys of science and discovery, a never-ending search for knowledge; thus, Falconium gave rise to the Journal of Youths in Science, a publication that we fondly call JOURNYS. As Alice heads off to college this year, the science journal remains as active as ever in her wake. The "scientific and creative potential" that she saw in Falconium four years ago is more vibrant than ever, keeping her vision alive in the student body at Torrey Pines and beyond. In this first issue of JOURNYS, we celebrate the collaborative nature of science, featuring the work of Torrey Pines students as well as young scientists from other high schools. We welcome several new chapter schools as up-and-coming collaborators in the JOURNYS mission: Westview High School, Scripps Ranch High School, Beverly Hills High School, Walnut High School, and Blue Valley Northwest, to name a few. This issue also features the original research of several accomplished students who participated in the Intel International Science and Engineering Fair; the articles share their insights on the physics of yacht design, detection of motion in blind spots, and more.
As articles from students across the country come together in a bustling hub of ideas, one begins to truly understand why so many people devote their entire lives to science. Just as science perpetually strives to reach new heights, we live in a realm of constant change. This year we have already seen the discovery of a new potentially habitable exoplanet, groundbreaking research with synthetic DNA, and novel voice recognition software in a surge of the latest mobile phones. Here at JOURNYS, we live by that same ideal that governs the scientific world: progress. This is driven by the dedication of the staff, who devote hours to writing, editing, planning, and design; to them I express my sincerest thanks, for their efforts are truly the keystone of our mission. As we expand to a wider student audience, inspiring dozens of scientific minds, I look forward to sharing our work with you in the months ahead. -Rebecca Su, President

OP-ED
by Bethel Hagos

Is the Usage of Animals in Laboratory Experimentation Ethical?
Whether or not animals should be the involuntary subjects of laboratory research has been a disputed topic for many years. While animal rights activists protest the direct threat that such research poses to the welfare of the animals involved, many researchers advocate the use of animals in lab settings because of the undeniable scientific advancements it has produced. The term "animal rights" is based upon the idea that using the life of an animal for a study is morally wrong because the animal cannot defend itself from being tested upon and therefore should not have its life used for science. With this in mind, organizations like People for the Ethical Treatment of Animals and the British Union Against Vivisection are angered by what they consider poorly regulated and cruel treatment of animals bred specifically for testing and euthanized after a single experiment. They cite, for example, physical methods of reducing pain that cause loss of sensation and occasionally leave animals instantly brain dead. Opponents of animal cruelty claim that innocent animals die violent deaths every day because regulatory agencies require only that research projects involving living creatures be scientifically justified. Such groups have historically been known for taking matters into their own hands. In one instance, the Animal Liberation Front raided a UC Riverside laboratory that was speculated to have mistreated a monkey upon its removal from its mother. Researchers allegedly sewed its eyes shut and strapped a sonar sensor to its head to simulate sensory substitution in the blind.
Thousands of dollars in damage and eight months of investigation by the National Institutes of Health later, it was revealed that no corrective actions needed to be taken against the University: even the regulating agencies observed no signs of regulatory infringement in the laboratory. The protesters had acted on speculation. Is it possible that those who oppose animal testing are quick to jump to conclusions about the cruel treatment of animals in all experiments? The Institutional Animal Care and Use Committee (IACUC) mandates very strict regulations for the use of animals in research. Scientists are required to get IACUC approval for the dose, route (e.g., intraperitoneal, intramuscular, subcutaneous, or intravenous), and duration of each drug treatment. The IACUC requires scientists to identify whether administration of any drug would cause discomfort to the animals being tested and, if so, to propose a way to alleviate such pain. The method of euthanasia also requires IACUC justification and approval. IACUC personnel make surprise visits to laboratories working with animals to ensure that scientists are following the approved protocols; violating an approved protocol results in a hold on, or termination of, any future studies by the offending laboratory. On the other side, scientists emphasize the practicality of animal testing. No machine can model the complex interactions of the body or an organ in response to new medicines. Animal testing, however, can demonstrate the workings of the


human body in response to medications in ways that cannot be efficiently revealed otherwise. In some cases, certain animals are the only option for testing. For example, leprosy occurs naturally only in humans and armadillos and cannot be grown in culture for research; hence, armadillos have been used in the formulation and testing of leprosy vaccines. Without the contribution of armadillos, this research could not have been done. Additionally, frogs have been found to be particularly helpful in the study of muscular disease because their unique muscle configuration allows single fibers to be isolated and observed for long periods of time in vitro. Largely because of such animal testing, the current understanding of muscle physiology has advanced considerably. Scientists also argue that it is reasonable to use mice in applied research, since they share 99% of their genes with humans. The usefulness of animals in scientific research reminds us of why animal testing was first initiated. In 1937, the "Elixir Sulfanilamide" incident resulted in the deaths of more than 100 people shortly after they consumed the untested medicine. The tragedy could easily have been prevented, but at the time there was no federal legislation preventing unsafe medicines from being produced and sold. So what is stopping advocates of animal experimentation today? Opponents of animal testing often cite pain caused to the animal as a reason to discontinue the practice. However, not all experiments involving animal subjects are painful, and for those that are, researchers can provide anesthetics intravenously for mammals or through submersion in a water-tricaine mixture for fish and amphibians.
Looking back, almost all progress in 20th century medicine has relied at least partially on the usage of lab animals: insulin tested on dogs by Banting and Best in 1921 led to a new diabetes treatment, the development of the polio vaccine involved primates and, moreover, Louis Pasteur’s germ theory could not have been deduced without testing the spread of anthrax infection on sheep. Is it not the moral obligation of the scientist to find ways to cure fatal diseases? Animal research is a must for the discoveries of new medicine to treat dreadful diseases like diabetes, hypertension, heart failure, and cancer. Works Cited Balcombe, Jonathan. “Animal Testing and Animal Experimentation Research PCRM.” Physicians Committee for Responsible Medicine (PCRM) - Neal Barnard, M.D., President. Web. 17 Mar. 2011. <http://www.pcrm.org/resch/ anexp/index.html>. Bankowski, Zbigniew, M.D. Www.cioms.ch. Web. 13 Mar. 2011. <http://cioms.ch/publications/guidelines/1985_texts_of_guidelines. htm>. Christensen, Mark S. “The Use of Laboratory Animals: An Ethical Balancing Act.” Public Responsibility In Medicine and Research. Web. 13 Mar. 2011. <http://www.primr.org/uploadedfiles/primr_site_home/resource_center/articles/an%20ethical%20balancing%20act.pdf>. Cohn, Meredith. “Animal Testing - Alternatives to Animal Testing Gaining Ground - Baltimore Sun.” Featured Articles From The Baltimore Sun. 26 Aug. 2010. Web. 13 Mar. 2011.<http://articles.baltimoresun.com/2010-08-26/health/bs-hs-animal-testing- 20100826_1_animaltesting-animal-welfare-act-researchers>. “Lpag - Biomed for the Layperson.” Lpag Home. Web. 17 Mar. 2011. <http://www.lpag.org/layperson/layperson.html#history>.

opinion 4


pondering reality

through

Quantum Physics

by Michelle Oberman

Most people would not hesitate to answer yes. While this question may seem silly, upon further contemplation it becomes apparent that the answer is not so obvious; the simple adage becomes quite a complex paradox when one actually seeks an answer. Such is the nature of quantum physics, a mind-boggling branch of physics crossed with philosophy that, to put it bluntly, succeeds in complicating even the simplest phenomena. Quantum physics is, by definition, the study of a scale so small that the quantum mechanical model of the atom takes effect, where the laws that govern reality as we know it contradict our perceptions of what is "real." In this regard, the study of the quantum model of the atom can help philosophers and scientists achieve a greater understanding of the true nature of reality.

“If a tree falls in a forest, but no one hears it, does it still make a sound?”

5 technology/physics


Before one can understand the philosophical implications of quantum physics, it is important to understand the scientific basis for the theory. First, and perhaps most important, is wave-particle duality, which states that electrons, much like light, sometimes act as particles and sometimes act as waves. Wave-particle duality, strangely enough, implies that there is no inherent difference between matter and energy at the subatomic level, which supports the inference that atoms are not the finite things we have long been taught to believe. Because subatomic particles can sometimes behave as matter and sometimes as energy, atoms are merely possibilities of existence that have the properties of both waves and particles. Thus, from a quantum physics standpoint, we, along with everything around us, are no more than compilations of possibilities. Werner Heisenberg, a German physicist, was interested in exploring the question that naturally presented itself after the discovery of wave-particle duality: if electrons are both waves and particles, then where are they located in the atom? The resulting Heisenberg uncertainty principle states that it is impossible to identify the precise location of an electron in an atom because one cannot simultaneously measure both the position and the velocity of a particle without altering either measurement. In order to identify a particle's location, or in other words "see" it, we must shine photons on the object and gain information from the reflected photons. The photons' momentum has virtually no effect on massive objects, but it can drastically change the momentum, and thus the velocity, of small particles. This means that position and velocity cannot both be known with complete accuracy on the subatomic level. A thought experiment known as Schrödinger's cat further manifests the bizarre implications of quantum physics.
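In symbols, the tradeoff Heisenberg identified is usually written as the inequality below, where Δx is the uncertainty in position, Δp the uncertainty in momentum, and ħ the reduced Planck constant:

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```

The product of the two uncertainties has a fixed lower bound, so shrinking one necessarily inflates the other.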
In a cruel, but fortunately theoretical experiment, Schrodinger places a hypothetical cat in a hypothetical box. At this point, the cat is undoubtedly alive. However, along with the cat, a vial of a highly toxic substance is placed in the box. There is a fifty-fifty chance that the vial will open, killing the cat instantly. Without opening the box, how do we know if the cat is alive or dead? We don’t. This famous experiment introduces a key concept: the act of observing actually has an effect on the outcome. The Copenhagen interpretation, one of the many interpretations of quantum mechanics, tells us that until we open the box, the cat is both alive and dead. According to this interpretation, it becomes one or the other only when either is explicitly observed. So what does this simple experiment imply about reality? First, that reality is not an absolute. Reality is merely a compilation of possibilities, which of course correlates with the quantum theory of the atom – because an atom is but a possibility and the universe is made up of billions of atoms, the entire universe is not finite, but rather a series of probabilities that only become certain once they are observed (although according to the Heisenberg uncertainty principle this cannot take place on the atomic level). This is shown in the experiment, as the cat is both alive and dead until it is observed, so the observer actually plays a role in defining reality. The idea of an observer also relates back to the tree paradox - if no one hears the tree fall,

then does it still make a sound? According to quantum physics, there is no simple yes or no answer. If no one hears it fall (and thus there is no observer), we must assume that both possibilities have occurred: the tree fell and made a sound, and it fell and did not make a sound. Only when the tree is observed can we know with certainty what has happened. Finally, Schrödinger's cat can be used to illustrate the many-worlds theory, another important interpretation of quantum mechanics. Some physicists, such as Stephen Hawking and Richard Feynman, believe that for each possible state of an object, a new parallel universe is created. According to this theory, in the case of Schrödinger's cat, two parallel universes are created: one in which the cat is alive, and one in which it is dead. In this way, the many-worlds theory proposes the notion that there are many versions of reality. For each decision point we come to, for every choice we make, a new reality is concocted, leading to a complex and exponentially branching net of realities. Quantum physics is arguably the strangest and yet most intriguing branch of science that exists. It takes a closer look at phenomena that seem obvious, producing unusual philosophical insights about the nature of virtually everything that surrounds us. Things that we take for granted, such as the fact that matter is finite, are questioned as quantum physicists strive to discover the true meaning of reality. While it is certainly complicated, quantum physics provides a new and astonishing view of the world around us, a revelation put best by Niels Bohr: "Those who are not shocked when they first come upon quantum theory cannot possibly have understood it." Works Cited Davis, Raymond E., et al. Modern Chemistry. United States: Holt, Rinehart and Winston, 2006. Print. Higgo, James. A Lazy Layman's Guide to Quantum Physics. 1999. Web. 14 Nov. 2010. Jones, Andrew. Quantum Physics Overview. Web. 14 Nov. 2010. Quantum Theory.
Tech Target, 22 June 2006. Web. 14 Nov. 2010. The Law of Attraction and Quantum Physics. 2008. Web. 14 Nov 2010. What is Quantum Physics. Think Quest. Web. 14 Nov. 2010.

art by claire chen


by Frank Pan

Quantum Computers: The Fastest of Them All

Is it possible to have a computer upload a movie, open several Internet browsers, and download an album of music, all within a millisecond? Many scientists would say yes. The idea of creating a computer capable of almost anything has been around since 1981, when Paul Benioff, a physicist at the Argonne National Laboratory, applied quantum theory (a branch of physics explaining the interactions between matter and energy at atomic or subatomic levels) to computers. Since then, quantum computers have come a long way, moving out of science fiction and into the real world, as scientists are currently developing early prototypes. One way of measuring the superior performance of quantum computers is through FLOPS (floating-point operations per second). A current PC's output is measured in gigaflops (billions of floating-point operations per second), but David Deutsch of Oxford University believes a consumer quantum computer would be able to run at about 10 teraflops, or trillions of floating-point operations per second. That means a quantum computer would be around 1,000 times faster than our current models! The next logical step in technology would be to expand our limits of speed and availability by developing a quantum computer, but is it possible within our lifetime, and would it be worthwhile? Before we dive into quantum theory and the quantum computer, a relatively short background on the modern computer is necessary. What is widely regarded as the blueprint of the modern computer emerged in the 1930s, when Alan Turing developed a theoretical device called a Turing machine. The device contains a tape of unlimited length that is divided into squares, each holding a 1, a 0, or nothing at all. These 1's and 0's make up binary code, the language of modern-day computers. Turing reasoned that a read-write device could read this sequence of 0's and 1's, translate it, and perform the instructions given.
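The tape-and-rules idea can be made concrete with a short sketch (an illustrative example of my own, not Turing's original formulation): a rule table maps a (state, symbol) pair to the symbol to write, the direction to move, and the next state.

```python
def run_turing_machine(tape, rules, state="start"):
    """Run a one-tape Turing machine until it halts or runs off the tape."""
    tape = list(tape)
    head = 0
    while state != "halt" and 0 <= head < len(tape):
        write, move, state = rules[(state, tape[head])]
        tape[head] = write                    # write a symbol into the square
        head += 1 if move == "R" else -1      # move the read-write head
    return tape

# A rule table that flips every bit and sweeps right until the tape ends:
flip_rules = {
    ("start", 0): (1, "R", "start"),
    ("start", 1): (0, "R", "start"),
}
flipped = run_turing_machine([1, 0, 1, 1], flip_rules)   # -> [0, 1, 0, 0]
```

Despite its simplicity, this read-write-move loop is exactly the "binary tape principle" that underlies every modern computer.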
This binary tape principle is the core of modern-day computers and the rough basis of a quantum computer. However, as the foundation of quantum computers, the main points of quantum theory must be discussed as well. Quantum theory states, first of all, that energy consists of units, rather than a constant wave. Second, energy and matter may behave like particles or waves at any given time. Third, particle movement is random and unpredictable. Lastly, the measurements of two complementary values of a particle (such as position and momentum) are imperfect; the more precise one value is, the more flawed the other will be. In 1981, Paul Benioff applied the quantum theory to computers, envisioning the creation of a normal computer with quantum

principles, or a "quantum computer." In 1984, David Deutsch published a paper about a computer based solely on quantum rules, now widely viewed as the start of quantum computing as a field. A decade later, Berthiaume and Brassard proved that a quantum computer would theoretically be faster than a current classical computer, due to quantum parallelism, which allows it to perform multiple calculations simultaneously. So, how exactly does the application of quantum theory to computers enhance their performance? The quantum computer, unlike the current computer with its silicon chip, would use atoms and molecules as the memory and processor. The atoms, photons, and molecules would form what is called a qubit, the quantum analogue of a classical bit. A classical computer's tape holds a sequence of 0's and 1's, and each square cannot be in both states at once. With a quantum computer, a square of the tape can be 0 and 1 at the same time. This is called superposition, and it is what gives a quantum computer the ability to make today's fastest supercomputer look like Windows 1. To explain superposition, we may use the famous example of Schrödinger's cat (cat lovers, you may want to skip this part). Take a living cat and place it into a thick box. Right now, the cat is quite obviously alive. Then put a sealed bottle of cyanide into the box, and close the box. Now, we no longer know if the cat is alive or dead. According to superposition, the cat can be considered both dead and alive. However, once we open the box, that superposition is lost, and the cat will be either dead or alive. The drawback to superposition is that while one can easily examine the insides of current computers without disturbing the silicon chips and wires, examining a qubit would cause it to lose its superposition. To work around this problem, scientists devised a way of making measurements indirectly, using a phenomenon called entanglement.
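The two ideas at play here, superposition collapsing on measurement and entangled values read indirectly, can be sketched with ordinary random numbers (a toy model of my own, not real quantum hardware or any published code):

```python
import random

def measure_qubit(alpha, beta, rng=random.random):
    """Collapse a qubit with amplitudes alpha|0> + beta|1> to a classical bit:
    0 with probability |alpha|^2, 1 with probability |beta|^2."""
    p0 = abs(alpha) ** 2
    assert abs(p0 + abs(beta) ** 2 - 1.0) < 1e-9, "amplitudes must be normalized"
    return 0 if rng() < p0 else 1

def measure_entangled_pair(rng=random.random):
    """Toy anti-correlated pair: each spin is random, but always opposite."""
    first = "up" if rng() < 0.5 else "down"
    return first, ("down" if first == "up" else "up")

# An equal superposition, like the cat before the box is opened:
amp = 2 ** -0.5                # 1/sqrt(2), so each outcome has probability 1/2
counts = [0, 0]
for _ in range(10_000):
    counts[measure_qubit(amp, amp)] += 1   # roughly [5000, 5000]

# Reading one atom of the pair reveals the other without touching it:
pairs = [measure_entangled_pair() for _ in range(1_000)]
```

Once `measure_qubit` returns, the superposition is gone and only a definite 0 or 1 remains, which is exactly why a qubit cannot be examined directly without destroying its state.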
In quantum physics, if an atom is left alone, it will spin in every direction. Physicists take two atoms that are spinning in all directions and apply an outside force. The first atom will choose one spin, or value. At the same time, the second atom will choose the opposite spin, or value, of the first. This way, scientists know the value of the qubits without looking at them directly. Now that we have some background information, it's finally time to see how the world is getting it done. In 2007, the Canadian company D-Wave demonstrated a 16-qubit quantum computer, having it solve sudoku puzzles and other pattern-matching problems. The company claimed it would have practical quantum computers by 2008, but was unable to achieve that goal. On October


4, 2010, researchers at UC Santa Barbara were able to entangle 3 qubits of information. Although that is far from the goal of a 30-qubit consumer quantum computer, the entanglement of 3 qubits was a major step toward constructing a practical quantum computer. Recently, a new method for quantum computing has been proposed. Up to this point, most quantum computers have been built using neutral atoms as a processor, which are much harder to control than polar molecules. Neutral atoms cannot be manipulated easily; there is no way to attract them due to their lack of charge. Meanwhile, the charged nature of polar molecules makes them much easier to influence, but they are only useful when cooled to a few millionths of a degree above absolute zero. Elena Kuznetsova, a researcher in the University of Connecticut's Department of Physics, recently proposed a way to control polar molecules in computers more easily. Kuznetsova's scheme breaks down the molecules with a laser without compromising the data, allowing the processor's results to be read with less effort. This is the equivalent of being given a pie and figuring out the recipe by scanning it rather than taking it apart. To avoid individual particles altogether, physicists at Bell Laboratories have devised another method of creating a quantum computer, which uses anyons: particle-like structures that exist in two dimensions, a kind of quasiparticle (an entity that behaves as a particle). If anyons are manipulated to twist into braids, they would be much more resistant to disturbances than individual particles, reducing the chance of data and calculation corruption. Unfortunately, the anyons can only store quantum information and stay together on 2-D sheets; in 3-D, the braids easily unravel and data would be lost. The team has yet to announce whether they were able to successfully produce braided anyons, but the research could be a major step toward creating a practical quantum computer.

Why should we make the switch from current PCs to quantum computers? One reason is the world's depleting supply of silicon. China recently announced that its levels of mineable silicon are dwindling, which means fewer resources to produce the iPhone, Xbox, LCD TV, GPS, or iMac. Quantum computers, however, use subatomic particles instead of silicon in the processor: crisis averted. Also, a quantum computer would be vastly faster than a normal PC, a major tool for scientists and the military, as well as a huge plus for gamers, Hulu users, and movie downloaders. An algorithm that would normally take years for a classical computer to solve could take mere seconds for a quantum computer, meaning knowledge would be gained faster than ever. The creation of a practical quantum computer would also mean better homeland security. Police occasionally trace calls by using triangulation techniques, but tracing takes some time, leaving windows for criminals to leave a message and still be concealed. Quantum computing would leave no such window of opportunity for criminals, improving the effectiveness of triangulation. Also, quantum computers would work wonders with sonar technology, aerial combat, and missile tracking, making them a highly useful "upgrade." So, it's time to answer the questions from the beginning: is it possible, and is it worthwhile, to create a quantum computer? We have seen that a quantum computer could be 1,000 times faster than a current consumer PC, speeding everything up, from downloads, to missile tracking, to saving the world from a technological breakdown due to low silicon reserves. We have also seen steady progress toward entangling and manipulating larger numbers of qubits. A quantum computer, though seemingly a fantastical device, is indeed possible and undoubtedly worthwhile. The development of a practical quantum computer may be decades away, but it is definitely possible within most of our lifetimes.

Works Cited
"Braided Anyons Could Lead to More Robust Quantum Computing." Science Daily. 01 Nov. 2010. Web. 01 Nov. 2010. <http://www.sciencedaily.com/releases/2010/11/101101102530.htm>.
Buckley, Christine. "Physicists Propose New Method for Quantum Computing." PhysOrg.com. 15 June 2010. Web. 29 Nov. 2010. <http://www.physorg.com/news195837003.html>.
Hayes, Jacqui. "Curious 'Quasiparticles' Baffle Physicists." COSMOS Magazine. Web. 16 May 2011. <http://www.cosmosmagazine.com/news/2038/curious-quasiparticleshave-a-quarter-charge-electron>.
Howell, Dave. "The Mind-blowing Possibilities of Quantum Computing." TechRadar UK. 17 Jan. 2010. Web. 27 Oct. 2010. <http://www.techradar.com/news/computing/the-mind-blowing-possibilities-of-quantum-computing-663261>.
"Quantum Computers: What Are They and What Do They Mean to Us?" Carolla.com. Web. 01 Nov. 2010. <http://www.carolla.com/quantum/QuantumComputers.htm>.
"UCSB Press Release: Quantum Computing Research Edges Toward Practicality in UCSB Physics Laboratory." UC Santa Barbara, 04 Oct. 2010. Web. 28 Oct. 2010. <http://www.ia.ucsb.edu/pa/display.aspx?pkey=2336>.
"What Is Quantum Computing? Definition from WhatIs.com." 14 June 2010. Web. 14 Nov. 2010. <http://whatis.techtarget.com/definition/0,,sid9_gci332254,00.html>.

technology/physics 8


Breakthroughs in Aviation
by Siddhartho Bhattacharya

From the musings of great minds in ancient times to the historic first flight of the Wright Flyer at Kitty Hawk, mankind has made tremendous progress in the science of flight and the dream of taking to the skies and leaving the earth behind. Though our own era may seem like a lax one for growth in aviation, there have been significant improvements and breakthroughs that will undoubtedly revolutionize the future of flight.

The Lift Fan

One of the most intriguing capabilities of modern aircraft is the ability to take off and land in very short distances or even vertically, referred to as STOVL (Short Take-Off and Vertical Landing) or V/STOL (Vertical/Short Take-Off and Landing). Since 1950, numerous aircraft designs have attempted this feat, given the tremendous ease it would allow in landing and taking off in small areas with short (or nonexistent) runways. The biggest breakthrough in this field of aerodynamics and flight came with the Harrier, often called the "jump-jet" for its ability to take off nearly vertically, like a helicopter, yet resume horizontal flight at near-supersonic speeds. The Harrier accomplished this with a technique known as thrust vectoring, which redirects the thrust produced by the turbofan engines (which swallow air in from the front of the aircraft) downward around the middle of the airframe, so that the power of the airflow keeps the plane hovering. Although an excellent technique, achieving vertical take-off or landing this way is risky because of the difficulty of changing the direction of air flowing at such incredible speeds. As a result, the Harrier, including the improved version used today, has suffered far more accidents than other military aircraft. The revolutionary piece of modern aerospace engineering that has made STOVL much more viable is the lift fan. Engineered by Rolls-Royce, the "LiftSystem" pairs a lift fan with the same thrust-vectoring technique used by the Harrier, except that the thrust vector of the engine exhaust is changed at the rear of the aircraft rather than the middle. The lift fan itself is a vertically oriented fan (like a vertical "engine") placed in the mid-front area of the aircraft, which sucks in air from overhead and ejects it straight downward.
Hence no engine exhaust is redirected; rather, the lift fan directly takes air from above the aircraft, compresses it, and thrusts it downward at high speed (without combusting it). This creates nearly twice the thrust of the system used by the Harrier (41,900 lb versus the Harrier's 23,900 lb) as well as greater reliability. The system was first demonstrated in Lockheed Martin's X-35 Joint Strike Fighter (JSF), now entering military service as the F-35 Lightning II. With this great piece of engineering, pilots will be able to take off and land in confined locations, one step closer to greater mastery and maneuverability in flight.
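The "nearly twice" comparison follows directly from the quoted thrust figures; a quick check (Python; the two numbers are taken from the article, everything else is illustrative):

```python
# Vertical thrust quoted in the article, in pounds.
liftsystem_thrust_lb = 41900  # Rolls-Royce LiftSystem on the X-35/F-35B
harrier_thrust_lb = 23900     # Harrier's vectored-thrust arrangement

ratio = liftsystem_thrust_lb / harrier_thrust_lb
print(f"LiftSystem produces {ratio:.2f}x the Harrier's vertical thrust")
```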



The Scramjet

A new type of jet engine, called the Scramjet, pushes aircraft beyond supersonic flight into what has been dubbed "hypersonic" flight (flight at five times the speed of sound, Mach 5, and greater). The concept of a Scramjet has existed and been experimented with since the 1950s and 1960s, but only recently have such powerful jets actually been flight-tested by NASA, demonstrating speeds of over Mach 5, or in one case, Mach 9.8. Traditional jet engines rely on combusting fuel (carried on the plane) with an oxidizer (air) and ejecting the exhaust at high speed to propel the aircraft. However, at speeds past the speed of sound, turbofan jet engines (such as those on jet-liners and commercial transport aircraft) cannot operate, as the mechanical blades and fans would heat up tremendously or break apart in the incredibly fast airflow. Some very high-speed aircraft and missiles instead use a ramjet, which contains no fan: incoming air, already traveling at supersonic speed, is slowed to subsonic speed by the shape of the internal engine "ramming" against the flow. Once slowed, the air is injected with fuel and ignited to produce thrust in supersonic flight. With ramjets, speeds of about Mach 2 to 4 can be reached, with efficiency breaking down near Mach 5. The new Scramjet (supersonic combustion ramjet) takes this concept one step further by allowing air to enter the engine at supersonic speed and remain supersonic as it is compressed and combusted. Previously, it was extremely difficult to sustain combustion in a supersonic flow, so combustion was almost always carried out at subsonic speeds. With the new technology, Scramjets can take advantage of the compressing power not only of the inlet's shape, but also of the powerful shock waves common at supersonic speeds.
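The Mach numbers in this section are always relative to the local speed of sound, which for air as an ideal gas is a = sqrt(γRT) and therefore rises with temperature; this is why compression heating inside the engine eats into the flow's Mach number. A small sketch (Python; standard air constants, illustrative temperatures):

```python
import math

GAMMA = 1.4    # ratio of specific heats for air
R_AIR = 287.0  # specific gas constant of air, J/(kg*K)

def speed_of_sound(temperature_k: float) -> float:
    """Ideal-gas speed of sound: a = sqrt(gamma * R * T)."""
    return math.sqrt(GAMMA * R_AIR * temperature_k)

flow_speed = 1500.0                              # m/s, fixed flow velocity
mach_cold = flow_speed / speed_of_sound(220.0)   # cold stratospheric air
mach_hot = flow_speed / speed_of_sound(1500.0)   # air heated by compression
print(f"Mach {mach_cold:.1f} when cold, Mach {mach_hot:.1f} when hot")
```

The same 1500 m/s flow is roughly Mach 5 in cold air but under Mach 2 once the air has been compression-heated, which is one of the constraints on Scramjet operation described below.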
These shock waves control and compress the air inside the engine itself, removing the need for any built-in compressor. As a result, the Scramjet needs no moving parts at all (apart from the minuscule motion of fuel injection), which greatly simplifies its design and manufacture. The key challenge is sustaining combustion in a supersonic flow, analogous to keeping a candle lit in hurricane-force winds. To do this, fuel injection must occur at the point of high compression, near what is known as stagnation, where air flow is near zero because the gas's kinetic energy has been converted to internal energy. In addition, supersonic flow allows the aforementioned shock waves to occasionally create zones of focused waves where combustion is sustainable (as opposed to zones where a variety of shock waves cause air to flow turbulently in multiple directions). While the Scramjet is a marvel of flight-propulsion engineering, it has significant drawbacks that so far prevent fully commercial application. Combustion can occur only at very precise temperatures and pressures, so the Scramjet can operate only at speeds significantly above the speed of sound, at which point the air is compressed to a high enough temperature and pressure to mix with the fuel and combust, with shock waves assisting the mixing through turbulence just after the injection point. One limiting effect is that air pressure changes with altitude: the Scramjet must climb at a rate matched to its increasing speed, and thus to the rising temperature inside the engine, in order to maintain the precise ratio between pressure and temperature. Supersonic air flow also slows down as it is compressed, so the amount of compression must not be so large that it drops the air speed below Mach 1. Additionally, the speed of sound itself increases in the high heat caused by compression, further diminishing the flow's Mach number. Each of these strict restrictions limits the viability of widespread Scramjet use, but it remains a pioneer in the field of aviation propulsion. Despite the challenges, NASA pushed the limits of aircraft speed with its X-43A, which attained speeds near Mach 9.8 in November 2004, followed by the X-51A Waverider, which sustained combustion for about 200 seconds to reach Mach 5 in 2010, signaling the start of a new era of high-speed travel.

Works Cited
Harsha, Philip T., Lowell C. Keel, Anthony Castrogiovanni, and Robert T. Sherrill. "X-43A Vehicle Design and Manufacture." AIAA/CIRA 13th International Space Planes and Hypersonics Systems and Technologies Conference, Capua, May 2005. AIAA.org. American Institute of Aeronautics and Astronautics. Web. 22 Mar. 2011.
"Hypersonic Airbreathing Propulsion Branch at Langley Research Center." Larc.nasa.gov. NASA. Web. 22 Mar. 2011.
Kelly, Christina E. "Boeing X-51A WaveRider Breaks Record in 1st Flight." Boeing.com. Boeing, 26 May 2010. Web. 22 Mar. 2011.
Kjelgaard, Chris. "From Supersonic to Hover: How the F-35 Flies." Space.com. 21 Dec. 2007. Web. 22 Mar. 2011.
"The Ramjet/Scramjet Engine." Aviation-history.com. The Aviation History On-line Museum. Web. 22 Mar. 2011.
Smith, John, and John Kent. "Propulsion System in Lockheed Martin Joint Strike Fighter Wins Collier Trophy." Lockheed Martin Corporation, 28 Feb. 2003. Web. 22 Mar. 2011.
"X-43 Hyper-X Program." FAS.org. Federation of American Scientists. Web. 22 Mar. 2011.



Bi-Winning: A Look Inside Charlie Sheen's Brain
by Liz Colleen Brajevich

"If you borrow my brain for five seconds, you'd be like 'Dude! Can't handle it, unplug this [thing]!' It fires in a way that's maybe not from, uh…this terrestrial realm." --Charlie Sheen

The media has been paying particular attention to Charlie Sheen lately. His recent actions and statements have led many to wonder, "What's going on in his brain?" Is his brain really from a different realm? Sheen's brain's structure and function may have just been compromised by years of substance abuse. He has admitted to using cocaine, ecstasy, and marijuana, all of which have detrimental and potentially permanent effects on the brain. It is possible that his "tiger's blood" protected him, but it is more likely that his illicit substance abuse has damaged his brain.

"I probably took more than anybody could survive," Sheen recently said, referring to his substance abuse during an interview with ABC's Andrea Canning. It is clear that his drug use policy was "go big or go home," but how did this affect his brain and ultimately his ability to interact and function?

Drug use frequently leads to mood swings. These alterations are caused by changed levels of glutamate, a neurotransmitter that influences the brain's reward circuit and its ability to learn. Glutamate also directly affects the ventral tegmental area (VTA), a group of neurons near the midline of the floor of the mesencephalon (midbrain) and the center of the mesocorticolimbic dopamine system; the VTA is a principal origin of the brain's dopamine. This area of the brain is important not only for cognition, motivation, and drug addiction, but also for intense emotions such as love, as well as psychiatric disorders. An increased amount of dopamine in the synaptic space, the gap between two neurons across which chemical signals are transmitted, causes the euphoric feeling brought on by illicit drugs. These high dopamine levels provide a short-lived high but long-term impairments.
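The dopamine buildup just described can be caricatured with a toy model (all rates below are invented for illustration, not physiological measurements): each firing releases dopamine into the synaptic space, and a fraction of it is then cleared by transporter reuptake and enzymatic breakdown. Blocking reuptake, as stimulants like cocaine do, raises the steady-state dopamine level:

```python
# Toy discrete-time model of dopamine in the synaptic space.
# All rates are illustrative assumptions, not measured values.
def synaptic_dopamine(release=1.0, reuptake=0.4, enzyme=0.1, steps=200):
    level = 0.0
    for _ in range(steps):
        level += release         # dopamine released by one firing
        level *= (1 - reuptake)  # fraction cleared by transporter proteins
        level *= (1 - enzyme)    # fraction broken down by enzymes
    return level

normal = synaptic_dopamine()
blocked = synaptic_dopamine(reuptake=0.0)  # transporters blocked
print(f"steady-state dopamine: normal {normal:.2f}, blocked {blocked:.2f}")
```

With the transporters blocked, the only clearance left is enzymatic, so dopamine settles at a much higher level, which is the caricature of the euphoric buildup described above.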
The stimulant drug cocaine, which Sheen was using by the "briefcase full," works by increasing these dopamine levels, overstimulating the brain to produce the feeling of euphoria. In a healthy brain with normal dopamine function, the pre-synaptic neuron is stimulated and an action potential travels along the axon to the axon terminal. This triggers the release of dopamine into the synaptic space. The dopamine then binds to receptors on the post-synaptic neuron and triggers an action potential in that neuron, stimulating the brain. The receptors then release the dopamine back into the synaptic space, where, under normal conditions, some is broken down by enzymes and the rest re-enters the pre-synaptic neuron via transporter proteins. When cocaine enters the bloodstream, it blocks these transporter proteins, so dopamine builds up in the synaptic space as more is released without being transported back into the pre-synaptic neuron. The blockade lasts only while cocaine is in the bloodstream, but the damage it causes affects the brain's function long after the initial use. Positron Emission Tomography (PET) scan images measure how much glucose the brain uses. A PET scan uses radiation (nuclear medicine imaging) to create three-dimensional images of the functional processes taking place in the brain. The machine works by detecting pairs of gamma rays emitted indirectly by a tracer, a positron-emitting radionuclide attached to biologically active molecules in the body. The tracer emits gamma-ray pairs in proportion to the activity of those molecules, and computer analysis reconstructs the images in color. PET images show that brains never exposed to cocaine contain far more highly active, glucose-metabolizing neurons than the brain of a cocaine user. Even three months after the initial exposure to cocaine, a user's brain still shows significantly less activity than a normal brain. Sheen's cocaine use may thus have caused brain deterioration that contributes to his outlandish statements. In many recent interviews, Sheen has also admitted to using ecstasy. The main component of ecstasy that interacts with the brain is methylenedioxymethamphetamine, or MDMA. MDMA dramatically changes brain activity. In a study on squirrel monkeys, axons of serotonin-producing neurons in the cerebral cortex were dyed and measured using a PET scan. After one use of MDMA, fewer than half of these important neurons remained, and after 18 months of continued use, the quantity of serotonin-producing axons was lowered further. Decreased serotonin can cause depression, mood changes, and obsessive-compulsive habits, although Sheen has not exhibited these effects. In response to Sheen's use of copious amounts of marijuana, marijuana dispensaries have started naming varieties after him. Marijuana's main psychoactive chemical is tetrahydrocannabinol (THC), which handicaps neurons' ability to function. THC directly affects the brain's cannabinoid receptors, which are found on neurons in the hippocampus, which pertains to memory; the cerebral cortex, which affects one's ability

to concentrate; and the sensory portions of the cerebral cortex, which affect perception. When THC impairs the function of these neurons, the brain as a whole cannot function normally, and important regions pertaining to memory and judgment are affected. Long-term negative effects on memory and cognitive function are likely with repeated use of marijuana, and Sheen's use may have left him with poor memory and decreased cognitive function. While Charlie Sheen continues to exhibit characteristics of decreased brain function and memory as well as mood swings, one should keep in mind that these characteristics are also proven side effects of using cocaine, ecstasy, and marijuana. Although he might fancy himself to be "Bi-Winning," a PET scan would probably reveal that there's one place Charlie Sheen isn't winning: his brain function.

Works Cited
"The Brain - Lesson 3 - How Does Cocaine Alter Neurotransmission?" Science Education NIH. Web. 17 May 2011. <http://science.education.nih.gov/supplements/nih2/addiction/activities/lesson3_cocaine.htm>.
"The Brain - Transcript for Long-Term Effects of Drugs on the Brain Video." National Institute on Drug Abuse. Web. 17 May 2011. <http://science-education.nih.gov/supplements/nih2/addiction/videos/act4/transcriptactivity4.htm>.
"Charlie Sheen Hospitalized After Cocaine Party, Says Porn Star Kacey Jordan to TMZ." CBS News Crimesider. Web. 27 May 2011. <http://www.cbsnews.com/8301-504083_162-20029884-504083.html>.
"Drugs, Brains, and Behavior - The Science of Addiction: Drugs and the Brain." National Institute on Drug Abuse. Web. 17 May 2011. <http://drugabuse.gov/scienceofaddiction/brain.html>.
"What Is A PET Scan? How Does A PET Scan Work?" Medical News Today. Web. 17 May 2011. <http://www.medicalnewstoday.com/articles/154877.php>.

health 12


Oh, My Opia! Environmental and Genetic Causes of Myopia
by Sarah Kwan
reviewed by Tapas Nag
art by kristine paik

Not so long ago, the kids who wore glasses were often taunted by their peers. But now, roles are reversing as myopia, or nearsightedness, becomes increasingly common worldwide. While myopia was a rare condition five decades ago, nowadays 25% of the global population is nearsighted, a rate that is still on the rise and predicted to exceed 33% by 2020. As it turns out, myopia is not only the result of outside factors, but also of genetics. In myopia, the sclera and the collagen fibrils of the eye are affected. Myopia results when either the cornea of the eye is too curved or, more commonly, when the eyeball becomes elongated (due to stretching of the scleral collagen). With nearsightedness, light enters the eye through the light-refracting lens but converges before hitting the retina, the three layers of cells on the back of the eye. These cells detect light in dim areas, identify color and details, and connect to the optic nerve, which in turn connects to the brain to produce the image one sees. A few specific forms of myopia include simple myopia, pseudo myopia, induced myopia, and degenerative myopia, among many others. These different names describe varying conditions of nearsightedness, with different causes, degrees of severity, and cures.

Before extensive research and study of myopia proved otherwise, many scientists believed that myopia was caused by environmental factors in lifestyle, like intensive book reading, TV-watching, hours of staring into computer screens, and working in bad lighting. These habits, blamed for making people strain their eye muscles, were believed to be prime causes of the gradual degeneration of eyesight. This was tied to the fact that myopia typically appears during the pre-teen years and intensifies throughout adolescence as the eye continues to develop. Still, others become nearsighted during adulthood, usually because their occupation requires acute visual precision, leading to visual stress from continually focusing the eyes on close objects for too long a time. These activities are common in modernized societies, leading to mental strain that also strains the eye muscles and nerves. In addition, modern lifestyles encourage improper diets, which hamper good blood and nerve supply and deprive the eye muscles of nutrients and vitamins. The consensus that various lifestyle factors are the main determinant of eye health has been long-lasting, corroborated by a multitude of studies. Among the earliest of these was a 1919 article published in the Canadian Medical Association Journal. Written by J. N. Roy, M.D., about his tour through the African continent, it observes that Africans possess sharper eyesight than other ethnicities due to certain characteristics of their lifestyle. In a study on the eyesight of 5,000 Africans of 100 different tribes in various colonies, Roy noted that Africa offered sharp, intense sunlight and a nutritious yet plain diet, both of which likely benefited eye health. In contrast, he concluded, "civilized" societies' dependence on strong artificial light contributed to a general decline in their visual acuity. Roy also believed that the "civilized" diet contained heavy, rich food that interfered with digestion and absorption in the eyes.

But new theories and explanations are being introduced as well, among them the possibility that a person's genetic makeup might influence the onset of myopia. Researchers from


the ophthalmology and epidemiology departments at Erasmus Medical Center made such discoveries through one 2003 study, in which tests were done on the chromosomes of over 15,000 people. Results suggested that certain genetic variants on chromosome 15 increased the chance of myopia in individuals, establishing that myopia generally does develop in people with a hereditary predisposition for it. Yet there are still questions left unanswered: this connection fails to sufficiently explain, for instance, why children with myopia do not always have nearsighted parents. This reasoning leads scientists to believe that no single factor is solely responsible for causing myopia. Indeed, many studies now indicate that a jumble of factors--including genes, the environment, diet, altered light conditions in early life, and the presence of certain diseases, such as diabetes--play a role in its causation. However, scientists are still unclear about some finer details of this theory, such as whether specific combinations of factors are more likely than others to cause myopia, especially in school children (school myopia). Still, a myope can do many things about his or her condition, however effective these various methods may be. Eyeglasses and corrective contact lenses are, of course, among the simplest and most commonly used means to correct vision. Orthokeratology, a form of contact lens therapy, involves wearing specially designed contacts that gradually reshape the curvature of the cornea by flattening it with pressure. Other well-known solutions include LASIK eye surgery, which uses a highly focused laser beam to reshape the cornea at the surface of the eye. Some interesting natural home remedies have even been reported to yield positive results.
One can eat a diet of vitamin A-rich products, like carrots, lettuce, and raw spinach, or try Triphala, the Ayurvedic (Indian) "three-fruit" herb that is usually made into decoctions for drinking and can supposedly be rinsed over the eyes to heal them. Aside from these, health and eye experts advise forgoing sweets and avoiding fish, meats, coffee, and eggs, all of which are said to interfere with absorption during digestion. People with myopia should also take frequent breaks during reading and computer work, especially by looking at distant objects, and children are advised to spend some time every day in direct sunlight. Optometrists also advise myopes to perform simple eye-muscle exercises at least twice daily. However, while eyesight can often be kept from deteriorating further, the success rate of fully 'curing' myopia is very small and unpredictable; no one knows what factors will entirely cure myopia, just as no one fully knows what factors cause it. Indeed, despite becoming a widespread condition, myopia still remains, in essence, an enigma.
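The refractive error at the heart of the article, light converging in front of the retina of an elongated eyeball, can be sketched with the thin-lens equation 1/f = 1/u + 1/v. The numbers below are illustrative assumptions, not clinical values:

```python
# Thin-lens sketch of myopia; all numbers are illustrative assumptions.
f_eye = 0.017   # m: focal length of the relaxed eye's optics (assumed)
axial = 0.018   # m: axial length of an elongated, myopic eyeball (assumed)

# Far point: the farthest object whose image lands exactly on the retina.
# From 1/f = 1/u + 1/v with image distance v equal to the axial length:
far_point = 1 / (1 / f_eye - 1 / axial)   # anything farther appears blurred

# A diverging corrective lens maps objects at infinity to the far point:
corrective_power = -1 / far_point          # lens power in diopters
print(f"far point {far_point:.2f} m, corrective lens {corrective_power:.1f} D")
```

With these assumed numbers the far point comes out around 0.3 m, corresponding to roughly a -3.3 diopter corrective lens, which is why a longer eyeball means stronger glasses.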

Works Cited
"Degenerative Myopia." MD Support. Web. 15 Nov. 2010. <http://www.mdsupport.org/library/myopic.html>.
"Erasmus MC: Proof Found: Myopia Is Hereditary." Erasmus MC: Universitair Medisch Centrum Rotterdam. Web. 15 Nov. 2010. <http://www.erasmusmc.nl/corp_home/corp_news-center/2010/2010-09/bijziendheid.erfelijk/?lang=en>.
"Home Remedies Discussion Forum." Home Remedies and Natural Cures. Web. 15 Nov. 2010. <http://www.naturalhomeremedies.org/homeremedies-myopia.htm>.
Lavelle, Peter. "Myopia on the Rise - Health & Wellbeing." ABC.net.au. 11 Oct. 2005. Web. 12 Apr. 2011. <http://www.abc.net.au/health/thepulse/stories/2005/11/10/1502702.htm>.
"Myopia (Nearsightedness)." American Optometric Association. Web. 15 Nov. 2010. <http://www.aoa.org/myopia.xml>.
"Nearsightedness Increasing in America - Reasons, Cure." MeriNews. Web. 15 Nov. 2010. <http://www.merinews.com/article/nearsightedness-increasing-in-america---reasons-cure/15791409.shtml>.
Roy, J. N. "The Eyesight of Negroes in Africa." Web. 15 Nov. 2010. <http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1523645/pdf/canmedaj00376-0076.pdf>. <http://www.ncbi.nlm.nih.gov/pubmed/3310646>.



The Once-Feared Whoop Strikes Back
by Eva Lilienfeld

art by sarah gustafson

A teen experiences symptoms of the common cold, with a distinctive whooping noise accompanying every cough. Despite his illness, he visits his aunt one day and holds her newborn baby, Emma. Three weeks later, Emma develops 30-second coughing fits. Towards the end of each choking fit, her lips and nails turn blue from lack of oxygen. Following most episodes, she struggles to catch her breath, producing a whooping noise, and then vomits. After five days of coughing fits, each more severe than the last, Emma is taken to a nearby hospital. Within a few hours, she dies of suffocation. Though Emma's story could have taken place a hundred years ago at the turn of the 20th century, the same sickness afflicts infants today. Pertussis, commonly known as whooping cough, is caused by the bacterium Bordetella pertussis. A technique called Gram staining divides all bacteria into two groups: gram-positive (retains the crystal violet stain) and gram-negative (appears red). A bacterium retains crystal violet only if it has a thick peptidoglycan wall. In addition to the Gram stain, bacteria are also categorized by shape. B. pertussis is classified as a small gram-negative coccobacillus. It grows singly or in pairs and is fastidious in its growth requirements, generally growing best on a rich medium containing blood. It grows slowly compared to other species; even small pinpoint colonies take 3-6 days to appear. A doctor must be on the watch for whooping cough in order to diagnose it. Generally, doctors diagnose pertussis by analyzing the patient's symptoms and testing his or her blood for antibodies against the bacteria. Once the case is confirmed, the patient takes a course of antibiotics; more advanced stages require higher doses. Whooping cough symptoms vary depending on the age of the patient. Initial symptoms are similar to


those of a common cold: sneezing, a runny nose, a dry cough, and sometimes a slight fever. Infants make a whooping sound, comparable to a horse's neigh, when inhaling. Pertussis symptoms can also include vomiting between coughing spasms (more common in infants) and diarrhea. However, doctors most often diagnose pertussis based on the classic presentation: laryngeal spasm (a contraction of the laryngeal cords that hinders inhaling while exhaling remains easier), paroxysmal cough (entailing difficulty breathing), vomiting, and the distinctive "whoop" sound, a picture that can last one to three months. Characteristic symptoms such as larynx spasms and nausea are caused by toxins that the bacteria secrete into the host's throat, which seep into the larynx. Younger patients are more susceptible to complications, which can occur even if pertussis is diagnosed and treated. The most common complication is a secondary bacterial infection. Meanwhile, 4% of pertussis patients experience rib fractures from coughing, and 5% develop pneumonia. Though antibiotics do not eliminate symptoms altogether and mainly work to lessen their severity, they can also help prevent secondary infections and complications. Whooping cough is highly contagious during the first stage of the disease, when the patient is experiencing only cold-like symptoms. When an infected person laughs, coughs, or sneezes, tiny droplets of infected saliva are released into the air. The disease spreads primarily through inhaling these droplets or coming into contact with contaminated utensils. Whooping cough epidemics sweep the nation in waves approximately every 2-5 years. In California, whooping cough has been gradually on the rise for at least 10 years. In 2010, more citizens were diagnosed with pertussis than in any other year since 1950. More than 50% of the cases were in pre-teens, teens, and adults, with infants and the elderly making up the minority of patients.

Professionals credit the rise in the disease to unawareness about maintaining immunity; for example, many children and teenagers were not vaccinated or given booster shots before visiting foreign countries, where they became infected by the bacterium and then brought it home to their pediatricians' waiting rooms. The vaccine for pertussis, also known as the DTP shot because it protects against diphtheria, tetanus, and pertussis, needs to be boosted every 10 years. Teens and adults who are irresponsible about vaccinations are


not only putting themselves at risk for pertussis, but also endangering any infants they come into contact with. Protecting those infants relies on the concept of "herd immunity," which refers to immunity in a group of people as a result of widespread infection or vaccination. More specifically, a person who has not been vaccinated is protected from the disease if everyone around them is vaccinated: under those conditions, it is much harder for the bacteria to reach the vulnerable person, and there are fewer sick people to introduce bacteria into the community. Thus, infants who are too young to be vaccinated are protected only if everyone around them is vaccinated. Getting a vaccine protects not only the person getting the shot but also the more vulnerable members of the community. In addition to promoting safety, vaccinations are a cost-effective means of keeping citizens out of hospitals and off of antibiotics. Beginning in July 2011, California will require students in 7th-12th grades to be vaccinated against pertussis to prevent the spread of the disease. The vaccination program will directly reduce the number of cases in teens, and is also likely to lower case numbers for all age groups, because teens are often carriers who bring the disease to their younger siblings. Though pertussis is still a dangerous disease, technology has developed since the turn of the 20th century: scientists and doctors can now supply mass quantities of a pertussis vaccine, diagnose its symptoms, and treat it with antibiotics. Still, the best way to avoid this deadly disease is to be proactive and get vaccinated.
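The herd-immunity idea can be quantified with the textbook approximation that an outbreak dies out once a fraction 1 - 1/R0 of the population is immune, where R0 (the basic reproduction number) is how many people one case infects in a fully susceptible population. R0 values of roughly 12-17 are commonly cited for pertussis; the specific numbers below are assumptions for illustration:

```python
# Herd-immunity threshold: fraction immune needed so each case infects,
# on average, fewer than one other person (textbook approximation 1 - 1/R0).
def herd_immunity_threshold(r0: float) -> float:
    return 1 - 1 / r0

for r0 in (12, 17):  # commonly cited R0 range for pertussis (assumed)
    frac = herd_immunity_threshold(r0)
    print(f"R0 = {r0}: {frac:.0%} of the community must be immune")
```

The threshold lands above 90% for either R0, which is why even a modest number of unvaccinated teens and adults can keep the bacterium circulating.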

Works Cited
"Gram-negative Definition." MedTerms Medical Dictionary. Web. 05 June 2011.
Halperin, Scott A. "The Control of Pertussis--2007 and Beyond." The New England Journal of Medicine 356.2 (2007). Web. 5 June 2011.
"Pertussis Claims a Ninth Infant in California." Bad Astronomy, Discover Magazine. Web. 05 June 2011.
"Pertussis: MedlinePlus Medical Encyclopedia." National Library of Medicine - National Institutes of Health. Web. 05 June 2011.
"Pertussis Questions and Answers." Vaccine Information for the Public and Health Professionals. Web. 05 June 2011.
Wendelboe, Aaron M., et al. "Transmission of Bordetella Pertussis to Young Infants." The Pediatric Infectious Disease Journal 26.4 (2007): 293-99. Web. 05 June 2011.
"Whooping Cough (Pertussis)." KidsHealth. Web. 05 June 2011.



Enantiomers: The Twins of Drugs

by Margaret Guo reviewed by Dhananjay Pal

Imagine a health headline in a national newspaper: Cure for Cancer Discovered Thanks to Tiny Optical Isomers. This could become a reality. Two decades ago, single-optical-isomer drugs, or enantiopure drugs, were considered alien and bizarre. Yet thanks to recent advances in synthesis, purification, and separation methods, they are becoming a critical stepping stone toward new treatments and cures, opening a myriad of possibilities, as well as warnings, for the future. At first glance, two enantiomers (isomers with the same chemical formula and connectivity, but with mirror-image spatial arrangements of the groups around a chiral center) appear quite similar. They share most of the same chemical and physical properties: the same chemical formula, the same bond angles, the same boiling point, and the same melting point. In fact, their most conspicuous difference lies in the direction in which they rotate polarized light: dextrorotatory (clockwise rotation) or levorotatory (counterclockwise rotation). When placed side by side, enantiomers look like mirror images of one another, with as much likeness as the left and right hand of the same person. Enantiomers are also nonsuperimposable, meaning that regardless of how they are oriented, the components of the molecules do not overlap, just as the left and right hands never become identical no matter how they are arranged. To find the orientation and configuration of molecules like enantiomers, which are called chiral compounds, scientists use a method known as X-ray crystallography, and label the two isomers R (clockwise) or S (counterclockwise). This seemingly insignificant difference in optics leads to amplified effects at the level of the whole drug, with consequences both deadly and inspiring. In the past, all drugs were made from racemic mixtures, mixtures with equal amounts of both enantiomers. Separation of the two enantiomers was impractical, costly, and seemingly unnecessary.
After all, if both enantiomers had virtually identical properties, what would be the purpose of so-called “purifying” the drug? In the 1960s, the drug Thalidomide, a racemic mixture composed of R-thalidomide and


S-thalidomide, was used to relieve morning sickness in pregnant women. However, after its release, over 10,000 children of women who took the drug were born with severe birth defects such as phocomelia, the development of flipper-like limbs. This led to a massive recall of the drug once flaunted for its extensive animal testing during the clinical stage, and Thalidomide was branded one of the most catastrophic pharmaceutical failures of the 20th century. Scientists were left baffled, all asking the same question: why?

Research quickly followed the fiasco. Within a few years, scientists had discovered their fatal flaw: the nature of enantiomers was far more complex than ever imagined. The R-form of thalidomide did inhibit morning sickness; by itself, it produced the desired results. The S-form, however, produced teratogenic effects that caused fatal defects in embryonic development. These disastrous effects are believed to result from the increased affinity of S-thalidomide for the embryonic protein cereblon, which has a direct effect on limb formation. The specific orientation of the S-enantiomer allowed it to interact with this protein to a much greater extent than the R-form did, a difference that proved fatal.

The Thalidomide incident spurred greater interest in a previously obscure corner of chemistry. Pharmaceutical giants poured resources into developing better techniques to separate enantiomers in hopes of producing enantiopure drugs. Of the various techniques explored, the most favored were crystallization and chiral gas chromatography. Crystallization uses a resolving agent, or chiral selector, to bring one enantiomer out of solution in solid form, while chromatography separates the racemic mixture on a stationary phase, usually containing or coated with the chiral selector.
The goal of both is to separate the two enantiomers—whether by crystallizing one out of the solution or carrying the mixture down a solvent

front—while leaving the other enantiomer unaffected. The first successful, reproducible separation was achieved by gas chromatography in 1966, using an isoleucine lauryl ester as the stationary phase to separate two amino acid enantiomers. Once the racemic mixture was separated into enantiopure fractions, the optical purity, or ratio of the two enantiomers, was measured to determine the efficacy of the separation process. The advantages of enantiopure drugs over racemic drugs vary from case to case, and in some cases the biological effects of single-enantiomer drugs relative to their racemic counterparts remain unclear. For example, ibuprofen, a well-known pain reliever, affects the human body in the same way regardless of the optical purity of the drug, because the compound tends to racemize, or revert to a racemic mixture, within the body. In other cases, however, the effects of the two enantiomers are radically different. Aside from the previously mentioned Thalidomide, other drugs have proven benefits in their enantiopure forms. S-ethambutol, for instance, is used to treat tuberculosis, while R-ethambutol can cause blindness. The R-enantiomer of modafinil, marketed as armodafinil to treat sleepiness, has been shown to act longer than the S-enantiomer. While the exact reasons for these discrepancies are not always known, they have become of high interest in the development of new drugs. Over 80% of the drugs on the market today contain chiral compounds. Some remain racemic mixtures; others are enantiopure. The use of single-enantiomer compounds as active ingredients has increased dramatically over the years. In 1985, enantiopure drugs were virtually unheard of, but by 2001, new synthetic drugs on the market routinely contained single-enantiomer forms of their chiral ingredients.
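Optical purity of this kind is commonly quantified as enantiomeric excess (ee). As a minimal sketch of the arithmetic (the function name and sample numbers here are illustrative, not taken from any study cited in this article):

```python
def enantiomeric_excess(major_pct: float, minor_pct: float) -> float:
    """Enantiomeric excess (%) = 100 * (major - minor) / (major + minor).

    A racemic (50:50) mixture gives 0% ee; a perfectly enantiopure
    sample gives 100% ee.
    """
    return 100.0 * (major_pct - minor_pct) / (major_pct + minor_pct)

print(enantiomeric_excess(50.0, 50.0))  # racemic mixture -> 0.0
print(enantiomeric_excess(99.0, 1.0))   # highly enantiopure -> 98.0
```

A separation method is judged by how far it pushes a 0% ee feedstock toward 100% ee.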
The main advantage of enantiopure drugs is their relative safety. The FDA (Food and Drug Administration) has strict regulations on the minimum optical purity allowed for a drug to be marketed as enantiopure, precautions designed to prevent a catastrophe like the Thalidomide incident of the 1960s. The number of single-enantiomer drugs on the market is on the rise: twenty years ago there were three certified enantiopure drugs, while today they account for a majority of medicines, both prescription and over-the-counter. Enantiopure drugs are also used to navigate legal issues. Many companies produce single-enantiomer forms of the same drug, such as Lipitor (used to prevent heart attack and stroke) and Tamiflu (used to treat flu-like symptoms), to extend patent protection and compete against generic brands. As companies turn away from conventional racemic synthesis and pour more of their resources into the research and production of chiral drugs, an unmistakable transition is occurring. This change signals the beginning of a renewed search for medical breakthroughs in which enantiomers hold the center spotlight. The trend has also drawn scientific scrutiny. Though some remain doubtful of the true advantage of single-enantiomer drugs, these optical isomers have become a promising step toward new treatments for diseases as daunting as cancer and Alzheimer's. These near-identical twins of chemistry have led pharmaceutical studies to new heights and new possibilities. Even where research has slowed, the outlook for enantiomers in shaping the next phase of drug development remains highly optimistic. The potential remains.
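For readers curious how the "direction of rotation" described at the start of this article is actually measured: polarimeter readings are conventionally normalized to a specific rotation. A small sketch of that standard relation (all numeric values here are illustrative):

```python
def specific_rotation(observed_deg: float, path_dm: float, conc_g_per_ml: float) -> float:
    """Specific rotation [a] = observed rotation / (path length * concentration).

    Path length in decimeters and concentration in g/mL are the
    conventional polarimetry units. Positive values are dextrorotatory
    (clockwise); negative values are levorotatory (counterclockwise).
    """
    return observed_deg / (path_dm * conc_g_per_ml)

# Illustrative: +33 degrees observed in a 2 dm cell at 0.25 g/mL.
print(specific_rotation(33.0, 2.0, 0.25))   # -> 66.0 (dextrorotatory)
# The mirror-image enantiomer under identical conditions:
print(specific_rotation(-33.0, 2.0, 0.25))  # -> -66.0 (levorotatory)
```

Equal and opposite rotations under identical conditions are the experimental signature of a pair of enantiomers.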

Works Cited
"Chiral Chemistry." Chemical & Engineering News. 14 June 2004. Web. 13 Mar. 2011. <http://pubs.acs.org/cen/coverstory/8224/8224chiral.html>.
Darrow, Jonathan J. "The Patentability of Enantiomers: Implications for the Pharmaceutical Industry." Stanford Technology Law Review, 2007. Web. 7 Mar. 2011.
Davankov, Vadim A. Analytical Chiral Separation Methods. International Union of Pure and Applied Chemistry, 1997. PDF.
Erb, Sandra. "Single-Enantiomer Drugs Poised for Further Market Growth." Pharmaceutical Technology. 3 Oct. 2008. Web. 21 Mar. 2011. <http://pharmtech.findpharma.com/pharmtech/article/articleDetail.jsp?id=385859&sk=&date=&pageID=3>.
Fleischer, Heidi, Dirk Gordes, and Kerstin Thurow. "High-Throughput Screening Applications for Enantiomeric Excess Determination Using ESI-MS." American Laboratory. 18 June 2009. Web. 21 Mar. 2011. <http://new.americanlaboratory.com/913Technical-Articles/666-High-Throughput-Screening-Applications-for-Enantiomeric-Excess-Determination-Using-ESI-MS/>.
Jacoby, Mitch. "Imparting Chirality To Metals." Chemical & Engineering News. American Chemical Society, 4 May 2009. Web. 13 Mar. 2011. <http://pubs.acs.org/cen/news/87/i18/8718notw6.html>.
Schmidt, Karen F. "Mirror-image Molecules: New Techniques Promise More Potent Drugs and Pesticides." Science News. 29 May 1993. Web. 21 Mar. 2011. <http://findarticles.com/p/articles/mi_m1200/is_n22_v143/ai_13827694/>.
Scott, Raymond P.W. "Chiral Gas Chromatography." Chromatography Online. Web. 15 Mar. 2011. <http://www.chromatography-online.org/Chrial-GC/contents.html>.
"Thalidomide." Chromatography Online. Web. 15 Mar. 2011. <http://www.chromatography-online.org/topics/thalidomide.html>.
"The Two Enantiomers of Citalopram Bind to the Human Serotonin Transporter in Reversed Orientations." PubMed.gov. National Institutes of Health, 3 Feb. 2010. Web. 13 Mar. 2011. <http://www.ncbi.nlm.nih.gov/pubmed/20055463>.
Wisor, Jonathan P., William C. Dement, Lisa Aimone, Michael Williams, and Donna Bozyczko-Coyne. "Armodafinil, the R-enantiomer of Modafinil: Wake-promoting Effects and Pharmacokinetic Profile in the Rat." Pharmacology Biochemistry and Behavior 85.3 (2006): 492-99. ScienceDirect. Nov. 2006. Web. 21 Mar. 2011.
Zimmer, Carl. "Answers Begin to Emerge on How Thalidomide Caused Defects." New York Times. 15 Mar. 2010. Web. 15 Mar. 2011. <http://www.nytimes.com/2010/03/16/science/16limb.html>.



Parasitic Worm Treatment

by Bethel Hago

From an Expert in the Field
Parasitic worm treatment is a very interesting research field, particularly promising for those suffering from autoimmune diseases like type I diabetes, Crohn's disease, ulcerative colitis, and multiple sclerosis. The concept for the therapy developed from surveys showing that autoimmune diseases are more common in Western countries than in the developing world, where sanitation is poor and the worm infestation rate is high. When a foreign body invades, it is often tackled by the host's immune system. The host attempts to destroy the parasite by creating an inflammatory zone, but sometimes the parasite cannot be destroyed and survives for a long period in the host environment. Eventually, the parasite and host reach a mutually symbiotic state in which both benefit, though this is only possible when the parasite's impact on the host is minimal, at a sub-pathological level. The host initiates the inflammatory reaction against the parasite, and the parasite tries to overcome the situation by producing an anti-inflammatory effect, essentially teaching the host not to be extreme in its response. This may help regulate the immune system so that it no longer attacks the host's own tissue in an overactive response, thus reducing the symptoms of autoimmune diseases. However, many of the precise mechanisms and implications involved are still being researched, and some questions remain: where will we obtain parasitic worms on a large scale? How will we determine which species of worm to use for a specific patient or disease? Will the worms develop resistance to drugs or cause severe reactions in certain patients?

A 35-year-old man in San Francisco was experiencing chronic, severe stomach pains due to a deadly autoimmune disease. Desperately seeking any possible way to relieve his extreme pain, the man intentionally had himself infected with parasitic worms.
A few years later, he lives a relatively normal life, free of pain. Was that just a special case? How did it work? Had the man completely lost his mind? Perhaps he was simply daring, but studies supporting his actions exist. Recently, there has been increased interest in this unusual treatment, known as helminthic therapy, for various autoimmune diseases. Just as blood-sucking leeches were once used in medical procedures to drain away "bad blood," unpleasant parasites are nowadays being invited to infest targeted parts of a patient's body to alleviate symptoms. While it is reasonable to question the seemingly antiquated use of worms to treat diseases in modern times, more and more people are crediting these creatures with reactivating dormant regulatory mechanisms of the immune system, mechanisms whose failure is characteristic of immune disorders like allergies, type I diabetes, multiple sclerosis, Crohn's disease, celiac disease, and ulcerative colitis. Parasites are notorious for causing diseases such as malaria, sleeping sickness, and river blindness (the last caused by a parasitic roundworm). But interestingly, in comparison to industrialized nations, developing countries that are more exposed to parasitic worm infection have lower rates of autoimmune disease and allergies, indicating that these parasites can have beneficial

effects on humans. Indeed, although only one of the three parasites proposed for further research is currently approved for human clinical testing, studies in mice and humans suggest that helminthic therapy can play a key role in suppressing ulcerative colitis, type 1 diabetes, asthma, and rheumatoid arthritis. Many cases have shown such a positive response that the potential benefits for patients are hard to deny. However, despite its promise, access to helminthic therapy is currently limited. When doctors prescribe it, patients must first undergo a series of evaluations and blood tests to ensure that they truly need it to suppress extreme pain or to slow the progression of a disease. It is plausible that such benefits exist: over its evolution, the human species has had a great deal of contact with these worms through unsanitary practices such as eating raw or unclean meat. The body attempts to destroy the invader by first creating an inflammation zone, but this does not always kill every parasite. A fraction remain and form a mutually symbiotic relationship with the host. Because parasites have lived with our species for so long, many of them do not pose a large threat to the human immune system and do not have a high degree of pathogenicity. They even share genes related to the expression of some autoimmune diseases, most likely because of their co-evolution with humans. That, it turns out, is just what the body of a person suffering from an autoimmune disease needs to fully activate dormant regulatory T-cells.


These specialized T-cells help maintain internal balance by suppressing the body's attack on a foreign agent when that attack is unnecessary or would do more harm than good. Since autoimmune disease is characterized by unnecessary attacks on the body's own tissues, the simple presence of a helminth can tip the balance in the other direction and damp down excessive reactions. Deliberate exposure to specific parasitic worms can therefore harness the immune system's own regulatory machinery to treat a patient suffering from a deficiency of active regulatory T-cells. Additionally, parasitic worms produce anti-inflammatory molecules with properties linked to wound repair and mucus generation. Some damage from an autoimmune disease, such as internally bleeding sores, could thus be reversed or slowed, and boosting mucus production to a healthy level could strengthen the barrier between tissue and the bacteria that cause inflammation. Symptoms of type 1 diabetes might also be ameliorated by deliberate worm infection, which may block pathways taken by the immune cells that attack insulin-producing beta cells. While the evidence here is thinner, the future still looks bright for many autoimmune disease sufferers. More broadly, by countering the host's inflammatory response, the parasite helps regulate the immune response and blunts the self-directed attack that defines autoimmune disease. Researchers are currently exploring the use of specific worms for various diseases, including multiple sclerosis, while waiting for the United States Food and Drug Administration to grant other parasitic worms Investigational New Drug status.
Some are even interested in eventually creating helminth supplements for self-administration. The way helminthic infection has allowed individuals to manipulate their malfunctioning immune systems and manage some symptoms of their disorders holds promise both for the lives of other patients and for future investigation of autoimmune diseases.

Sources
Elliott, D.E., and J.V. Weinstock. "Helminthic Therapy: Using Worms to Treat Immune-mediated Disease." PubMed. U.S. National Library of Medicine, National Institutes of Health, 2009. Web. 29 Dec. 2010. <http://www.ncbi.nlm.nih.gov/pubmed/20054982>.
"Diseases Caused by Parasites: Malaria, River Blindness, Sleeping Sickness, and More." Parasitology: The Biological Science of Parasites, Their Hosts, and the Relationship Between Them. Prentice-Hall, Inc. Web. 28 May 2011. <http://parasitology.com/diseases/index.html>.
"Helminthic Therapy." Immunologica UK. Web. 29 Dec. 2010. <http://www.immunologica.co.uk/helminthic-therapy/>.
"Immunotherapy and Helminthic Therapy." The Medical News. News-Medical.Net. Web. 29 Dec. 2010. <http://www.news-medical.net/health/Immunotherapy-and-Helminthic-Therapy.aspx>.
Jabr, Ferris. "For the Good of the Gut: Can Parasitic Worms Treat Autoimmune Diseases?" Scientific American. 1 Dec. 2010. Web. 29 Dec. 2010. <http://www.scientificamerican.com/article.cfm?id=helminthic-therapy-mucus>.
"Multiple Sclerosis and Helminthic Therapy or Worm Therapy." Autoimmune Therapies. Web. 29 Dec. 2010. <http://autoimmunetherapies.com/candidate_diseases_for_helminthic_therapy_or_worm_therapy/multiplesclerosis_helminthic_therapy.html>.
“Rise Up People.” Waiting for the Cure. Wordpress 2011, 4 Dec. 2010. Web. 29 Dec. 2010. <http://waitingforthecure.com/I/2010/12/04/ rise-up-people/>.



potential cure for PARKINSON'S

by Selena Chen reviewed by Bruno Tota

art by sarah bhattacharjee


Parkinson's disease is a degenerative disorder of the central nervous system that affects motor skills, cognitive processes, and other functions of the human body. Often characterized by rigidity and postural instability, it can lead to tremors and difficulty with walking, movement, and coordination. Parkinson's disease affects over 6.3 million people throughout the world, 1.5 million of whom are Americans. Although it is a common disorder, there is presently no treatment that permanently cures it. Several medications, such as levodopa and carbidopa, have been successful in providing symptom relief by helping nerve cells replenish lowered levels of dopamine, a neurotransmitter essential to the brain circuits that control movement. Sometimes, however, the disease stops responding to drugs, and more serious cases may call for an alternative therapy known as Deep Brain Stimulation, or DBS. While the effect of medication on a patient may gradually wear off, scientists have found that DBS can succeed in lessening the effects of Parkinson's disease where drugs fail. This technique not only relieves the most common symptoms in patients but has also proven relatively safe. Deep Brain Stimulation is a surgical procedure used to treat disabling neurological symptoms that result from disorders like Parkinson's disease. Currently, it is used only for patients whose symptoms cannot be controlled with medication, and it has been shown to reduce the severity of symptoms such as tremor, rigidity, stiffness, slowed movement, and impaired walking. The treatment uses a neurostimulator, a surgically implanted, battery-operated device the size of a stopwatch, similar to a pacemaker.
Before a DBS procedure begins, a neurosurgeon locates the exact area within the brain where abnormal nerve signals are causing Parkinson's symptoms; this is where the stimulating electrode will be implanted (the neurostimulator itself is typically placed under the skin near the collarbone). Once the system is in place, electrical impulses travel from the neurostimulator along an extension wire to the electrode in the patient's brain. This stimulation is applied to areas of the brain controlling movement, blocking the nearby abnormal nerve signals that cause tremors and other symptoms. In 2008 and 2009, Frances M. Weaver, Ph.D., of Hines VA Hospital in Illinois, and several colleagues conducted a randomized trial comparing the results of DBS with those of medical therapy for Parkinson's disease. A total of 255 patients diagnosed with Parkinson's joined the trial and were randomly assigned to receive either deep brain stimulation or medical therapy monitored by movement-disorder neurologists. Sixty patients received DBS in the subthalamic nucleus, a central structure in the basal ganglia system, which helps control motor skills and learning; 61 patients received DBS in the globus pallidus, another structure in the basal ganglia; and 134 patients received medical therapy. After six months, researchers found that DBS patients gained an average of 4.6 more hours per day free of involuntary movement, while the medical therapy group gained essentially none. Interestingly, motor function was not merely maintained but significantly improved in deep brain stimulation patients: 71% of DBS patients experienced major improvement in motor function within six months,

compared to only 32% of medical therapy patients. Only 3% of DBS patients had clinically worsening scores, while 21% of medical therapy patients declined. Overall, patients who had undergone deep brain stimulation experienced significant improvements in quality of life and motor function compared with patients in the medical therapy group. Unlike earlier surgeries for Parkinson's disease, Deep Brain Stimulation does not damage healthy brain tissue by destroying nerve cells; it simply blocks electrical signals from targeted areas in the brain, making the procedure reversible and adjustable. This is another advantage of DBS: the stimulation can easily be altered if the patient's condition changes, without further surgery. Although most patients still require some medication after undergoing DBS, they generally reduce their drug consumption greatly and, as a result, experience the side effects of Parkinson's medications less frequently. Overall, Deep Brain Stimulation is proving a beneficial addition to the treatments available for Parkinson's disease. It has recently garnered attention among research centers and organizations nationwide that are eager to explore its full potential and refine its technique. For instance, the National Institute of Neurological Disorders and Stroke (NINDS) is supporting continued research on DBS to determine its safety, reliability, and effectiveness as a treatment for Parkinson's disease. NINDS-supported scientists are currently trying to determine, in particular, where in the brain DBS is most effective at reducing symptoms. Meanwhile, the US Food and Drug Administration has approved DBS for Parkinson's disease after its potential to alleviate symptoms and reduce drug consumption was demonstrated.
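The trial figures quoted above can be tallied in a few lines, as a quick sanity check using only the numbers reported in this article:

```python
# Group sizes from the randomized trial described above.
stn_dbs = 60    # DBS in the subthalamic nucleus
gpi_dbs = 61    # DBS in the globus pallidus
medical = 134   # medication managed by movement-disorder neurologists

total_patients = stn_dbs + gpi_dbs + medical
dbs_patients = stn_dbs + gpi_dbs

# Six-month outcomes as quoted in this article.
extra_hours_per_day = {"DBS": 4.6, "medical": 0.0}  # added daily hours free of involuntary movement
major_improvement = {"DBS": 0.71, "medical": 0.32}  # fraction with major motor improvement

print(total_patients)  # -> 255 participants in all
print(dbs_patients)    # -> 121 received DBS
print(round(major_improvement["DBS"] - major_improvement["medical"], 2))  # -> 0.39
```

The two DBS arms together account for 121 of the 255 participants, and the gap in major motor improvement between DBS and medical therapy is 39 percentage points.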
Deep Brain Stimulation is still new to medical technology, but it has shown its potential to become a great addition to the treatment of Parkinson's disease. As with many other medical interventions, DBS may be further improved by the development of nano-scale devices.

Works Cited
"Deep Brain Stimulation for Parkinson's Disease Information Page." National Institute of Neurological Disorders and Stroke (NINDS). Web. 05 June 2011. <http://www.ninds.nih.gov/disorders/deep_brain_stimulation/deep_brain_stimulation.htm>.
"Deep Brain Stimulation Treatment For Advanced Parkinson's Disease Patients Provides Benefits, Risks." ScienceDaily. Web. 05 June 2011. <http://www.sciencedaily.com/releases/2009/01/090106161510.htm>.
Marcus, Mary Brophy. "Deep Brain Stimulation Brings Good and Bad for Parkinson's." USATODAY.com. Web. 05 June 2011. <http://www.usatoday.com/news/health/2009-01-06-parkinsons-brain-stimulation_N.htm>.
"Parkinson's Disease Information Page." National Institute of Neurological Disorders and Stroke (NINDS). Web. 05 June 2011. <http://www.ninds.nih.gov/disorders/parkinsons_disease/parkinsons_disease.htm>.
Talan, Jamie. "Parkinson's Disease - Symptoms, Diagnosis, Treatment of Parkinson's Disease." The New York Times. Web. 05 June 2011. <http://health.nytimes.com/health/guides/disease/parkinsons-disease/overview.html>.



made with 100% all natural

SEA SALT

by Rebecca Kuan

art by evan simpkins


The words "100% Sea Salt" catch the eyes of shoppers cruising the aisles of a grocery store, reminding them in large, bold print of this natural version of salt. Sea salt is what remains after the evaporation of seawater and consists of sodium chloride plus trace elements like sulfur, magnesium, zinc, potassium, calcium, and iron. Manufacturers have recently been placing emphasis on its purported health properties; however, are these claims rooted in scientific fact, or are they simply an elaborate marketing ruse? Even though sea salt is heavily marketed in the 21st century as a natural and healthier alternative to regular table salt, the actual evidence suggests that there is virtually no difference between the two. There are many different types of salt with the chemical formula NaCl. A basic classification divides the salt "family" into two groups: coarse salts and finishing salts. Coarse salts have large grains, while finishing salts are refined salts with delicate flakes and moist crystals that dissolve very quickly in water thanks to their greater surface area. Many finishing salts are prized as specialty salts, harvested by hand in particular regions around the world and used for beautiful table-side presentations. Sea salt is considered a coarse salt, and regular table salt a finishing salt. Sea salt is derived directly from ocean or sea water with little or no processing. Since all the minerals from the water are left intact, the coarseness, flavor, color, and elemental makeup of sea salt depend on the water source. In fact, because of the varying mineral constituents, there are many types of sea salt, including flake salt, grey salt, French sea salt, Hawaiian sea salt, Italian sea salt, and smoked sea salt. Table salt, meanwhile, is mined from underground salt deposits and has most of its minerals removed. Additives are then introduced: iodine, a nutrient found naturally in trace amounts in sea salt, as well as anticaking agents to prevent clumping.
Table salt is more common and less expensive than sea salt, even though both have the same basic nutritional value. Salt is an important component of our diet, as it is a main source of the 1500-2300 mg of sodium recommended each day. Sodium, most importantly, ensures that nutrients pass into cells and maintains the balance of body fluids by regulating the amount of water moving into and out of cells, both crucial tasks. Sodium deficiency can lead to hyponatremia, a serious condition in which excess water enters cells and causes them to swell, producing symptoms ranging from nausea and headaches to seizures and coma. Additionally, salt plays an indispensable role in maintaining blood pressure. Although the common belief is that salt only raises blood pressure, it actually helps regulate blood pressure and stabilize irregular heartbeats. If blood pressure is too high, the risk of illnesses like heart disease greatly increases; on the other hand, if blood pressure is too low, weakness and exhaustion may result. The American College of Obstetrics and Gynecology has also found that low-salt diets during pregnancy are associated with higher rates of stillbirths and low-birth-weight infants. There is also evidence that the aging body in particular needs a stable sodium level, because sodium retention in the kidneys decreases with age, raising the risk of hyponatremia. If salt is such an important part of the human diet, why is there such a stigma attached to it? The primary explanation is that our increasingly organic-oriented market has deemed salt a "processed food," claiming that it has been marred by chemical additives. Marketers proceed to claim that sea salt is healthier because it has not been altered by humans. Even some physicians believe that sea salt offers more health benefits.
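For scale, the sodium guideline above can be translated into grams of salt: sodium makes up roughly 39% of sodium chloride by mass, so 1500-2300 mg of sodium corresponds to about 3.8-5.8 g of salt per day. A quick sketch of the conversion (the function name is my own):

```python
# Molar masses in g/mol; sodium is ~39% of NaCl by mass.
NA_MASS = 22.99
NACL_MASS = 58.44
SODIUM_FRACTION = NA_MASS / NACL_MASS  # ~0.393

def salt_grams_from_sodium_mg(sodium_mg: float) -> float:
    """Grams of salt (NaCl) containing the given milligrams of sodium."""
    return sodium_mg / 1000.0 / SODIUM_FRACTION

for target_mg in (1500, 2300):  # recommended daily sodium range
    print(f"{target_mg} mg sodium ~ {salt_grams_from_sodium_mg(target_mg):.1f} g salt")
```

Because this ratio is fixed by the chemical formula NaCl, it is identical for sea salt and table salt.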
However, to date, no studies have conclusively shown any difference between sea salt and table salt. Sea salt and table salt have the same chemical makeup, and therefore sea salt is no more nutritious than regular table salt. The human body's need for salt stems from its need for sodium and chloride, two elements the body cannot make on its own, and the amount of sodium and chloride in table salt and in sea salt is the same. This identical nutritional value ensures that all the health benefits of salt are present regardless of whether sea salt or table salt is consumed. In regard to human health, then, sea salt and table salt are essentially indistinguishable. Though salt's numerous health benefits make it an essential component of the human diet, the widely marketed "healthy, all-natural" sea salt is no different from regular table salt. The major distinctions between the two are price and source, with sea salt harvested by evaporating bodies of water and table salt refined from mined salt. The next time you see an advertisement for a product made with 100% all-natural sea salt, it may be better to avoid spending extra money on nonexistent health benefits.

Works Cited

“Dead Sea Salts Research and Studies.” San Francisco Bath Salts Company | The Authority on Bath Salts. Web. 15 Nov. 2010. <http://www.sfbsc.com/dead-sea-salt-research>. “Kosher vs. Table vs. Sea Salts : Recipes and Cooking : Food Network.” Food Network - Easy Recipes, Healthy Eating Ideas and Chef Recipe Videos. Web. 8 Nov. 2010. <http://www.foodnetwork.com/recipes-and-cooking/kosher-vs-table-vs-sea-salts/index.html>. Reynolds, Denise. “Sea Salt Versus Table Salt: Is One Better Than The Other?” Healthy Theory. Web. 23 Aug. 2011. <http://www. healthytheory.com/sea-salt-versus-table-salt-is-one-better-than-the-other>. “Salt Health Benefits - Vital for a Healthy Life.” Salt Health. Web. 15 Nov. 2010. <http://www.salthealth.org/?content=vitalmedicine>. “Sea Salt & Gourmet Salts - Guide | SaltWorks.” Sea Salts & Bath Salts | SaltWorks. Web. 8 Nov. 2010. <http://www.saltworks.us/ salt_info/si_gourmet_reference.asp>. “Sodium Chloride Balance and Salt Balance.” Salt Health. Web. 15 Nov. 2010. <http://www.salthealth.org/?content=essence>. “What Does Sodium Do for the Body? - Life123.” Articles and Answers about Life - Life123. Web. 8 Nov. 2010. <http://www.life123.com/ health/vitamins/sodium/what-does-sodium-do-for-the-body.shtml>. Zeratsky, Katherine. “Nutrition and Healthy Eating.” MayoClinic.com. Mayo Clinic, 27 Aug. 2009. Web. 8 Nov. 2010.



THE TOXIC BEAUTY OF

COSMETICS

art by cassie sun

by Serin You



Cosmetics are a key component of fashion nowadays: the average American uses up to ten personal care products a day. In the 20th century, under the influence of theatre, the arts, and Hollywood, makeup was established as a sign of fashion, and products such as sunscreen, hair dye, and deodorant were widely introduced. However, as cosmetics become increasingly popular and widely used, pressing problems are emerging, namely the questionable ingredients in cosmetics and their possible effects on users. Makeup may seem appealing, but its contents are not as pretty and attractive as its packaging. Some cosmetics are more toxic and less regulated than most people assume, leading them to carelessly use toxic makeup. Substances often found in shampoos, conditioners, mascaras, eyeshadows, moisturizers, and lipsticks, such as parabens, 1,4-dioxane, and neurotoxins, may lead to increased cancer risk, hormone disruption, learning disabilities, and asthma. It may seem an exaggeration to say that the low concentrations of these chemicals could cause cancer, but a nationwide average of ten products a day makes these chemicals a genuine health concern. Parabens are a class of chemicals used to lengthen the shelf life of many cosmetic products, yet they also mimic hormones like estrogen and increase the risk of breast cancer. They have been found in breast cancer tumors, albeit in low concentrations. Meanwhile, 1,4-dioxane is often produced in trace amounts as a byproduct during the manufacture of certain makeup ingredients. It has been implicated as a carcinogen, a chemical compound that triggers mutations in the DNA of a normal cell and leads to uncontrollable cell growth, or cancer. In addition to increasing the risk of cancer, 1,4-dioxane also causes eye and skin irritation. Neurotoxins, which are also found in some personal care products, can directly affect nerve cells, or neurons, by interfering with membrane proteins and ion channels.
Many types of venom found in organisms like spiders, snakes, and scorpions are neurotoxins that cause paralysis. Lead, found in many lipsticks, is a common neurotoxin that damages the nervous system and causes brain and blood disorders. If makeup is so unhealthy, why is it manufactured and marketed all around the world—and more specifically, in the United States? Government regulation on personal care products and cosmetics is fairly weak, and the Food and Drug Administration (FDA) does not provide sufficient assistance in regulating safe cosmetics. It has only assessed 20% of chemicals found in cosmetics for safety and has only banned eight products from the market since 1938. Furthermore, the FDA does not require a
full list of ingredients on cosmetic products, generating the issue of label dishonesty. Various cosmetics companies label their products "natural," "herbal," or "organic," but these claims could be false and meaningless, since the government does not inspect their validity. To prevent further health issues with toxic makeup, several solutions are currently being implemented. One solution toward healthier cosmetics is a federal mandate for the complete removal of toxic substances from makeup. Many European countries, for example, have already banned certain harmful chemicals from cosmetics. An alternative would be a new law permitting the FDA to regulate cosmetics more thoroughly, so that consumers can feel comfortable and safe using all cosmetic products. Meanwhile, green chemists are starting to develop nontoxic substances that can be used as alternatives to harmful chemicals. If their efforts are successful, makeup users would not have to worry about hazardous ingredients like carcinogens, neurotoxins, and parabens. To improve consumer safety, the FDA currently asks to be notified of any health problems, such as rashes or infections, that follow the use of cosmetics. Once the FDA identifies a trend in symptoms, it will start removing the unsafe products responsible. Until further steps are taken to improve cosmetics, there are some alternatives to using hazardous ones. For instance, instead of using store-bought products that contain hazardous chemicals, using homemade cosmetics, or simply fewer cosmetics, is a safer option.

Works Cited

Campaign for Safe Cosmetics: Index. Web. 13 Apr. 2011. <http://safecosmetics.org/index>.
Considine, Glenn D. "Carcinogen." Van Nostrand's Scientific Encyclopedia. 9th ed. Vol. 1. New York: John Wiley and Sons, 2002. Print.
Makeup-artist-world.com. Web. 27 Nov. 2010. <http://www.makeup-artist-world.com/historyofmakeup.html>.
Moyer, Melinda Wenner. "The New Toxic Threats to Women's Health." Glamour 1 Mar. 2011: 177-78.
Safecosmetics.com. Web. 5 Apr. 2011.
"Neurotoxin." ISCID - International Society for Complexity Information and Design. Web. 4 Dec. 2010. <http://www.iscid.org/encyclopedia/Neurotoxin>.
O'Connor, Siobhan, and Alexandra Spunt. No More Dirty Looks: The Truth about Your Beauty Products and the Ultimate Guide to Safe and Clean Cosmetics. New York: Da Capo Lifelong, 2010. Print.
Walsh, Bryan. "About Face - Toxic Cosmetics - TIME." TIME.com. 7 July 2010. Web. 13 Apr. 2011. <http://www.time.com/time/specials/packages/article/0,28804,2002338_2002332,00.html>.



VAN GOGH

Van Gogh's Faded Artwork

by Kristine Paik

artwork by claire chen

Vincent van Gogh may have been shrouded in anonymity during his lifetime, but he is widely recognized today as a painting genius. For more than 120 years, his artwork has been the object of love, scrutiny, and reverence all over the world. Most famous for his use of vibrant colors in impressionist artworks, Van Gogh created many works, including Starry Night and Sunflowers, that are heavily studied and used as references in modern-day art and art history classes. In creating his masterpieces, Van Gogh expressed his moods through his vibrant colors. In recent years, however, scientists and historians have been dismayed by the decay of his paint colors from lively to muddy, a change that misrepresents his original emotions. Van Gogh had a special interest in chrome yellow and often mixed the bright color with white lead-based paint to produce an even lighter color. Chrome yellow was a relatively new invention in the early 1800s, created as a version of lead chromate, which had an orange tint to it. Initially extracted from the mineral crocoite by the French chemist Nicolas Louis Vauquelin, lead chromate contains the element chromium, which produces an intense yellow color. By Van Gogh's time in the late 1800s, lead chromate was widely used in paint because chrome pigments tended to dry faster and had more permanence, qualities that painters greatly desired. Unfortunately, lead chromate is toxic. The lead in the paint, when ingested, poisons the body and causes health problems by inhibiting organ and brain function, as shown by recent studies. Lead disrupts normal cell function by being absorbed through the channels that allow calcium to enter cells, thus affecting the central nervous system and leading to both nerve and muscle problems. High doses of lead ultimately cause death. However, since Van Gogh had no knowledge of these dangers, he
used many lead-based paints in his work and sometimes ingested the hazardous material through his habit of biting his nails. Since crocoite is a rare mineral, chrome yellow paint is now often created artificially by dissolving lead(II) nitrate and potassium chromate separately in water and then mixing the two solutions, producing a precipitate that, when dried, possesses the same bright chrome yellow that Van Gogh used in his artwork in the 19th century. Scientists have discovered that chrome yellow is chemically unstable: probing the paint with extremely fine X-ray beams, they recorded that UV light in sunlight stimulates chemical reactions in the pigment, causing fading and browning to darker shades. The deterioration of the paints happened at different rates, which confused many scientists, but in general, the yellow and white paint that Van Gogh commonly mixed together to lighten the chrome yellow reacted under exposure to sunlight and darkened over time. Sunlight altered the chromium in the yellow paint, reducing it from its bright Cr(VI) form to a darker Cr(III) form, in the presence of barium sulfate, which the manufacturers of Van Gogh's day added to lead white paint. Unlike past painters, Van Gogh was part of the generation that bought their paints instead of making their own, which meant painters were unaware of which chemicals were used to produce their colors. The research team under Koen Janssens that studied the changes in Van Gogh's art concluded that the intense yellow turned into a darker hue after finding the contaminant barium sulfate in darkened paint from Van Gogh's Bank of the Seine and View of Arles with Irises. Paint containing both chrome yellow and this barium- and sulfur-bearing white was chemically unstable and tended to darken faster in sunlight, as shown by works where Van Gogh mixed his favorite chrome yellow with white.
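The modern laboratory preparation described above is a standard double-displacement precipitation; written out (textbook chemistry, not from the original article):

```latex
\[
\mathrm{Pb(NO_3)_2\,(aq) \;+\; K_2CrO_4\,(aq) \;\longrightarrow\; PbCrO_4\,(s)\!\downarrow \;+\; 2\,KNO_3\,(aq)}
\]
```

The insoluble lead chromate settles out as the bright yellow solid, while the potassium nitrate remains in solution.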

This new knowledge suggests that art galleries should move Van Gogh's artworks into darker rooms in order to keep the paints from undergoing further chemical change and losing the intensity the painter intended to portray. Koen Janssens and the team of scientists who discovered the reasons for the color changes in Van Gogh's masterpieces have now begun new studies in hopes of reverting the darkened paints to their original intense colors. Van Gogh may have suffered from intense mental complications and lead poisoning from chewing on his paint-covered nails, but he had a way of conveying emotions through his painting techniques and the new types of paints that resulted from the Industrial Revolution. In order to preserve the Dutch artist's feelings for the next generation, it is imperative that the colors be kept from further deterioration.

Works Cited

Rothman, Joshua. "Van Gogh's Fading Flowers." boston.com. 20 February 2011. Web. 13 March 2011. <http://www.boston.com/ae/theater_arts/articles/2011/02/20/van_goghs_fading_flowers/>.
Moseman, Andrew. "Faded Sunflowers: Why Van Gogh's Yellows Are Turning Brown." Discover Magazine. 15 February 2011. Web. 13 March 2011. <http://blogs.discovermagazine.com/80beats/2011/02/15/faded-sunflowers-why-van-goghs-yellows-are-turning-brown/>.
Derbyshire, David. "Have Scientists Finally Discovered Why Van Gogh's Paintings Are Turning Brown?" Daily Mail. 15 February 2011. Web. 13 March 2011. <http://www.dailymail.co.uk/sciencetech/article-1356914/Van-Goghs-paintings-turning-brown--scientists-discover-why.html>.
The Van Gogh Gallery. 15 January 2011. Templeton Reid, LLC. Web. 13 March 2011. <http://www.vangoghgallery.com/>.
Douma, M., curator. Chrome Yellow. In Pigments through the Ages. 2008. Web. 13 March 2011. <http://www.webexhibits.org/pigments/indiv/history/cryellow.html>.


by Matthew Morris

Yacht designers work hard and use exotic materials to save just a few pounds when designing a racing yacht. Imagine saving thousands of pounds and ending up with a less expensive boat that is safer and faster. Using hydrodynamic force instead of gravity to produce righting moment is an innovation that offers these benefits and more.

Ballast weight has been used to stabilize sailboats throughout history, from ancient times to the modern racing yacht. The very first approach was to stack stones in the bilge, or bottom, of the boat. Effective but not efficient, this approach remained basically unchanged for thousands of years. Ballast stones were used in ships from before the time of Christ to the fast clipper ships of the nineteenth century. Over the last 150 years, refinements in keel design and ballast location have improved stability and performance. Fixed keel designs evolved from full keels, running the length of the vessel, to the fin keels typical of sailboats today. Ballast moved from stones piled inside the hull to deep, high-aspect keels ballasted with lead outside the hull.

The modern ballasted fin keel serves two functions. Because it has a center of mass well below the hull, the modern keel with a ballast bulb does a much more efficient job of keeping a boat upright than internal ballast. The second function of the keel is to act as an underwater wing or, in technical terms, a vertical hydrodynamic lifting surface, which keeps the boat from slipping sideways when sailing upwind. Sailboats cannot sail directly into the wind; however, the efficiency of the modern fin keel allows boats to sail much closer to the wind than sailboats could 100 years ago. Refinements of the fixed fin keel continue with the addition of trim tabs, fixed wings, and ballast bulbs. The latest embodiment is to cant, or rotate, the keel to maximize the leverage the ballast bulb has against the force of the wind. However, the tradition of using the force of gravity has remained the primary driver to counteract the force of the wind.

A number of different open-ocean class sailboats use canting keels. The "Volvo Open 70" class best exemplifies the design challenges associated with the canting keel. VO70s use a 10,000 pound lead bulb at the end of a thin, 14 foot long blade. Hydraulic rams are used to rotate the keel up to 40 degrees from vertical, generating approximately 100,000 newton-meters of righting moment.

My interest in canting keels began with an offer to crew on what was, at the time, one of the fastest monohull sailboats in the world. This thoroughbred racing yacht had a canting keel. I was very excited to have the opportunity to sail on a racing yacht that represented the state of the art in naval architecture and marine engineering. My excitement, however, was short lived. The vessel suffered a failure of the rudder post, not the keel, before the race, which put the boat in dry dock and out of commission. I later learned that, although very fast, "canters" were also very fragile. Even the yacht that I was scheduled to race on had a history of mechanical breakdowns and had finished only one of the four races it started because of structural failures. In fact, mechanism and structural failures were very common across the entire fleet of canters. Seeing the boat out of the water was an eye opener. Standing next to the huge keel and ballast bulb gave me, for the first time, an appreciation of the magnitude of the powerful forces that have to be managed to swing these massive loads.

There are many issues that sailboats using canting keels must conquer. Though reliability will inevitably improve over time, these are some issues that cannot be eliminated through better engineering or improved design:

The hydraulic rams work against a significant mechanical disadvantage when canting the keel and ballast bulb, and must generate at least 25,000 lbs. of force just to hold the keel in place when canted.

A large structural framework is required to support the reaction loads and isolate the hull from the forces generated by the large hydraulic rams.

Ballast weight is a large component of the gross weight of the vessel. Righting moment requirements are great when sailing upwind, but the same ballast weight is carried when sailing downwind even though it is not needed.

The 10,000 lbs. ballast bulb represents over 60 percent of the overall weight of the 16,000 lbs. vessel and is suspended 14 feet below the boat on a very thin, high-aspect keel. The inertia of the bulb must be overcome when changing course, generating huge torsional loads on the keel. Many failures have resulted in the bulb snapping off the keel.

Canting sailboats require the addition of canards or dagger boards to replace the loss of the primary underwater lifting surface, adding significant complexity and underwater surface area to the vessel. Lowering and raising the dagger boards need to be coordinated with motion of the keel when

The Winged Keel

tacking or jibing. The variety of extreme dynamic sea conditions makes it very difficult to build in sufficient design margins. Insufficient design margin is a major factor behind canting keel mechanism reliability and subsequent safety issues. The canting mechanism requires the use of dynamic seals. Catastrophic failure and subsequent keel damage could cause seal leakage large enough to endanger the vessel and crew. The electrical power needed to generate the hydraulic pressure for the rams is significant and difficult to generate on a sailboat. It takes approximately ten seconds to tack or jibe with a canting keel. Even with these issues, sailboats using canting keels retain the status of being the world's fastest ocean-going monohulls.

Standing next to the boat, I wondered if there was a better way to generate righting moment other than by moving lead ballast weight alone. I had an idea for a simple mechanism to control a wing attached to the bottom of the keel. This wing would use the flow of water, or hydrodynamics, to generate righting moment. Some ballast is required for initial stability, but a significant percentage of ballast weight could be eliminated by using hydrodynamic force to supplement the gravitational force from the inertial mass of the keel and ballast bulb. The wing would be attached to a shaft at an obtuse angle, so that as the shaft rotates, the angle of attack of the wing pitches down. The wing would be rotated straight back when sailing downwind and rotated forward when sailing upwind. The weight of the ballast bulb could be reduced by up to 50 percent with a small penalty of additional drag from the wing. Since I had not seen or heard of anything like it before, I did a patent search on the concept and filed an application with the US Patent and Trademark Office. I was granted an international utility patent on the concept in 2009.

Performance and safety advantages of this concept include:

The same righting moment (100,000 N·m) as the canted ballast bulb at 15 knots, with two square meters of additional wetted area from the wing. That would be the maximum wetted-area penalty if you were to replace all of the ballast weight; the actual wing area would depend on the percentage of ballast replaced by hydrodynamic force and the weight of the wing.

The keel is fixed, with no loss in lifting surface. There is no need for supplemental dagger boards.

The reduction in overall weight reduces the overall wetted area and drag.

Time to tack and jibe is reduced by 50 percent.

No seals are required to seal the actuator shaft. The shaft tube can be extended above the water line, similar to a centerboard trunk.

The wing and shaft could completely break off the keel without endangering the vessel or crew.

Since rotating the wing does no work to lift the ballast weight against gravity, power requirements are reduced from 289 horsepower (10 seconds to tack a canting keel) to 29 horsepower (5 seconds to tack a wing keel).

Like winglets on an airplane wing, the horizontal wing on the bottom of the keel reduces the effect of tip vortices. Turbulence from the wing tips reduces lift; the plate effect from the wing increases the effective aspect ratio of the keel.

Fore and aft hull trim is automatically

accommodated as the wing rotates. The center of mass of the wing is aft when sailing downwind and moves forward when sailing upwind, eliminating the need for water ballasting to prevent pitch-poling when sailing downwind.

Stresses are confined to the keel. No internal structural framework is needed to support the loads. The drive motor can be mounted to the keel bolts. Hydraulics are not needed, as the loads are small enough to rotate the wing with a gearhead DC motor.

Righting moment is a function of speed. The faster the boat goes, the more force is generated by the wing.

I described the hydrodynamic lifting blade concept to a number of prominent yacht designers. Dr. Len Imas, Associate Professor at Stevens Institute of Technology's Davidson Labs (who did the original analysis and tow tank testing on the Volvo 70 design), along with other researchers at the lab, took the time to explain the structural engineering challenges to me in detail. Being able to meet and talk with some of the most respected scientists and designers in the industry was, so far, the most rewarding aspect of this project. They all agreed that the idea had merit. They also agreed that the approach could theoretically provide the forces required and make the boat faster, safer, and even less costly. However, the design would have to be proven before an investor would spend the money necessary to build a racing yacht using my keel concept.

My project was intended as an initial proof of concept for the hydrodynamic wing. To test the concept, I built a remote-control functional model capable of being fitted with either a canting or a hydrodynamic keel. I did not have access to a fluids lab, so I built my own open flow bench using a rebuilt 4 horsepower Jacuzzi pump. My high school physics teacher supplied the instrumentation and data acquisition software. I was able to measure upwind and downwind drag, along with righting moment as a function of heel angle, upwind. This was a comparative test only, as the model was too small to generate results that would apply at full scale.

Works Cited

Authorities, G. o. (1967). Principles of Naval Architecture. (J. Comstock, Ed.) New York: The Society of Naval Architects and Marine Engineers.
Browning, J. (2007, January 10). The Cutting Edge of Yacht Design Today. Retrieved January 25, 2011, from Sail Texas: http://www.sailtexas.com/modernyachtdesign finalb.html
Davis, D. (2005, August). How It Works: Volvo Open 70 Offshore Sailboat. Retrieved April 5, 2010, from Popular Mechanics: http://www.popularmechanics.com/outdooors/boating/1681731.html
Eshback, O., & Souders, M. (1975). Handbook of Engineering Fundamentals. New York: John Wiley and Sons.
Harrington, S. (2010, December 2). Open Flow Bench and Keel Design. (M. Morris, Interviewer) Carlsbad.
Imas, L., & Delorne, M. (2009, June 24). Sailing Vessel Keel Design. (M. Morris, Interviewer) Hoboken, New Jersey.
Larson, L., & Eliasson, R. (2000). Principles of Yacht Design. Blacklick: International Marine.
Lee, B. (2010, June 5). Keel Design Concept. (M. Morris, Interviewer) Santa Cruz.
Pugh, J. (2010, April 20). Keel Design Concept. (M. Morris, Interviewer) San Diego.
Rousmaniere, J. (1995). Annapolis Book of Seamanship. New York: Simon and Schuster.
Spurr, D., & Wadson, T. (2010, October/November). Juan K. Professional Boat Builder, pp. 72-79.
Stewart, G. (2010, April 15). Keel Design Concept. (M. Morris, Interviewer)
Streuli, S. (2008, December). Maxi'd Out. Sailing World, pp. 24-33.
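As a closing sanity check on the VO70 figures quoted earlier (a 10,000 lb bulb on a 14 ft blade, canted 40 degrees, producing roughly 100,000 newton-meters), here is a back-of-the-envelope estimate. It is only an order-of-magnitude sketch: it ignores the blade's own mass, hull heel, and hydrodynamic effects.

```python
import math

# Rough check: righting moment ~ bulb weight x blade length x sin(cant angle).
LB_TO_KG = 0.45359237
FT_TO_M = 0.3048
G = 9.81  # m/s^2

bulb_mass = 10_000 * LB_TO_KG   # 10,000 lb lead bulb, in kg
blade_len = 14 * FT_TO_M        # 14 ft blade, in m
cant = math.radians(40)         # keel canted 40 degrees from vertical

righting_moment = bulb_mass * G * blade_len * math.sin(cant)
print(f"{righting_moment / 1000:.0f} kN*m")  # same order as the ~100,000 N*m quoted
```

The estimate comes out near 120 kN·m, consistent in magnitude with the article's figure once real-world losses are accounted for.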


Position and Vector Detection of Blind Spot Motion with Horn-Schunck Optical Flow
by Mike Wu and Stephen Yu

Blind spot negligence is a leading cause of car accidents. Checking blind spots is a crucial component of turns and lane changes, and when careless drivers forget, a crash may result. While blind spot technology does exist (many institutions have built radar systems or extended mirrors to try to prevent blind spot collisions), the goal of our project was to create a cost-effective method of extracting information about motion in the blind spot. Using Matlab and a video camera, the main method used to reach our goal was Horn-Schunck optical flow. This algorithm examines pixel movement between two subsequent frames to generate a vector of motion. In any frame of a video, every pixel has a specific brightness value. Because of this, it is possible to track the brightness of an individual pixel between frames and calculate the distance it moved. The vectors were categorized into two groups: car vectors and background vectors. Since our goal concerned the rear blind spot, we generalized westward vectors to be car vectors (color-coded green). The remaining vectors were designated background (opposing movement) and colored red. To track the position of the car, a method called box capture was developed. Essentially, a blue box was created to surround a car when one is present and follow it through the frames. Before the parameters of the box can be specified, the presence of a vehicle must be confirmed; thus, the ratio of car to background vectors had to be at least 1:10 for a box to even be considered. Once the vehicle was confirmed, the initial center of the box was set to the average (x, y) coordinates of all the green car vectors, since each point in the frame lies on an x-y grid. The size of the box was also allowed to change: in the real world, as the blind spot car approaches the driving car, its apparent size increases.
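As a sketch of the algorithm just described (not the authors' Matlab implementation), a minimal Horn-Schunck iteration can be written in a few lines of NumPy. The frame size, Gaussian-blob test pattern, and parameter values below are illustrative assumptions:

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=200):
    """Minimal Horn-Schunck optical flow between two grayscale frames."""
    Iy, Ix = np.gradient(im1)   # spatial brightness gradients
    It = im2 - im1              # temporal brightness change
    u = np.zeros_like(im1)      # horizontal flow
    v = np.zeros_like(im1)      # vertical flow

    def local_avg(f):
        # 4-neighbour mean of the flow field (wrap-around borders).
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

    for _ in range(n_iter):
        u_avg, v_avg = local_avg(u), local_avg(v)
        # Residual of the brightness-constancy constraint Ix*u + Iy*v + It = 0.
        resid = (Ix * u_avg + Iy * v_avg + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_avg - Ix * resid
        v = v_avg - Iy * resid
    return u, v

# Demo: a Gaussian blob that shifts one pixel to the right between frames
# should produce mostly positive (rightward) u vectors.
yy, xx = np.mgrid[0:32, 0:32]
frame = lambda cx: np.exp(-((xx - cx) ** 2 + (yy - 16.0) ** 2) / 8.0)
u, v = horn_schunck(frame(15.0), frame(16.0))
w = frame(15.0)
mean_u = (u * w).sum() / w.sum()
print(f"blob-weighted mean u = {mean_u:.3f} (positive = rightward)")
```

In the article's terms, vectors pointing in the designated "car" direction would be classified green and the rest red.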
In order to account for this and make the box enlarge to match, the box size was made directly proportional to the car-to-background ratio (see Figure 1). Therefore, as the car gets closer, it generates more car vectors, allowing the box to get bigger. One thing to note is that optical flow is not a perfect system. When creating a vector, it searches for the closest pixel similar in brightness, which may not always be the right pixel. Thus, computational errors create green vectors where no forward motion is present; these "outliers" skew the center of our box. To fix the problem, a standard deviation filter was used: from the initial center, all vectors three deviations away were eliminated and the center was recalculated. The process was repeated in a rinse-and-repeat cycle with two and then one standard deviations until a very accurate box was generated. From here, the remainder of the position coding focused on making sure that the box created is the right box. Hence, numerous thresholds were applied to limit box movement and prevent inaccuracy. For example, at least 50% of the green car vectors had to be located in the final box, or else we nullified the box based on insufficient evidence of the presence of a car. In addition, it is illogical for a box to appear on the right side of the frame and suddenly jump to the left side within a single second. Since it can be assumed that the second box generated is faulty, a movement threshold was put on the box so that, provided a stable box had existed for 5-10 frames, any box that deviated a substantial horizontal distance from the previous frame could be nullified. In a nutshell, all the thresholds applied help make sure that our system isn't showing a blind spot car when in fact nothing is there (see Figure 2). With a more solid calculation of position achieved, the limits of our source code were tested. Since optical flow is dependent on cameras and visual cues, we were hesitant to assume that the system would work at night. However, the headlights of the car produced sufficient vectors for a box to generate and enlarge. Not only that, the lampposts in the distance and the headlights of the opposing cars created vectors that served as background. Furthermore, to avoid limiting the detection to specific vehicles, videos of motorcycles and bicyclists were taken and put through the source code. The result was as expected: the objects were detected and tracked throughout the frame. In short, our system was able to detect anything with forward motion in the blind spot. This is a big improvement because it can help alert drivers to pedestrians or cyclists that may be ignored by other blind spot detection systems. Up to this point, the code was only able to detect vehicles approaching from the rear blind spot.
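The rinse-and-repeat standard-deviation filter described above can be sketched as follows. This is a pure-Python illustration: the 3-2-1 clipping schedule follows the article, but the sample points and the exact clipping criterion (distance to the running center) are our own assumptions.

```python
import statistics

def robust_center(points):
    """Refine a cluster center by discarding points more than 3, then 2,
    then 1 standard deviations (of distance-to-center) away."""
    pts = list(points)
    for k in (3, 2, 1):
        cx = statistics.mean(p[0] for p in pts)
        cy = statistics.mean(p[1] for p in pts)
        dists = [((px - cx) ** 2 + (py - cy) ** 2) ** 0.5 for px, py in pts]
        sd = statistics.pstdev(dists)
        kept = [p for p, d in zip(pts, dists) if d <= k * sd]
        if kept:  # guard against clipping every point away
            pts = kept
    return (statistics.mean(p[0] for p in pts),
            statistics.mean(p[1] for p in pts))

# A tight 3x3 cluster of "car vector" origins around (10, 10) plus two outliers
# standing in for spurious green vectors.
cluster = [(x, y) for x in (9, 10, 11) for y in (9, 10, 11)]
center = robust_center(cluster + [(50, 50), (0, 40)])
print(center)  # close to (10, 10)
```

Each pass recomputes the center without the most extreme points, so a handful of spurious vectors cannot drag the box away from the car.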
However, there are specific instances where optical flow collapses and cannot find an object where there is one, such as a car that approaches from the front and falls back into the blind spot. All the vectors generated by the car and the background would be red background vectors; essentially, the car diffuses into the scenery, since there is no forward motion. Furthermore, optical flow cannot cover the situation in which a car enters the blind spot and stays there at relatively the same velocity as the driving car. Since the speeds are equal, minimal vectors will be generated and the blind spot car will not be detected, leaving room for possible accidents. To solve these problems, a new method was developed: the Hough transformation for wheel detection, which allows circles to be found within an image. When applied to a car video frame, this method allows the wheels of vehicles to be found. Through trials, the Hough transformation was paired with edge detection, because edge detection allows excess pixels and noise to be deleted. Since only outlines of objects are considered, this reduced the time it took to find the wheels (see Figure 3). A great application of the Hough transformation is multiple-object overtaking. Sometimes there are several objects in your blind spot: perhaps one car in the adjacent lane, another car two lanes away, and another car three lanes away. It was crucial that our code could differentiate the vehicles in the case of heavy traffic. To solve this, a stereo depth filtering system was used: two cameras were placed at a converging angle to give a measure of distance, mimicking human eyes. Each pixel's brightness in the grayscale output images was directly related to its distance from the camera, making it possible to isolate a single lane of cars with a threshold while running optical flow or edge detection and the Hough transformation on the isolated image. The depth filtering can be cross-applied to create a unique blinking system, because in the end the goal is to let the driver know that an object is in the blind spot with a blinking light placed under the wing mirror. The presence of a car in the adjacent lane would shine a red light, signaling danger; the presence of a car two or three lanes away would shine a yellow light, showing that caution should be taken for potential danger; the lack of any cars would shine a solid green light (see Figure 4). In conclusion, the dual-layered system of optical flow on top of the Hough transformation provided a solid depiction of position. If used, it would be able to detect the presence of an object (car) in the vast majority of situations. But of course, every system has its limitations. The major problem with the source code is time lag: it takes two to three minutes to process a five-second video, which is unrealistic for a real-world setting. Thus, a memory system was developed to track the box through the frames so that, instead of scanning the entire frame, only the region around the previous box's position would be scanned.
Small things like that helped reduce the time so that, with better technology, the program could be easily run in real time. Overall, the findings of this project could provide the first steps to reducing car accidents for safer driving.
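For reference, the circle-finding core of the Hough transformation used for wheel detection reduces to a voting procedure over candidate centers. This toy version assumes the wheel radius is already known and operates directly on edge pixels (as in the edge-detection pairing described earlier); the synthetic "wheel" data are our own illustration:

```python
import math
import numpy as np

def hough_circle_centers(edge_points, radius, shape, n_angles=90):
    """Accumulate votes for circle centers given edge pixels and a known radius."""
    acc = np.zeros(shape, dtype=int)
    angles = np.linspace(0, 2 * math.pi, n_angles, endpoint=False)
    for (y, x) in edge_points:
        # Every center consistent with this edge point lies on a circle of
        # the same radius around it; vote for each discretized candidate.
        for t in angles:
            cy = int(round(y - radius * math.sin(t)))
            cx = int(round(x - radius * math.cos(t)))
            if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
                acc[cy, cx] += 1
    return acc

# Synthetic "wheel": edge pixels on a circle of radius 5 centered at (20, 20).
pts = [(20 + 5 * math.sin(t), 20 + 5 * math.cos(t))
       for t in np.linspace(0, 2 * math.pi, 60, endpoint=False)]
acc = hough_circle_centers(pts, radius=5, shape=(40, 40))
peak = np.unravel_index(acc.argmax(), acc.shape)
print(peak)  # brightest cell should land on or next to the true center (20, 20)
```

In practice the radius is unknown, so the accumulator gains a third dimension over candidate radii, which is why pruning pixels with edge detection matters so much for speed.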

Figure 1

Figure 2: The movement threshold fixes a bad box generation.

Figure 3

Figure 4



BREAST CANCER

by Selena Pasadyn

Introduction

The Impact of Breast Cancer

With a projected 207,090 newly diagnosed cases each year and 39,840 deaths, breast cancer is the second most common cause of cancer death in women. Prognosis for breast cancer depends greatly on tumor stage at the time of primary diagnosis. According to data collected by the National Cancer Institute, women diagnosed with stage I breast cancer have a 92% 5-year survival rate, with stages II, III, and IV having 82%, 47%, and 14% 5-year survival rates, respectively. The importance of screening as a means of early detection and intervention is well documented. The most common recommendations are that all women perform monthly breast self-exams (BSE) and that women over the age of 40 undergo yearly mammograms. Multiple sources suggest that women should be taught breast self-exams from an early age, enabling them to become familiar with their breasts and thus increasing their likelihood of noticing abnormalities in the future. Our study seeks to offer an additional modality for promoting breast health by focusing on teenage women in a school environment.

Application of the Health Belief Model to Breast Cancer Screening

Breast Cancer Education in High School Women

To the best of our knowledge, only two studies have previously explored the topic of breast health in high school females. The first, conducted in three Ohio schools, found that a majority of high school girls had limited knowledge about breast cancer and that most had not previously performed a breast self-exam. The second, conducted in Turkey, found that female high school students had insufficient knowledge about breast self-examination. As far as we are aware, no studies have assessed the impact that teaching high school women about breast health has on themselves, their families, and the community.

Study Aims

The primary aim of this study is to assess knowledge regarding breast cancer, family history of breast cancer, and screening practices in the families of high school women living in a suburban community. The secondary aim is to increase breast cancer awareness in this population through the implementation of an intervention. The tertiary aim is to assess whether increasing awareness in high school students leads to an increase in breast self-exams and mammograms performed by their mothers and grandmothers.

Methods

Study Sample and Design

The subjects targeted by this study were junior and senior female students at a public high school located in a middle-class suburban community. All subjects were Caucasian females between the ages of 16 and 18. The sample was self-selected and consisted of 58 girls.

Study Approval and Instruments

The protocol was reviewed and approved by the school system and the Institutional Review Board (IRB) of Case Western Reserve University. Questions assessing students' breast health knowledge were derived for the purpose of this study and influenced by the "Breast Self-exams by Teenagers" questionnaire.

Data Collection Procedure

All junior and senior female students attending the local high school were given equal opportunity to participate. Any student wishing to participate was required to obtain a parental signature on the consent form. All of the participants who completed the first survey were asked to attend the "Breast Cancer Empowerment Reception," held at the school for approximately 50 minutes, at which a trained speaker informed the girls about breast health. Immediately following this reception, students were administered a second questionnaire identical to the first; its intent was to measure the effectiveness of the intervention. The study design originally planned to administer a third and final questionnaire one month after the second; however, due to low participation in the second survey, data collection was terminated.

Statistical Analysis

Data are summarized using means, standard deviations, and medians for continuous data, and frequencies and percentages for categorical data. Questions regarding general breast cancer knowledge were dichotomized into correct vs. incorrect responses. Bivariate associations between each of the general breast cancer awareness questions and both knowing family breast cancer history and having a positive family history of breast cancer were assessed using the Pearson and Mantel-Haenszel chi-square tests, as were correlations between pre- and post-intervention results. Two-tailed p-values of <0.05 were considered significant. All analyses were done using SAS v9.2.

Results

A self-selected sample of 58 females completed the first survey.
Of these participants, only 30 took part in the full intervention by attending the "Breast Cancer Empowerment Reception," and only 16 completed the second survey. The average age of the participants (N=58) was 16.6 years. Approximately half of the subjects were able to identify breast cancer as the most common type of cancer in women. While most girls (96.6%) reported that all women should do breast exams, only 84.5% knew that they should be done every month. By comparison, only 34% of subjects knew that women should begin yearly mammograms after the age of 40. Sixty-three percent of subjects reported having asked their family about history of breast cancer, and nearly 35% reported having breast cancer in the family. While pre-intervention data showed that only 63% of the girls knew that breast cancer is the most common type of cancer in women, all (100%) of the girls were aware of this fact after the intervention. The question showing the greatest improvement was the age at which women should begin to get yearly mammograms.

Table 1: Basic Population Demographics of Junior and Senior Female Student Participants‡ (N=58)

Mean age at screening, in years: 16.6 (0.7)
Race (% Caucasian): 100%
Median household income‡‡: $56,288
Persons below poverty‡‡ (% of individuals with income below poverty threshold): 4.6%
Foreign born persons‡‡ (% of persons born outside the U.S.): 4.3%
Language other than English spoken at home‡‡ (% of persons): 6.6%

‡Analyses conducted using survey analyses: mean or percentage (%) (standard deviation). ‡‡Results from the 2006 U.S. Census Bureau for the community from which participants were recruited.

Table 2: General Breast Cancer Awareness in High School Junior and Senior Women, Pre-intervention at Baseline‡ (N=58); values are Baseline N (%)

Breast cancer is the number one most common type of cancer in women: 30 (51.7%)
10% of women in the United States will develop breast cancer at some point in their lives: 27 (46.6%)
Not all lumps in the breast are cancer: 54 (93.1%)
About 20% of breast lumps turn out to be cancer: 14 (24.1%)
All women should do breast self-exams: 56 (96.6%)
Breast self-exams should be done monthly: 49 (84.5%)
Women should begin to get yearly mammograms after the age of 40: 20 (34.5%)
I have asked about my family breast cancer history: 37 (63.8%)
I have history of breast cancer in my family: 20 (34.5%)
My mother/female guardian does breast self-exams (BSE): Yes 21 (36.2%), No 22 (37.9%), Do not know 15 (25.9%)
My mother/female guardian gets mammograms: Yes 43 (74.1%), No 1 (1.7%), Do not know 14 (24.1%)
My grandmother does breast self-exams (BSE): Yes 13 (22.4%), No 16 (27.6%), Do not know 29 (50%)
My grandmother gets mammograms: Yes 29 (50%), No 1 (1%), Do not know 28 (48.3%)

‡Analyses conducted using survey analyses: mean or percentage (%) (standard deviation).

Table 3: General Breast Cancer Awareness in High School Junior and Senior Women Who Completed the Intervention, Data at Baseline and Post-Intervention‡ (N=16); values are Baseline N (%) vs. Post-Intervention N (%)

Breast cancer is the number one most common type of cancer in women: 10 (63%) vs. 16 (100%)
10% of women in the United States will develop breast cancer at some point in their lives: 8 (50%) vs. 16 (100%)
Not all lumps in the breast are cancer: 15 (93.8%) vs. 16 (100%)
About 20% of breast lumps turn out to be cancer: 3 (18.8%) vs. 10 (62.5%)
All women should do breast self-exams: 16 (100%) vs. 16 (100%)
Breast self-exams should be done monthly: 13 (81.3%) vs. 16 (100%)
Women should begin to get yearly mammograms after the age of 40: 2 (12.5%) vs. 16 (100%)
I have asked about my family breast cancer history: 11 (68.8%) vs. 13 (81.3%)
I have history of breast cancer in my family: Yes 8 (50%), No 3 (18.8%), Do not know 5 (31.3%) vs. Yes 8 (50%), No 5 (31.3%), Do not know 3 (18.8%)
My mother/female guardian does breast self-exams (BSE): Yes 8 (50%), No 3 (18.8%), Do not know 5 (31.3%) vs. Yes 9 (56.3%), No 5 (31.3%), Do not know 2 (12.5%)
My mother/female guardian gets mammograms: Yes 10 (62.5%), No 1 (6.3%), Do not know 5 (31.3%) vs. Yes 13 (81.3%), No 1 (6.3%), Do not know 2 (12.5%)
My grandmother does breast self-exams (BSE): Yes 7 (43.8%), No 2 (12.5%), Do not know 7 (43.8%) vs. Yes 7 (43.8%), No 1 (6.3%), Do not know 8 (50%)
My grandmother gets mammograms: Yes 8 (50%), No 1 (6.3%), Do not know 7 (43.8%) vs. Yes 7 (43.8%), No 2 (12.5%), Do not know 7 (43.8%)
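The Pearson chi-square test named under Statistical Analysis can be sketched by hand for a 2x2 contingency table. This is a hedged illustration only: the study itself ran its analyses in SAS v9.2, and the counts below are invented for the example, not study data.

```python
# Minimal sketch of a Pearson chi-square test for a 2x2 table,
# mirroring the bivariate analysis described in Methods.
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: rows = asked about family history (yes/no),
# columns = positive family history (yes/no).
statistic = chi_square_2x2(18, 19, 2, 19)
```

A statistic above 3.84 (the 0.05 critical value of the chi-square distribution with one degree of freedom) would be significant under the study's two-tailed <0.05 threshold.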

Discussion

Additional Strengths

As expected, the data confirm that there is a significant association between asking about family history and having a family history of breast cancer (P=0.013), which suggests that there is some validity to the data. Despite the small sample size, the participants' knowledge regarding breast cancer and breast health clearly increased after the intervention. This indicates that an intervention in the form of posters, pamphlets, announcements, and a speaker can be truly beneficial in increasing breast cancer knowledge. Another strength of the study is the benefit that the participating subjects gained. These benefits include increased


awareness, learning how to perform breast self-exams, learning about the importance of getting yearly mammograms after the age of 40, the benefits of early detection, treatment options, and resources available to women for screening and treatment in the community. In spite of the small sample size (N=16), we found an increase in the amount of knowledge that the students had about family history and their mothers' screening. This suggests that participants may have gone home and inquired about these topics, thus raising awareness in their households. In addition, it is noteworthy that this study was well received by the school staff and administration. The project also received community support, including funding from a local women's organization.

Limitations

Efforts were made to communicate to study participants the confidentiality of the questionnaire responses, in compliance with IRB protocol, to facilitate higher participation and encourage honest responses. However, due to the sensitive nature of breast cancer, there is some possibility that not all responses are accurate. The population was self-selected and may not be representative of the junior and senior female population at the school. In addition, the results obtained from this study population cannot be extrapolated to other high school settings, though they may be similar to those from other suburban, primarily middle-class, Caucasian, Midwestern female high school student populations. The fact that the survey was collected electronically may have biased the sample of participating students toward those with more advanced technological skills. The second sample may also have additional biases: in our initial sample, 35% of subjects reported having a family history of breast cancer, significantly lower than the 50% of subjects reporting a history of breast cancer in the second survey. This suggests that subjects with a higher "perceived susceptibility" to breast cancer may also have been more likely to be interested in learning more about the disease. A significant limitation of our study was the low number of participants, especially for the second questionnaire (N=16). However, it should be noted that this is preliminary work in a largely unexplored area and is necessary for further development.

Final Thoughts

Although it cannot be concluded with certainty due to the small sample size of the second survey, we believe that intervening through breast cancer education in female junior and senior students may provide secondary benefits to participants' families and perhaps even to the community as a whole. Health classes across the United States should emphasize the importance of monthly BSEs and of mammograms after the age of 40. Topics such as drugs, alcohol, and sex education are covered; however, breast health is often not discussed. To lower the number of diagnoses, we must begin with the education of our youth.

Works Cited

American Cancer Society. "Cancer Facts and Figures 2010." (2011). Web. 29 Jul. 2011. <http://www.cancer.org/research/cancerfactsfigures/cancerfactsfigures/cancerfactsand-figures-2010>.
Anderson, B.O. "Early Detection of Breast Cancer in Countries With Limited Resources." Breast Journal 2.9 (2003): S51-S59. Print.
Apanktu, L.M. "Breast cancer diagnosis and screening [Electronic version]." American Family Physician (2000): 596-602. Web. 29 Jul. 2011.
Harrison's Online. "Breast Cancer: Introduction." AccessMed 86 (2010): 1-21. Web. 29 Jul. 2010.
Hochbaum, G.M. "Public participation in medical screening programs: A sociopsychological study." PHS Publication 572 (1958). Web. 29 Jul. 2011.
Karayurt, O., et al. "Awareness of breast cancer risk factors and practice of breast self examination among high school students in Turkey." PubMed. 2003. <http://www.ncbi.nlm.nih.gov/pubmed/18928520>.
Ludwick, R., and T. Gaczkowski. "Breast self-exams by teenagers." Cancer Nursing (2001): 315-319. Web. 27 Jul. 2010.
Montazeri, A., et al. "Breast self-examination: Do religious beliefs matter? A descriptive study." Journal of Public Health Medicine (2003): 154-155.
SAS v9.2. SAS Institute, Inc., Cary, NC.
Susan G. Komen for the Cure. "Breast self-exam." (2008). <http://cms.komen.org/komen/AboutBreastCancer/EarlyDetectionScreening/EDS3-3-3?ssSourceNodeId=292&ssSourceSiteId=Komen>.
U.S. Census Bureau. "State and county quickfacts." Web. 28 Jan. 2011. <http://quickfacts.census.gov/qfd/states/39/3909680.html>.


STAFF POSITIONS President Rebecca Su (Torrey Pines) Chapter President Angela Wang (Westview), Kenneth Xu (Scripps Ranch) Vice President David Koh (Scripps Ranch), George Bushnell (Scripps Ranch), Kevin Li (Westview), Melodyanne Cheng (Torrey Pines), Myung-hee (Rachael) Lee (Torrey Pines), Parul Pubbi (Torrey Pines), Sarah (Hye-In) Lee (Torrey Pines), Sharon Peng (Torrey Pines), Yuri Bae (Torrey Pines) Staff Advisor Mr. Brinn Belyea Treasurer Avinash Chaudhary (Torrey Pines), Eden Romm (Torrey Pines), Jim Liu (Scripps Ranch), Keming Kao (Westview) Secretary Alwin Hui (Scripps Ranch), Claire Chen (Torrey Pines), Karina Lin (Westview), Maarya Abbasi (Torrey Pines), Selena Chen (Torrey Pines) Scientist Review Board Student Coordinator Sumana Mahata Scientist Review Board Amiya Sinha-Hikim, Andrew Corman, Brooks Park, Bruno Tota, Craig Williams, Dave Ash, Dave Main, David Emmerson, Dhananjay Pal, Gautam Narayan Sarkar, Hari Khatuya, Indrani Sinha-Hikim, Janet Davis, Julia Van Cleave, Karen B. Helle, Kathryn Freeman, Katie Stapko, Lisa Ann Byrnes, Maple Fang, Mark Brubaker, Michael Santos, Reiner FischerColbrie, Ricardo Borges, Rudolph Kirchmair, Sagartirtha Sarkar, Sally Nguyen, Samantha Greenstein, Saswati Hazra, Sunder Mudaliar, Sushil K. 
Mahata, Tania Kim, Tanya Das, Tapas Nag, Tita Martin, Tracy McCabe, Trish Hovey Staff Authors Adrianna Borys, Alwin Hui, Angela Wang, Angela Zou, Anita Dev, Anvesh Macheria, Apoorva Mylavarapu, Austin Su, Avinash Chaudhary, Ayesha Kapil, Bethel Hagos, Bhavani Bindiganavile, Brandon Huang, Brent Parker, Brian Choi, Caleb Huang, Carolyn Lee, Cassie Sun, Choohyun (Kristine) Paik, Christine Li, Daniel Boemer, Daniel Guan, Daniel Liu, Daniel Shi, David Chang, David Koh, Divya Kothandapani, Eden Romm, Emma Dyson, Eric Lu, Ethan Song, Eva Lilienfeld, Fabian Boemer, Florine Pascal, Frank Pan, George Bushnell, Hana Vogel, Hannah Tang, Harrison Qi, Harshita Nadimpalli, Hyeimin (Lucy) Ahn, Jim Liu, Jimmy Huang, Joanna Zhang, Jourdan Johnson, Justin Song, Karina Lin, Keming Kao, Kenneth Xu, Kevin Li, Kiernan Panish, Kira Watkins, Kyle Jablon, Lucy An, Maarya Abbasi, Margaret Guo, Maria Ginzburg, Mariam Kimeridze, Marina Youngblood, Mary Ho, Medeeha Khan, Melodyanne Cheng, Merle Jeromin, Michael Do, Michael Zhang, Michelle Oberman, Mimi Yao, Mitali Chansarkar, Myung-hee (Rachael) Lee, Nandita Nayyar, Nathan Manohar, Nick Wu, Nikita Morozov, Parul Pubbi, Peter Khaw, Rebecca Kuan, Rebecca Su, Rekha Narasimhan, Ruochen Huang, Sampreeti Chowdhuri, Sarah (Hye-In) Lee, Sarah Bhattacharjee, Sarah Hsu, Sarah Kwan, Sarah Watanaskul, Selena Chen, Serin

You, Shannon Lee, Shaoxiong Liu, Sharon Liou, Sharon Peng, Snow Zhu, Stephanie Kang, Steven Shao, Sumana Mahata, Summer Bias, Tenaya Kothari, Tiffany Sin, Tyler Simowitz, Wenhao Liao, William Huang, Willie Wu, Yuri Bae Founder/President Emeritus Alice Fang (Stanford) Editor in Chief Emeritus Ling Jing (Dartmouth) Editor in Chief Angela Zou (Torrey Pines) Managing Editor Carolyn Lee (Westview), Fabian Boemer (Scripps Ranch) Assistant Editor in Chief Apoorva Mylavarapu (Torrey Pines) Senior Editor Bethel Hagos (Torrey Pines), Christine Li (Westview), Jimmy Huang (Westview), Margaret Guo (Torrey Pines), Michelle Oberman (Torrey Pines), Ruochen Huang (Torrey Pines), Sarah Bhattacharjee (Torrey Pines), Sarah Hsu (Torrey Pines), Snow Zhu (Westview) Physics Editor Ethan Song (Torrey Pines), Harrison Qi (Westview), Nathan Manohar (Torrey Pines), Rekha Narasimhan (Torrey Pines) Chemistry Editor Daniel Guan (Westview), Florine Pascal (Torrey Pines), Hyeimin (Lucy) Ahn (Torrey Pines), Nandita Nayyar (Torrey Pines), Rebecca Kuan (Torrey Pines) Biology Editor Amber Seong (Torrey Pines), Anita Dev (Westview), Brandon Huang (Westview), Eva Lilienfield (Torrey Pines), Sarah Watanaskul (Torrey Pines), Serin You (Torrey Pines) Editor Alwin Hui (Scripps Ranch), Anvesh Macheria (Scripps Ranch), Austin Su (Scripps Ranch), Caleb Huang (Scripps Ranch), David Boemer (Scripps Ranch), David Koh (Scripps Ranch), George Bushnell (Scripps Ranch), Hannah Tang (Westview), Jim Liu (Scripps Ranch), Keming Kao (Westview), Kenneth Xu (Scripps Ranch), Michael Do (Scripps Ranch), Michael Zhang (Westview), Nick Wu (Westview), Sharon Liou (Westview), Wenhao Liao (Scripps Ranch), William Huang (Scripps Ranch) Design Editor Daniel Liu (Torrey Pines), Heather Chang (Torrey Pines) Graphic Editor Wenyi (Wendy) Zhang (Torrey Pines) Assistant Graphic Editor Crystal Li (Torrey Pines) Graphic Designers Amber Seong, Amy Ng, Angela Wu, Apoorva Mylavarapu, Cassie Sun, Catherine Li, Choohyun (Kristine) Paik, Claire
Chen, Crystal Li, Divya Kothandapani, Hanna Lee, Hyeimin (Lucy) Ahn, Jennifer Kim, Kirsten Lee, Lucy An, Mandy Wang, Mary Ho, Megan Chang, Michelle Oberman, Rama Gosula, Sarah Bhattacharjee, Sarah Gustafson, Sarah Kwan, Selena Chen, Serin You, Sikyung (Stacy) Lee, Summer Bias, Wendy (Wenyi) Zhang Comics Editor Choohyun (Kristine) Paik Assistant Comics Editor Lucy An Web Design Alice Fang (Stanford), Tiffany Sin (Torrey Pines), Tushar Pankaj (Westview) Blog Editor Marina Youngblood Assistant Blog Editor Shannon Lee


JOURNYS 2011.2012

art by ben pu

JOURNYS Issue 4.1  