Matter Magazine Spring 2013


Singularity Explained | Briefing: Exonian Research | Math, Magic & Money

The Science Journal of Phillips Exeter Academy

DEEP FREEZE: Cryonics & the Future of Death



Matter, Exeter’s science magazine, was founded in 2009 with a simple mission: to inform the PEA community about the latest in scientific research. This year, we’ve made some radical changes to our organizational structure. At its heart, Matter is now a club dedicated to making science accessible, relevant, and exciting for the Exeter community. We just happen to publish a magazine. By bringing together budding science writers to survey the latest in research, startups, and summer internships, we hope to create a culture at Exeter where science is as cool as lacrosse.

To streamline our publication process, we’ve established editorial, design, and business boards in addition to our traditional co-head system. Together, these three teams generate content, lay out pages, and fund printing costs. Once upon a time, Matter was a themed magazine; each issue used to be centered on a specific scientific topic (nanotechnology, crystals, polymers, etc.). We feel that this format, while staying true to our founders’ goal of analyzing science through an interdisciplinary lens, is no longer effective at delivering science accessibly to the general reader. So, we’ve revamped the magazine to reflect your interests as a reader. What’s happening now in scientific research and the start-up scene? What’s on the horizon?

Having overcome several hurdles in getting funds, designers, and writers for our fledgling venture, Matter is poised to become an extracurricular force on campus. We plan on publishing and distributing for free; issues will be available at mattermag.org. Without further ado, we present the fourth issue of Matter magazine.

-Benj and Sid

PUBLISHER Sid Reddy ‘13
CHIEF EDITOR Benj Cohen ‘15
CHIEF GRAPHIC DESIGNERS Ashley Keem ‘14 & David Liu ‘15
WHAT’S NEW EDITOR Paul Lei ‘15
WHAT’S HAPPENING EDITORS Erick Friis ‘15 & Morgan Burrell ‘15
WHAT’S NEXT EDITORS Christina Savvides ‘15 & Stephanie Chen ‘15
WHAT’S FEATURED EDITORS Hansen Shi ‘14 & Sam Helms ‘14
EXECUTIVE MANAGER Katie Liptak ‘15
BUSINESS MANAGERS Paul Lei ‘15 & Saaketh Krosuri ‘14
PHOTO EDITOR Stefan Kohli ‘14

Special Thanks

Donna Shrader


WHAT’S HAPPENING
4 The Aftermath of Hurricane Sandy | Margaret Cohen ‘15
4 Set in Stone | Charles Minicucci ‘14
5 The Intern Experience | Christina Savvides ‘15

WHAT’S NEW
6 A Green Apple | Erick Friis ‘15
6 Stem Cells in the Spotlight | Christina Savvides ‘15
7 Mind-Machine Interfacing in Quadriplegics | Christina Lee ‘14
8 American Offshore Wind Energy Projects | Erick Friis ‘15
8 A Pacemaker for the Brain | Christina Lee ‘14

WHAT’S FEATURED
9 The Singularity | Morgan Burrell ‘15
12 Exonians on the Leading Edge: Briefing on Student Research | Stephanie Chen ‘15
14 Deep Freeze: Cryonics and the Future of Death | Hansen Shi ‘14
16 The Wizardry of Wall Street | Sid Reddy ‘13
18 Turning Predictive Modeling into a Sport | Sid Reddy ‘13

WHAT’S NEXT
19 Islets, Injections, and Diabetes | Paul Lei ‘15



WHAT’S HAPPENING

SET IN STONE

oldest fossil discovery links dinosaurs to closest relatives

by Charles Minicucci ‘14

A humerus, along with some broken vertebrae found in an area called the Manda Beds of Tanzania, has been dubbed Nyasasaurus and given the spectacular title of oldest dinosaur fossil ever discovered. Nyasasaurus himself would have been nearly as big as a sports car—two to three meters long and less than a meter tall at the biggest, according to a study published in Biology Letters on December 4. The fossil is fairly old—discovered in the early 1930s—but Sterling Nesbitt of the University of Washington has recently taken another look at it. He dates the specimen at 243 million years old, ten to fifteen million years before paleontologists previously thought the first dinosaurs evolved. However, both the structure and haphazard growth of the bone suggest that it belonged to an early dinosaur, or a close relative. This bone could be the missing link between the silesaurids, the dinosaurs’ closest relatives, and the dinos themselves. What makes this recent finding so interesting, says Nesbitt, is the fact that it pushes back the dinosaur record closer to a mass extinction that occurred 252 million years ago. “It appears the dinosaurs arose in the shadow of the greatest extinction of all time.” n

THE AFTERMATH OF HURRICANE SANDY

BY MARGARET COHEN ‘15

With speeds as high as 89 mph recorded during the storm, Hurricane Sandy wreaked havoc throughout several states. Before the storm hit Manhattan, the city had already halted bus and train service, closed schools, and ordered about 400,000 people out of their homes. Due to excessive flooding, New York and New Jersey’s major airports were shut down, causing issues for fliers heading to the Northeast. Water seeped into subway stations in Lower Manhattan and into the tunnel connecting Lower Manhattan and Brooklyn. Hurricane Sandy was responsible for 2.8 million power outages throughout the Northeast. The storm surges were boosted by a full moon, which already brings the highest tides of the month. Forecasters had said the storm was likely to collide with a cold front and spawn a super-storm that would generate flash floods and snowstorms. In the U.S. alone, the hurricane took 113 lives. Sandy left damage that will take years and millions of dollars to repair.

Many are beginning to wonder if climate change was the major cause of the hurricane. The storm wandered north along the U.S. coast, where ocean water was still warm during the early parts of autumn, which pumped energy into the swirling system. It grew even larger when a cold jet stream made a sharp dip southward from Canada down into the eastern U.S. The cold air, positioned against warm Atlantic air, added energy to the atmosphere and expanded the size of Sandy. Here’s where climate change comes in. The atmospheric pattern that sent the jet stream south was anchored by a large pressure center over the far northern Atlantic Ocean and southern Arctic Ocean, and what set up that pressure center was a climate phenomenon called the North Atlantic Oscillation (NAO): a fluctuation in the difference in atmospheric pressure at sea level between the Icelandic Low and the Azores High.

Recent research by climate scientists has shown that as more Arctic sea ice melts in the summer—because of global warming—the NAO is more likely to be negative during the autumn and winter. A negative NAO makes the jet stream more likely to move in a big, wavy pattern across the U.S., Canada, and the Atlantic, causing the kind of big southward dip that made Sandy so powerful. As ocean temperatures rise, hurricanes are more likely to form along the Northeast coast of the US, leading scientists to believe that even more powerful hurricanes are headed our way. n


THE INTERN EXPERIENCE why spending summers at a medical research lab proved to be a revealing and rewarding experience.

by Christina Savvides ‘15

Up until eighth grade, I spent my summers away at summer camp, traveling with my family, or asleep. So, when the summer before ninth grade rolled around, I decided it was time for something new. Because of my interest in biology, specifically in molecular genetics, I decided to apply for a summer internship in a lab. The age cutoff for working in a research lab is usually sixteen years old; it depends on what biosafety hazard level the facility is classified under, and, of course, on your passion for scientific exploration.

THE AGE CUTOFF FOR WORKING IN A RESEARCH LAB IS USUALLY SIXTEEN YEARS OLD; IT DEPENDS ON WHAT BIOSAFETY HAZARD LEVEL THE FACILITY IS CLASSIFIED UNDER, AND OF COURSE, ON YOUR PASSION FOR SCIENTIFIC EXPLORATION.

Luckily, I found a position at Case Western Reserve University in the Department of Genetics. There was a postdoctoral researcher in my lab, Dr. Jason Heaney, who was involved in two projects during the time that I was with him. The project I worked on under his purview involved using a mouse model to examine the role of inflammation in colon cancer. Each year, 143,460 people are diagnosed with colon cancer and approximately 51,690 die from the disease, making it the third most common cancer. Colon cancer is generally caused when common genetic signaling pathways such as the Wnt, β-catenin, K-ras, p53, transforming growth factor (TGF)-β, or the DNA mismatch repair (MMR) proteins are altered and cause abnormal growths in the intestines. These growths are called polyps, and a large number of them can cause life-threatening cancer. When the body’s immune system detects polyps, it signals an influx of different types of white blood cells into the affected tissue regions. These cells engulf and mark cancerous cells for destruction. An issue with the immune response is that it can sometimes cause excessive cell proliferation and inflammation, meaning it simply accelerates the multiplication of cancerous cells and makes the polyps bigger (worse). My lab’s research focus was on the development of drugs that block the aspects of the immune system’s response that promote uncontrolled growth. One such drug is currently in the process of being patented, and in the near future will undergo the tedious process of being approved by the FDA.

I returned to the lab again last summer, this time researching risk factors of colon cancer under Dr. Stephenie Doerner. Americans have a much higher risk of developing colon cancer in comparison to people of other nationalities because of our diet and lifestyle. Dr. Doerner’s research is focused on preventative medicine that high-risk individuals could use to avoid developing colon cancer. Using mouse models, we have been successful in dramatically reducing the incidence of colon cancer. In fact, if you know someone who is at high risk for colon cancer, it is likely that some of the medical advice they regularly receive from their physicians derives directly from our research.

Based on my positive experience interning at a medical research lab, I would highly recommend it to anyone interested in the academic aspect of biology. The best place to look for an internship is at a local medical research institution or university. If there aren’t any opportunities for high school students available in your area, there are competitive summer programs across the country that provide lodging and financial aid to interested students. Summer internships are a rewarding way to spend a vacation and enrich your high school learning experience. n



WHAT’S NEW

A GREEN APPLE Erick Friis ‘15



In recent months, Apple has made a push to make their business model more environmentally friendly. Their new data center in Maiden, North Carolina exemplifies this effort. Apple designed the data center to be the greenest in the world, with onsite renewable energy production and an efficient chilled-water cooling system to run and cool the servers. The center utilizes two record-breaking systems to cover 60% of its 20-megawatt electricity bill. The onsite solar array is the largest of its kind (user-owned) in the world, and they are currently constructing a power plant that runs on biofuels, which will hold a similar record. Power plants like theirs burn anything created by biological organisms; this particular plant will run on biogas, which is created when microscopic organisms break down compost and other organic waste. The remaining 40% of the data center’s electricity bill is acquired through a local nonprofit called NC GreenPower, which provides electricity generated via renewable energy technologies. Altogether, this data center has run on 100% renewable energy since the end of 2012.

APPLE CEO TIM COOK HAS PROMISED TO BRING SOME MAC PRODUCTION BACK TO THE US. THIS WOULD OFFSET THE DETRIMENTAL EFFECTS TO SUPPLY CHAIN MANAGEMENT, DOMESTIC JOB GROWTH, AND GREENHOUSE GAS EMISSIONS ASSOCIATED WITH MANUFACTURING ABROAD...

On the consumer-electronics end of Apple’s operations, Apple CEO Tim Cook has promised to bring some Mac production back to the US in 2013. This would offset the detrimental effects to supply chain management, domestic job growth, and greenhouse gas emissions associated with manufacturing abroad, particularly in China. Part of many companies’ choice to manufacture abroad is that environmental regulations are often more relaxed outside the US, allowing them to avoid fines and taxes for polluting practices. Manufacturing in the US would also decrease Apple’s carbon footprint by saving the fuel normally burned during transportation of assembled products to the US, where the majority of Apple consumers reside. n


STEM CELLS IN THE SPOTLIGHT
Christina Savvides ‘15

WHAT IF THERE WAS A WAY TO UN-INVASIVELY FORCE A CELL TO REVERT TO ITS IMMATURE STATE? IN 2006, DR. YAMANAKA ANSWERED THIS QUESTION BY REPROGRAMMING A DIFFERENTIATED CELL’S GENETIC CODE AND “INDUCING” IT TO BECOME A STEM CELL.

Dr. Shinya Yamanaka and scientist Sir John Gurdon shared the Nobel Prize this December for their decades-long, groundbreaking work in stem cell research. In the early 60s, Sir Gurdon showed that differentiated cells—cells that have already specialized for certain biological functions—can be forced to revert to their undifferentiated, immature state as stem cells. His landmark experiment demonstrated that the nucleus of a frog’s mature intestinal cell could be transplanted into an immature frog egg cell to produce a perfectly normal tadpole. Thus, the differentiated intestinal cell must have contained all the genetic information necessary for a single egg cell to differentiate into the myriad types of cells present in a tadpole. The finding increased hope that the same stem cells currently being used to treat various cancers may eventually be generated en masse from differentiated cells. The caveat to Gurdon’s success, however, was that in removing the nucleus from the intestinal cell, he killed the cell. What if there was a way to un-invasively force a cell to revert to its immature state? In 2006, Dr. Yamanaka answered this question by reprogramming a differentiated cell’s genetic code and “inducing” it to become a stem cell. His method, which produces “induced pluripotent stem cells,” or iPSCs, by introducing a specific set of genes into a differentiated cell, can be carried out on connective tissue cells. This is a tremendous accomplishment both because it “revolutionizes our understanding of how cells and organisms develop,” as a Nobel Committee press release puts it, and because it allows future stem cell-based treatments to avoid the current controversy surrounding embryonic stem cell extraction. Stem cells have historically come from embryonic tissue, which involves interfering with clusters of precursor cells that become a human fetus. Induced pluripotent cells can be generated from connective tissue, which is abundant and easy to sample from the human body. As such, iPSCs are a major breakthrough for stem cell therapeutics. Effective therapies for neurodegenerative diseases, certain cancers, and limb regeneration are now closer than ever to reality. n


MIND-MACHINE INTERFACING IN QUADRIPLEGICS Christina Lee ‘14

For the first time in nine years, 53-year-old Jan Scheuermann is able to move an arm, twist a wrist, and feed herself chocolate by controlling a robot arm with her mind. Researchers at the University of Pittsburgh School of Medicine and the University of Pittsburgh Medical Center have made this a possibility for Scheuermann by setting two quarter-inch square electrode grids with 96 contact points on regions of the brain that control right arm and hand movement. Diagnosed years ago with spinocerebellar degeneration, a condition in which connections between the brain and muscles weaken for no apparent reason, Scheuermann is now able to perform a wide range of arm motions necessary in her day-to-day life. Brain-computer interface (BCI) technology is largely responsible for opening up Scheuermann’s world. When Scheuermann imagines moving her arm or hand, neurons in her motor centers fire electrical signals that the 96 contact points in the electrode grids detect. These signals are then analyzed by a machine learning algorithm that runs on the BCI. The algorithm interprets the signals—Scheuermann’s intended movement—as a series of mechanical actions on a robotic arm. In short, the BCI turns her dreams into reality. This new development brings hope for the future of quadriplegics like Scheuermann.

This is the ride of my life. This is the rollercoaster. This is skydiving.

“This is the ride of my life,” Scheuermann said. “This is the rollercoaster. This is skydiving. It’s just fabulous, and I’m enjoying every second of it.” n
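To make the decoding step a little more concrete, here is a minimal sketch of a linear neural decoder, the simplest version of the idea described above: learn a mapping from recorded firing rates to intended hand velocity, then apply it to new activity. This is an illustration only, built on synthetic data, and is not the Pittsburgh team’s actual algorithm.

```python
# Minimal sketch of a linear decoder: firing rates -> intended hand velocity.
# Synthetic data only; not the algorithm used in the study described above.
import numpy as np

rng = np.random.default_rng(0)

# Pretend we recorded 96 channels of firing rates over 2,000 time bins while
# the intended 2-D hand velocity was known (e.g., during a calibration task).
n_bins, n_channels = 2000, 96
true_weights = rng.normal(size=(n_channels, 2))
rates = rng.poisson(lam=5.0, size=(n_bins, n_channels)).astype(float)
velocity = rates @ true_weights + rng.normal(scale=2.0, size=(n_bins, 2))

# Fit the decoder with ridge-regularized least squares on centered data.
lam = 1.0
X = rates - rates.mean(axis=0)
Y = velocity - velocity.mean(axis=0)
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

# At run time, each new vector of firing rates maps to a velocity command
# that could drive a robotic arm.
new_rates = rng.poisson(lam=5.0, size=n_channels).astype(float)
command = (new_rates - rates.mean(axis=0)) @ W
print("decoded velocity command:", command)
```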


WHAT’S NEW

A PACEMAKER FOR THE BRAIN Christina Lee ‘14

A potential treatment for Alzheimer’s looms on the horizon as clinical trials of brain pacemaker-like devices were carried out for the first time in the United States. The device was surgically implanted in the brains of Alzheimer’s patients by researchers at Johns Hopkins University in late November and December of 2012. These devices aim to enhance memory and reverse cognitive decline by sending low-voltage electrical stimulation directly to the hippocampus, the part of the brain responsible for memory and learning. Alzheimer’s disease is characterized by memory loss, forgetfulness, and overall cognitive deterioration that deprive patients of the ability to take care of themselves. While the causes of Alzheimer’s are not fully understood, research so far has shown the disease is linked with the accumulation of small peptides (molecules made of multiple amino acids) around the synapses of neurons, or the spaces between brain cells where information from one neuron is passed on to another. These clumps of peptides—called neuritic plaques—cause difficulty in neuron communication in the hippocampus, ultimately leading to poor ability to retain or form memories. At the moment, there is no cure for Alzheimer’s, only treatment that alleviates symptoms of the neurodegenerative disease. The past few years have been focused on research for drug treatments that inhibit neuritic plaque formation, but no significant progress has been made. This dearth of meaningful results has pushed Alzheimer’s researchers to seek alternative avenues of treatment—in this case, direct stimulation of the brain. The new implant produces electrical impulses that occur 130 times per second and travel through wires connected to the ‘pacemaker’. Holes are drilled in the skull to embed wires into the fornix of the brain—a bundle of neural fibers in the brain responsible for

relaying information to the hippocampus. Impulses going through the wires facilitate the transfer of information between neurons in the hippocampus, which would otherwise be blocked by the neuritic plaque in between them. About forty Alzheimer’s patients are expected to undergo this brain implant surgery in the next year. These advancements in the Alzheimer’s treatment are being made in the face of the increasing impact Alzheimer’s is estimated to have in the next fifty years. Today Alzheimer’s affects 5.3 million Americans, and it is predicted that by 2050, 14 million Americans will be afflicted by the condition. Finding a cure, or at least an effective treatment, is critical to preventing this tragedy of societal degeneration. n

AMERICAN OFFSHORE WIND ENERGY PROJECTS Erick Friis ‘15

In the midst of a limping worldwide movement towards energy sustainability, the Obama Administration has worked hard to grow an American presence in the global renewable energy market and make the US a leader in adopting innovative approaches to energy production. In December, U.S. Energy Secretary Steven Chu announced a new effort to promote offshore wind projects. Under his plan, states like Maine, New Jersey, Ohio, Oregon, Texas, and Virginia will receive investments from the Department of Energy to launch projects that should be commercially active by 2017. Not only will a growing renewable energy sector prevent continued destruction of the blue planet, but it will also support up to 200,000 domestic manufacturing, construction, operation, and supply chain jobs, according to the Department of Energy. In the past, wind turbine construction has been promoted by clean energy tax credits like the Production Tax Credit. With the renewal of the tax credit and other new government investments, Secretary Chu hopes to follow the goals of Obama’s National Offshore Wind Strategy to “help the nation reduce its greenhouse

gas emissions, diversify its energy supply, provide cost-competitive electricity to key coastal regions, and stimulate revitalization of key sectors of the economy by investing in infrastructure and creating skilled jobs.” n


THE SINGULARITY

As human beings, our success in survival can be attributed to our intelligence— an intelligence far superior to any other on earth. But not for long. Morgan Burrell ‘15 examines the impact of computers in our future.

Within the century, computers will surpass humans in their ability to recognize patterns and solve problems, dramatically changing our lives forever. They will become an even more essential part of society, and become involved in every aspect of our existence. Computers are already everywhere in our lives—they allow us to communicate with one another from opposite ends of the globe, keep track of the majority of the world’s money, and were used to measure and cut the pages you are holding in your hands. We have programmed them to be ‘intelligent’—to recommend music based on the user’s preferences, to detect fraud in online payment systems, even to interpret spoken languages. Without a doubt, the computer has come a long way from weighing in at several tons and only being able to add small numbers. One cannot help but be impressed by the rate of progress in computer technology. In fact, the rate at which computers have grown, in terms of raw computational power and problem solving ability, is almost alarming. History shows that roughly every two years, the number of transistors in a microchip doubles. What this means is that computers are getting faster at an exponential rate. The tasks they are capable of executing and the problems they are able to solve are becoming increasingly complex. The prospect of a person asking his or her phone a question and receiving a spoken, humanlike, and occasionally snarky answer was completely unfathomable just ten years ago. Yet here we are. One cannot help but wonder: will computers ever become more potent than their human creators?
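As a toy illustration of the doubling described above—and nothing more rigorous than that—a few lines of Python show how quickly a two-year doubling time compounds. The 1971 starting point, roughly 2,300 transistors for an early microprocessor, is the only outside number assumed here.

```python
# Toy illustration of a two-year doubling time.
# Starting point: an early-1970s microprocessor with roughly 2,300 transistors.
base_year, base_count = 1971, 2_300

for year in range(1971, 2032, 10):
    count = base_count * 2 ** ((year - base_year) / 2)
    print(f"{year}: ~{count:,.0f} transistors")
```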


But when this breaking point is reached, and we as humans develop software for computers that rival the cognitive ability of our own brains, computers will attain such a degree of intelligence that our lives will be changed forever.

According to Ray Kurzweil, an MIT graduate and director of engineering at Google, yes—they will. Kurzweil is a futurist: he makes predictions, and he believes that not only is it possible that computers will become more intelligent than humans; it is inevitable. Based on his findings, computers have been catching up to humans for more than a century, and will not stop any time soon. And when they finally do equal humans in their level of intelligence, a “point of no return” will be reached. At this point, computer intelligence will become self-correcting—true “Artificial Intelligence.” The rate at which computers improve their problem solving ability and capacity for pattern recognition will also become even faster, because they will take on the responsibility of their own creation and design, rather than leaving the job to their less intelligent and less efficient human creators. Our current AIs are already something to brag about—they range from being able to predict changes in the stock market to diagnosing human illnesses. But when this breaking point is reached, and we as humans develop software for computers that rival the cognitive ability of our own brains, computers will attain such a degree of intelligence that our lives will be changed forever. This event is known as the “Singularity,” and it is a hot topic of discussion throughout the scientific community. The word ‘singularity,’ a term used

in astrophysics, means a point at which ordinary physics no longer applies. In Kurzweil’s theory, however, it means a point at which humans will become something so different from today’s people that they will not even be considered, by our standards, to be human, due to the emergence of a “superintelligence” superior to our own. The specific effects that the singularity will have on mankind are still unknown. It is possible that the flourish of computer intelligence could lead to anything from a world similar to that of the Jetsons to humans and computers literally living as one, enhancing one another’s abilities. What is so exciting about the singularity is that its impact would not be limited to the field of computer science. Computers have become integral parts of the scientific and commercial worlds. It is possible that the increase in raw computational power and improvements in machine learning algorithms that occur at and after the singularity could lead to extraordinary advancements in business analytics, cancer genomics research, augmented reality software, and other technologies that affect the lives of billions. But perhaps such an inflection point in computer intelligence is not attainable. Many scientists argue that our current exponential rate of technological growth is coming to an end. They also say we will not fully understand the complexities of the human brain any time soon; and since building an AI comparable


to a human brain in pattern recognition capability first requires a thorough understanding and characterization of the human brain, it seems like the singularity may be a ways off. After all, advancement in computer technology is not only dependent on the improvement of electrical hardware, but also on the sophistication of the software we write and execute on our machines. What good is a computer that can add sixty digit numbers in milliseconds, but can do nothing else? Humans must tell computers how to solve problems first—computers don’t just come up with programs on their own. In order for us to create an intelligence that rivals that of the human brain, we must first construct a complete model of the neocortex— the region of the human brain that handles sensory perception, language, and conscious thought—so that we can implement one on a computer. Kurzweil has made progress toward such an understanding of neocortical cognition, and recently published a book titled How to Create a Mind: The Secret of Human

Thought Revealed that details a series of thought experiments and logical deductions that allow him to reverse-engineer (only partially) the brain’s structure as a hierarchical pattern recognizer. However, the model he builds is far from complete. And it is this lack of knowledge that holds us back from the singularity, claims Paul Allen, another giant of the tech industry. Allen, cofounder of Microsoft, says that as we delve deeper into the human brain, the brain itself is revealed to be even more complex. This idea of knowing more leading to realizing how little you actually know is called the complexity brake. As we add to our understanding of the human brain, we realize there is much more to be learned and that perhaps our previous assumptions about its function were wrong. To reverse-engineer the human brain, as Kurzweil suggests, we would first need to completely understand it—a goal that, while possible due to the brain’s finiteness, is still far from fruition due to the multiple complexity brakes we have

encountered (and have yet to encounter) in our study of it. Finally, Allen says, even if we could reconstruct the human brain entirely out of circuits, we still wouldn’t be able to put it to any use. Simply knowing the anatomy of a bird does not mean we can make a computer simulation of flight—it is necessary to see the bird in action, to understand how and why it can fly. It is believed that this incredibly deep and philosophical understanding is too complicated to be attained in the upcoming decades. Maybe with better analytical tools and insightful models we will be able to understand how the human brain “thinks.” We may possess these tools in a very short amount of time, if Kurzweil’s predictions hold true. Maybe his next book holds the secret to building an artificial neocortex. Regardless, the data he has collected is undeniable—computers have been seeing an exponential rate of pattern recognition and processing capacity, which does lead many to believe that an intelligence explosion is possible (even inevitable) in upcoming years. So, for you believers, here’s the real question: how soon will we reach this point? Will our generation witness the critical point when computers irreversibly change our lives? The sky’s the limit when it comes to improvements in computer technology, but the horizon for Kurzweil’s superintelligence is hazy. Only time will tell what we as humans are capable of creating. n

Morgan Burrell is an editor and writer for Matter Magazine. His interest in science was sparked his freshman year when he learned to program. His favorite fields are physics and computer science. “Science is in everything,” Burrell says. “And when you start to understand how it all works, the world becomes absolutely fascinating.”



EXONIANS ON THE LEADING-EDGE: BRIEFING ON STUDENT RESEARCH

Stephanie Chen ‘15 gives us the inside scoop.

ANIKA AYYAR ‘14 & EMMA HEROLD ‘13


Last summer, Senior Emma Herold and Upper Anika Ayyar interned at Stanford University with Dr. Seung Kim—a professor in the Department of Developmental Biology and an Exeter alumnus. Ayyar originally met Dr. Kim at a Stanford medical school lecture. Impressed by his sharp pedagogical skills and passion for his field of study, she asked him to visit Exeter and speak at assembly. He agreed, and subsequently invited a few Exonians to work in his lab. Ayyar’s and Herold’s work focused on discovering gene function through the use of transposable elements. Transposable elements, or “jumping genes,” are swaths of DNA that actively ‘jump’ to different locations in the genome. In some instances, these elements insert themselves in the middle of another gene, knocking out that gene’s normal transcription and downstream expression. By loading transposable elements with transcriptional activators—DNA sequences that help researchers locate transposable element insertion sites—Ayyar and Herold were able to determine which genes had been knocked out in a fly with a specific set of observed physical features. Their experiments involved an activator called LexA and two transposable elements, Piggybac and Gal-4, which behave slightly differently and offer some versatility in researcher’s ability to target genes of unknown function. So, what? By observing the difference in

physical traits between a control group of flies and flies that have been injected with transposable elements, Ayyar and Herold were able to link a subset of a fly’s genetic code to observable phenotypes, or physical characteristics. Ayyar wrote the following on her blog: “I am inserting LexA into coding regions in order to disrupt functions of unknown genes. Based on the physical mutation the insertion causes, I can use the location of LexA to determine which specific gene was interrupted to cause that mutation. Thus, I can deduce the function of that specific gene. For example, say I injected Piggybac (carrying LexA) into a fly embryo, and LexA were to land on a gene that has not yet been characterized. Say, for example that the insertion of LexA onto that specific gene caused the fly to have shorter hairs than normal flies. I would be able to conclude that LexA landed on a gene that controls hair length, and disrupted the normal expression of that gene.” By repeating this experiment with different transposable elements and different insertion sites, they were able to classify genes as being involved in a number

of important biological processes. While at Stanford, Ayyar and Herold developed sample stocks of flies with LexA inserted in their genomes. This year, they will help run Biology 374—an experimental biology research course—in which the fly stocks they made last summer will be sent to Exeter for students to process using transposable elements and send off for sequencing. Herold said she enjoyed the hands-on experience, and that the people they worked with were “generous, funny, and kind.” Ayyar called it “the coolest thing ever.” To read more, visit Dr. Seung Kim’s page at http://seungkimlab.stanford.edu, Emma Herold’s blog at labstuff.tumblr.com, and Anika Ayyar’s blog at drosophilandi.tumblr.com

STEPHANIE CHEN, class of 2015 at Phillips Exeter Academy, is from the suburbs of Chicago. Her interests in science lie in the biological sciences. Having had a love for science for several years now, she is very excited to be writing and editing for Matter Magazine.

RAVI JAGADEESAN ‘14

Upper Ravi Jagadeesan participated last year in a program at MIT for high school students called PRIMES, or Program for Research in Mathematics, Engineering, and Science—a year-long program that began last January. Ravi’s PRIMES group worked on a project that involved searching for patterns that are hard to avoid in permutations of generalized lists of ordered objects. A permutation, by rigorous definition, is “a rearrangement of the elements of an ordered list S into a one to one correspondence with S itself”. In plain English, a simple example of a permutation might be the rearrangement of the set of numbers {1,2,3} to {3,1,2}. Jagadeesan examined permutations with specific rules of construction—for instance, that elements should be alternatingly less than and greater than their neighbors. He and his partner used computer simulations to determine how often short patterns, or recurring sequences of elements, occurred in such permutations. The goal, as stated in a public summary of his work, was to “determine approximately how difficult it is for alternating permutations to avoid patterns of length at most six.” According to Jagadeesan, “This theory of pattern avoidance in permutations has connections to computer science, algebraic combinatorics, algebraic geometry, and representation theory”. Joel Lewis, a postdoctoral researcher at the University of Minnesota, was Jagadeesan’s mentor. Nihal Gowravaram, who attends Acton-Boxborough high school in Massachusetts, was his partner. On Saturdays, Jagadeesan would go to MIT to discuss ideas and proofs with his mentor. Finding patterns took a lot of thinking, Jagadeesan said. “Sometimes when you aren’t working, an idea comes.” A lot of Jagadeesan’s work was independent, and could be done at Exeter. For more information, please see http://web.mit.edu/primes
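The kind of computer search described above can be reproduced in miniature by brute force. The sketch below is illustrative only—it is not the PRIMES group’s code—and the choice of the pattern (1, 2, 3) and of the “up-down” alternating convention are arbitrary.

```python
# Brute-force sketch: count alternating permutations of {1,...,n} that avoid
# a given pattern. Illustrative only; feasible for small n.
from itertools import permutations, combinations

def is_alternating(p):
    """True if p is up-down alternating: p[0] < p[1] > p[2] < p[3] ..."""
    return all((p[i] < p[i + 1]) if i % 2 == 0 else (p[i] > p[i + 1])
               for i in range(len(p) - 1))

def contains_pattern(p, pattern):
    """True if some subsequence of p has the same relative order as `pattern`."""
    k = len(pattern)
    for idx in combinations(range(len(p)), k):
        sub = [p[i] for i in idx]
        if all((sub[a] < sub[b]) == (pattern[a] < pattern[b])
               for a in range(k) for b in range(a + 1, k)):
            return True
    return False

def count_avoiders(n, pattern):
    """Number of alternating permutations of length n avoiding `pattern`."""
    return sum(1 for p in permutations(range(1, n + 1))
               if is_alternating(p) and not contains_pattern(p, pattern))

if __name__ == "__main__":
    for n in range(3, 8):
        # Avoiding (1, 2, 3) means having no increasing subsequence of length 3.
        print(n, count_avoiders(n, (1, 2, 3)))
```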

MATTHEW DAITER ‘14

Last summer, Upper Matthew Daiter had the opportunity to work at the MIT Media Lab—known for its eclectic and innovative mix of research in human cognition, molecular machines, and several other fields of “social computing.” There, he collaborated with Travis Rich, an MIT graduate student, to develop software for electroencephalography. An electroencephalograph, or EEG, is a headset that detects the faint electrical activity produced by a person’s brain. In practice, EEGs are usually built as a series of pads—with saline solution applied to their undersides to enhance their electrical conductivity—that stick to the head. What makes EEGs so compelling is that the alpha, beta, and gamma waves they measure can be studied to characterize emotions and feelings as electrical signals in the brain. According to Daiter, states of concentration and meditation are easier to read than anger and happiness. Of course, EEGs can be leveraged more broadly in the general categories of “neurotherapy, biofeedback, and brain computer interface”. Daiter first heard about the MIT Media Lab through a friend. The research underway in the various groups in the lab piqued his interest, and so he contacted several professors to see if there were available positions for a high school student. Fortunately, he found one. Daiter enjoyed his time at MIT, and said that there were “so many great parts about it.” One

of them, he mentioned, was that he “loved working on a team,” and that the collaborative aspect of his work was a “highlight” of the experience. As for the future of EEG technology, Daiter believes that EEGs are already quite sophisticated on the software end, but that hardware-wise they could be more user-friendly. To use an EEG in its current state, you must go through the tedious process of applying saline solution to each individual pad on your head. Once that and a few other impediments to mass use are fixed, Daiter believes EEGs will become more marketable and useful to the general public. For more information, please go to http://emotiv.com
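For readers curious what measuring alpha, beta, and gamma waves looks like computationally, here is a minimal sketch that estimates band power from a sampled signal. The signal below is synthetic and the 256 Hz sampling rate is an assumption; a real headset and its SDK would supply the actual voltage samples.

```python
# Minimal sketch: estimate alpha/beta/gamma band power from a sampled signal.
# Synthetic data; a real EEG headset would supply the samples.
import numpy as np

FS = 256  # sampling rate in Hz (assumed)
BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal, fs=FS):
    """Return average spectral power in each frequency band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

if __name__ == "__main__":
    t = np.arange(0, 4, 1.0 / FS)  # four seconds of data
    # Synthetic "EEG": a strong 10 Hz alpha rhythm plus noise.
    signal = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(len(t))
    print(band_powers(signal))  # alpha power should dominate
```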

The experience of working in a real scientific environment is invaluable.

ALAN GUO ‘14

As one of PEA’s Student Council Summer Research Fellows, Upper Alan Guo interned last summer at Woods Hole Oceanographic Institution (WHOI), the largest independent oceanographic research institution in the country. Guo worked under Dr. Lloyd Keigwin and alongside several other researchers interested in paleoclimatology, the study of climate change over thousands to millions of years. Guo spent most of his time gathering fossilized shells of microscopic organisms called foraminifera. The calcium carbonate in the shells was used to deduce their age and the temperature at which they resided. Through such work, Guo and his colleagues at WHOI “hope to continue to contribute to the ongoing discussion on climate change and provide new and accurate information pertaining to our particular interest”. For Guo, the most interesting part of the experience was the data analysis. He says, “We used all different kinds of resources to try and explain certain phenomenon and/or trends we saw.” He also found “the experience of working in a real scientific environment” to be “invaluable.“ Guo looks forward to traveling to the Arctic in the coming summer with his mentor at WHOI and continuing his foray into paleoclimatology research. n



DEEP FREEZE: CRYONICS AND THE FUTURE OF DEATH

Hansen Shi ‘14 addresses the ethics of cryonics.

Take a look at the website of the Cryonics Institute of Clinton Township, Michigan, and you will almost certainly be reminded of a TV ad. In the middle of the page, under six stock photos of varying resolution, reads the non-profit institute’s slogan: “Your Last Best Chance For Life—and Your Family’s”. Indeed, my (and many other visitors’) first impression of the Cryonics Institute—indeed of cryonics in general—is skepticism at its self-congratulatory tone, perhaps because the idea of freezing people’s bodies for resuscitation in the distant future mirrors, a little too closely for comfort, the plot devices of several well-known science-fiction movies (Han Solo in The Empire Strikes Back, for example). But disbelief is not the only reason many are uncomfortable with cryonics. The increasing possibility of a future in which cryonics plays a part in the postponement of death has brought to the public sphere a debate about the practice’s morality.

The Story of Cryonics

The story of cryonics begins, regrettably, with a work of science fiction called “The Jameson Satellite”, which ran in the July 1931 edition of the sci-fi magazine Amazing Stories. At the time of publication, Robert Ettinger, future college physics teacher and “father of Cryonics”, was twelve. “The Jameson Satellite” tells the tale of one Professor Jameson, who has his corpse beamed up into space, where it lies dormant for ages, to be discovered millions of years later by a race of cyborg men with organic brains and mechanical bodies. Ettinger was influenced profoundly by this story, and grew up with the belief that it was only a matter of time before biologists discovered the secret to eternal life. To Ettinger, the conclusion was so obvious that when he privately published his now-seminal work, The Prospect of Immortality, in 1962, the first thing he did was send it off to 200 people he selected from Who’s Who in America. He didn’t receive much of a reaction; however, when the book was picked up by a publisher—after Boston University professor of biochemistry and science fiction writer Isaac Asimov vouched for its scientific plausibility—Ettinger became an “overnight celebrity”. He and his ideas were argued over in publications including The New York Times, Time, Newsweek, and Paris Match. He travelled the talk show route, appearing live on TV with David Frost and Johnny Carson. The stage had been set, and people have been talking about cryonics ever since.

Defining Death: The Case for Cryonics

Supporters of cryonics divide the issue into ideal cryonics and non-ideal, or real, cryonics. According to the supporters, ideal cryonics assumes medical legitimacy— i.e. that the process actually works—and is justifiable by the notion that medical treatment cannot be limited to a time frame. Most would agree that a life should always be saved, even if the process takes twenty years; supporters of cryonics argue that cryonics is the same, only with a much longer (indeed indefinite) time frame. Of course, ideal cryonics is exactly what it claims: ideal. Rather, what supporters of cryonics wind up arguing for is what they call non-ideal cryonics, in which the guarantee of success is removed. Remember that the entire premise of cryonics is that bodies are frozen and restored when the necessary technology is available. Non-ideal cryonics makes no promises about whether that technology will ever exist, or if freezing a body is really a viable way to preserve a person. Yet, according to supporters of cryonics, non-ideal cryonics is still justifiable.


No one has lived forever. What if it’s awful? What if everyone who gets to live for two, three hundred years ends up taking their own lives?

The reason has to do with the definition of death. Defenders of cryonics see a fundamental difference between legal death and another type of death—a more “real” death—they refer to as information-theoretic death. According to the theory of information-theoretic death, a person is really only truly dead when his or her “information” can no longer be retrieved: when the brain is destroyed in such a way that his or her personality, memories, etc., are irretrievably lost. And here’s where the disagreement begins. Supporters of cryonics have found that information-theoretic death and legal death don’t always occur at the same time; indeed, information-theoretic death can occur minutes, hours, or even days after legal death. Yet, as they point out, all of our current medical practices consider legal death the last stop in patient treatment. Therefore, invoking a tradition of medical hierarchy that dates back to the Greeks, the supporters of cryonics claim that they are the last stop on the ladder of treatment, and as such, that their practice is beyond justifiable; that it is a moral imperative.

The Case Against Cryonics

Perhaps the main critique of cryonics has to do with its social implications. What would happen in a society in which a select class of people can live forever? The possibilities range from vast inequality to the establishment of a permanent ruling class; indeed, the prospect of immortality being achieved by a select few leaves plenty to the imagination. There’s also the issue of government. To what extent will our government be involved in cryonics? The issue puts our representatives in an extremely awkward situation: if they choose to intervene, they will suddenly be put in charge of choosing who gets to live and who gets to die. And if they don’t, then they are faced with all the possibilities outlined in the previous paragraph. Finally, there are the mental health implications. Supporters of cryonics only claim a physical health benefit: what about the effect on the patient’s psychological state? Obviously, no one has lived forever. What if it’s awful? What if everyone who gets to live for two, three hundred years ends up taking their own lives? Some might argue that the government has a fundamental duty to protect its citizens, even from themselves.

The Verdict

In 1798, British political economist Thomas Malthus predicted the end of the world. He argued that the population was increasing too fast relative to the increasing supply of food, and that a worldwide hunger catastrophe was inevitable. What he didn’t account for was technological progress in agriculture and medicine. Between the years of 1798 and 2013, technology improved so rapidly that we ended up avoiding Malthus’s apocalypse entirely. But imagine if we had simply given up. Imagine if society had looked at Malthus’s 1798 graph and decided that there was nothing left to be done. Where would we be? This is what cryonics is. The critics are right; cryonics is based on an assumption about the future. But why not? We’re all familiar with the hockey stick graph of technological growth. While we can’t say that there’s no reason for it to stop, we have no choice but to believe that it will continue: there is simply no other alternative. The truth is, many of the issues we are facing—the bacterial resistance apocalypse, the energy crisis—we have chosen to resolve by waiting for the technology to become available. Death is no exception. n

Hansen Shi is an upper in Webster North from Dublin, California. Raised in Silicon Valley by an engineer father, Hansen grew up around math and science, fields that he has continued to pursue at Exeter. He is currently enrolled in a year-long genetics course, which will culminate in a 10-week-long research collaboration with Stanford University.



THE WIZARDRY OF WALL STREET

Sid Reddy ‘13 talks about the science of stock trading.

In 1973, two economists—Fischer Black and Myron Scholes—published a precise, logical way of calculating the real value of an option based on stock price.

Have you ever wondered how hedge funds make money? Sure, they’re just investing in the stock market for their clients. But what makes successful firms like D. E. Shaw Group and SAC Capital Advisors different from the bottom feeders? Part of the answer lies in their use of sophisticated algorithms that execute stock trades in milliseconds to reap massive short-term returns. High-frequency trading, or HFT, has become a popular investment strategy in recent years with rapid increases in computing speed and memory. Instead of placing long-term bets on stocks and analyzing company fundamentals the way folks like Warren Buffett do, firms are designing automated systems that await pre-programmed market conditions before buying and selling shares according to equations derived from mathematical models. HFT, which is grounded in an electrifying field of study called quantitative finance, has some interesting ethical issues that have yet to be addressed by policy makers. In the meantime, investment firms are racing each other to come up with smarter automated trading strategies that might give them an edge over the cutthroat competition.

For a Math 590 term project this fall, a friend and I explored trading strategies derived from one of the most popular models in quant finance: the Black-Scholes options pricing method. Stock options are a financial innovation that gives investors the opportunity to place bets on the movement of stock prices without actually buying and selling shares of a company. A “call” option, which sells for much lower than the price of its underlying stock, gives you the right to buy a share in Google at a specific “strike price” on a future date. So, if you think Google’s stock price is going to soar from $750 to $900, you might want to buy calls that “mature” two months from now at a strike price of $750. If the share price shoots up above $750 in two months, then you can exercise your right to buy cheap Google shares and immediately resell them at their high market price to make a handsome profit of $900-$750=$150 per option. If, instead, your bet goes sour and Google’s share price plummets to $600, then all you lose is the money you paid for the options (since they are now worthless). If instead of buying options you had bought actual shares in Google at $750, you’d have lost $750-$600=$150 per share, which stacks up pretty high compared to the relatively low price of options.

So, where does Black-Scholes come in? The prices of stock options are determined by their supply and demand in the market, and because of that, they can often be undervalued. Intuitively, we can say that as stock price grows greater than an option’s strike price, the option should become more valuable; similarly, when stock price falls below the option’s strike price, the option should become worthless (why would you buy Google shares at a price higher than the normal market price?). In 1973, two economists—Fischer Black and Myron Scholes—published a precise, logical way of calculating the real value of an option based on stock price, maturity date, strike price, stock volatility, and several other measurable financial variables; the work later earned a Nobel Prize. What made this method of options pricing so useful to traders was that it allowed them to determine if the market price, or “premium,” of an option was lower, higher, or roughly equal to its ‘real’ value. In the former case, investors could perform options arbitrage and, at least theoretically, make free money.

High-frequency trading, or HFT, has become a popular investment strategy in recent years with rapid increases in computing speed and memory. Instead of placing long-term bets on stocks and analyzing company fundamentals the way folks like Warren Buffett do, firms are designing automated systems that await pre-programmed market conditions before buying and selling shares according to equations derived from mathematical models.

Options can be arbitraged—or taken advantage of—in a couple of ways. The simplest one is exercise arbitrage, in which a call is priced below its exercise value (measured as stock price minus strike price). Say Google call options with a strike price of $600 are selling for $50, and that Google’s current stock price is $750. I could simply buy the call for $50 and exercise it right away to make a net profit of ($750-$600)-$50=$100 per option. In practice, such arbitrage opportunities don’t appear very often since most options have bounded prices and can only be exercised on their expiry date. Black-Scholes options arbitrage, however, works a little differently. Say Google call options that expire in one month are selling for $50, but our Black-Scholes equations tell us that they’re worth $75. What the mathematics of Black-Scholes is really saying is that we can cleverly build a portfolio of Google shares and near-riskless government bonds that replicates the $75 value of the option. This ‘replication’ is accomplished without consulting the market price of the option, so that once the portfolio is constructed and set in motion, we can track the ‘real’ option price. When the market-determined option premium dips below the value of the replicating portfolio, we buy the option and sell our replicating portfolio to claim the difference of $75-$50=$25 without exposing ourselves to any risk (since the portfolio and option have the same “cash flow,” or price movement). If the premium jumps above the replicating portfolio’s value, then we do the opposite: sell the option and buy a replicating portfolio to make a similar $25 profit.

My project partner and I would be rich by now if it wasn’t for one thing: the Black-Scholes model makes some rigid assumptions about financial markets in order to derive its equations. The model doesn’t work well in high volatility conditions, and it falls apart when maintaining a replicating portfolio involves massive amounts of trading, since exchanges often place transaction costs on such activity. Relaxing these assumptions has become a hot topic in quantitative finance, and is the subject of both academic and industrial research. The financial wizardry behind Black-Scholes has become more sophisticated since Black and Scholes published their paper in ’73, but problems and inconsistencies remain in the model’s use. We experienced this firsthand when we wrote open-source code implementing the Black-Scholes model on a mock portfolio that traded in real-time. The mathematics work out nicely in predicting option prices, but only to a degree.

In recent years, ethical issues associated with the use of Black-Scholes and automated trading in general have been voiced in the public forum. Ultrafast trading algorithms employed by HFTs are suspected to be responsible for recent increases in market instability and stock price volatility: unexpected “black swan” events or “flash crashes,” in which stock indices like the Dow Jones drop hundreds of points and rebound in the span of a few minutes, are largely considered to be the results of interactions between proprietary HFT algorithms executed by different firms. Federal regulators are beginning to wake up to the realities of twenty-first century financial activity and are making moves to cage these “cheetahs” of the investing world. Until then, however, mathematical modeling and high-performance computing will continue to play an important role in institutional investing and the millisecond movement of money through the world’s financial markets. n
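For readers who want to see the pricing formula in action, here is a minimal sketch of the closed-form Black-Scholes value of a European call. It illustrates the model described above; it is not the open-source code from the Math 590 project, and the spot price, strike, volatility, and interest rate below are made-up inputs.

```python
# Minimal sketch of the closed-form Black-Scholes price for a European call.
# All inputs below are illustrative, not real market data.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Value of a European call: spot S, strike K, T years to expiry,
    risk-free rate r, annualized volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

if __name__ == "__main__":
    # A call struck at $750 on a $750 stock, one month to expiry,
    # 1% risk-free rate, 30% volatility (hypothetical numbers).
    value = black_scholes_call(S=750, K=750, T=1 / 12, r=0.01, sigma=0.30)
    print(f"Model value of the call: ${value:.2f}")
    # If the market premium sits well below this model value, the
    # replicating-portfolio argument in the article says an arbitrage
    # opportunity may exist (ignoring transaction costs and other frictions).
```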

A four-year resident of Soule Hall, SID REDDY has been involved in scientific research and student publications throughout his high school career. His academic interests lie in computational biology, financial engineering, and digital media design. In his free time, Sid enjoys reading The Economist and browsing Quora.



WHAT’S FEATURED

TURNING PREDICTIVE MODELING INTO A SPORT

A young start-up company has applied the competition model to some of the toughest problems in machine learning and statistical analysis. Sid Reddy analyzes the impact of competitive data science.

You’re a pro tennis player. Around you, crowds cheer as the Australian Open finals kick off. You’ve got your eyes on the prize, and you’re determined to clinch the win. You’ll do whatever it takes to win this game, because your pride is on the line. Athletes aren’t the only people who engage their killer instincts through competition. Anyone who has ever played any sort of game can understand that turning seemingly mundane tasks into a contest where players win points and earn badges is a powerful way to engage people and solve problems. Companies leverage this primal aspect of our personality all the time. Duolingo—a tech start-up in Pittsburgh—has built a wildly successful language-learning site around the idea that mastering a new lingo can be turned into a game. Foursquare harnesses similar game dynamics to draw customers to local businesses in exchange for product discounts and social media-driven rewards. These principles of gamification are now being applied to the world of predictive modeling. The computerized systems that simulate drug-delivery dynamics, predict insurance liabilities from car accidents, and detect fraud in online payment patterns all rely on a subfield of computer science called machine learning. ML, analogous to statistical analysis, uses sophisticated algorithms to create predictive models of interesting phenomena observed in data. Say I have a thousand pictures of tanks in a jungle that have been labeled as “pictures of tanks.” Say I then take another five thousand unlabeled pictures of jungle. I could train one of several predictive models at my disposal to find tanks in these unlabeled images with (ballpark) 80% accuracy. Such a tool would come in handy if I wanted an alarm to go off at a military base when a surveillance camera ‘sees a tank’ rolling over the nearest hill. While some machine learning problems are relatively straightforward to tackle, others require novel and ingenious combinations of algorithms to solve. Often enough, the organizations (companies,

university research labs, etc.) that face such challenges don’t have the technical expertise to meet them. This is where Kaggle—a young, San Francisco start-up—steps in. Kaggle allows organizations with challenging ML problems to open their data sets to a community of predictive modeling experts; the company allows labs and corporations to host months-long competitions on their site (Kaggle.com), offering cash prizes to top scorers. Kaggle has been wildly successful in making professional data analytics available to organizations low on ML know-how. It has also cultivated an extraordinarily rich competitive programming culture that encourages students to learn ML by working with real-world data sets. Start-ups like Kaggle find innovative ways to solve entire ecosystems of problems. By gamifying predictive modeling, they’ve killed three birds with one stone: outsourced analytics for companies, competition for machine learning pros and students, and incubation of novel ML techniques. n
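As a concrete, heavily simplified illustration of the supervised-learning workflow behind the tank example above, the sketch below trains a classifier on labeled examples and scores it on held-out data. It uses scikit-learn with synthetic feature vectors standing in for real images; it is not code from Kaggle or any particular competition.

```python
# Minimal supervised-learning sketch: train on labeled data, score on held-out
# data. Synthetic features stand in for the "labeled pictures" of the example.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for "1,000 labeled pictures": 1,000 feature vectors with 0/1 labels.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)                 # learn from the labeled examples
predictions = model.predict(X_test)         # label the held-out examples
print(f"held-out accuracy: {accuracy_score(y_test, predictions):.2f}")
```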

A four-year resident of Soule Hall, SID REDDY has been involved in scientific research and student publications throughout his high school career. His academic interests lie in computational biology, financial engineering, and digital media design. In his free time, Sid enjoys reading The Economist and browsing Quora.



WHAT’S NEXT

ISLETS, INJECTIONS, AND DIABETES PAUL LEI ‘15

The disease diabetes mellitus affects approximately 25.8 million people of all ages in the United States. With the revolutionary research currently underway in labs across the country—and even the world—one day these people could be cured. The cause of diabetes lies in the pancreas, the organ that controls blood sugar levels. The pancreas’ endocrine cells—which secrete the hormones that regulate sugar absorption and release in the bloodstream—are called islet cells. These islet cells sense blood sugar levels and accordingly release the hormones insulin, which tells cells to absorb glucose (a compound necessary for the production of cellular energy), and glucagon, which turns on the secretion of glucose into the bloodstream from liver cells. Patients with either type I or type II diabetes have insufficient levels of insulin in their blood. In the case of type I—the more serious condition—the body fails to produce insulin altogether. Currently, there exist two standard practices of diabetes treatment in the U.S.: regular injections of insulin (which manage the disease but do not cure it) and pancreas transplantation. Pancreas transplantation is usually only recommended for late-stage patients because of the complexities of the procedure and the risks involved in carrying it out. But an innovative procedure has emerged in the last decade. Medical professionals and researchers intent on abandoning large-scale pancreas transplants are now testing a new method called “islet transplantation.” Instead of transplanting the entire pancreas, isolated islet cells (accounting for 1-3% of pancreas tissue) are injected into the liver. This simpler form of surgery has shown great promise in its efficacy and reduction in operative risks. A significant number of trial patients have benefited from the procedure, becoming

insulin independent for several years. Dr. Ji Lei at the Massachusetts General Hospital Islet Isolation Laboratory says, “Islet transplantation is the true cure for severe type I diabetes.” In a few years, islet transplantation could become a standard procedure for dealing with type I diabetes. Unfortunately, the issue of donor shortage remains even with islet transplantation. Three million people in America have type I diabetes, and human donors cannot possibly supply all cells for transplantations. To tackle this problem, another groundbreaking technology is in R&D: cross-species transplantation. Pigs, because of their abundance and the relative similarity of their insulin-producing genes to those of humans, can be used to harvest donor tissue. Porcine islets regulate glucose levels in the same range as humans’ do, and pig insulin has already been used in human treatment for several decades. One impediment to using pigs to facilitate islet transplantation is that the human immune system classifies galactose, which is prevalent in pig tissue, as an antigen and targets it for destruction. This kind of immune reaction would result in pig organ rejection in a human body. To work around the problem, scientists have begun engineering pigs that lack galactose. Another strategy adopted to counter rejection is called encapsulation. Researchers are now attempting to surround islet cells with a capsule that avoids T-cell (a type

of white blood cell) contact, which would prevent a destructive immune response and allow for the unhindered transfer of insulin. So far, results have been promising. In one experiment, a monkey with transplanted and genetically modified pig islets survived insulin-independent for an entire year. In the future, porcine cross-species transplantation may become a part of all types of transplant surgeries. Scientists have made considerable progress in preventing organ rejection, but still have ways to go before the technology becomes commercially viable. Nevertheless, the prospect remains that the lives of millions of patients will be dependent on what we now typically encounter on the dinner table. n



Mattermag.org

