
Fall 2018 ISSUE



About Quanta

Quanta Student Magazine is an entirely student-run science initiative, designed for students to share their scientific passions with their peers. Since its founding in 2010, Quanta has periodically released a print issue to The Bishop's School and the broader community, as well as online articles on its Quanta Now website. Because contributing to the annual print issue requires time-intensive collaboration and in-depth coverage of science, technology, engineering, and mathematics topics, the staff develops the concepts and works on the publication over the summer. Quanta's management and editorial team invite experienced and accomplished Quanta Now contributors to be part of the following year's annual print issue. This year, our writers had a wide range of interests, spanning physics, computing, health, and outer space. Despite our diverse passions, the 2018 Quanta issue merges these topics and shows the intrinsic connections between them. Throughout the school year, the Quanta staff also runs our online publication, Quanta Now, where we publish monthly articles about new and exciting scientific topics. Please peruse this issue at your leisure and check our website to read the latest Quanta Now articles, find past print issues, and learn more about Quanta and science in general.

Quanta Staff
Editor-in-Chief ......................................... Alyson Brown
Operations Manager .................................... Jake Stenger
Associate Editor ........................................... Aaron Liu
Head of Layout ........................................... Sara Michael
Head of Upper School Submissions ..................... William Olson
Head of Middle School Submissions ................. Sebastian Hayden
Junior Editor ....................................... Eliana Petreikis
Writers and Staff Editors ........ Meredith Hunter, Emma Myer, Tobey Shim, Katy Silva, and Emily Zhu
Faculty Advisor ................................. Dr. Pamela Reynolds


• Fall 2018 • Quanta •

Letter from the Editor


I came to The Bishop's School in 6th grade with very high expectations for myself. I had many dreams and aspirations, some reasonable and some not. I wanted to take the hardest classes, be the leader of a club or three, walk across the quad before noon, and become one of the awe-inspiring maroon-polo-wearing seniors who towered over me. Now, six years later, I recognize that some of these goals were dreams that showed little understanding of the importance of traditions. Reflecting on my experiences over these past years, I realize that many of my goals have come true. I am a senior, and I am proudly wearing the colors that represent our school and our accomplishments in this institution. This past summer, I was able to dedicate a small part of my time, with my minimal painting skills, to the artwork on the walls of the senior rec room. And I was fortunate to participate in an internship that opened my eyes to endless professional opportunities ahead. But the rest of my time was spent primarily working as the new Editor-in-Chief of Bishop's Quanta Student Magazine and reviewing fascinating articles submitted by both middle and high school students.

The famed Polish-French scientist Marie Curie once stated, "Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less." During my time here at Bishop's, I have made an effort to welcome the unknown and to follow my own unique path. This has not always led me to where I originally thought I would be, but rather to where I needed to be at that moment. I encourage you all to do the same! Follow your dreams, but be open to possibilities yet unknown, and consider deviating at times from the course you originally set for yourself.

In this edition of Quanta, our writers, editors, and I strove to illuminate many aspects of science, from abstract concepts to concrete objects. As students about to join the real world, our knowledge is powerful.
We have the capability to shape our generation by learning from and sharing with each other what we have mastered. This is why the Quanta 2018 issue includes articles written by multiple middle school writers in addition to upper school students. We want to make science exciting for all ages and to acknowledge that we can all learn from one another, regardless of age. Through this issue, we hope to share our passion for discovery, from gemstones to supercomputers to the truth behind artificial sweeteners. I invite you not only to read our articles but also to delve deeper into a scientific topic that interests you. Taking just a few minutes to ask a question or to do some research could lead you somewhere magical. As hardworking Bishop's students, our knowledge may be extensive by the time we reach senior year, but the unknown is infinite and still waiting to be explored. I urge you to take a risk, try something new, and be open to unanticipated possibilities! The Quanta staff hopes that you will enjoy our 2018 issue and that we inspire some new ways of thinking about our world.

Alyson Brown (‘19)


PHOTO CREDITS
Cover: CC image courtesy of Zeiss Microscopy on Flickr
Harder Problems, More Power: The World’s Fastest Supercomputer— pg. 6 PC: Oak Ridge National Laboratory
The Color of Gemstones— pg. 8 PC: Royal Collection Trust
Lighting The Path: A New Way To Treat Brain Cancer— pg. 9 PC: and
An Algorithm To Rule Them All?— pg. 12 PC: William Olson (‘19)
The Unappetizing Science of Paleofeces— pg. 14 PC:
Not So Sweet: The Truth Behind Artificial Sweeteners— pg. 15 PC:
The Dawn of Quantum Communication— pg. 18 PC:
The Future of Spacecraft Propulsion— pg. 20 PC: Tobey Shim (‘20)
A Guide To Healthy Living— pg. 22 PC: CC courtesy of Gellinger on
The Inner Workings of Nuclear Weapons— pg. 24 PC:
Saving Our Planet’s Species— Victoria Is Pregnant!— pg. 26 PC: Alyson Brown (‘19)



TABLE OF CONTENTS
6 ....... Harder Problems, More Power: The World’s Fastest Supercomputer — Jake Stenger (‘19)
8 ....... The Color of Gemstones — Aaron Liu (‘19)
9 ....... Lighting The Path: A New Way To Treat Brain Cancer — Alyson Brown (‘19)
12 ...... An Algorithm To Rule Them All? — William Olson (‘19)
14 ...... The Unappetizing Science of Paleofeces — Sebastian Hayden (‘19)
15 ...... Not So Sweet: The Truth Behind Artificial Sweeteners — Meredith Hunter (‘20)
18 ...... The Dawn of Quantum Communication — Eliana Petreikis (‘20)
20 ...... The Future of Spacecraft Propulsion — Tobey Shim (‘20)
22 ...... A Guide To Healthy Living — Emma Myer (‘23)
23 ...... Autism— The Struggles and Successes — Katy Silva (‘23)
24 ...... The Inner Workings of Nuclear Weapons — Emily Zhu (‘23)
26 ...... Saving Our Planet’s Species— Victoria Is Pregnant! (Quanta Now: April 10, 2018 Article Update) — Alyson Brown (‘19)





Several months ago, I had the opportunity to visit UCSD’s Comet supercomputer. Row after row of sleek ceiling-to-floor shelves of processors flashed in hundreds of places and supported data cables like a wall covered in ivy vines. Hot air blasted from fans loud enough to make it hard to hear someone talk. Yet this machine possesses just 1.4% of the processing power of Summit, the world’s current most powerful supercomputer. Located in the Department of Energy’s Oak Ridge National Laboratory in Tennessee, Summit powered up in June of 2018 to provide an unparalleled research tool for scientists with numbers to crunch.

The new Summit supercomputer surpasses China’s Sunway TaihuLight with a tested peak speed of 122.3 petaflops, or 122,300,000,000,000,000 floating point operations per second. This speed measurement differs from the clock speed (cycles per second) typically quoted for ordinary computers. While standard central processing units, or CPUs, operate on integers, supercomputers are designed to perform operations on floating point numbers, values represented in a form similar to scientific notation that can store a vast range of magnitudes. This distinction, along with their massive size, makes supercomputers able to tackle physical simulations and mathematical problems that a laptop could not make a dent in.

Behind the machine’s slick black case are IBM’s POWER9 CPUs and Nvidia’s Volta graphics processing units (GPUs), packed together in groups, or nodes, along with random access memory (RAM) that quickly stores information while a program is running. Once used only to render graphics, GPUs have proven exceptional at running many calculations in parallel and are now used in tasks like bitcoin mining and scientific computing. To store the inputs and results of experiments, Summit is wired to a file storage system with a 250 petabyte (250 million gigabyte) capacity, enough space to hold about 74 years of high definition video. Just as important as storage capacity is the ability to transmit data, and Summit delivers in this area as well, with 1,500 gigabytes per second of read and write disk access, so the machine’s brain can be fed data

fast enough to keep it at peak speed.

To make calculations, Summit essentially tosses electrons back and forth on a massive scale, and this inevitably generates a lot of heat. Four thousand gallons of water must pump through Summit’s copper veins each minute to carry away thirteen million joules of heat per second, the same amount of power as 60 thousand desktop iMacs. When it comes to delivering the highest performance, the engineering of the machine’s infrastructure matters just as much as how many transistors are packed into each processor. In this regard, Summit’s designers claim to have built not just the fastest supercomputer, but also the smartest.

“Four thousand gallons of water must pump through Summit’s copper veins each minute to carry away thirteen million joules of heat per second, the same amount of power as 60 thousand desktop iMacs.”

The IBM-made Summit may have cost taxpayers $175 million, but it will do much more than win back from China the coveted title of operating the fastest computer. A truly diverse range of scientists will use the machine, from biologists to quantum physicists. Its unmatched power will make feasible simulations that not long ago would have taken decades on the fastest computers. The Department of Energy touts Summit as a scientific and economic investment, giving the United States a leg up in the competition to make the biggest discoveries. Summit is a versatile machine that can be applied to just about any data-intensive field of study. Its advent is especially exciting for fields such as machine learning, quantum physics, nuclear fusion and fission, and computational biophysics. Many other projects flocking to Summit involve the detailed simulation of physical phenomena, in some cases down to the level of subatomic particles. The government-funded XGC research project is already using Summit to run virtual experiments on hypothetical fusion reactors, potentially bringing the world closer to harnessing the same energy source that powers the stars. The researchers focus in particular on the behavior of particles at the boundary between the ultra-hot core and the turbulent outer edge of the plasma in the superheated, high-pressure environment of a fusion reactor. Perhaps the Oak Ridge National Laboratory, the birthplace of the Manhattan Project, will finally make possible the harnessing of nuclear fusion for useful energy.

One of Summit’s most anticipated applications is in machine-learning-powered artificial intelligence. Typical machine learning algorithms process vast amounts of data to automatically train a model to perform a complex task. Summit will allow artificial intelligence researchers to quickly optimize algorithms more complex than ever before. IBM built Summit with machine learning in mind: memory chips are packed close to the processors to minimize the time it takes to move data, and Nvidia’s GPUs are built specifically for training neural networks. The result is a machine that can perform over a billion billion machine learning calculations per second, power that can be applied to a myriad of problems. One team working through the National Cancer Institute will use machine learning on Summit to find hidden correlations in vast volumes of health data, with the goal of helping doctors better diagnose and treat cancer. Another group plans to use machine learning to trace connections between specific genes and susceptibility to opioid addiction.

Summit may seem like a technological triumph that will maintain its relevance for a generation, but just around the corner at Oak Ridge is the six-year-old machine that preceded Summit, a reminder that nothing is permanent in supercomputing. After all, the power of a cutting-edge supercomputer built 25 years ago now resides in the pockets of millions. But for now, the world’s most intense scientific problems will be tackled by a humming mountain of metal in Tennessee.
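The scale of the figures above is easier to appreciate with a little back-of-the-envelope arithmetic. In the sketch below, only the 122.3 petaflops, 4,000 gallons per minute, 13 megajoules per second, and 60,000-iMac figures come from the article; the laptop speed and the size of the hypothetical workload are assumptions chosen for comparison.

```python
# Sanity-checking the article's numbers.

SUMMIT_FLOPS = 122.3e15      # tested peak speed (from the article)
LAPTOP_FLOPS = 100e9         # rough modern-laptop estimate (assumed)

ops = 1e21                   # a hypothetical large simulation (assumed)
summit_hours = ops / SUMMIT_FLOPS / 3600
laptop_years = ops / LAPTOP_FLOPS / (3600 * 24 * 365)
print(f"Summit: {summit_hours:.1f} hours; laptop: {laptop_years:.0f} years")

# Cooling: can 4,000 gallons/minute of water plausibly absorb 13 MJ/s?
watts_per_imac = 13e6 / 60_000            # power per desktop
water_kg_per_s = 4000 * 3.785 / 60        # 1 US gallon is about 3.785 L
temp_rise_c = 13e6 / (water_kg_per_s * 4186)  # water holds ~4186 J/(kg*degC)
print(f"{watts_per_imac:.0f} W per iMac; water warms ~{temp_rise_c:.1f} degC")
```

The result, a workload of 10^21 operations finishing in about two hours on Summit versus roughly three centuries on a laptop, and a water temperature rise of around 12 degrees at roughly 217 watts per desktop, suggests the article's figures are internally consistent.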




Ever wonder why emeralds are green, rubies are red, and sapphires are blue? Where do all these colors come from? To figure this out, we have to look into the chemistry and the structure of these famous gemstones.

The pure forms of these gemstones are actually clear or colorless. Rubies and sapphires are colored varieties of the mineral corundum, a crystalline form of aluminum oxide (Al2O3). Emeralds, along with aquamarine, are colored varieties of the mineral beryl, a beryllium aluminum silicate (Be3Al2Si6O18). The colors come from impurities in these minerals. These impurities are metals, specifically positively charged metal ions, which replace some of the aluminum ions (or, in beryl, the beryllium ions) that make up the crystal structures of these gemstones. In rubies and emeralds, the metal that replaces some of the aluminum ions is chromium. Chromium is a metal known for having many oxidation states, which might suggest that the chromium ions in rubies and emeralds are in different oxidation states. That is not the case, however: in both gemstones the chromium has an oxidation state of 3+, meaning it has three fewer electrons than its elemental form. So why are these gemstones so different in coloration?

The difference in color is due to a phenomenon described by crystal field theory. This theory states that the light emitted after an electron of a metal ion is excited is influenced by the surrounding ions, or ligands (in aqueous solutions). In the case of rubies, the chromium ion is surrounded by oxide anions (O2-), just like the other aluminum atoms in the crystal structure. The negative charge of the surrounding oxide anions is attracted to the positive charge of the chromium ions, and this interaction with the chromium ion's outer electrons causes its electron orbitals to split, or pull apart, into different energy levels. This split in orbitals makes the gap between the excited and ground states of the orbital electrons just right for the emission of red light, making this gemstone red. In other words, such a configuration leads to the reflection or transmission of red light and the absorption of all other colors; those other colors are absorbed by the outer electrons, allowing them to move from the ground state to an excited state.

In emeralds, a similar phenomenon occurs, but here the chromium ion is surrounded by silicate ions (SiO4 4-). The silicate ions interact more weakly with chromium than the oxide ions in rubies do, a result of the greater distances between the atoms in emeralds and the different electronegativities of the anions. There is therefore less of a split between the orbitals of the chromium ions: the gap is too small for higher-energy light to be absorbed, while lower-energy light, such as red light, is absorbed, leading to the reflection or emission of blue-green light.

The diamonds, rubies, emeralds, and sapphires on this Imperial State Crown showcase the versatile chemistry of gemstones.

Despite the two gemstones having the same chemical impurity, the different chemical compositions of the minerals corundum and beryl lead to the difference in color. Other gemstones, such as turquoise and sapphires, also have colors that crystal field theory can usually explain: surrounding anions interact with metal ions, causing the reflection of light of different energies. In turquoise, copper ions in the crystal structure produce a blue-green color; in sapphires, titanium gives a deep blue color; and iron gives certain garnets a deep red color. The right combination of chemical compounds and elements has yielded items that humans have cherished for millennia.
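The link between the size of the crystal-field splitting and the color absorbed can be sketched with the photon-energy relation E = hc/wavelength. The 2.2 eV and 1.8 eV gaps below are illustrative round numbers, not measured values for ruby or emerald; the point is the conversion itself.

```python
# Convert a crystal-field energy gap to the wavelength of light absorbed.
H = 6.626e-34    # Planck's constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def absorbed_wavelength_nm(gap_ev):
    """Wavelength whose photon energy matches the orbital energy gap."""
    return H * C / (gap_ev * EV) * 1e9

# A larger gap absorbs shorter (greener) light, letting red pass through,
# as in ruby; a smaller gap absorbs red, leaving blue-green, as in emerald.
print(absorbed_wavelength_nm(2.2))  # about 564 nm, green
print(absorbed_wavelength_nm(1.8))  # about 689 nm, red
```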





On November 30, 2018, the first fluorescence guided surgery (FGS) brain tumor symposium in the United States will be held at the Leon & Norma Hess Center for Science & Medicine, in New York City. Neurosurgeons, neurologists, and oncologists from around the world will gather at Mount Sinai Hospital to hear experts discuss FGS and the use of 5-aminolevulinic acid (5-ALA) to remove malignant gliomas. A glioma is a type of tumor that occurs in the brain and spinal cord; it affects the glial cells that surround nerve cells and help them function. The objective of this conference will be to discuss the recent progress made in the surgical resection of brain cancers and to address the use of 5-ALA in combination with other intraoperative visualization technologies.

All brain tumors present a danger to the patient, whether malignant or benign. More than any other type of cancer, however, brain tumors can have lasting and life-altering physical, cognitive, and psychological effects on a patient’s life. A benign tumor does not grow into nearby tissue or spread to other parts of the body the way a malignant tumor can, and it may not need to be removed, depending on its location. But a benign brain tumor can nevertheless affect a patient’s ability to function and can even be deadly if it interferes with portions of the brain responsible for critical bodily functions, such as breathing, or with structures such as blood vessels or nerves. With malignant brain tumors, however, there is little choice. In order for the patient to survive, the affected tissue must be removed; otherwise the cancer will spread to nearby unaffected cells and ultimately result in the patient’s death.

A critical task for the surgeon during this surgery is to distinguish tumor cells from normal healthy tissue, allowing for minimally invasive tissue removal. In the operating room, surgeons have had to rely on their experience, eyesight, and basic lighting to determine which cells are part of the tumor. They have also used instruments and their hands to examine how the tumor feels to the touch, in order to judge whether a tissue is cancerous or healthy. But even with the use of technology such as a pre-operative MRI scan to help them make this decision, experienced surgeons have not always been successful at determining which cancer cells to remove. The cells of some brain tumors are nearly invisible or so minute in size that they cannot be detected by the human eye. As a result, the survival statistics for brain tumor patients are very disturbing. With over 700,000 Americans living with a brain tumor, 20% of which are malignant, and an average survival rate for all malignant brain tumor patients of only 34.7%, an estimated 16,616 people will die from brain cancer in 2018. The good news is that for children 0-19 years of age, the average survival rate for malignant brain tumors is much higher: statistically, 73.9% of children will survive their brain cancer diagnosis. It is important to point out, however, that pediatric brain tumors remain a significant concern. They have now become the leading cause of cancer-related deaths for children under the age of 19. Data kept by the Centers for Disease Control and Prevention show that in 1999 approximately a third of cancer deaths in children were caused by leukemia and about a quarter were caused by brain cancer. As of 2014, however, the percentages had flipped, and it has since been determined that over a third of cancer deaths in children and adolescents are caused by brain cancer.

“With over 700,000 Americans living with a brain tumor, 20% of which are malignant, and an average survival rate for all malignant brain tumor patients of only 34.7%, an estimated 16,616 people will die from brain cancer in 2018.”

Over 100 different types of brain tumors exist, many with their own subtypes, and each of these has a different but still very high mortality rate. Glioblastoma multiforme is the most common of these cancers and also the deadliest form of brain cancer. Because this cancer is so aggressive, patients frequently have only one to two years to live after their diagnosis. It is the type of tumor that arises from glia, the healthy non-neuronal cells that maintain the brain’s biochemical and physiological pathways. Whether the brain cancer is glioblastoma multiforme or one of the other more or less aggressive tumor types, all physicians and surgeons agree on one thing: leaving infected cells behind during the resection of a brain tumor can result in the cancer spreading once again to healthy tissue, the tumor growing back, and, ultimately, the patient’s death. If even a few cancerous cells remain post-surgery, they can quickly multiply and spread. Since tumors are not always visible to the naked eye or palpable to the fingers, they can be missed during tumor resection, and this leads to a recurrence of cancer in about 20 to 50 percent of patients.

With such dire statistics, surgeons focus during surgery on the margin around a tumor. They must decide how much of the healthy tissue around the tumor to remove in order to make sure that all cancerous cells are resected from the patient. This decision becomes far more complicated in the brain, where each healthy cell removed could affect the patient’s daily life. This is the huge challenge of brain surgery: how can a surgeon minimize the extraction of healthy tissue while still assuring the removal of all cancerous tissue? This is the question neurosurgeons have faced since these types of surgeries first began.


Experts have been looking for new ways to improve tumor visualization in the operating room, and there have been important recent developments in this field. Fluorescence image-guided surgery (FIGS) and fluorescence-guided resection (FGR) are surgical techniques now used to maximize tumor removal and minimize unintended damage to nearby tissue. FIGS and FGR combine the application of a photosensitizer before the operation with fluorescence detection during the surgery, achieved by illuminating what is called the surgical field. Using light of a specific wavelength, surgeons can see the fluorescence through a long-pass filter, which lets them make out the perimeter of the tumor more clearly. Clinical trials have been conducted using a synthetic dye called indocyanine green (ICG) to accomplish this procedure. The day before surgery, brain tumor patients enrolled in one such trial were injected with ICG; then, during surgery, under fluorescent light in the near-infrared range of 800 nanometers, the tumors lit up literally like neon signs.

Though most common, ICG is not the only dye that has been used to light up brain tumors. Methylene blue is an FDA-approved visible contrast agent that appears dark blue. When diluted enough, methylene blue acts as a near-infrared fluorescent dye and has also been used to identify neuroendocrine, urologic, and parathyroid tumors. Methylene blue is relatively safe, but its use has been known to lead to cardiac arrhythmias, coronary vasoconstriction, and, in rare cases, even decreased cardiac output. Fluorescein sodium is another fluorescent drug that can be given intravenously to improve the visualization of brain tumor tissue. However, its use is limited because it relies primarily on non-specific vascular outflow, which means it cannot be directed by doctors to visualize specific areas of the brain.

One of the most promising agents used so far to detect tumors, however, is 5-ALA. Based on a recent analysis by the National Institutes of Health, this agent has produced some of the best results during brain surgeries. 5-ALA has also been used clinically not only for tumor detection but as an FDA-approved substance for a tumor treatment known as photodynamic therapy. The agent is usually given to a patient topically or orally, and during surgery the malignant brain tumor cells exhibit a violet-red fluorescence. Cancer-specific FGS with 5-ALA has been used successfully to resect malignant gliomas in Europe and in other parts of the world, after studies clearly demonstrated the clinical benefits of this agent. The technology is promising because it has led to gross total resection of brain tumors in 65% of patients, a high percentage compared to the 35% success rate under standard-light surgery. Each year, over 15,000 people in the United States undergo surgery to remove brain tumors, and this new technology has helped save countless lives. The use of 5-ALA has been limited, though, by its relatively high cost and inconvenient method of administration. In addition, a surgeon may still not be able to visualize all of the fluorescently tagged cancerous cells. 5-ALA provides real-time imaging and good detection of tumor margins, but it is not flawless, and experts have examined the possibility of developing new technology to further its effectiveness in order to help more patients.

As a result, to enhance this process even further and confirm that all fluorescent cells are removed, doctors are now using fluorescent probes to detect the presence of the dye, eliminating reliance on the surgeon’s own visual observations. A new scanning fiber endoscope (SFE) can now help better detect the margins of a brain tumor. This new tool, called a “smart” probe, targets tumor cells and detects the fluorescent glow produced by 5-ALA. Currently, there are two types of probes on the market. One type is known as the enzyme-reactive activatable fluorescent probe, which works through what is called enzymatic cleavage, mostly outside of the cells. The second type of SFE is a “molecular-binding” activatable fluorescent probe, which is activated in affected cells by breaking down complex molecules in the lysosome. To operate on cancerous brain tumors, doctors can use these types of SFE probes to extend detection into the near-infrared range.

The scanning fiber endoscope fluoresces the 5-ALA dye under its light.

Despite all of these scientific, technological, and medical advances, however, the perfect, fail-safe fluorescence imaging probe does not yet exist. To be flawless, the technology must provide perfect contrast between the tumor and the healthy tissue around it, while at the same time having no negative impact on the patient. The current challenge for scientists is to design fluorescent imaging probes that have high selectivity for tumors and high tumor-to-background ratios while causing minimal toxicity to the patient. As this medical field grows, the scientific knowledge available will also grow and develop. This technology is still a work in progress, but doctors have high hopes for the new techniques that will be discussed at the fluorescence guided surgery brain tumor symposium at the end of this year, and for the many innovations to come that will extend patients’ lives.




P vs NP is considered one of the biggest yet slowest-developing open questions in the realm of computer science. Since its introduction in 1971 by Stephen Cook, currently a computer science and mathematics professor at the University of Toronto, the hype around the problem has considerably died down. Still, a poll published in the Association for Computing Machinery’s Special Interest Group on Algorithms and Computation Theory newsletter showed that, in 2012, 81% of computer science and mathematics researchers thought P did not equal NP, or P ≠ NP. Meanwhile, 53% of those polled thought that a definitive proof of whether or not P equals NP would emerge before the start of the 22nd century. But what is P vs NP? The question is derived from computational complexity theory, which classifies problems based on how difficult they are to solve and how easily their solutions can be checked.

P stands for polynomial, referring to the ability to solve P problems in polynomial time, that is, reasonably quickly. Solutions to P problems can also be quickly checked for accuracy or validity. An example of a P problem in computer science is sorting a list in ascending order. Suppose we had a random list of numbers, [10, 4, 22, 9, 17, 7, 2]. This list could be easily sorted using simple algorithms like a bubble sort or a merge sort to get the solution, [2, 4, 7, 9, 10, 17, 22]. And this solution can be checked by simply walking through the numbers in order to confirm that each one is no smaller than the one before it. In mathematics, a P problem could look like 3x + 5 = 11, where we are solving for x. The solution can be easily found in a reasonable amount of time by subtracting 5 from both sides of the equation and then dividing both sides by 3, which results in x = 2. This solution can be easily checked by plugging the number 2 back into the equation in place of x to confirm that 6 + 5 = 11.

“If a valid proof that P = NP were developed, all encryption algorithms would have to be redeveloped and banking and national security systems would be vulnerable.”

NP stands for nondeterministic polynomial. In practice, this means that a proposed solution to the problem can be easily checked for validity, but it is hard for computers to find a solution in a reasonable amount of time. A good example of an NP problem is the traveling salesman problem. Say a salesman starts in his hometown and wants to travel to a hundred other randomly preselected cities in the contiguous United States and then return home in under a certain number of miles traveled. Someone could offer as a solution a particular route through the cities, and it would be relatively easy to check: did the salesman start in his hometown, visit all hundred cities, and return home while traveling less than the maximum distance? However, computers would not be able to solve this problem in a reasonable
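The easy-to-check, hard-to-solve asymmetry can be sketched in a few lines. The five-city distance matrix below is made up purely for illustration; the point is that verifying a proposed tour takes one quick pass, while solving by exhaustion must consider (n-1)! tours, which is why a hundred cities is hopeless.

```python
import itertools

# Made-up symmetric distances between 5 "cities" (illustrative only).
D = [[0, 2, 9, 10, 7],
     [2, 0, 6, 4, 3],
     [9, 6, 0, 8, 5],
     [10, 4, 8, 0, 6],
     [7, 3, 5, 6, 0]]

def tour_length(tour):
    # Sum each leg, returning to the starting city at the end.
    return sum(D[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def verify(tour, budget):
    # Checking is fast: one pass to confirm every city is visited once
    # and the total distance fits the budget (the "NP" check).
    return sorted(tour) == list(range(len(D))) and tour_length(tour) <= budget

def solve_by_exhaustion(budget):
    # Solving is slow: try all (n-1)! tours that start at city 0.
    for rest in itertools.permutations(range(1, len(D))):
        tour = (0,) + rest
        if tour_length(tour) <= budget:
            return tour
    return None

print(verify((0, 1, 3, 2, 4), 26))   # True: a valid tour of length 26
print(solve_by_exhaustion(26))       # found among 4! = 24 candidate tours
```

With 5 cities the exhaustive search checks 24 tours; with 100 cities it would face 99! tours, a number with 158 digits, while verifying any single proposed tour would still take just one pass.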


amount of time since an algorithm would go through every single possible path that the salesman could make and then determine the one traveled across the least distance. Every P problem is technically an NP problem since the solution can be easily checked for both. However, the reverse is not the case, since NP problems cannot be solved for in a reasonable amount of time. Therefore, all NP problems are not P problems. This relationship is similar to squares and rectangles in mathematics, where every square is a rectangle, but not every rectangle is a square. However if all rectangles were squares, this would mean that NP problems would become P problems, therefore P = NP.

Researchers are trying to settle the question by looking at algorithms for NP-complete problems. NP-complete problems sit at the boundary between NP problems and NP-Hard problems. NP-Hard is the classification for problems that are at least as hard as every problem in NP; for many of them, proposed solutions cannot even be checked easily. Problems pertaining to infinity fall into this category, since the only way to directly validate that some action occurs forever would be to wait around forever to see if the action stops. This brings us to NP-complete problems, which, as far as we know, cannot be solved in a reasonable amount of time, but whose solutions can still be validated quickly, just like any other NP problem. The methodology is this: every NP problem can be translated into an NP-complete problem, so if an efficient algorithm were developed to solve even one NP-complete problem, it could handle every regular NP problem. In addition to its importance in theoretical computer science and mathematics, the P vs. NP problem has practical implications. The assumption that P ≠ NP is the backbone of almost all modern cryptography and banking systems. If a valid proof that P = NP were developed, all encryption algorithms would have to be redeveloped, and banking and national security systems would be vulnerable. Under our current assumption that P ≠ NP, most cryptographic hash functions can only be broken by way of brute force, with a lot of computing power and millions of years. However, with

a master algorithm, the time required to invert a cryptographic hash function could be drastically decreased to minutes. Also, transportation and shipping efficiency could be improved by new optimization algorithms that would vastly outperform today's approach of essentially guessing and checking to find a good shipping route. But the most important application of P = NP might be protein folding simulations, which are crucial to the development of cures for different types of cancer. An accurate proof that P = NP would lead to drastic revolutions in today's technology-based industries.

Above is a diagram that categorizes problems if P ≠ NP, compared to if P = NP. The model on the left depicts a distinction between NP and P; in that same model, NP-Complete belongs to both NP and NP-Hard. In the model on the right, NP-Hard's scope has expanded due to the equality of P, NP, and NP-Complete.
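The brute-force attack on a hash function can be demonstrated on a deliberately tiny search space. The sketch below uses Python's standard hashlib and a hypothetical three-letter lowercase "password"; real secrets are vastly longer, which is what pushes the same loop into millions of years.

```python
import hashlib
import itertools
import string

def sha256_hex(text):
    """Hash a string with SHA-256 and return the hex digest."""
    return hashlib.sha256(text.encode()).hexdigest()

# A made-up secret only 3 lowercase letters long, for illustration
target_hash = sha256_hex("cat")

def brute_force(target, length=3):
    """Try every possible string of the given length (26**3 = 17,576 here).
    Each extra character multiplies the work by 26."""
    for combo in itertools.product(string.ascii_lowercase, repeat=length):
        guess = ''.join(combo)
        if sha256_hex(guess) == target:   # checking one guess is fast
            return guess
    return None

print(brute_force(target_hash))   # recovers "cat" within 17,576 tries
```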





In most climates, fecal matter left outdoors decomposes and disappears. Few traces of its contents, let alone of the human DNA it contains, are ever left. But in the 1970s, when archaeologists first excavated Hinds Cave in the Lower Pecos River area of Texas, something was different. The hot, dry Southwestern climate had left a wealth of desiccated perishable artifacts in the cave, preserved by the lack of moisture in the air. Most notably, however, the arid Texas air had left millennia-old, fossilized human excrement waiting to be found. This was human coprolite. This was paleofeces. Mitochondrial DNA sequences in the excrement confirmed that these paleofeces belonged to the ancestors of the Lower Pecos River area's modern-day Native American tribes. Inside, however, was something even more interesting: DNA sequences from pronghorn antelope, bighorn sheep, and cottontail rabbit, as well as chloroplast DNA sequences from eight different plants. In other words, the paleofeces contained the remains of what would have been a typical Native American meal. This discovery highlighted the tremendous value of paleofeces: they provide otherwise-inaccessible dietary insights from the time and place of a given sample. The archaeologists at Hinds Cave were not the first to realize this. As early as the 1890s, John William Harshberger had hypothesized that the elements of our predecessors' diets (especially any seeds) might be preserved in fossilized excrement. He correctly posited that the study of this excrement could provide researchers with seemingly-lost dietary information. Unfortunately though, early studies of paleofeces were


not very exhaustive. For example, the human coprolites uncovered in Bennett Young's 1910 excavation of Salts and Mammoth Caves in Kentucky yielded little more information than that previous residents of the area subsisted on a diet including sunflower seeds and hickory nuts. Early-twentieth-century researchers are not necessarily to blame for this, however; the technology available at the time was no match for the DNA sequencing techniques that we have developed in the last few decades. Even so, it was not until the 1950s and 1960s that paleofeces research took on a more serious, scientific tone. The field was revolutionized thanks to the work of botanist-turned-archaeologist Eric Callen and parasitologist Thomas Cameron. Their excavations in the Huaca Prieta prehistoric settlement in Peru led to the discovery of a faster, more efficient method of processing coprolite: rehydrating it with trisodium phosphate. This method continues to be used today in paleofeces-related work, although it is supplemented with various other chemical techniques.

Hinds Cave in Texas, an archaeological site that yielded paleofeces with a rich history.

Not every coprolite reacts the same way when mixed with trisodium phosphate. For


example, with the work done in 2001 on the Hinds Cave paleofeces samples, a mix of double-distilled water and a variety of DNA extraction buffers did the trick. Whatever the particular needs of a given human coprolite sample, careful reconstitution is entirely worth it. Just a decade ago, in the Paisley Caves of Oregon, paleofeces dating back 14,000 years were discovered, suggesting that the accepted model for human migration to North America could be inaccurate. In 1969, Henry Johnson Hall was able to rehydrate parasite ova from paleofeces in the Great Basin in Utah, a step toward revealing which parasites may have infested our predecessors. All in all, the unappetizing science of paleofeces may just be the key to gaining a richer understanding of the people and cultures that came before us. Human coprolites are like time capsules, holding otherwise-unattainable genetic and dietary information within them. But if paleofeces work is a prime way for modern-day researchers to gather information on long-gone cultures, how will future researchers study our own culture and lifestyle?



Lab safety rules could be preventing you from making huge discoveries! Such was the case for Constantin Fahlberg, a Russian chemist, who bit into his bread roll one night and discovered a surprisingly sweet taste. The taste was coming from his own hands. Fahlberg rushed back to his lab to taste everything in sight. He soon found the substance that would change the food industry: benzoic sulfide. This substance is now known as saccharin, the first artificial sweetener.

Saccharin has a metallic aftertaste that detracts from its similarity to sugar. Luckily, Michael Sveda, a student at the University of Illinois, accidentally discovered a substance called cyclamate that improved the taste when mixed with saccharin. While working on a new fever-reducing medication, he didn't wash his hands before smoking and discovered this substance, which is thirty times sweeter than sugar. Unfortunately, the story doesn't end there. Years later, rats given cyclamate developed bladder cancer. The news of this spread until the FDA banned cyclamate.

Still, the discovery and use of artificial sweeteners continued. Artificial sweeteners were exclusively marketed to people with diabetes until the sugar


shortages that occurred during both world wars. This, coupled with Americans' evolving cultural belief that being thin equates to being healthy, led to the rise in popularity of artificial sweeteners. Second-generation artificial sweeteners were discovered the same way. Sucralose was discovered when a scientist accidentally tasted the chemical he was working on after he misheard his boss say the word "test" as "taste." For all you chemistry students, the lesson here is to eat lunch without washing your hands.

Why does sugar taste so sweet? When you eat sugar, sucrose molecules trigger sensory cells on your tongue. Receptor proteins on these sensory



cells have "pockets." Molecules with the right size and shape bind with a pocket, which initiates a cascade of events within the cell. This cascade ends in the release of a signaling molecule that notifies the brain that you are tasting something sweet. The same thing happens with salty, sour, and bitter tastes. Sucrose (natural sugar) binds quite well with sensory proteins in sweet-tasting cells, but most artificial sweeteners form even stronger bonds. As a result, artificial sweeteners taste much sweeter than an equal amount of sugar. For example, saccharin tastes three hundred times sweeter than sucrose, and sucralose tastes six hundred times sweeter. Once the brain receives the signal from the cranial nerves that something sweet has been tasted, the signal is redirected to the primary taste cortex. A signal is then sent to the part of your brain associated with reward pathways, causing your body to release dopamine. This means that people are hardwired to get pleasure from eating sweets! A study comparing the brain scans of a dozen women who were not regular consumers of artificial sweeteners demonstrated that sugar elicits a stronger response from the reward-pathway system of the brain than artificial sweeteners do, despite the sweeteners' sweeter taste. However, the reward system can be altered. In a comparison study, women who regularly consumed artificial sweeteners showed an even stronger activation in the reward-pathway part of their brains than the non-consumers. This suggests that the brains of people who regularly consume artificial sweeteners are less likely to distinguish


between artificial sweeteners and real sugar. After all this processing by our nervous system, our digestive system does not recognize sweeteners as food. They are not broken down, so the body does not receive any calories. That means you can taste something better than sugar without any consequences! Right?

Are artificial sweeteners a safe alternative to sugar? A sugar free dessert sounds amazing, but although there are no calories digested, there are still effects on the body. Scientists have been studying the effects of artificial sweeteners since they were first discovered, yet little is known about their influence on the human body. Some studies show that artificial sweeteners can help you lose weight while others say they make you put on pounds. Further, research on the subject may be unreliable since many studies were sponsored by the food industry. And safety data is even more convoluted because study subjects often consume more than one type of artificial sweetener. As science advances, additional rigorous studies have been providing more reliable evidence.

Will artificial sweeteners cause cancer?


Fortunately, artificial sweeteners do not cause cancer. Although there was a study conducted in 1970 that found rats given cyclamate had a high risk of developing bladder cancer, this was later proven false because the specific breed of rats used in the study already


had a greater risk of bladder cancer. The media reported on cyclamate’s carcinogenic risk, but these claims had no real scientific backing, and they were later disproven. Nevertheless, cyclamate has remained banned in the United States. Most of the studies conducted have been for first generation sweeteners, and long-term studies have yet to be conducted for second generation sweeteners. The bottom line is that there is no scientific evidence of artificial sweeteners bearing a carcinogenic risk.

Can artificial sweeteners help people lose weight? It doesn't seem that they are a shortcut to easy weight loss. Although there have been studies showing that artificial sweeteners help you lose weight, there is more evidence that they actually cause consumers to gain weight. One possible explanation is behavioral: when a person consumes artificial sweeteners, they tell themselves that because their drink was sugar-free, they can have a cupcake. The person ultimately consumes the same number of calories as before they started using artificial sweeteners. Physiology offers another explanation: when you consume an artificial sweetener, the sweet taste signals your digestive system to expect calories. The body releases insulin to store those calories in your muscle, fat, and liver cells, but no calories actually arrive. Without sugar to break down, your body adapts, training your insulin response to store fat in your fat cells rather than storing sugar in your liver cells as glycogen, a storage form of sugar. This

means that when you regularly drink a diet soda, your body will start storing fat in your cells instead of energy-producing sugars.

What other effects do artificial sweeteners have? A recent study found the most probable explanation for why artificial sweeteners can lead to weight gain. A team of Israeli scientists studied the effect of artificial sweeteners on rats' gut microbiomes, the ecosystem of microorganisms that live in the gastrointestinal tract. They found that rats who consumed artificial sweeteners had more gut bacteria that were efficient at turning food into fat. This finding was shown to be present in humans as well. Another study in humans examined how gut bacteria changed after five days of consuming the maximum amount of artificial sweetener recommended by the FDA. The results were similar to those found for the rats. Four of the seven subjects had a drastic change to their gut microbiomes, but three of the subjects did not. Those who had a change had 50% less of the bacterial group Bacteroidetes and 50% more of the group Firmicutes in their gut. This difference meant that they had more bacteria that could turn food into fat. What was most surprising about this study is that three of the subjects did not have this response. So, it seems that artificial sweeteners may only affect people with a certain genetic makeup! That would explain why some people gain weight when they drink diet sodas and others do not.

Are there any benefits? Despite the controversial health effects of artificial sweeteners, scientists have discovered a surprising use for sucralose, also known as Splenda. Due to its extremely strong bonds, Splenda does not break down when ingested or processed. Scientists are now using it to track water pollution. Sucralose can only be found in human waste products. As a result, when scientists find a source of water that contains sucralose, they know it is contaminated with waste water. When waste water goes through a treatment plant, bacteria in the water get rid of most types of waste. However, just as our guts do not recognize sucralose as consumable, the bacteria in water-treatment plants do not recognize it either. This means the sucralose passes through the system unaltered and ends up wherever the waste water ends up. Scientists track sucralose levels to make sure waste water is not contaminating clean drinking water reservoirs or disrupting ecosystems. For example, one research team is using sucralose to track waste water and its effects on coral reefs off the coast of Florida. The scientist running the study predicts that sucralose can be a major benefit in aiding this type of research.

How can I eat guilt-free sweets? Unfortunately, the research is not promising. Although most of the studies conducted have been on rats and human studies have had small sample sizes, artificial sweeteners seem to be adding to the epidemic of obesity they were created to counteract. There may not be a way to have dessert without the addition of calories. The best solution is to eat natural sugars like evaporated cane juice, sucanat, or coconut sugar, because the more processed sugars are, the worse they are for you. A little bit of sugar here and there is not bad for you; in fact, it is good for the body!





When did you last physically write a letter, seal it in an envelope, and send it to a desired recipient? Although letters formerly served as the only way to communicate long-distance, they have become outdated. Instead, people desire quicker forms of communication as they struggle to fit all of their activities into their tightly scheduled days. Sending emails, making phone calls, and using social media are thought to be much more efficient than communicating in person or on paper. However, prioritizing speed and efficiency compromises privacy. As people find ways to hack into conversations, maintaining security has become increasingly problematic on every level, from retaining privacy on a social media account to keeping outside organizations from interfering with government data. Due to crises such as cyber attacks, people are searching for more secure forms of communication to maintain privacy and secrecy. A potential way to increase security lies


in quantum entanglement, which describes the unique behavior of linked particles. When two particles are entangled, their quantum states are linked. According to quantum mechanics, these particles exist in a superposition of possible states until they are measured. Measuring one particle instantly determines the state of the other, and a particle immediately reflects any disturbance made to its linked particle. This "instant" determination or "instant" mirroring is observed no matter how far apart the particles are. The particles will exhibit linked behavior whether they are on opposite sides of a room or on opposite sides of the galaxy. If this sounds a bit confusing, imagine a pair of special clone twins, Amadeus and Albert, separated at birth. Amadeus lives in the mountains of Germany, where he enjoys farming and taking care of his goats. Albert lives by a lake in Austria, where he enjoys writing poetry and painting landscapes. Nobody really comes


across Amadeus or Albert, as they both live in isolated environments. However, there is some sort of link between them. If somebody comes by and measures the diameter of Albert's hand, the diameter of Amadeus's hand is instantly determined as well. It is impossible to describe Albert independently; all measurements concerning Albert instantly apply to Amadeus, and vice versa. In addition, anything done to Albert is instantly mirrored in Amadeus. Of course, this example is not an entirely accurate model of the behavior of entangled particles, as the behavior of particles is quite different from what we observe in everyday life. However, it does illustrate that whatever is done to one system is instantly determined or mirrored in the other. This "instant" mirroring provides the key to hack-proof communication. In theory, quantum keys, which are composed of long strings of linked photons, can be shared between different locations to communicate and exchange information. Because these quantum keys are linked, they would instantly mirror any tampering or eavesdropping. Anyone using the quantum keys would know immediately that the system had been compromised.

But if physicists use this system of communication, they would be using something that they do not entirely understand. Quantum entanglement seems to break the set of rules, or natural laws, observed in everyday life. The outcome of everyday actions, such as throwing a ball or dropping a china cup, can be predicted by laws that govern the way the physical world works and how objects behave. Among these rules is the universal speed limit: as explained in Albert Einstein's special theory of relativity, nothing can exceed the speed of light. As the speed of an object increases, its mass also increases. We do not notice this in everyday life because the increase only becomes significant at speeds far beyond anything we ordinarily observe. For example, imagine a paper airplane. As the airplane approaches the speed of light, its mass grows without bound, so an ever-greater force is needed to keep accelerating it. To actually reach the speed of light, the airplane would have to become infinitely massive, and an infinite amount of force would be needed to accelerate it. Because an infinite amount of force is not attainable, the speed of light limits how fast objects can travel.

There are also paradoxes caused by instantaneous transmission. When two people, say Amadeus and Albert, are moving at different speeds, time passes differently for each of them. Because Amadeus travels at a higher speed than Albert, time stretches and his clock runs slower than Albert's. Now suppose Albert sends an instantaneous message to Amadeus. Because time is moving more slowly for Amadeus, he could actually receive the message before Albert sent it. In other words, the effect (Amadeus receiving the message) occurs before the cause (Albert sending the message), creating a paradox. So how can particles instantly mirror each other, no matter the distance separating them? So far, physicists do not have a complete understanding of how the particles mirror each other instantaneously. However, many physicists believe that the communication between two entangled particles is not the same as the communication of information between two people. Think back to Albert and Amadeus. Tell Albert that if he hears the beginning phrase of a Shostakovich symphony, at that exact moment Amadeus will hear the beginning phrase of a Haydn symphony. Now tell Amadeus the same thing. When either of them hears the beginning phrase of his symphony, he will immediately know that the other is hearing the beginning phrase of his piece at that exact time. In one sense, Amadeus and Albert have "communicated" faster than the speed of light, violating the universal speed limit. However, this is not really communication, as no true information actually passed between the two. There are still questions about how entangled particles behave relative to the speed of light, but it is not necessary to completely understand something in order to use it. Incomplete understanding of quantum entanglement was not the only obstacle for quantum communication. Formerly, quantum entanglement seemed to be great in theory but unattainable and


useless in reality. Because the link between two entangled particles is quite delicate, ordinary atoms get in the way and disturb the quantum states of entangled particles. Sending quantum communications was limited to about 100 kilometers, as any greater distance disturbed the quantum states of the particles. But what about space? Space is a vacuum, so there are almost no atoms or molecules to collide with the particles and break the link between them. In August 2016, Chinese scientist Jian-Wei Pan and his team launched a satellite, named Micius, into outer space. This satellite sent a laser beam through a special crystal that created pairs of entangled photons (particles of light). The satellite then sent the entangled pairs to different ground stations on Earth. These stations were separated by 500 to 2,000 kilometers, which broke the previous record of 100 kilometers. Because these photons were distributed through space instead of being sent through air on Earth, no air particles disturbed their quantum states. This showed that it is possible for two particles to maintain an uninterrupted link over far more than a few kilometers. Because entangled particles have proven able to travel longer distances, quantum entanglement can be used to create both an efficient and secure form of communication. No matter the distance that separates the particles, any disturbance is immediately reflected, notifying those communicating that their system has been compromised. Even though it seemed unattainable before, recent discoveries and experiments on quantum entanglement show that the possibility of secure, quick communication is just over the horizon.
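The eavesdropping-detection idea can be illustrated with a toy simulation. The sketch below is not real quantum mechanics; it simply gives each "entangled pair" matching hidden outcomes in two measurement bases, and shows how an interceptor's measurements create errors that the communicating parties can detect.

```python
import random

def run_toy_qkd(n_pairs, eavesdropper=False):
    """Classical toy stand-in for entanglement-based key exchange:
    each 'pair' carries identical hidden outcomes for two bases, 'x' and 'z'."""
    errors = matched = 0
    for _ in range(n_pairs):
        outcomes = {'x': random.randint(0, 1), 'z': random.randint(0, 1)}
        alice_side = dict(outcomes)   # Alice's photon
        bob_side = dict(outcomes)     # Bob's photon
        if eavesdropper:
            # Eve intercepts Bob's photon and measures it in a random basis,
            # which scrambles the outcome in the other basis.
            eve_basis = random.choice('xz')
            other = 'z' if eve_basis == 'x' else 'x'
            bob_side[other] = random.randint(0, 1)
        a_basis, b_basis = random.choice('xz'), random.choice('xz')
        if a_basis == b_basis:        # only same-basis rounds become key bits
            matched += 1
            if alice_side[a_basis] != bob_side[b_basis]:
                errors += 1
    return errors, matched

# Without an eavesdropper the error rate is zero; with one, it jumps to
# roughly 25%, so tampering shows up when a sample of key bits is compared.
errors, matched = run_toy_qkd(4000, eavesdropper=True)
print(errors / matched)
```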



The Tsiolkovsky rocket equation, Δv = vₑ ln(m₀/m_f), where vₑ is the exhaust velocity, m₀ the fueled mass, and m_f the empty mass. Due to the natural logarithm, a significant increase in the amount of fuel is required to allow the spacecraft to go any faster.
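The caption's point can be checked numerically. Assuming the standard form of the equation and an exhaust velocity and dry mass made up purely for illustration, each equal step in target speed multiplies the fuel required:

```python
import math

def delta_v(exhaust_velocity, full_mass, empty_mass):
    """Tsiolkovsky rocket equation: achievable change in velocity (m/s)."""
    return exhaust_velocity * math.log(full_mass / empty_mass)

def fuel_needed(exhaust_velocity, empty_mass, target_dv):
    """Invert the equation: fuel mass required to reach target_dv."""
    return empty_mass * (math.exp(target_dv / exhaust_velocity) - 1)

v_e = 4500.0    # m/s, roughly a good chemical engine's exhaust velocity
dry = 1000.0    # kg, illustrative dry mass

for dv in (4500, 9000, 13500):     # each step adds the same speed...
    print(dv, round(fuel_needed(v_e, dry, dv)))
# ...but the fuel grows exponentially: ~1718 kg, ~6389 kg, ~19086 kg
```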


Humanity's advances in space exploration have been impressive, especially considering that space technology has existed for less than a century. However, the most elementary tools behind spaceflight have not changed very much over the years. The only known way to get into space is with a rocket carrying fuel that triggers a chemical reaction, which in turn exerts enough force to push the rocket out of Earth's gravitational influence.

As reliable as our chemically-fueled rockets have been, they do still have their limitations. There is a principle known as the "tyranny of the rocket equation." Essentially, rockets are very limited in the speeds they can reach and the payloads they can bring. A heavier rocket requires more fuel to reach escape velocity. Unfortunately, the addition of more fuel makes the rocket heavier, which increases fuel needs further, and so on. A huge amount of fuel is consumed just getting into space, which leaves much less for actually accelerating the spacecraft once it is out of orbit. This principle is why long-distance or even interstellar space travel is quite difficult, even with modern technology. However, all hope is not lost for deep space travel. A little bit of unconventional thinking combined with technological advances can go a long way.

In-space refueling


Since the tyranny of the rocket equation concerns escaping Earth orbit, the simplest solution is perhaps to fuel spacecraft outside of Earth's gravity well. This is not an entirely new idea. Numerous space agencies have already launched orbital refueling missions, sending fuel into orbit to extend their


satellites' service duration. An extension of this concept leads to orbital propellant depots. These depots are caches of fuel pre-placed in orbit for the use of later spacecraft. Orbital propellant caches are not just limited to stations around Earth, either: a theoretical Mars mission might require the spacecraft to dock with a depot orbiting the Moon to allow it to refuel before making the second leg to its final destination. By circumventing the fuel problem, the mission would have significantly more freedom in its payload and the kind of scientific instruments it could bring along. Some plans for orbital refueling are totally detached from Earth, with the fuel itself being obtained from space. We have the technology to separate water into hydrogen and oxygen through electrolysis, which can be used to power spacecraft. Ice deposits at the lunar poles or farther out in the asteroid belt could become

refueling hubs for deep space missions.

Slow-accelerating engines

While chemical rockets are still the only known way to actually escape Earth's gravity well, spacecraft have more options once they are in space. Due to the absence of air resistance, any acceleration the spacecraft undergoes is essentially there to stay. Over long distances, even the smallest accelerations can add up, propelling the spacecraft to impressive speeds. This is the concept behind technologies like the ion thruster. Ion thrusters utilize a neutral gas, usually xenon, as fuel. They ionize the gas by stripping away electrons. Next, the positively-charged ions are accelerated through an electric field and ejected from the back of the engine. The thrust from each ion is minuscule, so acceleration is slow. However, with a reliable power source such as an onboard nuclear reactor, ion drives can keep running for years, eventually producing impressive speeds. Two NASA probes, Deep Space 1 and Dawn, have already successfully used ion thrusters to navigate far away from Earth.

Some visionaries want to remove the need for onboard power generation altogether. They have instead looked to the sun, harnessing its power with the solar sail. Similar to ion thrusters, solar sails are designed to accelerate spacecraft slowly over long periods of time. However, their source of propulsion is entirely external. Solar sails have large surface areas in order to catch photons coming from the sun. Although

photons are massless, they each carry a tiny amount of momentum. As with the ion thrusters, the force from these photons can add up over time, propelling a spacecraft forward. In 2010, Japan's IKAROS spacecraft successfully employed a solar sail to reach Venus in just half a year, without the need for any fuel. The solar sail concept is even workable farther away from the sun, as high-powered lasers can be used to produce the same effect, pushing the sail forward.

Explosive engines

In contrast to the slow and steady ion thrusters and solar sails, some propulsion methods rely on pulsed power: the rapid release of large pulses of energy to generate thrust. One of the most dramatic examples of this concept is nuclear pulse propulsion. Nuclear pulse spacecraft would trigger nuclear detonations to produce massive amounts of thrust. While the idea of essentially attaching a nuclear warhead to the back of a spacecraft sounds terrifying, it could be a very effective way of accelerating a spacecraft quickly. Theoretical improvements to nuclear power such as controlled fusion reactions could supercharge this principle, producing even more thrust per pulse. As effective as pulse propulsion might seem, it is also perhaps the most controversial option. The 1967 Outer Space Treaty,


to which 107 countries have agreed, explicitly bans nuclear weapons in space. It is unclear whether pulse propulsion would even be permissible under international law. Even if it were, the potential consequences of an engine failure close to Earth are enough to give pause to any pulse propulsion project. This article is just a brief survey of prominent possible propulsion technologies. There are countless more concepts for space travel, ranging from mundane improvements to our current chemical engines to faraway ideas like antimatter engines and warp drives. Regardless of how fanciful some of these technologies may seem, the future of spacecraft propulsion is only just beginning.
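The "small accelerations add up" idea behind the ion thrusters and solar sails discussed above is easy to quantify. In the sketch below, the thrust and mass figures are illustrative round numbers, not actual mission data:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def speed_gained(thrust_newtons, mass_kg, years):
    """Constant thrust in a vacuum: velocity gain is simply a * t."""
    acceleration = thrust_newtons / mass_kg      # m/s^2
    return acceleration * years * SECONDS_PER_YEAR

# ~90 millinewtons pushing a one-tonne probe is a feeble 9e-5 m/s^2...
print(round(speed_gained(0.09, 1000.0, 1)))   # ...yet ~2840 m/s after a year
```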




In order to grow, stay fit, and feel and look your best, it is important to get enough sleep, eat nutritious food, drink plenty of water, and exercise. If you don't, you may not function well, due to hunger, dehydration, or fatigue. One way to begin your path to good health is by getting enough rest. Getting enough sleep can lead to fewer feelings of stress and grogginess. It is good to stick to a regular bedtime routine to prevent feeling tired and moody throughout the day. Some studies have shown that adequate sleep can also contribute to improved memory and focus during the day. When having trouble sleeping, try listening to calming music, reading a book, or taking a warm shower. It is best not to use electronics 15-40 minutes before going to bed! Studies have shown that exposure to the blue and white light that phones, iPads, and laptops emit makes it difficult to release melatonin, which helps your body know when to wake up and when to fall asleep. Some teens need more sleep than others, but most teens need approximately 8-10 hours of sleep each night.

Next, eating a balanced diet is key. Foods such as fruits, grains, vegetables, dairy, and proteins contain a lot of nutrients and allow the body to grow. For most teens, three meals a day are not enough to provide sufficient calories and nutrients; small snacks such as fruits, vegetables, and yogurt are a good solution. As a rule of thumb, half of a meal should consist of fruits and vegetables, so try to eat plenty of colorful produce, from yellow corn to blueberries. Do not skip breakfast or other meals: skipping just one meal can lead to a drop in blood sugar levels, causing irritability, and it also slows the metabolism, increasing susceptibility to weight gain. Avoiding some foods is also important for a healthy lifestyle. Experts strongly recommend against regularly consuming foods and drinks with large amounts of added sugar and no other nutritional value, such as soda or candy. Foods high in trans fats and cholesterol can also be a problem for heart health. Other fats, however, such as omega-3 fatty acids, have important health benefits and should not be avoided.

It is well known that exercise is very important to good health: it facilitates increased energy, a stronger heart, more flexibility, and deeper sleep. It is recommended that teens get at least one hour of physical activity each day. Exercise can be a team sport, a run, a swim or bike ride, or any physical activity that raises the heart rate for an extended period of time.

Lastly, drinking lots of water is very beneficial to the body. Drinking water increases energy, flushes out toxins,



improves your complexion, and boosts your immune system. It is also necessary to drink more when working out: the body loses fluid through sweat during exercise, so additional water intake is needed to maintain homeostasis.

“The National Institutes of Health found that people who engaged in leisure-time physical activity had life expectancy gains of as much as 4.5 years.”

Abiding by these suggestions will not only make you feel better but also give you a longer and happier life. For example, a study by the National Institutes of Health found that people who engaged in leisure-time physical activity had life expectancy gains of as much as 4.5 years. Daily exercise and healthy habits also decrease the risk of heart failure, heart attacks, and diabetes later in life. Clearly, it is necessary to find balance in life so that we can feel and be our best, both inside and out.



Autism Spectrum Disorder, or ASD, is a neurological disorder that affects an individual's social, communicative, and cognitive abilities. The Centers for Disease Control and Prevention's Autism and Developmental Disabilities Monitoring Network calculated that in 2018, ASD had been diagnosed in 1 out of 59 children: 1 out of 39 boys and 1 out of 151 girls. Based on these statistics, ASD affects boys roughly four times as often as girls. Someone with autism learns and interacts with the world differently than a neurotypical person. A spectrum classifies the individual severity of autism, ranging from mildly affected to greatly challenged, determined by the child's ability to learn, think, and problem-solve. Autism was first characterized in 1943 by Dr. Leo Kanner after he observed 11 children with similar behaviors who showed more interest in their surrounding environment than in their interactions with other individuals. Autism was originally thought to be a type of schizophrenia caused by bad parenting and negligence. After decades of research, we now know that this is not the case at all. Although the actual cause of autism is still unknown today, scientists have concluded that ASD is largely a genetic disorder. This conclusion is based on twin studies that began in 1977. These studies revealed that the chance that two fraternal, or nonidentical, twins will both be diagnosed with ASD is 3-10%, while the chance that two identical twins will both be diagnosed with ASD is 80-90%. This supports the idea that extra or missing genes can contribute to ASD. There is also speculation that Autism Spectrum Disorder can

be caused by environmental factors, such as either parent being older at conception or multiple pregnancies within less than one year of each other. Although scientists do not fully understand all the causes of autism, some theories have now been proven untrue. One false theory is that vaccines cause ASD. This myth spread in the late 1990s and early 2000s, but it was later disproven by a number of studies. Indicators of Autism Spectrum Disorder can be identified in babies as young as 6-18 months. Some signs that a young baby has ASD are a fixation on objects or a disregard for others around them. Older babies and toddlers might ignore being called by name or engage in repetitive movements, like rocking back and forth or flapping their arms. People can even be diagnosed with ASD later in their lives, from ages 19 to 60. One of the greatest challenges that adults with autism face is forming and maintaining friendships; one reason for this is that people on the spectrum often have difficulty dealing with social anxiety and social conflict. Scientists recommend that treatment start as soon as ASD is diagnosed in order to have the greatest impact. The most effective therapies are applied behavior analysis, or ABA, and occupational therapies, which help people develop skills used in daily life. There are also speech therapies, which help people with speech and oral-motor problems, and physical therapies, which help people move around more easily using methods like hydrotherapy or heat treatment rather than surgery. Some pharmaceutical treatments, such as risperidone and aripiprazole,


are also available to help manage behavioral issues stemming from ASD, such as self-harm and aggression. However, supposed “cures” for ASD found on the internet, such as GcMAF, an unlicensed blood product, or MMS, a bleach banned for human consumption, should not be trusted, since they are not scientifically supported. Neither of these supposed “cures” works, and users can be seriously hurt. There is currently no way to entirely eliminate the impact of Autism Spectrum Disorder, but people are starting to recognize that individuals with autism can be very smart and talented. For example, Microsoft has a program specifically for people with ASD to attract talent and foster inclusion as they design their products. Research is being conducted to learn more about

Autism Spectrum Disorder has been diagnosed in 1 out of 59 children: 1 out of 39 boys and 1 out of 151 girls.

individuals with ASD, and as this research continues, it is clear that people on the spectrum are not simply “challenged”; they are people who go through the same struggles and triumphs that neurotypical people do.




Nuclear weapons are a hot topic in the news lately, as the public is worried about their potential use. This is because they are an extremely dangerous category of weapons. At the center of a nuclear blast, everything is vaporized by the extreme heat: around 300 million degrees Celsius, or roughly 540 million degrees Fahrenheit. This heat wave, along with the radiation and shock wave, causes all sorts of injuries and ailments. Many victims die from burns caused by the heat or by fires started by the blast. Others die in buildings and structures that collapse due to the shock wave. Countless more die from disorders caused by radiation, from either the initial explosion or the radioactive fallout that occurs later. Radioactive fallout affects people far from the blast, since it can easily be carried away by wind or waterways. This radiation affects the cells in the body that actively divide: those in the hair, intestines, bone marrow, and reproductive organs. It also increases the risk of leukemia, cancer, infertility, and birth defects. In Hiroshima, over 150,000 people died as a result of the nuclear bomb, though the exact death toll is unknown. Scientists still do not know the full extent of the radiation's effects, how many have died because of it, or how many more will die. They are unsure whether the radiation could lead to even more health concerns over time and in generations ahead. There are two types of nuclear bombs: fission bombs and fusion bombs. Both perform the same task but in different ways. A fission bomb works by splitting the nuclei of atoms of heavy metals such as plutonium or uranium-235 into two smaller, lighter nuclei


by shooting a neutron at the nuclei of those heavy metals. This process is known as induced fission. The split releases two to three neutrons and a great deal of heat. It also releases gamma radiation as the two new atoms settle into their states. Scientists use uranium-235 and plutonium because these are some of the few elements that can undergo induced fission. Uranium-235, an isotope of uranium, is more commonly used than plutonium. An isotope is created by changing the number of neutrons in the nucleus


of an atom; in this case, uranium. The fission process starts when a neutron is shot at an atom of uranium-235. Scientists created a neutron generator to fire a neutron at the nucleus of uranium-235, which leads to the split. The neutrons that fly out from the split hit and are absorbed by other uranium-235 nuclei. This leads to more splits, which in turn release more neutrons. This chain reaction releases an immense amount of heat and radiation.
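One induced-fission event can be written out explicitly. The barium-krypton pair shown here is just one of many possible fission-product combinations, chosen for illustration:

```latex
{}^{1}_{0}\mathrm{n} \;+\; {}^{235}_{92}\mathrm{U}
\;\longrightarrow\;
{}^{141}_{56}\mathrm{Ba} \;+\; {}^{92}_{36}\mathrm{Kr}
\;+\; 3\,{}^{1}_{0}\mathrm{n} \;+\; \approx 200\ \mathrm{MeV}
```

Note that the mass numbers balance (1 + 235 = 141 + 92 + 3) and so do the charges (92 = 56 + 36); the roughly 200 MeV released per split appears mostly as the kinetic energy of the fragments, which is the source of the heat described above.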

The specifics of the inner workings of a nuclear bomb are still relatively unknown, but the basic layout is well understood. A fission bomb requires a few key elements: a neutron generator, uranium-235, and a tamper. The neutron generator is usually made with polonium and beryllium-9, an isotope of beryllium, separated by foil. Polonium is a highly radioactive metal, so when the foil is broken, the alpha particles it releases strike the beryllium-9, converting it into carbon-12 and knocking loose neutrons in the process. These neutrons kickstart the uranium-235 chain reaction. The uranium-235 in the bomb must be kept in separate subcritical masses: masses of material, in this case uranium-235, that are unable to sustain a nuclear chain reaction. In a subcritical mass, the atoms of uranium-235 are too far apart for the chain reaction to occur, so the bomb will not detonate before it is meant to. When it is time for the bomb to explode, the subcritical masses of uranium-235 are driven together to form a supercritical mass, the reverse of subcritical, in which the atoms are packed tightly enough to sustain and amplify the fission process. The final critical element in a fission bomb is the tamper, typically made of uranium-238, another isotope of uranium. The tamper contains the fission reaction so that as much material as possible fissions before the bomb blows apart, and it reflects any stray neutrons back into the reaction.
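The difference between a subcritical and a supercritical mass can be illustrated with a toy multiplication model (a hypothetical sketch for intuition, not an actual weapons calculation): if each fission triggers, on average, k further fissions, the neutron population after g generations scales as k to the power g.

```python
def neutron_population(k: float, generations: int, n0: float = 1.0) -> float:
    """Expected neutron count after a number of fission generations.

    k  -- average number of further fissions each fission triggers
          (k < 1: subcritical, the reaction dies out;
           k > 1: supercritical, the reaction grows explosively)
    n0 -- initial number of neutrons
    """
    n = n0
    for _ in range(generations):
        n *= k  # each generation multiplies the population by k
    return n

# Subcritical mass: the chain reaction fizzles out to a tiny fraction of a neutron.
print(neutron_population(k=0.9, generations=80))

# Supercritical mass: a single neutron becomes an astronomical number.
print(neutron_population(k=2.0, generations=80))
```

Even a modest k above 1 produces runaway growth within microseconds, which is why keeping the masses subcritical until the moment of detonation is the central safety feature of the design described above.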

The other type of nuclear bomb is the fusion bomb, which is much more efficient than a fission bomb. Fusion is the opposite of fission: the process by which the nuclei of two atoms combine to form a single atom. This is usually done with deuterium and tritium, both isotopes of hydrogen, which combine at extremely high temperatures and release a large amount of energy. One fusion bomb design involves two parts: a fission bomb and a cylinder. The cylinder has a casing of uranium-238, the tamper. Inside, a hollow rod of plutonium-239 sits at the center, surrounded by lithium deuteride. Between the fission bomb and the cylinder is a shield of uranium-238, and the remaining space is filled with plastic foam. The first step in the detonation of a fusion bomb is the implosion of the fission bomb inside, which gives off radiation, heat, neutrons, and enormous pressure. Heat builds up inside the bomb, and eventually the tamper expands and burns away. This first explosion provides the heat necessary for fusion. Next, the compression shock causes fission to occur in the plutonium rod, which gives off more radiation, more heat, and more neutrons. When the lithium is hit by a neutron, it forms helium and tritium, the latter of which reacts with deuterium. Under the high temperature and pressure, the tritium and deuterium begin to fuse, starting the fusion reaction that takes place inside the bomb.
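The two key reactions described above can be written explicitly (assuming, as in standard accounts of such designs, that the lithium involved is the lithium-6 isotope):

```latex
{}^{6}_{3}\mathrm{Li} \;+\; {}^{1}_{0}\mathrm{n}
\;\longrightarrow\;
{}^{4}_{2}\mathrm{He} \;+\; {}^{3}_{1}\mathrm{H}
\qquad \text{(lithium breeds tritium)}

{}^{2}_{1}\mathrm{H} \;+\; {}^{3}_{1}\mathrm{H}
\;\longrightarrow\;
{}^{4}_{2}\mathrm{He} \;+\; {}^{1}_{0}\mathrm{n}
\;+\; 17.6\ \mathrm{MeV}
\qquad \text{(deuterium-tritium fusion)}
```

The neutron produced by the second reaction can in turn split more lithium or fission the surrounding uranium-238, which is why the stages of the bomb feed one another.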

All of this fission and fusion creates an enormous amount of radiation and heat until, eventually, the bomb explodes. An explosion of this type can have a yield of around 10,000 kilotons, equivalent to ten million tons of TNT. Nuclear weapons are extremely dangerous and powerful weapons that can cause mass destruction. Although they are a demonstration of our technological advancement and our understanding of the world, we must remember that

they were designed to kill people. Even the inventors of such destructive weapons may not have known the full extent of their destructive capabilities, which will have persistent negative effects for a very long while.

“At the center of a nuclear blast, everything is vaporized due to the extreme heat: around 300 million degrees Celsius, or roughly 540 million degrees Fahrenheit.”

The neutrons released from the fusion then cause fission to occur in the uranium-238 (the tamper and the shield).



SAVING OUR PLANET’S SPECIES— VICTORIA IS PREGNANT! Quanta Now: April 10, 2018 Article Update

Alyson Brown (‘19)


After visiting the famed San Diego Zoo Safari Park in late 2017, I became interested in the efforts of its Institute for Conservation Research (ICR) to preserve animal cells through the park's Frozen Zoo®. The Frozen Zoo was first established in 1972 as a repository for skin and other cell samples from rare and endangered species, with the goal of cryogenically preserving living cells for possible breeding programs and scientific research. Scientists had a vision that the frozen cells could one day be used to help endangered species. When the first samples were collected and put into deep freeze, genetic technology was still developing, and there was no certainty as to how the science would evolve over time. The ICR room containing the Frozen Zoo is rarely visited by tourists, but its value is


limitless – so much so that a duplicate facility has been created in another San Diego location, in case a wildfire or other local catastrophe damages the contents of the valuable metal tanks at one of the two sites. Among the many ICR programs is the San Diego Zoo's effort to save the nearly extinct northern white rhinoceros. At the time I wrote my Quanta Now article, Frozen Zoo and Genetic Engineering: Saving Our Planet's Species (April 10, 2018), there were only two northern white rhino females left in the world: Najin, born in captivity in 1989, and her daughter Fatu, born in captivity in 2000. Just a few weeks earlier, on March 19, 2018, Sudan, the only remaining male of the species, and the father and grandfather of Najin and Fatu, respectively, had died. This was devastating news to the world


and especially to the ICR members, who had suffered their own loss not long before when the Safari Park's sole northern white rhino, the beloved Nola, died on November 22, 2015. For the last few years, because traditional breeding was no longer a viable option to save the northern white rhino (the remaining animals were too closely related to one another), the reproductive scientists at the ICR had been searching for other ways to save the species. Deriving stem cells from the frozen tissues of all accessible northern white rhinos was the first step. The plan then became to one day use these cells to create a northern white rhino embryo and bring the species back from the brink of extinction through a surrogacy program. As a result, in November 2015, six southern white rhino females were brought to the Safari Park to serve as surrogate mothers to the

northern white rhino embryos the team was planning to create and implant. These six young females, ranging in age from 5 to 8 years old, are themselves a success story. Southern white rhinos nearly went extinct at the end of the 19th century, plunging to what was thought to be only 20 animals at one point. After the discovery of an additional 100 southern white rhinos in KwaZulu-Natal, South Africa, in 1895, new hope emerged that the species could be saved. With this small group of rhinoceroses and decades of conservation efforts by organizations around the globe, including the San Diego Zoo ICR, the southern white rhino population has gradually increased. With over 90 southern white rhino calves born at the Safari Park alone in the last three decades, the ICR has played an integral part in saving the species. These traditional breeding and rescue efforts, however, are no longer an option for the northern white rhino. But through artificial insemination there is now hope to save the northern white rhino as well. While researchers still face many challenges, when they started this project they were hopeful that a northern white rhino calf could be born within 10 to 15 years to one of these southern white rhino females using this new science. These methods could also be used in the future to preserve other critically endangered species, such as the Sumatran and Javan rhinoceroses. Since my online article was published in Quanta Now, the ICR has made significant progress in the fight against the extinction of the northern white rhino. It was confirmed on May 17, 2018, that Victoria, one of the six southern white

rhino females in the program, was successfully impregnated through artificial insemination. If Victoria can carry her calf to term over the 16- to 18-month gestation period, researchers hope that someday she could be implanted with a northern white rhino embryo and give birth to a northern white rhino calf. But this is still many years away. Victoria and her crash of female southern white rhinos will need to successfully birth and raise their artificially conceived southern white rhino calves before scientists will be ready to implant them with northern white rhino embryos created from cells in the Frozen Zoo. But, as they say, a journey of a thousand miles starts with a single step, and Victoria's pregnancy is something to shout about!

I had the opportunity to learn even more about this journey to save the northern white rhino this past summer as a participant in the UCSD Reproductive Oncofertility Summer Academy (ROSA) program. As part of this training, I did classwork on the female reproductive system and treatments for infertility while visiting children's hospitals, laboratories, and, of course, the Safari Park to discuss the conservation efforts of the ICR. I was thrilled to learn that a research project I had started working on in 2017 would be part of my ROSA program. During the ROSA Safari Park visit in July 2018, I met reproductive physiologists Barbara Durrant, Parker Pennington, and Christopher Tubbs, who shared with me how they have tracked the ovulation of the six southern white rhino females to prepare them for artificial insemination, and they explained the unique role the ICR has played in these conservation efforts. From giving the rhinos regular ultrasounds to inducing pluripotent stem cells from frozen skin tissues, each technological advance brings the scientists one step closer to saving the northern white rhino species.

The ovulation chart tracks when the female rhinos are at a prime reproductive time in their cycles, and the best moment to artificially inseminate them.

As we discussed Victoria's pregnancy, it was impossible not to be moved emotionally by the impact of this new accomplishment. Although it is still very early in this process to save the northern white rhino, significant progress has been made right here, right now, in San Diego, and scientists from around the world have high hopes for the future of the species.

