Art by Richard Li

The Journal of Youths in Science (JOURNYS), formerly known as Falconium, is a student-run publication. It is a burgeoning community of students worldwide, connected through the writing, editing, design, and distribution of a journal that demonstrates the passion and innovation within each one of us.

Torrey Pines High School, Scripps Ranch High School, and San Diego Jewish Academy (San Diego, CA)

Ed W. Clark High School (Las Vegas, NV)

For more information about submission guidelines, please see

Contact us if you are interested in becoming a new member or starting a chapter, or if you have any questions or comments.
Website: // Email:

Journal of Youths in Science
Attn: Mary Ann Rall
3710 Del Mar Heights Road
San Diego, CA 92130

1 | JOURNYS | SPRING 2018


issue 9.3 - spring 2018

Psychology | Technology | Environment | Miscellaneous

Nanotechnology and Cancer Katie Lin

The Latest in Cancer Treatment: Personalized Medicine Rachel Lian

An Interdisciplinary Journey: Machine Learning in Genetics Melba Nuzen

How Stress Can Kill You Alina Luk

Recollection Imperfection: An Alteration of Past Memories Sara Reed

Hashing: Converting Objects to Numbers Daniel Liu

Expansion Microscopy Saeyeon Ju

Nanotechnology in Space Exploration Sydney Griffin

The Future of Energy Flora Perlmutter

Interview With Dr. Jyllian Kemsley Jonathan Kuo

Interview With Beili Zhang Claire Wang

Science Opportunities & GSDSEF Summaries

Instant Ramen Happiness Sumin Hwang


Nanotechnology and Cancer By Katie Lin

Art By Jeanette Ju

Nanotechnology is a field of science that focuses on objects between 1 and 100 nanometers (nm) in length. One nanometer, equivalent to one-billionth of a meter, can be hard to visualize because it is so small; for perspective, a single strand of hair is about 80,000 nm in diameter, and a single sheet of paper is about 100,000 nm thick [1]. Nanotechnology can be applied to scientific processes, medicine, and technology through imaging, modeling, and manipulation of atoms or molecules. On the nanoscale, matter possesses different physical, chemical, and biological properties.

In the human body, billions of chemical processes work together to sustain life by regulating bodily functions and protecting individuals from disease. In disease, however, the minuscule size of cells makes them hard to distinguish properly, and sometimes the body has trouble fighting back effectively. Nanotechnology can address these problems in the medical field by helping to diagnose, prevent, and cure diseases we do not yet fully understand. For example, scientists today are using nanotechnology to detect molecular changes in potentially cancerous regions of the body, aiding early diagnosis of cancers [2]. As nanotechnology continues to evolve in complexity and use, it offers novel alternatives to current forms of cancer treatment, opening the future of medicine to an entirely new frontier.

Chemotherapy is a common treatment that uses specialized drugs to kill cancer cells. However, it causes dangerous side effects because chemotherapeutic agents cannot distinguish between cancerous and noncancerous cells, leading to the death of both healthy and cancerous cells. One tissue that chemotherapy damages is the bone marrow, which produces white blood cells, leaving patients' immune systems in an extremely weakened state. In patients with compromised immune systems, even mild diseases can be fatal. Drug dosages must be limited in order to ensure patients'

safety, but reducing drug dosages can also reduce the efficacy of the treatments. Nanoscale drug delivery uses special nanoparticles to specifically target cancer cells, avoiding the side effects of current cancer treatments [3].

Nanoparticles can be used in many different ways to help treat cancer patients; one example is passive tumor accumulation, which targets tumors through the enhanced permeability and retention (EPR) effect. The EPR effect is the observation that certain molecules, like nanoparticle anticancer drugs, tend to accumulate in tumors at much higher concentrations than in healthy tissue. When researchers at the California Institute of Technology used nanoparticles to treat gastric tumors, they found that the nanoparticles and drugs entered only the tumor region, supporting the validity of the EPR effect in humans [4]. Although passive tumor accumulation and the EPR effect may seem to provide easy solutions to the problem of targeted cancer treatment, obstacles still limit their success. Passive tumor accumulation can be restricted if connective tissue surrounds a tumor, interfering with the nanoparticles' ability to reach the tumor region. Passive accumulation through the EPR effect is therefore not yet a fully established treatment option; however, new nanoparticle drug-delivery designs are being developed with EPR's deficiencies in mind.

Active tumor targeting is another way nanoparticles can be used to treat cancer. While passive treatment relies on the EPR effect to transport drugs into tumors, active targeting deliberately directs nanoparticles into affected regions. This is accomplished by modifying nanoparticle surfaces with small particles, antibodies, or other molecules.
In brain tumors, which are extremely hard to treat due to the blood-brain barrier that divides blood and brain tissue, active targeting serves as a potential non-invasive (non-surgical) treatment [5].

On the nanoscale, molecules and atoms behave differently. For example, at the nanometer scale, gold changes color, has a different melting point, and has different catalytic properties than it does at the macroscopic level. Another possible method of cancer treatment uses gold nanoparticles in conjunction with cell-signaling proteins such as tumor necrosis factor (TNF). TNF, a multifunctional molecule that helps regulate biological processes and can kill cancer cells, becomes more effective at treating tumors when paired with gold nanoparticles [6]. Gold nanoparticles are relatively easy to synthesize, and the FDA has approved them for use in disease-treatment research. This technology can also enhance current forms of cancer treatment such as radiation.

Nanotechnology presents a multitude of options for cancer treatment because of its versatility and effectiveness. Nanoparticles attack specific cancerous tissues, avoiding the harmful side effects of treatments such as chemotherapy or radiation, which dramatically weaken the body by destroying both healthy and cancerous cells. As nanotechnology continues to improve, these forms of treatment will only become more innovative and effective.

References:
[1] Size of the Nanoscale. United States National Nanotechnology Initiative website. nano-size. Accessed March 31, 2018.
[2] Working at the Nanoscale. United States National Nanotechnology Initiative website. nanotech-101/what/working-nanoscale. Accessed March 3, 2018.
[3] Treatment and Therapy. National Cancer Institute at the National Institutes of Health website. https://www.cancer.gov/sites/nano/cancer-nanotechnology/treatment. Published August 8, 2017. Accessed March 3, 2018.
[4] Than K. Nanoparticle-Based Cancer Therapies Shown to Work in Humans. Caltech website. news/nanoparticle-based-cancer-therapies-shown-workhumans-50221. Published March 21, 2016. Accessed March 31, 2018.
[5] Béduneau A, Saulnier P, Benoit J-P. Active targeting of brain tumors using nanocarriers. Biomaterials. 2007;28(33):4947-4967. doi:10.1016/j.biomaterials.2007.06.011.
[6] Cuenca AG, Jiang H, Hochwald SN, Delano M, Cance WG, Grobmyer SR. Emerging implications of nanotechnology on cancer diagnostics and therapeutics. Cancer. 2006;107(3):459-466. doi:10.1002/cncr.22035.

The Latest in Cancer Treatment:

Personalized Medicine

By Rachel Lian

Art by Saeyeon Ju

>What is Cancer?

Cancer is a disease characterized by mutations that cause cells to divide uncontrollably, forming malignant tumors that disrupt bodily functions. Though not all causes of cancer are completely characterized, scientists believe that inherited mutations and external factors such as poor lifestyle habits contribute to its onset. Current treatments use corrective surgery, chemotherapy, radiation, or a combination of these to eliminate tumors or reduce their number in a patient's body. However, these treatments are not always effective, and given the complexity of cancer and the specificity of individual cases, finding the right cure is a daunting task.

>Why Personalized Medicine?

Traditionally, cancer patients with the same type and stage of cancer receive the same treatment. However, this one-size-fits-all approach is often ineffective and comes with negative side effects, such as the destruction of healthy body cells [1]. As scientists have gained a deeper understanding of human genetics, they have learned that these traditional methods are not always effective because of genetic differences among patients. Finding a way to address patients' genetic differences while curing their cancer may help provide successful treatments for many more people. This is where personalized medicine comes into play. A new player in the field of cancer treatment, personalized medicine creates cures specifically tailored to individuals based on their genetic makeup [2].

The first step in building a personalized plan is to sequence a tumor in order to determine its molecular and genetic makeup. Based on this information, the patient is then given medication that is likely to be effective with minimal side effects. A personalized plan also includes genetic testing of the patient in order to determine the likelihood of cancer relapse after treatment. By sequencing the patient's DNA in a laboratory, abnormalities in the genome, such as irregularities in chromosome number, can be detected and then analyzed to predict disease progression. Other factors, such as environment, lifestyle, and disease history, are also taken into account when deciding the proper treatment for a patient [2].

>Targeted Therapy

The most widely used form of personalized medicine is targeted therapy, which involves drugs that act on specific genetic changes in certain genes and chromosomes. This differs from traditional therapy, in which all rapidly dividing cells are killed, including healthy ones, which is why a patient's hair falls out during chemotherapy. In targeted therapy, the drugs interfere with cancer cell gene expression, preventing further division of cancer cells only. There are currently several types of targeted therapies being used to combat cancer, including monoclonal antibodies, signal transduction inhibitors, gene expression modulators, apoptosis inducers, angiogenesis inhibitors, and hormone therapy [3]. Many of these therapies have self-explanatory functions; for example, apoptosis inducers cause cancer cells to undergo apoptosis, or controlled cell death. The purposes of other types, such as monoclonal antibodies and angiogenesis inhibitors, are harder to deduce, so these deserve further explanation.

The use of monoclonal antibodies is a type of immunotherapy that encourages the immune system to attack cancer cells more effectively [3]. These laboratory-produced antibodies bind to specific antigens on the surface of cancer cells and trigger an immune response, activating the body's defenses against target cells [4]. Monoclonal antibodies work well as supplements to chemotherapy and radiation treatment, since they make cancer cells more vulnerable to traditional medications.

Another type of targeted therapy is angiogenesis inhibition. Angiogenesis, or the

formation of new blood vessels, is vital for tumor growth, as the extra cells in tumors need the additional oxygen and nutrients carried in blood in order to survive. Normally, cancer cells send chemical signals that stimulate angiogenesis and increase blood flow to tumors, but small-molecule inhibitors of angiogenesis can interfere with this process [5, 6]. Angiogenesis inhibition works by interfering with vascular endothelial growth factor (VEGF), causing a decline in blood vessel growth and ultimately making cancer cells more vulnerable to chemotherapy and radiation treatment. Traditional treatment, by contrast, can actually increase VEGF expression in tumors and undermine its own effectiveness [7].

>Future Prospects

Personalized medicine offers a promising future for cancer treatment, but several issues limit its widespread use. One problem is the immense genetic diversity of the human population, which makes it difficult to develop treatments that work across many patients. Another challenge is that healthcare companies are reluctant to cover treatments requiring extensive testing and customized therapy; in fact, only 5% of insurance companies cover genetic testing [3]. However, as personalized medicine becomes more commonplace and boasts higher success rates, this setback will likely lessen. In the meantime, it may be worth considering genetic testing, especially if there is a risk of developing cancer based on family history or other factors. For someone diagnosed with cancer, researching and weighing all available options, which may include a form of personalized medicine, is important before deciding on a particular treatment. With rapid advancements in genomic analysis, identifying an effective plan for each individual cancer patient no longer seems like a distant dream, but like an approaching reality.

>References
[1] Mendes E. Personalized Medicine: Redefining Cancer and Its Treatment. American Cancer Society website. Published April 3, 2015. Accessed February 11, 2018.
[2] Verma M. Personalized medicine and cancer. Journal of Personalized Medicine. 2012;2(1):1-14. doi:10.3390/jpm2010001.
[3] Targeted Cancer Therapies. National Cancer Institute at the National Institutes of Health website. treatment/types/targeted-therapies/targeted-therapies-fact-sheet. Accessed February 15, 2018.
[4] Understanding Targeted Therapy. Published May 2017. Accessed February 15, 2018.
[5] Quinn D. How Brad Pitt Helped Angelina Jolie Through Her Cancer Crisis. People. Published September 20, 2016. Accessed February 15, 2018.
[6] Angiogenesis and Angiogenesis Inhibitors to Treat Cancer. personalized-and-targeted-therapies/angiogenesis-and-angiogenesis-inhibitors-treat-cancer. Published July 2017. Accessed February 18, 2018.
[7] Duffy AM, Bouchier-Hayes DJ, Harmey JH. Vascular Endothelial Growth Factor (VEGF) and Its Role in Non-Endothelial Cells: Autocrine Signalling by VEGF. Accessed February 18, 2018.

AN INTERDISCIPLINARY JOURNEY
A Brief Look at Coding and Machine Learning in Genetics
By Melba Nuzen || Art by Madison Ronchetto

In today's world of technology, the road to discovery is paved with complex, multifaceted problems. To begin looking for solutions, we must find equally complex methods to tackle these challenges. Our road was paved in the 1950s, when DNA was first identified as the blueprint for our human systems, a design that remains unchanged throughout our lives. However, as research progressed, we discovered that our characteristics rely on more than just the nucleotides of DNA; certain chemical compounds and proteins, collectively referred to as the epigenome, can modify the expression of DNA. The epigenome can increase the production of specific proteins or turn certain genes on or off when necessary [1]. All of this occurs without altering the DNA code itself; epigenetic proteins instead interact with DNA.

Recently, a number of institutions have begun exploring epigenetics in the field of cancer research. The connection between cancer and epigenetics is fairly simple: the epigenome includes proteins called transcription factors that can inhibit gene expression by blocking DNA transcription. This can stop cells from multiplying by altering gene expression: if certain genes aren't expressed, the cell cannot divide. Modulation of transcription factors is essential to the proliferation of cancer cells, the formation of tumors, and tumor metastasis to other organs, which produces secondary tumors [2]. In a study done in mice, researchers sampled various epigenomes and found a group of enhancer genes called metastatic variant enhancer loci (Met-VELs) that are frequently located near bone cancer genes [3]. The activation of these enhancer genes was required for the formation of secondary tumors, while inhibiting transcription factors that coordinated with Met-VELs interrupted metastasis.
Ultimately, this decreased the growth of cancerous tumors and prevented relapse in mice. Of course, there are many more variables to test before such research can be extended to humans. But the fundamental takeaway from this example is clear: new scientific discoveries lead to a deeper understanding of genetics, which inspires solutions to challenges that have major impacts on humanity.

So how do these discoveries, understandings, and solutions come about? A variety of fields, such as artificial intelligence, mathematical statistics, and computer programming, are combined in careers like biostatistics and bioinformatics to address some of these challenges. Let's take a closer look at the previously mentioned study of bone cancer in mice. The activation of Met-VELs by transcription factors was just one of thousands of interactions found when epigenetic proteins interacted with enhancer genes.

So how do we begin discovering what each protein does when it binds to its respective gene? And before we tackle that question, how do we even map out DNA strands and their epigenetic counterparts? To sort through the billions of base-pairs in the human

genome—which translates to millions of bytes of data—scientists turn to computers, or more specifically, programming languages. For example, take R, a powerful language designed for data analysis. Counting the number of nucleotides in a string of DNA would look something like this:

library(stringr)
seq1 <- "TCTTGGATCA"
count1A <- str_count(seq1, c("A"))
count1C <- str_count(seq1, c("C"))
count1G <- str_count(seq1, c("G"))
count1T <- str_count(seq1, c("T"))

Six lines of code tell the computer to read through a string of characters, seq1, and count all of the As, Cs, Gs, and Ts. Using the library stringr and the function str_count, this code creates four variables that hold the number of times their respective letter appears in seq1. To compare DNA before and after a mutation, the code would resemble this:

library(stringr)
seq1 <- "TCTTGGATCA"
count1A <- str_count(seq1, c("A"))
seq2 <- "TCATGGATCA"
count2A <- str_count(seq2, c("A"))
if ( count1A == count2A ) print("true")

This program compares two strands of DNA, seq1 and seq2. Using the process described above, the computer generates two variables that represent the number of "A" characters found in seq1 and seq2. Then, the code compares those two variables, printing "true" if there is an equal number of "A" characters in both sequences. This idea can be extended to all four bases to compare much lengthier DNA strands and determine whether or not two strands contain the same number of specific bases. Of course, these are simple examples to illustrate how coding algorithms can be utilized. With a few lines of code in any of many programming languages, computers can analyze millions of strands of DNA.

Now, the question to answer is how DNA interacts with proteins, and what overall effect that has on a biological system. For this complex problem, we venture off the beaten path to a more complex solution: artificial intelligence. When artificial intelligence is mentioned, images of self-driving cars and evil robots often come to mind. However, artificial intelligence can play a large role in the field of bioinformatics, particularly in genetics. Machine learning is one application of artificial intelligence that specializes in the independent analysis of data by algorithms; it is useful for studying transcription factors and their roles in cell development [4].

Within the subfield of machine learning, there are two general methods for addressing problems: supervised and unsupervised learning. As the name suggests, supervised learning teaches the machine how to analyze data by inputting annotated data points to train it to recognize an expected output. In the case of epigenetics, this means training and testing a machine learning model to recognize enhancer genes by inputting a series of known enhancer genes and non-enhancer genes; this way, the model can make an educated guess as to whether or not a new piece of data is an enhancer gene [5].
If we give our model examples of DNA that contain transcription start sites (TSS) as well as DNA that does not contain TSSs, the algorithm will theoretically be able to recognize a pattern and then find TSSs by itself.
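That train-then-predict loop can be sketched in miniature. The snippet below is a toy illustration in Python (rather than the article's R), using a nearest-centroid classifier over dinucleotide frequencies; the sequences, the labels, and the GC-richness cue are invented placeholders, not real TSS data or the method used in the studies cited here:

```python
from collections import Counter

def features(seq):
    """Dinucleotide frequency vector for a DNA string."""
    pairs = [seq[i:i + 2] for i in range(len(seq) - 1)]
    counts = Counter(pairs)
    keys = [a + b for a in "ACGT" for b in "ACGT"]
    return [counts[k] / len(pairs) for k in keys]

def centroid(vectors):
    """Average the feature vectors of one labeled class."""
    return [sum(col) / len(col) for col in zip(*vectors)]

def distance(u, v):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# Invented training data: "positive" examples are GC-rich,
# loosely mimicking promoter-like regions; negatives are AT-rich.
positives = ["GCGCGGCGCA", "CGGCGCGCGT", "GCCGCGGGCG"]
negatives = ["ATTATAATTA", "TTAATATATT", "AATTATTAAT"]

pos_centroid = centroid([features(s) for s in positives])
neg_centroid = centroid([features(s) for s in negatives])

def predict(seq):
    """Label a new sequence by whichever class centroid is nearer."""
    f = features(seq)
    near_pos = distance(f, pos_centroid) < distance(f, neg_centroid)
    return "TSS-like" if near_pos else "non-TSS"

print(predict("GCGGCGCGTA"))  # GC-rich query: "TSS-like"
print(predict("ATTAATTTAA"))  # AT-rich query: "non-TSS"
```

Real tools use far richer features and models, but the shape is the same: labeled examples in, a decision rule out, then predictions on sequences the model has never seen.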

On the other hand, unsupervised learning comes into play when it’s preferable to avoid giving a model pre-determined labels or groups. An application of this type of learning could be determining the functions and effects of specific transcription initiation complexes. Given enhancers and their respective proteins along with their impact on associated functions, a machine learning model can group proteins together based on similar effects. This occurs in one of two ways: generative or discriminative modeling. The former type of modeling groups data based on similar characteristics, whereas the latter draws a boundary between data points [6]. When dealing with unknown variables, such as the functions of proteins, discriminative modeling is used more often, since scientists have few predetermined groups to classify proteins into.

Figure 2: Unsupervised learning in grouping data [6]
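The grouping-by-similar-effects idea can also be sketched in a few lines. The Python snippet below runs a tiny one-dimensional k-means over invented "effect on proliferation" scores; the protein names and numbers are hypothetical placeholders, not data from the Met-VEL study:

```python
# Toy unsupervised grouping: cluster proteins by an invented
# "effect on proliferation" score (negative = suppresses growth).
effects = {
    "proteinA": -0.9, "proteinB": -0.7, "proteinC": -0.8,
    "proteinD": 0.6, "proteinE": 0.8,
}

def kmeans_1d(values, centers, iters=10):
    """Simple 1-D k-means: assign each value to its nearest center,
    then move each center to the mean of its assigned values."""
    for _ in range(iters):
        groups = {c: [] for c in centers}
        for v in values:
            nearest = min(centers, key=lambda c: abs(c - v))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) for g in groups.values() if g]
    return centers

centers = kmeans_1d(list(effects.values()), centers=[-1.0, 1.0])

# Assign each protein to its nearest cluster center.
clusters = {}
for name, score in effects.items():
    nearest = min(centers, key=lambda c: abs(c - score))
    clusters.setdefault(round(nearest, 2), []).append(name)

for center, members in sorted(clusters.items()):
    print(center, members)
```

No labels were supplied: the algorithm discovers on its own that the scores fall into a growth-suppressing group and a growth-promoting group, which is exactly the kind of structure an unsupervised model would surface among proteins with similar effects.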

With these methods, the machine can conclude that a certain group of enhancers and their epigenetic counterparts halt the proliferation of cancer cells, as seen with Met-VELs. Though our journey, filled with pit stops at various science disciplines, took us down a winding and tangled road, the combination of coding, machine learning, and genetics has led us to a fascinating discovery full of potential. But this explanation covers only the basics; in reality, studying epigenetics and cancer cells is just one application of this interdisciplinary approach. At the moment, combining fields of interest is the road leading us toward the future. Our journey will continue as we synthesize a variety of concepts to take on complex, ever-diversifying problems and explore new solutions that will impact humanity for years to come.


Figure 1: Supervised learning in recognizing transcription start sites in DNA [6]

With regard to epigenetics, the model would sift through megabytes of DNA to pick out notable genes of interest, freeing up time to concentrate on analyzing how the DNA interacts with transcription factors [6].

References
[1] Epigenomics Fact Sheet. National Human Genome Research Institute website. Accessed February 8, 2018.
[2] Davis CP. Understanding Cancer: Metastasis, Stages of Cancer, and More. OnHealth. Accessed February 8, 2018.
[3] Researchers Inhibit Cancer Metastases via Novel Steps - Blocking Action of Gene Enhancers Halts Spread of Tumor Cells. Case Western Reserve University School of Medicine website. cfm?news_id=1026&news_category=8. Accessed February 12, 2018.
[4] Marr B. What Is The Difference Between Artificial Intelligence And Machine Learning? Forbes. Published September 15, 2017. Accessed February 18, 2018.
[5] Marr B. Supervised V Unsupervised Machine Learning -- What's The Difference? Forbes. Published March 16, 2017. Accessed February 18, 2018.
[6] Libbrecht MW, Noble WS. Machine learning applications in genetics and genomics. Nature Reviews Genetics. 2015;16(6):321-332. doi:10.1038/nrg3920.


How Stress Can Kill You
By Alina Luk // Art by Lesley Moon

There are many forms of stress, but a particular type known as chronic stress is a potential hazard to health. Chronic stress is any form of prolonged emotional pressure, suffering, or anxiety, including stress from major life-changing events or small everyday hassles. Although minor worries and problems such as getting stuck in traffic may seem trivial, these daily doses of stress become harmful when they persist for many years and are likely to result in depression or other issues [1]. Stress can have myriad detrimental effects on multiple facets of human health.

A stressful situation elicits the release of stress hormones into the bloodstream. An important stress hormone is adrenocorticotropic hormone (ACTH), which stimulates the release of epinephrine (adrenaline) and norepinephrine (noradrenaline) [2]. These hormones prompt fight-or-flight responses, such as a rapid heart rate, faster breathing, and increased blood pressure: evolutionary reactions to danger controlled by the sympathetic nervous system. Dr. Hans Selye's general adaptation syndrome (GAS) describes the three stages of stress: alarm, resistance, and exhaustion [1]. When overwhelmed with stress over a long period of time, a person's body adapts by producing excess amounts of cortisol, which suppresses the immune system and increases the body's susceptibility to disease.

Stress can also lead to insomnia, mental illness, heart attacks, and cancer, and the body's efforts to fight stress by increasing cortisol levels may result in heart disease, in which arteries are clogged due to increased fat and cholesterol levels. According to a study by Dr. Carolyn Aldwin, director of the Center for Healthy Aging Research, people who thought they suffered from stress and depression were 48% more likely to die or have a heart attack than people who did not perceive their lives as stressful [3]. Furthermore, prolonged stress is associated with alcohol and drug abuse among people who turn to

these substances for mental and physical relief.

Although prolonged negative stress, or distress, is harmful to the body and mind, positive stress, or eustress, can be beneficial to health. Eustress, such as a homework assignment that is challenging but doable, can act as a counterweight to distress and serve as a guide for stress management. Not only is this moderate amount of stress a form of motivation that drives people toward their goals, but it also contributes to the excitement and nervousness of a situation and promotes a greater sense of satisfaction afterwards [4].

Health problems caused by chronic stress can be prevented by exercising, changing one's diet, and learning how to adapt to pressure [3]. A 30-minute daily aerobic exercise routine, such as biking or swimming, can improve mood and help with weight loss while promoting physical health. Exercise releases endorphins, neurotransmitters that produce a "feel-good" mood and act as stress relievers. Changing one's diet may also help to reduce cortisol and cholesterol levels, which are factors that contribute to cardiovascular disease. Since excessive amounts of stress are detrimental, relaxation is a necessary step in stress relief [1]. Whether by watching TV or doing yoga, stress-reducing activities can promote longevity and a healthy mentality.

Although life has its unpredictable stressful moments, there are ways to combat stress through exercise and relaxation. By educating others about the effects of stress on their health and sharing different methods to help relieve stress, people can lessen the impacts of health risks and illnesses on their lives.

References
[1] Daily Life. The American Institute of Stress website. http://www. Published January 23, 2018. Accessed February 26, 2018.
[2] Sinatra S. What Stress Can Do To Your Body. Heart MD Institute - Dr. Stephen Sinatra's Informational Site. https://heartmdinstitute.com/stress-relief/what-stress-can-do-to-your-body/. Published February 27, 2018. Accessed February 26, 2018.
[3] Neighmond P. Best To Not Sweat The Small Stuff, Because It Could Kill You. Shots: Health News from NPR. sections/health-shots/2014/09/22/349875448/best-to-not-sweat-the-small-stuff-because-it-could-kill-you. Published September 22, 2014. Accessed February 26, 2018.
[4] What is Eustress? True Stress Management. https:// Published October 16, 2017. Accessed February 26, 2018.

Recollection Imperfection: An Alteration of Past Memories By Sara Reed || Art By Seyoung Lee

Sensory memories begin the memory-making process; by storing basic information about the five senses, they allow stimuli to be briefly retained after they are experienced [1]. These accurate, "ultra-short-term" memories decay almost immediately, usually within 200 to 500 milliseconds. Although they are short, sensory memories give the brain enough time to encode experiences into their next form, short-term memories [2].

Short-term memories, which last slightly longer than sensory memories, are also referred to as "scratch-pad" memories. They typically persist on the order of seconds to half a minute and hold small amounts of information, about seven items or fewer. Information that is frequently repeated through a process called rehearsal has a higher chance of becoming a long-term memory, which stores information over a long period of time [2].

The creation of a long-term memory, known as consolidation, requires a physical change in the structure of neurons. During the learning process, neural networks (circuits of neurons in the brain) are strengthened, altered, or created. Neurons communicate electrochemically with one another by releasing neurotransmitters at special junctions called synapses, where circuits of neurons are reinforced. This process of storing memories shields the mind from the sheer magnitude of information confronted on a daily basis and allows only relevant information to be kept [2]. Consolidation relies on a phenomenon called long-term potentiation, which occurs when the same group of neurons fires together so often that the brain strengthens their synaptic connections to one another. The synchronous firing of these neurons prompts molecular pathways that promote their subsequent synaptic activity, demonstrating the brain's ability to rewire itself by strengthening important connections [2].

It is a common misconception that consolidated memories

remain stable over time, but Dr. Karim Nader of McGill University argues that once a memory is consolidated, it remains susceptible to change. The very act of remembering an event can cause the memory of it to change, making it unreliable. While his research so far has been limited to rats, he believes that similar principles may apply to human memory.

If further experimentation does not disprove Nader's hypothesis, reconsolidation research could benefit people with post-traumatic stress disorder (PTSD). PTSD, most commonly diagnosed in war veterans, is caused by a traumatic experience and produces symptoms including flashbacks to the event and anxiety. In a study, Dr. Nader and his colleagues found that fearful experiences are stored in the amygdala and "require protein synthesis in order to be restored or reconsolidated" after they are retrieved. Indeed, by experimentally blocking the production of protein in the amygdala immediately afterwards, "the memory is lost," potentially pointing toward a novel therapy for PTSD [3, 4].

As these hypotheses are explored in future human studies, our understanding of the brain and its ability to retain memories may change dramatically. These discoveries have the potential to redefine the future of psychology by helping advance treatments for patients with mental disorders such as PTSD. And while it may seem odd that recalling a memory can leave it vulnerable to modification, understanding this process will help unlock the doors to the mystery of how the human mind functions.

References
[1] Mohs RC. How Human Memory Works. How Stuff Works. https://science. Published May 8, 2007. Accessed February 20, 2018.
[2] Mastin L. Memory & the Brain. The Human Memory. http://www.human-memory.net/brain.html. Accessed February 11, 2018.
[3] Miller G. How Our Brains Make Memories. Smithsonian. https://www.smithsonianmag.com/science-nature/how-our-brains-make-memories-14466850/. Published May 2010. Accessed February 13, 2018.
[4] McGill University. The Biology of Induced Memory. ScienceDaily. https://www.sciencedaily.com/releases/2002/12/021211083732.htm. Published December 11, 2012. Accessed February 12, 2018.

Hashing: Converting Objects into Numbers
By Daniel Liu // Art by Anna Jeong

Example Situation
Imagine that you have 20 friends you want to invite to a party. You have their names and phone numbers, and you can easily recall each person's number and send them an invitation. Now, imagine that you have one thousand, or maybe even one million contacts. You cannot memorize all of their names and phone numbers, so you store their information on your computer, with each person's name and phone number as a separate entry in an array, a contiguous block of items in a computer's memory. An array behaves like a bookshelf in which a stored object can only be accessed by its index, a whole number (in a zero-indexed array) indicating where that item is located. In order to access an entry in such a large array, you need to know the index of that entry. However, there is no correlation between names and indexes (you cannot directly remember every single person's name and their corresponding index); therefore, you cannot access an entry in the array with only a person's name. The easiest way to solve this problem is to search through each person's information and check whether it matches that of the person you are looking for. With millions of entries, a computer can run this algorithm in seconds. However, with billions or more entries, the computation time can quickly lengthen. How can the speed be increased? One simple method is to sort the array in increasing order, so that it is much faster to search through. Another method involves the use of special "data structures," which store data in different arrangements so that it can be efficiently modified or retrieved. Most data structures yield approximately the same efficiency as the aforementioned sorting method [1].
However, one data structure, known as a hash table, can easily accomplish the task of searching for people’s phone numbers by their names using a technique known as hashing [2], in which different objects are converted into whole number indexes, creating a correlation between the elements and their indexes.
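In Python, for example, the built-in dictionary is exactly such a hash table. A toy sketch of the phone-book problem (the names and numbers here are invented for illustration):

```python
# A Python dict is a hash table: looking up a phone number by name
# takes roughly constant time, no matter how many contacts there are.
contacts = {
    "Alice": "555-0100",   # hypothetical entries
    "Bob": "555-0101",
    "Carol": "555-0102",
}

# Direct lookup by name -- no scan through the whole collection.
print(contacts["Alice"])   # 555-0100

# Compare with the linear search an unsorted array would require:
entries = list(contacts.items())
number = next(num for name, num in entries if name == "Bob")
print(number)              # 555-0101
```

With a dict, the second, scanning approach is never needed; the hash of the name leads straight to the entry.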

Hashing
Hashing has numerous uses, from searching through large volumes of text (DNA sequences, websites, etc.) to cryptography [2]. The basic idea is the same for each application: an object is converted into a whole number that acts as a unique identifier for the item and can be used as its index in an array. A hash function takes the input object and outputs the number, or hash, corresponding to that object. It is designed as a one-way function; converting hashes back into objects is nearly impossible in most cases. That property allows hashing to be

useful when encrypting data. If the same object is used as the input again, the output is the same; it would not make sense for the same object to yield two different hash values. In short, for a hash function to work well, it should map any two different inputs to two different outputs, so that, as far as possible, each input gets its own hash number. Hash functions can create a correlation between the object being accessed and the object's index in an array, which solves the previous problem in which names and indexes are unrelated. For example, if the name "Alice" maps to 3 (using some hash function), then her phone number can be placed in the array at index 3 (Figure 1). Note that the hash function here is just a theoretical construct; different hash functions create different mappings. If Alice's phone number needs to be accessed, the name simply needs to be hashed again, producing the index 3 and consequently her phone number. If another name such as "Bob" is used, it will hash to another number. Putting arrays and hashing together creates the hash table data structure. Accessing a person's phone number is thus very fast; simply calculating the hash for an object provides access to the item at a specific index in the array. The main benefit of a hash table is that the index of an item does not need to be stored; it can instead be calculated every time the item is accessed. When hashing text (or in this case, names), the letters can be converted to numbers: "a" can map to 1, "b" can map to 2, "c" can map to 3, and so on. This is an example of a simple hash function.
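A minimal Python version of this letter-to-number scheme might look like the following. (The function, table size, and phone number are illustrative; this toy function will generally not reproduce the exact mapping in Figure 1.)

```python
def toy_hash(name, table_size):
    """Map a name to an array index by summing letter values
    ('a' -> 1, 'b' -> 2, ...) and wrapping into the table."""
    total = sum(ord(ch) - ord('a') + 1 for ch in name.lower())
    return total % table_size

TABLE_SIZE = 10
table = [None] * TABLE_SIZE

# Store Alice's number at the index her name hashes to.
index = toy_hash("Alice", TABLE_SIZE)
table[index] = "555-0100"

# Retrieving it later just means hashing the name again.
assert table[toy_hash("Alice", TABLE_SIZE)] == "555-0100"
```

Because the index is recomputed from the name on every access, it never has to be stored anywhere.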

To hash more complicated objects, many different properties can be combined to form the index. For example, a hash function for a “food” object may somehow combine its name, taste, and color into one hash value.
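One common way to do this combining, sketched here in Python, is to pack the properties into a tuple and hash the tuple, so that each field contributes to the final value (the "food" fields are hypothetical):

```python
# Combine several properties of an object into one hash value
# by hashing them together as a tuple.
def food_hash(name, taste, color):
    return hash((name, taste, color))

# The same properties always produce the same hash value;
# changing any one property generally produces a different one.
assert food_hash("apple", "sweet", "red") == food_hash("apple", "sweet", "red")
```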

Figure 1: Model of a hashing function

Collisions
So far, there have been no hard limits on the hash numbers; in principle they can be arbitrarily large. In reality, that is not the case. Let's say the output hash number has to be less than 100 (this is an oversimplification of the memory limitations on computers, but it still illustrates the point). Now, let's plug every possible lowercase, three-letter combination into the hash function. Exactly 26³ = 17,576 combinations will be formed. However, since the output has to be a whole number less than 100, 17,576 different "items," or inputs, cannot fit into 100 different "containers," or outputs, without any two items going into the same container; this is more famously known as the pigeonhole principle. As a result, many completely different words will hash to the same number. These hashing "collisions" occur in real computers too: there is a very large, if not infinite, number of possible inputs, while the hash table itself takes up memory, so the hash function's output range cannot be very large; extra memory must be allocated to store all of the items inserted into the hash table. There are a few methods to resolve collisions in a hash table. A straightforward option is to design a hash function to work for one very specific data set; however, this is very inflexible for whoever uses the hash function. In most cases, two intuitive methods, "open addressing" and "chaining," can solve the problem of collisions, though not without slowing down insertion and access operations. By using a good hash function, the number of collisions can be kept low, making such methods very feasible in practice. Both methods are quite simple: open addressing uses another mathematical function to find another available location to place an item, while chaining allows multiple items to be saved at the same index. Python, a popular programming language, uses open addressing and skips indexes in its hash table implementation (dictionary/set) [3]. Other programming languages such as Java (HashMap/HashSet) and C++ (unordered_map/unordered_set) use chaining in their standard implementations of hash tables [4, 5]. Most importantly, the hash function needs to perform well in many respects: in general, it should generate outputs that are as uniformly distributed as possible, in order to keep the frequency of collisions low, all at a quick and efficient rate. These properties are especially important in computer science and other fields in which hash functions are extensively studied.
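Chaining can be sketched in a few lines of Python. (The hash function and keys below are illustrative, not the ones real language implementations use.)

```python
class ChainedHashTable:
    """Toy hash table that resolves collisions by chaining:
    every slot holds a list of (key, value) pairs."""

    def __init__(self, size=100):
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        # Illustrative hash: sum of character codes, wrapped into range.
        return sum(ord(ch) for ch in key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:               # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))    # new key: append to the chain

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

t = ChainedHashTable()
t.put("abc", 1)
t.put("cba", 2)   # same letters, same sum: collides with "abc"
assert t.get("abc") == 1 and t.get("cba") == 2
```

Even though "abc" and "cba" land in the same bucket, both values survive; the chain is simply searched to tell them apart.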

Armed with a hash table, the problem of accessing people's phone numbers by their names can be solved easily and efficiently. What about other uses? Hashing is used in large databases with enormous numbers of entries; emails, names, addresses, and even locations can be hashed. Very large files can also be hashed to verify the integrity of their contents: an initial "checksum" hash value can be generated for a file so that when it is downloaded or transported, the initial checksum can be compared to the checksum generated after the transfer, verifying that no data has been corrupted. Cryptographic hash functions, which must meet stricter requirements than normal hash functions, are frequently found within online currency, or cryptocurrency, algorithms [6]. These secure hash functions can also protect passwords and sensitive information by taking advantage of the one-way property of hashing. Hash functions can compute identifiers for objects to speed up many operations that involve accessing or checking objects of arbitrary size. Although collisions will happen within hash tables, hashing is still a very effective way to boost computational efficiency. Because of this usefulness, hashing appears in many fields that involve computers, and many modern programs rely on it for both efficiency and security. Though the hash function is a relatively old concept, it remains prominent in current and even future technology.
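The checksum idea can be sketched with Python's standard hashlib library (the file contents here are invented):

```python
import hashlib

def checksum(data: bytes) -> str:
    """Return the SHA-256 hash of the data as a hex string."""
    return hashlib.sha256(data).hexdigest()

original = b"important file contents"
before = checksum(original)

# After the file is copied or downloaded, recompute and compare.
received = b"important file contents"
assert checksum(received) == before          # transfer was intact

corrupted = b"important file c0ntents"
assert checksum(corrupted) != before         # corruption is detected
```

Because SHA-256 is a one-way cryptographic hash, matching checksums give strong evidence that the contents are identical, while even a one-byte change produces a completely different digest.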


References
[1] Binary Search. GeeksforGeeks. Published January 16, 2018. Accessed February 20, 2018.
[2] Hashing Set 1 (Introduction). GeeksforGeeks. https://www. Accessed February 22, 2018.
[3] Luce L. Python dictionary implementation. Laurent Luce's Blog. Accessed February 22, 2018.
[4] Nurkiewicz T. HashMap performance improvements in Java 8. Around Java and Concurrency. http://www.nurkiewicz.com/2014/04/hashmap-performance-improvements-in.html. Accessed February 20, 2018.
[5] std::unordered_map::bucket_size. http://www. Accessed February 22, 2018.
[6] Northcutt S. Hash Functions. SANS Technology Institute website. Accessed February 20, 2018.

Expansion Microscopy
By Saeyeon Ju // Art by Richard Li

When the microscope was invented in 1590, humanity was introduced to a whole new world of organisms invisible to the naked eye, allowing scientists to begin exploring the natural world from a molecular perspective. Today, microscopes are utilized in biological research to analyze cell and tissue composition and function [1]. But while prior advancements in microscope technology have often focused on improving magnification and resolution, expansion microscopy seeks to physically enlarge specimens to make their cellular components easier to visualize. For most specimens, scientists prepare and place tissue samples under a microscope to analyze internal structures. However, one of the main challenges in neuroscience is visualizing large 3-D structures such as the brain, which contains more than 100 billion neurons with nanoscale synaptic connections. Neuroengineer Dr. Edward Boyden and colleagues Dr. Fei Chen and Dr. Paul Tillberg at the MIT McGovern Institute for Brain Research recognized this limitation in current microscopy methods [2]. After discussing how to increase the size of the tissue instead of increasing the magnification of microscopes, the team began research on swellable polymers, hydrogels that expand in size without dissolving in water. Their goal was to incorporate these polymers into biological samples to modify them, rather than finding workarounds for microscope limitations [3]. The MIT team's research focuses on a polyelectrolyte gel, a polymer that can absorb a substantial amount of water; specifically, they use sodium polyacrylate, a substance found in diapers, for its absorbency. When the polyacrylate is injected into the tissue, the polymer creates a network of threads that anchor

to molecules within the tissue. As water is added to the sample, the gel expands in volume, pulling the anchored molecules apart while maintaining their relative locations within the tissue [4]. In a previous experiment, the researchers found that when treated experimental and control tissues were scanned with a confocal laser scanning microscope, the shapes of the tissues differed by less than 1%. The team used this minimal distortion between control and modified samples as a starting point for the next problem: expanding brain tissue. First, they treated the brain tissue with chemicals to make it transparent and then added fluorescent molecules so that certain proteins in the sample could anchor to the polyacrylate. After infusing the gel into the brain tissue and adding water, the tissue swelled to four and a half times its original size [3]. With the expansion process complete, the team was able to view the samples' tiny structures much more closely, capturing extremely detailed images of synaptic connections between brain cells. Three years ago, in the paper introducing expansion microscopy, Dr. Boyden and his team demonstrated that it was possible to reach a resolution of 60 nanometers (nm), 42 times smaller than a bacterium. More recently, he has been able to reach a resolution of 25 nm, 10 times the width of a DNA strand, and he describes how light microscopes that previously resolved objects at 300 nm can now be adjusted to resolve objects at about 70 nm [4]. This is because as a biological tissue increases in size, its fine details become easier to view: the effective resolution is roughly the optical resolution divided by the expansion factor (300 nm ÷ 4.5 ≈ 67 nm). While the precision of expansion microscopy does not yet match that of scanning electron microscopy or

transmission electron microscopy, which have resolutions of 5 nm and 1 nm, respectively, expansion microscopy is often preferable, as its equipment is less expensive and less complex than that of electron microscopes [5]. Furthermore, electron microscopes may allow researchers to identify the shapes of molecules, but they do not give information about which types of molecules are in a tissue sample. Different molecules can only be differentiated by labeling them with fluorescent proteins and antibodies, a key step in the expansion microscopy process that cannot be done through electron microscopy [6]. Electron microscopes are also not widely accessible and require more extensive sample preparation [5]. Using expansion microscopy, Dr. Boyden and his team aim to reconstruct complete brain circuits, giving researchers the opportunity to understand the microscopic mechanisms within circuits and observe how communication between cells can lead to changes within the human body [5]. Expansion microscopy could allow doctors and researchers to view and analyze deterioration in the brains of patients with neurodegenerative illnesses such as Parkinson's and Alzheimer's disease [4, 6]. This technology also isn't limited to the brain: expansion microscopy can be used in the rest of the body to differentiate between specific cell types, such as osteocytes and osteoblasts in bone tissue, information crucial to finding treatments for many diseases [5].

Expansion microscopy is a significant step in the difficult task of mapping the wiring of the brain and understanding the functions of its molecules, and it can be applied to visualizing many other parts of the human body, from chromosomes to organelles to blood vessels. As scientists make detailed observations of the neural circuits involved in neurodegenerative diseases, or of the passageways that viruses take to infect other cells, they can use this new technique to prevent the spread of disease or even cure illnesses currently deemed incurable.


References
[1] Jacobson R. How Science Sprung from the Depths of the Disposable Baby Diaper. Public Broadcasting Service website. newshour/science/material-baby-diapers-maymake-brain-easier-study. Published January 15, 2015. Accessed January 28, 2018.
[2] Chen F, Tillberg PW, Boyden ES. Expansion microscopy. Science. 2015;347(6221):543-548. doi:10.1126/science.1260088.
[3] Callaway E. Blown-up brains reveal nanoscale details. Nature. 2015;517(7534):254. doi:10.1038/nature.2015.16667.
[4] Thomas B. Scientists Use Diaper Polymers to See Microscope Specimens. Discover. http:// diaper-polymers-swell-microscope-specimens/. Published January 23, 2015. Accessed January 28, 2015.
[5] Trafton A. Expansion Microscopy Could Enable Better Brain Circuit Mapping. ReliaWire. Published April 19, 2017. Accessed February 4, 2018.
[6] Preston E. Ed Boyden: A Neurobiologist Thinks Big—and Small. Quanta Magazine. www. Published January 18, 2018. Accessed February 11, 2018.


Nanotechnology in Space Exploration
by Sydney Griffin // art by Nathaniel Chen

On April 12, 1961, Russian astronaut Yuri Gagarin became the first person to successfully travel to space. Since then, the world has sent over 2,000 satellites into space, seen 12 people walk on the moon, and lost 18 astronauts during spaceflight. The entirety of humanity's history in space has been about going big and reaching farther, but now, it may be time to go smaller. Nanotechnology is the manipulation of matter on the atomic, molecular, and supramolecular levels, and the possibilities it holds are endless. Advancements in nanotechnology have been used in the fields of medicine, farming, engineering, and now, space exploration. One of these new inventions is NASA's Electronic Nose (ENose) sensor, a device the size of a postage stamp that is capable of detecting harmful toxins in the air. The sensor combines cell phone alert technology and nanosensor array instruments to detect certain chemicals in the vicinity and alert its user by emitting specified frequencies of electromagnetic radiation. The sensors measure concentrations of substances in parts per billion (ppb) by volume and have a response time of under 2 seconds. They have been proven to detect at least 15 hazardous gases, including hydrogen, hydrogen peroxide, nitrogen dioxide, ammonia, chlorine, hydrogen cyanide, hydrazine, methane, benzene, acetone, formaldehyde, dinitrotoluene, and other chemical warfare agents, as well as harmful proton and gamma rays [1]. This selectivity is achieved through the use of multiple sensors overlaid with carbon nanotubes: covalently bound carbon atoms arranged in a chicken-wire pattern and rolled into tubes of no more than ten

atoms in length. All of the sensors have different polymer coatings, as well as doped metals (metals made intentionally impure to modulate their electrical properties). The sensors use absorbate-modulated resistance (in which the presence of a gas causes a change in the resistance of a circuit) together with pattern recognition software to identify the concentrations of specified gases in the air. The ENose was briefly taken into space for testing and, according to Meyya Meyyappan, NASA's chief scientist in exploration technology, was immensely successful and is ready to be deployed [2]. In fact, there is a possibility it could be incorporated into a Mars rover for a 2020 launch to test whether the sensors are compatible with alien atmospheres [3]. By detecting possibly harmful chemicals in space, the ENose can further ensure the safety of astronauts. There is also the possibility that these sensors could be modified to test liquids, eliminating the need for astronauts to send blood and urine samples back to Earth for testing. Currently, astronauts must send down samples so that doctors can check their health. If these sensors are adjusted, they could provide immediate laboratory results for body fluids and allow sick astronauts to receive proper treatment more quickly. Using nanotechnology, we could also produce temperatures inside space suits or shuttles that more closely resemble conditions on Earth. Nanotechnology enables scientists to make windows that change tint in response to temperature, thus controlling the amount of sunlight that enters shuttles [4]. Another place this technology would be beneficial is Venus, toward which both NASA and Roscosmos, the Russian space agency, have launched probes. The surface of Venus is incredibly hot, with temperatures even higher than Mercury's, despite the latter being closer to the sun. With such high temperatures, the ability to provide a cooler environment for astronauts is essential.
When temperatures reach roughly 300 °C, the life of most data-gathering technology is reduced to 80,000 hours, or about 9 years. Venus has a surface temperature of 467 °C, further reducing that lifespan to less than half a year, which is not nearly long enough to gather the necessary data over a long period of time [5, 6, 7]. Heat shields and reflective barriers created with new nanotechnology, using materials such as advanced borides, carbide-99, and indium and antimony tin oxide, can help increase the lifetime of the probes. One of the most important developments from space-age nanotechnology is a lightweight space suit that is more flexible and easier to repair than current suits. The suits would consist of three layers of bio-nano robots that would repair any damage done to the suit. The inner layer of these robots could also respond to physical harm done to the astronaut; if an astronaut gets sick or injured during a spaceflight, the suit could dispense medicine to sustain the astronaut until they reach a medical professional. The second layer would repair any holes or punctures in the suit, continually

protecting the astronaut from the harsh conditions of outer space. The outermost layer would likewise be able to seal any holes or punctures that occur, increasing the lifetime of the suit [8, 9]. One more application of nanotechnology in space exploration is in manufacturing materials for space transportation. It currently costs $10,000 to send a single pound of material into space, but nanotechnology could decrease this cost. Carbon nanotubes have properties far superior to those of construction materials used today. They are 40 times as strong as graphite fibers, have a tensile strength 100 times that of steel, and are only one-sixth the weight of steel, making them cheaper to lift into space. They also conduct electricity better than copper, are great conductors of heat, and can be arranged to form conductors or semiconductors depending on atom placement [9]. The advancements scientists are making in nanotechnology could launch humans further into space exploration than ever thought possible. Nanotechnology in new sensors will make it safer to send astronauts into space; recently developed heat shields and space suits will make it easier to collect information from nearby planets; and carbon nanotubes will increase the efficiency of transporting goods into space. The use of nanotechnology is changing the future, utilizing the power of the small to explore the infinite vastness of outer space.

REFERENCES
[1] Electronic nose sensor. National Aeronautics and Space Administration website. nanosensor_cell_hybrid_instrument.pdf. Accessed February 4, 2018.
[2] Clark S. Nanotechnology Can Launch a New Age of Space Exploration. The Guardian. nanotechnology-can-launch-a-new-age-of-space-exploration. Published April 17, 2012. Accessed February 4, 2018.
[3] Davies A. Smart Wall Uses Nanotechnology to Control Indoor Temperatures. TreeHugger. ravenbrick-smart-solar-windows.html. Published February 5, 2018. Accessed February 8, 2018.
[4] Anthony S. The International Space Station Will Soon Become the Coldest Place in the Known Universe... For Science! ExtremeTech. https:// Published February 3, 2014. Accessed February 8, 2018.
[5] Cofield C. Mission to Venus: NASA and Russia May Explore Hellish Planet Together. Published March 14, 2017. Accessed February 8, 2018.
[6] Effect of Heat on Electronic Devices. Apiste Corporation. Accessed February 12, 2018.
[7] Boysen E. Nanotechnology in Space. Nanotechnology Now. www. Published April 29, 2007. Accessed February 12, 2018.
[8] The Next Giant Leap. National Aeronautics and Space Administration website. nanotech. Published July 27, 2005. Accessed February 12, 2018.
[9] Northon K. NASA's Next Mars Rover Progresses Toward 2020 Launch. National Aeronautics and Space Administration website. press-release/nasas-next-mars-rover-progresses-toward-2020-launch. Published July 15, 2016. Accessed February 12, 2018.

The Future of Energy: Nuclear Fusion

Climate change is one of the greatest threats the environment currently faces. Due to heavy reliance on fossil fuels for energy, humans are releasing an unprecedented amount of carbon dioxide (CO2) into the atmosphere. Although oceans and forests help compensate for some of this excess gas, rapid deforestation and rising ocean temperatures continue to increase the levels of CO2 in the atmosphere. If left unchecked, the earth will experience global warming, which will result in more frequent and intense storms and hurricanes, flooding of coastal cities, ocean acidification, and other environmental disasters. Global warming is already evidenced by events such as Hurricane Harvey, mass coral bleaching, and Arctic ice melting; it is clear that as time passes, its effects will only increase in severity. Since reliance on fossil fuels is detrimental to the planet, there is a pressing demand for alternative energy sources that are more reliable and sustainable. Alternatives such as solar panels, which harness light energy from the sun, and hydroelectric dams, which generate electricity through water-powered turbines, can assist in local energy production; however, they are not large or sustainable enough to completely solve the worldwide energy crisis [1]. Fusion may be the answer to the world's demand for clean energy.


Fusion is the process in which atoms with light nuclei fuse to form heavier nuclei, releasing energy and producing heat that can be harnessed to generate electricity. This phenomenon occurs under conditions of high temperature and pressure and is most commonly seen in stars, which are fueled by fusion reactions between hydrogen atoms. On Earth, we can replicate these conditions by making a plasma (a gas heated to such extreme temperatures that it ionizes, losing electrons) and then confining it in a vacuum container using magnetic fields. Conditions inside these plasma environments are extremely conducive to fusion, since atomic nuclei are brought close together. Scientists use this to their advantage by fusing the nuclei of deuterium and tritium, isotopes of hydrogen with one and two neutrons, respectively, releasing energy and creating helium while ejecting a neutron in the process [2].
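Written out as a nuclear equation, the deuterium-tritium reaction described above is (the 17.6 MeV total and its split between the products are standard textbook figures, not taken from this article):

```latex
% D-T fusion: a helium-4 nucleus and a fast neutron are produced
{}^{2}_{1}\mathrm{H} \;+\; {}^{3}_{1}\mathrm{H} \;\longrightarrow\;
{}^{4}_{2}\mathrm{He}\ (3.5\ \mathrm{MeV}) \;+\; {}^{1}_{0}\mathrm{n}\ (14.1\ \mathrm{MeV})
```

Most of the released energy is carried away as the neutron's kinetic energy, which is what a reactor would ultimately capture as heat.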

by Flora Perlmutter // art by Daniel Kim

Fusion vs. Fission

While fusion combines light atoms to release energy, nuclear fission splits a heavier atom into two smaller ones, likewise releasing binding energy. Binding energy is the energy that holds the neutrons and protons in the nucleus of an atom together; its value depends on the type of atom. Figure 1 shows the binding energy for deuterium, tritium, helium, and uranium. When an element is converted into another element with a higher binding energy per nucleon, through the addition or loss of nucleons, energy is released. Because the binding energy curve peaks in the middle, both fission and fusion, which occur at opposite ends of the curve, can produce elements with higher binding energies than those of the original elements. Although both fusion and fission produce no CO2 and provide a constant source of energy, fusion is the preferred process for energy production, as it is generally safer due to the more stable nature of the light elements involved; fission, in contrast, splits apart heavy elements that are unstable and more dangerous. Since fusion reactions do not use heavily radioactive elements such as uranium, the process can generate large amounts of energy without the radioactive waste that fission creates [3]. Another benefit of fusion is the ease of obtaining its reactants: deuterium is a component of water that can easily be separated out, and tritium can be produced from lithium, a stable and safe element.
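Where does the released energy come from? The products weigh slightly less than the reactants, and the mass difference is released as energy according to E = Δmc². Using standard atomic mass values (textbook figures, not taken from this article), the deuterium-tritium reaction gives:

```latex
\Delta m = (m_{\mathrm{D}} + m_{\mathrm{T}}) - (m_{\mathrm{He}} + m_{\mathrm{n}})
         \approx (2.01410 + 3.01605) - (4.00260 + 1.00866)\ \mathrm{u}
         \approx 0.0189\ \mathrm{u}
```

```latex
E = \Delta m \, c^{2} \approx 0.0189\ \mathrm{u} \times 931.5\ \mathrm{MeV/u}
  \approx 17.6\ \mathrm{MeV}
```

This 17.6 MeV per reaction is released as the kinetic energy of the helium nucleus and the ejected neutron.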

Figure 1: Binding energy graph [4]


Although fusion is a relatively safe way to obtain large amounts of energy from materials that are easy to acquire, it is extremely difficult to engineer a controlled, contained system that can confine plasma long enough, at a high enough temperature and density, for fusion to take place. The tokamak is a leading candidate for a practical fusion reactor that could produce energy on a large scale in the future [5]. This torus-shaped machine is surrounded by a magnetic field and has walls able to withstand both high temperatures and the impact of the neutrons that bounce around inside it at high velocities during the fusion process. However, current tokamaks can only contain high-energy reactions for very short periods of time, and the walls of the machine wear down quickly under the impact of the neutrons. Additionally, controlled fusion cannot currently be achieved economically: to date, no design has produced more energy than is needed to initiate and sustain the reaction, and much progress is required before fusion can become a viable source of energy [6].

THE FUTURE OF FUSION
If sustainable fusion energy becomes a reality, we will have a clean and reliable source of energy that can provide for the entire planet. Fusion runs on abundant, safe, and sustainable materials and, unlike fossil fuels, releases no CO2, helping curb climate change. With fusion we can rely less on rapidly depleting energy sources such as oil and coal and instead have a more accessible source of energy for the foreseeable future. Fusion has the potential to be more efficient than any fossil fuel-consuming energy source currently in use, and the fuel itself (primarily deuterium) exists abundantly in the Earth's oceans: about 1 in 6,500 hydrogen atoms in seawater is deuterium [1]. Since seawater is easier to access and more plentiful than fossil fuels, fusion could potentially supply the world's energy needs for millions of years. With global warming on the rise and conventional fuels declining in supply, fusion offers a novel yet effective answer to the search for more sustainable energy sources.

References
[1] Chen FF. An Indispensable Truth: How Fusion Power Can Save the Planet. New York, NY: Springer; 2011.
[2] What is Fusion? European Joint Undertaking for ITER and the Development of Fusion Energy website. http://fusionforenergy.europa.eu/understandingfusion/. Accessed February 28, 2018.
[3] Nuttall WJ. Fusion as an Energy Source: Challenges and Opportunities. Institute of Physics. file_38224.pdf. Published September 2008. Accessed February 28, 2018.
[4] Brünglinghaus M. Binding energy. European Nuclear Society website. htm. Accessed March 16, 2018.
[5] Tokamak. ITER website. Accessed March 31, 2018.
[6] Nuclear Fusion Power. Nuclear Fusion: WNA - World Nuclear Association website. Accessed March 16, 2018.

INTERVIEW WITH DR. JYLLIAN KEMSLEY
by Jonathan Kuo

Dr. Jyllian Kemsley is currently the executive editor for policy coverage at Chemical & Engineering News (C&EN), a weekly magazine published by the American Chemical Society (ACS). This interview was conducted on April 2, 2018.

Was science something that you were always interested in, or did it develop as you grew up?
I went to college thinking that I wanted to be a lawyer, mainly because I idolized my aunt, who was a lawyer. Both of my parents and stepparents are in the medical field, and you can chalk it up to teenage rebellion if you want, but I knew that I didn't want to do medicine. At the liberal arts college I attended, I started taking chemistry off-sequence and really enjoyed it. The chemistry department had a fantastic faculty and basically turned me into a chemistry major.

How did you find your way from college to becoming an executive editor of C&EN?

Coming out of college, I thought that I wanted to go to graduate school, but I wasn’t certain it was the right path for me. I took a job with a pharmaceutical company, Merck, instead, and that was really interesting because I was working in process research, which is the bridge between basic research and manufacturing. Usually, a drug candidate was starting to go into clinical trials, which involved a lot of procedures that you don’t really learn as an undergraduate chemistry major. We also did stability testing, where we stored drugs for various lengths of time at different temperatures and humidities to track how much they degraded under those conditions. But the reality of the job was that it was about 80-85% HPLC (high pressure liquid chromatography), 15% GC (gas chromatography), and the rest was things like IR (infrared) spectra, titrations, and a few other techniques. Scientifically, I did not find that very interesting, so after two years I left to go to graduate school. Toward the end of graduate school, I was trying to figure out what I wanted to do next. I didn’t want to pursue an academic career any longer, and I didn’t want to go back into industry. And then, a friend of mine who was doing a science policy internship in DC with the American Association for the Advancement of Science called me one night and happened to mention that there were these science writing programs around the country. C&EN is a weekly news magazine, and I’ve been a member for about 25 years. I’d been getting it on my desk for something like seven to eight years at that point, and it never occurred to me to ask, “Who writes this?” And so I started looking at science writing, and I found the National Association of Science Writers (NASW); I was really lucky in that it was sort of the moment when all of the stars lined up for me.
The program that was best for me was the science writing program at the University of California, Santa Cruz (UCSC). I had never done journalism in high school or college, or worked on a paper or a magazine or a publication like yours, so I really needed to learn how to report and write. UCSC gave me those skills, and it was also only one year and relatively inexpensive. The program was immensely valuable for me; it taught me skills that I really needed to have, and it taught me well.

What did you do in the UCSC science writing program?

For three quarters, during fall, winter, and spring, you do concurrent classes and an internship—it was very much a learn-by-doing program—and then the final requirement is a full-time summer internship. I actually pitched an internship to C&EN—they had never had an intern before. I said that I would be willing to work for free if they could fly me out to DC for two weeks to start and get a sense of what the magazine, the people, and the production were like. My husband and I had a house in San José. Even if I had gone to an internship in DC or London and was paid just enough to live on, my husband was still the one covering the mortgage at home. It was a reasonable tradeoff for us, and not only has the internship continued since I inaugurated it back in 2003, it became paid in the following years. I did the internship and then I freelanced for a couple of years and had three kids. I have a 13-year-old daughter and 11-year-old twin sons, and I joined C&EN’s staff full-time in 2012.

And you’ve been there since?

Yes. I spent ten years as a science reporter and was then promoted to become editor of policy coverage last October.

As an executive editor, do you determine what stories are investigated?

Yes. I have three staff members under me. All three of them are very experienced journalists, and I usually rely on them to pitch to me. I do have the role of deciding what they spend their time on, but they can always come to me with topics they’ve thought of writing about. With younger, less-experienced reporters, I would probably have to spend more time on story development. I also manage and edit freelance contributions, which are generally a mixture of ideas that are pitched to me and ideas that I assign. One example that’s come out in our issue today looks at Canada’s proposed budget for the fiscal year that started yesterday. Their government is pretty much increasing funding for science across the board and emphasizing basic research. In that same issue is a report on the US budget bill—the big omnibus bill that was passed just a week and a half ago—where there were increases across the board for research. That was, however, entirely Congress’s doing; President Trump’s proposals were to largely cut science research funding. There were cuts reaching up to nearly 50% for certain programs, so it is an interesting contrast between the two countries. See More: policy/researchFunding/news-US-science-federal-spending/96/i14.

Do you receive more content from your staff or from freelancers?

Per person, I definitely receive more from my staff. One thing that we are trying to do more of is international coverage, rather than being as US-centric as we have been in the past. One of my main goals this year is to identify more international freelancers to bring in more coverage—we’d like to do a better job covering China, India, Japan, South Korea, and South America.

What would you say are the largest challenges of working in your field?

Time, in a couple of different ways. I remember that when I first came on staff, my biggest fear was that I wouldn’t come up with enough story ideas to keep me busy. Yeah, that has never, ever been a problem. There has always been far more that I want to write about, both when I was a reporter and now as the editor, than we have the time to do. One of my struggles right now in terms of US science policy, particularly at the federal level, is that a lot of the action is around environmental regulations and climate change, so it can feel like that’s all we’re writing about. You want to cover what’s going on, but I’m trying to be conscious that there are a lot of other areas where science and policy intersect.

Even though you didn’t have any prior journalistic experience in high school or college, did you still enjoy English?

Yeah, I took AP Lit in high school and liked it. I also did Academic Decathlon. I’m pretty sure I was brought on the team for science and math, but I wound up winning a medal for the essay. I remember when they announced my name, the two coaches turned and were like, “Where did that come from?” In general, I like reporting. I like having written, but not necessarily the writing process itself, especially for long features. I always like having done them, especially when I’m tackling really important topics. I do all our laboratory safety coverage, for example, and I wrote a story last year on graduate student mental health and suicide. Those are really rewarding stories to do, because they have the potential to really make a difference in people’s lives. They can be very draining at the same time, and we’re always very focused on accuracy, which ups the pressure; when you get something wrong, it could possibly mean someone gets injured or killed. I always feel a lot more pressure on those types of stories.

What has been your most memorable experience in your career?

I would say my most memorable stories are the ones on laboratory safety, like the one on mental health that I just mentioned, that can have such an impact. However, for one story, I got to visit the High Explosives Application Facility at the Lawrence Livermore National Laboratory, which is an underground facility where they do research on traditional and improvised explosive devices. Their hallways have lines on the floor, and you have to stay between the lines: this is to prevent people from standing against the walls, because reverberations from explosions could cause injuries. Some labs working with larger detonation energies have zig-zaggy entries so that if there is an explosion, the vibrations have to propagate through all the turns, which dampens them by the time they reach the hallway. There are whiteboards next to the entrances where people have to sign in and out of labs, so if there’s an incident, they know who’s in there. They also arranged for a test detonation to occur while I was there, which was pretty cool. Depending on the size of their test detonations, they have a warning alert system to inform people that a detonation is occurring rather than an actual earthquake, to prevent people from getting hurt. It was a good lesson for me, too; we wind up doing a lot of reporting by phone, and for that particular story it was really valuable—the physicality of the laboratories and how they do this work safely is not something I would have picked up over the phone. See More: articles/89/i29/Examining-Explosives.html.

Photo Credits: Linda Wang/C&EN

Have you ever had a mentor during your career?

There have been several people who have helped me in various capacities, but I don’t think there is anyone I would single out. What I will say is that I’ve had really great editors and managers at C&EN who’ve given me a lot of freedom to write about whatever I want, as long as I can make a case that chemists will find it interesting. This means I have written about laboratory safety, mental health, and explosives, but I’ve also done other stuff. I did a story a couple of years ago on psychedelic medicine, resurrecting research that started in the 1950s and 1960s on things like ecstasy, ketamine, and psilocybin to treat mental health disorders that aren’t responding to other current treatments. I’ve written about lethal injections, drugs, and the Deepwater Horizon oil spill, and it’s a lot of fun.

Do you have any advice for students interested in science writing?

I would recommend joining the National Association of Science Writers, especially while you’re a student, since it’s cheaper. Members receive information about internships and job opportunities, which is a particularly valuable resource. They also have forums where people can ask questions.

Looking for more resources on science writing? Here’s more information from Dr. Kemsley:


INTERVIEW WITH BEILI ZHANG
by Claire Wang

Beili Zhang is a senior formulation scientist at Vertex Pharmaceuticals, a global biotechnology company that invests in scientific innovation to create transformative medicines for people with serious and life-threatening diseases, with a particular focus on cystic fibrosis. In cystic fibrosis, a mutation in an ion transporter causes the production of abnormally viscous mucus in the pancreas, lungs, and other organs. In the pancreas, blockage of enzyme release causes patients to experience problems with digestion, while buildup of mucus in the lungs obstructs breathing and creates an environment that encourages microbial growth. This interview was conducted on April 8, 2018.

Sometimes I work in the lab, which involves writing experimental reports. As a scientist, you should always keep a lab notebook because you have to keep track of what you did so that others can repeat your work. For instance, there are cases where we leave a project unfinished for a long time, or we transfer to a different company. We cannot fake our data because others need to be able to look at our work and reproduce it.

What do you find most interesting about your job?

Can you explain your occupation and some of your daily responsibilities?

At first I was trained to be a chemist, but I’m now called a “formulation scientist” since I switched from general chemistry to pharmaceutical chemistry. I do drug development research by doing experiments with chemicals. Chemicals can exist in different physical states, and formulation scientists manipulate the physical states of chemicals while adding different ingredients to test their effects. Our job is to make drugs stable: stomachs are very acidic, so drugs will naturally be degraded without protection. The drugs that we make will later be on shelves in drug stores, so they also need to be stable at room temperature. Additionally, our body has strong protection mechanisms where it tries to eliminate substances that it does not recognize. Our goal is to get the drugs into the bloodstream so that the blood can carry the drug to the target organ in the body, such as the brain or the kidney. My daily work can be divided into three tasks: finding the best state of the drug, formulating drugs by testing, and computer modeling. When we’re finding the best state of the drug, we need to consider its purpose. Sometimes we want drugs to act quickly, like when we have headaches; in order to do so, we need to make them dissolve faster. In other cases, like for chronic diseases, we don’t necessarily need fast drug activity, so the optimal result would be a drug that is sustainable for a long period of time. Other times, we need a combination of these two qualities. Then, we formulate our drugs by testing different ingredients. As for computer modeling, I put the needed drug property into the system, and we try to predict outcomes or show general trends. We collect animal data, and then we try to connect individual data to the overview of the results. Now that drugs are developing faster, we try to minimize animal testing, even though it is still necessary.
Computer modeling is a recent trend because we are becoming more aware of animal protection. We also try to predict according to previous knowledge.

My job as a formulation scientist is very meaningful: though not directly, we lengthen people’s lives. Most people that have cystic fibrosis can’t even live past their twenties, but with our medication they can live just like normal people. My colleagues and I have spent the past 20 years in the pursuit of medicines that strike at the core of cystic fibrosis. We discovered and developed the first medicines to treat the underlying cause of cystic fibrosis, a rare but life-threatening genetic disease. When I think about all the people that could benefit from our research and when patients come in and show their appreciation, I feel like all my hard work was worth it. I know that I’m contributing to society by making a difference in other people’s lives.

Do you have advice for high school students looking into pursuing a career in medicine or chemistry?

If you’re really interested in the medicinal industry, you need to have a solid foundation from your college studies—obviously a science background and preferably a major in chemistry or biology. Chemistry is indeed the most important course for all students who are looking into careers relating to pharmaceutical research, but you want to be strong in math and physics as well. Those fields can take you deeper into your research topics, which you often can’t do with expertise in chemistry alone. Chemistry actually pays very well, so you get a lot of reward even though you spend extra years in school working hard. But if you’re truly enthusiastic about what you’re doing, you won’t feel tired, because it’s fun.

What is the biggest challenge you’ve faced in your career?

For me, the hardest part is the nonstop learning process. In this field of study, new work gets published every day, and we have to always keep up with the latest advancements.

Has your interest in chemistry changed since high school?

No, my interest in chemistry has never changed, and I’m one of the few lucky ones who chose the right path at the very beginning. A lot of people take some time to discover their true passion. Nowadays, since there is so much noise surrounding you and so much information given to you, you have to be able to listen to yourself and decide what you really like, because that is what will take you further down the road.

Genomics Approaches to Mitochondrial Disease
The Salk Institute for Biological Studies invites interested high school students to attend a lecture on Genomics Approaches to Mitochondrial Disease, given by Vamsi Mootha and hosted by Marc Montminy and Inder Verma. For questions about the event, contact Lina Echeverria at or (858) 453-4100 x1076.

Lecture, Local

7/12/2018 (4:00 p.m.)

SciVid
The event is open to anyone passionate about science and engineering. Those interested must create and submit high-impact, two-minute videos that engage audiences with concepts in science. The videos should teach, explain, amaze, amuse, or fascinate the judges, and should not in any way promote a product, facility, or service. The first-place winner will receive $1,000 in cash, second place $500, third place $300, and the People’s Choice winner $700. For more information on the contest and submission requirements, see

Competition, National

Submit from 9/1/2018 to 10/15/2018

River of Words Youth Art and Poetry River of Words® is a program of The Center for Environmental Literacy and a part of the Kalmanovitz School of Education. Acknowledged pioneers in the field of place-based education, River of Words has been inspiring educators and their students for over twenty years with an innovative blend of science and the arts. One of the program’s most noteworthy events, conducted in affiliation with The Library of Congress Center for the Book, is a free, annual international poetry and art contest for children in kindergarten through twelfth grade. For more information, visit

Competition, National

Submission due 12/1/2018

Winners
- Evaluating the Effects of A211D and A211T Mutants on PKA | Vainavi Viswanath (11), Benjamin Konecny (12)
- Repurposing AZD4547 for Neuroblastoma | Andrea Liu (11)
- Transforming Rod to Cones: An Intein-based gRNA/Split-Cas9 system for Retinal Architecture Reconstruction and Vision Restoration | Daniel Zhang (12)
- It’s Never Too Late! | Dominic Schneider (7)

Honorable Mentions
- Genes Related to Melanoma Displaying the Ultraviolet Signature Mutation | Judy Qin (11)
- Placebo Effect | Raquel Chaljon (8)
- Emotional Cognition in Autism with Virtual Reality! | Mauricio Sosa (8)

common ingredients in instant ramen

Propylene Glycol
This liquid alcohol is used to maintain the crisp texture of the instant noodles.
propylene glycol is also used in:
- tobacco products
- antifreeze

Tertiary Butylhydroquinone
The main ingredients in ramen noodles, like wheat flour, salt, and vegetable oil, are usually preserved for long periods of time with TBHQ.
tertiary butylhydroquinone is also used in:
- perfume
- resins
- lacquers
- biodiesel

Monosodium Glutamate
This sodium salt is naturally present in foods such as tomatoes and potatoes. It’s often used as a flavor enhancer to provide a savory taste to things like soups, canned foods, and the flavor packet in ramen noodles.

Vegetable Oil
The vegetable oil used in instant noodles is often unidentified. While canola and cottonseed oil are unsaturated, palm oil is very high in saturated fat, which may be detrimental to cardiac health and cholesterol levels.

BPA
Many instant noodles come in styrofoam cups that can contain BPA, an endocrine disruptor. That BPA can leach from the cup into the ramen, especially after boiling water is added.

how bad is it really?

top ramen, beef flavored: 190 calories, 7 grams total fat
top ramen, chicken flavored: 190 calories, 7 grams total fat
mcdonald's big mac: 540 calories, 28 grams total fat
mcdonald's chicken nuggets (10): 440 calories, 27 grams total fat
board and brew turkado: 775 calories, 35 grams total fat
board and brew chicken club: 995 calories, 47 grams total fat


china: the country that consumes the most ramen per year

51 meters: the average total length of noodles in a ramen packet

$140: the cost it would take to eat ramen for every meal for an entire year

nissin ramen: currently the largest producer of instant ramen in the world, also the producers of the infamous Top Ramen
founded by Momofuku Ando (1910-2007)
locations: 29 plants in 11 countries
net sales: $3.2 billion per year

ACS – San Diego Local Section The San Diego Local Section of the American Chemical Society is proud to support JOURNYS. Any student in San Diego is welcome to get involved with the ACS – San Diego Local Section. Find us at! Here are just a few of our activities and services:

Chemistry Olympiad The International Chemistry Olympiad competition brings together the world’s most talented high school students to test their knowledge and skills in chemistry. Check out our website to find out how you can participate!

ACS Project SEED This summer internship provides economically disadvantaged high school juniors and seniors with an opportunity to work with scientist-mentors on research projects in local academic, government, and industrial laboratories.

College Planning Are you thinking about studying chemistry in college? Don’t know where to start? Refer to our website to learn what it takes to earn a degree in chemistry, the benefits of finding a mentor, building a professional network, and much more!


EDITOR-IN-CHIEF

Dear Reader,

Welcome to the third and final issue of JOURNYS for the 2017-2018 school year! This is the first time in seven years that we have published three issues in one year, which speaks volumes about the incredible hard work and dedication of the staff. Since all components of JOURNYS, including work on our publication, outreach, and fundraising, are completed outside of school, it is evident that our students are exceptionally driven by a passion for our organization and for our mission—a passion that we strive to continuously nourish in our work.

This issue primarily highlights the importance of technology, old and new, in helping science explore the world around us. From nanotechnology and expansion microscopy to nuclear reactors, our writers have examined scientific endeavors to understand and shape nature from the nanoscale to the macroscale. And here in JOURNYS, we too have begun improving our use of technology by revamping our website and social media outlets, efforts to bolster our online presence and continue making our work accessible to any student, teacher, or supporter who wants to learn more.

One of the core rules of netiquette, or online etiquette, is to “Remember the human.” For us, however, such an adage is not a precautionary reminder; rather, it represents the personal significance JOURNYS has in our hearts. Our members have used our articles for volunteer projects aimed at educating children in hospitals, submitted research that has been recognized as a finalist in the national Broadcom MASTERS competition, and helped our publication win First Place with Special Merit in the 2017 American Scholastic Press Association’s national annual magazine contest. Through JOURNYS, we both have had the opportunity to work with dedicated students on a daily basis, and their hard work and achievements constantly amaze us. Without any doubt, we are proud to have led this organization for the past year.
So, Reader, while you flip through our final issue of the year, we’d like you to remember the human in each of our articles. Behind every article, there is a student who has struggled to break down dense scientific content into more easily digestible information; numerous editors—both students and professional scientists—who have debated word choice and sentence structure and checked the accuracy of scientific claims with a fine-tooth comb; artists who have created graphics in a variety of formats specific to each topic; and designers who have woven the efforts of their peers into cohesive and visually appealing designs. Behind every issue, there are students establishing chapters at schools around the nation, building collaborations with scientific companies and organizations, and setting up fundraisers to finance all of the projects that we execute. Each of our articles reveals the behind-the-scenes story of countless students who have coordinated with each other to make JOURNYS possible. Lastly, we’d like to thank you for your continuous support; Mrs. Rall, our staff advisor, for her never-ending encouragement and guidance; and the San Diego American Chemical Society for their generous sponsorship. We hope that you enjoy this issue and perhaps learn something new about science that you didn’t know before. We have loved being a part of JOURNYS, and we wish the best of luck to our new leadership board next year! Cheers, Jonathan and Stacy



PRESIDENT Jonathan Kuo

ASSISTANT EDITORS-IN-CHIEF Ethan Tan, Colette Chiang, Angela Liu


SECTION EDITORS Minha Kim, Kevin Ren

COORDINATORS Jonathan Lu, Claire Wang, Jacey Yang, Emily Zhang, William Zhang

COPY EDITORS Jonathan Kuo, Stacy Hu



CONTRIBUTING WRITERS Sydney Griffin, Sumin Hwang, Saeyeon Ju, Rachel Lian, Katie Lin, Daniel Liu, Alina Luk, Melba Nuzen, Flora Perlmutter, Sara Reed

DESIGNERS William La, Stacy Hu, Deborah Chen, Anna Jeong, Daniel Kim, Dennis Li, Likith Palabindela, Aaron Sun

GRAPHICS MANAGER Richard Li

GRAPHIC ARTISTS Nathaniel Chen, Anna Jeong, Jeanette Ju, Saeyeon Ju, Daniel Kim, Seyoung Lee, Richard Li, Lesley Moon, Madison Ronchetto

CONTRIBUTING EDITORS Jonathan Kuo, Brian Driscoll, Jessie Gan, Farrah Kaiyom, Kahyun Koh, Eliana Nuñez, Likith Palabindela, Heidi Shen, Arda Ulug, Claire Wang, Briani Zhang

STAFF ADVISOR Mrs. Mary Ann Rall

WEB TEAM Sumin Hwang, Ryan Heo, Emily Zhang

Dr. John Allen (University of Arizona), Mrs. Amy Boardman-Davis (Kaiser Permanente Fresno Medical Center), Mr. Brian Bodas (Torrey Pines HS), Dr. Richard Borges (Universidad de La Laguna), Dr. Alexandra Bortnick (UCSD), Mr. Mark Brubaker (La Costa Canyon HS), Dr. Gang Chen (Sorrento Therapeutics), Mr. Daniel Garcia (UCSD), Ms. Christina Hoong (UCSD), Ms. Samantha Jones (UCSD), Dr. Kelly Jordan-Sciutto (University of Pennsylvania), Ms. Greta Kcomt (Cal State University San Marcos), Dr. Hari Khatuya (Vertex Pharmaceuticals), Dr. Caroline Kumsta (Sanford Burnham Prebys Medical Discovery Institute), Dr. Corinne Lee-Kubli (The Salk Institute), Dr. Tapas Nag (All India Institute of Medical Sciences), Dr. Arye Nehorai (Washington University in St. Louis), Dr. Julia Nussbacher (UCSD), Ms. Chelsea Painter (UCSD), Dr. Kanaga Rajan (UCSD, Sanford Consortium for Regenerative Medicine), Ms. Ariana Remmel (UCSD), Dr. Amy Rommel (The Salk Institute), Dr. Ceren Tumay (Hacettepe University), Dr. Shannon Woodruff (HP Inc.)
