Journal of Youths in Science
Volume 10 Issue 2

11  A STUDY OF BALLET  Katherine Izhikevich
20  KETAMINE TREATMENT FOR DEPRESSION  Jade Nam
30  CRYPTIC ONTICS  Mikella Nuzen


Contact us if you are interested in becoming a new member or starting a chapter, or if you have any questions or comments.
Website: www.journys.org // Email: eic@journys.org
Journal of Youths in Science
Attn: Mary Anne Rall
3710 Del Mar Heights Road
San Diego, CA 92130
1 | JOURNYS | FALL 2019


TABLE OF CONTENTS
Journal of Youths in Science, issue 10.2, fall 2019

03  Correlating Time-Of-Day With Peak Academic Performance Of High School Students  Tanja Gens
08  Beamcats  The Beamcats
11  Defying Physics: A Study of Ballet  Katherine Izhikevich
15  NR2B Subunit of the N-methyl-D-aspartate Receptor in Neuronal Death  Marie Kazibwe
20  Ketamine: A Treatment for Depression  Jade Nam
22  Manipulating the Human Mind  Nathaniel Chen
24  A Real-Time Detection System using Advanced Imaging Techniques to Diagnose Lipohypertrophy in People with Insulin Dependent Diabetes  Rohan Ahluwalia
30  Cryptic Ontics  Mikella Nuzen
33  Study of Convolutional Neural Networks for Early Detection of Diabetic Retinopathy  Rachel Cai
40  A Correlation Between Sun Exposure and Skin Cancer  Sara-Marie Reed
41  Interview with Bradley Fikes  Sua Kim
43  Interview with Heather Buschman  Allison Jung


CORRELATING TIME-OF-DAY WITH PEAK ACADEMIC PERFORMANCE OF HIGH SCHOOL STUDENTS By Tanja Gens // Art by Seyoung Lee

Abstract

We examined the influence of time-of-day on students’ self-assessment of their learning. 196 surveys were completed by a selected population of students from West Valley High School in Fairbanks, Alaska. The survey was conducted with students in four subjects (English 10 Honors, Algebra 1, Physical Science, and World History), each taught by a single teacher over multiple periods. Responses to the nine survey questions reveal that the learning experience of students in three of the four subjects was similar regardless of the time-of-day, with differences appearing only in the hundredths place on a scale of 1 to 5. Students reported higher learning in the English class regardless of the time-of-day, which may be due to the fact that it was an honors class. A majority of the students consider themselves night-persons rather than morning-persons. The study is limited by the modest size of the survey population, the limited number of questions, the lack of open-ended questions in the survey questionnaire, and a lack of information on students’ grades due to protocol.

Introduction

If schools truly intend to maximize student learning, then it is imperative that they consider the impact of time-of-day on student learning outcomes. Being a high schooler myself, I find that it takes me the first period of the day to become fully attentive, that my alertness peaks during periods two and three, that I find it hard to concentrate right after lunch during period four, and that academic fatigue sets in during periods five and six, draining my attention toward the end of the school day. I surveyed randomly selected peers to assess whether my personal experience is unique or whether it is a common phenomenon in my age group. I believe that thoughtful scheduling of classes may result in gains in student performance, and therefore should be seriously considered by school districts during annual planning processes.

A review of current literature reveals that considerable work has already been done in this field. Research documents that time-of-day has a direct correlation with several factors, such as a student’s attention span, alertness, and cognitive abilities, that relate to academic performance [3]. Hansen et al. (2005) further validate these claims by showing that sleep deprivation, particularly in adolescents, causes students to be distracted and unable to focus completely on the subject being taught [2]. My survey was in the same vein as this previous research, as it attempted to further investigate the sleep patterns of my peers and their impact. Besides sleep deprivation, the time spent in school also plays a role: fatigue tends to set in and grow as a student spends more time in school [5]. Pope (2016) found that fatigue can cause a decrease in test scores during the second half of the school day [5]. A student’s circadian rhythm, or body clock, also plays a role in this analysis. Shapiro and Williams (2014) reported that adolescents are expected to be awake and alert during times that are not aligned with their circadian rhythm [6].
An adolescent’s body begins producing melatonin at around 11 pm and continues to produce melatonin until 8 am, peaking at around 7 am [6].


This report was particularly relevant for my study of the students from West Valley High School in Fairbanks, Alaska, since the school starts at 7:45 am, a time which coincides with high melatonin production, rendering it a suboptimal time for maximizing academic achievement. However, the optimal performance of a student may also vary depending on an individual’s body clock and personal time-of-day preferences [3][7].

Methods & Materials

My sample population for this study comprised students attending West Valley High School. I began by obtaining from the counselor’s office a master list of the teachers who taught core classes at West Valley. The core subjects included Global Studies, Science, Math, and English. I filtered this list to exclude any Career Technological Education or Special Education classes taught by these teachers. I further filtered it to retain only semester-one classes, as I conducted my survey during semester one. Then, I culled the list to include only classes taught by the same teacher at multiple times of the day (see Appendix A for the bell schedule). For example, AP Calculus, taught by Mr. Grubis during periods one and three, was considered a valid choice. I ensured that the master list did not include off-campus or online classes, as these could have introduced additional bias. I numbered the remaining classes on the final culled list by core subject to represent clusters, which I then used for randomized cluster sampling. I used a random number generator on my calculator to select the four clusters to be sampled out of a total of 44.

All students within the selected clusters were given a survey (Appendix B) that was administered by the class teacher. The students were requested to complete the survey within the class period. The anonymous survey included nine simple questions to gather basic information about the student’s selected class and time, as well as the student’s perception of their alertness, learning experience, and personal time-of-day preferences. Following the Institutional Review Board’s

protocol, I avoided asking any questions that could potentially trigger a negative reaction, despite the fact that this restricted the information I could collect and the subsequent inferences I could make. For example, I used information pertaining to the ‘Merit’ and ‘Honor’ rolls in lieu of questions about a student’s GPA as a performance indicator. I did not use GPA because the Institutional Review Board protocol for surveying students explicitly prohibits asking students questions regarding GPA unless the information is already published. Four of the nine questions used a five-point Likert scale, with one being “strongly disagree” and five being “strongly agree.” I made this a single-blind experiment by having the teachers proctor the survey and by not revealing to the students that I had designed the experiment.

Narrowing the list from which my sample population was randomly chosen helped to eliminate some potential bias. Choosing only core subjects made the comparison between classes more meaningful, as most of these classes require higher intellectual and academic skills. Selecting classes taught by the same teacher during different periods provided direct control and eliminated the teacher’s style as an extraneous variable in student learning. Ensuring that no online classes were listed restricted the potential bias introduced by students self-studying the subject after school hours. Finally, ensuring that no off-campus classes were surveyed avoided the complexity of interpreting the impact of travel time between classes.
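The randomized cluster-sampling step described above can be sketched in Python. Only the counts (4 clusters selected out of 44) come from the article; the cluster labels and the seed are invented for illustration:

```python
import random

# 44 numbered class clusters, as described in the Methods section.
# The labels are hypothetical; the real clusters were numbered class lists.
clusters = [f"cluster_{i}" for i in range(1, 45)]

random.seed(1)  # fixed seed only so this sketch is reproducible

# Select 4 clusters without replacement, mimicking the calculator's
# random number generator used in the study.
selected = random.sample(clusters, k=4)
print(selected)
```

Sampling whole clusters (every student in a selected class) rather than individual students is what makes this cluster sampling instead of simple random sampling.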

Results

Table 1: Statistics derived from the response data for the statement “I normally learn a lot in this class”: English 10 Honors (Sprankle)

The response to the statement “I normally learn a lot in this class” was rated on a Likert scale of 1 to 5, with 1 being strongly disagree and 5 being strongly agree. The averages and the standard deviation of the responses from English 10 Honors students are presented in Table 1.

                     Period 1 (n=14)   Period 4 (n=15)
Average              4.231             4.200
Standard Deviation   0.439             0.579


Figure 1: Average of the Likert scale response for the statement “I normally learn a lot in this class”: English 10 Honors (Sprankle). Error bars represent +/- 1 standard deviation.

English 10 Honors class averages for agreement with the statement “I normally learn a lot in this class,” based on a 5-point Likert scale, are shown. One standard deviation of the mean is also shown on the bar chart.
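The summary statistics in Tables 1 through 4, and the error-bar comparison used later in the Discussion, amount to a short computation. The per-student responses are not published, so the Likert lists below are invented for illustration:

```python
from statistics import mean, stdev

# Hypothetical 1-5 Likert responses for two class periods
# (the study's actual per-student data are not published).
period_1 = [4, 4, 5, 4, 3, 5, 4, 4]
period_4 = [4, 5, 4, 3, 4, 4, 5, 4]

m1, s1 = mean(period_1), stdev(period_1)  # sample standard deviation
m4, s4 = mean(period_4), stdev(period_4)

# Error bars of +/- 1 standard deviation overlap when each interval
# reaches into the other; overlapping bars were read in this study
# as "no significant difference" between periods.
overlap = (m1 - s1) <= (m4 + s4) and (m4 - s4) <= (m1 + s1)
print(round(m1, 3), round(m4, 3), overlap)
```

The same mean/standard-deviation computation, applied per period, produces each column of Tables 1 through 4.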

Table 2: Statistics derived from the response data for the statement “I normally learn a lot in this class”: Algebra 1 (Hawkins)

                     Period 3 (n=25)   Period 4 (n=28)   Period 6 (n=27)
Average              3.680             3.679             3.667
Standard Deviation   0.852             0.579             0.877

The response to the statement “I normally learn a lot in this class” was rated on a Likert scale of 1 to 5, with 1 being strongly disagree and 5 being strongly agree. The averages and standard deviation of the responses from Algebra 1 students are presented in Table 2.

Figure 2: Average of the Likert scale response for the statement “I normally learn a lot in this class”: Algebra 1 (Hawkins). Error bars represent +/- 1 standard deviation.

Algebra 1 class averages for agreement with the statement “I normally learn a lot in this class,” based on a 5-point Likert scale, are shown. One standard deviation of the mean is also shown on the bar chart.

Table 3: Statistics derived from the response data for the statement “I normally learn a lot in this class”: Physical Science (Bostwick)

                     Period 2 (n=12)   Period 4 (n=19)   Period 6 (n=10)
Average              3.750             3.789             3.700
Standard Deviation   0.754             0.713             0.949

Figure 3: Average of the Likert scale response for the statement “I normally learn a lot in this class”: Physical Science (Bostwick). Error bars represent +/- 1 standard deviation.

The averages and standard deviation of the responses from Physical Science students are presented in Table 3.


Table 4: Statistics derived from the response data for the statement “I normally learn a lot in this class”: World History (Holloway)

                     Period 1 (n=20)   Period 3 (n=26)
Average              2.800             2.154
Standard Deviation   1.196             1.008

The averages and standard deviation of the responses from World History students are presented in Table 4.

Figure 4: Average of the Likert scale response for the statement “I normally learn a lot in this class”: World History (Holloway). Error bars represent +/- 1 standard deviation.

World History class averages for agreement with the statement “I normally learn a lot in this class,” based on a 5-point Likert scale, are shown. One standard deviation of the mean is also shown on the bar chart.

Figure 5: Averages for agreement with the statements “I consider myself to be a night person” and “I consider myself to be a morning person,” by period, based on a 5-point Likert scale, are shown in red and blue, respectively.

The results plotted in Figure 5 show that students in every period are more likely to consider themselves a night person than a morning person. This is based on questions 7 and 8 in the survey (Appendix B).

Figure 6: Average value of awakeness and attentiveness as self-reported by students, by period, based on a Likert scale.

This figure indicates that the awakeness and attentiveness of students is lowest during period 1 and gradually increases until period 4, then gradually decreases again.



Discussion

Careful analysis of the results shows that students’ learning experience and performance did not differ significantly with time-of-day. In all four subject areas, the error bars of the reported data, which represent one standard deviation, overlapped, indicating that there was no significant difference between the values reported. In three of the four subjects, the differences in average values appeared only in the hundredths place (e.g., 3.75 versus 3.79).

The most revealing finding of this study was that most students identified themselves as a “night person” rather than a “morning person.” The causes for this trend merit further research and investigation. Students were consistent in their ratings of individual subjects regardless of time-of-day; however, their learning experience varied from subject to subject. English appears to be ranked the highest, followed by physical science, algebra, and then history. It is possible that the high ranking of English is due to the fact that it was the only honors class surveyed in this study. The question pertaining to how awake and attentive the student felt produced a pattern that conforms with my personal experience. The reported attentiveness of the students increased steadily during the first half of the day, peaked during period 4, and then showed a slight decline towards the end of the school day.

There were several limitations to this study. Not being able to collect information on students’ grade point averages, a more quantitative and comparable indicator of student learning, was the biggest shortcoming. Although there was a 98% return rate on the survey that was administered to nearly 200 students,

the sample size may still not be sufficient to decipher large-scale trends. Administering the survey for the same subject taught by multiple teachers may help to eliminate bias in the results that may arise from an individual instructor’s teaching style. A possible source of error is the potential for misinterpreting a question, or for different students to judge the same question differently. For example, alertness could be defined and perceived differently by different individuals. Lietz (2010) points out that even minor details in the formulation of questions can evoke different responses, influencing the inferences drawn from a survey-based study [4]. Furthermore, a student’s self-assessment of learning can be flawed, or more closely related to their attitude toward the subject than to the gain in content knowledge indicated by grades [1].

To extend this project further, besides sampling a larger population and the same classes taught by multiple teachers, I would also like to explore the confounding causes of alertness, awakeness, and learning experience during different times of the day. For example, in Alaska, seasonal effects on a student’s circadian rhythm and cognition due to extended periods of darkness could influence academic performance. I would also modify my approach and use open-ended questions in my survey. For example, I would extend survey question 5 (Appendix B) and ask the student to justify why they gave that specific score. Such information may provide more insight into the real reasons for differences in self-reported learning in a specific class, and may further help determine whether time of day actually matters to students in an academic environment.

Conclusion

Assuming that the surveyed population in this study is representative of the larger population of high school students in interior Alaska, we conclude that a majority of high school students in interior Alaska identify themselves as more of a night person than a morning person. The survey does not indicate any clear pattern in the influence of time-of-day on self-reported learning experience for students of different subjects. The limited data indicate that learning is likely perceived to be higher in honors classes than in regular classes. The study is limited by the modest size of the survey population, the limited number of questions, and the lack of open-ended questions in the survey questionnaire.

Literature Cited

1. Dunning, D., Heath, C., & Suls, J. M. (2004). Flawed self-assessment: Implications for health, education, and the workplace. Psychological Science in the Public Interest, 5(3), 69-106.
2. Hansen, M., Janssen, I., Schiff, A., Zee, P. C., & Dubocovich, M. L. (2005). The impact of school daily schedule on adolescent sleep. Pediatrics, 115(6), 1555-1561.
3. Hines, C. B. (2004). Time-of-day effects on human performance. Catholic Education: A Journal of Inquiry and Practice, 7(3), 390-413.
4. Lietz, P. (2010). Research into questionnaire design. International Journal of Market Research, 52(2), 249-272.
5. Pope, N. G. (2016). How the time of day affects productivity: Evidence from school schedules. Review of Economics and Statistics, 98(1), 1-11.
6. Shapiro, T. M., & Williams, K. M. (2014). The causal effect of the school day schedule on the academic achievement of adolescents.
7. Wile, A. J., & Shouppe, G. A. (2011). Does time-of-day of instruction impact class achievement? Perspectives in Learning, 12(1), 9.


IMAGE CREDIT: CERN WEBSITE

The Beamcats
ART BY AMY GE

Every year, CERN (the European Organisation for Nuclear Research) hosts a competition called Beamline for Schools (BL4S), which aims to provide an enriching, once-in-a-lifetime experience for aspiring physicists. Although the task is relatively simple (coming up with an experiment given particular resources), the idea of thinking outside the high school curriculum is daunting, and seemingly impossible at times. We, the Beamcats (our team name, an amalgamation of Beamline and our school mascot, the Bearcat), would like to take this opportunity to share our experience in the competition.

In 2017, we began to consider the possibility of an alternative method for cancer treatment using subatomic particles. It is a widely known statistic that about 1 in 2 individuals in the UK will be diagnosed with cancer at least once during their lifetime. It has also been estimated that in the United States, 15,270 people aged 19 and under were diagnosed with cancer in 2017 alone, and 1,790 of them died of the disease. Cancer manifests due to mutations in the host’s DNA caused by both hereditary factors and a multitude of environmental factors, like smoking and ionizing radiation (such as UV light from the sun). The cancerous cells multiply rapidly, which can damage organs and results in the tumors cancer is often associated with. In some cases, the cancerous cells can even spread to other parts of the body, making the disease even more dangerous.

Cancerous cells are more vulnerable than normal cells to damage from ionizing radiation. This is ironic, as ionizing radiation is also one of the factors that leads to the development of cancer. As such, cancer is often treated with radiation therapy, wherein different

forms of radiation, particularly X-rays and gamma rays, are used to kill the cancerous cells. However, this therapy is usually used in conjunction with other forms of treatment; in fact, 48% of breast cancer patients in the US with stage 4 cancer received a combination of radiation and/or chemotherapy in 2013. For one reason or another, all of us had a collective interest in the topic, be it through a personal connection, from classes, or simply out of curiosity.

After scouring past journals and scientific reports, we came across a report from the 1970s that outlined the use of negative pion beams for radiotherapy. However, we were disappointed to find that this research had been abandoned due to its high price tag, as well as the fact that, on paper, this type of therapy seemed quite similar to proton therapy. We believed that if the conceptual viability of pion therapy as an alternative to proton therapy could be demonstrated, it might provide an incentive for research into cheaper generation of pion beams and, as such, help advance methods of treating cancer. We thought this idea would be perfect to explore in our proposal, and that CERN would be an ideal place to further the discontinued research. Therefore, our original proposal consisted of exploring the use of pions as an alternative to conventional radiation therapy.


Negative pions are part of a group of particles called mesons. This means that they are made up of one quark (the bits that make up protons and neutrons) and one antiquark (a quark with its charges flipped). A negative pion in particular has a negative charge (a down quark and an anti-up quark). Since this is the same charge as an electron’s, when one of these particles is captured by a hydrogen nucleus, it replaces the electron in its orbit around the nucleus. Since physical systems move towards the lowest energy state, the pion can be transferred to a heavier atom if it gets close enough that this is a lower energy state. However, since the pion has a mass much greater than that of an electron, its orbit around the nucleus isn’t stable. As such, the orbit deteriorates and the pion is absorbed into the nucleus. This releases around 140 MeV of energy (MeV stands for mega electron volts, a unit of energy more appropriate on the scale of atoms than joules). This release of energy results in the production of ionizing fragments of the nucleus.

This property of the negative pion, being absorbed into the nucleus and producing ionizing fragments when it comes to rest, is what makes it particularly promising for radiation therapy. These fragments can proceed to damage the DNA of the cancerous cells and, as such, destroy them. As mentioned before, cancerous cells are in general more vulnerable than ordinary cells to DNA damage from radiation; this is what makes radiation therapy effective.

In 2017, we decided to go ahead with this idea and sent our proposal and video to CERN. Although our team, the Beamcats, was among those shortlisted for BL4S, we didn’t win. However, we refused to be deterred. All of us were extremely passionate about the subject, and we were determined to pursue this idea because we saw its potential to make a lasting impact.
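To get a rough sense of scale for the ~140 MeV quoted above, a quick unit conversion shows why MeV, not joules, is the natural unit for single-particle physics:

```python
# 1 MeV expressed in joules (from the defined value of the elementary charge).
MEV_TO_JOULE = 1.602176634e-13

energy_mev = 140.0  # energy released per pion absorption, per the article
energy_joule = energy_mev * MEV_TO_JOULE
print(energy_joule)  # about 2.24e-11 J: huge for one atom, tiny on everyday scales
```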
Thus, instead of starting from scratch on another topic, we stuck with the one we already had and pushed to improve it. In 2018, we applied to BL4S again, following the same overarching premise as the previous proposal but with a far more systematic approach. Our first step was to establish the Bragg peak as our method of quantifying results. The Bragg peak is simply a peak on a graph that plots the energy lost by ionizing radiation (alpha particles, protons, pions, etc.) as it passes through a substance (the graph is called a Bragg

Curve). We determined the independent variable to be the depth of the scintillator (the detector that sends a signal when hit by a particle) and the dependent variable to be the charge deposited at the chosen depth. This time around, our proposal was selected for BL4S! We had a little trouble later on with the details of the experiment, but this was not a problem at the proposal level since we had already been selected. The support scientists helped us define the experiment more precisely and even helped improve our proposal on a more practical level.

One of the most important things we realised after we were selected was how important it was that we persevered and weren’t discouraged by failure. It took a great deal of improving upon previous proposals and refining experiments for us to be selected. So, if any of our readers have applied in past competitions with an idea they still believe in, we suggest you continue to work on it! If it is special to you, you can definitely convince CERN of how great your idea truly is.

Once we were selected, our support scientists (Cristovao and Gianfranco) told us that pions could not be used for this experiment, as they would be impossible to identify among all the other particles, which have the same momentum, given an insufficient time-of-flight resolution. Furthermore, it would take a lot of water (the medium we chose to plot our Bragg peak in, due to its abundance in the human body) to absorb the pions at the momenta we could use. Finally, while it is theoretically a good idea to measure the Bragg peak through graphite oxide, as we had originally suggested, it would be hard to change the depth of the graphite oxide, as it is a solid at room temperature and would have to be cut to an appropriate size for every measurement (or combinations of preset sizes used). Since the human body is mostly water, the scientists pointed out that simply measuring the Bragg peak of protons (instead of pions) through water would be more helpful.
We took their advice and modified the experiment. This modification still worked with our initial concept, as it simply added data on proton therapy instead of pion therapy. After this, we were provided with some resources to build the programming foundations that would be essential for data analysis. Since most of us had no significant prior experience with programming, the task was daunting to begin with, but luckily we had scientists and volunteers willing to help us throughout.
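The core of the analysis described above, recording deposited charge at a series of scintillator depths and reading off the Bragg peak as the depth of maximum deposit, can be sketched as follows. The depth and charge values below are invented for illustration; they are not the team’s data:

```python
# Hypothetical measurements: charge deposited (arbitrary units) at each
# scintillator depth. A real Bragg curve rises slowly, then spikes
# sharply just before the particle stops.
depths_cm = [2, 4, 6, 8, 10, 12, 14]
charge = [0.9, 1.0, 1.2, 1.6, 2.8, 1.1, 0.2]

# The Bragg peak is the depth at which the deposited charge is largest.
peak_index = max(range(len(charge)), key=charge.__getitem__)
bragg_depth = depths_cm[peak_index]
print(bragg_depth)  # -> 10
```

In the actual experiment the depth of water was the independent variable and the deposited charge the dependent variable, exactly as in this toy version.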


IMAGE CREDIT: CERN WEBSITE

When we reached CERN, we were greeted by Markus and Sarah. They were our first points of contact and gave us tours of the facilities. Markus also explained the workings of different parts of the LHC. We spent most of the first day visiting different sites in Geneva with them and getting to know the scientists. The next few days were spent entirely on safety training (fire, radiation, etc.). When we began experimenting, we worked in three-and-a-half-hour shifts in the actual control room and spent between three and a half and seven hours analysing the data we were gathering. We were not expected to come into the control room knowing how any of the equipment worked or how to troubleshoot when the experiment went wrong, but we were expected to learn quickly. Cristovao would often simply ask us what we thought when we asked him a question we ought to know the answer to.

In our experiment, we came across a number of unexpected problems, all of which were interesting to explore. For example, at one point, our data contained a lot of noise and even suggested that we had particles moving faster than light. As this is obviously not possible, we checked our logs and realised that we had increased the width of the collimator, the opening that lets particles in, allowing in more particles than necessary. While this gave us more data points, the increase in particles was spuriously triggering the two Time of Flight detectors. This led to apparent faster-than-light speeds and even some negative times. This was the first issue we encountered with the experiment and, as such, we were initially startled. However, as the experiment progressed, we gradually improved at identifying and solving problems.

During the data analysis sessions, we started with sample data, as the first few sets of data we collected were not of the standard required for the experiment. The first skills we learned were plotting Time of Flight graphs and identifying particles.
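The sanity check that exposed the collimator problem can be sketched in a few lines: a measured time of flight implies a speed, and any apparent speed above c, or a negative time, flags a mistriggered event. The path length and times below are invented for illustration:

```python
C = 299_792_458.0  # speed of light, m/s
PATH_M = 30.0      # assumed separation between the two ToF detectors

def is_physical(t_ns: float) -> bool:
    """An event is physical only if the time is positive and the
    implied speed does not exceed the speed of light."""
    if t_ns <= 0:
        return False
    speed = PATH_M / (t_ns * 1e-9)  # convert ns to s
    return speed <= C

# Light would cover 30 m in about 100.07 ns, so 99.8 ns is "too fast"
# and -3.0 ns is a mistrigger; both get flagged.
times_ns = [105.0, 100.2, 99.8, -3.0]
flags = [is_physical(t) for t in times_ns]
print(flags)  # [True, True, False, False]
```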
It was here that we first saw in person why the idea of using pions would not have worked: the peak of the pions was the same as that of the electrons, which made them impossible to separate. As we graduated from the practice data to our experimental data, there were times we struggled with the analysis. However, just as during the experiment, these struggles revealed some interesting possibilities. For example, a few of our Time of Flight plots showed an extra peak which did not seem to come from the flight of any of the particles in the beam. Once again referencing our logs, we realised that this only occurred when the pressure of the Cherenkov detector had been increased. With the help of the support scientists, we understood that this extra peak

came from a particle travelling part of the way as a proton, interacting with the gas in the Cherenkov detector, and producing a kaon (a lighter particle) that completes the journey, thus resulting in a Time of Flight between that of the proton and the kaon. This was a possibility we had never seriously considered before, and it provided us invaluable experience with troubleshooting when the data from an experiment didn’t produce the expected graph. Furthermore, this also drove home the importance of the e-log we kept of every change we made to the experiment’s conditions.

Having now completed the beamline project, we recently received USB sticks from CERN containing all of our data and the virtual machine we need to analyse it. As such, our analysis of this data has continued even though we are not physically at CERN. Upon completing this analysis, our group intends to write up the results and hopefully, with some help from the scientists, have them published.

Our overall experience at CERN taught us that exploring science outside the classroom can open up a plethora of opportunities in research, problem solving, and engaging with the scientific community. Although daunting, it is well worth the effort to go beyond the curriculum, as this allows one to reach depths of knowledge and curiosity that inspire for years to come.

Works Cited
1. “Cancer Risk Statistics.” Cancer Research UK, 12 Sept. 2018, www.cancerresearchuk.org/health-professional/cancer-statistics/risk.
2. “Cancer Statistics.” National Cancer Institute, www.cancer.gov/about-cancer/understanding/statistics.
3. American Cancer Society. Cancer Treatment & Survivorship Facts & Figures 2016-2017. Atlanta: American Cancer Society; 2016.
4. Raju, M. R. Negative Pion Beams for Radiotherapy. lss.fnal.gov/conf/C711204/p.33.pdf.


DEFYING PHYSICS

A STUDY OF BALLET by katherine izhikevich

art by seyoung lee

INTRODUCTION

For centuries, dance has combined art, culture, and entertainment into one cohesive sport. Among many different styles, the most classical dance is arguably ballet. Ballet can be split into two categories: en flat (Figure 1) and en pointe (Figure 2). Though it looks rather simple, en flat is a difficult ballet type that uses a canvas shoe strapped to a dancer’s foot. En pointe is the more physics-defying, complicated-looking style of dance that places a female dancer’s foot inside a wooden, planked shoe which enables her to stand directly on her toes. Although ballet appears to be “physics-defying” at first glance, the reality is far from it. Rather, it is a testament to how dancers take advantage of Newton’s laws. This article covers many components that may seem unexplainable to the common eye; chiefly, ballet’s relationship with friction between the shoe and the floor, as well as the momentum involved in completing turns and jumps.

> Figure 1: En flat

Figure 2: En pointe <

FLAT VERSUS POINTE

As ballet evolved, so did the specific shoes needed to create the desired effect of combining grace with seemingly inhuman skill. This transformation resulted in “pointe” shoes: a tool that increases a dancer’s ability to rise on her toes and an opportunity to multiply the number of turns she can complete. Figure 3 shows a relevé on demi-pointe, a rising of the heels until they are off the ground and only the pad of the foot remains in contact with the floor. Figure 4 shows the same relevé on pointe. This raises the question: how can standing on a wooden block, which challenges the natural anatomy of the human foot, increase the number of turns one can complete? The answer lies in friction.

Figure 3: relevé on demi-pointe

Figure 4: relevé on pointe


Decreasing Friction to Increase the Number of Turns
Compared to flats, pointe shoes decrease the area of surface contact between the material of the shoe and the floor [1]. This reduction introduces a cutback in "rotational traction [a type of friction between two surfaces] that the dancer can utilize" [1], thus allowing a dancer to turn easily. "Low translational traction means the [ballet] shoe tends to slip" [2] enough to create a smooth turn, but not so much that the dancer loses balance and falls. This combination of just the right amount of friction between the shoe and the floor is what allows a dancer to complete over 10 turns on pointe from a single preparation: a feat that seems to contradict what we know to be true about physics when, in reality, it is only possible because of physics.

Friction is also pertinent to the force required behind a turn, where the maximum force the dancer can push onto the shoe is equal to the static friction between the pointe shoe and the dance floor [1]. As the surface area of contact between shoe and floor increases, a "greater initial force" is required of a dancer with a larger pointe shoe platform in order to complete the same number of turns as someone with a smaller platform [1]. A larger platform also magnifies the need for balance, so the dancer's strength must sustain both the initial force and her balance throughout the turn.

However, a decrease in friction also has its drawbacks. Without a counterbalancing force to control the spin, dancers risk slipping off the block and injuring themselves. To avoid slipping, dancers use rosin (a substance also used on string instruments), which, when powdered along the block of the pointe shoe, increases the static coefficient of friction while decreasing the kinetic coefficient of friction [2]. Here the static coefficient describes the shoe's resistance to the onset of sliding against the floor, whereas the kinetic coefficient describes the friction once sliding has begun. Dancers also use splashes of water along the bottoms of their shoes to increase traction with the floor. This practice is taken with caution, for larger quantities of water produce the opposite effect and make the shoe too slippery [1].
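The claim that a larger platform demands a greater force can be made concrete with a standard friction model. For a circular contact patch under uniform pressure, the frictional torque resisting rotation is τ = (2/3)·μ·N·R, which grows linearly with the platform radius R. The sketch below uses this textbook formula with invented values for the friction coefficient, dancer mass, and platform radii; none of these numbers come from the cited thesis.

```python
# Hedged sketch: frictional torque resisting a turn for a circular
# contact patch under uniform pressure. All numeric values are assumed.

MU_S = 0.5      # assumed static friction coefficient, shoe on dance floor
MASS_KG = 50.0  # assumed dancer mass
G = 9.81        # gravitational acceleration, m/s^2

def friction_torque(radius_m: float, mu: float = MU_S, mass: float = MASS_KG) -> float:
    """Resisting torque for a uniformly loaded circular contact patch:
    tau = (2/3) * mu * N * R, with normal force N = m * g."""
    normal_force = mass * G
    return (2.0 / 3.0) * mu * normal_force * radius_m

pointe_platform = friction_torque(0.02)  # ~2 cm pointe platform radius (assumed)
demi_pointe_pad = friction_torque(0.05)  # ~5 cm pad of the foot on demi-pointe (assumed)

print(f"torque on pointe:      {pointe_platform:.2f} N*m")
print(f"torque on demi-pointe: {demi_pointe_pad:.2f} N*m")
# The smaller platform resists rotation with proportionally less torque,
# which is the "decreased rotational traction" the article describes.
```

Because the torque scales with R, the pointe platform, at less than half the radius, resists rotation with less than half the torque of the demi-pointe pad.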

Using Momentum to Turn Indefinitely
The physics behind turning extends beyond the coefficient of friction between the shoe and the floor; it also extends to the momentum behind each turn. For example, Figure 5 depicts a fouetté turn en flat and en pointe, in which the working leg bends in toward the other leg and then swings outwards. Ken Laws, a professor emeritus of physics at Dickinson College, claims the leg "stores momentum" [3]. As the dancer comes off of pointe and pliés while completing fouetté turns, she "regains momentum" with every pause. While keeping the working leg in a constant pattern of rotation, she in turn "saves" some momentum from the preceding turn for the following turn. This results in a cycle of storing momentum in the leg as the dancer pushes out, deviating from her spin axis, and then transferring momentum all over again [3].

Figure 5: fouetté turn en flat and en pointe

Furthermore, dancers conserve a specific type of momentum: angular momentum. Angular momentum remains constant as long as an object rotates in a closed system with no external torque applied [4]. This conservation explains "the angular acceleration of an ice skater as she brings her arms and legs close to the vertical axis of rotation" [4], just as it does for a dancer who brings her leg in during the fouetté turn. A dancer's angular momentum is "her rate of spin multiplied by her moment of inertia" [3]. Thus, if her momentum stays constant but she decreases her moment of inertia by tucking in her leg, she will spin faster. Friction between the pointe shoe and the floor will still prevent her from turning endlessly, such as in a pirouette, where one does not push the leg out but simply holds it in place.
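The relationship Laws describes, angular momentum equal to moment of inertia times rate of spin, can be sketched directly. The moments of inertia and spin rate below are illustrative assumptions, not measurements of a real dancer:

```python
# Sketch of angular momentum conservation during a turn: with no external
# torque, L = I * omega stays constant, so pulling the leg in (smaller I)
# must speed up the spin (larger omega). All numbers are assumed.

def spin_rate_after_tuck(initial_inertia: float,
                         initial_rate: float,
                         final_inertia: float) -> float:
    """Solve I1 * w1 = I2 * w2 for the new spin rate w2."""
    angular_momentum = initial_inertia * initial_rate
    return angular_momentum / final_inertia

I_EXTENDED = 3.0   # kg*m^2 with the leg extended (assumed)
I_TUCKED = 1.5     # kg*m^2 with the leg pulled in (assumed)
OMEGA_START = 2.0  # revolutions per second (assumed)

omega_tucked = spin_rate_after_tuck(I_EXTENDED, OMEGA_START, I_TUCKED)
print(f"spin rate with the leg tucked in: {omega_tucked:.1f} rev/s")  # 4.0 rev/s
```

Halving the moment of inertia doubles the spin rate, which is exactly the skater (and fouetté) effect described above.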


Using Momentum to "Hang in the Air"
Momentum is not just involved in ballet turns; it is equally prevalent in ballet jumps. The grand jeté (Figure 6), for example, is a leap in which the dancer jumps straight into the splits in the air, following an arc from leaving the floor to landing back on it. Dancers coined the term "hang time" [3] to describe the phenomenon of a dancer who can make people believe they are floating. Ken Laws explains that the ballet dancer appears to hang in mid-air because "once the dancer leaves the floor, [they're] like a ballistic missile: [their] center of gravity follows a fixed parabola" [3]. After the legs stretch the most at the vertex, the dancer completes the landing by taking advantage of the "center of gravity's vertical motion" [3], which further extends the appearance of the jump.

Figure 6: grand jeté
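Laws's parabola point can be quantified: for any ballistic trajectory, the fraction of total flight time spent above a fraction f of peak height is √(1 − f), independent of how high the jump is. A short sketch of this (the formula follows from constant-gravity kinematics, not from the cited article):

```python
import math

# For projectile motion, height follows y(t) = H - (g/2) * (t - T/2)**2.
# Solving y(t) >= f * H shows the jumper spends sqrt(1 - f) of the total
# flight time above the fraction f of peak height, whatever H is.

def fraction_of_flight_above(height_fraction: float) -> float:
    """Fraction of flight time spent above `height_fraction` of peak height."""
    return math.sqrt(1.0 - height_fraction)

print(f"time above half of peak height: {fraction_of_flight_above(0.5):.1%}")
# About 70.7% of the flight is spent in the top half of the arc, which is
# why a grand jete seems to "hang" near the vertex.
```

The dancer spends over two thirds of the jump in its top half, so the eye reads the slow motion near the apex as floating.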

Men versus Women: The Counter Movement Jump and the Plié
The grand jeté can be done regardless of whether the dancer is wearing pointe shoes or flat shoes. However, one clear distinction remains: why is the achievable height at the vertex much greater for men than for women who complete the jump? The answer, as all ballet teachers fervently repeat to their protégés, lies in the plié, or the bend of the knees. The way to measure this component is the Counter Movement Jump (CMJ, Figure 7), in which the "jumper starts from an upright standing position, makes a preliminary downward movement by flexing at the knees and hips, then immediately and vigorously extends the knees and hips again to jump vertically up off the ground" [5]. This jump test has been used to contrast results based on gender, among other variables, in order to create the correct training programs for athletes [6]. The CMJ can also be applied to ballet with its many jumps, including single to triple tours en l'air (in which men jump and turn in the air simultaneously).

In a study examining the difference in the achieved height of men and women who completed a CMJ test, it was concluded that men demonstrated a "greater jump height through applying a larger concentric impulse [a muscular contraction that causes muscles to compress] and, thus, [achieved] a greater velocity throughout most of the concentric phase and at take-off. The larger concentric impulse and velocity achieved by men was attributed to them demonstrating a larger [center of mass] displacement during the unweighting/eccentric phase [this phase causes muscles to stretch and lengthen themselves in response to a previous opposing force] of the jump (i.e. greater squat depth), which subsequently enabled them to achieve greater center of mass displacement during the concentric phase of the jump, but with a similar movement time to women" [7]. The study concluded that men were able to gather a larger concentric impulse at the beginning and were thus able to jump higher. In the case of the aforementioned tours en l'air, this explains why men are able to jump high enough to complete the turns in the air.

Figure 7: Counter Movement Jump
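The study's chain of reasoning, larger concentric impulse, higher take-off velocity, greater jump height, can be sketched with basic mechanics: impulse changes momentum (J = m·v) and take-off velocity sets peak height (h = v²/2g). The impulse and mass values below are invented for illustration and do not come from the cited study.

```python
# Sketch linking net concentric impulse to jump height. Assumed values.

G = 9.81  # gravitational acceleration, m/s^2

def jump_height(net_impulse_ns: float, mass_kg: float) -> float:
    """Take-off velocity from impulse-momentum (v = J/m), then peak
    height from projectile kinematics (h = v^2 / 2g)."""
    v_takeoff = net_impulse_ns / mass_kg
    return v_takeoff ** 2 / (2.0 * G)

# The same jumper with a 25% larger concentric impulse jumps ~56% higher,
# since height grows with the square of take-off velocity.
print(f"{jump_height(140.0, 70.0):.2f} m")  # 0.20 m
print(f"{jump_height(175.0, 70.0):.2f} m")  # 0.32 m
```

Because height scales with the square of take-off velocity, even a modest advantage in concentric impulse translates into a visibly higher vertex.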


Future Research
Although there is ample discussion of the physics behind ballet, the athletic science community could benefit from more research on the anatomical aspects of the sport alongside its physics. For example, such research could indicate which stretches would most benefit dancers in building strength or in preventing injury. Based on the limited research presented here, little data focuses on the anatomy of a pointe dancer. Such research could be done with CT scans comparing dancers at different levels, or tracking a single dancer's development as she begins pointework. The ballet community is also largely silent on the potential injuries of pointework, which could be related to physics: too much force, for instance, can cause over-rotation, leading the foot to slip and eventually damaging bone. The discussion of the sciences at work within pointe could give dancers a new vantage point on their dancing and, more importantly, alert them to the damage they should avoid while on pointe, preserving the grace of ballet with only the safest of results.

References
[1] Clifton, G. (2009) The Coefficient of Friction of the Pointe Shoe and Implications for Current Manufacturing Processes. [ebook] New York City: Columbia University. Available at: https://dance.barnard.edu/sites/default/files/inline/clifton_thesis_final.pdf.
[2] Center for Sports Surface Research (Penn State University). (2004) Traction. [online] Available at: https://plantscience.psu.edu/research/centers/ssrc/research/infill/traction.
[3] Kunzig, R. (2008) The Physicist Who Figured Out Ballet. [online] Discover Magazine. Available at: http://discovermagazine.com/2008/the-body/11-the-physicist-who-figured-out-ballet.
[4] Courses.lumenlearning.com. (n.d.) Conservation of Angular Momentum | Boundless Physics. [online] Available at: https://courses.lumenlearning.com/boundless-physics/chapter/conservation-of-angular-momentum/.
[5] Linthorne, N. (2001) Analysis of Standing Vertical Jumps Using a Force Platform. [ebook] London: Brunel University, pp. 1-2. Available at: http://people.brunel.ac.uk/~spstnpl/Publications/VerticalJump(Linthorne).pdf.
[6] McMahon, J.J., Rej, S.J.E., Comfort, P. (2017) Sex Differences in Countermovement Jump Phase Characteristics. Sports (Basel). 5(1):8. doi:10.3390/sports5010008.
[7] Courses.lumenlearning.com. (n.d.) Types of Muscle Contractions: Isotonic and Isometric | Lifetime Fitness and Wellness. [online] Available at: https://courses.lumenlearning.com/fitness/chapter/types-of-muscle-contractions-isotonic-and-isometric/.
Figures:
[1] (2014) http://thedancewearguru.blogspot.com/2014/09/ballet-technique-shoesfull-sole-vs.png
[2] (2019) https://upload.wikimedia.org/wikipedia/commons/thumb/5/50/PointeShoes.jpg/800px-PointeShoes.png
[3] (2006) https://www.flickr.com/photos/bleach226/213766400.png
[4] (2006) https://www.flickr.com/photos/bleach226/213766400.png
[5] (2016) https://casadedanza.files.wordpress.com/2016/12/captura-de-pantalla-2016-12-19-a-las-12-34-47.png
[6] (2016) https://cynthiawoong.com/product/jete-brooch/.png
[7] (2017) http://trackfootballconsortium.com/wp-content/uploads/2017/10/counter-movement.png


NR2B Subunit of the N-methyl-D-aspartate Receptor in Neuronal Death

BY MARIE KAZIBWE // ART BY MIKELLA NUZEN

Abstract

Ischemic stroke occurs when severely reduced blood flow leaves the brain with insufficient oxygen and other nutrients. Following such a stroke, glutamate-induced excitotoxicity, in which an excess of excitatory amino acids is released from depolarized cells, leads to delayed neuronal death. The N-methyl-D-aspartate (NMDA) receptor is critical for normal Central Nervous System (CNS) function and for fundamental excitatory neurotransmission. Ro 25-6981 is confirmed to be a highly potent blocker of the NR2B subunit of the NMDA receptor. This blocker was applied to hippocampal slice cultures after Oxygen Glucose Deprivation (OGD), a stroke simulator, to observe the significance and potential role of the NR2B subunit of the NMDA receptor. Twenty-four hours after the OGD, I measured the fluorescence of the slices. I found that the control slices (no NR2B blockage) showed less fluorescence than the slices to which the blocker had been applied, and that slices with a 1 µM concentration of the blocker showed less fluorescence than slices with a 10 µM concentration; more blockage of the NR2B subunit therefore produced a brighter slice. Since fluorescence is an indicator of cell death, I concluded that the NR2B subunit plays a key role in neuronal death post-stroke. This research is instrumental in understanding stroke at the cellular level, which may eventually lead to applicable methods for reducing post-stroke, ischemia-induced, delayed neuronal death.

1. Introduction

1.1 Ischemic Stroke

Ischemic strokes make up 87% of all strokes, making them the most common type of stroke. During the stroke, ATP levels in the core (the site of the stroke) drop 85% below average. Most cells in the core die through necrotic cell death as a result of anoxia (absence of oxygen) and hypoglycemia (low blood sugar) (Li et al., 2012). Necrosis, or premature cell death, in the core is accompanied by excitotoxic cell damage and glutamate release. The penumbral cells that surround the core are subject to excessive excitatory amino acid release from depolarized cells in the core, a process known as excitotoxicity. Excitotoxicity has the potential to damage or kill neurons and is a cause of neuronal death post-ischemia.

1.2 The N-methyl-D-aspartate Receptor
Glutamate is the major excitatory neurotransmitter in the mammalian CNS (Meldrum et al., 2000). Glutamate targets three classes of glutamate-gated ion channels: α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors, kainate receptors, and the N-methyl-D-aspartate (NMDA) receptor. The NMDA receptor is critical for both normal CNS function and excitatory neurotransmission (Lai et al., 2002). The NMDA receptor is activated by glutamate and glycine binding, a key element required to open the ion channel and permit calcium entry. This receptor plays a large role in excitotoxicity by being excessively activated by an uncontrolled increase in extracellular glutamate.

Figure 1. Diagram of excitotoxicity as a result of excessive Ca2+ influx across a membrane.


The influx of glutamate admits more Ca2+, activating cytotoxic intracellular pathways. Enzymes including phospholipases, endonucleases, and proteases are triggered and damage cell structures (such as the cytoskeleton), the plasma membrane, and DNA. This process, known as excitotoxicity, is the leading cause of neuronal death after ischemic stroke. The receptor is composed of NR1 subunits together with one of four NR2 subunit types: NR2A, NR2B, NR2C, or NR2D. NR2 subunits have distinct electrophysiological and pharmacological properties that can influence the temporal and spatial distributions of the receptor. These differing properties make some subunits more targetable than others for neuroprotection post-stroke.

1.3 NR2B Subunit
Several studies have reported that the NR2B subunit is responsible for glutamate-mediated neuronal survival and death (Yu et al., 2018). Most death-signaling pathways, such as that of Death-Associated Protein Kinase 1 (DAPK1), are activated by receptors containing the NR2B subunit. Stimulation of the subunit mediates neuronal death by activating the neuronal death-signaling complex (NDC) associated with these receptors.

1.4 Ro 25-6981
Ro 25-6981 is an activity-dependent, highly potent blocker of NMDA receptors containing the NR2B subunit (Lynch et al., 2001). Ro 25-6981 was confirmed as NR2B-selective in a study by Dr. Fisher that characterized the interaction of Ro 25-6981 with NMDA receptors in a variety of in-vitro tests.

1.5 Oxygen Glucose Deprivation
Oxygen Glucose Deprivation (OGD) is an in-vitro model for stroke (Tasca et al., 2014). In this model, slice cultures are placed in a glucose-free medium and then moved into a deoxygenated atmosphere to imitate the loss of oxygen and glucose to the brain in ischemic stroke. This model is widely used in ischemic stroke studies.

1.6 Statement of Purpose
The role of the NR2B subunit, its specific signaling pathways, and its effect on ischemia-induced neuronal death are currently unknown. I seek to assess the effect of blocking the NR2B subunit on ischemia-induced neuronal death.

2. Methods

2.1 Cell Culture

My mentor dissected the two hippocampi from euthanized Sprague Dawley rats and cut them into smaller slices using the Siskiyou tissue slicer. Next, three hippocampal slices were placed onto each semi-porous insert of a 6-well plate. The semi-porous nature of the insert allows the media to diffuse into the hippocampal slice. I prepared each well of the plate with 10 mL of Fetal Bovine Serum (FBS) to provide the cultures with nutrients necessary for survival. The plates were placed into an incubator maintained at 32 degrees Celsius, so as to closely match the internal body heat of a Sprague Dawley rat, and remained there for 7 days.

2.2 Oxygen Glucose Deprivation (OGD) Methodology
Seven days after the dissection, I prepared a six-well plate with 10 mL of a saline solution in each well to serve as the media for the slices during the OGD period. I then took the cultures out of the incubator and transferred the culture plate inserts from the 6-well culture plate with FBS into the new 6-well plate with the saline solution. I placed the 6-well plate with saline media into the hypoxic OGD chamber located inside the incubator, to mimic the hypoxia of ischemic stroke. Cultures remained in the OGD chamber for 90 minutes.

Figure 2. Diagram of cell culture materials used in experiment



2.3 Post-Oxygen Glucose Deprivation
During the 90-minute OGD period, I prepared new media for the cultures to be placed in post-OGD. Two wells of the post-OGD plate contained 10 mL of pure FBS (the control group), another two wells contained 10 mL of FBS with 1 µM of Ro 25-6981, and the last two wells contained 10 mL of FBS with 10 µM of Ro 25-6981. I took the cultures out of the OGD chamber, removed the culture plate insert from the saline media plate, and placed the insert into the newly created media. Four trials of this experiment were done, allowing for imaging of 24 hippocampal slices for each concentration of Ro 25-6981.

Figure 3. Well-Plate diagram of media used post-OGD

2.4 Imaging
Exactly 24 hours after the 90-minute OGD period, I pipetted a drop of propidium iodide (PI) onto each individual hippocampal slice. Ten minutes after the PI was placed onto the slices, I removed each culture plate insert from its well and placed it under a microscope set to red fluorescence in order to measure the fluorescence of the slices, a measure of cell death. I took pictures of the slices and measured the levels of fluorescence in the cultures using ImageJ.

2.5 Analysis
To analyze the results of my project, I calculated the difference in fluorescence intensity between the slice and the background to find an accurate measure of fluorescence. A one-way ANOVA test (Social Science Statistics) was run to measure the significance of the data collected.
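As a sketch of that analysis step, here is a minimal background subtraction followed by a hand-rolled one-way ANOVA F-ratio. All intensity values are invented for illustration; the study used the Social Science Statistics calculator, not this code.

```python
# Background-subtracted fluorescence per slice, then a one-way ANOVA
# F-ratio across treatment groups. All measurements below are invented.

def net_fluorescence(slice_intensity: float, background: float) -> float:
    """Fluorescence of a slice minus its image background."""
    return slice_intensity - background

def one_way_anova_f(*groups):
    """F-ratio: between-group mean square over within-group mean square."""
    all_values = [x for g in groups for x in g]
    grand_mean = sum(all_values) / len(all_values)
    k, n = len(groups), len(all_values)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented (slice, background) intensity pairs for the three groups.
control = [net_fluorescence(s, b) for s, b in [(60, 20), (58, 21), (65, 22)]]
ro_1um = [net_fluorescence(s, b) for s, b in [(80, 20), (85, 23), (78, 19)]]
ro_10um = [net_fluorescence(s, b) for s, b in [(95, 21), (101, 24), (98, 20)]]

print(f"F = {one_way_anova_f(control, ro_1um, ro_10um):.1f}")
# A large F (relative to the F-distribution's critical value for these
# group and sample sizes) corresponds to a small p-value, as in the
# study's CA1 result (F = 7.317, p = 0.0027).
```

The background subtraction corrects for uneven illumination across images, and the F-ratio then asks whether the variation between treatment groups exceeds the variation within them.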

3. Results

3.1 CA1 Region of the Hippocampus

In the CA1 region of the hippocampus, the 1 µM group's fluorescence intensity was 24.506 units higher than the control group's, indicating a clear increase in cell death. An ANOVA test was run to test the significance of these findings by analyzing the differences between the control, 1 µM, and 10 µM groups. The F-ratio was 7.31704 and the p-value was 0.002675. Because this p-value is less than 0.05, the null hypothesis was rejected, and the data was therefore confirmed as significant.

3.2 CA3 Region of the Hippocampus
The CA3 region showed a nearly identical trend to the CA1 region: the hippocampal slices with 1 µM of Ro 25-6981 in their post-OGD media fluoresced 20.215 units more than the CA3 control group, and the slices with 10 µM of Ro 25-6981 fluoresced 13.543 units more than the 1 µM slices, further supporting the hypothesis that increased blockage of the NR2B subunit increases cell death post-ischemia. The same ANOVA test was used to calculate the significance of these findings by analyzing the differences between the groups.


3.3 DG Region of the Hippocampus
The 1 µM Ro 25-6981 hippocampal slices fluoresced 15.183 units more than the control group, and the 10 µM slices fluoresced 14.834 units more than the 1 µM slices. The same ANOVA test was used to calculate the significance of these findings by analyzing the differences between the groups.

Figure 6. Graph of fluorescence of cells in the DG region of the hippocampus

4. Discussion
In my research, I found that blocking the NR2B subunit of the NMDA receptor increased cell death post-ischemia. In all three regions of the hippocampus, the use of Ro 25-6981 to block the NR2B subunit after ischemia led to a rise in cell death; the trend in the data is illustrated in Figure 7. My research also shows that increased blockage of the subunit increased the amount of cell death: in each region of the hippocampus, the 10 µM Ro 25-6981 slices fluoresced more than the 1 µM slices. This suggests that, in this project, increased blockage of the NR2B subunit caused the increased neuronal death, and that the subunit plays a crucial role in pathways related to neuroprotection and ischemia. This research is clinically relevant because it indicates that the NR2B subunit is involved in the restoration of cells post-ischemia; it can be used to further research on the importance of the NMDA receptor and to investigate the application of these findings to humans. While my research produced significant findings, limitations such as the number of trials could have affected the results. In the future, research on the molecular pathways of the NR2B subunit could support a more comprehensive study of the role of the NR2B subunit of the NMDA receptor.

Figure 7. Graph of the effect of various Ro 25-6981 concentrations on fluorescence intensity of the hippocampus. This graph shows the measured fluorescence of the hippocampal cells minus the measured fluorescence of the background for each region of the hippocampus at each Ro 25-6981 concentration.



Figure 8. Image of a hippocampal slice after 90-minute OGD

Figure 9. Image of a hippocampal slice after 90-minute OGD and media containing FBS + 1 µM Ro 25-6981

Figure 10. Image of 2 hippocampal slices after 90-minute OGD and media containing FBS + 10 µM Ro 25-6981

*Pictures were taken 24 hours after the start of the OGD period. Propidium iodide was dropped on the slices 10 minutes before the pictures were taken in order to view fluorescence. (Credit: student researcher)

5. Conclusion
The goal of this research was to investigate the role of the NR2B subunit in ischemia-induced neuronal death. I hypothesized that blockage of the NR2B subunit would lead to an increase in ischemia-induced neuronal death due to its hypothesized role in signaling pathways of neuronal death. To test the hypothesis, I used OGD to induce ischemia in Sprague Dawley hippocampal slices. After 90 minutes in the OGD chamber, the slices were placed into either pure FBS or one of two Ro 25-6981 concentrations: 1 µM or 10 µM. Fluorescence, a measure of cell death, was measured in each region of the hippocampus 24 hours after the induced ischemia. I found that increased blockage of the NR2B subunit of the NMDA receptor led to an increase in neuronal death, and that a higher concentration of Ro 25-6981 led to a higher amount of ischemia-induced neuronal death. The effects and implications of blocking the NR2B subunit of the NMDA receptor were unknown before this research, and this information can lead to greater knowledge of receptor-mediated pathways of delayed neuronal death post-ischemia. This research, paired with molecular studies of NMDA receptor ischemia pathways, can ultimately lead to increased knowledge of ischemia pathways and the development of neuroprotection post-ischemia.

References
1. Ha JS, Lee C-S, Maeng J-S, Kwon K-S, Park SS. Chronic glutamate toxicity in mouse cortical neuron culture. Brain Research. 2009;1273:138-143. doi:10.1016/j.brainres.2009.03.050.
2. Li V, Bi X, Szelemej P, Kong J. Delayed Neuronal Death in Ischemic Stroke: Molecular Pathways. Advances in the Preclinical Study of Ischemic Stroke. March 2012. doi:10.5772/32850.
3. Martin HG, Wang YT. Blocking the Deadly Effects of the NMDA Receptor in Stroke. Cell. 2010;140(2):174-176. doi:10.1016/j.cell.2010.01.014.
4. Meldrum BS. Glutamate as a Neurotransmitter in the Brain: Review of Physiology and Pathology. The Journal of Nutrition. 2000;130(4). doi:10.1093/jn/130.4.1007s.
5. Lynch DR, Shim SS, Seifert KM, et al. Pharmacological characterization of interactions of RO 25-6981 with the NR2B subunit. European Journal of Pharmacology. 2001;416(3):185-195. doi:10.1016/s0014-2999(01)00868-8.
6. Stanojevic M. 8 Important Roles of Glutamate Why It's Bad in Excess. Selfhacked. https://selhacked.com/blog/glutamate/#Health_Benefits_of_Glutamate. Published February 21, 2019. Accessed February 14, 2019.
7. Tasca CI, Dal-Cim T, Cimarosti H. In Vitro Oxygen-Glucose Deprivation to Study Ischemic Cell Death. Methods in Molecular Biology: Neuronal Cell Death. 2014:197-210. doi:10.1007/978-1-4939-2152-2_15.
8. Yu A, Lau AY. Glutamate and Glycine Binding to the NMDA Receptor. Structure. 2018;26(7). doi:10.1016/j.str.2018.05.004.



Ketamine: A Treatment for Depression

BY JADE NAM

Introduction: Depression
Characterized by persistent sad, anxious, or "empty" feelings, depression affects more than 3 million people in the U.S. Clinical depression is a serious mental illness; it affects almost every aspect of a patient's life, such as diet and sleep schedules, and often hinders daily activities [1]. Depression may be caused by an imbalance of the hormones that regulate people's moods and the ways in which they perceive the world. Countless chemicals work with the nerve cells, and the ultimate goal of treating depression is to regulate these chemicals and, with them, people's moods [2].

ART BY LESLEY MOON

What is Ketamine?
Ketamine is a type of dissociative anesthetic, also known as a hallucinogen. Like Ecstasy, ketamine is considered a "party drug"; however, in the 1970s, the FDA approved ketamine for medical uses. It is now used as an anesthetic during surgical procedures because it induces a feeling of detachment from sensations and surroundings. With a medical professional's prescription, ketamine can be used to alleviate patients' depression [3].

How Does Ketamine Work?
Many studies have tried to explain how ketamine treats the symptoms of depression so quickly, but further research is needed to fully understand how it works. Two studies examine two different ways in which ketamine may alleviate depression:

Reducing Bursts in the LHb
This study, led by Yan Yang, a professor at the Chinese Academy of Sciences, focuses on the relationship between the lateral habenula (LHb) and depression. The LHb is a part of the brain that mediates communication between the forebrain, midbrain, and hindbrain. During the study, the researchers noticed that rats with depressive behaviors experienced rapid bursts in their LHb. These bursts occur



when LHb neurons fire in quick succession. When the researchers provoked the rats' LHb to burst more rapidly, the rats' depressive behaviors increased. Another study, led by Yihui Cui, a professor at Zhejiang University, discovered that a protein called Kir4.1 might be responsible for the bursts in the LHb. Levels of Kir4.1 were higher among the rats that showed behaviors related to depression, whereas blocking Kir4.1 reduced the behaviors. Ketamine silenced the bursts in the LHb within minutes; fluoxetine hydrochloride (a.k.a. Prozac), a commonly used antidepressant, was unable to silence the bursts in such a short amount of time [4, 5].

Activating G Proteins
G proteins help pass extracellular signals to the inside of a cell. In one study, Dr. Rasenick, the co-founder of PAX Neuroscience Inc., discovered that people with depression have higher numbers of inactivated G proteins in lipid rafts. Lipid rafts are part of the cell membrane, and they play an important role in transferring signals. Inactivated G proteins in the lipid rafts could be responsible for some depressive behaviors, as they relay messages that reduce communication between brain cells. To activate the G proteins, Selective Serotonin Reuptake Inhibitors (SSRIs) accumulate on the lipid rafts and push the G proteins off of them. When G proteins are pushed off and activated, they relay messages that increase neural signaling, which is known to reduce the symptoms of depression. SSRIs take a few days to complete this process; when ketamine was used instead, the process took only 15 minutes [6].

Benefits of Ketamine:
Patients treated with ketamine felt that their suicidal thoughts decreased, as did their recurring sadness. The largest positive changes were seen in areas such as listlessness, sadness, and lack of concentration, and 71% of patients treated with ketamine showed positive changes. Ketamine is seen as an effective antidepressant because of how quickly it acts to alleviate symptoms of depression. In addition, it is effective in patients who have been resistant to other standard antidepressants [7].

Side Effects of Ketamine:
When patients were treated with ketamine, several reported psychotic symptoms, such as delusions and hallucinations. Others reported dissociative symptoms, including disconnected sensation and "out of body" experiences. Other side effects include drowsiness, blurred vision, and increased heart rate and blood pressure. 16.7% of the patients experienced side effects that

impaired their functioning. Many other patients, including both those who responded positively to the treatment and those who did not show a significant response, experienced mild side effects [7]. One of the most dangerous aspects of ketamine treatment is its potential for abuse. Ketamine's benefits are striking, and it could help many people with severe clinical depression; nevertheless, the fact that ketamine is a highly addictive drug cannot be ignored. In addition, the long-term effects of ketamine are unclear, and further research is needed to determine the full effects ketamine has on patients [8].

Conclusion:
With additional research into its long-term effects and into ways of reducing addiction, ketamine has the potential to become one of the most effective treatments for depression, especially for patients who are resistant to many of the standard treatments. Such patients would be able to receive low-risk treatments that quickly alleviate depressive symptoms.

References
[1] "Depression." NIMH. The National Institute of Mental Health, n.d. Web. 25 July 2018. https://www.nimh.nih.gov/health/topics/depression/index.shtml
[2] "What Causes Depression?" Harvard Medical School. Harvard Health Publishing, 11 April 2017. Web. 28 December 2018. https://www.health.harvard.edu/mind-and-mood/what-causes-depression
[3] Davis, Kathleen. "What Are the Uses of Ketamine?" Medical News Today. Healthline Media UK Ltd, 12 October 2017. Web. 25 July 2018. https://www.medicalnewstoday.com/articles/302663.php
[4] Makin, Simon. "Getting the Inside Dope on Ketamine's Mysterious Ability to Rapidly Relieve Depression." Scientific American. Springer Nature America, Inc., 2 March 2018. Web. 26 December 2018. https://www.scientificamerican.com/article/getting-the-inside-dope-on-ketamine-rsquo-s-mysterious-ability-to-rapidly-relieve-depression
[5] Yang, Yan, Wang, Hao, and Hu, Hailan. "Lateral Habenula in the Pathophysiology of Depression." Science Direct. Elsevier Ltd., 23 November 2017. Web. 18 January 2019. https://www.sciencedirect.com/science/article/pii/S0959438817302908
[6] Newman, Tim. "How Does Ketamine Relieve Depression So Quickly?" Medical News Today. Healthline Media UK Ltd, 25 June 2018. Web. 19 September 2018. https://www.medicalnewstoday.com/articles/322233.php
[7] Tracy, Natasha. "What Are the Side Effects of Ketamine For Depression?" Healthy Place. Healthy Place, 20 September 2017. Web. 29 December 2018. https://www.healthyplace.com/depression/depression-treatment/what-are-the-side-effects-of-ketamine-for-depression
[8] Brunk, Doug. "Long Term Effects of Ketamine Uncertain." MDedge. Clinical Psychiatry News, 16 February 2018. Web. 30 December 2018. https://www.mdedge.com/psychiatry/article/158863/depression/long-term-effects-ketamine-uncertain


Manipulating the Human Mind
by Nathaniel Chen // Art by Brian Cheng

Walking into a room piled with hundreds of research papers and textbooks, I greeted one of the head bioengineering professors at UCSD. With a long history in bioengineering, he had pioneered the creation of remote insulin detectors and worked on several other large projects. I had the opportunity to help one of his newer teams, which was in the process of developing a portable transcranial magnetic stimulation device. We sat across from each other, and he began by telling me a scenario about a mentally dysfunctional teenager*. Let’s call him Jeff for now. Jeff is first seen sitting at a dinner table - not really sitting, but jumping out of his chair and throwing his food while screaming, because he cannot mentally handle the sight and smell of so much food. His mother runs to him and hugs him tightly, finally calming him down. But he cannot talk to anyone at the dinner table, nor has he ever spoken to anyone coherently in his whole life; he can only express his ideas through moans and grunts. A VCR (video recorder) has been set up across the room by scientists to capture all of this. Why? Because he is one of the first test patients about to undergo a new experimental TMS treatment. *highly autistic

So what is transcranial magnetic stimulation (TMS) in the first place? It is a device that uses high-powered magnets to manipulate processes in the mind. It was invented a decade after magnetic resonance imaging (MRI), as a sort of derivative. MRI used colossal scanners to map the brain, built around magnets powerful enough to suck a metal chair across the room into the imaging chamber [9], and certainly strong enough to tear any metal implants out of a human. So an alternative was developed with smaller, more concentrated magnets. Instead of lying down in an MRI machine for brain scans, patients scanned with TMS could sit comfortably in a chair while doctors placed one or two solenoids, roughly 6 cm in diameter, on top of the patient’s head. The brain was then imaged to find neurological diseases such as stroke, multiple sclerosis, and amyotrophic lateral sclerosis. Scientists expected results from TMS scans to be roughly comparable to MRI, albeit somewhat weaker. Although tests generally went according to plan, there were strange side effects. Patients were seen having delusions and seizures after being scanned, and those scanned longer by TMS came out acting strange, almost as different people than when they came in. This was very unexpected. After much-needed research, scientists and doctors came to a general conclusion: while TMS pulses magnetic fields into the brain to scan it, the magnets inadvertently cause three things to happen. First, they disrupt electrical signals on the surface of the cranium. With trillions of ionized neurotransmitters and action potentials running through the brain at any given time, the magnets of a TMS device are uniform, yet localized enough to cause a specific electrical shift in a cluster of neurons, disrupting the way neurons send information.
Second, they cause certain neurons to release neurotransmitters, including dopamine, one of the body’s naturally produced “pleasure drugs”. Third, the magnets alter the flow of cerebrospinal fluid in the brain. When turned on, the magnetic fields drag ions within the fluid, and thus the whole current, along faster, slower, or in a different direction. This physical alteration in cranial fluid dynamics then manifests as a mental alteration. What is remarkable is that all of this is happening at a depth of only a few


centimeters below the skull. The magnet is not strong enough to reach below the surface of the brain, yet it somehow greatly influences the brain at all depths. There had to be some way to utilize this potential brain-altering property. So, to find uses for it, pioneering scientists such as Alvaro Pascual-Leone began testing TMS on patients with disorders ranging from substance addiction to depression. After fine-tuning coil designs and application locations, now focused on the dorsolateral prefrontal cortex (in charge of many motor decisions), TMS treatment produced highly positive results. It could rehabilitate drug-dependent patients and relieve the chronically depressed with around 75% efficacy. It even has the potential to physically treat damaged parts of the brain, facilitating human recovery from sports concussions and PTSD*. Its effects are mind-boggling, literally. *Author Robert Koger describes people with Post-Traumatic Stress Disorder (PTSD) from war as such: “the turmoil they experience isn’t who they are; the PTSD invades their minds and bodies.” [7] TMS would, in essence, “rewire the mind” to force PTSD out, so that the brain can function in a pattern that does not keep circling back to thoughts of war.

There are also different types of TMS treatment: regular TMS, slow repetitive TMS (rTMS), and more. These have different effects: fast bursts usually yield quicker but more transient changes, while low-frequency treatment causes gradual changes that stick. Jeff was filmed again after a year of TMS sessions, sitting at the dinner table. With tidy clothes and a fork at his side, he was able to use the fork to pick up food and to communicate some basic ideas and needs to his parents. It seemed as if his extreme condition had been cured. How TMS can change people to this degree still remains nebulous and mysterious, causing significant ethical debate over its usage*. While it has the potential to open new frontiers in medical treatment, it also raises ethical, not to say existential, concerns about human rationality. Dr. Robert Sapolsky of Stanford University describes TMS as a harbinger of future technologies, ones that could redefine the concept of such thought processes as morality. If a device as modest as TMS can make a person’s personality completely different in a matter of hours, then how much control do we even have over ourselves? *especially since it is not regulated by the FDA

More research has been leading to this conclusion: although we may think we have a choice in how we act, reality says otherwise. Our choices are desires, governed ultimately by outside influences, indirectly through the thousands of advertisements and brands we come across every day, or more directly through devices such as TMS. As the 19th-century German philosopher Arthur Schopenhauer put it, “Man can do what he wills but he cannot will what he wills.” Ultimately, TMS is regarded as one of these avant-garde neuroplasticity treatments of last resort, used only if counseling and drugs have not worked. Its rate of long-term

success, after all, has varied between 30 and 60% depending on treatment type, and its effects are still being researched. Other forms of mind-altering treatment include NIR (near-infrared spectroscopy) devices, ECT (electroconvulsive therapy, also known as the infamous electroshock therapy), DBS (deep brain stimulation), and psychoactive drugs. With less than 40 years of research behind it, investigation into this remarkable device has exploded in the past 10 years; just a few years ago, more ways in which it affects the brain and more applications for it were discovered. Because of light government regulation, TMS is being practiced, documented, and investigated liberally all over the world. Some licensed practitioners I know, including one doctor at UCSD, have even tried it on themselves, self-documenting their behavior and mental state. Able to seal contracts with governments and commercial medical groups, TMS researchers are finding opportunities everywhere to apply their studies because of the device’s accessibility, adaptability, and advancement. TMS has truly altered the field of psychological treatment, one mind at a time.

References
[1] Diana, Marco, et al. “Rehabilitating the Addicted Brain with Transcranial Magnetic Stimulation.” Nature Reviews Neuroscience, vol. 18, no. 11, 2017, pp. 685–693, doi:10.1038/nrn.2017.113.
[2] Care New England. “How Does TMS Work.” Butler Hospital, www.butler.org/programs/outpatient/how-does-tms-work.cfm.
[3] “Compare Brain Stimulation Techniques and Applications.” The Brain Stimulator tDCS Devices, thebrainstimulator.net/brain-stimulation-comparison/.
[4] “Faculty Profiles.” UC San Diego Jacobs School of Engineering, jacobsschool.ucsd.edu/faculty/faculty_bios/index.sfe?fmp_recid=12.
[5] Harris, Sam. “Waking Up with Sam Harris #91 – The Biology of Good and Evil (with Robert Sapolsky).” YouTube, 11 Aug. 2017, www.youtube.com/watch?v=kNLOJ-3rL60.
[6] Kobayashi, Masahito, and Alvaro Pascual-Leone. “Transcranial Magnetic Stimulation in Neurology.” The Lancet, vol. 2, Mar. 2003, pp. 145–156.
[7] Koger, Robert. Death’s Revenge. CreateSpace Independent, 2013.
[8] “MRI Scan (Magnetic Resonance Imaging): What It Is & Why It’s Done.” WebMD, www.webmd.com/a-to-z-guides/what-is-an-mri.
[9] practiCalfMRI. “How Dangerous Are Magnetic Items near an MRI Magnet?” YouTube, 12 Nov. 2010, www.youtube.com/watch?v=6BBx8BwLhqg.
[10] Schopenhauer, Arthur, et al. The World as Will and Representation. Cambridge University Press, 2018.
[11] Stern, Adam P. “Transcranial Magnetic Stimulation (TMS): Hope for Stubborn Depression.” Harvard Health Blog, Harvard Health Publishing, 23 Feb. 2018, www.health.harvard.edu/blog/transcranial-magnetic-stimulation-for-depression-2018022313335.
[12] Stokes, Mark G., et al. “Biophysical Determinants of Transcranial Magnetic Stimulation: Effects of Excitability and Depth of Targeted Area.” Journal of Neurophysiology, vol. 109, no. 2, 2013, pp. 437–444, doi:10.1152/jn.00510.2012.
[13] Thut, Gregor, and Alvaro Pascual-Leone. “A Review of Combined TMS-EEG Studies to Characterize Lasting Effects of Repetitive TMS and Assess Their Usefulness in Cognitive and Clinical Neuroscience.” Brain Topography, vol. 22, no. 4, 2009, pp. 219–232, doi:10.1007/s10548-009-0115-4.


A Real-Time Detection System using Advanced Imaging Techniques to Diagnose Lipohypertrophy in People with Insulin Dependent Diabetes
by Rohan Ahluwalia // Art by Seyoung Lee

Abstract
Insulin-dependent diabetes is a chronic condition that affects over 200 million people worldwide. While there is no cure for this condition, diabetic patients keep it under control by administering external insulin, either through multiple daily injections (MDI) or insulin pump therapy. These continual injections into the skin cause excess adipose tissue to develop underneath it, leading to blunted and reduced absorption of insulin. The result is a phenomenon known as lipohypertrophy (LHT), which goes largely undetected because it is hard to detect until the condition is severe. When people with diabetes continue to administer insulin into these low-absorption areas, the condition is further aggravated, leading to long-term complications and poor glycemic control (as seen in elevated levels of HbA1c). The purpose of this project is to create a method for automated detection of lipohypertrophy in insulin-dependent patients before the condition becomes severe. The approach is to pair ultrasound technology with an advanced algorithm that detects fat build-up and properly identifies high- and low-absorption regions of the body. Using the developed algorithm and a set of images from the Profil Institute for Metabolic Research in Germany, an accuracy of 89% was achieved in diagnosing lipohypertrophy. In conclusion, this device is an effective tool for diagnosing and determining the severity of lipohypertrophy. A system like the one developed here has the potential to greatly improve insulin control for diabetes patients and decrease long-term complications from the condition.

Introduction
Insulin-dependent diabetes is a chronic condition in which the body needs external administration of insulin throughout the day to manage blood glucose levels. People with type 1 diabetes do not produce insulin on their own, and people with type 2 diabetes can also become dependent on exogenous insulin. There are more than 200 million insulin-dependent people with diabetes throughout the world [10]. Management of type 1 diabetes is measured through a test called HbA1c, which determines how well a person has managed their glucose levels over the past three months. Elevated HbA1c (e.g. >7) indicates that the person’s average glucose has been high, which is toxic to the body and can damage tissues. Long-term exposure to elevated glucose levels can lead to diabetic neuropathy, retinopathy, tissue damage, and limb loss [1]. Lipohypertrophy refers to a lump under the skin caused by an accumulation of extra fat at the site of many subcutaneous insulin injections [2]. It may be unsightly, mildly painful, and may change the timing or completeness of insulin action. The problem occurs in many people with insulin-dependent diabetes, since these individuals must receive insulin injections regularly. Issues caused by lipohypertrophy include delayed and reduced insulin action in the body and a reduction of the area available for infusion sets and insulin injections. The skin is also damaged, and no treatment is available that can repair it completely [2]. When lipohypertrophy becomes very severe, the only method of treatment is liposuction, which removes the excess fat build-up in the subcutaneous fat layer. Choosing an injection site is hard for physicians, as it is difficult to identify the location of the lipohypertrophy. Three main complications of unmanaged diabetes have the strongest correlation with poor glucose management: diabetic retinopathy, diabetic neuropathy, and diabetic nephropathy.


Methods

4.1 Image Gathering / Basic Data Set

Ultrasound imaging is an effective way to measure whether a region of the dermis contains lipohypertrophy. I used images acquired in another study by the Profil Institute for Metabolic Research on the delayed absorption caused by LHT regions throughout the body [4]. This data set contained images previously identified by clinicians as showing lipohypertrophy.

4.2 Edge Detection

I used edge detection to identify the regions of subcutaneous fat tissue. Edge detection algorithms identify contiguous points in an image at which the image brightness changes sharply. Multiple edge detection techniques were evaluated, including Roberts, Sobel, and Canny [3,6]. To select one, each technique was run through the full algorithm and its accuracy measured. Based on these results, the Canny technique was chosen, since it had the highest accuracy. As shown in Figure 1, the optimized Canny output qualitatively has the least noise for this specific application.

Figure 1 – This shows the different variations of the edge detection system.

4.3 Optimization of Algorithm

Extended Edge Detection – I further optimized the Canny edge detection system for best results. Canny was selected partly because its Gaussian filter can be tuned for specific applications; this was used to reduce excess noise so that the most significant lines in the image were intensified. Initially, the Gaussian filter was tuned so that significant lines were emphasized and everything else was removed from the image. To do this, the smoothing coefficient was set to 4, a balance between reducing noise and preserving important lines. However, this still included some unnecessary lines, which led to inaccurate results. To circumvent this, a different approach was taken. Since the Gaussian filter could be modified, the coefficient was varied from 1 to 8, and the results for all coefficients were summed to emphasize the lines that appear in

all of them while keeping noise to a minimum. After summing the processed images, another filtering step was applied: an image was constructed containing only the lines present in more than 4 of the processed images. This reduced the noise, since the dermis, fat, and muscle boundaries were present throughout most of the images; it produced significantly better results, and the algorithm was better able to diagnose lipohypertrophy.

Feature Extraction – To increase accuracy, a feature extraction technique was used to keep excess lines from interfering with the diagnosis of lipohypertrophy, in three main steps. First, after the edge detection step, the processed images were summed into a vertical array. For example, if the image was 50 pixels wide, each pixel in a row where a line was present (value of 1) was added to that row’s entry in the vertical array; if 10 pixels in the row contained a line, the value in the array would be 10. All pixels in the processed images were accumulated in this way, so the vertical array held the position and intensity (number of line pixels in the horizontal direction) of each line.
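The coefficient-summing scheme described under Extended Edge Detection can be sketched in NumPy. This is a simplified stand-in (gradient-magnitude thresholding replaces the full Canny pipeline, and the toy image is illustrative), but the 1 to 8 coefficient range and the more-than-4-votes rule follow the text:

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D Gaussian kernel (sums to 1), radius 3*sigma
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x.astype(float) ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth(img, sigma):
    # Separable Gaussian blur with edge padding (a stand-in for the
    # smoothing stage inside Canny).
    k = gaussian_kernel(sigma)
    pad = len(k) // 2

    def blur_1d(line):
        return np.convolve(np.pad(line, pad, mode="edge"), k, mode="valid")

    out = np.apply_along_axis(blur_1d, 1, img)   # blur along each row
    return np.apply_along_axis(blur_1d, 0, out)  # then along each column

def edge_map(img, sigma, frac=0.5):
    # Binary edges: gradient magnitude above a fraction of its maximum
    # (a simplified stand-in for full Canny hysteresis).
    gy, gx = np.gradient(smooth(img, sigma))
    mag = np.hypot(gx, gy)
    return (mag > frac * mag.max()).astype(int)

def edge_votes(img, sigmas=range(1, 9)):
    # Sum the edge maps over every smoothing coefficient from 1 to 8.
    return sum(edge_map(img, s) for s in sigmas)

def voted_edges(img, sigmas=range(1, 9), min_votes=4):
    # Keep only lines present in more than min_votes of the maps.
    return (edge_votes(img, sigmas) > min_votes).astype(int)

# Toy image: dark tissue on the left, bright on the right (one boundary).
img = np.zeros((32, 40))
img[:, 20:] = 1.0
votes = edge_votes(img)
edges = voted_edges(img)
```

Because the boundary sits at the same place at every smoothing scale, it collects a vote from all eight coefficients, while scale-dependent noise does not.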


Since a single-row sum captures only lines that are completely straight, another process was implemented to capture lines spanning more than one pixel in the vertical direction: each entry of the vertical array was summed with the previous and next five entries, so that curved lines still register. With the areas containing lines now exaggerated in intensity, it was simple to determine which lines were significant, such as the dermis, fat, and muscle boundaries. A search algorithm was developed to find the lines with the highest intensities. Once these lines were found, a threshold was applied to reduce image noise, allowing more accurate determination of the thickness of the subcutaneous region. As seen in the image, the algorithm collapses the image into a 1-by-pixel-length matrix, each entry giving the extent (width) of any line at that position, and the lines irrelevant to the processing can then be discarded. To determine which lines were the dermis, subcutaneous, or muscle boundaries, the relative locations of the significant lines were analyzed. Since these boundaries are well defined in the images, they were the most significant lines after processing. The dermis line is the first significant line and the muscle boundary the last, so the remaining line must be the subcutaneous layer. Once the subcutaneous layer was identified, a function measured the distance between these lines. Through this process, the algorithm accurately determined the significant lines and the thickness of the region of interest.
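A minimal sketch of the vertical-array projection, the plus-or-minus-5-row summation, and the boundary search described above. The peak-snapping detail and the choice of which pair of lines defines the thickness are my assumptions where the text is ambiguous:

```python
import numpy as np

def line_profile(edges, halfwin=5):
    # Count edge pixels in each row, then add the previous and next
    # halfwin rows so slightly curved lines still form one strong entry.
    row_counts = edges.sum(axis=1).astype(float)
    window = np.ones(2 * halfwin + 1)
    padded = np.pad(row_counts, halfwin, mode="constant")
    return np.convolve(padded, window, mode="valid"), row_counts

def significant_rows(edges, n_lines=3, halfwin=5):
    # Greedy search for the strongest, well-separated lines.
    profile, row_counts = line_profile(edges, halfwin)
    prof = profile.copy()
    rows = []
    for _ in range(n_lines):
        start = int(np.argmax(prof))
        # snap to the strongest single row inside the search window
        r = start + int(np.argmax(row_counts[start:start + 2 * halfwin + 1]))
        rows.append(r)
        prof[max(0, r - 2 * halfwin):r + 2 * halfwin + 1] = -1  # suppress
    return sorted(rows)

def subcutaneous_thickness_mm(edges, px_per_mm=10.0):
    # First significant line = dermis, last = muscle boundary, middle =
    # subcutaneous boundary. Thickness is taken here as the distance from
    # the subcutaneous line to the muscle line (one reading of the text).
    dermis, subcut, muscle = significant_rows(edges)
    return (muscle - subcut) / px_per_mm

# Toy edge map: three horizontal boundaries at rows 10, 25 and 60.
edges = np.zeros((80, 50), dtype=int)
edges[[10, 25, 60], :] = 1
```

The pixel-to-millimetre scale is a placeholder; in the real system it would come from the ultrasound's calibration.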

Based on this value, the severity of lipohypertrophy was also determined, estimating the recovery time necessary before injections can be given again.

4.5 Algorithm

The algorithm is a modified edge detection algorithm with matrix calculations that computes the subcutaneous fat thickness in images and detects lipohypertrophy. It was implemented in Python on Google Compute Engine.

Figure 4 – This figure shows the process by which each image is processed and analyzed, and how it arrives at the diagnosis of LHT.

Figure 3 – This figure shows the result of the edge detection and feature extraction process: an ultrasound image is transferred through two main steps.

4.4 Classifier

The final step of the algorithm was to develop a robust classifier that uses the edge detection and feature extraction outputs to predict the presence of lipohypertrophy. Using BMI as a feature, an expected normal thickness was determined; for example, if the patient’s BMI was 24, the normal thickness would be 4 mm, so any value above this would be considered lipohypertrophy.

4.6 Error Reduction of the Algorithm

Throughout the design process, the algorithm was constantly improved, using various methods to reduce the error of the final model. Steps such as optimized edge detection to emphasize important lines, and feature extraction to remove extraneous lines, increased the accuracy with which the proper boundaries were determined. Throughout the process, a multitude of classical methods were used to increase the final accuracy of the model.

Figure 5 – This graph shows the results of the algorithm optimization process. Each major improvement produced a significant decrease in error. Algorithm 27 was then selected based on these criteria.
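The BMI-based threshold rule described in the Classifier section can be sketched as follows. The article gives only a single calibration point (BMI 24 maps to a 4 mm normal thickness), so the linear slope used here is an illustrative assumption, not the study's fitted relationship:

```python
def normal_thickness_mm(bmi):
    # Expected normal subcutaneous thickness for a given BMI.
    # Calibration point from the text: BMI 24 -> 4 mm.
    # The 0.25 mm-per-BMI-unit slope is an illustrative assumption.
    return 4.0 + 0.25 * (bmi - 24.0)

def has_lipohypertrophy(measured_mm, bmi):
    # Flag LHT when the measured thickness exceeds the BMI-adjusted
    # normal value, per the thresholding rule in the text.
    return measured_mm > normal_thickness_mm(bmi)

# A patient with BMI 24 and a measured 6.2 mm thickness would be
# flagged; a measurement of 3.5 mm would not.
```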

4.7 ROC Curves

The following shows receiver operating characteristic (ROC) curves for different iterations of the lipodetect algorithm.

Figure 6 – An ROC curve showing the accuracy of the multiple algorithms; the final algorithm is well above the chance line, and the final version (v27) shows the highest area under the curve (AUC). This was another analysis used to determine which algorithm had the highest accuracy. The final algorithm, Alg27, had the best results in both graphs, with the least error and variation and the highest accuracy, and it was implemented in the real-time detection system.

Solution: Developing a Real-Time Detection System

Solution Overview

The solution consists of four major steps, including the patient with lipohypertrophy being scanned by an ultrasound, the images being processed, and the results being returned to the patient.

Figure 7 – This figure shows the general architecture and process of the final solution.

System Architecture

The detection system consists of a portable ultrasound that connects to a mobile device. The mobile app sends images to the cloud; the backend applications process the images and send results back to the mobile device. The system is able to scan and detect regions of lipohypertrophy in real time.

Figure 8 – The solution architecture, explaining the connection between the ultrasound and the algorithm.

System Development

Mobile App Development

The mobile application was developed in Android Studio by connecting the algorithm to the ultrasound transducer. The app uses the Firebase console to upload images to and retrieve results from Google Compute Engine. The application connects to the Lumify device and gathers the scanned images quickly and effectively, making it a consumer device that creates a real-time detection system. The backend implements the lipodetect algorithm in Python. When an image is uploaded, the Pub/Sub software triggers Google Compute Engine to run the algorithm on the image. Once the image is processed, the results are placed into a Compute Engine bucket, from which they are retrieved by the mobile app.

Pub/Sub – This is the main mechanism that creates a closed-loop system able to detect lipohypertrophy in real time. Once an image is uploaded into the Google Compute Engine bucket, a notification is triggered. The notification tells the algorithm to pick the file up from the bucket and process the image. The results are then placed back into the bucket, where they are retrieved by the mobile application.
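The closed-loop upload/notify/process/retrieve flow described above can be simulated locally. Everything here is a stand-in (dicts for the cloud buckets, a queue for the notification channel, a stub in place of the lipodetect algorithm); none of it is the real Google Cloud API:

```python
from queue import Queue

# Local stand-ins for the cloud pieces.
upload_bucket = {}
result_bucket = {}
notifications = Queue()

def upload_image(name, image):
    # Mobile-app side: place the scan in the bucket; the upload
    # triggers a notification.
    upload_bucket[name] = image
    notifications.put(name)

def lipodetect(image):
    # Hypothetical stub for the lipodetect algorithm: flags LHT when
    # the measured thickness exceeds a fixed 4 mm threshold.
    return {"lht_present": image["imt_mm"] > 4.0}

def process_notifications():
    # Backend side: each notification tells the worker to pick the
    # file up from the bucket, process it, and write the result back
    # where the mobile app can retrieve it.
    while not notifications.empty():
        name = notifications.get()
        result_bucket[name] = lipodetect(upload_bucket[name])

upload_image("thigh_scan", {"imt_mm": 6.2})
upload_image("arm_scan", {"imt_mm": 3.1})
process_notifications()
```

The key design point mirrored here is that the producer (app) and consumer (algorithm) never call each other directly; the notification channel decouples them, which is what makes the real system event-driven.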


Clinical Evaluation and Results
The study was conducted in two phases: a pilot study and a clinical evaluation of the real-time detection system.
Study Hypothesis – The real-time detection system will have a diagnosis accuracy and depth-detection accuracy of over 85% and will properly identify the thickness of the subcutaneous fat regions.
Evaluation of Subjects – Both diabetic and non-diabetic subjects will be imaged and processed.
Protocol Summary – Patients will be scanned for lipohypertrophy in heavily injected regions of the body. Each patient will be scanned in four regions where most injections are given; within each, areas of 5 cm by 5 cm will be scanned to map optimal injection regions. After the regions have been scanned, ground truths for the IMT distance will be gathered. Once these ground truths have been gathered, the study is complete.

Pilot Study

Lipohypertrophy Testing
The algorithm was tested on two human patients: one with insulin-dependent diabetes and one without. The patient with type 1 diabetes was scanned in the thigh area to determine where lipohypertrophy was present. Using a handheld ultrasound, the area was imaged and then compared against the same area on the non-diabetic patient. Both images in Figure 9 are of LHT regions of the body of patient 1. The closed-loop system determined that LHT was present in both images; it is especially prominent in the first image, due to the increased number of injections in this region. The algorithm was able to find the IMT in both scans.

Figure 9 – Results from testing of patient 1, a patient with type 1 diabetes.

Non-Lipohypertrophy Testing
To confirm that the algorithm works in both environments, a case where lipohypertrophy is not present also had to be tested. This patient had no lipohypertrophy, as shown in the scan. The patient was tested in the same area of the thigh so that the most accurate comparison could be made. The IMT was only 0.4 mm, which is normal for someone with muscular thighs; the difference between the two patients is large. The closed-loop system was able to determine where there was LHT and where there was not, and could be implemented in doctors’ offices.

Figure 10 – Testing of a non-diabetic patient.

Clinical Evaluation
There were 10 subjects in the clinical evaluation study: 5 people with diabetes and 5 without. Subjects were scanned in the regions of the body where most injections take place, such as the arms and thighs. These regions were scanned in 5 cm by 5 cm areas and the results were gathered.

Figure 11 – Confusion matrix for the results of the human trial. The algorithm had high accuracy and sensitivity.


Each patient’s inter-media thickness was determined by the algorithm. A confusion matrix was created from the results of the study, verified by licensed professionals. The algorithm had a high accuracy of 89%, which supports the study hypothesis. In addition, an optimal-injection contour map was created for each patient, showing where injections should and should not be given. The contour map shows three different patients and how specific regions of their bodies are affected by lipohypertrophy.
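The accuracy and sensitivity reported from the confusion matrix can be computed as below. The cell counts are illustrative values chosen only so the accuracy lands near the reported 89%; they are not the study's actual numbers:

```python
def accuracy_and_sensitivity(cm):
    # cm = [[TN, FP], [FN, TP]]
    # accuracy    = (TP + TN) / total
    # sensitivity = TP / (TP + FN)
    (tn, fp), (fn, tp) = cm
    total = tn + fp + fn + tp
    return (tp + tn) / total, tp / (tp + fn)

# Illustrative counts only (18 scans, one false positive, one false
# negative), giving accuracy and sensitivity of about 0.89 each.
cm = [[8, 1], [1, 8]]
acc, sens = accuracy_and_sensitivity(cm)
```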

Figure 12 – The contour maps for optimal injection regions for the patients; this is the data output by the system. Overall, the study was a success: the real-time detection system meets the claim that it is an accurate and reliable model for detecting lipohypertrophy in people with insulin-dependent diabetes.

Conclusion and Future Works
The testing results support the conclusion that this approach yields an accurate system for diagnosing lipohypertrophy and determining its severity. A next step for the project would be to implement a more in-depth machine learning algorithm to increase accuracy, but this would require a larger data set of lipohypertrophic images. Based on the promising results, this detection system could be deployed in doctors’ offices today. It can detect and notify patients of optimal injection regions along with undetected and severe lipohypertrophy. Doing this properly brings many benefits for diabetic patients. First, they could reduce the number of injections they give, since every injection would be placed accurately. They could also allow regions with severe lipohypertrophy to heal, reducing pain during injections. Another benefit is that they could bring their HbA1c into the optimal range, significantly reducing the long-term complications that can arise from diabetes, such as nerve damage. Overall, this device is an effective tool for diagnosing lipohypertrophy and would greatly benefit people with diabetes by providing them with optimal injection regions.

References
American Diabetes Association. (n.d.). Complications. Retrieved from http://www.diabetes.org/living-with-diabetes/complications/
Barola, A., Tiwari, P., & Bhansali, A. (2017, August 11). Insulin-mediated lipohypertrophy: An uncommon cause of diabetic ketoacidosis. Retrieved from https://casereports.bmj.com/content/2017/bcr-2017-220387
Vijayarani, S., & Vinupriya, M. (2013, October). Performance analysis of Canny and Sobel edge detection algorithms in image mining. Retrieved from http://www.rroij.com/open-access/performance-analysis-of-cannyand-sobel-edgedetection-algorithms-in-image-mining.php?aid=43752
Famulla, S., Hövelmann, U., Coester, A. F., Hermanski, L., Kaltheuner, M., Kaltheuner, L., . . . Hirsch, L. (2016, September 1). Insulin injection into lipohypertrophic tissue: Blunted and more variable insulin absorption and action and impaired postprandial glucose control. Retrieved from http://care.diabetesjournals.org/content/39/9/1486
Heinemann, L., & Krinelke, L. (2012). Insulin infusion set: The Achilles heel of continuous subcutaneous insulin infusion. Journal of Diabetes Science and Technology, 6(4), 954-964. doi:10.1177/193229681200600429
Katiyar, S., & Aarun, P. (n.d.). Comparative analysis of common edge detection techniques in context of object extraction.
Kruschitz, R., Wallner-Liebmann, S., Hübler, K., Hamlin, M., Schnedl, W., Moser, M., & Tafeit, E. (2009). A measure of obesity: BMI versus subcutaneous fat patterns. Aktuelle Ernährungsmedizin, 34(03). doi:10.1055/s-0029-1223885
NIDDK. (2017, February 1). Diabetic kidney disease. Retrieved from https://www.niddk.nih.gov/health-information/diabetes/overview/preventing-problems/diabetic-kidney-disease
World Health Organization. (2018, October 30). Diabetes. Retrieved from https://www.who.int/news-room/fact-sheets/detail/diabetes


Cryptic Ontics

Art by Seyoung Lee

by Aayush Desai, Ameya Kunder, Anushree Ganesh, Jinal Shah, Kiranbaskar Velmurugan, Pulkit Malhotra, Roshni Sahoo and Satchit Chaterjee

introduction
The Cryptic Ontics group was created with the intent of combining creativity and science through CERN, the European Organization for Nuclear Research, whose mission is to uncover the mysteries behind the universe’s existence. One of the points we concentrated on was the basis on which we define a class of objects, such as electrons. What criteria could we possibly use to distinguish one member of a class from another? We empirically observed a set of phenomena which greatly resembled each other in important ways, and we have chosen to refer to each instance of those phenomena as an electron. We wished to study the dependence of a muon’s properties (muons are elementary particles akin to electrons, but with much greater mass) on gravitational and electromagnetic fields, and to shed some light on these shrouded concepts of identity and homogeneity.

original research process Our research began with the following setup:

A spill of charged particles was initiated by accelerating particles with an oscillating magnetic field inside constructions called Faraday cages, which block outside electrostatic and electromagnetic influences. Collimators were also used to narrow the diameter of the beam and focus the particles into our experimental area. By the time the particles reached the target, their momentum was on the order of GeV/c (gigaelectronvolts divided by the speed of light). Scintillators, which emitted a photon every time a charged particle flew through, were also an essential part of the setup. Their response time was very fast, perfect for finding the speed of the particles directly by dividing the distance traveled by the time difference between two scintillators (also known as TOF, or time-of-flight). The TOF system also helped us distinguish one particle from another: since every particle in the beam had a uniform momentum, the system allowed us to identify each particle by measuring its velocity and calculating its mass. Delay wire chambers (DWCs) and micromegas both use the charge of the incoming particles to create a cascade of ions, giving us the position coordinates of the particles hitting them. The MDX27 in Figure 1 produced our magnetic field, driven by a chosen current between 0 and 240 A, which gave us a field strength of approximately 0 to 1 tesla. The purpose of this setup was to calculate the angle by which muons would be deflected in the given magnetic field. Our first goal was to verify that our results correlating the current/field, momentum, and deflection were in agreement with the known equation for the Lorentz force, F = qE + qv × B: the total electromagnetic force F on a particle of charge q moving with velocity v through an electric field E and magnetic field B.

changes and challenges
Changes to this setup included the implementation of more DWCs, both before and after the magnetic field. In order to reduce 'noise,' or unwanted particles from cosmic rays, we set up a 'trigger': only particles that hit both the TOF scintillators and DWCs 1 and 2 were recorded. An important challenge was that it was impossible to isolate muons, because the muon filter we used was a thick iron block, which would cause an unusable amount of scattering. The principle behind a muon filter is that muons penetrate far more deeply into solid matter than the


other elementary particles which spill out of the beam. It is very difficult to calculate or measure how much energy the muons lose, and how much they scatter when passing through the iron; as a result, we decided to include all particles that came with the beam instead of just the muons.
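The time-of-flight identification described earlier can be sketched numerically. This is an illustrative reconstruction rather than the experiment's actual analysis code, and the 10 m scintillator separation is a hypothetical value:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def beta_from_tof(distance_m: float, dt_s: float) -> float:
    """Velocity as a fraction of c, from the distance between two
    scintillators and the measured time-of-flight."""
    return distance_m / (dt_s * C)

def mass_from_tof(p_gev: float, beta: float) -> float:
    """Particle mass in GeV/c^2 given beam momentum p (GeV/c) and beta.
    From p = gamma * m * beta * c  =>  m = p * sqrt(1 - beta^2) / beta."""
    return p_gev * math.sqrt(1.0 - beta**2) / beta

# Hypothetical example: a 10 GeV/c beam, scintillators 10 m apart.
p = 10.0                     # GeV/c, uniform for every particle in the spill
m_proton = 0.938             # GeV/c^2
beta_true = p / math.sqrt(p**2 + m_proton**2)
dt = 10.0 / (beta_true * C)  # the TOF such a proton would produce
print(mass_from_tof(p, beta_from_tof(10.0, dt)))  # ~0.938, i.e. a proton
```

Because the beam momentum is uniform, the reconstructed mass is what separates protons from lighter particles such as pions or muons.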

continued research
Raw data files were converted to .root files so they could be analyzed using the ROOT data analysis framework. We were also provided with CERN accounts so that we could use CERNBox, a local cloud storage service, and SWAN, a Jupyter notebook service used to host cloud-computed code and conduct data analysis in the browser itself, to collaborate with support scientists. Example code snippets and information were shared with us through our SWAN accounts to aid us in the process of data analysis. The job of the data analysis team was to determine the calibration constants for the detectors and plot histograms. We used the particle positions recorded by the chambers to calculate the angle of deviation: the magnitude of the deflection the particles suffered.
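The angle-of-deviation calculation from chamber hit positions can be sketched minimally as below; the hit coordinates here are invented for illustration:

```python
import math

def track_angle(hit1, hit2):
    """Direction (radians) of a straight track through two (z, x) hits,
    where z is along the beam and x is the bending-plane coordinate."""
    (z1, x1), (z2, x2) = hit1, hit2
    return math.atan2(x2 - x1, z2 - z1)

def deflection_angle(before, after):
    """Angle of deviation: downstream track direction minus upstream one.
    `before` and `after` are each two (z, x) chamber hits in metres."""
    return track_angle(*after) - track_angle(*before)

# Hypothetical hits: straight upstream track, 30 mrad kick in the magnet.
upstream = ((0.0, 0.0), (2.0, 0.0))
downstream = ((5.0, 0.03), (7.0, 0.03 + 2.0 * math.tan(0.03)))
print(deflection_angle(upstream, downstream))  # ~0.03 rad
```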

results
The following were our results:

Training took up much of our time. For example, our support scientists emphasized the importance of making hourly checks on the gas pressure within the gaseous detectors to ensure that the gas was flowing smoothly from the cylinder to the detector. We took readings of the angle of deflection at beam momenta of +10 GeV/c and -10 GeV/c. A "positive" beam momentum implies positively charged particles in the spill, such as protons, positrons, and anti-muons. A "negative" one implies negative particles, such as electrons and muons. One peculiar problem lay with the negative beam: its frequency was terribly low. Because we had kept a strict trigger, we were normally getting only about a hundred particles per spill. With the negative beam, however, the number of particles per spill dropped to a single digit. In order not to lose data with those parameters, we opted to start a run with the negative beam just as the last shift ended and let it run overnight, so that by the time the first crew arrived the next day, we'd have a decent amount of data.


Although the means were not quite as accurate as before, we could still recover each particle's initial angle, albeit with a wider distribution.

conclusion

Not only did the mean decrease in magnitude with decreasing current, but its sign also became negative with the negative beam. We eventually did try using a muon filter. Scatterplot:

Reconstructing influential physical theories from scratch often helps in uncovering unknown logical connections and eliciting instructive empirical checkpoints. We intended to probe this line of thought a bit further. With raw data showing a correspondence between the current in a wire and the deflection of a charged particle, we intend to derive the mathematical relationship between the Lorentz force on a particle and the particle's charge, velocity, and the other parameters the force depends on, in an attempt to put ourselves in the shoes of our predecessors. After all, all we have done is pass charged particles through a device carrying a particular electrical current. We subsequently asked ourselves: what is the need to posit an intermediate quantity known as the 'magnetic field'? Why not directly conclude that it is the current which causes the deflection, instead of using the 'field' as a stepping stone? Certainly, there seems to be no immediate empirical justification for its existence.
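For reference, the textbook relation the group set out to re-derive connects field, momentum, and bending angle through the magnetic rigidity p[GeV/c] ≈ 0.3·q·B[T]·R[m]. A small sketch, with hypothetical field and magnet-length values:

```python
import math

def deflection_rad(p_gevc: float, b_tesla: float, length_m: float,
                   charge: int = 1) -> float:
    """Small-angle deflection of a charged particle crossing a field
    region of length L: theta ~ L / R, with bending radius
    R = p / (0.3 * |q| * B)  (p in GeV/c, B in tesla, R in metres).
    The sign of the deflection follows the sign of the charge."""
    radius = p_gevc / (0.3 * abs(charge) * b_tesla)
    return math.copysign(length_m / radius, charge)

# Hypothetical setup: 10 GeV/c beam, 1 T field over a 1 m long magnet.
print(deflection_rad(10.0, 1.0, 1.0))             # ~0.03 rad
print(deflection_rad(10.0, 1.0, 1.0, charge=-1))  # ~-0.03 rad: sign flips with the beam
```

The sign flip for the negative beam matches the behavior reported in the conclusion above.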

references
CERN, 'Beam and detectors' document: https://beamline-for-schools.web.cern.ch/sites/beamline-for-schools.web.cern.ch/files/BL4S-Beam-and-detectors_2018.pdf
Britannica, T. E. (2017, June 08). Lorentz force. Retrieved from https://www.britannica.com/science/Lorentz-force


Study of Convolutional Neural Networks for Early Detection of Diabetic Retinopathy
by Rachel Cai
Art by Seyoung Lee

Introduction
Diabetic retinopathy (DR) is the leading cause of blindness in the working-age population of the developed world. Presently, detecting DR is a manual, time-consuming process that requires a trained ophthalmologist to examine and evaluate digital fundus photographs of the retina. Machine learning technologies such as Convolutional Neural Networks (CNNs) have emerged as an effective tool in medical image analysis for the detection and classification of DR in real time. During the summer and school year of 2018, I had the opportunity to intern at the National Institutes of Health (NIH), National Library of Medicine (NLM), and the Lister Hill Center to study Convolutional Neural Networks for the early detection of diabetic retinopathy. In our study, I adapted and compared the effectiveness of several CNNs (Inception, VGG16, and ResNet) for the classification of DR stages. We also designed experiments to evaluate the effect of different parameters (sample and image sizes) on training.

Diabetic Retinopathy Diabetic retinopathy is the leading cause of blindness in the working-age population of the developed world. It is estimated to affect over 93 million people. Around 40-45% of Americans with diabetes have some stage of the disease [16]. Progression of vision impairment can be slowed or averted if DR is detected in time, but treatment can be difficult as the disease often shows few symptoms until it is too late to cure. Combined with delays, miscommunication, confusion, and minimal follow up caused by human error, DR is definitely not to be underestimated.

Computer Vision through Convolutional Neural Network Convolutional Neural Networks (CNNs) have a great record for application in image analysis and interpretation, including medical imaging. Presently, large CNNs are used to tackle highly complex computer vision tasks with many object classes to an impressive standard.


CNN for Diabetic Retinopathy detection
A Convolutional Neural Network is a feed-forward neural network. It mainly consists of an input layer, many hidden layers (such as convolutional, ReLU, pooling, flatten, fully connected, and softmax layers), and a final multi-label classification layer. CNN methodology involves two stages of processing: a time-consuming training stage and a real-time prediction stage. During the first stage, millions of images go through many iterations of the CNN architecture to finalize the model parameters of each layer. In the second stage, each image in the test dataset is fed into the trained model to score and validate the model.
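A toy NumPy forward pass through the layer types just listed (convolution, ReLU, pooling, flatten, fully connected, softmax) illustrates the prediction stage; this is a didactic sketch, not the study's Keras model:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as CNN layers do it)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2(x):
    """2x2 max pooling with stride 2 (trims odd edges)."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy 16x16 'fundus image' through conv -> relu -> pool -> flatten -> dense -> softmax.
img = rng.random((16, 16))
feat = maxpool2(relu(conv2d(img, rng.standard_normal((3, 3)))))  # (7, 7)
flat = feat.ravel()                                              # flatten
logits = rng.standard_normal((5, flat.size)) @ flat              # dense: 5 DR classes
probs = softmax(logits)
print(probs)  # five class probabilities summing to 1, one per DR category
```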

Datasets
Kaggle DR competition dataset
Our main dataset is based on the Kaggle Diabetic Retinopathy Detection competition (https://www.kaggle.com/c/diabetic-retinopathy-detection), which was carried out in 2016. The main dataset contains 35,000 eye images with 5 stages of DR disease.

Messidor dataset
We also look at the Messidor dataset, which contains 1,200 images with a 4-stage DR progression. Although the Messidor dataset is smaller, it has fewer labeling errors.

Figure 1: CNN for DR Detection

The output of the framework shown in Figure-1 is a multi-class prediction of the likelihood that the image belongs to each class, with a confidence score for each category, such as:
• 52% Category-0 No DR (Normal)
• 17% Category-1 DR
• 27% Category-2 DR
• 2.5% Category-3 DR
• 0.8% Category-4 DR
However, there are two issues with CNN methods for DR detection. One is achieving a desirable trade-off between sensitivity (patients correctly identified as having DR) and specificity (patients correctly identified as not having DR). This is significantly harder for a problem with five classes: normal, mild, moderate, severe, and proliferative. The second problem is overfitting: skewed datasets cause the network to over-fit to the class most prominent in the dataset, and large datasets are often the victim.

Stages of diabetic retinopathy with increasing severity
Figure 2 shows the 5-class DR classification, ranging from 0 (No DR) to 4 (Proliferative DR).

Figure 2: Different Stages of Diabetic Retinopathy

Our Work
Methodology and Training Platform
Our experiment was conducted on a hosted Linux platform with an NVidia Tesla K80 GPU. The environment was hosted by Google Colab and Kaggle Kernel. The implementation was based on the Keras/TensorFlow framework. For most of the training, we ran 25 epochs with a short-circuit abort if there was no improvement after 6 epochs. Note: the sample size consisted of 1,000 samples, 750 images for training and 250 for testing/validation. The samples are always split 3 to 1, training set to testing set. The test set and the validation set are the same in our experiment.
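The short-circuit abort described in the methodology is the standard early-stopping pattern; a sketch with made-up validation losses:

```python
def train_with_early_stopping(epoch_losses, max_epochs=25, patience=6):
    """Return the number of epochs actually run: stop after `max_epochs`,
    or earlier once `patience` consecutive epochs fail to improve on the
    best validation loss seen so far."""
    best = float("inf")
    stale = 0
    for epoch, loss in enumerate(epoch_losses[:max_epochs], start=1):
        if loss < best:
            best, stale = loss, 0
        else:
            stale += 1
            if stale >= patience:
                return epoch
    return min(len(epoch_losses), max_epochs)

# Fake validation losses: improvement stops after epoch 4, so training
# aborts 6 stale epochs later, at epoch 10.
losses = [1.0, 0.8, 0.7, 0.65] + [0.7] * 21
print(train_with_early_stopping(losses))  # 10
```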

Unbalanced training data set
Skewed datasets cause the network to over-fit to the class most prominent in the dataset. Large datasets are often massively skewed. Figure 3 shows how the training data are distributed in our DR datasets.

CNN Architectures There are various CNN architectures proposed in academia: VGG16, Inception, ResNet, GoogLeNet. In our study, we evaluated the performance of InceptionV3 vs. VGG16 vs. ResNet.

InceptionV3

Figure 5: Inception Network Architecture

Figure 3: Training Data Distribution between 5 stages of DR

X- axis: category of DR, Y-axis: number of training samples

In the Kaggle dataset of 35,000 images, less than three percent of the images came from the 4th and 5th classes. This means changes had to be made in our network to ensure it could still learn the features of these images without enough training data. To overcome the uneven distribution of data points, I used the statistical technique of sampling with replacement to boost the data samples in categories 2, 4 and 5. Figure 4 shows a balanced distribution after resampling. The graph on the left displays evenly distributed samples for the left eye and the right eye; both have over 12,000 images to test and train. The graph on the right displays evenly distributed samples from each DR level. Every level has 5,000 images to test and train, which is much less skewed in favor of training and testing for category 0.

Figure 4: Rebalancing training data between eyes and DR stages
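The sampling-with-replacement rebalancing shown in Figure 4 can be sketched as follows, with synthetic labels standing in for the real dataset:

```python
import numpy as np

rng = np.random.default_rng(42)

def rebalance(labels, per_class):
    """Return indices that sample each class up (or down) to `per_class`
    examples, drawing with replacement so under-represented classes can
    be boosted."""
    chosen = []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        chosen.append(rng.choice(idx, size=per_class, replace=True))
    return np.concatenate(chosen)

# Synthetic skewed labels: many category-0 images, very few category-4.
labels = np.array([0] * 900 + [1] * 50 + [2] * 30 + [3] * 15 + [4] * 5)
idx = rebalance(labels, per_class=200)
counts = np.bincount(labels[idx], minlength=5)
print(counts)  # [200 200 200 200 200]: evenly distributed, as in Figure 4
```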

The Inception microarchitecture was first introduced by Szegedy et al. in their 2014 paper Going Deeper with Convolutions. The goal of the inception module is to act as “multi-level feature extractor” by computing 1x1, 3x3 and 5x5 convolutions within the same module of the network.

VGG16 and VGG19

Figure 6: VGG Network Architecture
The VGG network architecture was introduced by Simonyan and Zisserman in their 2014 paper Very Deep Convolutional Networks for Large-Scale Image Recognition. The VGG network is characterized by its simplicity, using only 3x3 convolutional layers stacked on top of each other in increasing depth. Reducing volume size is handled by max pooling. Two fully connected layers, each with 4096 nodes, are then followed by a softmax classifier (above). The "16" and "19" stand for the number of weight layers in the network.


ResNet50

Figure 7: ResNet Architecture

In a deep convolutional neural network, several layers are stacked and trained to the task at hand. The network learns several low/mid/high-level features at the end of its layers. In residual learning, instead of trying to learn certain features directly, we try to learn a residual (the subtraction of the features learned from the input of that layer). ResNet (short for Residual Network) does this using shortcut connections, directly connecting the input of the nth layer to some (n+x)th layer. Training this form of network is easier than training plain deep convolutional neural networks, and the problem of degrading accuracy is resolved.

Optimizing CNN

Data Augmentation
Four different transformation types are used here (flipping, rotation, re-scaling and translation) to boost the training images. See Table-2 for explanations:
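The shortcut connection described above can be illustrated in a few lines of NumPy; a toy block, not ResNet50 itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(F(x) + x): the stacked layers learn only the residual
    F(x), while the shortcut passes the input straight to the output."""
    fx = relu(x @ w1) @ w2  # two-layer transformation F
    return relu(fx + x)     # shortcut (identity) connection

d = 8
x = rng.standard_normal(d)
# With zero weights, F(x) = 0 and the block reduces to relu(x):
# learning 'nothing' still propagates the signal, which eases training.
y = residual_block(x, np.zeros((d, d)), np.zeros((d, d)))
print(np.allclose(y, relu(x)))  # True
```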

Comparison between InceptionV3, VGG16, and ResNet
At a high level, the differences among the three networks are: 1. The weights for InceptionV3 are smaller than both VGG and ResNet, coming in at 96 MB, so the Inception network is faster to download and train; 2. The VGG network is generally slower to train, and the network architecture weights themselves are quite large.

CNN           Accuracy   AUC
InceptionV3   55%        0.61
VGG16         59%        0.67
ResNet50      56%        0.50

Table 1: Comparison between CNNs

Table-1 shows the initial test results on a 1,000-image sample (750 for training and 250 for testing) from the Kaggle dataset. VGG16 gives slightly better performance on both accuracy and AUC (area under the curve). ResNet50 has a worse result, which we think might be because the number of layers we are using (50) is not sufficient.

Transformation   Description
Rotation         Rotate 0-360 degrees
Flipping         Flip horizontally or vertically
Rescaling        Randomly scale with a scaling factor between 1/1.6 and 1.6
Translation      Randomly shift between -10 and 10 pixels

Table 2: Image Transformation Techniques
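The transformations in Table-2 can be approximated with plain NumPy operations; a simplified sketch (rotation here is restricted to 90-degree steps to stay dependency-free, whereas the study rotates by arbitrary angles):

```python
import numpy as np

rng = np.random.default_rng(7)

def augment(img):
    """Apply one random Table-2-style transformation to a square 2-D image."""
    choice = rng.integers(4)
    if choice == 0:                    # rotation (90-degree steps only here)
        return np.rot90(img, k=rng.integers(4))
    if choice == 1:                    # flip horizontally or vertically
        return np.flip(img, axis=rng.integers(2))
    if choice == 2:                    # rescale by a factor in [1/1.6, 1.6]
        factor = rng.uniform(1 / 1.6, 1.6)
        n = img.shape[0]
        idx = np.clip((np.arange(n) / factor).astype(int), 0, n - 1)
        return img[np.ix_(idx, idx)]   # crude nearest-neighbour zoom
    shift = rng.integers(-10, 11)      # translate by -10..10 pixels
    return np.roll(img, shift, axis=rng.integers(2))

img = rng.random((64, 64))
aug = augment(img)
print(aug.shape)  # (64, 64): image size is preserved by every transformation
```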

Study on different image sizes
Various image sizes were tested for training through the Inception architecture, with the results shown below in Table-3. From our experiment, the 512x512 image size gives the best result. However, 512x512 images can only go through the Inception network, as they run out of memory in VGG and ResNet.

Image Size   Accuracy
224x224      53%
299x299      59%
512x512      62%

Table 3: Experiment results on different image sizes (Inception)

Study on different image preprocessing techniques
As described above, there are various image preprocessing techniques we can apply before training. Table-4 shows the experiment results with preprocessing through the Inception architecture; vertical flipping has the best result.

Transformation        Accuracy
90 degree Rotation    60%
Horizontal Flipping   63%
Vertical Flipping     72%
Re-scaling            59%

Table 4: Experiment results on different image transformation techniques (Inception)

Study on different sampling techniques and different data sample sizes
Because of the skewed data distribution among the 5 DR categories, we only have a few hundred data samples in category 4 and category 5. One technique to compensate for the low data volume in these categories is to replace and reuse the sample data, shown below in Table-5:

Sampling Technique   Accuracy
Reuse data sample    60%
No data reuse        35%

Table 5: Experiment results on reusing data samples (Inception)

Data sample size   Accuracy   AUC    Training Time
1000               59%        0.67   12 min
1500               62%        0.66   12 min
3500               57%        0.63   12 min
10000              64%        0.66   54 min
20000              64%        0.69   76 min
35000              61%        0.66   144 min

Table 6: Experiment results on different data sample sizes (VGG)

Table-6 shows the results for different sample sizes, but the variance in the results is not significant; the accuracy peaks at 10,000 and 20,000 data samples. Note that the training time doesn't grow linearly with the training sample size. This is because with more data samples, the CNN completes fewer epochs: once TensorFlow detects that there has been no improvement after 6 epochs, it aborts the training.

Training with a pretrained model
Transfer learning is a technique to transfer model parameters pretrained on a common public dataset (e.g. ImageNet) to new image domains. Since the final categories are different between ImageNet (categorization of common objects) and DR (categorization of eye disease), we usually use the no-top-layer pretrained model, then add several layers of normalization and a fully connected layer on top, and retrain the final classification layer. Table-7 shows how we add custom layers on top of the existing pretrained model (VGG16):

Layer (type)            Output Shape          Param Number
input_1 (InputLayer)    (None, 224, 224, 3)   0
vgg16 (Model)           (None, 7, 7, 512)     14714688
batch_normalization_1   (None, 7, 7, 512)     2048
flatten (Flatten)       (None, 25088)         0
fcDense1 (Dense)        (None, 4096)          102764544
fcDense2 (Dense)        (None, 4096)          16781312
fcOutput (Dense)        (None, 5)             20485

Table 7: Custom-built VGG16 network using fully connected layers

Table-8 shows our results on training time and accuracy for the pretrained model versus the model trained from scratch, on a training set of 1,000 images:

Training Method    Accuracy   AUC    Training Time
VGG pretrained     59%        0.67   12 min
VGG from scratch   52%        0.46   12 min

Table 8: Experiment results comparing pretrained and scratch-trained VGG networks
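The parameter counts in Table-7 can be reproduced from the layer shapes alone: a Dense layer holds in×out weights plus out biases, and batch normalization carries 4 parameters per channel:

```python
def dense_params(n_in: int, n_out: int) -> int:
    """Weights plus biases of a fully connected (Dense) layer."""
    return n_in * n_out + n_out

def batchnorm_params(channels: int) -> int:
    """gamma, beta, moving mean, moving variance: 4 per channel."""
    return 4 * channels

# Custom head on top of VGG16's (7, 7, 512) no-top output, as in Table-7.
flat = 7 * 7 * 512
print(batchnorm_params(512))     # 2048       (batch_normalization_1)
print(flat)                      # 25088      (flatten output size)
print(dense_params(flat, 4096))  # 102764544  (fcDense1)
print(dense_params(4096, 4096))  # 16781312   (fcDense2)
print(dense_params(4096, 5))     # 20485      (fcOutput, 5 DR classes)
```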

Optimization: Attention Map
In most of our study in this project, we use pretrained models. An attention map, as shown in Figure-8, is a mechanism used to expand the capabilities of neural networks: it enables focusing on specific parts of the input and can improve the performance of neural processing. Figure-8 shows fundus images and the attention maps exerted on the regions that help processing.

Figure 8: Attention Map Feature

Table-9 shows the layers we add on top of the pretrained model (VGG16) to build the attention map:

Layer (type)                        Output Shape          Param Number   Connected to                  Notes
input_1 (InputLayer)                (None, 224, 224, 3)   0
vgg16 (Model)                       (None, 7, 7, 512)     14714688       input_1[0][0]
batch_normalization_1 (BatchNorm)   (None, 7, 7, 512)     2048           vgg16[1][0]                   for attention map
dropout_1 (Dropout)                 (None, 7, 7, 512)     0              batch_normalization_1[0][0]   for attention map
conv2d_1 (Conv2D)                   (None, 7, 7, 64)      32832          dropout_1[0][0]               for attention map
conv2d_2 (Conv2D)                   (None, 7, 7, 16)      1040           conv2d_1[0][0]                for attention map
conv2d_3 (Conv2D)                   (None, 7, 7, 8)       136            conv2d_2[0][0]                for attention map
conv2d_4 (Conv2D)                   (None, 7, 7, 1)       9              conv2d_3[0][0]                for attention map
conv2d_5 (Conv2D)                   (None, 7, 7, 512)     512            conv2d_4[0][0]                for attention map
multiply_1 (Multiply)               (None, 7, 7, 512)     0              conv2d_5[0][0]                for attention map
global_average_pooling2d_1          (None, 512)           0              multiply_1[0][0]              for attention map
global_average_pooling2d_2          (None, 512)           0              conv2d_5[0][0]                for attention map

Table 9: Custom-built VGG network using attention map layers

Experimentation Results for the Attention Map Feature
Table-10 shows the results of training with and without the attention map optimization:

Test Name               Accuracy   AUC
With attention map      59%        0.67
Without attention map   58%        0.64

Table 10: Experiment results comparing the attention map feature (Inception)

The attention map optimization brings a slight edge in performance. Most of the tests in our study use the attention map feature.

Model Prediction with AUC scores: AUC is an abbreviation for Area Under the Curve, as shown in Figure-9. It is used in classification analysis to determine which model predicts the classes best. Figure-9 shows the AUC score for VGG16 network for 1,000 training samples.

Figure 9: AUC Result
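AUC as defined above can be computed directly as the probability that a randomly chosen positive example is scored above a randomly chosen negative one; a small self-contained sketch with made-up labels and scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic:
    the fraction of positive/negative pairs the model orders correctly,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up binary labels and model scores: 8 of 9 pairs ordered correctly.
labels = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]
print(auc(labels, scores))  # ~0.89
```

A value of 0.5 means the model ranks no better than chance, which is why the ResNet50 result of 0.50 in Table-1 is a warning sign despite its 56% accuracy.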


Further Analysis using precision/category matrix and heatmap

Category           Precision   Recall   f1-Score   Support
0                  0.72        0.75     0.74       6704
1                  0.12        0.25     0.16       826
2                  0.26        0.12     0.16       1561
3                  0.13        0.01     0.02       261
4                  0.41        0.07     0.12       200
Micro Average      0.57        0.57     0.57       9552
Macro Average      0.33        0.24     0.24       9552
Weighted Average   0.57        0.57     0.56       9552

Figure 10: Precision per DR category (0-4)

As shown in Figure-10, the precision (positive predictive value: the number of true positives divided by the sum of true positives and false positives) and recall (true positive rate: the number of true positives divided by the sum of true positives and false negatives) differ across the DR categories. The high precision scores for Category 0 and Category 4 are in a way expected, since it is easy to tell whether a patient has no DR or severe DR. It is much more difficult to tell whether a patient has moderate or mild DR.

Similarly, the heatmap in Figure-11 (which shows the distribution of predictions across the DR categories) breaks down predicted versus actual classes. As expected, most of the data points are concentrated in the top left corner, for category 0. Improving the accuracy in the other DR categories will be the key to improving the overall accuracy of the training framework.

Figure-11: Heatmap of actual/predicted result distribution (X-axis: actual DR category; Y-axis: predicted DR category; value in matrix: number of data samples)
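The per-category precision and recall in Figure-10 come straight off a confusion matrix like the one behind Figure-11; a sketch with a tiny synthetic matrix (not the study's real counts), in which the micro average equals overall accuracy:

```python
import numpy as np

def precision_recall(conf):
    """Per-class precision and recall from a confusion matrix whose rows
    are actual classes and whose columns are predicted classes."""
    tp = np.diag(conf).astype(float)
    precision = tp / conf.sum(axis=0)  # TP / (TP + FP), per predicted class
    recall = tp / conf.sum(axis=1)     # TP / (TP + FN), per actual class
    return precision, recall

# Tiny synthetic 3-class confusion matrix.
conf = np.array([[50,  5,  5],
                 [10, 20, 10],
                 [ 5,  5, 40]])
p, r = precision_recall(conf)
micro = np.diag(conf).sum() / conf.sum()  # micro average == accuracy
print(p, r, micro)
```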

References
1. Varun Gulshan, Lily Peng, Marc Coram. "Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs." JAMA Network, December 1, 2016. https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45732.pdf
2. Harry Pratt, Frans Coenen, Deborah Broadbent, Simon Harding, Yalin Zheng. "Convolutional Neural Networks for Diabetic Retinopathy." 20th Conference on Medical Image Understanding and Analysis (MIUA 2016), July 25, 2016. https://www.sciencedirect.com/science/article/pii/S1877050916311929
3. Carson Lam, Darvin Yi, Margaret Guo, Tony Lindsey. "Automated Detection of Diabetic Retinopathy using Deep Learning." Proceedings AMIA Joint Summits on Translational Science, May 18, 2018. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5961805/
4. Casanova, Ramon, Santiago Saldana, Emily Y. Chew, Ronald P. Danis, Craig M. Greven, and Walter T. Ambrosius. "Application of Random Forests Methods to Diabetic Retinopathy Classification Analyses." PLOS ONE 9, no. 6 (2014): 1-7. Accessed December 26, 2014. www.plosone.org.
5. Sinthanayothin, C., J.F. Boyce, T.H. Williamson, H.L. Cook, E. Mensah, S. Lal, and D. Usher. "Automated Detection of Diabetic Retinopathy on Digital Fundus Images." Diabetic Medicine 19 (2002): 105-12.
6. Usher, D., M. Dumskyjs, M. Himaga, T.H. Williamson, S. Nussey, and J. Boyce. "Automated Detection of Diabetic Retinopathy in Digital Retinal Images: A Tool for Diabetic Retinopathy Screening." Diabetic Medicine 21 (2003): 84-90.
7. Jaafar, Hussain F., Asoke K. Nandi, and Waleed Al-Nuaimy. "Automated Detection and Grading of Hard Exudates from Retinal Fundus Images." 19th European Signal Processing Conference (EUSIPCO 2011), 2011, 66-70.
8. "National Diabetes Statistics Report, 2014." Centers for Disease Control and Prevention. January 1, 2014. Accessed December 26, 2014.
9. "Diabetes." World Health Organization. November 1, 2014. Accessed December 26, 2014. http://www.who.int/mediacentre/factsheets/fs312/en/
10. Xu Kele, Feng Dawei, Mi Haibo. "Deep Convolutional Neural Network-Based Early Automated Detection of Diabetic Retinopathy Using Fundus Image." Second CCF Bioinformatics Conference, November 23, 2017. https://www.mdpi.com/1420-3049/22/12/2054
11. Hussain F. Jaafar, Asoke K. Nandi, Waleed Al-Nuaimy. "Automated Detection and Grading of Hard Exudates from Retinal Fundus Images." 19th European Signal Processing Conference, September 2011. https://www.eurasip.org/Proceedings/Eusipco/Eusipco2011/papers/1569416955.pdf
12. Christian Szegedy, Wei Liu, Yangqing Jia. "Going Deeper with Convolutions." September 2014. https://arxiv.org/abs/1409.4842
13. Karen Simonyan, Andrew Zisserman. "Very Deep Convolutional Networks for Large-Scale Image Recognition." September 2014. https://arxiv.org/abs/1409.1556
14. Kaiming He, Xiangyu Zhang, Shaoqing Ren. "Deep Residual Learning for Image Recognition." December 2015. https://arxiv.org/pdf/1512.03385.pdf
15. Adam R. Kosiorek, Alex Bewley, Ingmar Posner. "Hierarchical Attentive Recurrent Tracking." Published as a conference paper at NIPS 2017. June 2017. https://arxiv.org/abs/1706.09262
16. "Facts About Diabetic Eye Disease." National Eye Institute, U.S. Department of Health and Human Services, September 1, 2015. https://nei.nih.gov/health/diabetic/retinopathy


A Correlation Between Sun Exposure and Skin Cancer by Sara-Marie Reed

What is Skin Cancer?
Skin cancer occurs when mutations arise within a skin cell's DNA. As a result, the cell multiplies rapidly, among other consequences. There are three major types of skin cancer. The most common is basal cell carcinoma (BCC), typically found in the basal cell layer, the deepest part of the epidermis. It is an uncontrolled, abnormal lesion that commonly resembles an open sore or scar. BCCs normally do not metastasize beyond the original tumor site, though they can spread to different areas of the body in extremely rare cases. Like basal cell carcinomas, squamous cell carcinomas (SCC) often spread to other parts of the body. SCCs are another major type of skin cancer, originating in the squamous cells located in the uppermost layer of the epidermis. Symptoms include scaly red patches, warts, and open sores. The third type of skin cancer is melanoma, which begins in the melanocytes, a layer of cells found in the epidermis. Melanocytes produce melanin, a brown pigment which gives the skin its color. While melanoma is less common than squamous and basal cell skin cancer, it is much more dangerous: it is more likely to metastasize to other parts of the body if not immediately treated. Its black and brown tumors can form anywhere on the skin.

Art by Amy Ge

Is there a Correlation?
In England, scientists conducted a study with 960 "population-ascertained cases," 513 population controls, and 174 sibling controls to understand the relationship between sun exposure and melanoma. The results showed that reported sunburns in sun-sensitive people were associated with increased melanoma risk in people twenty years old or older. Head and neck melanomas were found on people with less sun exposure on holidays at low latitudes. Overall, the clearest relationship was between "average weekend sun exposure in warmer months" and increased melanoma risk.
In another study, conducted by researchers from Harvard Medical School and Brigham and Women's Hospital, the relationship between potential risk factors and the chances of developing skin cancer was investigated. Nurses between the ages of 25 and 42 were surveyed through questionnaires about associations between factors, like hair color, lifestyle, and sun exposure, and the development of skin cancer. Results showed that out of 108,916 participants, 6,955 developed basal cell carcinoma, 880 developed squamous cell carcinoma, and 779 developed invasive melanomas that moved below the epidermis. Women who had a history of five or more blistering sunburns between the ages of 15 and 20 had an 80 percent increased risk of melanoma, and a 68 percent increased risk of SCC or BCC. However, a combination of exposure to UV rays and all of the other factors showed no association between exposure and risk of melanoma. The women who had more exposure were more than twice as likely to develop SCC or BCC as those with less exposure. Overall, the researchers concluded that the potential to develop SCC or BCC was correlated with exposure to sunlight in both adulthood and early life, whereas melanoma was mostly associated with sun exposure in youth. "Host factors, including red hair, sun reaction as a child/adolescent and number of blistering sunburns between ages 15 and 20 years of age, were strong predictors of all 3 types of skin cancer." While this study provided evidence of a link between skin damage and sun exposure, it focused only on Caucasian women, so it is unclear whether the effects of skin cancer differ by race and/or gender.

What to Do
Both studies highlighted the importance of avoiding sunburns, as they can increase the risk of developing skin cancer later in life. Preventing sunburns, by using sunscreens in the 30-50 SPF range, wearing UV-protective clothing, hats, and sunglasses, and staying away from tanning oil, can greatly decrease the chances of getting skin cancer.

References
Skin Cancer Foundation. "Squamous Cell Carcinoma (SCC)." SkinCancer.org, 2019, www.skincancer.org/skin-cancer-information.
Newton-Bishop JA, Chang YM, Elliott F, et al. Relationship between sun exposure and melanoma risk for tumours in different body sites in a large case-control study in a temperate climate. Current neurology and neuroscience reports. https://www.ncbi.nlm.nih.gov/pubmed/21084183. Published March 2011. Accessed February 24, 2019.
Harvard Medical School & Brigham and Women's Hospital. "Just Five Sunburns Increase Your Cancer Risk." NHS Choices, NHS, 2014, www.nhs.uk/news/cancer/just-five-sunburns-increase-your-cancer-risk/#what-kind-of-research-was-this.


Interview with bradley fikes by Sua Kim

Can you introduce yourself and give a brief description of how you came to your current position? My name is Bradley Fikes and I am 61 years old. I write as a biotech reporter for the San Diego Union Tribune. My interest in journalism began in high school, where I wrote for my high school newspaper. Then, I wrote for the Daily Aztec, starting off with reporting on politics but later transitioning to biotech in 1990, where my love of scientific writing grew. I was able to adapt my skills as a writer and enter the field of science without actually being a scientist. From there, I started working at the North County Times (which was later purchased by the San Diego Union Tribune in 2012). So that's how I got here: a love of writing and science that fuels my passion for learning about the world, even as I do my job.

How did you find out that being a reporter was right for you? What inspired you to pursue your career? I have been interested in journalism since a young age, and it is something I've now been doing for a very long time. There was also a wide variety of topics I could write about, biotech being one of them. At the same time, I could learn about and satisfy my curiosity for the scientific world, which is one of the main reasons I have been able to do this job for so long. My curiosity and passion for science made me want to pursue journalism as a biotech reporter. Oftentimes I have to research the topic I am writing about; the whole process of deepening my understanding of advancements and news in biotechnology piques my interest and always keeps me wanting to learn more. As a reporter, I could contribute to society in a way that both intrigued me and informed the public. I can't imagine myself doing any other job.

What are some things you do as a reporter? Normally, I receive new articles or papers under embargo, so I can learn about them before they are made public. I read as much as possible beforehand, which helps me both understand complicated topics and develop into a stronger reporter. I then interview scientists and patients by phone, but I try as much as I can to go and talk to them in person to hear their perspectives. This helps me get a better idea of what I am going to write about.


What is your favorite part(s) of your job? One of my favorite parts of the job is getting to learn new things about the growing field of biotechnology. Through firsthand experience at meetings, conferences, and biotech conventions, and by reading scientific papers, I learn so much about society's advancements in science. It's like going to college every day!

Who were some of your mentors you have met/ worked with, and how have they helped you? A few of my mentors are Kathy Day and Gary Robbins. Kathy Day, a former editor at the San Diego Union Tribune, has given me a lot of helpful, practical advice such as experimenting with new techniques when interviewing scientists or reminding me of my strengths. Gary Robbins is a science writer here, and we often work together on some stories. He is better at viewing the bigger picture, while I tend to focus more on the details. Our skills complement each other to produce better articles and work.

Why did you choose biotech? If you were to write about other topics, what would they be? I chose biotech because science is based more on facts, which I was drawn to. I thought it was much more interesting because there were always new topics to learn about, such as new treatments for life-threatening diseases. Before choosing biotech, I wrote about politics, but after some time the topic felt repetitive and boring because politics is so heavily driven by popular opinion. I don't see myself writing about any other topics anytime soon.

You have been a biotechnology reporter for a very long time. Looking back at your experiences, what were some of the most memorable, or your favorite reports? When I first started writing about biotechnology in the '90s, it was a fairly new area of coverage. Because I was one of the first reporters to work in the field, the thrill of doing so made writing about it much more interesting. There have been lots of memorable events in biotechnology, but one of the biggest that I still remember today is from the early 2000s, when I covered the purchase of Life Technologies, one of the largest biotech companies at the time, by an even bigger company, Thermo Fisher Scientific. This was one of the first major deals of its kind in biotech. Another one of my favorite reports was about scientists researching stem cells in 2006-07, who first developed regenerative medicines for lost body parts. This had a really big impact on the "biotech world" and marked the first innovations of their kind.

Do you have any advice for high school students looking into/interested in becoming a reporter, or any advice in general? For reporting, I advise students to try writing for high school, college, or even community newspapers about any topic that interests them. I also highly encourage students to look for internships related to writing. Even writing simple things such as blogs can help grow your skills as a writer. Whether it be experimenting in labs or writing a review of the latest TV show, I believe that taking action to find out what your passions are is the most important step. This interview was conducted on February 12th, 2019.


Interview with Heather Buschman by Allison Jung

Can you introduce yourself and tell us how you got into scientific writing? My name is Heather Buschman. I have always been interested in science. I went to a science and technology school for undergraduate, and then I went to work on my PhD at UC San Diego. That’s when I really spent all of my days working at the bench and on my thesis. I studied bacteria, specifically the ones that cause strep throat, and how they can cause disease and how the immune system responds. A couple of years later, I started thinking about what I should do after this. I started to learn other skills and quickly realized that the things I like doing the most are things that scientists usually hate, such as working on presentations or writing a paper or a grant proposal. Those are the times where I really felt like I lost track of time and felt a great sense of accomplishment. I started looking into careers of science writing, and one of the first things I did was go to a science writers course at UC San Diego. I took that class and found ways to freelance. I built my writing portfolio in parallel with finishing my PhD.

Between doing work in a lab and science communication, how did you know science communication was the path for you? Since I was on the path to completing a Ph.D. in science, it was hard to choose. Veering off that path can be difficult, but I think it is actually a lot more acceptable now. Staying in academia for your whole career is hard to do these days because so many people are getting Ph.D.s, and there are only so many spots to become a faculty member and run your own lab. What used to be called alternative careers are today much more the norm. One of the nice things about writing is that you don't have to choose either/or. There are many practicing scientists who write on the side in their spare time. It's nice that you can write anywhere, anytime.

I saw that you teach science writing at UCSD. Can you tell me about what that is like and what are some aspects of teaching that you enjoy? Teaching science writing at UCSD is really fun because I have come full circle: I first took the class, and fourteen years later, I am now teaching it. The big lesson here is how important networking is and staying in touch with people. I took the class in 2004, but I always stayed in touch with my instructor. She would recommend me for different freelance jobs, so when she retired from teaching the class last year, she recommended me to take her place. Teaching for Extension is something I do on the side. It's a lot to take on, but I teach the course with a friend of mine. It is really nice to share the workload, and she comes from a journalism background, so we are a nice pair. It's really fun to teach people all of the things that I learned while exploring this career path. I went to a science- and engineering-oriented undergraduate university, so there was limited emphasis on writing. Now I get to pass what I know on to others, and they really seem to enjoy it. We have news quizzes and open mic times, and we get to meet a lot of people from various fields.


Can you tell us about your job and what a typical day looks like? My day job is managing communications and media relations for UC San Diego health sciences. We are in charge of multiple things: UCSD hospitals, clinics, the schools of medicine and pharmacy, and all of the research that goes on there. We cover two main areas: owned media and earned media. Owned media is everything the university owns that needs to be filled with content, like websites, blogs, newsletters, and magazines. We are always coming up with new story ideas and writing about them or making videos. I started a podcast, and it's really cool because I still feel really close to science. I talk to scientists, read their papers, and keep track of what's coming out soon. Then I use my creativity to figure out how best to make each story fit one of our channels. Earned media is the media relations part of my job title. This involves pitching story ideas to local newspaper reporters and putting out press releases. There really is no typical day, which is why I love it: I could be working on different pieces that are each in different stages.

What are some of the challenges within your field? As in many other fields, things are always changing. For example, 15 to 20 years ago, people in my position only typed up press releases and mailed them out. Now we have the internet and social media. We can be more proactive about sharing articles with different groups, and we are constantly adding new things. People might say, "Let's start a Tumblr blog" or "Let's start a podcast." The ever-evolving nature is part of the fun, though.

What is the most interesting story you have worked on? There was a story about a husband and wife who are faculty members at UC San Diego; one is a psychiatrist, and the other is an epidemiologist. When they were in Egypt on vacation in 2015, the husband contracted a bacterial infection that could not be treated with antibiotics, and he was medevaced back to UC San Diego. He was in a coma for three months due to an abscess in his abdomen that was filled with bacteria. He was going to die; however, his wife, the epidemiologist, started doing research and learned about an older therapy that uses phages, viruses that specifically infect bacterial cells. There used to be research in this field, but it fell out of favor once antibiotics became available. However, certain places like Eastern Europe didn't have the antibiotics, so they continued phage therapy research. She looked this up and asked his doctors, "What do you think of giving him phages as treatment?" They said that if she could find the right kind of phage, they were willing to give it a try. And she did. She found the phages, and the doctors got approval from the FDA to use them. They gave him the first known treatment of intravenous phages in North America. The next day he woke up and told his daughter, "I love you." He has since recovered, and the husband and wife have raised money to treat more patients with phage therapy. We followed this journey with them for a long time, wrote stories about them, and got a lot of media coverage. It all came together in a package that can be found at health.ucsd.edu/phage.

What has been the most memorable experience in your career? It was seven years ago, when I worked at what's now called the Sanford Burnham Prebys Medical Discovery Institute. My job there was quite similar to my current job, but they don't have patients or clinical stories; it's all research. It was exciting because I happened to be there at the dawn of institutions using social media. It was 2010, and more institutions were making efforts to expand their influence on social media. We took on tasks such as starting a blog for the institution and managing Facebook and Twitter. It was fun being at the beginning of something that is so prevalent in our lives today.

What advice would you give to a student interested in pursuing a career in scientific writing? I always tell people that you just have to do it. Writing is a trade, not a profession like medicine or law. To be a writer, you do not need the degrees, licenses, or certifications that those professions require. Anyone can change his or her title on LinkedIn to "freelance science writer." Perhaps you'll be surprised when things start coming your way. Don't be afraid to call yourself a writer, even if it's just writing on your own blog or contributing to a student publication. There are many groups with blogs and newsletters that would love to have people contribute. Once you get started, you can slowly build up a portfolio, or a list on your resume of places you have written for. It's great because you can do it in your spare time and build that portfolio in parallel with what you're already doing. This interview was conducted on February 12th, 2019.


ACS San Diego Local Section The San Diego Local Section of the American Chemical Society is proud to support JOURNYS. Any student in San Diego is welcome to get involved with the ACS San Diego Local Section. Find us at www.sandiegoacs.org! Here are just a few of our activities and services:

Chemistry Olympiad

The International Chemistry Olympiad competition brings together the world's most talented high school students to test their knowledge and skills in chemistry. Check out our website to find out how you can participate!

ACS Project Seed

This summer internship provides economically disadvantaged high school juniors and seniors with an opportunity to work with scientist-mentors on research projects in local academic, government, and industrial laboratories.

College Planning

Are you thinking about studying chemistry in college? Don't know where to start? Refer to our website to learn what it takes to earn a degree in chemistry, the benefits of finding a mentor, building a professional network, and much more!

www.sandiegoacs.org


Co-Editors-in-Chief Ethan Tan and Katherine Izhikevich

Co-Presidents Johnny Lu and Claire Wang

Assistant Editors-in-Chief Jessie Gan and Angela Liu

Vice President William Zhang

Section Editor Katherine Izhikevich

Coordinators Sua Kim, Nathaniel Chen, Allison Jung

Copy Editors Arda Ulug and Daniel Kim

Scientist Review Board Coordinators Claire Wang and Johnny Lu

Design Managers Anna Jeong and Daniel Kim

Contributing Writers Rohan Ahluwalia, Rachel Cai, Mikella Nuzen, Jee Hoo (Jade) Nam, Nathaniel Chen, Marie Kazibwe, Katherine Izhikevich, Sara Reed

Designers Anna Jeong and Daniel Kim Graphics Manager Seyoung Lee Graphic Artists Seyoung Lee, Amy Ge, Lesley Moon, Brian Cheng, Mikella Nuzen Media Managers Anna Jeong and Katherine Izhikevich

Contributing Editors Rhea Gandhi, Heidi Shen, Arthi Matrubutham, Amrita Moturi, Jade Nam, Rinna Yu, Aidan Zhang, Briani Zhang, Jesse Zhang Staff Advisor Mrs. Mary Ann Rall Scientist Review Board Members Megan Aubrey, Ricardo Borges, Pranjali Beri, Daniel Garcia, Caroline Kumsta, Tapas Nag

Letter from Presidents and Editor-in-Chief Dear Reader, Perhaps our last issue of this decade, Issue 10.2 holds a certain emotional significance among our team. Indeed, JOURNYS has seen its fair share of student leaders and contributors who have brought their diverse interpretations of the natural world to the table. Now twelve years from our founding, JOURNYS has built a collaborative community of students connected by an enthusiasm for scientific discovery. JOURNYS has expanded beyond our bi-annual publications — this past summer, we taught middle schoolers about scientific journalism and graphic design through the JOURNYS & iGEM Summer Camp. We also founded the Young Scientist Club at the La Jolla Public Library, where we taught elementary schoolers about a wide variety of science topics through exciting hands-on activities. It is our hope that JOURNYS can cultivate a lasting passion for the sciences and promote scientific literacy not only in high school students, but also the general public. None of this would be possible without our incredible JOURNYS team or the support from our advisor Mrs. Rall and the generosity of the San Diego American Chemical Society. We’d also like to extend our gratitude to our readers: thank you for taking the time to read these pages; we hope that you’ve found something that piques your interest! Happy reading, Johnny, Claire, and Katherine


Journal of Youths in Science

2019-2020
