Volume III, Issue III


April 2017

The University of Miami’s FIRST Undergraduate Scientific Magazine



ADVERTISE with UMiami Scientifica


We’re the first undergraduate scientific magazine at the University of Miami. Have a business that you’d like to share with our readers? For more information contact: scientificabusiness@gmail.com


contents

News
Malnutrition, 6
HHMI, 10

Capturing Science Through Photography
Through the Looking Glass, 12

Innovations in Science
MEGMA, 16
Artificial Intelligence in Healthcare, 19
Retinoblastoma, 20
Student Profiles, 24

Ethics in Science
Genetic Modifications, 27
P-Hack, 28
Designer Babies, 30

Health Science
Gym, 34
Heart Disease, 26
Happiness Hacks, 38
Blood, Sweat & Less Tears, 40
Great Minds Die Alike, 42

In Loving Memory of David Lin

This issue is dedicated to remembering the life and legacy of our Innovations editor, David Lin. Our staff described David as driven, brave, versatile, innovative, and an all-around incredible person. Our cover honors the strength David represented, as well as all of those who have faced leukemia and lymphoma. While we mourn our loss, we consider ourselves lucky to have known him. We extend our condolences to all of those affected by the painful loss of a truly remarkable ‘Cane.

“when you lose all of your feathers, fly higher”

I have experienced the pain of losing my mother and older sister to a disease that has plagued our world. It is with a heavy heart that I address the reader this issue. Recently, our university community lost David Lin: a trusted friend, peer, colleague, and brother. I had the esteemed pleasure of knowing David as a friend, as my advisee, and in my role with this magazine. David's presence on our campus was felt in and outside the classroom. His memorial service was filled with his peers' admiration and heartfelt loss. Even now, as I write this, it is difficult to articulate the impact that he has had on all of us. He was a humble individual whom everyone knew, not because he wanted to be known but because we all wanted to know him. I personally witnessed his development from a shy and reserved freshman into the confident and driven editor of this publication. Despite all that he was going through, he still found the energy and time to cheer others up and to continue his voice in our magazine. In our last issue, David wrote about "Plants as Communication Devices" and showed us that, despite his personal struggles, his mind was both focused and thirsty for innovations in all of the sciences. David brought out the best in all of us, and we will forever be grateful. I realize that I will never again get to ease his concerns or jokingly poke fun at him and receive a beaming smile in return. However, I smile every time I think of these moments. He has left a lasting impression on me, and I will forever be a better man for knowing him. For those of you seeking comfort, know that the section he took so much care in establishing within Scientifica will live on, as will his enduring spirit. Every time I read of a new discovery, I'll think about my friend and how his legacy will inspire future generations of eager scientists, and I invite you all to do the same. His devotion to his family, friends, colleagues, and alma mater makes him forever a Miami Hurricane. This issue, I leave the reader with the courage to continually ask the important questions and to trust that science will someday put an end to this horrific disease. Thank you to all who made it out to the first of our three "Good Eats" fundraisers, N2ice Cream. We raised part of our $1,000 goal toward our American Cancer Society (ACS) donation in David Lin's name for this year, and we look forward to continuing this fundraising series for years to come, with a percentage of our proceeds going to the ACS. Please come out and enjoy our food, just as David would have.

Roger I. Williams Jr., M.S. Ed Director, Student Activities Advisor, Microbiology & Immunology Editorial Advisor, UMiami Scientifica

Dear Scientifica Reader,

This issue marks my last semester as Editor-in-Chief. Following this issue, I will be handing over the reins of the magazine to the young, innovative minds that drive this publication forward with their endless enthusiasm. I learned a lot during my tenure with this magazine. I watched the passions of our members take form in beautiful and informative works of scientific discourse. I witnessed the growth of an organization fueled by the love of understanding the world around it. Most importantly, I met the future leaders of our scientific community: future doctors, engineers, and scientists, but leaders all the same. I am inspired by these people who have put so much of themselves into the pages of Scientifica. I hope that this magazine likewise inspires you, our reader, to embody the principles to which the University of Miami commits itself: the diversity of thought, the freedom of inquiry, and the search for truth.

Henry Mancao Neuroscience, Microbiology & Immunology and Economics Class of 2017 Editor-in-Chief, UMiami Scientifica

On January 7th of this year, our Scientifica family suffered a huge loss: the passing of our Innovations editor, David Lin. David published over nine stories across four issues, and one of his last stories, which I had the honor of co-writing during this past fall semester, became the cover story for our Zika issue. David embodied all of the values upon which Scientifica was founded: he was creative, innovative, dedicated, and hardworking. David was an excellent mentor to the people he oversaw, and, as I can say about both Lin brothers, he never saw good enough as enough. I had the honor of seeing his evolution through the magazine, from a very shy and humble writer to an empowered and well-respected editor. His section reached new heights under his leadership. David Lin fulfilled many roles on this campus, and he was always on the move. If he wasn't pushing a box full of magazines from Cox to Hecht and Stanford with Rick and Aalekhya, he was running late for a COSO meeting, and if he wasn't late for that, I know he was heading up to see his COISO fam. But I think the two roles he was best at were being a friend to all who knew him and being a great twin brother to his other half, Rick. Upon hearing the news, the editorial board of Scientifica thought long and hard about how to honor David. We would think of one idea, scrap it, and try to come up with something better. Soon we realized we would be scrapping ideas forever if we didn't decide on one. So, Scientifica has decided to honor David in the best way we know he would want us to: by giving back in his name. When asked what he was hooked on for our latest issue, Hooked, David replied "good food." So, Scientifica will be holding two more good-food fundraising events, where every single penny will go toward our goal of $1,000 or more for the ACS in David's name. Lastly, I'd like to thank those of you who came out to our N2ice Cream fundraiser, shared our event on Facebook, or donated from a distance!
We were able to raise $350 toward our goal, and we are endlessly thankful for your generosity. I would also like to thank Dr. Tegan Eve and Dr. Arun Malhotra for kindly providing us with the resources needed for this event; without them, it would not have been possible. We have been forever changed by David's presence in our lives. To all those impacted by David's passing: we share your heartbreak. We would also like to offer the entire Lin family our deepest condolences and ask that you always keep them in your thoughts. David, I dedicate my last issue with Scientifica to you, knowing that your legacy will always be kept in the hearts of many, including all of our readers. I hope we make you proud, now and always.

Jennifer Chavez Microbiology & Immunology ‘17 Managing Editor, UMiami Scientifica

editorial board
Henry Mancao, Editor-in-Chief
Jennifer V. Chavez, Managing Editor
Michaela E. Larson, Design Director
Joshua Kleinman, Copy Chief
Sneha Ramasamy, Copy Editor
Natalia Beadle, Photo Editor
Sumanth Potluri, Business Manager
Ambar Jivraj, Business Associate
Devi Nallakumar, Marketing Director
Dan Arndorfer, Distribution Manager
Corey Fehlberg, Distribution Associate
Christina Valcin, Distribution Associate
Roger Williams, M.S. Ed., Editorial Advisor
Victoria Pinilla, Board of Advisors Liaison

section editors
Gabrielle Eisenberg, Innovations in Science
Rick Lin, News
Justin Ma, Ethics in Science
Anum Hoodbhoy, Research
Srividya Kannan, Capturing Science
Renuka Ramchandran, Health Science

staff writers
Rachel Colletti
Steven Lang
Ramya Radhakrishnan
Chidera Nwosu
Alyssa Lafitte
Grant de la Vasselais

contributing writers
John Tsatalis
David Wolber
Shivesh Kabra
Trevor Birenbaum

designers
Manuel Pozas
Samantha Mosle
Melissa Huberman

radio show
John Wiltshire, host
Lydia Livas, host
Kellen McDonald, contributor
Matthew Aldrich, contributor

Empty Calories, Empty Stomachs:

The Global Burden of Malnutrition - John Tsatalis


Hunger: a universal concept that once drove humans to a nomadic existence of hunting animals, and then once again to settle and begin farming. It is a feeling that transcends race, border, and time. Eliminate hunger, and humans begin to populate the earth, build cities, produce science, and create works of art that still exist today. Acquiring food defines the workings of our daily schedule, the intricacies of international trade, and even the dynamics of how we fight our wars. Within the context of a society where food is relatively cheap and easy to find, it is hard to grasp the weight of hunger. We are desensitized to the importance of a proper diet. Instead, most conversations center on cuisine that is not up to par with our standards. "Not enough flavor." "Too many carbs." "Too dry." "Brussels sprouts, really?!" We take our food for granted until our mothers chide us to remember those who go hungry. The classic definition of hunger is tied closely to the feeling of fullness. Modern science has revealed that this notion is deeply flawed; there is much more to a proper diet than caloric intake. What we eat is just as important as how much we eat. Any biochemist could scientifically and mathematically explain the delicate interplay of proteins, carbohydrates, and fats necessary to optimize the body's functioning. Unfortunately, there is a disconnect: 85 percent of Americans do not consume the recommended daily intakes of vitamins and minerals. In wealthy countries around the world, shifting diet patterns are leading to poor nutrition. The result? A compromised immune system, stunted growth in children, soaring health care expenses, and a loss of productivity. In the United States, malnutrition afflicts two very different groups: those who have access to healthy food but choose unhealthy alternatives, and those who lack the access and means to eat healthy.
Changing the habits of the first group presents a much more manageable challenge than helping the second. On one hand, you can picture this group as the one friend who orders fast food too often and can recite the overhead menus from Sausage McMuffin to Double Decker Taco Supreme. At the other extreme are diets that claim to be the final say in weight loss. The infamous Atkins diet asks users to cut carbohydrates, while others restrict intake of critical macromolecules. Of course, it is also impossible to ignore the prominence of protein powder and pre-workout supplements if you ever step foot into a gym. Science has shown that intentionally creating an imbalance in our diets can have harmful effects, such as disrupting the body's production of ATP and leaving it unable to synthesize proteins that require essential amino acids acquired only through food. What most people do not realize is that a normal, healthy diet will accommodate both those trying to bulk up and those trying to lose weight. Taking extra steps is unnecessary and costly. Most importantly, nutrition is highly personal: no two people's dietary needs align exactly. The other form of malnutrition in the United States afflicts the poor. The erosion of the middle class over the past several decades and the disproportionate replacement of low-income jobs since the 2008 Great Recession have resulted in growing segments of the population who simply cannot afford to eat healthy food. Food deserts, areas where healthy food choices are not available for miles, compound the issue. Vast areas of the United States harbor tens of millions of people who live below the poverty level. Appalachia, the Black Belt stretching across much of the South, and the many Native American reservations speckling the West are an unrecognizable reflection of American prosperity. These people are tethered to their situation by a lack of food options, their potential stunted by limiting diets. On a more personal level, the city of Miami has one of the largest gaps between rich and poor of any city in the United States. The homeless and the impoverished are quite literally living in the shadows of our skyscrapers. Programs for education and resources for acquiring healthy food are essential to help lift the poor from the cycle of hunger. Expanding our perspective to a global level, the picture of malnutrition becomes even bleaker. Asia contains approximately two-thirds of the world's malnourished, while Sub-Saharan Africa has the highest prevalence of malnutrition. The impact is evident in the global standing of countries in these regions. This extreme malnutrition also has a tangible impact on the young: one in four children in the world is physically and mentally stunted because of malnourishment, which weighs on their prospects in an increasingly competitive global business environment. India has the most undernourished people of any country in the world, while China's rapid economic ascension has resulted in a 50 percent reduction in its undernourished population (150 million people). Owing to the disparity between the urban wealthy and the rural poor in China, comparative scientific studies have made startling revelations about the impact of hunger and malnutrition.
Most children do not receive the calories they need, but even those who do often do not eat the right things to stay healthy. Nearly 50 percent are anemic due to a lack of iron in their diet. Even modest government subsidies and attempts to feed the impoverished are met with distrust and apprehension by the lower class. The result has been that rural children in China trail their urban peers in cognitive and motor skills. This alarming trend is echoed across the developing world and has deep implications for equality in these countries. The relationship between climate change and nutrition is not immediately evident, but global warming will adversely affect weather, agriculture, and ultimately health. In poor countries, agriculture is a large sector of the economy. Higher temperatures, rising sea levels, and natural disasters have the potential to wreak havoc on crop yields. The result could be a 50 percent increase in the impoverished population of the world, from roughly 700 million to over one billion. The impact would set off a chain reaction, ultimately leading to higher food prices across the globe and an inevitable blow to individuals' ability to afford and access healthy, nourishing food. Our responsibility to mitigate this impact is at once ethical, economic, and humanitarian.

MINORITY STUDENTS IN HEALTH CAREERS MOTIVATION PROGRAM
UNIVERSITY OF MIAMI MILLER SCHOOL OF MEDICINE

The Minority Students in Health Careers Motivation Program (MSHCMP) promotes diversity in the health and allied health professions by providing students from underrepresented backgrounds with an opportunity to develop skills that will increase their competitiveness for admission to schools of medicine.

MINI MED SCHOOL EXPERIENCE

Designed to be a mini first-semester medical school experience, the Motivation Program is a full-time, seven-week program that focuses on enhancing strengths and minimizing barriers that may limit participants from being competitive applicants for medical school. Students receive classroom instruction in select science courses from the medical school curriculum, shadow physicians, and attend supplemental workshops that help them develop the necessary skills to compete. Upon successful completion of this program, which runs from Sunday, June 4th, 2017 through Friday, July 21st, 2017, each participant will have a holistic perspective of his or her readiness for medical school.

The Howard Hughes Medical Institute @ UM - Steven Lang

The laboratory is the cradle of discovery and the altar of science. Whether waist-deep in an Everglades marsh or seated behind a cell culture hood, the laboratory is where scientists work to unpack the mysteries of the natural world. So why does the word "lab" strike dread into the hearts of most college students? Perhaps it has something to do with how science is taught. Most traditional teaching labs dismiss the spirit of inquiry on which science is built and instead reduce "scientific investigation" to a series of reproducible steps resulting in a specifically prescribed outcome. The University of Miami is breaking out of the mold of traditional laboratory education by trailblazing an innovative new approach: the Howard Hughes Medical Institute (HHMI) Integrated Chemistry and Biology lab. We sat down with Dr. Malancha Sarkar, a biology faculty member, to learn more about the HHMI integrated laboratories, which foster scientific thinking and prepare students for future work in serious research settings. The HHMI integrated laboratory was the brainchild of two UM faculty members: biology professors Dr. Michael Gaines and Dr. Dave Janos (retired). Gaines and Janos envisioned an experience in which students work in small-group settings to develop original hypotheses, which can then be tested with the guidance of faculty. They presented their idea to HHMI, one of the largest international funding organizations for biomedical research. According to Sarkar, the Institute received the proposal well for its unique approach in deviating from "cookbook science." With only six sections of classes, students participating in HHMI labs are hand-selected by UM faculty over the summer term. Students this semester have been working tirelessly on a variety of projects. In one section, students are working to create amphiphilic fluorescent nanoparticles to track and label neurons in zebrafish.
In another section, students are investigating how inorganic nanoparticles serve as catalysts in the breakdown of charged environmental toxins. Sean Walson, UM freshman and HHMI student, appreciates the "large-scale implications in terms of what our nanoparticles can be used for on both a medical and industrial level." According to Avi Botwinick, UM freshman and HHMI student, "the flexibility in HHMI lab to self-direct your learning makes for a more engaging classroom experience." Perhaps what most distinguishes the HHMI lab from a traditional teaching lab is that neither the student nor the instructor knows the outcome of the investigation. Sarkar praised this concept for teaching students that "in research, there are more failures than success. Students find that sometimes results of their experiment do not support their hypothesis." She went on to say that not knowing the outcome is the most exciting part of the course because it gives students a true glimpse into the dimly lit workshops in which scientists work and even stumble. The most well-suited students, Sarkar explained, "are naturally motivated and curious to know how things happen."



through the looking glass: Refraction in Optics - Natalia Beadle


Photography, although generally recognized as an art form, is also a science. Photography can be described as the recording of light or other electromagnetic radiation to create lasting images. This can be done using an electronic image sensor or a chemical light-sensitive material, such as film. The development of cameras, and many photography techniques, relies on principles of physics and other sciences. One of these techniques is the use of refraction, which can create stunning, wide-angle images and is frequently used in abstract photography. In these glass sphere photos, there are some prominent visual effects: the object seems bent, and the image appears either magnified and right side up, or zoomed out and inverted. These effects are essentially all caused by refraction, which occurs when a wave enters a medium that has a different density. It can occur with various types of waves, including sound, water, and, of relevance to this article, light. The varying densities of the media change the speed of the wave, causing it to bend at the interface. The extent to which a wave bends depends on the refractive indices of the media through which it passes. Because glass, with a refractive index of about 1.5, is optically denser than air (refractive index = 1), the wave bends, distorting the images shown in these photos. The angle of refraction also depends on the angle of incidence. When the angle of incidence is greater than the critical angle (defined as the angle of incidence at which the angle of refraction equals 90 degrees), total internal reflection occurs, and the light wave bounces off the interface at an angle equal to the incident angle. Refraction also causes the inversion of the image. This occurs with converging (or convex) lenses, which include camera lenses and our eyes; with those types of lenses, the image is usually corrected so that it seems right side up.
In a camera, several mirrors and/or prisms "correct" the image, and in humans, the nervous system flips the image formed on our retinas. Inversion occurs because convex lenses are curved on both sides, causing the light rays that enter on one side to converge at a point on the other side. The point at which the light rays meet is the focal point. Past the focal point, the light rays continue on the opposite side, thereby inverting the image. This image inversion can be seen in most of the glass sphere photos. The magnification seen in a few photos can be explained by refraction as well. Because the image is refracted through the convex lens, the light converges toward our eyes, making the object appear bigger. However, in many cases, the object actually appears smaller. The glass sphere works similarly to a fisheye lens, which is used to capture wide-angle photos. Both types of lenses take in a very wide field of view, but the light rays that enter are refracted by the curved glass, causing the image to appear smaller. The distance of the object from the lens (glass sphere) dictates whether the object is magnified or zoomed out. When the sphere is close to an object, light rays can only enter from a small field of view, and the image we "see" is virtual and magnified. When this occurs, the image will also be right side up, because the light rays do not pass the focal point, so they do not converge and travel in the opposite direction. However, when the sphere is far from an object, the object will appear zoomed out, because light rays will enter from a wider field and are focused through the lens. Refraction is an optical phenomenon that is utilized extensively in photography. Although this article and its accompanying photos focus on only one tool, the glass sphere, there are many other ways to use optical physics in photography.
You can try them out yourself with tools such as prisms or fractal lenses, or even household objects such as wine glasses and water droplets.
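The two effects described here can also be explored numerically. Below is a small Python sketch of Snell's law and the thin-lens equation; the glass and air indices match those in the article, while the focal lengths and object distances are made-up illustrative values (a glass sphere is really a thick lens, so treat the thin-lens part as a simplification):

```python
import math

def refraction_angle(n1, n2, incidence_deg):
    """Snell's law, n1*sin(t1) = n2*sin(t2): returns the refraction
    angle in degrees, or None when total internal reflection occurs."""
    s = (n1 / n2) * math.sin(math.radians(incidence_deg))
    if s > 1.0:
        return None  # total internal reflection: no refracted ray
    return math.degrees(math.asin(s))

def critical_angle(n1, n2):
    """Angle of incidence at which the refraction angle reaches 90 degrees
    (only defined going from the denser medium, n1 > n2)."""
    return math.degrees(math.asin(n2 / n1))

def magnification(f, d_o):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the
    magnification m = -d_i/d_o. Negative m: inverted; |m| > 1: magnified."""
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)
    return -d_i / d_o

# Entering glass (n = 1.5) from air (n = 1.0) at 30 degrees: bends toward the normal.
print(refraction_angle(1.0, 1.5, 30))   # about 19.5 degrees
# Leaving glass, rays steeper than the critical angle are trapped inside.
print(critical_angle(1.5, 1.0))         # about 41.8 degrees
# Object far outside the focal length: real, inverted, reduced image.
print(magnification(f=5.0, d_o=50.0))   # about -0.11
# Object inside the focal length: virtual, upright, magnified image.
print(magnification(f=5.0, d_o=3.0))    # 2.5
```

The last two calls mirror the sphere-far and sphere-near behavior described above: far objects image inverted and smaller, near objects upright and magnified.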

Making Electronic Music Great Again - David Wolber


Keith Emerson performing in Tuscaloosa, 1974. The device in front of him is the Moog Modular.

In the year 1965, concertgoers largely knew what to expect to find on stage when their favorite artists played. A guitarist, bassist, singer, and drummer were standard fare, with higher-end productions featuring perhaps a string or horn section. Through the late 1960s, though, a new frontier in musical instruments was being explored by Bob Moog, who developed the first commercial synthesizer. Large and complicated beasts, Moog's synthesizers sat in studios for the first half-decade they were in production, as composers such as Wendy Carlos and Rachel Elkind explored the sonic capabilities of these esoteric instruments. The Moog synthesizer was a daunting instrument to learn, and for years it remained useful in studio recordings but useless in live performance due to the complexity of its operation. This lasted until August 29, 1970, when Emerson, Lake & Palmer premiered at the Isle of Wight Festival, hauling out a massive Moog modular synthesizer. The presence of this instrument stunned the audience: it towered over the stage with dozens of wires protruding from it and lights flickering all over. ELP's keyboardist, Keith Emerson, stood at the synthesizer, giving a type of performance not seen by wide audiences before. Furiously plugging and unplugging cables, turning knobs, and flicking switches, all in addition to playing the attached keyboard, displayed a new type of musicianship. The thundering bass tones and soaring melodies that came out of this machine stunned audiences while not subtracting from ELP's rock sound. News spread of their engaging live shows (due in no small part to the novelty of their instrument choice), and Emerson, Lake & Palmer began selling out 20,000-seat stadiums before they had even released an album. In 1971, Bob Moog released the Minimoog, a much smaller and cheaper instrument than the Moog modular that provided much of the old system's sound with a fraction of its complexity.
With the release of the Minimoog, the synthesizer's place in the public consciousness became firmly cemented, with musical applications ranging from ABBA to Snoop Dogg and countless others. What made the Minimoog successful over the Moog modular system was that its connections were made internally. A modular synthesizer consists of modules, such as pitched oscillators, filters, sequencers, and control-signal contour generators, that sit beside each other, unconnected until the performer connects them. It was the job of the synthesist to send audio from one module to another while also connecting control signals to give shape and direction to the sound. A modular synthesizer offers nearly limitless capabilities in terms of music production; it is limited only by the number of sources, modifiers, and effects within the system, as well as the ability of the synthesist to program new synthesis architectures quickly. With the Minimoog and the vast majority of later synthesizers, however, one fixed audio synthesis architecture was provided on any given instrument. All of the connections previously made with cables were created in silicon, and many settings previously programmed by the synthesist were set at the factory, outside of the keyboardist's control. What this gave the musician in ease of use came at the cost of flexibility and scalability. This has run counter to the progression of electronic music as a whole. Krautrock, an electronic music style from the 1970s that heavily utilized modular synthesizers, oozes with nuance. Synthesists were able to refine their sounds in an intimate way that their later fixed-architecture counterparts couldn't match. House and techno music began with a single sequenced melody put to a simple, repeated drum beat with little variation in the underlying sounds. This simplicity was borne of the limitations of the hardware used; the music was simple because the instruments could not support much more than that. But later, popular electronic styles like trance, jungle, and electro began placing emphasis on the novelty of sounds used, within more varied song structures. This came at the same time as the exponential growth in digital electronics and the processing capabilities of personal computers. The concept of music "production," as opposed to music "performance," was created as a result. Using a computer, the music producer is free to define their own audio synthesis architectures while sequencing them however they see fit. The musician is then free to write the music they desire, telling software instruments what to play and how to play it. Unfortunately, settings in digital audio workstations on PCs cannot practically be changed in real time by a performer. A high degree of complexity can easily overload the PC's processor during real-time processing, causing audio dropouts. Even if a computer could be relied upon to operate consistently for the musician, the peripherals available (e.g., keyboard and mouse, button controllers) are too far removed from the music production process to provide meaningful expression and dynamics in performance. As a result, expression and dynamics are programmed in by the producer, and the final song is rendered to a file that is played back later. As electronic music became more complicated and more songs needed to be rendered (as opposed to played in real time), electronic music performances became DJ sets, where rendered music is mixed through by the DJ.
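The source-modifier-control architecture of a modular system can be sketched in code. The following Python toy (my own illustration, not real synthesizer firmware; the module names are invented) treats each module as a generator and "patching" as plugging one generator into another:

```python
import math

SAMPLE_RATE = 44100

def oscillator(freq):
    """Sound source module: a sawtooth wave, yielding samples in [-1, 1)."""
    phase = 0.0
    while True:
        yield 2.0 * phase - 1.0
        phase = (phase + freq / SAMPLE_RATE) % 1.0

def lowpass(source, cutoff):
    """Modifier module: one-pole low-pass filter patched after `source`."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / SAMPLE_RATE)
    state = 0.0
    for sample in source:
        state += alpha * (sample - state)
        yield state

def decay_envelope(seconds):
    """Control-signal module: exponential decay from 1.0 toward silence."""
    level = 1.0
    coeff = math.exp(-1.0 / (seconds * SAMPLE_RATE))
    while True:
        yield level
        level *= coeff

def vca(source, envelope):
    """Voltage-controlled amplifier: the control signal shapes the audio."""
    for sample, level in zip(source, envelope):
        yield sample * level

# "Patching" is just composition: oscillator -> filter -> VCA.
patch = vca(lowpass(oscillator(110.0), cutoff=800.0), decay_envelope(0.5))
first_second = [next(patch) for _ in range(SAMPLE_RATE)]
```

Swapping the filter for another module, or routing the envelope to the cutoff instead of the amplifier, is a one-line change, which is exactly the flexibility the article attributes to modular systems.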
While it is true that DJ-ing in all its forms (turntablism included) is an art of its own, the act of performing a song is lost to much of contemporary electronic music. Thanks to recent advances in programmable logic and in translating high-level DSP (Digital Signal Processing) code into HDL (Hardware Description Language) code, it doesn't have to be this way. Field Programmable Gate Arrays (FPGAs) are simple devices rarely larger than a square inch that contain billions of logic gates with interconnects that can be reprogrammed. AND and OR logic gates, when interconnected in great numbers, can perform virtually unlimited operations on binary data. Born from a patent awarded in 1992 and developed for a US Navy experiment, FPGAs allow for digital system design at the very lowest level. Programming FPGAs with a Hardware Description Language (HDL) isn't the same as programming a standard processor on a PC. It is using a programming language to describe the circuits that make up the processor in a PC itself, as well as its RAM, GPU, and any other digital architecture onboard. For a modular synthesizer, this means there is potential for modules to be designed that behave fundamentally differently from the way a computer does. Far more computationally intensive signal processing tasks can be implemented than were possible with a processor-based system like a PC. No system overhead goes to tasks other than what is specifically desired in the module. Even better, every module in the synthesis system would have its own FPGA, making every Digital Signal Processing (DSP) operation independent of the others, so that no module can slow another down with its processing needs the way a computer does. The issue with programming in an HDL is that it is hard, and that gets in the way when trying to refine an algorithm. The very nature of describing something as complicated as circuit behavior using numbers, letters, and symbols means that it is very difficult to find where issues lie, especially when many hardware issues are caused by things not being synchronized in time. Thankfully, synthesis tools exist that allow the programmer to use high-level languages like MATLAB to organize their thinking. The nitty-gritty code can be taken care of in HDL, while DSP functions can be written in MATLAB and then translated into HDL. HDL works as systems of "modules" in code (exactly as they are called in the world of synthesizers), so a MATLAB-to-HDL compiler lets the programmer use an HDL "black box" that performs some complicated DSP function the programmer did not have to write in raw HDL, but rather in an easier language refined for the DSP task at hand. A further level of abstraction above MATLAB is Simulink, which, along with a DSP toolbox, allows the programmer to generate code from block diagrams, a frequent form of communication in the DSP community. Some examples of recent algorithms are physical models of single-reed woodwind instruments, bowed stringed instruments, nonlinear effects, special reverberators, and delay-based effects. This stands in sharp contrast to the world of modular synthesis, where analog modules reign supreme, commanding high premiums due to the large numbers of discrete components performing relatively small functions. Twenty-five discrete transistors packed together take up as much space as an FPGA that contains billions.
One module using an FPGA in a modular audio synthesizer system can perform the function of an entire modular system within it. The functions that a computer applies to a musical sequence in a digital audio workstation (the note sequencing, the control changes, the parameter changes within instruments) can be created and handled by dedicated modules in the system. This frees the synthesist to interact with the sound more: rearranging the system's higher-level architecture by re-patching cables, making real-time changes to low-level parameters of the sound via knobs, switches, and buttons, and playing samples or even other songs for live remixing. In the same way that translating MATLAB code into HDL lets the programmer write in a form closer to what they are trying to describe, building an audio processing system onto an FPGA-based module within a larger modular synthesis system lets the synthesist focus more on the process of creating music than on maintaining a patch. If audio hardware companies had managed the complexity of audio synthesis systems the same way semiconductor companies handled the complexity of HDL, then HDL would simply have become an easier version of itself, driving out possibilities in one direction in favor of another.

Artificial Intelligence in Healthcare - Chidera O. Nwosu

Artificial intelligence (AI) is the hallmark of applied science in its utilization and advancement of machinery. AI is the field of study concerned with software capable of intelligent behavior. It is the integration of mind and computer, the exhibition of mankind's greatest ingenuity, the mimicry of rationality and efficiency, and, most importantly, the future of healthcare. It seems like only yesterday that Mark Zuckerberg, founder of Facebook, introduced the world to Jarvis, the artificial intelligence assistant he set up in his own house. It was a feat so great that it momentarily galvanized further interest in the field of AI. It is curious how little notice people take of the AI in their everyday lives; simple pragmatism ensures that AI finds constant use. From Amazon's Echo to Microsoft's Cortana or even Siri, many different AIs are slowly working toward perfection. It is this interest in perfection that remains the catalyst for infusing AI into the field of healthcare. As AI research continues to evolve, so does its potential. I am convinced that, in the next few years, AI will be revolutionary as it suffuses into healthcare and finds uses in the field of medicine.

Babylon: A Case Study in AI

Research into AI illuminates fundamentals such as reasoning, knowledge, and language processing that culminate in human-like intelligence. Think about those long lines at your doctor's office, or the times you take it upon yourself to Google the cause of your ailments. AI may alleviate those problems through online consultations. Take, for instance, Babylon, a virtual health service that figuratively puts an AI doctor in your pocket. Babylon is the UK's premier mobile application offering AI consultations on two bases: the user's medical history and a body of medical knowledge.
Babylon functions through the input of information about the symptoms the subscriber is experiencing; these symptoms are analyzed (via speech processing) against a database of diseases. The application also performs additional tasks, such as follow-ups and medication reminders, and accounts for accurate

diagnosing. Imagine a world in which you could make a doctor's appointment and be seen at that exact time. AI's proficiency in diagnosing subscribers may very well lead to a decrease in those waiting room lines.

Moving Forward: The Employment of AI in Other Procedures

AI may also have significant implications for genomics. Computational technologies now being created are methodically trained to analyze patterns in genetic information, linkages to disease, medical records, and efficacious treatment plans for patient genomes. As stated on the landing page of Deep Genomics, the company's goal is to "predict the molecular effects of genetic variation, opening a new and exciting path to discovery for disease diagnostics and therapies." In addition, Craig Venter, one of the fathers of the Human Genome Project, is currently working on an algorithm that could reconstruct a patient's physical characteristics from their DNA. This work supports complete genome sequencing and the targeting of diseases in their early stages, and aims to ensure that treatment options are personalized for the patient. AI may also specialize in the art of mining data (such as medical records) to facilitate rapid health services; data management is key to keeping medical records intact. Another AI venture focuses on the development of pharmaceuticals. According to the FDA, a standard review of a new drug takes ten months, while priority review of a drug that addresses an immediate crisis takes six months. AI could be deployed to analyze the effectiveness of treatment options far more expeditiously. Gone may be the days of lengthy clinical trials and lavish spending of billions of dollars. Among the leaders of this AI revolution is Atomwise, a company that aims to create better medicines faster through research and machine analysis of millions of potential compounds.
This company is the first of its kind to implement AI as a multi-tiered approach to healthcare. Untapped potential is the only difference between now and a future with AI. As a society, we are moving one step closer to expanding the bounds of how we can care for humanity.
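To make the symptom-checking idea concrete, here is a hypothetical sketch of matching reported symptoms against a disease database. The disease entries, scoring method, and function names are invented for illustration and are not Babylon's actual algorithm.

```python
# Invented toy "database": each condition maps to its known symptom set.
DISEASE_DB = {
    "influenza": {"fever", "cough", "fatigue", "body aches"},
    "common cold": {"cough", "sneezing", "sore throat"},
    "migraine": {"headache", "nausea", "light sensitivity"},
}

def rank_conditions(reported_symptoms):
    """Score each condition by the overlap between reported symptoms and
    its known symptom set (Jaccard similarity), best match first."""
    reported = set(reported_symptoms)
    scores = {}
    for disease, known in DISEASE_DB.items():
        overlap = reported & known
        union = reported | known
        scores[disease] = len(overlap) / len(union) if union else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

For example, `rank_conditions(["fever", "cough", "fatigue"])` ranks influenza first. A production system would, of course, weight symptoms, incorporate medical history, and handle natural-language input.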



A NEW PROTEIN

The retinoblastoma protein is a tumor suppressor that negatively regulates the cell cycle. The protein is coded for by the retinoblastoma gene family, whose three members are collectively referred to as 'pocket proteins'. The significance of the retinoblastoma protein was first discovered through the examination of childhood retinoblastoma, a pediatric neoplasm of the eye. Scientists studying the malignant cancer cloned retinoblastoma genes and determined that the protein product acts as a tumor suppressor: when the retinoblastoma gene is deleted or mutated, the risk of cancer formation increases. This breakthrough catalyzed further study and ultimately led to the realization that other human neoplasms also show deletion or modification of the retinoblastoma gene. Retinoblastoma proteins are functionally expressed at the G1 checkpoint, where they block progression into S phase. The protein prevents gene transcription by binding directly to promoter regions and by physically controlling chromatin. Exactly how the retinoblastoma protein regulates the cell cycle remains unclear, but it is evident that loss of retinoblastoma protein function deregulates the cell cycle and can lead to tumorigenesis.

A GENE'S MUTATION

The retinoblastoma protein is closely tied to Knudson's two-hit hypothesis, the theory that multiple mutations are necessary for the genesis of cancer. Specifically, Knudson proposed that tumor suppressor genes behave recessively and that biallelic mutation is involved in carcinogenesis. This was evidenced in children with inherited childhood retinoblastoma: the first mutation in the afflicted child's genome was inherited from a parent, while a second, acquired mutation would quickly lead to cancer. This motif of two distinct mutation events is echoed in many different cancers and has had a profound impact on the way carcinogenesis is understood.
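Knudson's two-hit logic can be illustrated with a toy Monte Carlo model (the cell counts and mutation rates below are invented for illustration; real rates differ by orders of magnitude):

```python
import random

def fraction_cells_with_two_hits(inherited_first_hit, n_cells=50_000,
                                 p_somatic=0.01, seed=1):
    """Toy Monte Carlo of Knudson's two-hit hypothesis. Each allele of a
    tumor suppressor is lost somatically with probability p_somatic (an
    invented rate). A cell can seed a tumor only when BOTH alleles are
    lost; inheriting one mutant allele means one somatic hit suffices."""
    rng = random.Random(seed)
    hits_needed = 1 if inherited_first_hit else 2
    affected = sum(
        1 for _ in range(n_cells)
        if all(rng.random() < p_somatic for _ in range(hits_needed))
    )
    return affected / n_cells
```

With these invented numbers, the inherited case leaves roughly p (about 1 in 100) cells affected versus roughly p squared (about 1 in 10,000) for the sporadic case, a hundredfold difference that echoes why inherited retinoblastoma tends to appear earlier and in both eyes.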
Our current understanding is of a process involving the activation of oncogenes and the deactivation of tumor suppressor genes. For a long time, scientists believed the retinoblastoma protein's influence was limited to rare cancers like retinoblastoma itself. Evolving research has overturned this thesis: retinoblastoma dysfunction plays a role in virtually all cancers.

A FUTURE OF USE

An article published in the Journal of Investigative Dermatology studied the oncogenic pathways in Merkel cell carcinoma to explore possible gene therapies. The goal of the paper was to determine the specific mechanisms leading to the disease by studying somatic mutations (via sequencing) and biomarkers on mutated pathways. Although sun exposure has long been considered the major etiological factor in the genesis of Merkel cell carcinoma, it was not the focus of the study. Rather, the investigation centered on evidence suggesting that Merkel cell polyomavirus is one of the main drivers by which Merkel cell carcinoma develops. Merkel cell polyomavirus is the only polyomavirus currently known to be associated with a cancer, and even though it is commonly present on the skin of healthy adults, it typically causes disease only in the immunosuppressed. This could explain why the elderly represent the largest proportion of Merkel cell carcinoma patients. Merkel cell polyomavirus is found in over half of all Merkel cell

carcinoma cases, further reinforcing the sense that the virus could play an important role in the etiology of the disease. In the study, the genomes of a cohort of clinically diagnosed Merkel cell carcinoma patients were examined using whole exome sequencing, along with particular biomarkers of mechanistic significance. The presence of Merkel cell polyomavirus has major implications for retinoblastoma activity: the virus expresses T-antigens that adversely affect the activity of retinoblastoma proteins. While the mechanisms of this process are not fully understood, it is clear that the T-antigens inhibit retinoblastoma activity and thereby promote tumorigenesis. Interestingly, the researchers discovered that Merkel cell carcinoma patients can be either positive or negative for Merkel cell polyomavirus. Patients who tested positive for the polyomavirus had fewer genomic mutations and a higher survival rate than those who tested negative. The authors postulated that the polyomavirus-negative cases were clinically worse because they also exhibited a higher mutational frequency in retinoblastoma genes. According to the authors, identification of common pathogenic mechanisms provides a target for potential therapy. The retinoblastoma protein initially confused scientists: it is a very strong inhibitor of cell proliferation, yet it is stable in normal tissue cells at all times, whereas most cell cycle regulators degrade at particular points in the cell cycle in order to allow progression. The retinoblastoma protein can remain present in proliferating cells because its activity is controlled by cyclin-dependent kinases. These regulators use cyclic phosphorylation and dephosphorylation to toggle retinoblastoma proteins between active and inactive states, which is why heavy modification (in this case, phosphorylation) is identified during the late G1 phase of the cell cycle.
An imbalance in cyclin-dependent kinase activity can disrupt the regulation of retinoblastoma protein. Armed with this developing knowledge, scientists hope eventually to be able to address retinoblastoma protein dysfunction and thereby interrupt tumorigenesis. Other researchers are looking into directly modifying the retinoblastoma protein in order to adjust its activity. One proposed therapy suggests using cyclin-dependent kinase inhibitors, such as CDK4/6 inhibitors, to prevent the over-phosphorylation of retinoblastoma protein; this type of therapy is currently being tested by drug companies for the treatment of breast cancer. A second suggestion is to inhibit KDM5A, a protein associated with cancer pathogenesis that is regulated by retinoblastoma protein. Inactivating KDM5A has been shown to restore normal differentiation in cells and halt cancer proliferation. The difficulty with this approach is mitigating the cascade of changes enacted by the missing protein. The vital role that the retinoblastoma gene family plays in the etiology of cancer simply cannot be overstated. A small hiccup in the regulation of the protein, or a dysfunction brought on by mutation, can catalyze a multitude of cancers ranging from childhood retinoblastoma to deadly Merkel cell carcinoma. An evolving understanding of how the protein operates and changes conformation, and of the genetic mechanisms that eventually lead to cancer, has an incredible upside: novel therapies that can potentially play a role in cancer treatment.



The Medical College Admission Test (MCAT) Preparation Program is designed to help premedical students from underrepresented and underserved backgrounds prepare for the MCAT.

TAKE THE TEST WITH CONFIDENCE The MCAT preparation program is an eight-week course running from Monday, June 5, 2017, through Friday, July 28, 2017. It offers class lectures, taught by Kaplan Test Prep, on content found in the Physical Science, Biological Science, Psychology & Sociology, and Verbal Reasoning sections of the MCAT. Participants will also receive study tips and test-taking strategies that will help them prepare for the written portion of the exam. In addition to these lectures, students will attend seminars that offer insight into the medical school application process and shadow physicians weekly at one of the UM/JMH teaching hospitals.

ELIGIBILITY The MCAT program is a tuition-free, non-residential program open to college sophomores, juniors, seniors, and recent graduates who will be applying to health profession schools, specifically medical school. The admission committee will select 25 applicants whose applications demonstrate how they will benefit from participating in this program and who are likely to be competitive candidates for medical school. Applicants must have taken organic chemistry in order to handle the course material. Accepted students are required to submit a refundable $100 deposit with their enrollment packet. The deposit will be returned after satisfactory completion of the course, as determined by the program executive director, and proof of registration for the MCAT.


2016 MCAT Prep Program Participants

An intense study-based curriculum prepared these students for the Medical College Admission Test.

The application is available online at: http://diversity.med.miami.edu/summer-programs/mcat Completed forms should be submitted along with the following documents: 

Official academic transcript(s) from all college(s) attended.

Three (3) letters of recommendation from college professors.

Personal statement (specifics outlined in application).

Passport photo (2x2).

Complete applications must be received in the Office of Diversity and Inclusion (address below) by Friday, March 17, 2017. Only complete applications will be considered. Applicants will be notified of their program status via email on Friday, March 31, 2017. Each candidate will be evaluated on the following: 

Sufficient academic achievement to be competitive for medical school admission.

Application demonstrates attributes desirable in medical school applicants, such as maturity, leadership, altruism, compassion, and good communication skills.

Extracurricular activities in the health care field, such as community service, research, or employment.

Preference is given to applicants from underrepresented and/or disadvantaged backgrounds.

Office of Diversity and Inclusion Rosenstiel Medical Science Building 1600 NW 10th Avenue, Suite 1130, Locator R11 Miami, FL 33136 Ph.: (305) 243-7156 - Fax: (305) 243-7312 Web: http://diversity.med.miami.edu

Naomi Fields Williams College, MA Class of 2016

"Participating in the MCAT Program empowered me not only through aggressive Kaplan test prep, but also through a motivating cohort, inspirational insights, and engrossing physicianship activities. It was a holistically beneficial experience that further fueled both my desire and ability to become a physician. More than that, I know that the skills and connections that I attained during the program -- not to mention the benefits of my test score -- will far outlast this past summer."

Nareka Trewick University of Miami Class of 2016

“The MCAT Prep program at the Miller School of Medicine gave me more than the opportunity to prepare for the medical school entrance exam with Kaplan. It gave me a community of peers and mentors who continue to support me on my journey into the field of medicine. My MCAT exam score improved throughout the summer and I feel more ready than ever for medical school.”


Kamile Grace Willis
Major: Biochemistry/Molecular Biology and Women and Gender Studies
Hometown: St. Thomas, U.S. Virgin Islands
Favorite scientist: Marie Maynard Daly
Area of research: Dermatology

What advice would you give an undergraduate that might be interested in research? If you are truly interested in research, don’t hesitate, or think about it twice, just do it! The experience is rewarding academically. The amount of information and techniques you learn will benefit you in all lab classes you take at the U. You will be ahead of the game and it also stands out on your graduate school application.

What is your research about? Presently, I am involved in translational research with Dr. Joaquin Jimenez in the areas of chemotherapy-induced alopecia, alopecia areata, androgenetic alopecia, hair generation, and hair melanogenesis. My primary focus lies in alopecia areata. Alopecia areata is an autoimmune disease that results in patchy hair loss. It arises from genetic susceptibility and may be triggered by environmental stressors. I am deeply involved in the tissue culture and analysis of how these cells respond to forthcoming new treatments.

Why did you decide to do research in this area? I decided to pursue research in the area of dermatology for a very personal reason: the skin diseases that I faced as a child. As I grew older, I became very interested in skin and hair treatments, ultimately creating my own to remove the excessive bumps and dark marks from my skin. As a result, I desired to be part of a greater solution to bring new and advanced treatments to people who face skin and hair diseases.

What is the most challenging part about this research? The most challenging part about my research field is probably dealing with the animals that are used in our experiments. I have a soft spot in my heart for them, and it is difficult seeing how they are utilized for certain experiments.

Can you remember the first day in your lab? My first day of research was very nerve-wracking. I remember being completely confused as to how the research process goes. Fortunately, my PI Dr. Jimenez and my graduate assistant Gina Delcanto made my experience worthwhile, answering every single question that I had and making me feel comfortable in the lab. Being in research, you learn something new every day. Once you go into the lab with a very open mind, the possibilities are endless.

What techniques have you learned?
In order to participate in this study, I had to learn certain techniques such as western blotting, protein assays, tissue culture, and flow cytometry.

You're a sophomore and you already do so much. How do you balance school work, a social life, and lab work? In order to balance my school work, social life, and lab work, I create a schedule for myself every week. Sticking to this schedule allows for balance in most aspects of my life, although I always wish I could get one more hour of sleep in the mornings.

Andrea Wright
Major: Marine Science and Geology
Hometown: Madison, WI
Favorite scientist: Kathleen Crane
Area of research: Paleoecology

What advice would you give an undergraduate that might be interested in research? Definitely do it. Research will look good on any resume or graduate school application, but more importantly it will be a great learning experience. If you want to go into the sciences especially, making sure that you have some kind of hands-on experience is very important. Don’t be afraid to reach out to a professor whose research you’re interested in to get your foot in the door.

What is your research about? I’m currently involved in paleoecology research with the UM Department of Geological Sciences under Drs. James Klaus and Donald McNeill. My research focuses on analyzing ancient coral reef samples from the Dominican Republic and using them to assess the environment at their time of formation. Paleoclimate research is often used today to compare ancient environmental changes with modern climate change and see how we can better address it. I have been involved in selecting coral samples, cutting them, and drilling them to prepare them for x-ray diffraction and isotope analysis.

Why did you decide to do research in this area? I decided to do research in this area because it is not only fascinating, but very useful in the modern day and age. By studying our past and the earth’s history, we can better adjust to the changes that we are facing today and anticipate how our environment will respond. I like that my research is part of a larger field that will benefit the earth.

What is the most challenging part about this research? The most challenging part about this research is learning to use new tools and technologies that I am unfamiliar with. This makes the research both exciting and challenging. I am constantly learning new methods and how to use new equipment that will translate to other research I might do or jobs I may have.

Can you remember the first day in your lab? On my first day of research, I remember being very confused about the aims of the project and how the research process would work, but I slowly eased my way into it. I gradually became more comfortable with asking questions, using the equipment, and understanding the research of which I was a part. Research is a learning process, but that is part of what makes it so interesting and useful.

What techniques have you learned? I have learned a lot about analyzing coral reef samples and using tools for geological research such as a rock saw and drill.
When I conduct future research, whether in graduate school or as part of my career, I will now be more comfortable using these tools and working with coral samples.

You're a sophomore and you already do so much. How do you balance school work, a social life, and lab work? I am busy, but I just try to prioritize the way I am using my time. I make sure I plan out what I need to do and when, so if I have free time, I make sure I get done what I need to get done. If you manage your time correctly, you won’t have to lose much sleep or sacrifice a social life to do well in school.


The Advantages and Disadvantages of Genetic Testing - Alyssa Laffitte

Modern genetic testing has revolutionized medical technology. The results of a genetic test can confirm or rule out a suspected genetic condition, or determine the chance that a patient will develop or pass on a certain genetic disorder. Given the knowledge-generating potential of genetic testing, it is important to examine its advantages and disadvantages as a new medical technique. Genetic testing is advantageous because it helps individuals understand themselves and their families better, and it helps healthcare professionals create treatments tailored to patients' unique situations. On the other hand, it can invade patient privacy and has broader implications for the individual. Genetic testing allows individuals to learn critical information about their bodies. For instance, they can learn exactly how their body reacts to certain foods: a genetic test can reveal whether someone has an intolerance to a certain food or an allergy to specific ingredients, and can suggest which nutrients they should consider adding to their diet. When people know this information about their bodies, they can adjust their lifestyle accordingly so that they may live healthier and happier lives. Genetic testing can also help with preventative care. Tests can reveal predispositions for certain conditions and illnesses; with this information, doctors can help their patients adapt their lifestyle, environment, and diet so that they are not affected by the condition, or to limit its effects. Genetic tests may prove a tremendous boon for pharmacists, who could determine a specific tolerance and dosage for each individual patient with a simple test, and suggest the most effective drug at the optimal dosage given access to the patient's genetic information. Genetic testing can also shed light on the past, as patients track their family history through more than 700,000 autosomal markers.
Despite these benefits, genetic testing has several disadvantages. The biggest is the patient's potential loss of privacy. Third parties, such as law enforcement, insurance agencies, and social media networks, might be able to access genetic information without the consumer's consent. Since genes are hereditary, this information would expose one's family members to similar risks of privacy violation. Although the Health Insurance Portability and Accountability Act (HIPAA) requires that institutions secure and protect their patients' medical information, HIPAA does not apply in cases involving law enforcement. Insurance companies may use genetic data to increase their premiums or to deny coverage: an insurer may turn away a potential customer known to be predisposed to a condition whose treatment will be costly. Insurance pricing is based on statistics about the general population, using averages that allow insurance companies to make a profit; genetic testing lets insurers pinpoint high-risk patients rather than rely on national averages.

The knowledge of one's genetic identity may also lead to unnecessary medical treatment in the name of prevention. Patients in the past have had organs removed because they had a genetic predisposition for a certain disease or a certain kind of cancer. Reducing one's risk of developing a disease is beneficial, but increasing knowledge of genetic predispositions has, in some cases, led to inappropriate preventative care. The use of genetic testing also has broader social implications: it can degrade the value of a patient's life. If a patient has a genetic condition, they may be considered less "worthy to live" than a healthier individual. It may even affect people's opportunities for marriage, since potential partners will have the ability to seek "genetic compatibility." Genetic testing is thus nested within a complex web of advantages and disadvantages. On one hand, genetic testing can help people learn

more about themselves, their family members, and their ancestors. It also helps doctors and pharmacists prescribe individualized treatments, and it can help people adjust their diet, lifestyle, and environment to accommodate their genetic situation. On the other hand, genetic testing exposes people to certain risks as well. Test results are not truly private, because genetic data makes it easy to identify a person, and since genes are shared, this exposes family members and future descendants to potential privacy violations. Insurers may deny someone coverage because they are genetically predisposed to a certain condition. Although self-knowledge can be beneficial and can help people live healthy lifestyles, the lack of privacy may give way to discrimination against them, their family members, and their future descendants, and may not be worth the risk. If you are considering getting a genetic test done, please think carefully before you do so. Make sure that, in your specific case, the benefits outweigh the costs.

P-Hack or Perish - Grant de la Vasselais


A single faulty or fraudulent study can gravely undermine the public's trust in scientific authority and have everlasting negative consequences for people and policy. Consider the recent decline in childhood vaccination rates. In 2004, 34 cases of measles were documented in the United States. In 2014, the number of reported cases had risen to an astronomical high of 667, and in 2015 the numbers held at 188, with multistate outbreaks radiating from places like the Disneyland theme park in California. This disturbing resurgence of preventable disease is in large part due to lower rates of vaccination among children. Data from the 2011 National Immunization Survey found that in a large sample of Americans, vaccine coverage was distressingly low, ranging from 73 percent for the RotaC vaccine to 94 percent for the DTP3 vaccine. Additionally, the Pew Research Center recently concluded that up to 30 percent of Americans were either unsure about or did not agree with the medical consensus that no causal link exists between autism and vaccination. So, what happened? To begin, in 1998, a now infamous case study of 12 children conducted by Dr. Andrew Wakefield falsely concluded that there was a link between autism and the MMR vaccine, which protects children against measles. This created mass hysteria and concern surrounding vaccines. MMR vaccination rates plummeted in subsequent years, and while they have since recovered, the public's confidence in vaccination has been seriously undermined. Hence, reported cases of measles have been on the rise as the vaccination rate dips below the critical threshold needed to provide herd immunity. Meanwhile, the British Medical Journal has determined that Wakefield's study was, in essence, "an elaborate fraud" and that Wakefield had fabricated medical records in order to substantiate his claims. Numerous follow-up studies have failed to replicate Wakefield's results.
While Wakefield's is a rather extreme case of scientific malpractice, the publication of questionable research, much of which remains unchallenged, is disturbingly common. One of the biggest challenges to the veracity of scientific research lies in the nature of the scientific method itself: a well-designed study conducted by researchers acting in good faith sometimes comes to the wrong conclusions by mere chance. The majority of research fails to support the researcher's initial hypothesis, in that the data collected does not clear the threshold for statistical significance. And even among studies whose data provide strong evidence for a link between two variables, the results are often mere coincidence. This was the basis for a much-discussed 2005 paper by John P. A. Ioannidis titled "Why Most Published Research Findings Are False." Ordinarily, this would not be a problem, as replication studies will usually fail to reproduce first-time flukes. Unfortunately, the incentives facing researchers do not always line up with the interests of the scientific community or those of the public.

Academic scholars face institutional pressure from universities and research centers to publish frequently, which can create a sense of urgency to put out meaningful results even at the cost of reputability. Meanwhile, industry-funded studies generally show results favorable to the source of funding; contradictory results are swept under the rug, never to see the light of day. The pull of academic glory, job security, and funding is strong, and external and internal pressure can unconsciously bias even the best scientists' views of their data. An arsenal of tools and tricks is at their disposal to confirm hypotheses, from the cherry-picking of data to the practice of p-hacking. If a researcher is just on the fringe of reaching Fisher's (admittedly arbitrary) cutoff for statistical significance, a "p-value" of 0.05 (roughly, the probability of obtaining a result at least as extreme by chance alone if no real effect exists), imagine the temptation to cull a few outliers, or to rationalize a problematic subset of data as inessential to the research. There is abundant statistical and anecdotal evidence of this phenomenon. For example, in the 20th century, the accuracy of estimates of Avogadro's constant increased at a strangely slow rate. This was due to scientists' reluctance to challenge the work of their predecessors, leading them to lowball changes in their estimates to make them more acceptable to the establishment. Additionally, a 2015 examination of p-hacking in science by Megan L. Head, Luke Holman, Rob Lanfear, Andrew T. Kahn, and Michael D. Jennions, titled "The Extent and Consequences of P-Hacking in Science," found that the number of results that just barely meet the threshold for statistical significance far exceeds what would naturally be expected. While peer review is designed to weed out poorly conceived or poorly executed research, some peer review can be fraudulent or bought.
Even legitimate peer review often misses crucial mistakes, as reviewers are usually not compensated for their work and frequently skip over the statistical methods sections of papers. Many predatory and disreputable journals with official-sounding names are also willing to publish the work of quacks, for a fee. Some of these publications, such as the Journal of Computational Intelligence and Electronic Systems and the Aperito Journal of Nanoscience Technology, have even accepted outright practical jokes; one example is an article exploring “Fuzzy, Homogeneous Configurations” credited to co-authors Maggie Simpson, Edna Krabappel, and Kim Jong Fun. While this description of the problems facing research may sound alarming, the integrity of scientific inquiry as a whole remains high, especially among respected institutions. The rate of retractions in the scientific literature has increased significantly in the past decade but is still low, at approximately 0.02 percent. The goal of truth-seekers is not to discredit science, but to weed out the few bad apples that slow the trajectory of basic research and blur the public’s understanding of the scientific consensus. Structural and accountability problems do exist within scientific research, but the state of our empirical institutions is still strong, and the word of experts is not to be taken lightly.
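The arithmetic behind p-hacking is easy to demonstrate. The following sketch (ours, not drawn from any of the studies cited above) simulates experiments in which every measured outcome is pure noise, then compares an honest analysis that pre-registers a single outcome against a p-hacked one that reports whichever of ten outcomes happens to clear the 0.05 cutoff:

```python
import math
import random

def p_value(sample):
    """Two-sided z-test of 'mean == 0' for unit-variance data."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    # Normal CDF via the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def simulate(n_studies=2000, n_outcomes=10, n_subjects=30, seed=1):
    rng = random.Random(seed)
    honest = hacked = 0
    for _ in range(n_studies):
        # Every outcome is pure noise: there is no real effect anywhere.
        pvals = [p_value([rng.gauss(0, 1) for _ in range(n_subjects)])
                 for _ in range(n_outcomes)]
        honest += pvals[0] < 0.05    # pre-registered single outcome
        hacked += min(pvals) < 0.05  # report whichever outcome "worked"
    return honest / n_studies, hacked / n_studies

honest_rate, hacked_rate = simulate()
print(f"honest false-positive rate:   {honest_rate:.2f}")  # ~0.05
print(f"p-hacked false-positive rate: {hacked_rate:.2f}")  # ~0.40
```

With ten noise outcomes to choose from, roughly 40 percent of null studies can report a “significant” finding, which is exactly the inflation that the honest 5 percent error rate is supposed to prevent.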


GattaCAN’T: The Unsexy Truth about Designer Babies - Steven Lang


The 1997 hit film Gattaca depicts a dystopian future in which human embryos are genetically engineered to harbor only “desirable” traits. Children conceived through natural means are dubbed “de-gene-rates,” discriminated against, and marginalized. The completion of the Human Genome Project in 2003 was the harbinger of a revolution in our understanding of human biology and disease, and the recent rise of increasingly precise and effective gene-editing tools such as the CRISPR/Cas9 system has heightened public anxiety that a Gattaca-like future could be realized. Scientists themselves have remained steadfast in their resolve to heavily regulate the editing of human embryos. So how valid are these concerns? Are we really just a few gene edits away from engineering the “master race” of genetically modified humans that we see in Gattaca? For a trait to be amenable to “enhancement” in human populations via a technology such as CRISPR, two conditions must be satisfied. First, the trait must be heritable (controlled by genes), meaning that individual differences in the trait (such as variations in intelligence) can be attributed to individual differences in DNA variants. Certain human traits, such as eye color, conform very well to this condition; analysis has shown that up to 98% of the variability in eye color can be explained by variability in a person’s DNA sequence. However, many “desirable” human traits, such as personality, have remarkably low heritability estimates, indicating that much of the variation in these traits is due to environmental and other non-genetic influences. A technology such as CRISPR would be ineffective at modifying such a trait, because it can only work at the level of DNA, not a person’s environment. Second, for gene editing to be practical, the trait must be controlled by a limited number of DNA variants.
Most human traits are controlled by the additive effects of many DNA variants across the entire genome. Picture a mosaic made up of thousands of tiles. Now imagine that each tile represents a DNA variant, and that the mosaic formed by the tiles constitutes a trait, such as intelligence. The individual contribution of any single tile, or DNA variant, is too small to make trait manipulation practical. Much of the public’s confusion about genomics research seems to come from sensationalized and misleading media reports. Branding a gene implicated in a disease as “The Cancer Gene” or “The Alcoholism Gene” neglects the mosaic-like contribution of thousands of DNA variants, along with environmental influences on the phenotype. This has given the public a distorted view of the power of single genes to shape human traits, which understandably has precipitated anxiety and fear regarding genome editing. Fortunately, it seems that no matter how precise and accurate our technology becomes, designer babies will continue to be born only in the realm of science fiction.
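To make the mosaic analogy concrete, here is a toy calculation with purely illustrative numbers (not drawn from any real genome study): give an additive trait 10,000 contributing variants with random effect sizes, then ask how far editing even the single largest-effect variant could move an individual, measured in population standard deviations.

```python
import math
import random

def edit_one_variant_effect(n_variants=10_000, seed=7):
    """How much does editing the single biggest-effect variant move a
    purely additive polygenic trait, in population standard deviations?
    All effect sizes and genotype frequencies here are illustrative."""
    rng = random.Random(seed)
    effects = [rng.gauss(0, 1) for _ in range(n_variants)]
    # Assume each variant's genotype is 0, 1, or 2 copies with equal
    # probability, giving a per-variant genotype variance of 2/3.
    trait_sd = math.sqrt((2 / 3) * sum(e * e for e in effects))
    best_shift = 2 * max(abs(e) for e in effects)  # flip 0 -> 2 copies
    return best_shift / trait_sd

print(f"{edit_one_variant_effect():.3f}")  # roughly 0.1 SD
```

Even the most favorable single edit moves the trait by only about a tenth of a standard deviation in this toy model; engineering a meaningfully “enhanced” trait would require coordinating thousands of tiles at once.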





Apply to be a part of our staff! Available positions: Writer, Designer, Photographer, Copy Editor, Business Associate, Marketing/PR Associate, Distribution Associate, Radio Show Writer, Radio Social Media Manager.

Please access the application via our Facebook page. Applications will be accepted on a rolling basis. Contact us at scientificaeditor@gmail.com or scientificamanaging@gmail.com for any questions you may have.


Is Inconsistency in the Gym Actually Key? Periodized Resistance Training to Maximize Physical Performance - Shivesh Kabra


Your average training or social media “fitness enthusiast” will preach that “consistency is key” to success in the gym. But is that really the case? Most Olympic coaches don’t seem to think so, and here’s why:

WHY LIFTING WEIGHTS WORKS When confronted with stress, the human body adjusts itself to better handle a future encounter with that stressor. The General Adaptation Syndrome describes our body’s general response to stress: (1) Alarm reaction - the initial response to stress being placed on the body; the body begins gearing up for adaptation. (2) Resistance stage - the body attempts to handle the stress by altering its physiology (e.g. increasing muscle mass, the number of red blood cells, etc.). (3) Exhaustion stage - adaptive capacity is used up; this is the reason for the “plateau” and slight dip in gains that most lifters experience after several months of training. Resistance Training (RT) aims to increase physical performance by exploiting the first two stages of the General Adaptation Syndrome (Plisk and Stone 2003). In RT, we vary the number of contractions (repetitions) against a resistance load (e.g. free weights, body weight, cables, etc.); these act as stressors that trigger and ultimately determine the type of adaptation that occurs. You might be wondering: doesn’t that mean RT will eventually wear the body out? How do professional athletes and gym rats work out every day and keep improving? Performance-enhancing drugs and genetics offer only a short-sighted explanation; the answer lies predominantly in an increasingly popular RT scheduling (or programming) technique called Periodization.

QUALITIES OF ATHLETIC PERFORMANCE & PERIODIZATION We can change our style of RT to trigger different physiological adaptations that compete with one another; overall athleticism is thus the product of the mixed effects of four underlying qualities. The American College of Sports Medicine has put forth the following RT guidelines for each quality: (1) Hypertrophy - the biological response that increases muscle size and volume; most stimulated at loads between 70% and 85% of the 1-repetition maximum (1RM) for 8-12 repetitions (2) Strength - the maximum force that can be exerted by a muscular contraction; maximized by training at 60%-70% of 1RM for 6-8 repetitions (3) Power - the ability to generate a large amount of energy while minimizing the time of a muscular contraction; optimized by performing 3-6 repetitions at 30%-60% of 1RM, paying attention to moving as explosively as possible

(4) Muscular Endurance - the ability to perform many repetitions of a contraction; enhanced by performing 10-25 repetitions at a submaximal load below 70% of 1RM Periodization provides a schedule for changing RT styles such that the exhaustion stage of the General Adaptation Syndrome is staved off for each quality and beneficial adaptations occur. This is accomplished by dividing the macrocycle (the overall training period) into discrete, goal-specific phases called mesocycles (4-6 weeks) and microcycles (1 day to 2 weeks). A linear periodization program addresses the four main trainable qualities of athleticism with mesocycles that taper in training volume and increase in intensity, plus a short microcycle to prevent injury (Brown and Greenwood 2005). A limitation of this progression, however, is the loss of gains in any quality that is not being actively trained. Many non-linear periodization progressions break training into even more specific microcycles, but they require peak physical conditioning to prevent injury and/or overtraining. Athletes using periodized RT programs train to trigger the specific adaptations that improve a desired quality of athletic performance. For instance, a powerlifter doesn’t need the well-rounded athleticism a volleyball player does, so a powerlifter’s training can focus predominantly on power and strength. It’s important to note that the weakest quality has the greatest room for improvement, so training weaknesses yields the greatest increase in overall performance. A final idea to consider when designing a periodized RT program is the interaction between the athletic qualities.
These interactions haven’t been scientifically proven (though there is evidence for them), and their exact mechanisms aren’t known, but the logic behind the concept is easily illustrated by an example. Say a lifter does hypertrophy RT and bulks up: his muscles are larger, so it makes sense that he’s stronger. However, his total power output has gone down, because the excess bulk takes away from his ability to perform movements quickly (explosively). This is not to say we shouldn’t train hypertrophy if power is our end goal. Increasing muscle mass improves strength and can detract from power, but ultimately, maximum power output is limited by the amount of force we can apply (strength) when time isn’t a factor. The various interactions between the four qualities are important to keep in mind, especially when training for a specific sport, but they’re automatically accounted for if you periodize your training schedule properly and your end goal is simply better overall performance. In other words: be inconsistent with your RT styles (to an extent) when you go lift.
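As a quick worked example, the guideline percentages listed earlier can be turned into actual working weights from a known 1RM. The sketch below hard-codes the article’s four ranges; the lower bound for endurance is our own illustrative choice, since the guideline only says “below 70%.”

```python
def working_loads(one_rm, goal):
    """Map a training goal to a working-load range (same units as the
    1RM) and a repetition range, using the guideline percentages
    quoted in the article."""
    guidelines = {
        "hypertrophy": ((0.70, 0.85), (8, 12)),
        "strength":    ((0.60, 0.70), (6, 8)),
        "power":       ((0.30, 0.60), (3, 6)),
        "endurance":   ((0.50, 0.70), (10, 25)),  # lower bound illustrative
    }
    (lo, hi), reps = guidelines[goal]
    return (round(one_rm * lo, 1), round(one_rm * hi, 1)), reps

# A lifter with a 100 kg squat 1RM entering a hypertrophy mesocycle:
print(working_loads(100, "hypertrophy"))  # ((70.0, 85.0), (8, 12))
```

A periodized program would simply swap the `goal` argument at each mesocycle boundary, which is the “planned inconsistency” the article describes.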


A Silent Killer:

The Overlooked Risk of Heart Disease - Grant de la Vasselais

On December 27, 2016, beloved actress Carrie Fisher passed away at the age of 60, several days after suffering cardiac arrest aboard a flight. Her death highlighted the often unspoken risk of heart disease, especially among women. Heart disease is the number-one killer of both men and women in the United States, and the Global Burden of Disease Study 2013 concluded that coronary heart disease kills over 8 million people annually, making it the leading cause of death globally. To understand why heart disease has long been ignored as a women’s health issue and how we can take steps to protect our health, I spoke to Dr. Pamela Rama, a preventive cardiologist who serves as Chief of Staff and Medical Director of Cardiopulmonary Rehabilitation at Baptist Beaches Hospital in Jacksonville Beach, Florida. Our conversation has been edited slightly for brevity and clarity.

Q: What led to your interest in treating women’s heart disease and focusing on preventative treatment? A: Female cardiologists make up only about 10 percent of all cardiologists in the United States, and we never really focused on women’s heart disease until the mid-2000s. In 2006, the American Heart Association started the Go Red For Women movement to raise awareness among women that heart disease is their number one health risk. We had always perceived heart disease as a man’s disease, the “Widow-Maker,” and because of that women did not realize their number

one health risk was coronary heart disease. They would have chest discomfort and a host of other symptoms, yet they wouldn’t seek medical attention because they didn’t believe [a heart attack] could ever happen to them. After seeing so many young women suffering heart attacks, it was obvious to me that something needed to be done.

Q: Why do you think there is more of a reluctance among women to seek preventative treatment for heart disease? A: I think that a lot of it has to do with the public focus on other diseases, such as breast cancer. When you talk about breast cancer killing young women, everyone becomes afraid of breast cancer. What they don’t realize is that your lifetime risk of developing breast cancer, as a woman, is about one in eight, whereas your lifetime risk of developing coronary disease is about one in two. Even though more women are beginning to understand the risk of heart disease, studies have shown that only about 13 percent can personalize that danger. If you put together a group of women, you can expect one out of every two of them to develop coronary disease, but a majority of them think that it can’t happen to them. Although the situation is much better than ten years ago, there is still a lot of progress to be made in raising awareness and getting people, especially women, to personalize that risk so that they prioritize their cardiovascular health.

Q: Are there different patient outcomes for those who seek preventative treatment before developing coronary

disease or suffering a heart attack, versus those who do not? A: Absolutely. When you implement lifestyle changes early on, your risk of developing cardiovascular disease drops by 80 percent, and that’s with lifestyle changes alone; it doesn’t include medications such as statins. By controlling your blood pressure, eating heart-healthy meals, not smoking, and exercising, you can dramatically reduce your risk of cardiovascular disease.

Q: For people who have a family history of heart disease, how substantially does their risk increase? Do you recommend genetic testing? A: I’m glad that you asked that question. A family history of premature coronary heart disease is a strong predictor that an individual will develop heart disease. I have patients who run and walk, who exercise a lot, who are healthy, yet who still have heart attacks. For them, the only identifiable risk factor is a family history of premature coronary disease. When I say premature, I mean a cardiac event, such as a heart attack, in a first-degree male relative younger than 55 or a first-degree female relative younger than 65. Unfortunately, these people are grossly underrepresented in the risk assessments we have now, which mostly look at personal behaviors. However, genetic testing is not the best option. There is one test that we now use more frequently, called a Coronary Calcium Score, which consists of a CAT scan that images the heart. It measures the amount of calcium buildup in the coronary arteries caused by cholesterol plaques. A score of zero on this test means your prognosis is excellent and there is little risk that you will have a cardiac event in the next ten to fifteen years. Once your score reaches 300, your risk increases significantly. This is a very good risk discriminator: if you have two people with very similar risk factors in lifestyle and even genetics, this tool can determine which patient is in more trouble.
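For illustration only, Dr. Rama’s two thresholds suggest a rough three-way reading of the score. Real interpretation is a clinical judgment that also weighs age, sex, and other risk factors, so the sketch below is not medical guidance:

```python
def calcium_score_reading(score):
    """Rough reading of a coronary calcium score using only the two
    thresholds Dr. Rama mentions (0 and 300); not medical advice."""
    if score < 0:
        raise ValueError("score cannot be negative")
    if score == 0:
        return "excellent prognosis"
    if score < 300:
        return "plaque present, elevated risk"
    return "significantly increased risk"

print(calcium_score_reading(0))    # excellent prognosis
print(calcium_score_reading(150))  # plaque present, elevated risk
print(calcium_score_reading(450))  # significantly increased risk
```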

Q: You said that simple preventative lifestyle changes can reduce your risk by about 80 percent. What changes do you commonly recommend to your patients and to our readers? A: The first step is to know your numbers. You should make it a priority, especially if you’re a young person who doesn’t know their current levels, to get your cholesterol checked, which includes both bad and good cholesterol. You should also check your triglyceride level, your blood pressure, and your blood sugar level to determine your risk of developing diabetes. If your fasting blood sugar is elevated, you need to cut down on sweets. Once you know your numbers, you can address your risk factors. The four things we tell our patients are:

(1) Move your body through space. Try to exercise daily. If you don’t have time to exercise continuously for 30-60 minutes a day, you can break that into three 10-20 minute intervals and still reap the same benefits. (2) Eat a balanced diet. For most people, this means a Mediterranean diet that includes olive oil, nuts, grains, fruits, and vegetables. It also means avoiding concentrated and processed sweets, including refined sugar. (3) Avoid smoking and people who smoke. Second-hand smoke is a real risk: around 45,000 people die from second-hand smoke each year. (4) Manage your weight and avoid becoming overweight. Those with a BMI over 30 are at a higher risk of developing cardiovascular disease.

It’s important to begin these changes early. We see a lot of patients in their sixties who are just implementing these healthy lifestyle changes; since many of them have already developed coronary disease, this makes things difficult. Making lifestyle changes at any point in your lifetime will absolutely help, but your best bet is to adopt healthy habits early.

Calcified Heart


Happiness Hacks - Rachel Colletti

Happiness is an elusive state we all wish to attain. In the past, individuals turned to religion or self-help for guidance toward happiness, but a shift occurred at the turn of the twenty-first century: society now looks to science in hopes of understanding happiness on a more objective level. The fledgling field of positive psychology uses the scientific method to study how individuals achieve happiness. While science has elucidated certain elements of happiness, the exact neuroscience requires more research. Part of the reason the neuroscience of happiness is so difficult to study is that defining and measuring a subjective emotion lies somewhat outside the realm of traditional scientific inquiry. Some scientists define happiness as pleasure, and so study the neurobiology of pleasure in hopes of characterizing happiness. Others propose that happiness has more to do with contentment, a concept that is even harder to define and measure. One of the most productive approaches, however, has been to study happiness interventions and their effects on the brain. I like to call these interventions “happiness hacks”: they teach us not only how happiness works in our brains, but also how we can take control of it. One of the most influential findings of positive psychology has been that 40 percent of your happiness is determined by your thoughts, actions, and behaviors. While 50 percent of your happiness is genetic and 10 percent is determined by your life circumstances, that other 40 percent is entirely in your control. Knowing this, how exactly do we maximize our happiness and overall life satisfaction? Neuroscientist Alex Korb recently wrote a book on this very topic, The Upward Spiral, and the following hacks summarize his suggestions.
The first happiness hack tends to be written off as new-age philosophy, but it turns out to be backed by neuroscience. The practice of gratitude has been shown to boost the neurotransmitter dopamine, which is central to the brain’s reward and pleasure circuits. Additionally, the simple act of trying to think of what you are grateful for increases serotonin production in the anterior cingulate cortex, one of the “hedonic hotspots,” or pleasure centers, of the brain. One of the best ways to integrate this simple habit into your everyday life is by choosing a gratitude totem. It turns out that gratitude journals are often ineffective simply because people forget to use them; a gratitude totem, by contrast, is something in your everyday life that you regularly see or do and assign as a reminder to think of something you are grateful for. For example, whenever you sit down at the breakfast table for your morning coffee, you might think of one thing you are grateful for. This small act reminds you to practice gratitude every day, thereby increasing your level of happiness.

Part of why gratitude works is that it requires you to put emotion into words. Neuroscience shows that the act of articulating your feelings, whether positive or negative, is beneficial for mood. In one fMRI study, subjects viewed pictures of people with emotional facial expressions, and each participant’s amygdala, a brain region involved in experiencing emotion, activated in response. When the subjects were asked to name the emotion, the ventrolateral prefrontal cortex, the part of the brain responsible for dialing down emotions, activated and reduced the amygdala’s reactivity. Consciously recognizing negative emotions reduces their impact. Happiness seekers can achieve the same benefit by journaling when overwhelmed by negative emotion, or by talking to a trusted companion; the science shows that describing the emotion in just one or two words reduces its negative impact. We know that humans are social creatures, so the next hack should come as no surprise: social stimuli, in particular touch, increase happiness. One fMRI experiment demonstrated that social exclusion activates the same circuitry as physical pain, engaging the anterior cingulate and insula just as pain does. Conversely, one of the best ways to release oxytocin, a hormone that acts as a neurotransmitter in the brain and contributes to social bonding, is physical touch. One study showed that massage can boost serotonin levels by 30 percent, decrease stress hormones, and increase dopamine levels.

Studies show that acts of kindness and compassion can do wonders for happiness. Being kind not only reduces stress and anger, but also makes us feel happier and more connected to the world. In one study, students were asked to commit five random acts of kindness each week for six weeks. Those who did showed a 42 percent increase in happiness, whereas the control group experienced a reduction in well-being. Studies have also shown that merely thinking about being compassionate increases happiness; we can do this by sitting with our eyes closed and meditating on what it feels like to be compassionate for 15 minutes. The last happiness intervention is one we all know: exercise, which releases dopamine, serotonin, and endorphins. In a study comparing exercise to Zoloft, a common antidepressant, just 30 minutes of brisk walking three times per week proved every bit as effective as the drug in fighting depression, and a one-year follow-up found that patients who continued to exercise were more likely to stay well than those who remained on medication. Exercise benefits our physical and mental well-being, making it one of the most powerful happiness hacks. Happiness has become a cultural obsession; the thousands of books written on the topic are evidence of this. The fact that the search for happiness has moved into the scientific realm shows that people are desperate for a definitive answer on how to be happy. Although we may never find that answer, we have discovered some habits that can make the journey a little more enjoyable.


Blood, Sweat and a Lot Less Tears - Gabrielle Eisenberg


Great Minds Die Alike? Why writers are more likely to commit suicide than those in other creative professions - Carolene Kurien


Sylvia Plath. David Foster Wallace. Virginia Woolf. Ernest Hemingway. Literary genius and creativity are not the only things that bind these four individuals together: all four chose to end their lives in drastic ways, by gas oven, noose, drowning, and firearm, respectively. The connection between literary creativity and the inclination to commit suicide has long been pondered philosophically, but rarely analyzed scientifically. Before the neuroscience of suicide can be examined, the magnitude of America’s suicide epidemic must be understood. The general population assumes that suicide is simply an act committed by depressed individuals; this is a gross oversimplification, and most people do not attempt to understand the psychological or biological reasons underlying suicide. For the past few decades, scientists have tried to uncover the biological basis of the drive to commit suicide, but it has been frustratingly difficult, largely because each individual has unique psychological and physiological reasons, making any single cause impossible to pinpoint. Research in recent years, however, has explored various biological explanations. One comes from Dr. Victoria Arango, Professor of Clinical Neurobiology (in Psychiatry) at Columbia University, who describes an abnormality in the prefrontal cortex, the portion of the brain where executive decisions are made. Her explanation is consistent with a 2014 study conducted at Yale Medical School, which compared MRI images from three groups: people with bipolar disorder who had attempted suicide, people with bipolar disorder who had not, and healthy participants.
The study showed that the individuals who had attempted suicide had less white matter in important frontal cortex systems than the other two groups. A second explanation, from a study published by the National Institutes of Health (NIH), is that the serotonergic system (the transmitter system that regulates serotonin, the neurotransmitter responsible for mood regulation) is faulty in individuals who commit suicide. Indeed, post-mortem studies of individuals who died by suicide have found low levels of serotonin in the brainstem. Now the question remains: why do writers, in particular, have a higher rate of suicide? The link

between artistry and suicidal tendencies was never substantiated until a 2012 Swedish study involving over 1,000,000 individuals. The study’s goals were to investigate whether creativity is associated with psychiatric disorders, and to examine authors in relation to psychopathology (abnormal states of mind associated with abnormal behaviors). Though the study concluded that those in creative professions generally were not more likely to commit suicide than individuals in other fields, the data showed that being an author was positively associated with mental illnesses including bipolar disorder, anxiety disorders, and substance abuse. But what exactly distinguishes authors from other creative professionals? One argument is that individuals predisposed to mental illness are more likely both to become writers and to have suicidal tendencies; in other words, the profession itself is a confounding variable. However, even when the Swedish study excluded authors with diagnosed psychiatric disorders, the data still showed that writers committed suicide more frequently than those in control groups. There are many theories as to why: most writers tend to be isolated or ostracized from society from an early age because of their talent (Wallace), some come from dark backgrounds (Hemingway, Plath), and others are, as Woolf put it in her famed suicide note, “mad.” There is no definitive answer as of yet; concrete evidence for why being a writer is associated with suicide remains to be uncovered.

Statistics: 1. Suicide rates have increased steadily from 1999-2014 2. Around 121 people commit suicide daily in the US, which means about 44,000 people die from suicide yearly 3. Suicide is the 10th leading cause of death in America 4. Suicide is the 3rd leading cause of death among teens
