
Issue 20: Spring 2017

EUREKA!

Serendipitous discoveries in science

VIAGRA
How a side effect became a billion dollar industry

THE SKINNY GENE
The genetics behind body mass

FEYNMAN
The calm amongst the storm

www.eusci.org.uk

FREE


Contents

Focus
9 From impotency to epilepsy – Hannah Johnston discusses the misconceptions of epilepsy and its road to prevention
10 Mirroring molecules: from 3D structures to the creation of beer – Joe Willson investigates the role of serendipity in the work of Louis Pasteur
12 Phineas Gage – well ain't that a hole in the head – Bella Spencer explores how a horrific accident had an invaluable impact on modern neuroscience
13 The Golgi stain – Tessa Noonan discusses the accidental discovery of silver staining in the 19th century, a neuroscience technique still in use today
14 Glimpsing the unseen – Calum Turner examines Röntgen's serendipitous discovery of X-ray radiation
15 Rutherford's lucky star: the discovery of the atomic nucleus – Adelina Ivanova explores the role serendipity played in the discovery of the structure of the atom
17 Leptin: The skinny gene – Bonnie Nicholson examines the role of genetics in determining our individual susceptibility to obesity
18 Richard Feynman: the calm amongst the storm – Craig Young tells the story of Richard Feynman and his fortuitous discovery of the law of Quantum Electrodynamics
20 Serendipity in the revolution of psychiatry – Fiona Ramage discusses the serendipitous events that led to the revolution of psychiatric treatment

Cover illustration by Vivian Uhlir


22 Fingerprints from the birth of the universe – Johanna Vos explores the serendipitous discovery that kick-started the era of observational cosmology
23 Blood and bone: uncovering hematopoietic stem cells – Scott Dillon explores the discovery of haematopoietic stem cells and how they have informed the future of medicine
24 Cosmic heartbeats: the discovery of pulsars – Katie Ember tells the story of the discovery of a new type of star
26 Curing cancer: a perfect storm – Angela Downie explores breakthroughs and chance factors behind the advancement in treating testicular cancer
28 Natural product drug discovery: reaching for the low hanging fruit – Carlos Martínez-Pérez explores the potential of natural chemical diversity for modern drug discovery
30 The discovery of Viagra: how a side effect became a billion dollar industry – Natasha Tracey looks at the discovery of Viagra
31 Scans from 'healthy' volunteers reveal serendipitous findings: a blessing or a curse? – Lorna Gibson explores the challenges of handling incidental findings from imaging research

Features
33 The new fight against bacteria – Imogen Johnston-Menzies explains how scientists are hunting for new strategies to combat antibiotic resistance
34 A brief history of the self in science – Haris Haseeb discusses medical science, human anatomy and the reconceptualisation of the self

36 The dynamic little person in your brain – Marja Karttunen explores how your brain senses the state of your body
38 Where to draw the line? – Vicky Ware explores the grey area between health treatment and doping in performance sport

Regulars
40 Brave New World: Trumping evidence and information in politics – Selene Jarrett discusses the distrust and disregard of science in current populist politics
41 From the bench to the field: nanopore sequencing – Vesa Qarkxaxhija explores a natural sequencing force and its future
42 Learn actually – Angus Lowe explains why he believes emphasis should not be placed on the employability of university degrees
43 A cuppa with… Professor Randy Schekman – Alessandra Dillenburg chats with a Nobel prize winner about succeeding in science, Trump's [scientific] America, and making science accessible
44 Taking the passenger seat – Simone Eizagirre explores the benefits and challenges that come from the development of automated car technology
46 Dr. Hypothesis – EUSci's resident brainiac answers your questions
47 Review: The Gods Themselves by Isaac Asimov


eu:sci

Editorial

Dear Readers,

Welcome to Issue 20 of EUSci! In this issue, entitled 'Serendipity in Science', you can learn about the scientific breakthroughs that came about due to chance – head to page 8 to explore all that and more.

If you're looking to catch up on some recent local and global updates in the world of science, check out our news section on page 4.

If you've still not had enough science, we have a handful of fascinating feature articles starting on page 34, where we explore a range of topics, including changes of the 'self' in medicine through time, differences between doping and health treatment in performance sports, combatting antibiotic resistance, and how your brain senses the states of your body.

This issue's regular fixtures start on page 41, where we speculate on what Trump's America means for science. We have a cuppa with Nobel Prize winner Randy Schekman, and chat with our Dr Hypothesis about the likelihood of a zombie apocalypse. In technology and innovation, we get into the nitty-gritty of nanopore sequencing and find out more about self-driving cars. Our opinion piece discusses university degrees and employability – a very relevant topic for most of our readers. We end the magazine with a review of The Gods Themselves, a science fiction novel that is bound to have you turning pages at the speed of light.

If you're looking for more issues of EUSci, or want to find out how to get involved, head over to www.eusci.org to have a look. We're always looking for new talent, so don't hesitate to get in touch at euscimag@gmail.com if you're interested!

We'd like to thank the IAD for their support in printing this issue. We'd also like to recognise our massive team of editors, writers, and illustrators – we couldn't have done this without you all. We hope you enjoy this latest issue, and look forward to bringing you Issue 21 next semester!

Alessandra Dillenburg, Selene Jarrett and Meghan Maslen
Editors

Credits

News Team Jonathan Wells, Teodora Aldea, Finn Bruton, Hans-Joachim Sonntag, Dawn Gillies, Isaac Shaw, Samuel Jeremy Stanfield, Athina Frantzana

Focus Team Hannah Johnston, Joe Willson, Bella Spencer, Tessa Noonan, Calum Turner, Adelina Ivanova, Bonnie Nicholson, Craig Young, Fiona Ramage, Johanna Vos, Scott Dillon, Katie Ember, Angela Downie, Carlos Martínez-Pérez, Natasha Tracey, Lorna Gibson

Feature Authors Haris Haseeb, Marja Karttunen, Vicky Ware, Imogen Johnston-Menzies

Regulars Authors Selene Jarrett, Vesa Qarkxaxhjia, Angus Lowe, Alessandra Dillenburg, Simone Eizagirre, Chiara Herzog, Alice Stevenson

Copy-editors Fiona Ramage, Chloë India Wright, Miguel Cueva, Adelina Ivanova, Monica Kim, Anna Schilling, Sarah Heath, Sadhbh Soper Ni Chafraidh, Ciara Farren, Nikki Hall, Andrew Bease, Iris Mair, Benjamin Moore, Harry Carstairs, Ailsa Revuelta, Rebecca Watson, Amelia Penny, Gwen Tsang, Rachel Harrington

Sub-editors Amy Richards, Gwen Tsang, Holly Fleming, Sarah Heath, Owen Gwydion James, Brian Shaw, Catherine Adam, Samuel Stanfield, Sadhbh Soper Ni Chafraidh, Ciara Farren, Benjamin Moore, Kirsty Little, Harry Carstairs, Catherine Lynch, Clare Mc Fadden, Grace Carpenter, Oswah Mahmood, Marta Mikolajczak, Samona Baptiste, Claudia Cannavo, Owen James, Iris Mair, Vicky Ware, Jenny Nicolas, Angela Downie, Fiona Ramage, Chloe India Wright, Adean Lutton, Matt Rounds, Monica Kim

Art Team Perna Vohra, Lana Woolford, Jemma Pilcher, Huimeng Yu, Lucy Southen, Ling Ng, Hannah Johnston, Ashley Dorning, Antonia Banados, Marie Warburton, Alanah Knibb, Katie Forrester, Alyssa Brandt, Sarah Atkinson

Editor Alessandra Dillenburg

Editor Selene Jarrett

Editor Meghan Maslen

Deputy Editor Simone Eizagirre

Deputy Editor Chiara Herzog

Deputy Editor Haris Haseeb

Web Editor Angus Lowe

Focus Editor Emma Dixon

Focus Editor James Ozanne

News Editor Hans-Joachim Sonntag

News Editor Teodora Aldea

Layout Editor Vivian Ho

Art Editor Isabella Chen

Art Editor Vivian Uhlir



News

Dozens of universities lose access to Elsevier journals

Thousands of researchers in Germany have returned to work in 2017 without access to Elsevier journals after institutions failed to reach a deal with the academic publishing giant. Several other countries across the world are still in strained negotiations over subscription pricing: Taiwan has reached a temporary agreement for January, and Finland has settled on a one-year extension while talks continue.

The dispute started in October of last year when DEAL, a consortium of German libraries, schools, and universities, decided not to renew subscriptions for Elsevier journals. Excessive subscription fees were a major factor in the decision, as these often reach six-figure sums for individual universities. Another factor was Elsevier's stance on Open Access, which is at odds with the German government's position. As part of a new strategy unveiled in September 2016, the Minister for Education and Research, Johanna Wanka, announced that tax-funded research results have to be available to the public free of charge.

Despite the inconvenience, many researchers have welcomed the move, seeing it as a strong challenge to a deeply flawed industry. Though by no means the only big name in academic publishing, Elsevier has become the target of a high profile boycott dubbed The Cost of Knowledge, which currently has signatures from over 16,000 researchers. Critics such as Fields Medal winner Timothy Gowers claim that Elsevier's business model essentially relies on charging researchers exorbitant publishing fees, and subsequently selling their own work back to them. Another practice that has drawn widespread criticism is bundling, where journals are packaged together, forcing institutions to purchase many low quality journals in order to access essential publications such as The Lancet or Cell.

It remains to be seen how ongoing negotiations with Elsevier will be resolved, but in the meantime affected researchers will turn to alternative means of accessing papers. One such alternative is BioRxiv, an online server which many in the life sciences have started using to share research articles prior to peer-review. More clandestine options such as SciHub – a search engine for pirated papers – are also available, enabling researchers to circumvent paywalls via illicit means.

Needless to say, these high-profile disputes with Elsevier will not pass unnoticed by other publishers. An important factor in sustaining their current business model has been the assumption that universities simply cannot afford to lose journal access. One can therefore hope that this development will be the first step towards a fairer publishing system, incorporating modern Open Access ideals.

Jonathan Wells

Image courtesy of Pixabay

Autism: Deeper down the rabbit hole

More than 70 years after Leo Kanner first described autism as a behavioural pattern affecting mainly children, research on this spectrum of complex conditions is still leading us to more questions than answers. A recent study published in the journal Cerebral Cortex might be taking us back to square one: its evidence suggests that a previously widely accepted theory regarding the neural responses specific to autism spectrum disorders might actually be false.

Autism is a widely researched yet still poorly understood spectrum of neurodevelopmental disorders, characterised by impaired communication and social interaction, as well as atypical behaviour and responses to stimuli, such as repeating certain actions. One proposed cause for these altered neural processes has been the so-called "neural unreliability" of those affected by autism spectrum disorders. The neural unreliability theory, which has emerged over the past few years, suggests that the brain normally responds to repetitive visual, audio or tactile stimuli in a consistent way and that this response is more variable in the brains of autistic people, which eventually translates into abnormal cognitive patterns.

However, the study that initially brought this theory forward used a neuroimaging method which has since been deemed unsatisfactory for assessing such rapid brain processes. As a result, for this new study scientists from the University of Rochester decided to make use of more detailed high-density electrical mapping in order to more accurately assess the differences in neural response between 20 individuals with autism and 20 control participants. Surprisingly, they found no significant differences between the two groups, which suggests that the response of the autistic brain to repetitive stimuli is just as reliable and predictable as that of the controls.

These findings are welcome in the field of autism research, as unsubstantiated theories are always best dismissed early on, and they are also supported by a large body of evidence similarly suggesting that the sensory thresholds of individuals with autism spectrum disorders are similar to those of controls. However, the study opens up a whole new blank canvas for scientists working in a field that has seen several theories thrown out and very little progress made, considering the percentage of the population that is affected by autism. As with every other field, more funding and research will be crucial if we are to untangle the mechanisms of this disorder.

Teodora Aldea

Image of confocal microscopy of mouse brain courtesy of ZEISS Microscopy



Hypoxia: A paradoxical partner in heart regeneration?

The heart is a fundamental organ, defining life and death and bridging physiology and human culture. Yet, for an organ so indispensable to life, the adult mammalian heart is remarkably poor at regenerating itself after injury. Heart attacks are characterised by death of the cardiomyocytes (muscle cells of the heart) following blockage of the coronary artery and the consequent oxygen deprivation. The heart replaces cells at a very low rate; it is therefore patched up with scar tissue instead and loses functionality.

Considering that acute oxygen deprivation is the chief effector of a heart attack, it is surprising that scientists at the University of Texas have succeeded in partially regenerating adult mouse hearts by exposing them to abnormally low oxygen levels. During a heart attack, cardiomyocytes are exposed to a very sudden and drastic decrease in oxygen. In contrast, this study slowly lowered the oxygen levels in the air breathed by adult mice, and surprisingly found that this induced proliferation of their cardiomyocytes. This is exciting because adult mammalian cardiomyocytes are usually considered post-mitotic (past replication) and can only really be made to divide artificially by rather drastic manipulation of cell cycle proteins. Not only did these researchers show they could induce proliferation in the hearts of healthy mice, they were able to do the same in mice recovering from a heart attack. They demonstrated that surviving cells de-differentiated, proliferated and then re-differentiated into new cardiomyocytes. Consequently, these mice had significantly improved cardiac function and reduced scarring compared to those that recovered at normal oxygen levels.

But how is a seemingly crude treatment having such a beneficial effect on heart regeneration? Surely decreasing the availability of oxygen to the heart would only make things worse? The authors hypothesise that maintaining a 'hypoxic' or low oxygen environment decreases the oxidative stress on cardiomyocytes, a factor they believe normally limits their ability to proliferate. This theory may in part explain the curious phenomenon of neonatal mice being able to regenerate their hearts up to a week after birth, something that was also recently shown in humans. The intrauterine environment is also relatively hypoxic, and so the cell-cycle arrest is perhaps a consequence of increased post-natal oxygen, respiration and consequently oxidative stress.

Anyone who has looked into heart regeneration will surely agree that it is a frustratingly confusing business. To stumble across such a simple intervention is as remarkable as it is counterintuitive. Nonetheless, it is indeed more of an intervention than a treatment. These mice were kept at oxygen levels about a third of normal for two weeks. Keeping people recovering from heart attacks in a hypoxic chamber for weeks may not be acceptable. Furthermore, what are the consequences for other tissues? Various stem cell niches are regulated, in part, through oxygen levels and some tissues are more sensitive than others to hypoxia. It will be interesting to see the future pursuit of this area and whether this effect can be manipulated more specifically.

Finn Bruton

Facial recognition linked to brain growth in childhood

How does the human brain develop after birth? A major factor involved is synaptic pruning, which describes the removal of unnecessary connections between neurons. This occurs largely in infancy, and scientists have long thought that the structure of the brain remains relatively stable afterwards. However, a new study published in Science early this year finds that a region of the brain associated with recognising faces grows through childhood, leading to an improvement in facial memory.

To measure brain volume, the researchers used a technique called quantitative magnetic resonance imaging (qMRI). Protons in the brain are excited by a magnetic field and the time it takes them to relax back to their original state, known as relaxation time, is measured. This will be shorter if they are surrounded by many molecules, serving as a proxy for volume. Using this technique on 22 children between the ages of 5 and 12, as well as 25 adults aged 22 to 28, the study found that the adult group had, on average, 12.6% more volume in the brain region associated with facial memory. A higher volume also meant better performance in a face recognition quiz, suggesting that factors other than pruning are relevant for brain development.

While the exact nature of the structural changes that lead to increased volume is not clear, the authors suggest that proliferation of dendrites, which branch out from neurons to receive signals, may be a key component. Interestingly, a region just 2 centimetres away that is linked to place recognition did not show a significant volume difference between children and adults. Thus, a precise explanation of the interplay between structural and functional changes that drive brain development remains elusive.

The project involved an international collaboration between researchers at Stanford University and various European universities that form part of the Human Brain Project. This is a flagship 10-year research project, funded with 1 billion euros from the European Union, that eventually aims to model the human brain and better understand neurodegenerative diseases, like Alzheimer's and Parkinson's. While small steps are being made, an understanding of the overall working of the brain is still far off, and we may encounter more surprises like this along the way.

Hans-Joachim Sonntag

Image courtesy of Kairos



Research in Edinburgh

New optical imaging techniques for cancer

How squishy is a cancer cell, and why is it important? Cancer killed more than 80,000 people in the UK in 2014, with 90% of those deaths caused by metastasis – where a tumour spreads to new sites of the body. Understanding what causes cancer to spread can aid in the creation of new treatments and increase survival rates. Interestingly, research suggests that the 'squishiness' of cancer cells plays an important role.

A tumour's ability to spread depends on many factors. In breast cancer, those cells that are more likely to invade and migrate to other sites are five times softer than those that are not. These softer cells are able to move into the bloodstream or lymphatic vessels, which transport them to different parts of the body. As the cancer develops, the proportion of these soft cells increases, which allows the cancer to colonise different parts of the body and greatly complicates treatment.

The Bagnaninchi group at the University of Edinburgh has developed a new, non-invasive imaging platform which can measure the relative stiffness of cells in real-time. Optical coherence tomography (OCT), an imaging technique most commonly used by opticians to image the eye, uses the reflection of light to capture a 3D image of tissue. OCT can be used at cell resolution to track a breast cancer cell's movement in response to air pressure in real-time. The non-invasive nature of this technique means that we can observe how the cell behaves in its natural environment. This novel method can show clear differences in the stiffness of different cell types, and it is hoped that it can be further developed to achieve a quantitative measure of cell stiffness in future.

Better understanding the biology of metastasis could help to identify new treatment methods, and to characterise non-invasively the potential for a cancer to invade other organs, in order to give patients the best chance of survival.

Dawn Gillies

Image of cancer cell in culture via Wikimedia Commons

Women, STEM and the F-Word: Are you a feminist?

Attending the EQUATE Scotland event about Women in STEM and Feminism on 4th November was the perfect way to end a work week in my notoriously male-dominated STEM field. It was a fun and informative evening full of ideas and concerns about women's (and men's) views on feminism and women's position in STEM. The event took place in the beautiful Playfair Library at Old College – arguably an ironic setting for a discussion of gender equality and feminism, in a room full of busts of famous men, as Talat Yaqoob (director of Equate Scotland) jested.

Over the course of the evening, we heard several first-hand stories of sexism unfold. Lorna Slater (chartered electro-mechanical engineer) described her own experience of sexism on a training course where, as the only woman in the room, she watched the instructor 'attack' all women by making inappropriate sexist jokes about how women only spend money and eat chocolate on their 'difficult' days. She inspired women to speak up and raise awareness of this issue, just as she had done by writing a feedback letter to the 'sexist' instructor and consequently getting support from many others in her workplace. She pointed out that sexism has – in a way – been normalised to the point where people often don't even notice it exists.

The talks also pointed out some solutions to the problem. From two lovely speakers, Prof. Polly Arnold (University of Edinburgh) and Anna Ritchie Allan (manager at Close the Gap), we learnt how important it is to persuade people that 'feminism' is not a toxic word and to understand how occupational gender segregation affects the world economy. Closing the gender gap in industry by increasing the number of women in male-dominated jobs, considering women for product design and promoting more female role models and leaders can influence the next generation and benefit the economy.

The Q&A session which concluded the evening also emphasised the need for more active involvement of women in STEM subjects. We heard the concerns of female STEM students and advice from senior women, who encouraged the younger members – and not only them – to be informed and educated, to seek mentors, to congratulate feminists, to be ambitious and positive, and to value the women around them and their work.

One of the main lessons to be learnt from this evening is how important positive experiences of women are in the fight against negative ones, and how inspirational sharing these positive stories can be. Women should not be afraid to join communities, network with other scientists and inform people not only of the difficulties and concerns of being the minority in a STEM field, but also of how rewarding a career in science can be and how this can impact the world around them.

Athina Frantzana

Image courtesy of Argonne National Laboratory




Progress in rare cancer research

You may never have heard of cholangiocarcinoma (CC), and from your perspective that's probably a good thing. It is a rare cancer of the bile ducts, which are located near the liver, and unless you can surgically remove it quickly it is incurable, with only 5% surviving over a year after diagnosis. As with other rare cancers, CC is hard to treat because we know so little about it, and its rarity means there is less money or demand for a treatment than there is for, say, lung cancer. Recent research in Edinburgh led by Professor Forbes, however, is changing this.

A research paper recently published by his group identified a new signal that causes healthy cells to become cancerous and proliferate into CC tumours. This is important as inhibiting these signals using drugs may allow us to remove the cancer's impetus to keep growing, providing a targeted therapeutic strategy. The scientists in question found that a protein called Notch3, found on the cell surface, is expressed more prominently in human CCs. Furthermore, genetically removing it from a mouse model reduced CC tumour formation and progression. However, there are multiple Notch proteins in the body performing very important roles, and they all signal into the cell using the same set of downstream proteins. This means that if we target Notch3 signalling we could upset other Notch signals in the body, causing more harm than good. The additional exciting discovery of the Forbes group, however, is that Notch3 does not use the conventional Notch downstream pathway to drive tumour growth. This means there may be a unique element of its signalling that can be specifically targeted for therapy. What this alternative pathway is, we do not yet know, but identifying key unique elements of cancer progression is the first small step on the road to a treatment.

So, does this spell hope for people who currently suffer from CC? Well, unfortunately not. Treatments for even the most common cancers that receive the most research funding, such as breast and prostate, are not yet fully effective, and their journey of therapy development began decades ago with leads similar to the ones presented here. 20% of all cancers diagnosed are rare, like CC, and the harsh reality is that without sufficient research investment, many of them will go uncured for decades to come. I'll leave you with a strapline from the cholangiocarcinoma charity AMMF: "Without awareness there is no funding. Without funding there is no research. Without research there is no cure."

Isaac Shaw

Developing integrated technologies for lung disease

By the time you have finished reading this article, somebody in the UK will have died of lung disease, and by the end of this week, about 10,000 further people will have been diagnosed. Yet despite these numbers, very little is known about the processes that drive respiratory infection and inflammation.

Of particular concern are patients in intensive care. They are hooked up to life-saving equipment and can't be easily moved, meaning current methods for investigating the lungs can be dangerous and are often inconclusive. A biopsy may provide cellular and molecular information, but surgically removing small sections of the lung is uncomfortable and time-consuming. An X-ray may roughly determine the location of an issue, but can't distinguish between the myriad of potential causes. Any resultant uncertainty in the diagnosis can lead to the broad application of antibiotics and other drugs, with all the associated health and cost implications.

It is this clinical and biomedical problem that motivates PROTEUS, an Interdisciplinary Research Collaboration (IRC) between three universities: the University of Edinburgh, the University of Bath and Heriot-Watt University. Their aim? To allow clinicians to make increasingly accurate and timely bedside diagnoses and hence provide more appropriate and targeted treatments, improving patient outcomes, and reducing time and cost. To do this, they are creating a device capable of performing a molecular optical biopsy both inside the patient and in real-time.

The system being built comprises a great number of different components. Custom optic fibres transfer light and chemicals into and out of the lungs. These chemicals include imaging agents which use fluorescence to locate, identify and distinguish different types of bacteria, fungi, neutrophils and enzymes. Nanosensors mounted on the end of the fibres are able to measure physiological parameters important in pathological processes and injuries, such as pH and redox potential. To facilitate these investigative techniques, the device uses multiple light sources, spectrometers and processors, as well as software (such as machine learning and signal processing algorithms) and computer hardware to provide clinicians with the patient information.

A task of this nature requires an interdisciplinary team. Optical physicists, chemists, biologists and computer scientists (to name but a few) can all be found at the PROTEUS hub in the Queen's Medical Research Institute here in Edinburgh. They work with clinical scientists to understand the restrictions of current technologies and create the tools they need. The new technologies produced through this collaboration will allow doctors to provide patients with more personalised and targeted care. In the long term, previously inaccessible physiological information gathered from the distal lung will also provide new insights into illness and disease.

Samuel Jeremy Stanfield

Image of Giant cell Interstitial Pneumonia via Wikimedia Commons


Focus

Serendipity in Science

What does it take to make it in the world of science? Hard work, hours in the lab, and a drive to find answers? Undoubtedly these are all important, but in this issue we turn our attention to a sometimes dismissed factor in some of the biggest scientific breakthroughs – luck. Exploring this theme allows us to take you on a tour of some of the greatest scientific breakthroughs, from the birth of modern science right up to the present day.

Starting in the 19th century, Hannah Johnston introduces us to Charles Locock's serendipitous discovery of the first effective anti-epileptic (p9), while Joe Willson discusses Louis Pasteur's groundbreaking work with biological enantiomers (p10). Following on, Bella Spencer examines how an unfortunate accident has impacted the field of neuroscience (p12). Moving into the latter half of the century, Tessa Noonan and Calum Turner both explore the serendipitous discoveries of two invaluable imaging techniques: the Golgi stain (p13) and X-rays (p14).

Moving into the 20th century, Adelina Ivanova investigates a number of lucky encounters that aided Rutherford's understanding of the atomic nucleus (p15) and Bonnie Nicholson explores how a chance mutation in a colony of lab mice has led to a better understanding of the genetics of obesity (p17). Craig Young then turns our attention to the unusual approach taken by physicist Richard Feynman, who took inspiration from a falling plate to understand electron spin (p18). The role luck played in revolutionising the fields of psychiatry and cosmology is then discussed by Fiona Ramage (p20) and Johanna Vos (p22) respectively, before Scott Dillon explores how work on wartime radiation exposure led to the discovery of hematopoietic stem cells (p23). We then look to the stars as Katie Ember brings us the story of a female astronomer whose work led to the characterisation of pulsars – the remnants of supernovae (p24).

Bringing us back to Earth, Angela Downie explores the chance factors that have influenced cancer therapy advances (p26), and Carlos Martínez-Pérez investigates the ongoing role of natural compounds in the development of medicines (p28). Bringing us to the end of the 20th century, Natasha Tracey examines the fortuitous development of one of the world's most used drugs – Viagra (p30). Finally, looking to the future, Lorna Gibson highlights the challenges that serendipitous findings present to medical imaging research (p31).

I think you'll agree we've been particularly lucky this issue to get such a fine array of articles, and we hope you enjoy reading them as much as we did.

James Ozanne and Emma Dixon, Focus Editors

Illustration by Alanah Knibb




From impotency to epilepsy

Hannah Johnston discusses the misconceptions of epilepsy and its road to prevention

Epilepsy was once perceived as the result of possession by the devil and then subsequently as a consequence of masturbation. Just from that sentence, I think you'll agree that our understanding of the causes, as well as the treatment, of the condition has certainly come a long way. However, before considering how the treatment of epilepsy was accidentally discovered, we should first examine the current explanation for the condition that affects 1% of people worldwide.

Our brains contain on average 100 billion neurons (nerve cells) which are constantly firing information to the body. The nervous system is like a dense road network in which traffic lights control the movement of cars; if the lights become faulty, the likelihood of collisions increases. If a fault does occur, such as too few red lights causing cars to collide, excessive and synchronous excitation of neurons can lead to seizures. If these seizures are recurrent, spontaneous and not caused by environmental factors or head traumas, the diagnosis is most likely epilepsy.

There are two main types of seizures: partial and generalised. The former involves a localised part of the brain being affected and usually the patient is conscious. The seizure may consist of jerking movements, hallucinations, or strange sensations which the patient may or may not remember. Generalised seizures affect both hemispheres of the brain. There are five subtypes which involve varying stages of convulsions and/or losing and regaining consciousness. Epilepsy is common in young children, who can experience 100 seizures a day unknowingly. The cause of most seizures is unfortunately unknown, but drugs, alcohol, oxygen depletion, or multiple gene abnormalities are possible reasons for the attacks.

Charles Locock, a graduate from the University of Edinburgh's medical school in 1821 and Queen Victoria's first obstetrician, was the first to come up with a more primitive hypothesis for the cause of seizures. Locock believed that masturbation led to epilepsy and that it also occurred in women who experienced "a great deal of sexual excitement" during menstruation. Based on this belief, and possible inspiration from Otto Graf, who reported in 1842 that his self-administration of potassium bromide led to impotency, Locock thought potassium bromide would cure these "hysterical" women. He reportedly claimed to have successfully treated 14 out of 15 women with the drug. Locock didn't actually record his findings but did mention them during a discussion on Dr Edward Sieveking's paper on epilepsy in May 1857 at the Royal Medical and Chirurgical Society. It was from this day forward that potassium bromide, the first effective antiepileptic drug (AED), was recorded in use; the account can be found in The Lancet, a UK medical journal. Better understanding of the neuronal inhibitory properties of the drug, and not the previous misconception of its anti-arousal property, has subsequently led to the discovery of even more efficient drugs.

The next chance development in epilepsy treatment was made by Alfred Hauptmann in 1912 – another fortuitous discovery, but without the same misguided sexual arousal underpinnings. Hauptmann was woken frequently during the night by his patients falling out of bed during seizures in the ward below his accommodation. He therefore sedated his patients with phenobarbital, a new drug on the market at that time used as a hypnotic. Not only did Hauptmann catch up on lost sleep, but he found that his patients had fewer seizures throughout the entire day. To this day phenobarbital remains the most commonly prescribed AED in the developing world.

Since then, further advances have been made, such as the experiments carried out on cats by Tracy Putnam and his team in 1934 using a selection of non-sedative drugs. Phenytoin proved to be particularly successful at protecting cats from electrically induced convulsions. The first clinical trial on an epileptic patient was successful, with no subsequent seizures, making phenytoin the most widely prescribed AED in the US. The drug works by blocking voltage-gated ion channels on neurons, which in turn prevents too many electrical pulses firing through the brain.

The brain is a complicated and intricate system, relaying signals from neuron to neuron. AEDs decrease the frequency and/or severity of seizures, which are caused by neuronal hyperactivity. The drugs either oppose excitatory processes or augment inhibitory processes. This has all emerged thanks to Locock's original serendipitous discovery, through which he has shaped the lives of 65 million people worldwide suffering from epilepsy.

Hannah Johnston is a fourth-year PhD Chemistry student

Illustration by Hannah Johnston




Mirroring molecules: from 3D structures to the creation of beer

Joe Willson investigates the role of serendipity in the work of Louis Pasteur

Louis Pasteur is known as the 'father of microbiology' for his groundbreaking work on germ theory, immunisation, and for famously debunking the theory of spontaneous generation. His discovery of sterilisation via 'pasteurization' was arguably one of the biggest scientific breakthroughs of the 19th century and has led to the prevention of millions of deaths from pathogenic disease. Perhaps less well known is his discovery of biological enantioselectivity, the culmination of a lifelong body of work, from his beginnings as a chemist to his later years studying fermentation and microbiology. Throughout his career, Pasteur vehemently denied the role of serendipity in his discoveries and insisted that success was down to careful planning and scientific rigour. However, it has become clear with hindsight that chance was more involved than Pasteur cared to admit.

Pasteur vehemently denied the role of serendipity in his discoveries and insisted that success was down to careful planning and scientific rigour

In 1847, Pasteur had just earned his doctorate in chemistry and was working on his first research project studying a phenomenon discovered by fellow French chemist Jean Baptiste Biot. Following on from the observation that quartz crystals could rotate plane polarised light, Biot had noted that some organic compounds, such as tartaric acid, could also rotate polarised light in solution. Biot understood the rotation of the polarised light was a consequence of some molecular property, but at the time very little was known about the structure of compounds.

With Biot's research in mind, Pasteur examined the crystals of sodium ammonium tartrate, a naturally-occurring salt of tartaric acid found in tartar deposits formed during wine fermentation. Inspecting the crystals under a microscope, Pasteur noted that, like quartz crystals, the sodium ammonium tartrate crystals were hemihedral (not symmetrical). Pasteur then looked at crystals of another sodium ammonium salt that had been formed as an unexpected side product during the synthesis of tartaric acid at a chemical plant in France. The identity of the new compound was somewhat of a mystery. The French chemist and physicist Louis Joseph Gay-Lussac had obtained a sample for study and ascertained that it had the same composition as the tartaric acid which had been described. Gay-Lussac named it racemic acid, from the Latin racemus, or "cluster of grapes", although it was more commonly described as "paratartaric acid". Paratartaric acid shared similar properties to tartaric acid and had the same chemical composition, but did not rotate polarised light in solution. This posed a problem to chemists as it was unknown how compounds sharing the same chemical composition could exhibit different properties.

When Pasteur visualised paratartaric acid via microscopy, he observed two distinct types of hemihedral crystal, which were enantiomorphous, meaning they were the same but mirror images of each other. In an elegant experiment, he separated the crystals into two piles and made solutions from each. These solutions could now rotate polarised light and did so in opposite directions. Having established that paratartaric acid was a mixture of two enantiomers, dextro- and levo-tartaric acid, Pasteur concluded that the optical properties were a result of the chirality of the molecules themselves. His findings established the basis of stereochemistry, the study of the three-dimensional structure of molecules – particularly astounding considering little was known about valency, bonding or the structure of chemical compounds at that time.

Following this discovery, Pasteur was appointed professor of Chemistry at the University of Lille in 1854. Lille was an industrial region, responsible for a great deal of alcohol production. In the summer of 1856, the father of one of his students, known only as Mr Bigo, contacted Pasteur looking for help. Bigo owned a sugar beet distillery in the area, and had found that some batches of his beer were turning sour during the fermentation process, ruining the brew. Pasteur agreed to help and began experiments in his factory. Examining both the normal and sour batches under the microscope, Pasteur saw that the sour beer, in addition to round yeast cells, was full of millions of tiny rod-shaped microorganisms. He concluded that these were causing the souring of the beer and that they had contaminated the beer from the environment. He went on to find lactic acid in the beer, which he deduced was produced by these contaminant microorganisms through an aberrant process of fermentation. This discovery cemented Pasteur's interest in fermentation and microbiology, and he would go on to publish articles on the processes of alcoholic and lactic acid fermentation.

One enantiomer may be an effective treatment, the other a dangerous poison

When Pasteur left Lille for Paris to serve as the administrator of the École Normale Supérieure (ENS), he continued his studies on fermentation in other liquids. During this time, Pasteur made a critical and undoubtedly serendipitous discovery. An aqueous solution of d-tartaric acid, which had been left exposed to the air in his laboratory, had turned turbid and spoiled. Pasteur recognised that the solution had fermented. However, Pasteur's previous experiences as a chemist led him to turn his attention to the fermentation of the racemic (containing equal amounts of both enantiomers) form he knew as paratartaric acid. Pasteur discovered that during the fermentation of paratartaric acid, the d-isomer was consumed with preference over the l-enantiomer. Almost by accident, Pasteur had discovered biological enantioselectivity, which is how the chirality of compounds affects their interaction with living systems.



Illustration by Lana Woolford

In 1857, Pasteur published his 'Memoir on Alcoholic Fermentation' describing the preferential fermentation of the 'right' d-tartaric acid. None of Pasteur's articles or lectures mention the accidental nature of this groundbreaking discovery. In fact, Pasteur actively denied the role of chance in his findings, explaining that he had read about the fermentation of tartaric acid and had directed his research towards fermentation as a result. Even his serendipitous encounter with Bigo, the catalyst which led to his work on lactic acid fermentation, was downplayed by Pasteur. Soon after the discovery, Pasteur had claimed his move to study fermentation was due to observations that amyl alcohol, a fermentation product, had displayed chiral properties, and he was interested in exploring this chemical further. Later articles from Pasteur abandoned this explanation and instead described the fermentation of paratartaric acid as the catalyst for his work in fermentation, despite the fact that this discovery was made almost a year following his work in Bigo's factory.

Thirty years later in 1886, Pasteur presented the work of an Italian chemist named Arnaldo Piutti describing how l-asparagine has a sweet taste, whereas d-asparagine does not. Pasteur added notes to the finding, declaring that the chiral property of the molecule was interacting with a chiral molecule in the 'nervous system of taste'. Pasteur had described the fundamentals of biological enantioselectivity.

Advances in our understanding of chemistry, structural biology and pharmacology have made clear how stereochemistry has a fundamental impact on the activity of molecules through their interaction with different receptors in the body. This has implications for a number of biochemical reactions in the body, and importantly in drug discovery and design. One enantiomer may be an effective treatment, the other a dangerous poison. The starkest example of this was the use of the drug thalidomide in the 1960s for the treatment of morning sickness. L-thalidomide was an effective anti-emetic, whereas d-thalidomide was a potent teratogen. The distribution of enantiomerically impure thalidomide led to thousands of babies born with birth defects.

Today, stereochemistry is an important consideration for every new drug in development and many common drugs are manufactured with stereoselectivity in mind. For example, l-propranolol, a widely used beta-blocker, is a powerful adrenoceptor antagonist, whereas the d-isomer is far less active. Enantiopure drugs are now a market in themselves, with many drugs marketed in both racemic and enantiopure forms. By 2002, stereopure drugs had become a 160 billion dollar industry, led foremost by the stereopure statins Lipitor and Zocor.

Throughout his life, Pasteur fiercely downplayed the role of serendipity in his scientific work and maintained that his findings were due to rigorous, logical progression. It is important to note that along with serendipity, Pasteur had the knowledge and experience to recognise important findings. The fermentation of paratartaric acid had been described prior to Pasteur's discovery, but only Pasteur interrogated this phenomenon fully, due to his experiences with fermentation and stereochemistry. Perhaps Pasteur said it best himself: "in the field of experimentation, chance favours only the prepared mind."

Joe Willson is a 3rd-year OPTIMA PhD student




Phineas Gage – well ain't that a hole in the head

Bella Spencer explores how a horrific accident had an invaluable impact on modern neuroscience

The year was 1848 and the accepted dogma of neuroscience was phrenology – the idea that the shape of the skull correlates to character and mental ability. However, on 3 September that year a horrific accident occurred that would alter our understanding of neuroscience forever. During a routine procedure to clear away rocks from a railway track, one worker made a mistake and his iron tamping rod directly hit an explosive. The rod was propelled upwards and penetrated the side of his face, shattering his jaw, passing through his eye and his frontal cortex, and out through the top of his head. That worker was Phineas Gage.

Phineas Gage survived the accident. In fact, within minutes he was able to walk to a cart and was taken to receive medical attention. It was recorded that during Gage's initial examination, approximately half a teacup of brain fell to the floor as he vomited. This mass corresponded to a loss of 4% of his cortex. At the time, the cortex was regarded as a non-functional, homogenous and protective covering for the ventricles. Consistent with this thinking, the damage to the cortex should not have resulted in any major change to Gage's behaviour and capabilities. However, upon Gage's apparent recovery and release from the hospital, his personality was profoundly altered. Although seemingly healthy, he had converted from a well-respected, diligent man to an irreverent and disinhibited character.

Initially, Dr John Harlow, who cared for Gage, was hesitant to publish findings regarding the decline of Gage's behaviour. However, after his death, Harlow argued that Gage's personality change was sufficient evidence to counter the theory of phrenology, and to demonstrate that the brain must be involved in the regulation of personality. Furthermore, the fact that damage to the frontal lobe had caused Gage to lose his social inhibition provided crucial evidence in support of localisation – a theory that postulated that each brain area has a specific role. In this case, it was illustrated that the frontal cortex is involved in behaviour.

Image from Wikimedia Commons

If we fast forward to 2012, Phineas Gage’s brain was still impacting the spheres of neuroscience

However, the discoveries didn't end there. If we fast forward to 2012, Phineas Gage's brain was still impacting the spheres of neuroscience. Researchers at the University of California, Los Angeles (UCLA) used brain imaging data to map Gage's loss of white matter – the collection of axons and myelin that connect brain areas. From this mapping, it was theorised that Gage lost 10% of his white matter and that this was the major contributing factor to his personality change. Additionally, the damage likely disrupted connections between his left frontal cortex and his limbic system. This suggested that the frontal cortex may also be involved in the regulation of emotions.

After his accident, Gage worked as a coach driver in Chile, an occupation requiring a reliable and respectful character – one opposite to the antisocial personality attributed to Gage following his injury. Gage's ability to maintain his coach driving job for seven years thus implies that his personality decline was transient. This suggests that the brain was able, to some extent, to restore the connections between the frontal cortex and the limbic system by remapping to compensate for the damaged white matter connections. Yet another remarkable discovery.

In 1859, Gage's health deteriorated and he returned to live with his mother in New Hampshire until his death from an epileptic seizure on 20 May 1860. It is theorised that his epilepsy may have resulted from his brain injury. According to urban legend, he was buried with his tamping rod until 1866, when his body was exhumed and his skull was examined. It seems beyond the realms of science fiction, let alone science, that such a horrific accident could serendipitously have such scientific value. Gage's brain proved integral to the demise of phrenology and the dawn of the theories of modern neuroscience.

In 2007, in a final burst of luck, a collector of vintage photographs uploaded an image titled 'One-Eyed Man with Harpoon'. Through a series of Flickr comments, it was concluded that the picture was not of a harpooner but was in fact the only photo in existence of Gage. The value of serendipity to science is perfectly demonstrated by the case of Phineas Gage, and the neuroscience community owes a great deal of thanks to Gage's frontal cortex.

Bella Spencer is a neuroscience undergraduate student completing a year-long internship at the Centre for Neuroregeneration



The Golgi stain

Tessa Noonan discusses the accidental discovery of silver staining used in brain research today

Around 1870, acclaimed neuroscientist Camillo Golgi was embroiled in a fierce battle with his contemporary, Santiago Ramón y Cajal, about how neurons are connected. Cajal believed that all nerves are discrete, individual cells, whereas Golgi was a staunch supporter of the reticular theory, whereby all of the cells in the brain are connected in one continuous network. Neither scientist could be proven right because, until then, the only way of visualising nerve cells was using a light microscope, under which all cells appeared as a long, single thread. In an effort to find evidence to support his hypothesis, Golgi devoted a lot of time to developing a method of staining neurons that would allow clear visualisation of their arrangement in tissues.

One day, Golgi returned to his lab to find that his cleaner had cleared away all of his bench materials. When he went to retrieve his slides, he found that the slide bearing the brain tissue slice on which he had been experimenting had something rather peculiar on its surface. Upon closer investigation under a microscope, Golgi found that the slide showed a clear image of individual neurons. After a long interrogation of his cleaner, he managed to piece together the order in which she had thrown chemicals onto the slide. After some tinkering, Golgi eventually managed to work out a recipe still used today: the Golgi stain.

Illustration by Ashley Dorning

However, this is only Golgi's version of the story. The cleaner may well have been experimenting with the chemicals herself, rather than randomly throwing them away. It does after all seem a fortunate coincidence that the chemicals happened to fall in the right configuration, and that she could manage to piece together the order in which they were disposed of. Unfortunately, scientific history is tinged with examples of women's work being claimed by men, such as the infamous case of Watson and Crick using Rosalind Franklin's work as key evidence in determining the structure of DNA.

Silver permeation is a valuable way of monitoring the alterations in dendrite shape that occur during neuronal degeneration

The Golgi stain works by impregnating the slices of nervous tissue with potassium dichromate and silver nitrate, staining a random selection of neurons in their entirety, which can then be visualised under a light microscope. This is in contrast to other stains, which stain all of the neurons in a tissue section, making them hard to distinguish as they are tightly packed together. A second problem with other staining techniques is that they do little to highlight finer structures that branch off the main neuron, such as dendrites and axons, which are too thin to pick up. Conversely, the Golgi stain produces much sharper images, such as those shown here. This allows neuroscientists to trace the projections of individual neurons and begin to elucidate the vast and complex neural networks making up the brain and spinal cord. Cajal himself, after seeing the results that the Golgi stain produced, exclaimed: "I expressed the surprise which I experienced upon seeing with my own eyes the wonderful revelatory powers of the chrome-silver reaction."

Unfortunately for Golgi, Cajal then went on to use the Golgi stain to prove that neurons really are separate units and not part of one continuum. Thus, Cajal proved Golgi's theory wrong, and the neuron doctrine was born instead. This doctrine states that the nervous system is composed of individual cells called neurons, and that neurons extend processes called axons and dendrites, as well as detailing other features that define what we understand as a neuron today.

Furthermore, silver permeation is a valuable way of monitoring the alterations in dendrite shape that occur during neuronal degeneration. In 2015, a group in Greece used the Golgi stain to visualise neurons in a model of Alzheimer's disease, where degeneration of the cortex causes progressive mental decline. Golgi staining allowed the group to visualise the areas of the brain in which these changes were taking place. Characterising the areas affected in a disease, and the nature of the changes that take place within them, allows researchers to hypothesise new ways to treat these diseases.

The Golgi stain has been instrumental in many milestone neuroscientific discoveries. Over the years it has been updated to make it cheaper and more efficient, but the principle of silver staining discovered by accident one day in 1874 remains the same, and is still used to advance our knowledge of the brain today.

Tessa Noonan is a 4th-year Pharmacology student at the University of Edinburgh



Glimpsing the unseen

Calum Turner examines Röntgen's serendipitous discovery of X-ray radiation

Consider a world where we couldn't gaze upon X-rays. Radiographers couldn't peer inside a patient to discover a hidden malady and astronomers would be blind to some of the most exciting objects in the night sky. The uses of X-rays have ranged from the grandiose to the unlikely – everything from uncovering the mysteries of the most energetic galaxies in the universe to the more mundane role of hair removal. Though inventions exploring this region of the electromagnetic spectrum are ubiquitous in modern life, X-rays were discovered a mere 121 years ago. Their fortuitous discovery and subsequent development is a fascinating story of happenstance and scientific serendipity, and begins in the late 19th century with a German physicist named Wilhelm Röntgen.

In 1895, Röntgen was investigating the properties of vacuum tubes at the University of Würzburg. These evacuated glass vessels were the favoured apparatus for researchers investigating one of the outstanding scientific problems of the day – cathode rays. Though these rays were visible as glowing arcs on vacuum tubes, their origin was a mystery. Cathode rays are now known to be beams of electrons in a vacuum, but in Röntgen's day the electron was not yet part of the established theoretical background. With science unable to adequately explain cathode rays, investigations into their nature abounded. Scientists working in the field had already noticed mysterious effects they couldn't attribute to the cathode rays. Photographic plates stored near the apparatus inexplicably became fogged despite being kept from all sources of light. However, these phenomena went uninvestigated until a chance discovery in early November 1895.

Intrigued by unexpected findings from earlier experiments, Röntgen had constructed a vacuum tube shrouded in thick card, designed such that no light could escape. He then set out to test the opacity of his construction.


Illustration by Lucy Southen

Dousing his laboratory lights, he passed a charge through the vacuum tube and inspected his set-up carefully. Though the vacuum tube was indeed lightproof, Röntgen observed a faint glimmer from his lab bench. By the light of a struck match, he found that the glow had originated from a small paper screen painted with barium platinocyanide left lying on the bench. He surmised that some unknown ray had pierced the cardboard and caused the screen to fluoresce. Unwittingly, Röntgen had just discovered X-rays. Röntgen threw himself into researching the new rays, spending the weeks following his discovery eating and sleeping in his laboratory. Two months of intense research culminated in the first scientific paper on X-rays, “Über eine neue Art von Strahlen” (On a New Type of Rays). During this time, Röntgen also created the first radiograph, choosing as his subject the hand of Anna Bertha, his wife. Upon seeing the image, she is said to have exclaimed “I have seen my death”. Röntgen’s X-rays – named after the common practice of denoting an unknown quantity “x” – caused a sensation in the 19th century physics community. Röntgen’s discovery earned him the first ever Nobel Prize in Physics and immortalisation in his native tongue: in German, X-rays are known as Röntgenstrahlung, or “Röntgen rays”. The popularity of the new rays led to a spate of experiments

by curious scientists eager to learn more about this new radiation. However, the intangible rays Röntgen had discovered soon proved to have hidden dangers. Word began to spread of strange ailments affecting those who worked closely with X-rays. Tales of hair loss, burning, and more frightening symptoms became common. Just as Röntgen had stumbled upon X-rays, his fellow scientists were unwittingly discovering the perils of the unseen radiation. Despite these early warnings, the popularity of X-rays only increased in the 20th century – machines which emitted the radiation were used for everything from fitting children’s shoes to fairground attractions. As time progressed and the risks of X-rays became widely known, the more frivolous uses of the radiation fell from favour. At the same time, however, its medical and scientific benefits were beginning to be uncovered. Far from an inconspicuous glimmer on Wilhelm Röntgen’s lab bench, X-rays are now used in all walks of life, from life-saving applications in hospitals to cutting-edge science in laboratories. Our world would be a very different place if, over 100 years ago, Röntgen hadn’t left a scrap of paper lying on his lab bench. Calum Turner writes to avoid working on an MPhys in Astrophysics



Rutherford’s lucky star: the discovery of the atomic nucleus Adelina Ivanova explores the role serendipity played in the discovery of the atomic nucleus Though scientists may appear to be determined people who always know the end goal of their research and can predict their results, this is not always the case. Serendipity and even pure luck have aided many crucial discoveries, but their role is often overlooked when we learn how landmark discoveries and great innovations came to be. As early as high school, students are taught about the great discovery of the atomic nucleus by Ernest Rutherford in 1911, a turning point in the perception of atomic structure. However, we are seldom told about the role serendipity played in this discovery.


The idea of the atom was not a new one during Rutherford’s time. As early as the 5th century BC, the Ancient Greek philosophers Democritus and Leucippus came up with the term atom to describe the indivisible particles that make up matter. In the 1800s, John Dalton restated this concept, arriving at his own theory of the existence of atoms during his work on gas laws. The earliest theoretical description of the atom’s structure was proposed in 1904 by J.J. Thomson, who suggested the Plum Pudding model, where atoms were a huge soup of positive charge with embedded electrons. Although Thomson’s model became widely accepted, other ideas like the Cubic model, with electrons located at the corners of a cube, and the Saturnian model still persisted. The Saturnian model was suggested in 1903 by Japanese physicist Hantaro Nagaoka who stated that electrons were orbiting

a big, heavy, central point. Rutherford’s discovery of the atomic nucleus was a huge leap forward, allowing the scientists after him to develop the modern-day conception of the atom. Rutherford was extremely lucky to be working during an era of great minds like Henri Becquerel and Pierre and Marie Curie. On Sunday 1st March 1896, Henri Becquerel accidentally discovered radioactivity. He was working on phosphorescent materials that glow after being exposed to light, and he was experimenting with uranium salts and photographic plates. He believed that sunlight was the reason that crystals burn their image onto a photographic plate. One stormy day, he decided to postpone his work for better weather, but he was surprised to see that the image was still burned on the plate despite staying in a dark drawer with no light. This led to the conclusion that there must be some form of invisible radiation, later named radioactivity by Pierre and Marie Curie. Without this knowledge of radioactivity, Rutherford could never have progressed his theory about the atom. Rutherford committed to investigating radioactivity and confirmed Becquerel’s discovery in 1898, while also stating that there were at least two different parts to the uranium rays, which he named alpha and beta rays. In the summer of 1907, he moved to Manchester, where Hans Geiger and William Kay worked under his supervision. Although still concerned with his research on radioactive decay, for which he received a Nobel Prize in Chemistry, Rutherford became interested in the nature of alpha and beta rays. He promoted William Kay in 1908, and Kay became a valuable source for understanding Rutherford’s methods in the lab. According to an interview Kay gave in 1957, Rutherford only roughly outlined the apparatus he wanted to use, leaving Kay to work out how to build it and fit it to the purpose of the experiment. Geiger and Rutherford worked closely together on detecting and measuring alpha particles, as Rutherford’s previous attempt to count these had failed. In 1909, they succeeded in devel-

oping two methods for observing alpha particles, one involving an instrument that later became known as the Geiger counter. Rutherford’s experiments confirmed his suspicions that alpha particles were helium atoms stripped of their electrons, as there was still no concept of a nucleus. The previous year, Geiger had started passing beams of alpha particles through gold and other metallic foil, and later utilised his newly developed techniques for measuring their dispersion. Meanwhile, many labs were studying the scattering of beta particles from atoms. They claimed that the large scattering angles observed could be explained by Thomson’s Plum Pudding model, in which many small-angle scatterings from the positively charged soup added up. Rutherford, however, did not support this theory of multiple scattering.

Ernest Marsden, a 19-year-old student in Honours Physics, was invited to join Geiger in his search for a mathematical relationship between the thickness of metallic foil and the dispersion of the alpha rays. Rutherford, meticulous by nature, instructed Marsden to check if any of the alpha particles were directly reflected backward from the metal surface. However, neither Rutherford nor his students believed there would be any reflection, as this contradicted their whole understanding of the atom at this point. When Marsden and Geiger reported that there had indeed been alpha particles reflected backwards, Rutherford later recalled: “It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you.” Rutherford realised not only how lucky he was for


this discovery, but also its fundamental importance for scientific understanding. The experiment meant that most of the mass and charge of an atom were concentrated in a small central body, and so Rutherford tried to reconcile the result with the different atomic models, especially Thomson’s Plum Pudding model. Rutherford was cautious about defining the nature of this body and did not call it a nucleus until 1912. He initially entertained the possibility that the central charge was negative. Although this sounds odd today, Rutherford thought that since alpha particles carry a positive charge, a big negative charge in the centre of the atom would slingshot the alpha particle around and fire it back towards its source, in keeping with Nagaoka’s Saturnian model. He devised ways to investigate the central charge, particularly using beta particles, and slight differences between two papers published by Rutherford’s team in 1912 led historians to believe that he had decided in favour of the charge being positive.

The Great War disrupted the work of the Manchester group. Marsden left for New Zealand, and other prominent members of the group also departed. Rutherford finalised his Planetary model and published it. According to his model, the atom consisted of a positively charged nucleus that contained almost all of its mass, and of electrons revolving around it like planets in their orbits. However, as he had received a Nobel prize several years earlier, the committee considered it inappropriate to nominate him again despite his huge discovery. Later, it was noted that Rutherford’s model was inconsistent with classical physics and electromagnetic theory, according to which any charged particle moving on a curved path emits electromagnetic radiation. For the Planetary model this meant that the electrons would lose their energy and spiral into the nucleus, so atoms would collapse. Niels Bohr amended the model, proposing that electrons move in orbits of fixed size and energy, and that radiation is emitted only when they jump from one orbit to another. With the development of quantum


Illustration by Huimeng Yu

mechanics, the Uncertainty Principle, and wave-particle duality, the modern view of the atom describes electron clouds – regions of space around the nucleus where electrons are most likely to be found – rather than fixed orbits.

Nevertheless, Rutherford’s discovery of the nucleus was the leap forward that allowed the development of the modern atomic theory. This discovery was based not only on the genius of the team of scientists, but also on luck and Rutherford’s meticulous nature. Rutherford knew that serendipity and working with bright minds like Geiger and Marsden allowed him to make his

discovery, and he once remarked: “Experiment, directed by the disciplined imagination either of an individual or, still better, of a group of individuals of varied mental outlook, is able to achieve results which far transcend the imagination alone of the greatest philosopher.” Knowledge leads to results, but imagination and luck can often be key components in the process of discovery. The value of science is to explore what we never thought possible, because that is the essence of true progress. Maybe it is time to acknowledge that we can benefit from serendipity, as long as we know how to use it, as Rutherford did. Adelina Ivanova is a second year Chemistry student



Leptin: the “skinny gene” Bonnie Nicholson examines the role of genetics in determining our individual susceptibility to obesity In 1949, at the Jackson Laboratories in Maine, a few particularly plump mouse pups were discovered amongst a litter of standard experimental mice. These mice increased rapidly in weight until, at eight months of age, their weight was four times that of their non-obese littermates. Further study revealed that this divergence in body weight was the result of a spontaneous mutation at a specific point in their DNA. While spontaneous mutations do occur by pure chance, they are in fact the result of a series of intricate biological processes that introduce biological variance into a population. When DNA is replicated, the original strand is occasionally copied imperfectly. The resulting alterations in the DNA sequence usually result in complete inhibition of gene function or no change in gene function at all, but very occasionally, a new form of the original gene product is made, with altered or partially inhibited function. Without this mechanism, there would be no biological diversity within populations, and thus natural selection and evolution could not occur. In the case of the obese mouse population, this chance mutation eventually led Douglas Coleman of the Jackson Laboratory and Jeffrey Friedman at Rockefeller University to the discovery of the first fat cell-derived hormone. The mutation was first found to affect a gene dubbed obese and designated by the symbol ob, and so mice with the mutation became known as ob/ob mice. The spontaneous mutation resulted in the occurrence of an early stop

codon in the DNA sequence, causing it to code for a truncated and nonfunctional version of its protein product. That product, now known as leptin (from leptos, the Greek word for ‘thin’), was later identified and found to be a satiety hormone secreted from fat that regulates appetite. This surprising discovery changed the generally accepted view of fat as an inert tissue for energy storage to the concept of fat as a dynamic endocrine (hormone-secreting) organ. Leptin has since been shown to have a range of obesity-protecting effects. It acts on the leptin receptor in a brain region known as the hypothalamus, which is important for control of body temperature, sleep and hunger, amongst other things. The binding of leptin to the leptin receptor causes the inhibition of hunger and the stimulation of satiety through the release of a number of downstream signalling proteins. Leptin is also believed to have powerful effects on energy expenditure, suppressing the activity of the enzyme SCD-1; this suppression in turn activates a metabolic pathway that promotes the burning of fat. The concept of “switching off” hunger and “switching on” fat-burning simultaneously with this one magic compound was met with much excitement regarding the potential therapeutic applications of leptin. A recombinant form of the hormone was manufactured and was approved by the FDA for clinical use in 2014. In some respects, the development of leptin replacement therapy has been a success. A group of patients with severe early-onset obesity were

discovered to have mutations in genes encoding leptin and targets of leptin action. These patients had normal birth weight but were constantly hungry and rapidly became overweight. Leptin replacement therapy in these patients was hugely effective in decreasing food intake, decreasing fat mass, and increasing energy expenditure and rate of metabolism. However, congenital leptin deficiency is a rare disorder, and accounts for only a fraction of today’s obese population. Unfortunately, in the more complex forms of obesity (i.e. those with multiple genetic and environmental factors), leptin therapy has proved ineffective.

The discovery of leptin did, however, lead to the recognition of the role of genetic factors in the development of obesity, challenging the view that obesity is simply an affliction of the lazy. In a Viewpoint article in Science in 2004, Jeffrey Friedman wrote: “While answers are beginning to emerge as to why so many of us are obese, there can be no meaningful discussion on this subject until we resist the impulse to assign blame. Nor can we hold to the simple belief that with willpower alone, one can consciously resist the allure of food and precisely control one’s weight.” Despite this knowledge and continuing research, global obesity is more than twice as prevalent as it was in 1980. The rate of growth of the obesity epidemic demands that we further consider the undeniable influence of genetics on individual susceptibility, and continue the search for further “skinny genes” as potential therapeutic targets. Bonnie Nicholson is a second-year PhD student at the University BHF Centre for Cardiovascular Science

Illustration by Ling Ng




Richard Feynman: The calm amongst the storm Craig tells the story of Richard Feynman and his fortuitous discovery of the law of Quantum Electrodynamics There is a stigma attached to physicists and, more specifically, theoretical physicists. They are often seen as being enigmatic, tortured individuals. For example, Sir Isaac Newton, who formulated the law of universal gravitation and is commonly credited as one of the inventors of the now ubiquitous mathematical technique of calculus, suffered nervous breakdowns during his life. Another famous example is Albert Einstein, the most renowned of all theoretical physicists and the man who corrected Newton’s equations of gravity, often held up as the prototypical mad scientist. There was Robert Oppenheimer, the physicist responsible for overseeing the design and construction of the first nuclear weapons, who as a student reportedly tried to poison his tutor in a fit of jealousy. All in all, not a particularly happy-go-lucky bunch. Richard Feynman, however, somewhat broke this mould of the unapproachable, intensely focused theoretical physicist, opting for his own distinctive laissez-faire approach to science. His personality and outlook played a key role in a fortuitous discovery that changed the face of physics. Born in Queens, New York, in 1918, Feynman displayed all of the characteristics that would come to define him from a young age. As a child, he set up an experimental laboratory in his house to repair radios. This was perhaps an unusual hobby for a teenager, but Feynman was certainly not your usual teenager. By the age of fifteen he had taught himself trigonometry, advanced algebra and both differential and integral calculus. Feynman studied for his undergraduate degree at the Massachusetts Institute of Technology (MIT), where he published two papers, both of which made meaningful contributions to physics that still persist today. By this point, Feynman was garnering quite a reputation, not just for his supreme command of the mathematical techniques of physics, but also for his unique approach to tackling problems. For instance, when attempting to learn about feline anatomy (Feynman’s natural curiosity apparently stretched to such things) he reportedly asked, “Do you have a map of the cat?” This was not an

18 Spring 2017 | eusci.org.uk

attempt to be funny but rather an indication of just how direct his approach to problems was. While not the usual vernacular, what Feynman was asking for was an anatomical diagram of a cat, which would help him visualise the problem. Drawing diagrams was a technique he used extensively.


After earning his PhD from Princeton University, Feynman was recruited to be a part of the Manhattan Project, which was responsible for developing nuclear weapons for use in World War Two. Although the project consisted mainly of scientists, it was still a military operation with all the rigour and bureaucracy that goes with it. While Feynman used the experience to help his country and advance the knowledge of nuclear fission, his tendency to bend the rules allowed him to explore one of his eccentric hobbies, safecracking. As a top-secret military facility, the Los Alamos laboratory where the Manhattan Project was based contained its fair share of safes for Feynman to practice on. As with most things he pursued, he excelled. In fact, he became so successful that whenever colleagues were away from the lab, Feynman would be asked to retrieve important files from their offices. As such, during his time at Los Alamos he became a bit of a scourge to the security personnel charged with maintaining the lab’s defensive integrity. In his thoroughly amusing autobiography, Surely You’re Joking, Mr. Feynman!, he recounts how he would confuse the poor men whose task it was to guard the gate. “One day I discovered that the workmen who lived further out and

wanted to come in were too lazy to go around through the gate, and so they had cut themselves a hole in the fence. So I went out the gate, went over to the hole and came in, went out again, and so on, until the sergeant at the gate begins to wonder what's happening. How come this guy is always going out and never coming in? And, of course, his natural reaction was to call the lieutenant and try to put me in jail for doing this. I explained that there was a hole.” At a time when the whole world was on edge, Richard Feynman was singular in his ability to remain aloof, despite being at the very epicentre of the tension. He mentions in his book that he never really stopped to think about the practical implications of his work at Los Alamos until the very late stages. Feynman treated everything he was involved in with the same mischievousness – even national security. Following his brief entanglement with military life, to which his laissez-faire attitude was clearly not suited, Feynman returned to academic life and to yet more successful musings on the physical world, including something as simple as a plate being tossed into the air – an event which led to one of Feynman’s most famous theories. Imagine you’re in a cafeteria, relaxing and enjoying some refreshments, when you spot that someone has tossed a plate into the air. In this situation, do you (a) flinch and recoil at the prospect of the smash and clatter of the plate meeting its gravitational inevitability or (b) carefully observe the plate and determine its spin-to-wobble ratio? I suspect most of us would fall into the first category, but not Feynman... So inspired was he by this event that he sought to describe this effect in a more rigorously mathematical way. In doing so he noticed that the equations describing this ratio were somewhat analogous to those relating to the intrinsic ‘spin’ and orbits of electrons. He went on to use these principles to develop his Nobel Prize-winning theory of Quantum Electrodynamics (QED), which describes the interaction between light and matter. This approach was typical of



Image from Wikimedia Commons

Feynman. While most physicists, both past and present, seek to answer the most relevant questions of the day by straining their brains to find inspiration and shoehorning it into their work, Feynman took a different approach. He effortlessly drew inspiration from even the most seemingly banal events, such as fortuitously observing a wobbling plate, and found the perfect place for it in his work.

The famous physics diagrams Feynman developed are a perfect manifestation of his crystal clear understanding of even the most difficult of concepts. While other physicists reached the same conclusions as him about QED almost simultaneously, their work required extremely complex mathematics that was pretty much invented solely for solving

the problem. Feynman’s real genius was his ability to understand that the same conclusions could be reached through the aid of simple diagrams, allowing even the most mathematically challenged theoretical physicist to follow his logic. So, Richard Feynman was awarded a Nobel Prize for helping to answer one of the most fundamental questions in all of physics. He also played an important role in the Manhattan Project, arguably the greatest technical achievement of the 20th century, despite its sinister nature. However, this does not make him unique. There were plenty of Nobel laureates and soon-to-be Nobel laureates at Los Alamos. What truly set Feynman apart, as we look back on him now, was his nature and the way he utilised it to teach the next generation of physicists. He was asked by the California Institute of Technology to create a course covering the physics content required for the first two years of undergraduate study. What he produced is considered to be a masterpiece. All of these lecture notes are now available for free online and, for a university-level physics textbook, are actually quite readable.

Feynman arguably displays his greatest skill in taking the most challenging physical concepts and reducing them to something that is not only understandable but also rather entertaining. This talent is apparent to an even greater extent in the delivery of his lectures. There are many videos on YouTube of Feynman explaining physics, and the way he is able to convey such complex information with such clarity is unbelievable. His demeanour is that of a man who is effortlessly at home in what he’s doing and saying. You begin to understand how his students and colleagues must have felt. It doesn’t sound like he’s trying to force the knowledge into your brain; he’s convincing you that everything he’s saying is as obvious to you as it is to him. You’re not confused or desperately trying to keep up. You believe him. Craig Young is a 4th year student in theoretical physics




Serendipity in the revolution of psychiatry Fiona Ramage discusses the serendipitous events that led to the revolution of psychiatric treatment Mental disorders have been with us for as long as complex brains have existed. However, our understanding of them, in contrast to that of many physical ailments, has long remained extremely limited. Until very recently, mental illness was dismissed by society, and the treatments available would be considered inhumane by today’s standards. From the beginnings of psychiatric practice up until the 18th century, long-term institutionalisation and only a very elementary recognition of mental disorders were commonplace. More humane treatment of patients and the recognition of psychiatry as a medical discipline were only established by the 19th century, but mental illness remained completely taboo in society. At the same time, a more modern form

Illustration by Jemma Pilcher


of psychiatry was initiated in asylums in response to poor recovery rates and a constantly expanding number of mentally ill patients. Rather than utilising talking therapy, more physical approaches were employed, leading to the development of treatments such as convulsive therapies and lobotomies. These initial improvements were not incidental, but rather produced through careful consideration of scientific theory and a large body of practical evidence collated over an extended period of time. Nevertheless, only modest progress in the treatment of patients was made, and these gains were not enough to counter society’s dismissal of mental illness or the problems of underfunded institutionalisation. Mental illness remained a primary unresolved health

concern well into the 20th century, and this paved the way for the discovery of a new generation of treatments.

A number of serendipitous discoveries in the 19th and 20th centuries were responsible for a revolution in drug development that was indispensable for


a jumpstart in the evolution of psychiatric treatment. Several separate chance discoveries revolutionised our way of thinking about mental illness, shifting our approach to its treatment. Some of the more notable developments were those of antibiotics, the antipsychotic chlorpromazine and the mood stabiliser lithium. Whilst trying to produce quinine, the young English chemist William Henry Perkin accidentally synthesised the first artificial dye in 1856, a discovery which led to the beginnings of the dye industry. After further study of this discovery and the advent of organic chemistry, the same dye companies expanded their work to pharmaceuticals, leading to the development of the drug industry. The discovery of the first antibiotic, penicillin, by Alexander Fleming in 1928, although not directly related to psychiatry, led to a general boost in drug research efforts. It provided evidence that a drug could target a single aspect of a disease and modify it, a concept which had not been much explored up to this point. In addition, it was found that syphilis, a bacterial infection with neurological manifestations, could be cured by the administration of antibiotics, relieving both its psychiatric and physical symptoms. From this point, a shift began in the medical field: if antibiotics could selectively eradicate bacterial infections, why couldn’t different specific drugs, dubbed ‘magic bullets,’ be developed for other illnesses? And if antibiotics could cure the mental symptoms of syphilis, why couldn’t they do the same for other mental disorders? Thus began the search for pills to treat mental illness. Towards the middle of the 20th century, two important drugs that are still in use today were developed – chlorpromazine and lithium. Chlorpromazine is regarded as the drug which sparked the revolution of medical treatment in psychiatry. The class of compounds to which chlorpromazine belongs (the phenothiazines) was originally used for making artificial dyes. During a search for a ‘magic bullet’ for various disorders, this class of compounds was also found to have properties which could reduce inflammation following surgery. When their sedative properties later came to light, their use as a safer adjunct to anaesthesia was considered. The more powerful derivative chlorpromazine was then created in an attempt to augment their anxiolytic properties. It was soon seen to provide the same calmness as lobotomy in psychiatric patients, though less permanent

and invasive than irreversible surgery. Its potential was recognised and it therefore became one of the first antipsychotics. Lithium was discovered during an experiment in which the Australian psychiatrist John Cade injected urine from mentally ill patients into the abdomen of guinea pigs. More of the animals died when injected with urine from these patients than with urine from healthy controls, leading him to believe that a higher concentration of uric acid in the patients’ urine was the culprit. After difficulties testing this hypothesis due to the low solubility of uric acid, the more soluble lithium urate was used instead, which surprisingly led to fewer deaths as well as sedation of the subjects. Safety tests on himself and efficacy tests on mentally ill patients proved that lithium possessed outstanding calming properties and effectively treated mania. Lithium became known as a powerful mood stabiliser and is still in use today. Treatments such as these were to become the foundation of a new toolkit for the treatment of psychiatric disorders in asylums and later in the general public. Additional important discoveries were made, including that of benzodiazepines (a powerful anti-anxiety treatment) and muscle relaxants, as well as the first generation of antidepressants.

In contrast to other fields, the development of psychiatric treatments seems to have followed a top-down approach. Instead of using a hypothesis to generate treatments and cure illness, treatments were created and theories about the cause of diseases were later suggested based on the effects of each drug and how they appeared to modify illness. Over the following few years, additional treatments were developed, though there remained a lack of understanding of these very same diseases which clinicians were attempting to treat. With the increasing popularity of these therapies, it became crucial to understand exactly how they generated their effects. It was around the 1950s that scientists finally began to understand how brain cells communicate with each other, and an understanding of the mecha-

nistic functions of the brain began to develop. For example, the discovery of neurotransmitters, the chemicals used by brain cells to communicate with one another, occurred in large part due to the drive to understand the effects of medications such as chlorpromazine in psychiatric patients. This comprehension of an essential component of brain function has been pivotal to neuroscience research, and has prompted a host of subsequent discoveries across several fields. From this emerged several theories that diseases such as depression and schizophrenia could be caused by an imbalance of neurotransmitters such as serotonin and dopamine respectively. These theories are still prominent, albeit somewhat controversial, today. Even several decades later, a full understanding of the illnesses that psychiatric medications are known to treat is still lacking, and almost all drug classes discovered by chance remain in use. The range of conditions being characterised at an extremely fast rate created the need for more stringent diagnostic criteria to separately diagnose and treat these illnesses. This gradually led to the coining of terms such as ‘depression,’ ‘anxiety’ and ‘schizophrenia.’ The distinction between different subtypes of mental illness has led to further understanding of their symptoms and presentation, as well as the gradually increasing acceptance of their existence in medicine and society. Today, in addition to psychiatric medications, an increasing range of cognitive and behavioural therapies targeted to individual patient needs are utilised. Interestingly, although there is still much to discover about psychiatric illness, new issues with mental health treatment are coming to light, with the risk of overdiagnosis and overmedication of patients. Thus, it is clear that in part due to serendipity, psychiatry has progressed hugely over the past few decades, with a large proportion of patients currently receiving adequate and ever-improving treatment for mental health conditions. Yet there remains a dire need for a better understanding of these diseases in order to develop targeted treatments. We may need some more lucky strikes to get to the bottom of these diverse illnesses. Fiona Ramage is a recent graduate of the MSc by Research in Integrative Neuroscience programme




Fingerprints from the birth of the universe Johanna Vos explores the serendipitous discovery that kickstarted the era of observational cosmology Just over 50 years ago, Arno Penzias and Robert Wilson sat in a plastic shed in northern New Jersey listening to a relentless hum through their radio antenna. They had spent months unsuccessfully attempting to rid their observations of this maddening noise and were beginning to lose hope. Little did they know, they were listening to the last remnants of the birth of the universe. At the time, there were two main theories regarding the origin of the universe. Steady state theory held that the universe is in a constant state of expansion, with no beginning or end in time, whereas Big Bang theorists challenged the idea of an unchanging, eternal universe and believed that it arose spontaneously 13.7 billion years ago. The discovery of the Cosmic Microwave Background (CMB) provided robust evidence that the Universe had sprung from a hot, rapid explosion that created all matter, a result that steady state theory could not explain. The universe has been expanding and cooling ever since. The CMB, now at a temperature of just 2.7 kelvin, or about -270°C, is the last burning ember of its violent beginning. Since its initial discovery, observations of the CMB have provided a unique insight into the origin and fate of our Universe. In 1964, Penzias and Wilson were using a leftover 20-foot-long horn-shaped antenna to measure radio waves emitted by hydrogen gas in the Milky Way. However, their readings were plagued by a persistent ‘microwave noise’. The pair began investigating possible causes, and ruled out the urban interference of New York City as well as military testing before discovering pigeons nesting inside the antenna. A few unsuccessful bird-catching attempts later, they managed to rid the antenna of wildlife, but despite their efforts, the hum rumbled on. Through a chance meeting with a fellow physicist on a plane, Penzias got word of a manuscript written by a Princeton researcher named Jim Peebles along with his supervisor Robert Dicke. This paper argued that if the universe had been so dense and hot in its earliest moments, then there should be leftover radiation from this initial hot period. This


Image Courtesy of NASA

signal should be at the very temperature of their so-called noise. Dicke and his team had just finished constructing their own radiometer to measure this signal when the call from Penzias came. “Well boys, we’ve been scooped,” Dicke sighed. Both teams published companion letters in the Astrophysical Journal in May 1965, with Penzias and Wilson adopting possibly the most understated title of all time: A Measurement of Excess Antenna Temperature at 4080 Mc/s. But behind those banal words was a discovery that kick-started modern cosmology as we know it, and earned Arno Penzias and Robert Wilson the 1978 Nobel Prize in Physics. With the constant development of new instruments, theorists and astronomers have gone on to explore the CMB in ever-increasing detail. In the 1970s, researchers, including Jim Peebles, realised that there should be some structures, or ‘anisotropies’, present in the CMB that could tell us about the fundamental properties of the universe. These anisotropies would appear as tiny density and temperature perturbations – clumps that would eventually grow to produce the galaxies we observe today. A measurement of this faint pattern would reveal important properties of our universe, including its density, age, geometry, and even its fate. In 1989, NASA launched the Cosmic Background Explorer (COBE) satellite to take precise measurements of the CMB. After four years of operation, the satellite managed to map out the detect-

able temperature pattern of the CMB, hailed as the fingerprint of the Big Bang. Next came BOOMERanG (Balloon Observations Of Millimetric Extragalactic Radiation And Geophysics), a balloon-borne telescope which provided a map over 40 times more detailed than the COBE map and later also measured the polarisation structure of the CMB. An exciting announcement came in 2014, when the BICEP2 consortium (Background Imaging of Cosmic Extragalactic Polarization) announced that their South Pole telescope had detected the first evidence of ‘cosmic inflation’, a brief period immediately after the Big Bang during which the universe expanded exponentially. However, within a couple of months the team had withdrawn their original claims, citing interstellar dust as a contaminant. Astronomers are still battling the elements at the South Pole telescope, searching for the signature of cosmic inflation. CMB research is now a well-established field, and thanks to modern technologies we are in the era of making high-precision measurements to find specific properties of the Big Bang. Since its initial discovery, the CMB has revealed fundamental properties of the universe on a scale never before imagined. Its accidental discovery in 1965 is truly one of the most important scientific breakthroughs of the 20th century. Johanna Vos is a PhD student investigating weather phenomena on exoplanets



Blood and bone: uncovering hematopoietic stem cells Scott Dillon explores the discovery of haematopoietic stem cells and how they have informed the future of medicine Your whole body is one big mishmash of cells. For the most part, these cells have a strictly defined job and work cooperatively with their neighbours to perform all of the diverse functions that keep us happy and healthy. Most cells look and behave very differently from each other. Under normal circumstances one type of cell can’t take over the job of another if it calls in sick; a brain cell wouldn’t have a clue what to do if it found itself in the kidney. The origin of these different cell types puzzled biologists for many years. We now know that all of these cells are derived from a small population of special individuals who are the multitaskers of the body, known as the stem cells. Stem cells are unique as they are able both to make many copies of themselves and to differentiate into the varied and diverse cells that do the day-to-day job of keeping us together. The prolific German biologist and naturalist, Ernst Haeckel, first coined the term ‘stem cell’ (or ‘Stammzelle’ in German) in 1868, when referring to a hypothesised common ‘Mother’ cell. Extensive research in the late 19th and early 20th centuries refined this idea by introducing the concept of totipotent and multipotent stem cells: those which can differentiate into any cell imaginable, and those which can give rise only to a certain subset of cells. Research in this area has seen explosive progress in recent years, and much of its success stems, if you will, from work investigating the multipotent progenitor (the ‘Mother’) of the blood cells – work that had rather surprising origins. Our blood contains many different cells, including the oxygen-carrying erythrocytes (red blood cells), and the leukocytes (white blood cells), our personal soldiers against infection. Interestingly, understanding how those cells are produced (a process known as haematopoiesis) suddenly became hugely important during the Second World War. The development of the atomic bomb brought a level of destruction to the Japanese cities of Hiroshima and Nagasaki that was unprecedented in human history. Exposure to high doses of radiation caused devastation of the blood cells, especially the leukocytes, and

subsequent vulnerability to life-threatening infections. Haematologists (the scientists and clinicians who study blood) were therefore inspired to devise new treatments for radiation sickness.

Following WWII it was found that transplanting undamaged bone marrow into patients replenished their stock of blood cells. Nevertheless, the mechanism remained a bit of a mystery, as the progenitor cells themselves were elusive. In 1961 the Canadian scientists James Till and Ernest McCulloch were the first to experimentally demonstrate a common haematopoietic stem cell (HSC), housed in the bone marrow, which gives rise to all of the body’s mature blood cells. However, they came across this finding quite by accident. During the height of the Cold War the pair were working on a method of measuring how sensitive bone marrow was to radiation damage, and whether this limited its ability to produce blood cells. When mice were irradiated and their bone marrow destroyed, the transplantation of undamaged bone marrow cells resulted in the formation of nodules on the spleen, which contained not only dividing stem cells but also cells actively differentiating into different types of blood cells. More extraordi-

nary than that, when the cells from these nodules were transplanted into other irradiated mice, new nodules developed. These cells were therefore both self-renewing and able to differentiate – true ‘Stammzelle’. While this finding was a shock, it represented a revolution in stem cell science and informed theories that shaped the field for decades to come. Since their discovery, the nature of HSCs has fascinated scientists and they have become an immensely versatile tool in the clinic. Haematopoietic stem cell transplant (HSCT) uses the stem cells of the bone marrow to treat a wide variety of pathological blood conditions, most notably leukaemia. Leukaemia is a group of white blood cell cancers that cause a deficiency in the ability of the leukocytes to fight infection. These patients are given HSCT to replenish their blood with healthy cells and re-establish their immune systems following chemotherapy and radiotherapy. This kind of personalised treatment and stem cell therapy could propel us into a new age of medicine, all thanks to the nuclear bomb and a little bit of luck. Scott is a first year PhD student studying bone biology with the Farquharson Group at The Roslin Institute

Illustration by Prerna Vohra




Cosmic heartbeats: the discovery of pulsars Katie Ember tells the story of the discovery of a new type of star The year was 1967, and the technicolour hues of mini dresses, Mini Coopers and muscle cars were turning heads in streets worldwide. Sergeant Pepper was released, Che Guevara was captured, and Twiggy and Rolling Stone were making their debuts around the globe. Though it was a well-known fact that peace was cool, relations between the USA and the Soviet Union were frigid, and China had just tested its first hydrogen bomb. But, in a secluded room in Cambridgeshire, oblivious to this fiercely changing world of spies, supermodels and superpowers, a figure sat in quiet contemplation over a ream of paper 121.8 metres long. In the days before the silicon chip, this was how science was done. Paper containing data was churned out from a measuring instrument, in this case a radio telescope at Cambridge University’s Mullard Observatory, for manual analysis. At this moment, with a pencil in hand, the young astrophysicist had noticed something strange: data that shouldn’t be there. At 24 years of age, this was a scientist on the brink of a Nobel Prize-winning discovery. Sketched in the ink of the analogue recording system were measurements that told the tale of a cosmic clock the size of a city. Jocelyn Bell Burnell was staring at the first signs of a new type of star: the pulsar. Born in County Armagh, Northern Ireland, Bell Burnell was considered something of an anomaly in the cloistered world of Oxbridge science: she was a woman and a Quaker. Her parents had had to fight for her to be allowed to study science at school and, as one of only a handful of girls in her class, she had time and again come first in her year in physics. This love for science had propelled her through a degree in Natural Philosophy at Glasgow University, and now she was going to put that knowledge into practice. During her PhD with Antony Hewish, Bell Burnell had helped construct the 81.5 megahertz radio telescope, located in one of Cambridgeshire’s iconic green fields. Unlike the tubular brass-and-glass telescopes of Galileo’s time, their device was an array of post-shaped detectors connected by 120 miles of signal-transmitting wires. Sledgehammering, making transformers, and installing cables were


all in a day’s work for Bell Burnell and the other students. The finished set-up was spread over an area equivalent to four and a half football pitches, and was capable of collecting radio signals from the far reaches of space. So, what was this vast device searching for? Many stars emit forms of radiation other than visible light. There is a whole spectrum of electromagnetic radiation, from radio waves with metre-long wavelengths to X-rays and gamma rays with wavelengths the size of atoms. Hewish and his team had built their telescope to investigate quasars: mysterious celestial objects which emit radio waves and are associated with black holes.

Every four days, Bell Burnell was responsible for analysing the river of paper her telescope spooled out, one tenth of a kilometre long. It would have been all too easy to be lax about analysing such a mountain of data, but Bell Burnell was meticulous, partly in an effort to overcome her ‘imposter syndrome’. Young scientists often feel less clever than those around them, believing they have been mistakenly admitted to their post. Bell Burnell was no exception: “I took the conscious decision to work as hard and as thoroughly as I could, so that when they threw me out, I wouldn't have a guilty conscience.” And so it was that Bell Burnell noticed a signal behaving in an extraordinary way. The intensity of quasars varies randomly; they twinkle like visible stars as their radio waves pass through clouds of charged particles streaming from the Sun. But Bell Burnell had found a source of radio waves that changed periodically. It seemed to be sending out intense bursts of radio-frequency radiation –

on-off-on-off every 1.3 seconds – resembling the heartbeat of a celestial giant. No one had ever seen anything like it. Bell Burnell took her results to Hewish, who was convinced that the pulse rate was much too rapid for a star; the signals must be man-made or an instrument error. Further investigation showed that they came from outer space, from something planet-sized or smaller. The only explanation Hewish’s group could think of was that this was a signal from an alien civilisation, but this was too bizarre to contemplate. However, it wasn’t long before Bell Burnell had discovered three more similar sources. At last, clarity. The likelihood of there being four alien planets simultaneously attempting to communicate was vanishingly small. When Hewish presented their findings to other astronomers, Professor Fred Hoyle suggested that the pulses could be emanating from neutron stars – the collapsed remnants of supernovae. When stars run out of fuel, they die in a mass-dependent way. Smaller stars (like the Sun) burn until they form a carbon core known as a white dwarf, but much larger stars can keep burning past this stage. Huge gravitational forces at the star’s centre allow it to fuse elements all the way up to iron, which releases no energy through further fusion. The star then collapses inwards, compressing the iron core until it rebounds, sending out a shockwave and resulting in a gargantuan explosion. The shockwave has enough energy to generate most of the elements in the periodic table. Gold, platinum, and uranium are all blasted away from the core in the star’s epic final breath. This stellar bomb is a supernova and its light can outshine the rest of the galaxy for a month. Meanwhile, in the crushed core, protons and electrons are squeezed together to form neutrons. For extremely large stars, this densely-packed ball of neutrons collapses further, forming a black hole. But the cores of less massive stars don’t suffer the same fate; instead they continue to exist as neutron stars. Despite having the same mass as a regular star, a neutron star is about 30 kilometres wide, the diameter of a city. Next to



Illustration by Antonia Banados

black holes, they are made of the densest substance in the universe – one teaspoon would have the same mass as Everest. Expanding on Hoyle’s theory, the US physicist Tom Gold suggested that pulsars don’t actually pulse; they rotate, sending out a steady stream of radiation from two diametrically opposite points on their surface. When either point faces us, we detect a strong radio signal, but at all other times the pulsar appears dark – just like the sweeping beam of a lighthouse. And as some pulsars emit almost 1000 signals every second, these stars must be spinning on their axes hundreds of times every second – roughly once per millisecond. Far from being mere curiosities, these wondrous by-products of stellar death have proven to be invaluable tools for us. The constant flashing signal from pulsars makes them one of the most accurate time-keepers in the universe; millisecond pulsars can rival atomic clocks in their accuracy. Astronomers frequently rely on pulsars as cosmic timing devices and they were used to provide the first

evidence for gravitational waves. Bell Burnell looks into the future, when they will truly act as lighthouses: “When we travel through the galaxy in spaceships, pulsars will serve as navigation beacons, giving us a fix on where we are in space.” In 1974, the first ever Nobel Prize awarded for astronomy went to Hewish and Martin Ryle, the head of the Cavendish radio astronomy laboratory, in part for the work on pulsars. Although these two men had been instrumental in laying the foundations of our understanding of these stars, Jocelyn Bell Burnell irrefutably made the initial discovery. She detected not one, but four pulsars, and dedicated three years to the building of the original telescope. The decision to award the Prize to the senior scientists and not to her remains extremely questionable. To this day, the crystallographer Dorothy Hodgkin remains the only British woman to have been awarded a Nobel Prize for science. Yet, 50 years later, Professor Jocelyn Bell Burnell continues to blaze a trail through the field of astrophys-

ics. Though she has since worked in almost every region of the electromagnetic spectrum, her first discovery still holds a grip on her: “My primary interests remain compact stars: neutron stars, sometimes black holes. However, I am watching what the new field of transient (high time resolution) astronomy will deliver.” She is President of the Royal Society of Edinburgh, Dame Commander of the Order of the British Empire, holds numerous professorships and has won countless awards, including the Royal Society’s Royal Medal. Not only has Bell Burnell added to the diversity of known star types; she also works to advance the role of women in science. Like all scientists, Bell Burnell is constantly looking to the future, and the future of science lies in the hands of those who make the discoveries. Katie Ember is a second year PhD student at the University of Edinburgh using spectroscopy to detect liver damage non-invasively



Curing cancer: a perfect storm Angela Downie explores breakthroughs and chance factors behind the advancement in treating testicular cancer Cancer has become ever-present in our society. The latest available WHO statistics indicate that in 2012 there were 14.1 million new cases diagnosed and 8.2 million deaths attributed to cancer worldwide. With an increasingly ageing population, these numbers are not about to slow down. In fact, a study published in the British Journal of Cancer last year found that 1 in 2 people are expected to develop cancer during their lifetime. These numbers paint a bleak picture, but there is hope. In the past 40 years, survival rates in the UK have doubled and now stand at 50%. However, this can prove to be a misleading number since survival rates vary enormously depending on the type of cancer. Despite substantial efforts, we have not managed to improve survival rates for either lung or pancreatic cancer in the past 40 years, with 10-year survival rates below 5%. However, better progress has been made in other types of tumours. In 1970, only four in 10 women with breast cancer would be alive 10 years after their diagnosis, but that figure is now closer to eight in 10. Similarly, survival rates for prostate cancer have tripled in the past 40 years. Thyroid cancer and melanoma join these ranks, with practically all thyroid cancer patients under 50 being cured and melanoma 10-year survival rates now surpassing 90%. But against no other type of tumour have our efforts shown as much success as in the case of testicular cancer, where a combination of hard work, optimal conditions and one very lucky breakthrough has crystallised into remarkable results in treatment and survival.

Image from Wikimedia Commons


Although there is always hesitation in using the word cure in relation to cancer, testicular cancer is indeed becoming a curable disease. Survival rates are now 98%, and when the disease is caught in its early stages they are practically 100%. This is incredible in itself, but it is in the treatment of late stages where testicular cancer really outperforms the rest. Even at its last stage, when the cancer has spread, survival rates are still at 73%. As a comparison, when breast cancer progresses to its last stage, survival rates drastically drop from 72% to 22%. We are winning the battle against testicular cancer; even in the very worst-case scenarios, with the odds stacked against us, we are curing three out of every four people. But why have we been able to successfully tackle testicular cancer whilst barely making a dent in lung cancer? A large part of this is due to the chance discovery of a remarkable little molecule called cisplatin, sometimes referred to as a ‘magic bullet’. This, together with a variety of other circumstances and factors, has combined to produce a perfect cancer-curing storm. Cancers can be very good at hiding and camouflaging, making it difficult to find them when they are buried deep within our body. However, certain cancers, such as melanoma or breast cancer, show themselves as moles and lumps. The same is true of testicular cancer, where most cases are first discovered by the patient, either as a lump, hardening of the testicles, or pain. Increased awareness of this has played a crucial role in reducing deaths. However, testicular cancer additionally reveals itself in a quieter yet still

very important way. It alters the levels of some molecules in the blood. This is important because looking at the levels of three specific molecules (alpha-fetoprotein, human chorionic gonadotropin, and lactate dehydrogenase) can alert us to the presence of a tumour before we can see it. The levels of these molecules can also provide important information about the cancer that permits the design of an effective treatment strategy and the surveillance of cancer recurrence after treatment. Having an efficient surveillance mechanism means that if the cancer returns it is caught very quickly and stopped in its tracks.

Cancers can be very good at hiding and camouflaging, making it difficult to find them when they are buried deep within our body

Another contributing factor is how effective surgery is in the early stages. Orchiectomy (removal of the testicle) is the most effective way to treat testicular cancer that has not yet spread, and can be accompanied by the removal of surrounding lymph nodes if the cancer has begun to spread. Although this sounds extreme, it is rare for patients to have tumours on both testicles, so in most cases surgery does not impact the patient’s ability to have children. Furthermore, once the tumour is removed it can be studied, providing information about its individual characteristics that will help doctors create an effective game plan. Surgery itself is often enough to deal with the tumour, but when necessary it can be complemented by radiation or chemotherapy, which have proven to be exceptionally effective in treating this type of cancer. These approaches are also extremely important when dealing with late stage cancers that have begun to spread. Radiation is used to treat a certain type of testicular tumour known as seminoma, which has proven to be extremely sensitive to radiation therapy. Non-seminoma tumours,



Image Courtesy of Melvin & Bren Simon Cancer Center

on the other hand, are preferentially treated with chemotherapy. Enter cisplatin: the miracle game changer accidentally discovered by Dr Barnett Rosenberg at Michigan State University in 1965. Dr Rosenberg was not searching for a miracle cancer molecule. His interest was in how an electric current affected dividing bacterial cells, an idea that arose from looking at images of dividing chromosomes and thinking they resembled the pattern of iron filings around a magnet. His idea was to use electric currents to generate a magnetic field and hopefully manipulate the way the cells divided. To generate this field he used platinum rods, which he had chosen because he expected that they would have no effect on the bacterial cells. However, the exact opposite proved to be true. Unexpectedly, when electricity was passed through these rods, a platinum-based molecule that we now call cisplatin was released into the bacterial culture and stopped the cells dividing; when the electricity was turned off and cisplatin was no longer released, they resumed division. It took Dr Rosenberg two years to realize that the observed effect was in fact due to a compound released from the rods and not the electric charge itself, and a further two years to identify cisplatin as the responsible molecule. After this, a great effort was put into proving that the same effects

seen in bacterial cells could also stop cancer cells from dividing, first in mice, then in dogs and monkeys, and finally in the first trials in humans in 1972.

While not impossible, it is unlikely that we will come across another game changer like cisplatin

Cisplatin has been revolutionary in cancer treatment in general and is currently used to treat a variety of different cancers in combination with other drugs. However, it has proven far more effective in combating testicular cancer than any other type of cancer, for reasons that remain unknown. Understanding this discrepancy in cisplatin’s efficacy is of great interest to researchers and is fundamental to extending its success beyond testicular cancer. We should look to the treatment of testicular cancer as a reminder that curing cancer is not out of reach. This is a battle that we continue to win through a lot of hard work and a little bit of luck. While it is true that a series of unique factors surrounding testicular cancer have made it easier to beat, this doesn’t mean that

other cancers can’t follow suit. In fact, the treatment of prostate cancer has benefited enormously from early screening and is quickly following testicular cancer onto the list of cancers we might consider curable. Although other cancers might not have numbers that quite match up to the impressive 98% survival rate, huge strides have been made in very different areas of cancer treatment. The immunotherapy-based drug Herceptin has improved 10 year survival rates for breast cancer by 40%. In the past 20 years survival for colorectal cancer has doubled, largely due to improved surgical procedures. Cervical cancer survival has increased from 46% to 64%, mainly because of improved awareness and screening techniques. While not impossible, it is unlikely that we will come across another game changer like cisplatin. Future advances in bolstering cancer survival rates are going to be the sum of lots of little improvements in treatment and diagnosis over time. It is going to take patience and hard work, but now we know it can be done.

Angela Downie is a second year PhD student in Molecular and Cell Biology



Natural product drug discovery: Reaching for the low hanging fruit

Carlos Martínez-Pérez explores the potential of natural chemical diversity for modern drug discovery

We live in the era of modern medicine and it cannot be denied that medical practice and pharmacology have come a long way in the last century. Infectious diseases that would have been akin to a death sentence in the early 1900s are now easily treated with antibiotics, or have been eradicated altogether and are considered a thing of the past. Although we are still working hard to find cures for many other grave illnesses, modern doctors are now able to offer better treatments, and many more people are surviving or living longer with conditions such as cancer, diabetes, and cardiovascular disease. In the last few decades, these medical advances have even allowed us to shift part of the attention from just curing disease to focusing on disease

Illustration by Marie Warburton


prevention, well-being and longevity.

Almost half of all new drugs developed since the 1940s have a natural origin

Despite these vast improvements, our need for drugs to treat both known and newly emerging diseases is not yet met. Whether to combat resistance to current therapies or to strive for better survival and fewer side effects, scientists and clinicians will continue to look for the next great treatment alternative.

But this is not a straightforward task, as the process of creating a new pharmaceutical drug takes well over 10 years, and according to a 2014 report, can now cost more than a staggering £2 billion. So how should we go about finding new treatments? We may just need to look up from our screens and go for a wander in the park. To quote Baloo the bear from the Jungle Book, sometimes it’s Old Mother Nature’s recipes that bring the bare necessities of life. Although Mowgli’s furry companion was referring to honey, he may have been on to something. Before the naysayers begin drafting their emails to the editor, I should make it clear that I am not talking about homeopathy and we are still to find magical stones that will


relieve your toothache if placed under your pillow. I am talking about finding and taking advantage of natural molecules that have a function in plants but could also be useful in a hospital ward. Natural compounds have been the source of countless remedies in traditional medicine across civilisations for millennia. Only 30 years ago, the World Health Organisation (WHO) estimated that about 65% of the world’s population still relied on these traditional treatments for their primary health care. Some of these natural products act as prodrugs that can be directly transformed into active compounds by the human body. The most famous example is probably aspirin. Its scientific name, acetylsalicylic acid, comes from the Latin species denomination for the white willow tree (Salix alba). Extracts from the bark were traditionally used to treat pains and fevers. We now know that this bark contains the glucoside salicin, a pro-drug that our gut and liver can transform into the active compound. Using this knowledge, we can easily produce tablets of acetylsalicylic acid itself that act faster and more effectively than salicin and voilà, there goes your toothache! Sometimes the process is not as simple as that, but we can still exploit natural structures for drug design. Many natural compounds have molecular structures that, when placed in the right chemical scaffold, or ‘backbone’, can be exploited for their pharmacological activity. These structures are referred to as pharmacophores, and the new compounds obtained by copying, pasting and modifying them are called semi-synthetic drugs. Developing new effective therapies in this way can still be very complex, but nature has had 3 billion years to refine its chemistry and we would be naive to miss out on all that molecular diversity. It may not be the original recipe, but a pinch of those natural ingredients and a healthy dose of scientific research may just do the trick. Many of us don’t realise it, but semi-synthetic drugs are already a very important part of modern medicine, as almost half (47%) of all new drugs developed since the 1940s have a natural origin. Currently, plant-derived drugs take up about 25% of the prescription drug market and produce a turnover of several billion pounds every year. Examples range from over-the-counter drugs you may find in your medicine cabinet to novel therapies used in operating theatres or administered as chemotherapy for

cancer patients. Some of these semi-synthetic medicines are steroids, painkillers, anaesthetics and muscle relaxants, or drugs to treat cough, hypertension and infectious diseases such as malaria. The area with the most examples of plant-derived compounds used in clinics all over the world is cancer therapy. In particular, many chemotherapy drugs were first derived from natural compounds. Extracts from the flowering plant Madagascar periwinkle (Catharanthus roseus) were the source of the vinca alkaloids. Epipodophyllotoxin in the bark of wild mandrake (Podophyllum peltatum) led to the development of etoposide and teniposide. The drugs topotecan and irinotecan were derived from camptothecin, which is found in the stem of a tree (Camptotheca acuminata) used in traditional medicine in China, where the plant is whimsically referred to as the happy tree or even the cancer tree. Many of these drug names mean nothing to most of us, but all of them are really effective against different types of cancer (including leukaemia, lymphoma, brain, ovarian and lung tumours) and some are even included in the WHO’s list of essential medicines.

Nature has had 3 billion years to refine its chemistry

The best example of a plant-derived drug is paclitaxel, an anti-cancer drug first discovered in the 1970s. Since then, paclitaxel and its close derivatives have become essential treatments for breast and ovarian cancer due to their effectiveness at interrupting the division of cancer cells. Paclitaxel is also known as Taxol, a name it receives because it can be found in the bark and leaves of different species of yew trees (belonging to the Taxus genus). This nature-derived drug brings in sales of over $1 billion in the United States alone and helps save millions of lives every year. After decades of developing and using semi-synthetic drugs, why are we only talking about them now? As it turns out, this back-to-basics approach to drug development has recently become trendy again. In October 2015, Sweden’s Karolinska Institutet decided to award the Nobel Prize in Physiology or Medicine to eminent researchers whose discoveries of natural drugs had a revolutionary impact on the fight against parasitic diseases. The Nobel

Prizes went to William C. Campbell and Satoshi Ōmura for their discovery of the avermectins and to Youyou Tu for her discovery of artemisinin. Avermectins have been used to reduce the incidence of onchocerciasis (river blindness) and lymphatic filariasis (elephantiasis), while artemisinin has greatly reduced mortality from malaria. These are great examples of natural products with unique structures and chemical properties that were first identified for their potential therapeutic applications and then optimised to develop new semi-synthetic drugs with significant clinical applications.

Experts are talking of a New Golden Age of natural products drug discovery

It may not have caused as big a splash in the mainstream media as Bob Dylan’s recent Nobel Prize for Literature, but the award for Campbell, Ōmura, and Tu certainly did not go unnoticed in the field of pharmaceutical discovery. Since the late 1990s, modern drug discovery strategies have developed at record pace. However, this new era of molecular engineering, robotic methods and high-throughput screening of synthetic compound libraries has also coincided with a decline in the discovery and approval of new drugs, which reached a decades-long low in 2007. The decision of the Nobel committee has brought the continuing importance of natural molecules as a reservoir for novel therapies back into the limelight. Additionally, the pharmaceutical industry is returning to natural compounds as a source for developing modern commercial medicines. Natural products still hold great untapped potential that could provide new therapeutic tools with better activity and fewer side effects. A large number of compounds derived from natural products are currently being studied and are undergoing clinical trials. In fact, experts are talking of a ‘New Golden Age’ of natural products drug discovery. It seems that in pharmaceutics, as in most things in life, Mother (Nature) knows best. Carlos Martínez-Pérez recently completed a PhD in cancer research




The discovery of Viagra: how a side effect became a billion dollar industry

Natasha Tracey looks at the discovery of Viagra

Sildenafil, more commonly known as Viagra, is a drug that has changed the landscape of erectile dysfunction treatment. It was synthesised with the intention of treating angina, which is caused by a buildup of plaque in the arteries serving the heart, resulting in chest pain. However, the scientists who developed this drug had no idea how much further their discovery would go. Erectile dysfunction is a condition that affects 52% of men globally, and becomes increasingly common with advancing age, affecting 70% of men aged 70 and over. It has many causes, reflecting the complex processes involving the brain, nervous system, muscles, and the blood vessels in the penis that dilate to increase blood flow and cause an erection. Although physiological issues are often responsible for erectile dysfunction, it can also be caused by psychological problems, or a combination of the two. Prior to the discovery of Viagra, the only treatments for erectile dysfunction were incredibly invasive and painful. During the 1980s, when erectile dysfunction was first recognised as a disorder that was not just psychological, urologists began to treat patients with injections of drugs to cause vasodilation – relaxation of the blood vessels – to allow blood to flow more easily to the penis. Viagra was designed to relax obstructed arteries in the heart, relieving the pain associated with angina by allowing enough oxygen to get to the heart muscles. It does this by inhibiting phosphodiesterase 5 (PDE5), an enzyme also present in the blood vessels of the penis. During sexual arousal, cyclic guanosine monophosphate (cGMP) is produced, which causes the blood vessels in the penis to relax and blood flow to increase. PDE5 breaks down cGMP, allowing the blood vessels to contract again. By blocking PDE5, Viagra prevents this contraction, meaning the blood vessels remain dilated and the erection is maintained. Initial trials of the drug were very promising. Viagra showed good selectivity for PDE5 over the other PDE enzymes (PDE1-4) when it was tested in cells in vitro, and was potent at a low dose, which meant fewer side effects in patients. In preclinical trials, it proved to be effective in reducing the obstruction of arteries in the heart. Therefore, it eventually progressed to clinical trials in humans without any indication of its rather interesting side effect. The first clinical trials to test Viagra for the treatment of angina were carried out on healthy volunteers, and reported good results with regard to vasodilation. Further clinical trials were carried out to find how the drug acts and how it is metabolised in the body at various doses. It was at this stage that reports came in of side effects, such as flushing,

headaches and vision disturbances when Viagra was taken for several days. Penile erections were also a side effect, but this was initially disregarded.

Erectile dysfunction is a condition that affects 52% of men

After two years of trials, the drug’s ability to relieve angina was looking less promising than the scientists had originally hoped, due to the short half-life of Viagra. To be effective, it would have to be taken three times a day to maintain sufficient levels to have any effect on angina pain. But the drug developers didn’t lose hope, and it was at this point that the idea of using Viagra to treat erectile dysfunction was raised. Clinical trials for the use of Viagra as a treatment for erectile dysfunction began in the mid 1990s, with very promising results. By the turn of the century, Viagra was in widespread use, and scientists were beginning to understand the molecular causes of erectile dysfunction, based on the drug’s mechanism of action. Since the 1980s, Viagra has gone from discovery to being one of the most prescribed drugs for erectile dysfunction, and one of the highest-selling drugs in the world. The accidental discovery of Viagra has led to a change in the way we think about and treat a condition affecting over 50% of men. This has helped to remove the stigma associated with erectile dysfunction, and changed the lives of millions for the better. Today, Viagra is even used by mountain climbers to combat the effects of low oxygen. This is all thanks to a little blue pill. Natasha is a second year PhD student at the University of Edinburgh

Image from Wikimedia Commons




Scans from ‘healthy’ volunteers reveal serendipitous findings: a blessing or a curse?

Lorna Gibson explores the challenges of handling incidental findings from imaging research

Scanning technologies, such as magnetic resonance imaging (MRI), ultrasound, bone density scanning, computed tomography (CT) and positron emission tomography, are vital tools used by scientists to investigate human health and disease. The use of imaging in research is continuously rising, resulting in the increased serendipitous detection of potential health problems in healthy research volunteers: so-called ‘incidental findings’. The challenge of handling incidental findings is the subject of widespread debate, and needs to be addressed urgently in light of new large-scale population-based research imaging studies. One such study, the UK Biobank Imaging Study, is about to generate the world’s largest multi-scan imaging dataset. UK Biobank aims to perform MRI scans of the brain, heart, and body, ultrasound scans of the arteries in the neck, and bone density scans in 100,000 participants. Data from these scans will be combined with extensive data from participants’ physical measurements, lifestyle questionnaires, cognitive tests, blood tests, and health care records. This will enable scientists to investigate associations between a wide range of potential risk factors and serious diseases which burden public health, such as dementia, heart attacks, stroke, and cancer. Thus, these data will become an extremely valuable research resource; however, as with any imaging study, the process of collecting them may turn up incidental findings in research volunteers. Incidental findings may range in medical seriousness from a harmless fluid-filled cyst on the kidney to cancers. A team at the University of Edinburgh reported that researchers were likely to find one incidental finding in every 37 healthy volunteers undergoing brain imaging. Other studies report that incidental findings are detected in almost half of research volunteers. However, the medical seriousness of many of these findings is negligible or unclear, and so far there is little information on how often a medically serious incidental finding is detected in a volunteer. Providing information about the chance of detecting a serious incidental finding, and about the further tests and treatments that may follow, is crucial to enabling people to give informed consent to participate in a study.

Knowledge of incidental findings is often assumed to be beneficial, but often it is not quite as clear-cut

There is also very little information about how an incidental finding impacts research volunteers and health services. Incidental findings detected during research imaging may not be seen clearly enough to make a confident diagnosis of a disease, and as a result of this uncertainty, ‘healthy’ volunteers are often referred to their doctors for more tests, appointments, and procedures. Research imaging scans are tailored to help researchers answer a specific scientific question, such as measuring changes in brain connections, a technique not yet proved to be of use in hospitals. In contrast, medical imaging is tailored to help doctors find a specific disease to explain the cause of symptoms in a patient. For example, a patient suspected of having a brain tumour would have images taken after an injection of dye, which would accumulate in the tumour and make it easier to see its full extent. However, an injection of dye would not be given to a patient suspected of having a stroke, as a definitive diagnosis of a stroke can be made without dye, and it would not provide any additional useful information to the doctors. This difference between research and medical imaging may be misunderstood by the public: researchers from Stanford University found that over half of people who had research imaging done thought that it

would detect any type of abnormality. Alan Milstein, an attorney who has represented people injured by human research, argues that research imaging should always be performed in combination with medical imaging in order to confidently diagnose any incidental finding. However, the number of different tailored scans needed to confidently diagnose the wide range of possible incidental findings would be impractical, unaffordable, and, due to the length of time spent in the scanner, uncomfortable for participants. The impact of incidental findings on research volunteers will depend on their final diagnosis. Not all incidental findings turn out to represent serious diseases, and investigation of these may expose participants to risk, as well as overburden health services unnecessarily. For example, research imaging of the abdomen may show a kidney with a solid lump, but tailored medical imaging may then show that the ‘lump’ is in fact liquid: a benign cyst which is common and not at all serious. When an incidental finding turns out not to be a serious disease, any medical tests, appointments with doctors, or procedures such as biopsies or even surgery may be entirely unnecessary. Medical tests and procedures are also not without harm. Potential harms include ionising radiation from CT scans, bleeding after a biopsy, and time off work for participants to go to hospital. Even waiting for news of an incidental finding after participating in research imaging can cause harm, with almost half of volunteers experiencing distress during this time, according to researchers from the University of Greifswald in Germany. In rare cases, incidental findings do turn out to be serious diseases. Knowledge of these incidental findings is almost always assumed to be beneficial, but often it is not quite as clear-cut. According to a report commissioned by the Wellcome Trust and the Medical Research Council, members of the public often assume that advantages



Illustration by Alanah Knibb

of knowing about incidental findings outweigh the disadvantages. Members of the public may assume that finding a disease before it has caused symptoms will allow them to have treatment early, to their benefit. On the face of it, treating a disease before it causes symptoms does sound beneficial, but studies show that this may not always be the case. Malformations of blood vessels in the brain can result in bleeding, seizures and sometimes death. When these malformations cause symptoms, they can be treated by passing a wire through the arteries and blocking the malformed blood vessels with small metal coils. The benefit of reducing the symptoms in patients justifies the small risk of accidentally causing a stroke during the procedure. However, a 2014 study published in The Lancet found that treating people with brain blood vessel malformations who do not have symptoms resulted in more strokes and more deaths compared to people who were only treated when symptoms developed. Given the uncertainty around diagnosing an incidental finding, the risk of harm and the lack of clear benefit of medical tests and treatments, researchers must carefully consider how they handle incidental findings generated during an imaging study. It may be assumed that the best way to detect incidental findings and


make decisions on whether or not they are serious is to have all research images reviewed by radiologists. However, radiologists routinely look at medical rather than research imaging, and their ability to distinguish between incidental findings which will turn out to be serious disease and those which will not harm the volunteer is not known. Using radiologists may not even be practical: the shortage of radiologists in hospitals means that it is not feasible for them to also examine all research images to look for incidental findings, according to a report by the Royal College of Radiologists. In light of these gaps in knowledge of frequency, medical seriousness, and impact on volunteers and health services, and the uncertainty surrounding the benefits and harms of feedback, it is not surprising that methods of handling incidental findings vary around the country. At some research centres all images are reviewed by radiologists, specifically to look for incidental findings, whereas other centres do not have routine access to radiologists. Given the range of different types and sizes of imaging study, it is likely that the ‘best’ way to handle incidental findings is very dependent on the context of the study. Two of the UK’s major research funding bodies, the Medical Research

Council and the Wellcome Trust, now mandate that researchers design policies for handling incidental findings. They also provide guidance on the issues for researchers to consider when designing a policy. These include assessing the likely number and types of incidental findings, their likely medical seriousness and treatability, current knowledge of the benefits and harms of informing participants about incidental findings, and logistics such as the cost of administering feedback to participants. Handling incidental findings appropriately is paramount in order to limit potentially unnecessary medical tests and appointments with doctors, which burden both participants and health services. Explaining incidental findings policies and processes to research participants is crucial to maintaining public trust in research, according to Professor Jeremy Farrar, Director of the Wellcome Trust, and Professor Sir John Savill, Chief Executive of the Medical Research Council. Handling incidental findings well is therefore challenging, but crucial to enable scientific discoveries, serendipitous or otherwise, to continue. Lorna Gibson is a radiology registrar and second-year PhD student



The new fight against bacteria

Imogen Johnston-Menzies explains how scientists are hunting for new strategies to combat antibiotic resistance

Antibiotics are a remarkable natural phenomenon that humans have harnessed for medicine. They are the intrinsic defence mechanism employed by microbes to fight and compete with other bugs, attacking critical processes such as the synthesis of the bacterial cell wall or DNA. It is therefore not unexpected that resistance to these defences was present even before antibiotics were discovered. Resistance is the reality that modern medicine has to come to terms with and tackle. Currently, the statistics are dramatic: 23,000 people die from antibiotic-resistant infections each year in the United States alone. There appears to be no current prospect of a new broad-spectrum antibiotic being discovered, and if one were found, it is likely that intrinsically resistant bacteria would still find a way to spread and dominate. Therefore, the future defence against bacterial infections must come from a different source.

The newest concept in the fight against antibiotic resistance is anti-virulence drugs. These compounds target a large portion of the bacterial arsenal – virulence factors. Virulence factors are essential for causing disease, either by actively damaging tissue or by manipulating the host immune response. One of the most interesting examples of a virulence factor is cholera toxin, which is responsible for the diarrhoeal disease and dehydration associated with Vibrio cholerae, the bacterium that has caused global pandemics of cholera. Unlike antibiotics, which either kill bacteria or prevent them from growing, anti-virulence drugs are designed to target virulence factors, which are not essential for bacterial survival. Theoretically, therefore, they should not exert the same selective pressure for resistance that antibiotics do. The production of virulence factors is one of the most

Illustration by Alyssa Brandt

energetically costly processes for bacteria, so those that are ‘disarmed’ would have a growth advantage over resistant strains. With this understanding, there have been claims that the design of anti-virulence drugs would lead to an ‘evolution-proof’ defence against bacteria. Several promising candidate drugs are in clinical trials, but none are yet commercially available. One of these is Virstatin, a small-molecule compound that directly targets the synthesis of cholera toxin. In the lab, Virstatin did not alter the growth of the bacteria, but did weaken their ability to cause disease. Unfortunately, this effect was diminished in a strain of cholera bacteria carrying a single DNA mutation that rendered Virstatin ineffective. But if the drug had no effect on growth and survival, why would resistance spread? The answer diverges from the traditional view that virulence factors are not necessary for bacterial growth and survival. In a host, such as a human, bacteria use virulence factors both to cause disease and to invade tissues and cells, where they evade the immune system. This is to the pathogen’s benefit. Unfortunately, this area of bacteriology and drug design is still highly debated and in need of further research. A future option is to target how bacteria regulate the production of virulence factors. A common form of signalling in bugs that cause clinically important infections is quorum sensing – the mechanism employed by bacteria to communicate not just with each other, but also to sense signals from the environment, such as temperature. Quorum sensing controls the secretion of pathogenic proteins, the formation of biofilms, and adherence in Pseudomonas

aeruginosa – the bacterium that causes dangerous lung infections in cystic fibrosis patients. This makes it a significant target for drug development. Moreover, because quorum sensing is considered a universal factor in bacterial lifestyles, and not critical for bacterial growth, anti-virulence drugs that target sensory systems are an optimistic new line of research. This universality, however, also means that broad-spectrum inhibition of quorum sensing could target and damage our resident healthy gut bacteria. Interestingly, new research into the inhibition of quorum sensing identified a marine organism, Delisea pulchra, which secretes a compound that successfully prevents binding of the quorum sensing signal to the bacteria. This discovery illustrates that science has come full circle and again turned to products used by organisms in the natural world. Although this compound, a halogenated furanone, cannot be classed as an antibiotic, its discovery is fascinating. Scientists have successfully engineered a mimic of the compound and shown the disruption of quorum sensing and biofilm integrity in Pseudomonas. This example emphasises the progress being made and the exciting new anti-virulence prospects for the future. Therefore, decades after William Stewart, the Surgeon General of the United States, declared, “We have closed the chapter on infectious diseases, due to antibiotics”, science is still combating infection. Fortunately, new lines of research are constantly being driven forward in this host-pathogen arms race. Imogen Johnston-Menzies is a bacteriology PhD student at the Roslin Institute within the University of Edinburgh



A brief history of the self in science

Haris Haseeb discusses medical science, human anatomy and the reconceptualisation of the self

In medicine, we enquire about the science of the body. We elucidate its internal structures, detail its homeostatic rhythms and respond, where appropriate, to its dysfunction in disease. Yet what is less well understood is how, by virtue of such an enquiry, we transform not only the care of our patients, but also our own sense of personal identity, both in relation to our internal selves and our external worlds. If we consider medicine as a discipline committed to the perpetual reconstruction of our scientific knowledge of the human interior, then accompanying this enquiry is an equivalent refashioning of our philosophical understanding of both selfhood and of fundamental human experiences – specifically, illness. Reflecting on the progression of medicine, we discover that historical practices have largely informed the modern conceptualisation of the self as a fundamentally divided figure. Rooted in Plato’s Phaedo, the divided self, where the physical body was divorced entirely from the mystical soul, was the principal doctrine of embodiment prior to the 15th century. During Europe’s Renaissance, however, the emerging field of anatomy sought to verify this claim, extending what was primarily a philosophical concept into the growing realm of science. Consequently, for centuries to come, the nature of the body could not be understood without considering what gave its materiality significance: its internal ether, its anima, its soul.

Body and self would thus become an objective phenomenon Therefore, body and self would become an objective phenomenon and despite attempts at revival during the Romantic period (1800s), the subjective experience and the value of the individual was somewhat lost. Although initially appearing abstract, the consequences of objectively categorising the body are clear. Today, we are able to see vividly the homogeneity


of gender, the wholeness of ableism, the profound divisions between mental and physical healthcare, and rigorously pathologised experiences of illness. It is precisely these social phenomena, rooted in science’s preoccupation with making objective ideas of the body and soul, that arguably have been its greatest undoing. Beginning in Early Modern Europe, where new methods of scientific enquiry emerged from medieval darkness, we saw the radical development of the field of human anatomy. Da Vinci drew the Vitruvian Man, Vesalius wrote De humani corporis fabrica, and Harvey conceptualised the circulation of the blood. Crucially though, throughout Europe’s Renaissance the universe was theologically ordained, at least initially. Though bisected and bound in the perpetual struggle of sin, body and soul stood at the centre of the universe as abstracted images of God. The science of anatomy, whether that of Vesalius, Harvey or Falloppio, remained guided by the fundamental belief that the human body and the form of its internal structures corresponded to the greater image of God. In what was an age of colonial discovery, with the expeditions of both Columbus and da Gama, the growing field of anatomy, much like the explorers of the terrestrial world, set out to fiercely map the internal contours of the body. Whilst distinct from mind and soul, the body’s geographical landscapes were paradoxically unified in the wider macrocosm of the cosmos. However, as time progressed, the science of anatomy occasioned a problem from which an irreconcilable tension emerged: the anatomists of the 15th and 16th centuries would no longer be able to read the body as a source of geographical metaphors because its anatomical divisions had become too vast. Here lies the paradox of the European Renaissance. The same anatomical enquiry which set out to verify the theological division of body and mind had ultimately to reject the very sentiment it previously aspired to achieve. The existential paradigm shifted and, though it remained divided, the body as divine metaphor became subsumed by Descartes’ method, which insisted upon its reconceptualisation as a machine.

René Descartes, the French philosopher and anatomist, divorced the body entirely from the realm of its thinking subject, objectifying the self and rendering its individuality obsolete. Its corporeality became the focus of a new, intensely observational enquiry. This Cartesian (that is, belonging to Descartes) method of enquiry was rooted in a tradition of empiricism which, although in its infancy, provided the framework for all future methods of scientific discovery. The body, then, could be understood only as it was observed.

Narratives of disability were ridiculed, stories of gender dysphoria were criminalised and the experience of mental illness was largely feminised

Thus, Descartes’ reduction of the body to mechanism offered a radically reconstituted self which, though still bifurcated, appeared fundamentally distinct from the metaphorical body whose form would reflect the divine macrocosm. By virtue of its verification in empirical thought, the Cartesian body, removed from the constraints of theology, became absolute though no less free. Now, under a new method, it would be met with the rigorous demands of empirical science. The close of the Early Modern period saw the verification of the body as metaphor, its rejection, and its subsequent reinvention as a machine. Its silencing as ‘abstract’ and its proliferation as ‘empirical’ would serve as a major shift in the historical paradigm. The self would now be explored within a closed system, as an automaton with little regard for the authority of God. The recognisably modern science of the body would no longer rely on methods of philosophy to verify its claims to truth, and a growing


culture of empiricism was established. This phenomenon came to define the period of Western Enlightenment, which post-dated the European Renaissance. Throughout the Enlightenment, an era distinguished by its preoccupation with reason, science proliferated in the absence of imagination. The subjective experience, subsumed by post-Cartesian rationality, was invalidated and largely displaced by the categorical and the objective. Reflected not least in the period’s art, where nature’s sublimity was reduced to observable detail, advances in science also illustrated the radical reconceptualisation of the self as an object of reason. Considerable advancements were made to the fundamental tools of measurement: in medicine, the microscope changed dramatically, the first stethoscope emerged and the sphygmomanometer was created, each with the purpose of empirically measuring the body’s rhythms. Anatomisation was replaced by methods of empirical verification, and now not only was it inconceivable to consider the self outwith the prevailing scientific discourse, but it was in a sense impossible; the body of the Enlightenment was homogenised, and if diseased, could be restored with little disruption to its soul or the cosmos.

Over the past half century... rather than rigorously dividing the body with reason, we are instead learning to celebrate its diversity from a position of shared vulnerability

The subjective experience was lost, the thinking entity marginalised and the mind/body divide widened, and though Romantic writers attempted to reinstate the value of the subject (Shelley’s Frankenstein and Keats’ Lamia perhaps the most radical critiques of Cartesian bifurcation), neo-criticism of the 20th century swiftly rendered their efforts obsolete. In a century which reinstated the empirical rigour of Enlightenment science, the modern era of medicine (lasting approximately until the turn of the millennium), informed by paternalism, sought to categorically patholo-

Tile artwork by Sarah Atkinson

gise the illness experience by ignoring the subjective voice. At their inception, stories of illness were stifled and, as medicine became professionalised, the Cartesian self was at once extended into the realm of disease. The thinking subject, oppressed by a doctrine of paternalism which saw diseased bodies as objects of rigorous treatment, was diminished, and whilst medical science witnessed its greatest period of functional growth, this occurred without regard for its responsibility to humanity. Narratives of disability were ridiculed, stories of gender dysphoria were criminalised, and the experience of mental illness was largely feminised. Despite the historical (and in many senses present) homogeneity of embodiment, today it is generally accepted that the objective self no longer exists. Over the past half century, with the advent of narrative medicine and the growth of medical humanities, rather than rigorously dividing the body with reason, we are instead learning to celebrate its diversity from a position of shared vulnerability. And as medicine continues to explore its growing intersections with the world, and as we recognise its complexity as a science in equilibrium with human experiences, we begin to see new selves emerge –

the individual as electrochemical, as autoimmune, as cellular, and as genetic. Yet despite our progress, alterity remains feared. The election of Trump, the decision to vote for Brexit, the growing national fronts across Western Europe: monumental democratic events determined largely by an insidious fear of that which presents as ‘other’. How then can we displace this fear of alterity? As a student, I cannot write with the same authority as a politician, but I can write from our common interest in science. As ambassadors of scientific enquiry, we have a responsibility to encourage its subsets to be less of an obstacle and more of a platform from which love for the other can grow. By doing this, hopefully one day we will celebrate rather than oppress our bodies’ wondrous heterogeneity. Having completed his intercalated Honours in Medicine and Literature, Haris has now entered his fourth year of Medicine at the University of Edinburgh




The dynamic little person in your brain

Marja Karttunen explores how your brain senses the state of your body

Montreal, Canada. The year is 1934, and in an operating theatre at McGill University, the neurosurgeon Dr Wilder Penfield is performing surgery. His patient is epileptic, and Penfield is employing a groundbreaking method to identify the cells in the patient’s brain that are responsible for the disease. For this procedure to work, Dr Penfield needs his patient to stay awake. He applies a local anaesthetic and removes a small section of the skull to expose the brain. He then uses electrodes to excite different regions of the cortex, the surface of the brain, and asks the patient to report where in the body they have a response, and how it feels. In doing so, he is able to pinpoint which cells trigger the sensation of a seizure, and carefully cuts them out. This method, which became known as the Montreal Procedure, was developed by Penfield and his colleague, Herbert

Illustration by Katie Forrester


Jasper. It succeeded in curing over half of the epileptic patients who underwent it, and variations of it are still in use today. However, this radical therapy for epilepsy is not Penfield’s only legacy. As an ambitious and curious young scientist, Penfield realised that by systematically stimulating different regions of the cortex with electrodes, he could learn which area of the brain corresponded to which part of the body. Indeed, over the course of hundreds of surgeries, Penfield was able to map out how the patients’ bodies were represented in their brains. Intriguingly, the map he sketched from his observations has the shape of a grotesquely disfigured human draped across the cortex. Penfield called this map the cortical homunculus, a term that literally translates to ‘little man’. Modern research using functional magnetic resonance neuroimaging and transcranial magnetic stimulation has verified the organisation of the homunculus, and Penfield’s maps from the 1930s are still in use today, virtually unchanged.

Penfield was able to map out how the patients’ bodies were represented in their brains

The reason for the homunculus’ distorted form is that it does not represent the actual size of the body parts, but instead the density of sensory receptors in different areas. For example, the centre of the retina, known as the macula, is a


tiny spot packed full of visual receptor cells called cones, where our visual acuity is at its highest. This tiny spot is mapped onto a disproportionately large section of the visual cortex, so that input from all the cones can be processed. By contrast, our legs, while accounting for a large proportion of our actual body size, are less sensitive, and consequently occupy proportionally less cortical space.

Intriguingly, the map he sketched from his observations has the shape of a grotesquely disfigured human draped across the cortex

Further investigation by Penfield identified a second homunculus adjacent to the sensory one: the motor homunculus. This also has a distorted human shape, but this time it is tuned to areas of fine motor control. The hands, the most dexterous tools available to us, are dramatically overrepresented, while the representation of the torso is disproportionately small. Wilder Penfield’s maps have played an instrumental role in helping 20th century neuroscientists understand how the brain controls the body. Moreover, in recent years, it has become clear that the homunculus, like the brain itself, shows a high degree of plasticity. In other words, it is constantly changing in response to our environment and our interactions with it. For instance, when learning to play the violin, the area of the motor homunculus representing the hand that presses the strings grows larger than the area representing the hand that holds the bow. Similarly, the homuncular representation of the fingertips of the reading hand of Braille readers is enlarged, compared to their non-reading hand or the hands of non-Braille readers. This striking plasticity of the homunculus can be both a blessing and a curse. Phantom limb pain – the pain amputee patients experience in the removed limb – is a famous example of how homuncular adaptivity can be problematic. Neuroimaging studies have demonstrated that the part of the sensory homunculus that represented the lost limb undergoes profound reorganisation following amputation. The location previously representing the amputated arm shrinks

because it is no longer receiving input from the arm, and the neighbouring regions corresponding to other body parts invade the unused space. This can lead to unsettling sensory miswiring, whereby touching the patient’s lip, for example, provokes a sensation in the arm he or she no longer has. Similarly, pain-transmitting pathways from elsewhere in the brain, typically still hyperactivated from pre-amputation pain, can colonise the region of the homunculus previously representing the lost limb. As a consequence, this triggers the sensation of pain in a limb that isn’t there. The phenomenon of phantom limb pain was first described by a French Army physician in the 16th century, but its cause remained a mystery for centuries. Only with our understanding of how the body and its sensations are represented in the brain (in other words, the homunculus) has it been possible to understand the anatomical basis of phantom limb pain. More importantly, our understanding of the homunculus can help us develop strategies and therapies to treat it. A famous example of this kind of treatment is mirror therapy, devised by the American neuroscientist Vilayanur Ramachandran. This therapy employs a lidless box, divided into two compartments by a mirror propped vertically in the middle. The amputee patient places their intact hand into one compartment and looks at the mirror. This shows a reflection of the intact hand, but the brain perceives the reflection to be the missing hand. The patient can then perform movements with their intact hand, which the brain interprets as being performed by the lost hand. The rationale is that hoodwinking the brain into thinking the lost arm is active will restore activity to the area of the homunculus dedicated to the lost arm. Doing so dispels the pain-transmitting pathways and representations of other body parts from that region. Since Ramachandran’s initial publication, several other studies have reported that patients experienced relief from this procedure. However, more clinical research is needed before mirror therapy can be formally established as a treatment for phantom limb pain. Another manipulation of the homunculus to help people cope with amputation is the ‘rubber hand illusion’. In this therapy, a patient watches as a rubber hand is repeatedly stroked in synchrony with the stroking of their own hand, which is kept out of sight. The patients consistently report that the rubber hand ‘feels like their own hand’, implying that

their sensory homunculus has adopted the rubber hand as part of the body. It is not difficult to appreciate how such swift incorporation into the homunculus could benefit patients with prosthetic limbs. Some developing therapies for chronic pain conditions also make use of the homunculus. Intriguingly, much like in phantom limb pain, extensive reorganisation of the brain’s sensory homunculus and pain pathways is seen in patients with chronic pain. In his book The Sensitive Nervous System, physiotherapist David Butler suggests that repeatedly performing non-threatening body movements in a relaxed setting can help ‘reshape’ the distorted homunculus and alleviate the sensation of chronic pain. In this way, promoting the adaptive plasticity of the homunculus can play a key role in treating chronic pain. Undoubtedly, a great deal of fine-tuning is required before such strategies for treating phantom limb and chronic pain states can be routinely offered to patients. However, it is heartening to think that in a market dominated by drugs, fundamental neuroscience concepts like the homunculus are helping us to develop different, non-pharmacological treatments.

This striking plasticity of the homunculus can be both a blessing and a curse

Although its discovery was a byproduct of epilepsy surgeries, Wilder Penfield’s homunculus has provided a core conceptual framework for understanding how the brain represents and controls the body. Subsequent exploration has shown that the homunculus has remarkable plasticity, which can result in detrimental conditions such as phantom limb or chronic pain. On the flipside however, the very same plasticity can also be harnessed to reverse those pain states. The homunculus is a captivating example of how an unintended scientific finding can yield unexpected and far-reaching benefits. Marja Karttunen has recently finished her PhD in Neuroscience working on a zebrafish model of myelin damage and repair




Where to draw the line?

Vicky Ware explores the grey area between health treatment and doping in performance sport

Imagine you’re 19 years old. You’ve been told you have a talent for cycling so, rather than stay in school like everyone else, you make a career out of racing. Imagine you’re completely immersed in this world of cycling – where no one seems to care about much other than your ability to push the pedals harder than your competitors. A world where everyone is obsessed with how much power they can produce, how much they weigh, and with finding every little detail that could be the difference between being at the top of the sport and being someone who produces 2% less speed – one of the riders no one has heard of.

When an athlete is doing more than they can recover from for an extended period, the immune and endocrine (hormonal) systems become dysfunctional

Now, imagine things aren’t working out quite as you’d hoped. The people who matter – the ones who run the teams and coach the riders – are starting to pay more attention to your teammate than to you. Loyalties are weak in this world, where your speed is your worth. Imagine you’re 23 and still haven’t achieved the big result you need to secure a contract that will pay enough to feed you. Imagine someone offers a solution that will make all your problems go away: a simple injection that will make you faster than everyone else. You’ve been taking supplements of vitamins, minerals and who knows what else all this time anyway – it’s part of the culture of the sport, the obsession with engineering each nutritional detail to improve recovery. But they’ve always been on one side of the line before – the legal side, the side that won’t get you banned. You have no other skills, nothing else you can do. Cycling is all you know. Do you take it? Unfortunately, for many cyclists, the


answer was ‘yes’. Improved methods for the detection of performance-enhancing drugs in blood have led to many more athletes being caught, which has lessened the doping culture surrounding professional cycling. As more athletes are caught, fewer try illicit substances designed to make them faster. If fewer athletes dope, fewer feel they must also cheat to stand a chance of reaching the top of the sport. Things have improved. And aside from ensuring a fair playing field, this is crucial for athletes’ health. Horror stories abound of people who have died due to doping. Tommy Simpson, one of Britain’s first super-star cyclists in the 1960s, collapsed and died on a mountain during the Tour de France, likely as a result of the blood-thickening drugs and amphetamines he was taking. Riccardo Riccò almost died of blood poisoning in 2011, and later admitted to having removed his own blood and stored it in his kitchen fridge for months before transfusing it back into his body. This isn’t a smart thing to do. Blood transfusions (when done correctly) improve performance by increasing the number of red blood cells in the circulation. This boosts the amount of oxygen reaching tired muscles, allowing you to train more and race faster. Transfusing your own blood also makes it more difficult to be caught doping, because the biological markers on the blood cells are your own. When it comes to certain drugs – those designed to stop you feeling the pain, to give you seemingly endless energy or to make your blood carry more oxygen than your competitors’ – the line between clean athlete and doper is clear. Substances defined as performance-enhancing are listed by the World Anti-Doping Agency (WADA) and any athlete found to be taking them (via blood or urine sampling) is subject to whatever bans and fines are ruled appropriate for that misdemeanour. The problem in endurance sports is that, in some areas, this line is less clear. Sir Bradley Wiggins, a multi-Olympic and Tour de France champion, has arguably been a pioneer in bringing bike racing to the mainstream in Britain. He was recently found to have been taking triamcinolone acetonide, a corticosteroid drug

which he claims was to treat his asthma, prior to most of the major wins of his cycling career. Another Tour de France winner, Chris Froome, has taken a corticosteroid called prednisolone at a dose normally reserved for stopping organ rejection in people who have had a transplant. One issue with this is that it is unlikely that an asthma sufferer requiring such a high dose would be capable of becoming the world's fastest cyclist. The bigger issue is that corticosteroids have been shown to enhance endurance performance regardless of whether the person taking them has asthma.

Consider, for argument's sake, that Wiggins and Froome were suffering from asthma and the drug was for this condition. What if the cause of Wiggins' asthma was the amount of training he was doing? Should he still be able to take a drug to cure the asthma, or is this tantamount to artificially enabling his body to do more training than it is naturally capable of? By citing asthma as the reason for taking these drugs, athletes like Wiggins remain on the legal side of the doping fence, as long as they have a doctor prescribing the drug for therapeutic use. Whether or not they're on the morally defensible side is another story.

Endurance sport in humans can lead to allergic sensitisation and asthma. When an athlete is doing more than they can recover from for an extended period, the immune and endocrine (hormonal) systems become dysfunctional. Overtraining leads to chronic inflammation, which can lead to a dysfunctional immune response over time. One mechanism for this is via cortisol, the so-called 'stress hormone'. When you experience stress, cortisol spikes to help the body release the energy stores needed to fight or flee. Cortisol is also an anti-inflammatory, which may be why stressed people are more likely to become ill – their immune system is slightly suppressed. In the short term, your body can cope with this. But, if it goes on for too long, the body becomes unable to continue producing the extreme levels of cortisol required. Eventually these levels dip, and the body is unable to make its natural anti-inflammatory. This is one way the immune system becomes dysfunctional in overtrained


endurance athletes. Without cortisol, inflammation can lead to sensitisation to allergens such as pollen. A number of professional cyclists cite their allergies to pollen as the reason they cannot perform well during the early season spring races. Wiggins and fellow Grand Tour racers Chris Froome, Alberto Contador, Dan Martin and Chris Horner all fall into this category – a suspiciously large percentage of the world's top bike racers. Corticosteroids, such as the drug Wiggins was taking, mimic cortisol in the body and are strongly anti-inflammatory. It makes sense that they would help people who have exhausted their body to the point where it no longer produces the cortisol they need.

Here's where it's important to get a clear idea of how athletes, especially endurance athletes, dope. It's not quite as simple as taking a drug and being a superhero on a bike. Most drugs endurance athletes take to enhance their performance do so by allowing them to train more. Yes, that's right, they actually want to be able to train more. One of the biggest limitations to becoming a world-class endurance athlete is the resilience of your body to training. How well and how quickly you recover from training impacts how much

you are able to train. After many years of training, shorter recovery times lead to big differences in speed compared to someone who recovers more slowly.

Most drugs that endurance athletes take, such as human growth hormone, allow them to recover faster from exercise, meaning they can train more without getting injured or exhausted

How fast you recover depends on a number of factors, including how much you rest when you’re not training and how well you eat – your body can’t build new muscle and red blood cells if you don’t give it the building blocks to do so. Most drugs that endurance athletes take, such as human growth hormone, allow them to recover faster from exercise, meaning they can train more

without getting injured or exhausted. To achieve a feat like winning the Tour de France, an athlete must be teetering on the brink of overtraining. The people who succeed are the ones who manage to get closest to this knife edge without tipping over into illness and injury. By taking drugs that alleviate the inflammation and other symptoms associated with overtraining, these cyclists are not on a level playing field with those who keep their training to an amount that, though still extreme, their bodies can recover from naturally. Aside from the fact that this is ultimately bad for long-term health (which anti-doping rules are meant to protect), this means Wiggins is in fact doping in the exact way endurance athletes dope. He's compensating for overtraining by taking drugs that allow him to keep performing despite being exhausted.

If the sport of cycling is to remain clean and fair, WADA must find a way to close these blurry-edged loopholes, to protect the integrity of the sport and the health of those who participate in it.

Vicky Ware is an Edinburgh University graduate and cyclist

Image from Wikimedia Commons



regulars: politics

Brave New World: Trumping evidence and information in politics

Selene Jarrett discusses the distrust and disregard of science in current populist politics

"We live in a society absolutely dependent on science and technology and yet have cleverly arranged things so almost no one understands science and technology. That's a clear prescription for disaster." - Carl Sagan

The reasons for Trump's election are multifaceted. There seemed to be a perfect storm of circumstances that led to his election, which included Trump's perceived transparency, racial tensions, sexism, a generational divide, Clinton's unpopularity, the media's coverage of Trump, and most significantly, a disenfranchised population. This is not unlike Brexit, where a significant proportion of the UK's population felt abandoned by their politicians and exploited by external powers. Those struggling to survive tend to vote for the person or party promising the biggest changes to their circumstances. It has become apparent, however, how little facts and evidence influenced the voting booth in both the US election and the EU referendum. This is especially worrying when you consider Trump and other populists' opinions on important issues such as climate change and medical research.

Global temperatures have risen drastically since the 1970s, with the majority of this heat being absorbed into the oceans. This has caused ice sheets to melt

Image from Wikimedia Commons


in Greenland and the Antarctic, glaciers to recede in the Alps, Himalayas, and Alaska, and snow to disappear in the Northern Hemisphere. As a result, global sea levels have risen by 17 centimetres in the last century. Additionally, the acidity of ocean water has increased by 30% due to the ocean's increased absorption of carbon dioxide. In April 2016, US Secretary of State John Kerry, along with 174 other world representatives, signed the Paris Climate Agreement, which aims to mitigate the emission of greenhouse gases to combat climate change. However, Donald Trump rejects this scientific consensus, showing strong scepticism of anthropogenic (human-driven) climate change. As such, he wishes to withdraw the US from the aforementioned Paris Climate Agreement, as well as eliminate the US Environmental Protection Agency. This stems from, at best, wilful ignorance and, at worst, a belief that climate change is a hoax perpetuated to restrict US manufacturing and business.

Further distancing himself from rational observation, Trump has expressed his conviction that vaccines cause autism. In 1998, Andrew Wakefield published a research paper in The Lancet claiming that the measles, mumps, and rubella (MMR) vaccine was associated with autism and bowel disease. After other researchers failed to reproduce his findings, Wakefield's paper was discredited and retracted by The Lancet. Wakefield himself was removed from the UK medical register. Since then, there has been no scientific evidence supporting an association between autism and vaccines, as confirmed by the Centers for Disease Control and Prevention. However, Donald Trump has yet again chosen to ignore scientific consensus and has already appointed Robert F. Kennedy Jr, an outspoken opponent of vaccine administration, to chair a commission on vaccination safety and scientific integrity.

Additionally, Trump has stated that he would like to remove federal funding of the Planned Parenthood reproductive

health service. This is not a surprising statement, as Trump's vice president, Mike Pence, is a conservative evangelical Catholic who already cut funding for Planned Parenthood in his own state of Indiana. Pence has also signed an anti-abortion law which criminalises foetal tissue collection, requires women to view the foetal ultrasound before receiving an abortion, and further requires them to organise a funeral for the foetus after an abortion. The ban on foetal tissue collection has led embryonic and developmental scientists, mainly working in the field of regenerative medicine, to worry whether support and funding for their research will be reduced or removed altogether.

When considering the issues mentioned, it is apparent that there is a clear disregard of science in some factions of populist politics. Our world is facing many issues requiring the cooperation of world leaders with academics and experts across nations and scientific fields. However, there is a discernible distrust of these 'intellectual elites'. Michael Gove famously stated that "People in this country [UK] have had enough of experts." And he was right. Leading economists, such as John Van Reenen of the London School of Economics, stated their reservations about leaving the EU. Nevertheless, 52% of voters did not heed their advice. With general elections in France and Germany in 2017, we may see reiterations of this new wave of politics, as many have grown tired of the status quo. As such, science communication and public outreach are vital to promote the work carried out by researchers in different fields and explain how science will directly impact and improve people's lives. This may influence how the average person votes in future elections and referendums.

Selene Jarrett is a fourth year PhD student in Developmental Biology


regulars: technology

From the bench to the field: nanopore sequencing

Vesa Qarkxaxhija explores a natural sequencing force and its future

The limitations of available technology, as opposed to the collective brainpower of researchers, continue to be one of the greatest hindrances to the progression of scientific research. There is a constant drive to develop faster and cheaper technologies in order to make it easier and more accessible for scientists to carry out their research. Nanopores are one instance of natural phenomena that have been harnessed to design a tool for scientific progression; in this case, for genetic analysis. This has been made possible by the serendipitous discovery of a particular organism's attempt at survival: Staphylococcus aureus bacteria have evolved an elegant method of nutrient acquisition, which John J. Kasianowicz saw as potentially exploitable.

S. aureus secretes proteins that bind to the outer membranes of cells. Upon binding, a water-filled channel forms, which accommodates uninhibited permeation of small organic molecules and ions. This allows vital molecules such as ATP to diffuse into the bacteria. At the same time, dissipation of ionic gradients results in cell lysis due to irreversible osmotic swelling of the host cell. The transmembrane pore was observed to be able to conduct relatively large linear macromolecules of up to tens of kilodaltons, such as DNA or RNA strands, with electrochemical gradients as the driving force. The chemical structure of individual molecules can be detected by discriminating between the resultant ionic currents. These can be interpreted to identify sequences of RNA, DNA, homopolymers and even segments of purine and pyrimidine nucleotides (differentiating between A, C, T and G), giving an accurate outline of a genetic

sequence. Essentially, the ion channel dynamics can be manipulated so as to output electrical 'noise' that provides information about the structures that pass through.

Although this research has shown great potential, leading nanopore technology companies have neither described the foundations of their work in detail, nor disclosed whether they have applied a refined version of this specific pore. As with most revolutionary technologies, their trade secrets are well kept. However, what has been publicly released has hinted at the use of similar systems. The nanopore is set into an electrically resistant membrane and a voltage is applied across the membrane, causing an ionic current to pass through the nanopore. Once a molecule passes through the pore or near its aperture, a disruption is detected in the current. These disruptions are measured and the given sample is thus sequenced.
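To make the idea concrete, here is a deliberately simplified sketch in Python of how measured current disruptions might be translated into bases. It is purely illustrative and not based on any company's actual basecalling software: the current levels, the one-base-at-a-time matching and the function names are all invented for this example, and real basecallers, which work on groups of bases and use sophisticated statistical models, are far more complex.

```python
# Toy basecalling sketch: each base is assumed to perturb the ionic current
# by a characteristic amount, and the sequence is recovered by matching each
# measured disruption to the closest reference level. The levels below are
# invented purely for illustration.

REFERENCE_LEVELS = {  # hypothetical mean current disruptions, in picoamps
    "A": 54.0,
    "C": 49.5,
    "G": 61.2,
    "T": 45.8,
}

def call_base(measured_pa):
    """Return the base whose reference level is closest to the measurement."""
    return min(REFERENCE_LEVELS, key=lambda base: abs(REFERENCE_LEVELS[base] - measured_pa))

def call_sequence(disruptions):
    """Translate a series of current disruptions into a base sequence."""
    return "".join(call_base(pa) for pa in disruptions)

# Example: a short, made-up trace of current disruptions
print(call_sequence([54.1, 45.9, 61.0, 49.4, 53.8]))  # -> "ATGCA"
```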

Leading nanopore technology companies have neither described the foundations of their work in detail

One company, Oxford Nanopore Technologies, has designed and made available desktop and portable technology that can fit into the palm of your hand. Their MinION device can sequence DNA and RNA in real time, allowing users to run samples until they have the data they need. This is in contrast to other current

technology, where samples are processed in their entirety – for large samples, this can be enormously time-consuming and a hindrance to research turnover. MinION's smaller sibling – the innovative SmidgION – has the potential to be even more of a game-changer, allowing researchers to take the science into the field through its unassuming pocket size and smartphone compatibility.

The potential applications of this pioneering technology are far-reaching. It can facilitate on-site analysis of environmental samples for microbiology, wildlife, food or agriculture. In a medical context, it can bring the newly emerging field of personalised genomics into the doctor's office or allow for effective monitoring of outbreaks of infectious disease in areas which lack accessible facilities, enabling quicker responses from medical aid. These handheld devices will assist in population genetics projects, such as the African Genome Variation Project, which is currently limited by the need to transport samples legally across borders to reach laboratory facilities. This technology would allow for field-based analysis of remote areas which otherwise would not have been considered due to the cumbersome logistics of acquiring results.

By reducing the time and cost of generating results, portable nanopore technology means that soon people will be able to sequence on their phones for a fraction of what is paid now for in-house sequencing. However, there is some ethical concern: given the underdeveloped boundaries of the law concerning genetic data, one cannot help but wonder if this advance has come too soon. One thing that we can be certain of, however, is that progress in genetics will continue exponentially due to this groundbreaking biotechnology.

Vesa Qarkaxhija is an MSc student in Genetics and Molecular Medicine

Images from Forbes



regulars: sciatribe

Learn actually

Angus Lowe explains why he believes emphasis should not be placed on the employability of university degrees

The predominant narrative among the generation of post-secondary students today tends to frame those pursuing science, technology, engineering, or maths (STEM) degrees in a more positive light, both in terms of employability and intellectual prestige. As a consequence, many students who would otherwise find these subjects unappealing are allured by the notion of contributing to our modern economy. I believe that this trend represents a failure to place more emphasis on learning for reasons that are not inspired by success or prestige.

The usage of the acronym STEM does not originate amongst those to whom it refers. Rather, it is a term coined by businessmen and economists to address a supply and demand imbalance within job markets. In other words, the technical fields of STEM are related by the common denominator of employability, not by the content of the subjects themselves. Take two STEM subjects like medicine and civil engineering: it's quite hard to find anything in common between them besides being in demand. This is testament to the job-oriented nature of the term STEM. I believe its usage reflects the tendency of parents, universities, secondary schools, and students in this generation to see securing a job as the primary motivation behind completing a degree. And, from what I've mentioned so far, there isn't anything inherently wrong with this being the prevailing opinion. However, I now hope to explore a few reasons why this attitude can be detrimental.

Firstly, choosing a field of study out of fear for the future leads to, at best, complacency with that choice. I don't think that 'complacent' should be a word used to describe someone dedicating four years of their life to a pursuit of knowledge, no matter the reward promised at the other end. Of course my belief and overriding message of "if you don't love something you shouldn't do it" might seem like a trope of commencement speeches, or at the very least romanticised. However, an often neglected instance of idealistic beliefs is the one that you will end up with a high-paying and rewarding job that you love as long as you put in the work during your studies.


This is simply not even close to guaranteed, even if you study a subject deemed 'highly employable', like software engineering: there is no way to accurately predict the future, and the possibility of struggling to find a job or being underemployed or underpaid can never be discounted. Students should not be systematically told to forgo their genuine interests for a slightly higher chance of being employed, because the consequent two-fold risk of complacency – in occupation and during studies – is too great.

The technical fields of STEM are related by the common denominator of employability

Another reason why students should not be motivated primarily by job prospects is that this promise does not replace genuine curiosity in a field. The greatest advancements in every subject, both scientific and non-scientific, are often made by those who require no external motivation to do their work. If we start to push people away from what they enjoy doing and toward something with

Image from Wikimedia Commons

supposedly improved career prospects, then we risk losing the most passionate scientists and artists to the fields emphasised by the modern job markets and STEM. Nevertheless, I recognise that the current job markets undeniably favour those graduating with STEM degrees: I'm just hopeful that this imbalance in the supply and demand of scientists in the world will resolve if we start emphasising how cool science and its content can be, as opposed to emphasising which degrees are required to achieve certain careers. In this way, I think we can get more, and more passionate, scientists.

It is my strong belief that the endgame of learning is learning more still, not the career you happen to land along the way. The current job markets may favour graduates with STEM degrees, but they are not impenetrable to humanities graduates, and the world needs people from all disciplines. I would rather have a slightly higher chance of unemployment than a guaranteed job (and there is never a guarantee) doing something I find uninteresting. We should start shifting focus away from STEM and employability in our modern economy and toward learning out of curiosity. If we do this well, I think the problem of the shortage of qualified STEM employees will solve itself.

Angus Lowe is a second-year computer science and physics student


regulars: interview

A cuppa with…Professor Randy Schekman

Alessandra Dillenburg chats with a Nobel prize winner about succeeding in science, Trump's [scientific] America, and making science accessible

Last November, Professor Randy Schekman was invited to the University of Edinburgh to receive an honorary degree of Doctor of Science and give a talk on his current research. In collaboration with James Rothman and Thomas Südhof, Professor Schekman was awarded the 2013 Nobel Prize in Physiology or Medicine for discoveries related to vesicular trafficking – more commonly known as the transport system of our cells. During his visit to Edinburgh, I managed to secure 30 minutes of Professor Schekman's time for an interview.

Alessandra Dillenburg (AD): You're no stranger to this city! Can you tell me about your first time in Edinburgh as an undergraduate exchange student?

Randy Schekman (RS): Well, I had never been out of the US, so it was quite a shock. I spent a lot of years in California, so coming here was quite a…chilling experience! I rented a room in a family home and I could never heat the room – there was a metered gas heater set into the fireplace, and I kept feeding shillings into it and it would just never heat up! Since I worked in the Darwin building [at King's Buildings], I used to spend all my time here and just shower in the basement of the building instead of using my frigid bathroom. But I was an unusual student; I spent much of my time here working in the lab. In fact, when I arrived, my supervisor Bill Hayes mistakenly believed I was a sabbatical visitor from UCLA [University of California, Los Angeles], so I was assigned my own lab space and office. Thankfully, they quickly realised that's not who I was and assigned me to a lecturer who was patient enough to teach me a little bacterial genetics. What I learned here greatly influenced my future directions, and I had a wonderful experience.

AD: How do you think a Trump presidency will affect American scientific research?

RS: The honest answer is I don't know. If you rely on the few things that he's said about science, he doesn't seem to have any appreciation for it.

Image courtesy of eLab

For example, on the issue of climate change, he's publicly said he believes it's a hoax perpetrated by the Chinese. He's either deliberately or woefully ignorant on that and, one fears, likewise on other aspects of science policy. Who knows what position he has about biomedical funding – he's talked about increasing funding for the military, to an extent that would require substantial cuts to the budget for discretionary spending, which could hit biomedical science pretty severely. Funds available for basic science could be seriously threatened.

AD: Speaking of funding, what are your thoughts on funding for basic science without a direct human application?

RS: That's a trend that I object to, obviously…and even in the best of times, in the past 8 years there's been a tendency on the part of the NIH [National Institutes of Health] to focus research funding on those projects that have medical relevance. That would have cut out funding for me at the outset of my career. And yet what I did with yeast genetics ended up having relevance to human disease in ways that you can't predict. I could cite hundreds of examples where some pure basic science has led directly to clinical applications, and we starve that at our own peril. I take a very dim view of these efforts to force people to focus exclusively on what could be medically applied.

AD: You have well-known views on open access science. Do you think journals are doing enough to make science more accessible?

RS: It's good that journals are moving in the direction of open access – but I have a rather cynical view of the way Nature has gone about it [with open access Scientific Reports]. It's a profit center for them. It's not that I have anything against commerce – I'm a capitalist, that's fine – if they make a better product, more power to them. But I don't consider it a better product, because the flagship journal Nature is not open access. When they honestly make everything openly accessible then they will have moved in the right direction, but I don't see any step in that direction just yet.

AD: Any advice for a young scientist?

RS: Don't do what the crowd is doing. When you think about what you'd like to do, think about something different. And don't continue to do the same thing from one training period to another. If you're successful as a grad student and you want to continue to a post doc, choose something else. Choose a different problem or a different approach. You have to learn some other discipline and how another field develops so that if you have the opportunity to later have your own career, you can blend these different experiences and have your own way of doing things.

Alessandra Dillenburg is a second year PhD student in Neuroscience


regulars: innovation

Taking the passenger seat

Simone Eizagirre explores the benefits and challenges that come from the development of automated car technology

Self-driving cars have enjoyed a long history in science fiction. Over half a century ago, Isaac Asimov envisaged a universe where the only cars allowed on the streets were those that communicated exclusively with each other, without human drivers. Formal research into the field began in the late 1980s with Carnegie Mellon University's Navlab and Mercedes-Benz's Eureka Prometheus projects. Remarkably, if developments in the field continue at their current rates, Asimov's world will become our reality and we could be looking at our roads being fully automated by 2030. It's not just major automobile manufacturing companies (Mercedes, Nissan, and Tesla to name a few) that are working on this technology: Uber announced a partnership with Carnegie Mellon last February and Google has been working on its autonomous car development projects (now called "Waymo") since 2009.

It's no wonder that research into the field has become so popular, as the supposed advantages of full automation are clear: driverless cars are safer, maximise road efficiency and increase mobility. According to the National Motor Vehicle Crash Causation Survey from 2008, human error is the primary cause behind around 93% of crashes. Consider how many accidents are caused by careless human behaviour: 1 in 4 crashes in the United States is caused by texting while driving, and alcohol-impaired drivers cause 28 deaths daily in the US alone. When humans are driving, our safety depends not only on our own responses to changes in the road, but also on other people's ability to drive responsibly.

Self-driving vehicles, however, are able to communicate with other cars on the road, forming a giant network of vehicles that are aware of how each member might act in a given situation. This real-time access to data from all other vehicles results in an omniscient map of their positions, speeds and directions of travel. Human reaction time (typically 0.68 seconds) is no longer an issue as the car is already aware of the future intentions of the vehicles around it. In addition, the inbuilt sensory system of the vehicles, complete with radar, lidar, cameras and other equipment, makes the process of responding to stimuli much faster.


Cars will no longer unexpectedly swerve into your lane, leaving you with a split second to respond, and no driver will have to brake suddenly metres away from a red light that they hadn't previously noticed. Vehicles will be able to predict what the road will look like in the future and make faster and safer driving decisions. Just last December, a video featuring dashcam footage from a Tesla car was released on Twitter, appearing to show the autopilot warning of a potential impact and applying the emergency brakes moments before the vehicle ahead crashed into the one in front of it.
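As a purely illustrative sketch of the idea (not any manufacturer's actual protocol), the kind of state message vehicles might broadcast to one another, and how a receiving car could act on it before a hazard is even visible, could look something like this in Python. Every field name and threshold here is invented for the example.

```python
# Illustrative vehicle-to-vehicle sketch: cars share position, speed and
# braking state, and a following car uses the broadcasts to start slowing
# before it can see the problem itself. All values are hypothetical.

from dataclasses import dataclass

@dataclass
class VehicleState:
    vehicle_id: str
    position_m: float   # distance along the road, in metres
    speed_mps: float    # speed, in metres per second
    braking: bool       # whether the vehicle is currently braking hard

def should_pre_brake(me, others, lookahead_m=150.0):
    """Decide whether to begin braking based on broadcast state from cars ahead,
    rather than waiting to see their brake lights."""
    for other in others:
        gap = other.position_m - me.position_m
        slowing_sharply = other.braking or other.speed_mps < me.speed_mps * 0.5
        if 0 < gap < lookahead_m and slowing_sharply:
            return True
    return False

me = VehicleState("car-42", position_m=0.0, speed_mps=30.0, braking=False)
ahead = [VehicleState("car-7", position_m=80.0, speed_mps=5.0, braking=True)]
print(should_pre_brake(me, ahead))  # True: start slowing before the hazard is visible
```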

We could be looking at our roads being full of automated cars by 2030

Fewer accidents and increased awareness of the present and future conditions of the road would allow driving to become much more efficient. Routes can be optimised through the use of live-access data, which decreases commute time and, in turn, leads to less fuel consumption and, as accidents decrease, increased road capacity. However, a recent study by the University of Leeds, the University of Washington and the Oak Ridge National Laboratory suggested that while these vehicles have the potential to significantly reduce energy consumption, the energy required to power their inbuilt operating systems could actually lead to consumption levels rising. Developing vehicles powered by electricity, biofuels and other alternatives to fossil fuels will be one way to reduce their potential environmental impact.

Furthermore, once cars become fully autonomous, every human becomes a passenger. Automated vehicles could, in this way, increase the mobility of the young and elderly, as well as people with hearing or visual difficulties, physical disabilities, medical conditions or other factors that would prevent them from driving. Think of the social benefits that come from empowering individuals who might

currently find it difficult to visit their loved ones, making it easier for them to engage with their local and wider communities.

Although driverless vehicles may theoretically be in the best position to make decisions based on their holistic knowledge of road activity, these benefits only exist if this knowledge represents a complete and accurate picture. They receive all their information from sensory systems, interpret multiple sources of data, and determine the best course of action. Therefore one incorrect step (such as a failing sensor, extreme weather conditions limiting sensor range, or flaws in the control system) could collapse the entire process, leading to potentially dangerous failures in the complex algorithms that dictate the car's movement. Last year there was a crash between a Tesla Model S electric car in autopilot mode and a tractor-trailer, which the company attributed to a failure on the part of both the driver and the autopilot to 'notice the white side of the tractor-trailer against a brightly lit sky'. On the other hand, the likelihood of these software failures could be reduced by increasing and combining the techniques used to collect information, making it harder for control systems to overlook potential hazards. It is nonetheless terrifying that self-driving cars, while still largely prototypes, are already capable of such failures.

If, and when, we reach a point of full automation – as predicted by Asimov – humans might not have the ability to override a vehicle's decisions in the face of a crash. What does this mean in terms of liability? There is a wide gap in legislation that would need to be thoroughly discussed to determine the accountability of individuals in these situations. Placing the liability on the motoring companies themselves, for example, could lead to higher incentives to develop safer technologies and thoroughly invest in reducing potential risks. In a highly competitive field such as this one, accountability mechanisms must be introduced to ensure that both software developers and motoring companies are encouraged to address these concerns rigorously before vehicles enter the market.

Another issue that comes from vehicles accessing a giant network of data is



Image from Pexels

that of privacy and security. In order for driverless vehicles to have the most accurate picture of the road around them, communication with other cars through vehicle-to-vehicle wireless transmissions is inevitable. However, without the proper security measures, this type of wireless system could be vulnerable to cyberattacks that could take complete control over the car: open and close windows, accelerate, change travel direction, or stop the vehicle in the middle of a busy road. Furthermore, breaching these wireless connections could also give access to the passengers' private information.

Consider how integrated our digital lives already are with our vehicles; you might use a Bluetooth connection to play music from a handheld device on the car stereo or synchronise your mobile phone to your GPS through applications such as Google Maps. Moreover, you may use the car's own integrated wireless connection to send sensitive data such as emails, or even to make bank transfers. In addition, increased dependency on online communications means that your destination or journey duration, or perhaps even the number of passengers in the car at a given time, could be stored in the vehicle's memory. The question of how long this data should be stored for, and who has access to it, is another consideration. In the coming decades, while the technology for vehicle automation is being developed, solutions to

cybersecurity and data privacy concerns will also need to be found.

Incomplete collection of this sensory data or flaws in the control system software could lead to dangerous decisions

Finally come the ethical decisions that a driver makes when faced with an unavoidable accident. You're driving along when suddenly the brakes fail to respond or an unwary group of pedestrians runs across the road: do you crash into the pedestrians or swerve into a wall, potentially killing yourself? Today, each individual driver is accountable for their own decisions, and can make them following their own ethical code. The morality of life is an incredibly subjective matter: hardcore believers in utilitarian ethics might be prepared to sacrifice themselves to save more people, but others might not be okay with this notion at all. You might, for example, make very different decisions if you are driving with children or on your own. The problem with automated vehicles, however, is that how they

respond to a situation must be coded into their control software in advance, rather than being a decision made in the moment, based on each situation. For this to be possible, we would have to achieve the unlikely feat of reaching a general societal consensus on which moral philosophies should be applied. This decision could potentially affect whether full automation will ever be accepted by society: would you be prepared to give up all control over your vehicle, knowing that it might decide to sacrifice you? If we introduce the nuances of our moral philosophies into the software system, thereby allowing the passengers to determine whether they want to drive in 'utilitarian' or 'save the passenger' mode before each journey, will the individual owners be accountable for the decisions their vehicles make in these extreme situations?

It is clear that self-driving cars have the potential to make our roads safer and more efficient, but as we come closer to developing the necessary technology, it will be important to defend user privacy and debate the ethical concerns that arise from surrendering direct control of our vehicles.

Simone Eizagirre is a third-year Chemical Physics student



regulars: letters

Dr Hypothesis

EUSci's resident brainiac answers your questions

Dear Dr Hypothesis,

The internet is full of people who are preparing for a zombie apocalypse. How likely is a "zombie virus" to emerge in the near future?

Worried Walter

Dear Worried Walter,

Some people believe that researchers behind barred lab doors are already working on secret biological weapons that could turn us into undead monsters with an insatiable hunger for brains. However, it is unlikely that any known pathogens will cause a zombie apocalypse as seen in popular TV shows or movies anytime soon.

To discuss this subject in detail, we first need to define what constitutes a zombie 'virus'. It has to infect the host's nervous system and alter its behaviour or control its mind (for example, elicit a craving for brains and/or flesh), while keeping the host alive in order to infect as many other individuals as possible. Stripping down the concept of a 'zombie virus' to these factors, there are some candidates worth investigating.

The aim of any pathogen, or any organism, is to ensure its reproductive success. Certain pathogens, including bacteria and fungi, have adapted to alter their host's behaviour in order to create more favourable conditions for themselves, such as gaining more nutrients or seeking out certain environmental conditions. Recent studies have provided evidence that microbes in our gut release chemicals that interact with our nervous system and can cause cravings for certain foods (for example, junk food) which contain the nutrients that the microbes require to survive. Although it is unclear to what extent this affects our diet, this may be classified as a subtle form of mind control.

The concept of mind-controlling the host is not unknown to pathogens. A more deadly form of host mind control is exerted by a fungus called Ophiocordyceps unilateralis. It only grows in very specific locations in the rainforest and requires a certain degree of humidity to sporulate. In order to reach an ideal spot for growth, O. unilateralis infects ants and then hijacks their bodies for transport to that spot. This mind-controlling fungus forces the ants to leave their nest and bite down on a leaf, in what is described as a 'death grip', leading to the ants' death. A fungal stalk develops through the dead ant and a new fungus grows, ready to infect more ants. How the fungus manages to control the ants' minds is yet to be unravelled.

There is another candidate for a potential zombie virus: rabies. Fortunately, rabies is rare these days and is generally associated with 'rabid' animals foaming at the mouth.


Image from Wikimedia Commons

The rabies virus spreads through bites, infects the nervous system, and then travels retrogradely, from nerves in your limbs through the spinal cord until it finally reaches its target, the brain. Once the rabies virus enters the brain, it causes strong behavioural changes, such as rage, dizziness, and an aversion to water. After the onset of these behavioural changes, patients usually die within a week, with no curative treatment at this stage. Only a handful of patients in history have survived after exhibiting symptoms of rabies. The incubation period, defined as the time from being bitten to exhibiting behavioural symptoms, is relatively long and can range from months to years, which does not make rabies a particularly fast-transmitting virus. In addition to that, the rabies virus kills its host rapidly after the onset of symptoms.

Unlike bacteria, fungi, or parasites, viruses are not living organisms. They require the host's cellular equipment to survive and replicate, which is where rabies meets a serious limitation on its way to becoming a zombie virus. If the host is dead, the virus will no longer be able to survive. If rabies were to mutate, allowing it to spread through the nervous system more rapidly, keep its host alive for longer, or become 'airborne' (transmissible via small particles in the air), this would make rabies a very dangerous candidate for a worldwide zombie epidemic!

So while it seems rather unlikely that the human population will have to encounter a zombie apocalypse as displayed in movies, the concept of mind-controlling the host is not unknown to pathogens. However, genetically engineering a pathogen capable of eliciting profound changes in host behaviour, in addition to fast transmission between individuals, is far more complex than Hollywood would lead you to believe, since we do not know exactly how pathogens control their hosts. Hopefully that is a comfort to us all.

Dr Hypothesis' alter ego is Chiara Herzog, a 2nd year neuroscience PhD student at the Centre for Neuroregeneration


regulars: reviews

Review: The Gods Themselves by Isaac Asimov

Isaac Asimov's The Gods Themselves offers a truly unique perspective on scientific accountability, the consequence of inaction, and Earth's place in the universe. This is no small feat considering it was written over 40 years ago. The premise lies in the discovery of a so-called solution to the world's energy crisis in the form of a 'positron pump': an energy supply miraculously harnessed from a parallel universe. Energy is abundant and scientists can focus on fields such as space exploration and medicine.

One of the delights of delving into this novel is the quality of the science behind the fiction. Asimov, a professor of biochemistry at Boston University, was a master of basing his fantasies on scientific concepts. Readers from scientific backgrounds can appreciate that the fantasy world Asimov creates could actually happen.

The Gods Themselves is a three-part story: the first follows the politics of the scientific community when a problem is uncovered. In the second section, the tale turns to a parallel universe inhabited by 'soft' and 'hard' beings. The third, and my personal favourite section, centres on a Moon colony back in our universe. At this point the threads of the novel converge, changing your outlook on scientific progression and conservation. Is it better to conserve our faded environment or strive to explore the rest of space and abandon the lonely blue dot? This is perhaps more relevant today, in a world on the verge of climate-change-induced collapse, than it was 40 years ago. Is it too late to save it? Maybe, as Asimov hints, we should abandon the past and move on beyond the horizons of space.

This classic novel is a light read in terms of style and language, but the unexpected plot is ideal for readers new to science fiction. Overall, this book is about a chance discovery that sets our world on a slightly different path – one which leads the protagonist and reader alike to a surprising change in perspective on space, science, and life.

Alice Stevenson is an MSc chemistry student and avid food and fiction consumer

Helping students and staff succeed in their current roles and in their future careers, by providing University-wide support for teaching, learning and researcher development:

- student learning development
- researcher skills development – research planning, communication skills, professional development, career management, business and enterprise, and more
- continuing professional development and practice sharing in teaching, learning and supervision
- support for curriculum, programme and assessment design and development

More information can be found at: www.ed.ac.uk/iad


