
Issue 21: Autumn 2017




Rise of the machines?



Future gene editing applications in agriculture

TERRORS OF BIOLOGY The microscopic armies

Contents

Focus
9 Is my data really mine? Simone Eizagirre discusses how data-mining fits into the trade-off between privacy and security
10 Saving lives and protecting privacy: finding the balance Angela Downie explores the challenges and potential we face in storing and sharing patient data
12 Rise of the machines: the future of Artificial Intelligence Samona Baptiste investigates where artificial intelligence is headed and the consequences that could entail
14 Smart cities: how smart is safe? Bonnie Nicholson explores the potential benefits and dangers of Smart Cities
16 Life in space: an even bigger leap for humankind? Emma Dixon explores whether space colonisation could, and should, be achieved
18 Physics: a game of two halves Theoretician Craig Young momentarily puts his bias aside to discuss the sometimes conflicting worlds of theoretical and experimental physics
19 Is ageing a disease, and can we cure it? Adelina Ivanova explores ageing as a disease and the drive to find a cure for it
20 Assisted Reproduction Sadhbh Soper Ní Chafraidh explores recent advances in assisted reproduction and the ethical issues they raise
22 The pill with your name on it Chiara Herzog discusses the potential of personalized medicine for the masses

Cover illustration by Andrea Chiappino

2 Autumn 2017 |

24 Terrors of biology Alina Gukova reflects on the harmful side of the biological sciences and their use in biowarfare and bioterrorism
26 Antibiotic resistance: a ticking time bomb Carlos Martínez-Pérez explores the issue of antibiotic overuse and resistance
28 Problems with protein: sustainable meat in an uncertain world Alice Stevenson investigates the surprising complexities of a sustainable diet
30 CRISPR bacon: future gene editing applications in agriculture Kirsty Millar explores the role of gene editing in agriculture
32 The future of nuclear power Brian Shaw weighs the costs and benefits of nuclear power
34 Nuclear risk: the monster skulking in the shadows Alyssa Brandt lays out the current state of affairs in the field of nuclear risk research
36 Academia in a rat race Marta Mikolajczak reveals the existing problems of academia and the future they paint

Features
38 Preprinting: setting the precedent for the future of publishing Amelia Howarth contemplates the future of preprinting in the scientific community
40 The politics of geoengineering Rachel Harrington explores geoengineering as part of a viable solution to global climate change
42 ‘From monkeys to men’ and other fallacies about evolution Andrew Bease explains the process of evolution and addresses some common misconceptions
44 What mummy and daddy didn’t tell you Issy MacGregor investigates three of the most common questions that besiege parents
45 Owning the world Clari Burrell explores land use conflict and resolution

Regulars
46 Filter bubbles and the future of politics Teodora Aldea puts her news feed on trial and investigates the effects of the filter bubble on politics
47 Inspire launch grow: innovation on the doorstep James Ozanne takes a look at some of the latest cutting edge innovations coming out of the University of Edinburgh
48 Medicine and the art of representation Haris Haseeb writes of the historical and current significance of the art of representation in medical practice
50 Dr. Hypothesis EUSci’s resident brainiac answers your questions
51 Review: The Handmaid’s Tale TV adaptation



News Team Benedetta Carbone, Jack Kellard, Aishwarya Sivakumar, Laura Milne, Rebecca Plowman, Priya Amin, Duncan McNicholl, Joseph Willson
Focus Team Adelina Ivanova, Alice Stevenson, Alina Gukova, Angela Downie, Bonnie Nicholson, Brian Shaw, Carlos Martínez-Pérez, Chiara Herzog, Craig Young, Emma Dixon, Kirsty Millar, Marta Mikolajczak, Sadhbh Soper Ní Chafraidh, Samona Baptiste, Simone Eizagirre
Feature Authors Alyssa Brandt, Amelia Howarth, Andrew Bease, Clari Burrell, Issy MacGregor, Rachel Harrington
Regulars Authors Haris Haseeb, James Hitchen, James Ozanne, Teodora Aldea, Alice Stevenson
Copy-editors Holly Flemming, Stephanie Campbell, Chloë Wright, Amber Abernethie, Rachel Harrington, Nikki Hall, Owen Gwydion James, Sarah Heath, Joseph Wilson

Dear Readers,

With a new academic year ahead of us, this issue of EUSci aims to delve into the not-too-distant future and examine the breakthroughs and controversies it may hold. Whether it is medicine, astrophysics, chemistry or data science, research has always been about revolutionary advances and exciting discoveries. Now, more than ever, science and technology seem to be accelerating at lightning pace, from personalized medicine which caters to our unique genetic background (p22) to artificial intelligence which may soon be taking over our workplaces (p12).

The path to the future may prove perilous, however: as with all great advances, these leaps and paradigm shifts are not without their caveats, risks, and contentions. This will inevitably be reflected by the legacy we carry forward, be it the global threat of nuclear weapons (p34) or the damage we’ve wreaked on the natural world (p40). Our focus section (p8) explores all of these and more - turn to page 16 to learn more about the prospect of colonizing other planets, or page 14 for a closer look at smart cities and their feasibility.

For those of you simply looking to keep up with the rapid advancements from the world of research, our news sections will keep you abreast of the latest issues globally (p4) as well as locally (p6). If you hunger for more, we advise you check out our weekly website posts to ensure you’re kept in the loop.

Sub-editors Holly Flemming, Ciara Farren, Chloë Wright, Stephanie Campbell, Ben Moore, Oswah Mahmood, Amelia Penny, Sarah Heath, Monica Kim, Samona Baptiste, Hanna Landenmark, Clari Burrell, Issy MacGregor, Natasha Tracey, Samantha Barton, Catherine Lynch, Jini Basu, Owen Gwydion James

In addition to this, we have our feature articles looking at a diverse range of topics, from the theory of evolution (p42) to some common dilemmas posed by the inquisitive minds of children (p44). Furthermore, our regular articles are here to investigate the effect of filter bubbles on our political landscape (p46), to uncover the story of how the ECG came to change the representation of the human body in medicine and beyond (p48), and to settle a cosmic conundrum (p50).

Art Team Alyssa Brandt, Yivon Cheng, Eleonore Dambre, Victoria Chu, Alice McCall, Rebecca Holloway, Mila Rocha, Lana Woolford, Marie Warburton, Stephanie Wilson, Kat Cassidy, Katie Forrester, Andrea Chiappino, Lucy Southen, Julia Rowe

We hope you enjoy reading these articles as much as our writers have enjoyed writing them.

Editor Teodora Aldea

Web Editor Angus Lowe

Editor James Ozanne

News Editor Samuel Jeremy Stanfield

James Ozanne and Teodora Aldea Editors

Deputy Editor Haris Haseeb

News Editor Bonnie Nicholson

Deputy Editor Simone Eizagirre

News Editor James Hitchen

Focus Editor Chiara Herzog

Layout Editor Vivian Ho

Focus Editor Emma Dixon

Art Editor Vivian Uhlir


news

How can mammals regenerate? Regeneration is the process of regrowing damaged tissue. Some organisms, like zebrafish and salamanders, are very good at regenerating, but unfortunately humans and most other mammals respond to injury by forming scar tissue instead. An exception is the African spiny mouse (Acomys cahirinus), a small rodent that can regenerate skin, blood vessels, nerves, cartilage, fat tissue and even muscles. Although the immune system is undoubtedly important in responding to injuries, its role in regeneration is relatively uncharacterized. In May of this year, researchers from the University of Kentucky, USA, published the results of a study investigating the relationship between the immune system and regeneration. They decided to compare the immune response to injury between the spiny mouse and the common house mouse (Mus musculus). As a model for injury, they made a tiny hole in the ears of the mice and studied the response of the cells in the tissue. First they compared the immune cells present in the ear after injury: the house mouse showed a bigger inflammatory response, with an accumulation of neutrophils (immune cells that are a hallmark of inflammation itself). In comparison, the spiny mouse showed significantly more active macrophages (immune cells that clear away debris and pathogens, and regulate inflammation). In both species, the neutrophils disappeared from the injury site in a few days, while macrophages persisted for up to two weeks. This suggested that the long-lasting, more active macrophages in the spiny mouse could be responsible

Image courtesy of Wikimedia Commons

for the regeneration of the mice’s ears, as opposed to scarring. To test this, the researchers chemically removed the macrophages from the ears of the spiny mice just before injury, and subsequently observed that the hole took a lot longer to close, with defects in skin regrowth and cell division. Only when macrophages started to re-emerge, 20 days later, did the tissue begin to regenerate properly. How do macrophages promote regeneration? The authors suggest two hypotheses: either the cells are sending regeneration signals themselves, or they are clearing away other cells that naturally inhibit regeneration and promote scarring. Scientists will now have to define which macrophage properties are beneficial for regeneration, as well as whether (and how) we can replicate those conditions in non-regenerative organisms like humans. Benedetta Carbone

New opportunity springs from ancient knowledge Looking to the wisdom of ancient civilizations can lend insight for constructing our future. Using current compositions and methods, concrete - particularly that used to construct sea walls - crumbles within a matter of decades. Whilst recent research on concrete reinforcement has focused primarily on resisting bombs in the Middle East, solutions to the quiet but constant battle of attrition with the elements have proved elusive. Now, the answer may be drawn directly from the ancient walls of the Roman Empire. For years researchers have read historical accounts proclaiming that Roman concrete grew stronger with each crashing wave. In July this year, researchers at the University of Utah set out to investigate these statements. Cores were drilled from the Roman harbour walls in Pozzuoli Bay, Italy, and

Image courtesy of Pixabay


two key minerals were isolated: Al-tobermorite and phillipsite. Unlike other long-lost recipes such as Damascus steel or Greek fire, Roman concrete remains superior to contemporary materials rather than becoming inadequate or irrelevant. Although it is known that Al-tobermorite’s crystalline structure would lend durability, attempts to integrate it into modern concrete have failed. The key was exposing the volcanic ash ingredients to seawater, causing a chemical reaction that forms slow-growing mineral reinforcements within the wall. Rather than succumbing to the force of the ocean, the wall grew increasingly resilient. The popular criticism of Al-tobermorite is that Roman concrete was never reinforced with steel, unlike the modern equivalent. It has been suggested that steel may be detrimental to structural integrity as it rusts, causing an expansion of the metal known as “oxide jacking”. The argument is that oxide jacking, rather than a lack of crystalline minerals, is to blame for our short-lived infrastructure. Alternative solutions, however, are yet to be offered. The development of new infrastructure is critical, especially when considering global population growth. The United States continues to ignore the American Society of Civil Engineers’ estimated $3.9tn of investment required for updates and repair, a $300bn increase in 4 years. Closer to home, the Swansea tidal lagoon is projected to cost £1.3bn, with an untenable payback term of 120 years to recoup the investment. With that said, if components of Roman concrete were introduced into modern construction, the blossoming of sturdy mineral structures within the walls would take generations. Jack Kellard


Venom evolution in parasitic wasps The speed at which genes can evolve to have different functions determines the adaptability and evolvability of an organism. One way genes evolve is by duplicating, where the new copy of the gene acquires a new function. Another mechanism is the process known as ‘lateral transfer’, in which a gene performing a certain function is acquired from a different organism. Other known mechanisms by which evolution is driven are ‘chimeric fusion’, where two genes fuse to produce novel products, and cases where DNA that historically coded for nothing acquires new coding properties. Led by Prof. John Werren, a group of scientists from the University of Rochester decided to study the evolution of venom in parasitic wasps, which feed off living hosts. Similar wasps often shift between extremely diverse host insects and, interestingly, the wasp venom doesn’t kill the host but in fact alters its metabolism, immunity or behaviour to suit the feeding wasp larvae. One example of this is where a wasp stings a cockroach in the brain and the venom makes this host immobile without paralysis, so the wasp can continue to feed itself. A fair hypothesis is that the genes encoding the venom would have to rapidly evolve to suitably manipulate the host insects. This research group studied four closely related species of wasps that all colonise insect pupae. They found that venom genes are rapidly lost and gained in the specialized wasp venom gland. One can imagine the loss of genes, but where do new genes come from? To study this, they selected 53 genes that were recently gained by these wasps. The results showed that 34% of the genes selected seem to have been acquired in an ancient event of duplication, and 17% in a similar but more recent event. However, 49% of these are found as single copies, suggesting they are ‘new’. Using RNA-sequencing, a routine technique used to study gene expression, the authors show that the new genes are expressed specifically in the venom gland, highlighting that these single-copy genes have been ‘co-opted’ for venom function - in other words, under independent circumstances the genes have evolved to perform the same function (unlike cases of duplication where one gene arises from another). As for the venom genes that are lost, the authors elegantly show that the expression of these genes is developmentally regulated, and that venom function is more readily lost in genes that have alternate functions. This study, published in July 2017 in Current Biology, brings to attention a new, less-studied mechanism of gene evolution; whether it is restricted to specialised organs and functions or may be a more general process remains to be answered. Aishwarya Sivakumar

Image courtesy of Wikimedia Commons

What can we learn from knockout studies in humans? While the human genome project has given us a list of many of the genes in the human body, we still don’t know what most of them do. In order to study the function of a gene, scientists will often delete it in a model organism - known as a ‘knockout’ for that gene. It is not possible to do this in humans, but scientists can make use of loss-of-function mutations to find out what a particular gene is doing. A loss-of-function mutation is one that inactivates - or partially inactivates - a gene. This either prevents a protein from forming, or results in a mutated protein that cannot carry out its role in the body. As humans usually have two copies of most genes (the exception being any carried on the X chromosome in men), both copies must contain a loss-of-function mutation for the gene to be knocked out. This is rare in humans, but the incidence

Image courtesy of Pixabay

increases in families whose parents are closely related: for example when the parents are first cousins, termed consanguineous unions. A recent paper by Saleheen et al., published in the journal Nature, has detailed the findings of a study carried out in Pakistan, where the incidence of consanguineous unions is high. The study, named the Pakistan Risk of Myocardial Infarction Study (PROMIS), investigated determinants of heart disease and diabetes. The study identified 1317 genes where both alleles had a loss-of-function mutation. Carriers of some of these mutations did not show any obvious physical effects. Individuals with homozygous loss of function of one gene, APOC3, did not show increased blood fat levels when challenged with a fatty meal, as would happen in individuals without the mutation. This indicates that the loss of function of this gene may be protective against heart attacks. Studies like this have the potential to provide insight into therapeutic targets for disease, and to reveal the potential effectiveness of certain drugs that target particular proteins. They may also point to the redundancy of genes within the human genome; information that will be valuable in the search for drug targets. Laura Milne

research in edinburgh

Life on Mars If you were still holding on to some hope for Martians, unfortunately, the chances of life on Mars now seem even slimmer. In the University of Edinburgh’s Physics and Astronomy department, Prof. Charles Cockell’s group have recently published work suggesting the surface of Mars is more hostile to life than previously thought. The work by postgraduate student Jennifer Wadsworth looks at the effect of perchlorates: a form of highly oxidising chlorine salts. They were first found on the surface of Mars by NASA’s Phoenix lander in 2008. Initially their discovery caused some excitement, as it was suggested they were a potential energy source for bacteria. On top of that, the presence of salts also makes the chances of liquid water higher. However, this new research may put a slight halt to this excitement because it shows that perchlorates can be very toxic, given certain conditions. When bacteria are grown in the presence of perchlorates alone there are no negative effects on growth. It has also been shown that radiation levels close to those experienced on the surface of Mars have a limited negative effect on bacterial growth. But now, Wadsworth has found that when bacteria grown in the presence of perchlorates are also exposed to just 30 seconds of Mars-like levels of ultraviolet (UV) radiation, the combined effect is so toxic that none of the bacteria survive. The group also tried growing the bacteria in different conditions to mimic the surface of Mars. Growing the bacteria on an artificial rock surface or at a reduced temperature slows down the toxic effects of the perchlorate and UV combination. The group also looked at the effect of

Image of twin peaks on Mars via Wikimedia Commons

two other chemicals found on Mars: iron oxides and hydrogen peroxide. When these chemicals were combined with the perchlorate and UV exposure, the total effect was even more toxic. This combination of multiple compounds and high levels of UV exposure makes the Martian surface an incredibly difficult place for life to exist. However, all hope is not yet lost. Since UV radiation is key to activating the perchlorates’ toxicity, we may just need to start looking under the surface. Rebecca Plowman

The resilience of malaria parasites Can parasites make decisions? A recent study addresses the controversial concept of whether malaria parasites are able to make decisions to maximise their survival in a host and transmission between hosts. Researchers at the Institute of Evolutionary Biology have found evidence suggesting that malaria parasites are able to change their behaviour according to the environment inside the host. In the same way that gazelles and wildebeests survive the variable conditions of the African savanna, these unicellular beasts are able to exploit their red blood cell resources to survive and successfully transmit. This recent work, led by PhD student Philip Birget in Prof.

Image courtesy of Pixabay


Sarah Reece’s lab, explores the reproductive behaviour of malaria parasites when red blood cell numbers are low (otherwise known as anaemia). Should they invest more in survival or transmission? They studied this phenomenon in mice infected with the rodent malaria parasite Plasmodium chabaudi. Their data suggest that malaria parasites find an anaemic environment favourable and, as a consequence, invest more in transmission. Although this may seem counterintuitive, it has sound reasoning. They found that malaria parasites take advantage of increased levels of immature red blood cells, occurring as a result of anaemia, which provide them with a more resource-rich environment. This allows them to invest more in transmission whilst maintaining high in-host survival. Interestingly, they also found that less virulent strains are able to do this to a greater extent than their more virulent comrades. This implies that populations with lower virulence compensate by having a more robust transmission strategy during anaemia. The ability to alter behaviour to cope with variable environmental pressures is a trait typically associated with animals. This study extends that notion to parasites and abandons the traditional idea that they are disciples to their host. It instead highlights the ability of unicellular parasites, acting as a single entity within a host, to take on a form of active ‘decision making’ in response to their surroundings. The original paper is titled ‘Phenotypic plasticity in reproductive effort: malaria parasites respond to resource availability’ and was published in the journal Proceedings of the Royal Society B in August. Priya Amin


This carbon dioxide tastes funny When Cameron and Jane Kerr discovered unusually high carbon dioxide concentrations in the soil and groundwater of their farm in rural Saskatchewan, they assumed that the culprit of this pollution was the carbon storage facility at the nearby Weyburn and Midale oil fields. This view was backed up by a study conducted by Petro-Find GeoChem on behalf of the Kerrs, but immediately refuted by the operator of the facility, the Petroleum Technology Research Centre. The dispute over the source of the additional carbon dioxide mattered for more than just liability in this individual case, however: the Weyburn-Midale centre is a test monitoring facility, and if it was leaking carbon, the future of carbon capture as a technology would be in very real danger. A team of scientists from the Universities of Edinburgh, Saskatchewan and Rochester have developed a novel method of testing carbon dioxide deposits to determine their history and provenance. In the same way that you can taste the difference between bottled waters because of the trace quantities of minerals that dissolved in them as they flowed through rock layers, in theory instruments can also "taste" the difference between carbon dioxide that has filtered up from an underground chamber and that which has simply dissolved from the local atmosphere. The researchers used mass spectrometry to determine the isotopic ratios and abundances of noble gases in the samples collected from the ground water on the Kerr farm. They looked at the ratios of 3He/4He and 4He/20Ne to establish the presence of dissolved radiogenic 4He from deep rock layers. They

also measured Ne, Ar and Kr abundances, as lower values are strong indicators of subsurface gases “stripping” these noble gases from the groundwater on their way to the atmosphere. The study determined that the ratios were particularly strong indicators of the carbon dioxide origin, and that the gas on the Kerr farm was most likely to come from atmospheric sources. These findings corroborated evidence from two other trials, helping to restore faith in carbon capture and storage. Duncan McNicholl

Image courtesy of Pixabay

Unlocking the secrets of altitude sickness When you think of a lab, you don’t usually think of an alpine hut on the side of a mountain. For a group of intrepid students, doctors and volunteers, this was their home for a week in July, as they embarked on the APEX 5 high altitude expedition. Their aim? To study the effects of high altitude on the body. High altitude can cause a range of conditions. Some people adjust to altitude quickly, while others can become seriously ill. Currently, there is no way of knowing whether or not an individual will experience ill effects. These include acute mountain sickness (AMS), and also life-threatening conditions such as high-altitude cerebral edema (HACE) and high-altitude pulmonary edema (HAPE).

Image courtesy of APEX 5

The expedition has a diverse range of research aims. Studies include examining the effects of altitude on blood clotting, the innate immune system, vision and eye health, and cognitive function. Blood samples, blood oxygen readings and psychological information were taken from the volunteers over the course of the expedition, and samples shipped back to Edinburgh for further investigation. The team hope the research will shed light on how to better treat those who suffer from altitude sickness. The research may also provide insight into diseases associated with low blood oxygen (hypoxia) experienced by patients at all altitudes, such as Chronic Obstructive Pulmonary Disease (COPD). Led by Chris Graham, the expedition was organised by a team of undergraduate medical students, and the researchers and volunteers were motivated students from the University of Edinburgh. The team faced difficult conditions as they worked at a small refuge at the base of Huayna Potosi, an iconic 6,088m tall mountain in the Cordillera Real mountain range in Bolivia. Students set up a makeshift lab complete with centrifuges, a microscope, and even a plate reader flown all the way from Edinburgh! The volunteers spent four days acclimatising in La Paz (the highest capital city in the world at 3,650m above sea level) before ascending to the lab to spend a week at 4,800m. Spirits were high even in difficult conditions; volunteers relaxed in their downtime, playing cards, hiking in the beautiful surroundings, and walking the local dog! The team are now hard at work analysing the results from the trip and hope that the data will help unlock some of the secrets of altitude sickness – watch this space! Joseph Willson

focus

THE CONTENTIOUS FUTURE
A hypothetical journey into the direction of science

Whereas in our last issue we explored some highlights from the (science) history books, this issue we are jumping forward and exploring the future of science and technology - both the good and the bad. Starting with the pertinent topic of data security, Simone Eizagirre explores data mining and Angela Downie discusses the storage and sharing of patient data. Looking into the not-so-distant future, Samona Baptiste investigates benefits and concerns surrounding artificial intelligence, Bonnie Nicholson introduces us to smart cities and Emma Dixon discusses potential space colonization. Sticking with physics, Craig Young looks into the conflicting worlds of experimental and theoretical physics. Looking closer at our own bodies, Adelina Ivanova asks whether we could ever ‘cure’ ageing, Sadhbh Soper Ní Chafraidh looks at assisted reproduction and Chiara Herzog explores personalized medicine. Two more worrying sides of biology are then discussed - Alina Gukova investigates bioterrorism whilst Carlos Martínez-Pérez introduces antibiotic resistance. Giving us some food for thought, Alice Stevenson examines the complexities of achieving a sustainable diet and Kirsty Millar shows us how CRISPR technologies may be used in agriculture. Getting elemental, Brian Shaw looks at the future of nuclear power whilst Alyssa Brandt discusses the difficulties of studying nuclear weapons. Bringing us back to a world familiar to many of us, Marta Mikolajczak explores the problems facing academia and what that might mean for its future. Brimming with interesting articles, we hope this issue will provide you with thought-provoking ideas and will enable you to see the future of science in a more critical light, weighing up benefits and potential dangers. Emma Dixon and Chiara Herzog, Focus Editors Illustration by Alyssa Brandt



Is my data really mine? Simone Eizagirre discusses how data-mining fits into the trade-off between privacy and security

In today’s increasingly digital world, we practically live online. Every day the world’s online activity produces about 2.5 quintillion bytes of data, which is roughly equivalent to downloading every episode of Game of Thrones 10 million times. The question of who exactly has access to the blueprint of our digital lives has therefore become quite a pertinent one to answer. Striking the balance between privacy and security is not a novel dilemma; what is new, however, is deciding how this trade-off applies to the internet.

Our online activity is not just limited to communication: your smartphone knows what time you’re most likely to open and use specific apps, the places you’ve been while your GPS was active, and how many times you’ve streamed the latest album by your favourite band. On top of this, your likes and comments on Facebook paint a very accurate picture of your personality and psychology. Access to this kind of information, then, offers a powerful insight into how people’s online behaviour transcribes to their offline lives. The process of discovering these patterns in big data is referred to as “data-mining” or “data analytics”.

Now you might think personalised user content guarantees a better, more relevant experience online. However, the algorithms used to provide relevant content hold the danger of only showing us content that we want to see, leading to the ‘echo chamber’ effect on social media (for more on the political power of social media, check out page 46). With the rising number of terrorist attacks in Europe, politicians have called on communication and social media companies, such as WhatsApp, to reduce database encryption levels so that governments can increase their access to these platforms.
It is often in periods of political and social unrest that governments are given the mandate to heighten surveillance legislation. When citizens feel most vulnerable, fear is used as a means of legitimising these clampdowns, and the public is often willing to make concessions on privacy in the name of national security. Furthermore, the extent to which data-mining is actually useful for preserving public security is unclear. Data-mining works by finding patterns and trends in an enormous amount of information, and reporting these back. It’s a highly effective tool for analyzing and tracking the behaviour of specific groups of people, such that companies can identify the trends of their consumers and make their marketing strategies more effective. It becomes very easy, for example, to find out which products are most appealing to specific target audiences. But to use data-mining techniques in counter-terrorism efforts requires knowing which needles you are looking for in a haystack of amassed data, and therefore demands knowledge of the online behaviours which can identify a potential terrorist. For this method to be effective, and in order to be able to accurately recognise these patterns through surveillance, governments need access to a reliable and extensive set of data from which the behavioural patterns of terrorists can be extracted.

While some might argue that “you have nothing to fear if you have nothing to hide”, there are concerns that are unique to increasing governments’ access to our online activity. The extent to which records of our online activity can be used to profile our social, political and personal behaviour is frightening. Following the outcome of the Brexit referendum last June and the recent US Presidential Election, there have been reports of campaigns allegedly working with web analytics companies such as Cambridge Analytica. This would allow information from online activity records (magazine subscriptions, social media likes and most visited websites, for example) to be matched with voters, such that online advertising can be targeted to influence people’s voting intentions. This allows a more effective campaigning strategy, where each voter can be swayed by the topics that matter to them; for example, people from areas with high unemployment rates could be targeted with advertisements featuring foreigners “coming to take their jobs”. The degree to which these strategies were employed and actually affected the elections is still unclear and, in the case of the Brexit vote, under investigation by the Information Commissioner's Office and the Electoral Commission.

It is clear that there is a balancing act between security and privacy when it comes to our online lives. To ensure that the state’s duty to guarantee security does not compromise citizens’ right to privacy, we need to hold governments accountable for what is done with our information. We must ensure that legislation encourages our right to live freely and securely, while protecting our privacy to ensure the state cannot abuse its power to a degree where its democratic nature is jeopardised.

Simone Eizagirre is a fourth year Chemical Physics student at the University of Edinburgh.

Image courtesy of Pixabay



Saving lives and protecting privacy: finding the balance Angela Downie explores the potential benefits and challenges we face in storing and sharing patient data

“Sharing data saves lives.” This is a common belief shared by health providers and researchers worldwide. It is also the title of a 2014 campaign launched by the UK’s leading medical research charities to highlight the importance of patient data sharing. The campaign accompanied a massive effort by NHS England to introduce a system that would allow patient data to be shared amongst the medical and research communities. This system promised to help improve healthcare and advance knowledge, potentially saving thousands of lives. After many controversies, however, the programme is now defunct, because in reality patient data sharing is in equal parts promising and complicated.

Our medical records represent some of our most valuable and sensitive information, and having them fall into the wrong hands can have devastating results. British physician and author Ben Goldacre once compared patient data to nuclear power, referring to the massive stakes in unlocking its potential benefits without also unleashing the catastrophic consequences of its misuse. We therefore have to address the issue of finding the balance between allowing medical professionals access to this goldmine and keeping our own privacy safe.

In dealing with this problem, perhaps the first thing to consider is the data itself. Sharing of patient information is certainly not a new endeavour and has been enriching the medical community for many decades. In the 1950s, for example, patient records provided the cornerstone of proof needed to establish that smoking caused lung cancer. However, both the quantity and quality of patient data we now have access to are beyond the scope of what we could have imagined 50 years ago, and are still evolving.
We are no longer referring to piles of paper records: advances in both storing and sharing technology mean we can have quick access to specific pieces of information collected from thousands of patients worldwide. Additionally, we now have the ability to compile and compare vast quantities of genetic information from individuals around the world. This is crucial for many reasons. It offers the possibility of discovering novel genetic markers and diagnostic tools to help us improve and personalise treatment, and it represents a unique hope for individuals suffering from rare diseases to pin down the causes of their conditions and develop courses of action for them. Furthermore, it carries the potential to predict our risk of developing certain diseases, which poses a key point of conflict: such predictions allow doctors to take preventive action, but become a threat if they reach insurance companies.

Novel ways of collecting patient data have also been developed. Taking advantage of the smartphone surge, Apple released ResearchKit in 2015, an open-source framework that allows researchers to develop apps designed to collect patient information worldwide to aid clinical studies. This platform has since launched apps that have helped study conditions including Parkinson’s disease, epilepsy, asthma and autism. It has also led to the realisation that varying degrees of type 2 diabetes may exist, helping improve personalised treatment. Interestingly, a recently published study showed that the data collected from patients by an app is indeed reliable and comparable to that obtained through traditional methods. An analogous framework, ResearchStack, has now been launched for Android, allowing researchers to share their apps with a larger audience. Similarly taking advantage of our habit of forever having a phone in hand, UCL helped develop a game called Sea Hero Quest, through which players provide information that improves our understanding of the early stages of dementia. The game has already collected data that would have taken over 9,000 years to gather in traditional lab settings.
Nowadays, medical devices themselves (such as monitors, pacemakers and insulin pumps) can be fitted with sensors and connected to the internet in such a way that they provide useful information, allowing patients to be closely monitored and treatment adjusted. However, this connectivity has also made the devices themselves a vulnerable target for malicious tampering.

In the past two years 90% of US health service providers have suffered security breaches

Each type of data and collection method presents its own advantages and challenges, and requires very specific regulation, as well as adequate protections to prevent data breaches. At this point it is important to distinguish between two very different concerns: the parameters of the data sharing, and the safeguarding of the data itself.

Firstly, clear limitations must be established for the voluntary sharing of data: making sure it is properly agreed and stipulated what patient data is shared and who it is shared with, along with following correct procedures for anonymisation and respecting patients’ right to remove their data from any database at any time. This proved critical in the demise of the already troubled NHS scheme, as the general public was upset that their (albeit coded) data had been shared with the insurance industry for the purpose of setting insurance premiums. A Wellcome Trust survey found that although over 75% of people are happy to share their medical records with the research community in both public and private institutions, this consent does not extend to the insurance industry.

Furthermore, it is vital to ensure that any data being stored, and particularly shared, is properly anonymised. Whilst this might not sound too complicated, it in fact represents a real conundrum. Even if appropriate care is taken to remove identifiers such as names, postcodes and emails in a secure way, a person can also be identified through additional information such as their age, medical conditions, lifestyle choices and, especially, genetic information.

Image courtesy of Pixabay

This leads to regulations that limit the amount of information shared. An example is requiring that dates of birth be omitted or reduced to only the year, but this can cause the data to lose value to researchers. Consider the following question: what sets you apart from everybody else? This is precisely what researchers want to understand when looking at the aetiology of diseases and personalised courses of treatment, but it is also exactly what you don’t want your records to reveal. Finding the correct balance is going to come down to trial and error, as well as optimising the bundling of data so as to lose as much trace of the individual as possible whilst conserving maximum resolution of the data. This will also require cooperation between countries, since data security laws vary massively, yet the most valuable studies are the ones including a high number of individuals.

The second security concern we face is in keeping the actual databases safe from criminal attackers. Malicious attacks on health providers’ databases are a growing threat, with IBM claiming the health industry has become the top target for cyberattacks. In the past two years, 90% of US health service providers have suffered security breaches, whilst in the UK the NHS has reported more than 1,330 data security incidents to the Information Commissioner’s Office (the UK body that oversees data protection) since it began keeping records in April 2015. It is important for citizens to understand that no information will ever be completely safe, but the government should put in place and enforce strict policies and sanctions, forcing any company handling personal information to follow recommendations ensuring as much safety as possible. Measures health care providers can take include encrypting all data and using secure email servers, keeping software up to date, constantly evolving and upgrading security measures, and concentrating efforts not only on preventing breaches but on detecting them quickly, so that action can be taken promptly and damage minimised. As well as this, all staff must be correctly trained to identify, report and act upon encountering a data breach.
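The kind of anonymisation discussed above (dropping direct identifiers, coarsening a date of birth to just the year) can be sketched in a few lines of Python. This is a toy illustration with hypothetical field names, not any real health-record schema, and genuine anonymisation must also account for combinations of quasi-identifiers:

```python
def anonymise(record):
    """Return a copy of a patient record with direct identifiers
    removed and the date of birth generalised to the year."""
    DIRECT_IDENTIFIERS = {"name", "postcode", "email"}
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue  # drop fields that identify a person outright
        if key == "date_of_birth":
            out["birth_year"] = value[:4]  # keep only YYYY from YYYY-MM-DD
        else:
            out[key] = value
    return out

patient = {
    "name": "Jane Doe",
    "postcode": "EH8 9YL",
    "email": "jane@example.com",
    "date_of_birth": "1985-06-14",
    "condition": "asthma",
}
print(anonymise(patient))  # {'birth_year': '1985', 'condition': 'asthma'}
```

Note how the coarsened record keeps the research-relevant field while the resolution of the quasi-identifier drops; that loss of resolution is exactly the value trade-off described above.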

As technology continues to evolve, we have to be able to adapt

As technology continues to evolve, we have to be able to adapt. If we are able to utilise technology to our benefit, patient data sharing may hold the key to advancing medicine at an unprecedented rate. However, it is key that we establish clear regulations that always ensure the safety of individuals as much as possible. It is also important that we train the coming generations in cyber security to develop better protection and detection mechanisms. As we are still learning a vast amount about how best to handle and share information, with much yet to figure out, it is essential that all processes are transparent, so that input from different parties can be heard and the best possible balance achieved.

Angela Downie is a second year PhD student in Cell and Molecular Biology.


The future of Artificial Intelligence: rise of the machines? Samona Baptiste investigates where artificial intelligence is headed and the consequences that could entail

The concept of artificial intelligence (AI) is something which seems to strike fear into the hearts of many, and the media does nothing to quell this hysteria. Films such as I, Robot, Ex Machina, Terminator, and Robocop all explore artificial intelligence from its conception through to complete societal integration. With Stephen Fry’s recent claims that without a stronger political influence, dystopian nightmares about AI could become a reality, should we be concerned about the rate at which this research is progressing?

AI will outperform humans in all tasks in 45 years

A 2017 study showed that machine learning experts believe that AI will fold laundry more efficiently than humans in just 6 years, surpass the ability of human surgeons in 36, and outperform humans in all tasks in 45. With such remarkable statistics coming from experts in the field, it certainly appears that an AI-driven future is almost inevitable.

Many of the media’s concerns about AI focus on humans becoming redundant altogether. One area this would likely begin in is the workplace, with experts predicting complete automation occurring within 120 years. In reality, though, would workplace automation be as straightforward as it might seem in theory?

Image courtesy of Pixabay

These fears are not new. Exponentially rapid technological developments resulted in very similar concerns about automation in the 1960s. Yet in the 50 years since, while technology has become more integrated into our daily lives, full automation has not occurred. Why is this? Something that machines cannot currently substitute for is human-to-human interaction. Though it seems more economical, complete automation can sometimes lead to a demand for non-automated jobs, as with ATMs: their popularity led to an increased demand for bank clerks, as customers seemed to prefer working with staff for longer transactions, despite the speedy convenience of the machine. In certain circumstances, human jobs can complement automation, which may lead to an evolution and a shift in the jobs performed by humans, rather than a total replacement by AI. Furthermore, some jobs exist because of an appreciation for the skill and finesse required to undertake a particular task, as with artisan or fairtrade goods, despite technology making it cheaper and easier to automate such work. The demand for these handmade products still exists, and comes with a hefty price tag to match. So even if widespread automation were to occur, it would seem that there would still be space in the market for handmade goods.

Just as with every other major scientific advancement, developments in AI and increased integration into society elicit a lot of ethical debate. Complete workplace automation could lead to the loss of millions of jobs and a complete upheaval of the economy as we currently understand it. A greater portion of the profits would go to the small number of companies and individuals who owned the machines, leading to greater wealth inequality. Though it could be considered ethical for machines to perform dangerous jobs such as mining, if automation ultimately leads to increased poverty, would this still be ethical? It is also likely that AI will inherit the biases of the people who write the programmes. In some areas, such as policing, this has the potential to be extremely problematic. To prevent it, research and policies in these areas need to be more transparent, allowing open discussions that lead to fair conclusions; but how can this realistically be achieved? These are all difficult questions with no easy answers, but solutions need to be found before complete workplace automation occurs.

And although we may possess the technology for AI to be developed and integrated into society within 120 years, the question remains: with all this doom and gloom, are there any benefits to increased AI involvement? If we take a closer look at how it is currently used in society, we can already see huge benefits. Google Translate, Siri, and search engines are all extremely commonly used technologies that rely on machine learning algorithms. Furthermore, the potential of AI is vast. From self-driving cars and improved accuracy in medical diagnostics to robots that carry out work and research too dangerous for humans, AI can provide us with previously unfeasible opportunities to learn more about ourselves and the world around us.
Illustration by Rebecca Holloway

These ideas, however, take the viewpoint that humans remain the dominant species and will continue to hold an advantage over machines. Polanyi’s Paradox, a philosophical concept coined in the mid-twentieth century, suggests we will continue to do so thanks to tacit understanding: knowledge of ourselves and our skills exists beyond our comprehension, passed on through culture and history. However, one could argue that whilst our skills are not always based on conscious decision-making and reasoning, processing and rationalisation still occur. Should we reach a stage of machine learning where programmes develop and apply implicit rules intuitively, we as a species might no longer find ourselves with such an advantage.

AI can provide us with previously unfeasible opportunities to learn more about ourselves and the world around us

It seems that a future with a more pronounced AI presence does not necessarily equate to a Matrix-style control of the population, at least with the current trend of developing AI for specific tasks. 2017 marks the twentieth anniversary of the defeat of a chess champion by a computer programme, and the first time an AI has beaten the Go champion, in a game that requires even greater intuition and creativity. AI has proven to excel at specific tasks, but could it be possible for AI to develop beyond this and actually gain consciousness?

To answer this question, we must first consider what consciousness is. Tononi’s Integrated Information Theory defines consciousness in five key ways. Firstly, it occurs intrinsically: Descartes’ famous words, “I think, therefore I am”, suggest we are aware of consciousness inasmuch as we know that we experience it, and this intrinsic awareness is linked to a physical structure within the brain. Secondly, what we experience is made up of composite parts such as colour, location, or taste. Thirdly, what we experience is definable and unique – no two experiences are the same. Fourthly, though made of composite parts, we experience things holistically: two separate events (i.e. reading a book and looking at the words ‘crime and punishment’) cannot be independently combined to produce the experience of specifically reading Crime and Punishment. Finally, the experience of consciousness is definite, limited by a constant state of perception. While this theory does not yet have the empirical evidence to prove it, it doesn’t exclude the potential for AI to develop consciousness. Yet with enormous amounts of research still to be done, can we ever truly achieve an artificial consciousness similar to that of humans when we are limited by our own self-definition? That is, our scientific approaches tracing consciousness back to brain structures can only test secondary symptoms – things that we believe occur as a result of consciousness. Furthermore, most of this research is done in animals, which we cannot be sure experience consciousness in the same way we do.

Alan Turing’s famous test is still used as the gold standard to establish whether a computer can exhibit intelligence at a level that allows it to pass as human. John Searle argued against its validity, suggesting it is inherently flawed in distinguishing between a machine which simulates consciousness and one that truly experiences it. His Chinese Room thought experiment states that in a locked room with enough time, dictionaries, and grammatical rules, it would be possible for a human to accurately respond to Chinese sentences without actually comprehending Chinese. Searle argued that machine intelligence operates in the same way. From a theoretical perspective, this is a very important point – simulating something is not the same as truly experiencing it. Yet if this is applied to everyday technologies, with humans unable to distinguish the difference, does it ultimately matter?

With advancements in AI occurring rapidly, regardless of what the future holds, it is important that AI development is done for the sake of improving our future and serving a positive purpose. For this to happen, we need research to be more transparent, the ethical debate to be fair, and, above all, the potential impact on humanity to remain central to progress. The power is in our hands.

Samona Baptiste is an Integrative Neuroscience MSc student.


Smart cities: how smart is safe? Bonnie Nicholson explores the potential benefits and dangers of Smart Cities

Picture a scene that many of us know all too well: a shrill alarm rudely interrupts your sleep, declaring the start of Monday morning. You roll over in bed and, one eye half-open, manage to hit snooze on your smartphone. Now imagine that this half-hearted act activates your smart coffee machine, and shortly after, the aroma of coffee fills your lungs and lifts you out of bed. Your walk to the kitchen activates motion sensors, switching on the smart heating and rolling up the smart blinds. You take the milk out of your fully-stocked smart fridge (thank goodness for the text it sent yesterday reminding you to buy more). And when you sit down in front of your smart television, cereal in hand, your favourite sitcom comes on automatically.

Now imagine this on a bigger scale, city-wide: sensors in car parks to monitor the number of spaces available; sensors in rubbish bins to inform when they need to be emptied; sensors to monitor air pollution, energy usage, traffic congestion and queue lengths. And all of this information publicly available, at the tap of a touchscreen.

The connectivity of everyday devices has the potential to revolutionise daily life

The Internet of Things, or IoT, describes the connectivity of everyday devices, and it has the potential to revolutionise daily life and contribute to the development of Smart Cities. A totally Smart City, with a centralised information database and a fully integrated IoT, could drastically reduce energy usage and increase the efficiency of public services – revolutionising transportation systems, schools, libraries, hospitals, and utilities.

I asked Dr Dave Fitch, Head of Operations at The Data Lab and former manager of the international academic network for the SmartCities project, about the motivation behind Smart City initiatives: “At a basic level, every city wants to be smart – but what that means in practice is unclear. Smart City initiatives tend to focus on the things that can be managed – for example, infrastructure – things that can be regulated and improved by regulation. Hence, the European focus on Smart Cities is actually primarily about energy efficiency in buildings, as this is an area where you can drive change through regulation.”

Illustration by Alyssa Brandt

Barcelona is at the forefront of Smart City technologies. Citizens are able to access records from hospitals, police stations, prisons, and law firms via a centralised information database established by the Ministry of Justice. In the streets, the 2014 XALOC project saw the installation of a system to detect available parking spaces. And in its Parc del Centre de Poblenou, sensor technology in the irrigation systems lets gardeners know about their plants’ water levels. Smart technology has also been used to improve the city’s transportation network – bus routes are planned to maximise green lights, and smart traffic lights can turn green when emergency vehicles approach.

A bit closer to home, Milton Keynes is a forerunner in the global Smart City race. The focus of the MK:Smart project is on securing a sustainable community while fuelling economic growth. Smart technology is being integrated into things like soap dispensers and rodent traps, as well as central heating systems, water meters, and transport infrastructure. For example, a ‘MotionMap’ is being developed to track real-time movements of vehicles and people across the city, while the MK Data Hub compiles all the information collected. As with any technological movement, it is integral that the sustainability of the projects themselves is addressed. In line with this idea, several educational programmes have been set up in the city to teach data-handling skills to school and university students, ensuring the data will be used to its full potential in the future.

But before we make plans to recreate the apparent utopia of Milton Keynes, let us first consider the risks associated with Smart Cities. Such technology generates ‘big data’ – massive databases of information about your personal, day-to-day life: from tracking your route to work to logging your daily energy consumption. All of this data can be sold to companies and used for sophisticated marketing strategies and the development of behavioural models. “I think the question citizens have is how their data is being used and resold, and by whom and for what use,” Dr Fitch suggested. “They don’t understand what’s going on, and I think they’d be appalled if they knew some of the things that were happening. Look, for example, at the case where what was sold as an ‘email subscription management service’ was in fact selling customer information about purchases to other companies.”

In such scenarios, we must ask: who really benefits from the development of Smart Cities? The general public, or the companies who can use that information to their advantage? It is certainly worth considering how the privacy of our personal lives may become increasingly jeopardised. But perhaps big data might benefit the public, Dr Fitch elaborates: “The value of aggregated data – for example on cell phone use – is potentially huge. UNICEF has used cell phone data to predict epidemics – but accessing that data often comes at a huge financial cost to UNICEF. So, there is an arguable need for greater access to aggregated data, and even less access to personal, identifiable data. It’s a hard circle to square.”

A totally Smart City, with a centralised information database and a fully integrated IoT, could drastically reduce energy usage and increase efficiency of public services

The anxiety surrounding the use and resale of big data is not the only concern about the development of Smart Cities. As the entire infrastructure of a city would be coordinated online, we would become progressively more dependent on the reliability of internet services, and increasingly vulnerable to the effects of a cyber-attack. ‘Ransomware’ – infective malware whose effects can only be undone by paying a ransom to the cyber-attacker – is worryingly available on the web. Earlier this year, a cyber-attack disrupted thousands of computers across the NHS, raising concerns about smart devices and internet-based public services (or e-services). But Dr Fitch explained that there are ways to protect against these hacking crimes: “There is a lot of work being done in the data world on anomaly detection – finding things that are out of place – and this is being used to improve security. You can use typing data, for example, to predict the typer’s native language, so FinTech (financial technology) companies use information like this, on top of passwords. There is a lot of work on customer profiling – some of this is done to sell you more things – but some of it is done to help identify purchases or locations that are unlikely to be you.”

My chat with Dr Fitch shed light on some possible solutions to the challenges of Smart City development. However, there are still some major obstacles facing the progress of such initiatives. For example, the integration of all this information in a user-friendly and effective database poses a difficult problem. To use the example of parking apps across the UK, local authorities are currently confronted with geographical and political obstacles which prevent information being shared and collated between districts. To make such apps more useful and user-friendly, e-services will need to be simplified and rationalised across the public sector. The final, but perhaps most influential, factor in the development of Smart Cities is, of course, money. E-services are expensive to develop, and as with any research and innovation, not all projects have successful outcomes. As Dr Fitch aptly put it, in a sentiment that many research students will relate to: “In this environment it’s hard to drive innovation, as there also needs to be some systematic tolerance of failure (or of less success).”
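The anomaly detection Dr Fitch describes can be illustrated with a deliberately simple sketch: flag any observation that sits far outside a user's historical pattern. A real system would model far richer features; this is just a z-score test on made-up spending data:

```python
from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    """Flag a new observation that lies more than `threshold`
    standard deviations away from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu  # flat history: anything different is anomalous
    return abs(value - mu) / sigma > threshold

# Typical daily card spend in pounds; a sudden large purchase stands out.
spend = [12.5, 9.8, 14.2, 11.0, 13.6, 10.4, 12.1]
print(is_anomalous(spend, 11.9))   # False: within the usual range
print(is_anomalous(spend, 450.0))  # True: far outside it, so flagged
```

The same idea, applied to locations or typing patterns instead of amounts, is what lets a provider notice a purchase that is "unlikely to be you".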

...who really benefits from the development of Smart Cities?

The risks and obstacles facing the development of Smart Cities are numerous and complex, but overcoming them could take us closer to a more efficient, sustainable, and comfortable life for communities around the world.

Bonnie Nicholson is a second year cardiovascular sciences PhD student studying the effects of genetics on obesity.

Image courtesy of Pixabay



Life in space: an even bigger leap for humankind? Emma Dixon explores whether space colonisation could, and should, be achieved

Imagine it’s the year 3017. You’ve just teleported back from your university classes taught by the latest model of AI lecturers, and are now off to Mars for a short weekend break. Whether it’s in books, in films or on television shows, it is often presumed that humanity has taken its next big leap, and is no longer confined to living on Earth. Jumping back to 2017: whilst we might not be living in space yet, it seems we are reaching ever closer to that goal. In under half a century, we have progressed from landing men on the moon to propelling an unmanned space probe out of our solar system to explore the unknown. Unsurprisingly, space travel is an increasingly popular topic, with the likes of Mars One suggesting a series of televised one-way trips to the red planet in the hope of creating a colony by 2033. A little closer to home, we have successfully manipulated our environment to live in some of the most extreme areas of the planet – from science bases in the polar regions to man-made islands in Dubai. Indeed, it does seem that moving to space is humanity's next big step.

There are two main reasons for moving to space. The first, and perhaps more realistic, would be to set up extra-terrestrial outposts on other celestial bodies within the solar system, as bases for scientific study and further exploration into the universe.

It does seem that moving to space is humanity's next big step

Amongst the most prominent near-Earth candidates are our own moon and Mars, both of which have been well researched over the past few decades. A few moons of Jupiter and Saturn could also offer suitable bases for further exploration, even beyond our solar system. We have already seen the likes of the Mars rovers carry out scientific studies of Martian geology, and NASA aims to conduct round-trip missions to Mars from 2030 to further explore the planet’s history and whether it can feasibly sustain life.

The second, more challenging option would be complete colonisation of another planet. Given the rapid expansion of Earth’s human population, climate change, and threats such as nuclear war, Stephen Hawking has suggested that colonising other planets is the only way to ensure humanity’s existence. Not only would this require transporting people and equipment to sustain life, but it would necessitate self-sufficiency and the ability to reproduce and populate the planet.

There are obviously many challenges in transporting people off Earth, let alone in long-term survival on another planet. Once a suitable location has been selected and reached, astronauts would require habitation for their stay. In the instance of Mars, a short-term visit may utilise small lightweight habitats that could be transported alongside the astronauts themselves. For a longer-term mission, the high levels of radiation mean that humans would need to live underground or in very well shielded structures. These would likely be too big to be transported to Mars, so they may need to be created on site using a mix of Earth and Mars resources.

Another longer-term option for human habitation of Mars could be altering the Martian environment to be more Earth-like – a process termed ‘terraforming’. Two of the biggest obstacles of the Martian environment are its thin atmosphere (which would result in exposure to cosmic radiation) and its extremely cold surface temperature (around -63 °C, rather chilly even for brave Scots). To overcome these challenges, the planet would need to be warmed, melting the polar ice caps and releasing trapped CO2 into the atmosphere – this would both thicken the atmosphere and further warm the planet through a process akin to global warming.
Three methods have been suggested to start this warming and melting process: using giant reflective mirrors to concentrate the Sun’s radiation onto the Martian ice caps; setting up greenhouse gas factories on Mars, powered by solar energy; or using rocket engines to guide ammonia-rich asteroids to Mars to raise atmospheric ammonia levels. Even if we were able to create a thicker

atmosphere, it would still lack ozone, which on Earth blocks some of the Sun’s damaging radiation. To overcome this, oxygen-producing plants and bacteria could be sent to the red planet - a method being explored by NASA. An even bigger issue is Mars’ lack of a magnetosphere - a magnetic region surrounding a planet that affects how charged particles around it behave. Without one, Mars is unable to permanently hold an atmosphere, and there are currently no methods to overcome this challenge. However, physicist Michio Kaku has suggested that Mars may be able to hold a terraformed atmosphere for a couple of thousand years, giving Martian humans some time to come up with longer-term solutions.

Stephen Hawking has suggested that colonising other planets is the only way to ensure humanity’s existence

Beyond the logistical challenges of setting up a suitable environment for humans to survive, the physiology of the human body needs to be considered. Just in travelling to the red planet, astronauts would be exposed to more solar and cosmic radiation than any human has ever experienced - a 30-month round trip would rack up 1 sievert (Sv) of radiation. The health effects of such radiation are evident in NASA’s Apollo astronauts; research has suggested that the radiation exposure from their two-week trips has left the astronauts with a 4-5 times higher chance of cardiovascular disease. Other research in mice has suggested that radiation exposure is linked with neurodegenerative conditions such as Alzheimer's. There have been suggestions of genetically modifying humans to be better suited to spaceflight and extraterrestrial living. However, such modification will likely be met with the same technical uncertainty and ethical concerns raised by current genetic modification approaches being trialled in embryos. It is clear that any would-be Martians have a number of obstacles to tackle.


Illustration by Marie Warburton

Whilst the red planet may not seem so different from our own blue planet, its thin atmosphere, lack of magnetosphere and exposure to cosmic and solar radiation will pose serious challenges for its colonisation. And, though there may be solutions to some of these issues, the effects of cosmic radiation on the human body may be less easily overcome.

Besides the logistical challenges facing pioneers hoping to live on worlds beyond our own, pushing the boundaries of human habitation raises ethical and moral questions. Firstly, who ‘owns’ space? From a legal perspective, any colonies set up on extraterrestrial bodies could not be claimed by any Earth state, as outlined by the UN’s Outer Space Treaty. Humans certainly don’t have the best track record with forcibly claiming ownership of land on Earth, which raises the ethical and moral question of whether we ought to claim land on other planets even if we were to overcome the logistical and technological challenges of such a move. Secondly, deliberately colonising and/or terraforming another planet would have irreversible environmental implications for the planet. This may make studying any native microbial life more challenging, or may harm any such life on the planet. Furthermore, the Outer Space Treaty states that efforts should be made to minimise the risk of contamination of celestial bodies. The University of Edinburgh professor and researcher Charles Cockell has previously suggested that ‘planetary parks’ could be set up to preserve areas of planets such as Mars and protect any microbial life living there.

From transport to habitat, plans to see humans live on Mars will have some serious challenges to overcome

From an ethical perspective, it could be argued that it is irresponsible to permanently alter the environment of another planet. Given that a key reason for humans colonising extraterrestrial locations stems from how we have treated (or mistreated) our own planet, is it

right for us to definitively alter another? The concept of humans inhabiting other planets sometime in the future has become commonplace in popular media, from blockbusters such as ‘Interstellar’ to TV shows such as ‘Futurama’. From transport to habitat, plans to realise this fiction, either in the short or long term, will have some serious logistical and technological problems to overcome. Whilst we might be making giant leaps in space research and technology, space colonisation also raises several ethical and moral questions. Given the way in which we have treated our own planet, it could be argued that our time, efforts, and money may be better invested in combating climate change on Earth before we start seeking to change other planets, for better or worse.

Emma Dixon has recently completed an MSc in Science Communication and Public Engagement.



Physics: a game of two halves

Theoretician Craig Young momentarily puts his bias aside to discuss the sometimes conflicting worlds of theoretical and experimental physics

There are two types of physicist in this world. It may sound like the beginning of a bad joke, but it’s true - there are theoretical physicists and experimental physicists. While in the past there have been figures who blurred the line between the two disciplines, the ever-increasing specialization required to forge a career in physics has left very few of these renaissance men and women in the modern world. Thus, as with all dichotomies of this sort, the natural inclination is to pick a side, form stereotypes, and denigrate the proponents of the other side. So let’s get started.

Perhaps not in keeping with its content, the label “theoretical physics” is neither abstruse nor subtle. Theoretical physics, in short, is exactly what it sounds like, with theorists tasked with formulating new solutions to physical problems as well as developing existing theory. Traditionally this is more of a solitary pursuit, lending itself less to direct collaboration, though it is certainly influenced by the work of predecessors and contemporaries. In contrast, being an experimental physicist requires constant communication and cooperation and is very much a team sport. It is natural, then, that experimental physicists are less associated with the introverted, antisocial personas of their theorist counterparts.

Public perception does not always favour the experimentalists, though. To the general public, the image of a theorist reclined in a chair, fingers pensively rubbing their chin and waiting for a eureka moment, is far more romantic than that of worker bees in lab coats, calibrating apparatus and recording results. Of course, this isn’t necessarily an accurate depiction of either field, but recent scientific history has served to etch these images onto the minds of the population.

The 20th century was dominated by new ideas - Einstein’s theories of relativity and the advent of quantum mechanics. These theories are truly monumental in the physics world, and the effort to meld the two together cohesively is still the most fundamental and pressing goal of physics to this day. Not only this, but Einstein is a man who transcended the scientific world to become one of the most instantly recognisable figures in history. Experimental physics has never had an Einstein and so can never hold the same mystique. The big headlines nowadays are grabbed by “big physics” experiments such as the Large Hadron Collider, where the Higgs boson was discovered, and the LIGO experiment, which first detected gravitational waves. However, the fact that thousands of people are associated with each means that the praise often gets diluted. The media want a figurehead and so, instead of praising the collective efforts of the research teams at the LHC or LIGO, they usually portray the theorists as the heroes of the hour.

However, at the turn of that same century, many people regarded physics as solved. There was a growing feeling that we as a species had learned everything there was to know about the world around us. It took experimental results that couldn’t be explained by the theories of the day to show them they were wrong. And oh how wrong they

were! Who’s to say that Einstein would have thought of general relativity if it hadn’t been for the experiments showing that Newton’s long-standing theory of gravitation couldn’t quite explain the precession of Mercury’s orbit around the Sun? Why would he have bothered if everyone had accepted that we had solved the problem of gravity 250 years earlier? In this case, experimental results required new theories to explain them. Conversely, the complete opposite can occur, with new theories requiring experimental probing to confirm or reject their hypotheses. This was the case when Peter Higgs proposed the existence of a new particle in the 1960s. To probe for the existence of this particle, a new particle collider with far greater beam energies was required, and so the Large Hadron Collider was planned, commissioned and built. Thus, almost fifty years after Higgs’ original paper, the experimental physicists working at the LHC confirmed the existence of the Higgs boson.

One cannot exist without the other in any meaningful way

This is the very heart of the relationship between theoretical and experimental physics. One cannot exist without the other in any meaningful way. Imagine Peter Higgs had predicted his eponymous particle but no one had ever bothered to check that it existed; or imagine that, after experimentalists spotted that Newton’s laws of gravitation couldn’t explain Mercury’s movement around the Sun, nobody had sought to form an alternative theory that could. Experimental physics is a natural extension of theoretical physics, and vice versa. So the question of theoretical versus experimental physics is no question at all, really. We need both and, luckily for us all, we have both.

Craig Young is a 5th year MPhys student in theoretical physics.

Image courtesy of Pixabay



Is ageing a disease, and can we cure it?

Adelina Ivanova explores ageing as a disease and the strive to find a cure for it

Growing old seems inevitable. Regardless of age, humans always seem sensitive to the passage of time and its effect on our bodies, and are searching for ways to elude it or slow it down. Since many infectious diseases and life-threatening medical conditions (such as cholera and tuberculosis) are now curable, scientists and medics are turning to the next assailant of life expectancy: ageing.

In developmental biology, ageing is defined as “the time-related deterioration of the physiological functions necessary for survival and fertility”. Ageing happens to all individuals of a species, but the degeneration of biological systems may unfortunately also cause age-related diseases in some. It is now debated whether ageing itself can be regarded as a disease, and whether it is curable. Scientists still disagree on whether ageing should be added to the International Classification of Diseases manual, issued by the World Health Organization. Aubrey de Grey, chief science officer at the Strategies for Engineered Negligible Senescence (SENS) Research Foundation, claims that “ageing is bad for you, it’s a medical problem’’, and thus should be treated.

To the average individual, the idea that ageing is a disease rather than a natural process may seem unbelievable. However, scientists and organizations like Google’s California Life Company (Calico), established in 2013, support the idea that ageing can be controlled and even reversed, and are already working on ways to make this happen. A research team from the Salk Institute in California reversed some of the signs of ageing in mice in a progeria (accelerated ageing) model, making the animals look younger and live 30% longer, by stimulating the expression of four genes known as the Yamanaka factors.
These genes, active during developmental processes, have previously been shown to reprogramme adult cells into a more primitive state. However, the Salk Institute team was the first to demonstrate that stimulating the Yamanaka factors for a short period of time could reduce age-associated patterns without changing the cell’s identity. Stimulating these factors was also found to rejuvenate human skin cells in the lab, and

Illustration by Kat Cassidy

the team now aims to create a drug to mimic the effect of the genes and enter human trials 10 years from now.

Ageing is a very dynamic and plastic process

Dr Matt Kaeberlein and Dr Daniel Promislow from the University of Washington are already considering anti-ageing drug development and are conducting trials of the effect of rapamycin in old dogs. Rapamycin is a drug obtained from the soil bacterium Streptomyces hygroscopicus, and recent trials in mice demonstrated that the drug could extend a mouse’s life expectancy by approximately 25% and reduce the rate at which cancer and Alzheimer’s developed. Promisingly, the side effects observed in mice - such as cataracts, mouth sores and male infertility - are not present in the pilot study in dogs. These scientific discoveries give hope that ageing could be tackled. Dr Juan Carlos Izpisua Belmonte, a professor in Salk's Gene Expression Laboratory, comments, “obviously, mice are not humans and we know it will be much more complex to rejuvenate a person. But… ageing is a very dynamic

and plastic process, and therefore will be more amenable to therapeutic interventions than what we previously thought”. However, the scientific community remains sceptical. The mechanisms that lead to ageing seem so fundamental that altering them could hide dangers we cannot predict. Leonard Hayflick, professor of anatomy at the University of California, San Francisco, believes that ageing is a normal process and cannot be cured. Professor Hayflick was the first to discover that human cells have a limited number of divisions, demonstrating that they can only divide 40 to 60 times, even if their division is paused and then resumed. This discovery, known as the Hayflick Limit, supports his argument that even if we manage to slow down ageing, our life expectancy has a definitive cut-off point which cannot be surpassed.

Disease or not, ageing and its correlated conditions are the major factor restricting human life expectancy. Moreover, the question of whether we can reverse ageing is only the beginning of the story, acting as a catalyst for downstream problems and questions. What are the psychological consequences of living beyond 100 years? Can our planet handle a growing human population? Science still has many of us waiting for answers to calm these fears.

Adelina Ivanova is a second-year Chemistry student.


Assisted reproduction

Sadhbh Soper Ní Chafraidh explores recent advances in assisted reproduction and the ethical issues they raise

Major scientific breakthroughs are often met with alarm from the public, and this is particularly true in the field of reproductive technology. From fears that Dolly the sheep would pave the way for breeding human clone armies to panic that IVF ‘test tube’ babies would lack a soul, many of these concerns have proved unfounded. Advances in assisted reproduction have allowed many parents to conceive happy, healthy children. However, it is undeniable that these advances raise real ethical concerns for scientists and society.

James Watson, co-discoverer of the structure of DNA, predicted that “all hell will break loose, politically and morally, all over the world”

Assisted reproductive technologies (ARTs) range from very simple interventions to more advanced techniques. Many fertility problems may be addressed using in vitro fertilization (IVF). This involves treating the woman with hormones to increase egg production before harvesting mature eggs. These eggs are then incubated in a dish with sperm samples to allow fertilization to occur. In cases where the sperm is dysfunctional, it may be injected directly into the egg, in a procedure known as intracytoplasmic sperm injection (ICSI). Fertilized eggs are then allowed to develop for a few days before being placed in the mother's womb, where they continue to develop normally.

The first baby born by IVF was Louise Brown in 1978. As well as condemnation from the Catholic Church and much of the mainstream media, IVF even drew criticism from within the scientific community. James Watson, co-discoverer of the structure of DNA, predicted that “all hell will break loose, politically and morally, all over the world”. However, many were guided by a vision of the potential of IVF to alleviate suffering caused by infertility. Dr Lappe of the Institute of Society,


Image of ICSI sperm injection into oocyte, courtesy of RWJMS IVF Laboratory via Wikimedia commons.

Ethics and the Life Sciences, New York, shared this view. A few years before the conception of Louise Brown, he stated: “There is a deep and pervasively felt need for family…common to all peoples. I believe that human compassion dictates a response to individual couples who strongly sense that need, including the provision of in vitro fertilization.”

Public opinion of IVF has changed drastically over the past four decades, as it has become commonplace. IVF has been responsible for more than 5 million births worldwide, including over 250,000 in the UK. In 2010, Robert Edwards won the Nobel Prize in Physiology or Medicine for his work on developing the procedure.

Although IVF is widely accepted now, there are still many contentious issues surrounding its application. As IVF and other ARTs provide us with more control over human fertility, we are faced with difficult decisions over how to exercise that power. One thorny issue is how to ensure equality of access to IVF treatment. One cycle of IVF costs approximately £5,000, and multiple cycles are often required to achieve a pregnancy. This means many prospective parents cannot afford IVF. Some countries have public health schemes which cover a number of IVF cycles for eligible patients. Unfortunately, it is not currently possible to fund IVF for everyone who needs it. In these cases, science may provide the answer by developing cheaper methods for assisted reproduction. For example, a recent innovation, INVOcell, uses a clever method to reduce the need to

keep embryos in expensive incubators before implantation. Instead, eggs are placed along with sperm in a special polystyrene tube which is inserted into the vagina of the mother, where the pH and temperature conditions are ideal for early development. After 3-5 days the device is retrieved and the embryo implanted into the mother’s uterus.

Another long-standing fear surrounding IVF has been the creation of ‘designer babies’. Today, embryos can be screened for genetic diseases, allowing healthy embryos to be selected. This practice aims to reduce human suffering, but strays uncomfortably close to the territory of eugenics. Its advocates argue that it is more compassionate to spare potential children from being born only to suffer devastating diseases. Voluntary screening is already being practised in the UK for specific serious conditions, such as Huntington’s disease. Decisions to include certain conditions are particularly controversial. For example, embryos with BRCA1 and BRCA2 mutations, which are associated with an increased risk of breast and ovarian cancers, can be selected against. This is despite the fact that children born with these mutations might never develop these types of cancer. Even though regulations will likely continue to prohibit the use of embryo screening to select for desired traits or sex, it remains far from clear where to draw the line. Advances in gene editing technology in the near future might also allow manipulation of the genes of embryos before

implantation. The Nuffield Council on Bioethics, an independent advisory body, has recently opened an online public survey which aims to collect opinions on what kinds of gene editing interventions might be considered acceptable. Some of the dilemmas they raise sound like they are taken straight from science fiction, but it is important to engage the wider public in this discussion at an early stage, as these therapies have the potential to reshape our society forever.

ARTs also open the door to surrogacy, an area fraught with a host of ethical and legal concerns. Fertilizing eggs in vitro allows them to be placed in the womb of a surrogate rather than the genetic mother. Surrogate mothers can help some parents have their own genetic child and may gain satisfaction from helping others in this way. However, there is a risk that children produced by surrogacy may suffer as a result of legal disputes, as well as possible psychological impacts. Women who act as surrogates are at risk of exploitation, emotional or financial coercion, and infringements on their personal and medical autonomy. There is also a concern that surrogacy encourages society to view children as commodities and women’s bodies as mere ‘vessels’.

Due to these concerns, many countries attempt to regulate surrogacy to protect those involved. Regulation can be very difficult due to the complex issues involved, and some countries, including France, opt to enforce a complete ban on surrogacy arrangements. In the UK, the surrogate is regarded as the legal mother of the child at birth. It is recognized that it may be impossible for a woman to give fully informed consent prior to entering a surrogacy agreement, as her attitude towards the child may change during pregnancy and birth. Despite this, it may not be in the best interests of the child to prevent it from being raised by genetic parents who can provide a nurturing home.
Additionally, unenforceable arrangements risk the commissioning parents backing out of the arrangement, leaving a surrogate with full responsibility for a child she did not want. Commercial surrogacy is prohibited in the UK. This ban aims to prevent the commodification of children and to prevent surrogates from being financially pressured into arrangements. However, these regulations may encourage prospective parents to enter into international arrangements, which pose additional difficulties. There are currently no international agreements which govern surrogacy, and differences in national

legislation may lead to confusion over the legal parents and citizenship of the child. There is also a greater risk of women being exploited, as policies designed to protect surrogates in the UK are absent in other jurisdictions. International surrogacy was recently banned in India due to concerns about the high numbers of foreign parents commissioning surrogates there.

This process is likely to raise fundamental questions about what kind of society we want to live in

More recent advances in reproductive technology offer even greater opportunities as well as potential problems. For example, ‘in vitro gametogenesis’ (IVG) could be the next big thing in assisted reproduction. Although it has not yet been successfully applied in humans, the principle involves reprogramming adult cells, such as skin cells, to become egg or sperm cells. So far, scientists have achieved this in mice and, if not thwarted by technical or regulatory obstacles, IVG could have huge consequences for fertility treatment in humans. Most notably, it could be used to help patients who are unable to produce eggs or sperm, as well as enabling same-sex couples to have children that are genetically related to both parents, instead of having to rely on donor cells.

Currently, it seems that initial fears of ARTs were overblown, since these technologies have brought much joy into the world. Nevertheless, there is still the potential for them to take a more dystopian turn. Some of the issues raised here may be managed through careful regulation, but these technologies remain open to abuse. However, this is not an argument against scientific progress. Research should not be guided by fear, as the potential benefits or harms of any technology are largely dependent on the context in which it is applied. It is important to try to foresee any potential uses or misuses of these developing technologies and prepare in advance in order to address any problems that may arise, as is now being done for gene editing technology. This process is likely to raise fundamental questions about what kind of society we want to live in.

Sadhbh is a second-year PhD student in cell biology. She has experience of being in a fertility clinic on work placement.

Illustration by Stephanie Wilson



The pill with your name on it

Chiara Herzog discusses the potential of personalized medicine for the masses

We have entered the post-genomic era. The sequencing and publishing of the human genome (our genetic make-up in the form of DNA) has opened up a whole new world of opportunities for the doctors of the future, as access to a person’s DNA allows the genetic origins of diseases to be studied. The cost of whole-genome sequencing has dropped significantly in the past 15 years and, at a price point of under $1000, is currently more affordable than a new MacBook Pro. It is therefore entirely possible that the procedure will enter day-to-day practice in the near future. On the more commercial side of things, private companies such as 23andme.com have popped up, willing to scan your genetic material for potential diseases, Viking ancestors or relation to a celebrity of your choice.

Novel methodologies, including genomics and other techniques that have emerged alongside it, are the cornerstones of personalized or ‘precision’ medicine: they allow us to specifically target the underlying molecular and

Image courtesy of Pixabay


cellular mechanisms rather than using a broader, non-selective approach. For instance, genomics can be used to study a cell's genetic sequence for risk factor genes for neurodegenerative diseases or cancer, such as the BRCA2 gene (short for ‘breast cancer 2’). But sometimes there is no actual mutation in the DNA - there is just a change in the number of RNA transcript molecules (and perhaps of the proteins that are translated from those RNA sequences), which can occasionally underlie cancer.

Disease prevention is a field in which the advances in genome sequencing and the development of personalized medicine will make a striking impact. A steadily increasing number of diseases are associated with certain genetic risk factors or, in some cases, even directly caused by mutations. Examples of diseases associated with or caused by mutations are breast cancer, colon cancer, and Huntington’s disease. Knowing your genetic risk status can help to prevent diseases by taking precautionary measures (e.g. having a

preventive breast mastectomy), but it might also provide a bleaker outlook for Huntington’s disease, where no useful treatments currently exist.

While the topic of tailor-made medicine is flourishing now more than ever, the idea behind it is not new. Hippocrates, born around 460 BC, hypothesized that “it is more important to know what sort of person has a disease than to know what sort of disease a person has”. And indeed, we are now realising that there are huge variations in drug efficacy between individuals and population groups. For example, it has been discovered that there is significant individual variation in liver enzyme efficiency. Liver enzymes are responsible for breaking down and removing drugs from our system. Individual differences in these enzymes’ efficiencies mean that, in some patients, drugs might be degraded too fast to actually work, while in other people they might not be degraded efficiently enough, leading to potentially nauseating adverse effects. This may be why that miracle

painkiller everyone else swears by never actually works for you (looking at you, paracetamol). The field of pharmacogenomics studies these individual differences, which can be detected in the DNA sequence, and will be useful in the future to identify the necessary dosage and drugs to administer at a patient-specific level.

The cost of whole-genome sequencing has dropped significantly in the past 15 years and, at a price point of under $1000, is currently more affordable than a new MacBook Pro

Personalized medicine will furthermore offer new therapeutic avenues in drug development. A fascinating example of this is cancer immunotherapy. Current cancer therapies largely rely on antiproliferative drugs like 5-fluorouracil (5-FU), which target rapidly dividing cells such as cancer cells. However, many of our own healthy cells divide rapidly as well, such as those in our hair follicles and in the epithelial lining of our colon. This is why chemotherapy is associated with such severe adverse effects, robbing patients of their appetite and hair. But the body’s own immune system is actually an efficient way to get rid of cancer cells, and it doesn’t necessarily require very much therapeutic involvement, although sneaky cancer cells sometimes find ways to evade the immune system. To aid immune cells in the disposal of cancer, the ‘surfaceome’ can be used: each type of cell has a distinct coating on its cell membrane. The immune system can be triggered against the coating of specific cancer cells, setting immune cells to attack the cancer. This is called immunotherapy and is an effective and comparatively clean way of treating cancer, with far fewer off-target side effects. The problem that cancer drug development has faced since the dawn of modern medicine is that no two cancers are identical - making targeting difficult. However, with the new methodological toolbox that genome sequencing and subsequent personalized medicine offer, it is conceivable that individual cancers - or at least certain subgroups of cancer - can be characterized by their surface characteristics, transcripts, or genetic mutations.

The uber-futuristic preview of precision medicine is a daily pill with your name on it: tailored to eliminate your specific risk of Alzheimer’s, treat your gout and help with any digestive issues you might have, all at the same time and while taking the enzymatic activity of your liver and kidneys into account. A more realistic view is that, if you do end up developing a disease, a tailored treatment can be provided based on the molecular study of said disease. Although it is not yet financially feasible to provide individual personalised treatment, subgroups of diseases are now starting to be segregated based on certain biomarkers and molecular characteristics. It is already possible to receive personalised treatment for some types of cancer, such as the immunotherapy based on surface characteristics described above.

Personalized medicine - in whichever form - is undoubtedly one of the most promising candidates to revolutionize modern medicine. Its benefits are widespread across disease prevention and intervention, drug development, and pharmacogenomics. Clinical trials could be run at smaller sizes for drugs that are only suitable for a certain population, decreasing the overall cost of drug development and allowing drugs that would fail in larger studies to be available to the people who need them (and who benefit from them).

However, as with all scientific advances, we need to start thinking about the ethical and practical implications: who will have access to personalised medicine? While it is true that the cost of much of the required methodology has declined dramatically as the technologies are being commercialized, it is still not cheap or available on a large scale for public health providers. This raises the question of whether the advance of personalised medicine could further deepen the divide in a society already split by money.
In 2011, nearly a decade after the human genome was first published, the British public were still missing out on the advances personalized medicine offered because the NHS was “completely unprepared” for it, as the UK government’s chief genetics advisor put it in an interview. In response, David Cameron reportedly took a personal interest in putting personalized medicine and genetics at the centre of NHS treatment and diagnosis of cancer and rare diseases. A few years down the line, personalized medicine is still not part of the daily practice of the UK’s general health provider, although a ray of hope

might be on the horizon. Initiatives such as the 100,000 Genomes Project provide a first step towards establishing personalized medicine within the NHS. The genomes of people with rare diseases, as well as cancer patients and their families, are to be sequenced to provide a better understanding of the molecular causes, risk factors, and heritability of these diseases. Furthermore, in 2013, the green light was given for a £20 million personalized medicine research centre in Scotland (Stratified Medicine Scotland at the Queen Elizabeth University Hospital in Glasgow). The centre is ultimately aimed at implementing precision medicine in the clinic, as well as providing new developments and services for a global market. NHS England is also beginning a discussion on defining personalized medicine, what exactly it will encompass, and identifying partners to embrace

new approaches, “while ensuring that ethical, equality and economic implications are fully addressed”. During the 20th century, medicine made more progress than ever before. The discovery of ways to fight microbes and infectious diseases with vaccines and antibiotics, improvements in medical imaging, stem cell technology, and advances in surgery and prostheses have ushered in the advent of modern medicine: people in the western world now rarely die from infections, but rather from cancer or old age (and often, the two go hand in hand). It is to be expected that the exponential rate of increase in medical knowledge will continue, so perhaps personalized medicine will be the defining factor of the coming decades. We can only wait and see what the future brings. The pill with your name on it might still be a few years off - but who knows? Chiara Herzog is a molecular medicine alumna and 2nd year regenerative neuroscience PhD student.

Autumn 2017 | 23

focus

Terrors of biology

Alina Gukova reflects on the harmful side of the biological sciences and their use in biowarfare and bioterrorism

Nature has given us numerous gifts we use for survival, but also many threats we need to protect ourselves from. With a bit of creativity, we can turn those threats into weapons to use against each other. When we think about warfare, threats from the fields of physics and chemistry, such as nuclear energy and chemical weapons, are among the first to come to mind. However, biology is no exception. Toxins and biological agents, such as particular fungi, bacteria, and viruses, are classed as potential weapons of mass destruction, capable of causing death, famine, and widespread panic. A biological assault could be difficult to detect: invisible to the naked eye, odourless, and with a potentially long incubation period, bioweapons pose a grave danger.


Biological warfare and bioterrorism are very old concepts - so old, in fact, that one of the earliest records of biological warfare appears in the Old Testament, wherein the Biblical god is the source of the destructive force. The Plagues of Egypt depict a range of devastating biowarfare approaches: spoilt water, an epidemic amongst the livestock, attacks by pests, and a mysterious microbe with a targeted effect. Regardless of whether these events happened or not, such examples show that humans have been familiar with the idea of using biological species and their toxins as a means of warfare since ancient times. There are numerous similarly dire examples throughout human history. A well-documented case from the 14th century, the siege of Caffa in the Crimea, describes how the desperate attackers fell ill with plague and decided to dispose of the bodies by catapulting them into the besieged city. The attack


had unfortunate consequences: those who managed to escape Caffa and travel to Europe carried the plague with them, contributing to the Black Death pandemic. This example demonstrates some of the main dangers of biological warfare: poor targeting, poor containment, and lack of control over the weapon. In the 19th century, improvements in microscopy and the development of modern microbiology boosted the arsenal of biological munitions. Robert Koch, a German pioneer of microbiology, made great contributions to our knowledge of cholera and anthrax, which later became popular biological ‘missiles’. Evidence of governments using anthrax in biological warfare dates back to World War I. However, the destructive magnitude of biological and chemical warfare was quickly recognised by the global community, resulting in the creation of the Geneva Protocol in 1925, which prohibited the use of gases and bacteriological warfare. Unfortunately, due to a lack of clarity in the document, its ratification had little power, and governments, wiser from the painful war experience, shifted the development of offensive biological weapons to a larger scale. Governmental facilities, such as the infamous Japanese Unit 731 and the US Camp Detrick, deployed their deadly programmes widely in World War II. Microorganisms causing anthrax, plague, and cholera, to name a few, were researched and, in some cases, used during this time. Research into biological and chemical warfare continued in the post-war period, but a gradual shift away from offensive usage to defensive purposes began, culminating in the ratification of the Biological Weapons Convention in 1972. It stipulated that the 103 signatories should dispose of their production equipment and weaponry stockpiles, with only research for defensive measures being allowed.
However, as with the Geneva Protocol, the new convention was vague and reliant on the signatories’ good will - not always firm ground in the world of politics. Bioweapons were used by governments on a large scale in times of war, when the goal was to kill or neutralise the opposing side, but only a small amount

of pathogen is sufficient for blackmail or to spread panic. This approach was utilised by various terrorist organisations and destructive cults in the second half of the 20th century. Only a few affected individuals are enough to cause nation-wide panic, as the case of the ‘anthrax letters’ demonstrated in the aftermath of 9/11. These letters, delivered to several high-profile recipients in media and politics, contained a gram of powdered anthrax spores that could be inhaled. Five deaths caused by the attack were enough for people across the country to fear opening their mail and for postal workers to fear carrying out their job. Alarmingly, the main suspect behind the attack turned out to be a senior US biodefence microbiologist, whose actions were attributed to mental illness.


Since the goal of such attacks is to cause disarray, the threat of a bioterrorist attack is often as effective as an attack itself. The Red Army Faction, a terrorist group, was suspected of planning an attack after flasks containing botulinum toxin were uncovered in their possession. The information was published in a large national newspaper and read by thousands of people. Even though the allegations were never confirmed, the suspicion alone was enough to cause fear and discomfort amongst the population. One of the rare successful attacks took place in the US, where a nefarious cult called the Rajneeshees attempted to manipulate local elections. They reasoned that the most effective way to increase their candidate’s chances of winning would be to decrease voter turnout by making people ill. Since the goal was not to kill but to temporarily incapacitate, Salmonella enterica Typhimurium was spread in local restaurants, resulting in over 700


Image from Baseline HIV Awareness

people falling ill due to food poisoning. An environmental perspective on bioterrorism is exemplified by an incident known as ‘Operation Dark Harvest’, concerning the Scottish island of Gruinard, which was heavily contaminated with anthrax during World War II as a result of military trials. A group of activists began sending soil from the island to various governmental offices in the UK, demanding immediate decontamination, which eventually took place in response to the threats. For now, the island is deemed safe, but only time will tell whether the resilient anthrax will return. Today, we tend to welcome advances in biology, but they may not only save lives and move humanity forward. Synthetic biology could become a tool to resurrect poliovirus or influenza strains by the costly, but not impossible, means of whole-genome synthesis. Another technique, genome editing, was listed in a recent Worldwide Threat Assessment by the US Intelligence Community as a tool for creating “potentially harmful biological agents or products”. Modern genome editing techniques are low in cost and much more precise than pre-existing approaches. Given that countries

with different regulatory and ethical norms have access to these methods, their application may result in misuse, deliberate or unintentional. Some experts claim that, unless placed in skilled hands (with a knowledgeable brain behind them), these methods are as non-threatening as any other molecular biology technique. However, it is worth remembering that the ‘anthrax letters’ were most likely organised by none other than a professional scientist. There is a special term for findings that could be used for both good and evil: ‘dual-use science and technology’. In his 2013 speech, Mark Walport, the UK Government Chief Scientific Adviser, pointed out that any discovery can be turned against humanity, even the elucidation of neurotransmitter activity. He gave the example of a study in which contamination of a milk-supply chain with botulinum toxin was used as a model to assess the risk of a bioterrorism attack. Such open-access research could be used both by attackers to plot an assault and by governments to prevent one. Nevertheless, research should not be confined by such boundaries, and work in this area should not be banned outright. In a 2015 joint statement by the major funding bodies,

Wellcome Trust, MRC, and BBSRC, on managing the risks of research misuse, it was pointed out that work in dual-use areas should be encouraged because it “will be absolutely crucial in the fight to combat the diseases that these agents cause and to improve our ability to respond to bioterrorist attacks and other potential threats”. As a balancing measure, sections describing research materials and methods can be omitted from the published version, or made available only on request, to prevent methods relevant to creating bioweapons from falling into the wrong hands. Taken together, the evidence shows that biology is yet another tool that people can use to fight each other. In today’s world, the role of science and scientists should not be diminished, as they bring changes that can be both constructive and destructive. In this light, what should be absolutely ingrained in the mentality of researchers is that knowledge is both a power and a responsibility, and it should always be used for the benefit of humankind as a whole. Alina Gukova is a 2nd year biochemistry and structural biology PhD student at the University of Edinburgh.


Antibiotic resistance: a ticking time bomb

Carlos Martínez-Pérez explores the issue of antibiotic overuse and resistance

In 1928, Scottish biologist Alexander Fleming discovered penicillin in one of the most important scientific discoveries to date; a historical moment, both genius and serendipitous, that transformed medical science. Fleming’s discovery triggered the “antibiotic revolution”, a period of a few decades spanning the mid-20th century which saw the identification of many more antimicrobials and changed medicine forever. Together with the development of vaccination, antibiotics have helped to eradicate diseases and turn infections which a few decades earlier would have meant a death sentence into manageable conditions; the World Health Organisation (WHO) has estimated that, on average, antibiotics add 20 years to each person’s life. By the end of the 1960s, US Surgeon General William H. Stewart reportedly went as far as declaring that “the war against infectious diseases had been won”. Although his enthusiasm may have felt justified then, his statement has since proven wrong, and today we continue to rely heavily on antibiotics. Moreover, the application of antibiotics has not been without its limitations. The explosion of antibiotic discoveries in the mid-20th century was followed by a slump that some have called the “discovery void”: for almost three decades, no new classes of antibiotics were discovered, and it was only in 2015 that a new antibiotic, teixobactin, was identified. This lack of new discovery is a major issue, because a number of factors are also rendering the drugs we already have ineffective as bacteria develop resistance to them. Experts have warned that this phenomenon is on the rise, with new resistance mechanisms emerging and spreading globally.
This presents a serious threat to our ability to treat common infectious diseases, including pneumonia, tuberculosis and sexually transmitted infections. For instance, multidrug-resistant strains of some bacteria, such as Escherichia coli, have spread internationally, and in recent years UK hospitals have seen a marked increase in the number of patients with blood infections caused by these microorganisms. Recent figures show that up to 2,500 patients a year in the UK die of sepsis caused by certain types of drug-resistant bacteria, which have mortality rates of 30%, double those of non-resistant strains. Globally, more than 700,000 people are estimated to die every year because of drug-resistant bacteria.


Health economics specialists have estimated that antibiotics becoming largely ineffective could lead to an economic burden of up to £10 billion per year in societal costs worldwide. However, other experts have suggested that these figures are a gross underestimation because we have become reliant on antibiotics to prevent infections in patients receiving other treatments that make them more vulnerable to infections. As one of the pillars of modern medicine, the loss of antibiotics would have even bigger implications, toppling many other clinical tools such as cancer treatments or surgery. For example, a recent study suggested that infection rates after hip replacement could increase from 1% to up to 50% of cases, and a third of people with infections could die. While microbes can naturally develop resistance as an adaptive mechanism, the global misuse of antibiotics in animals and humans is dangerously accelerating this process. When antibiotics were first introduced, their revolutionary effectiveness also led to a change in how many physicians perceived the causes of a disease (or aetiology) and how it could best be treated. A good example is that of upper respiratory tract infections, medically referred to by the acronym URTIs. These are a group of illnesses affecting

the nose, sinuses, pharynx or larynx, and include very common conditions such as sore throat, tonsillitis, laryngitis, the common cold and the flu. Every year, 340 million people attend GP surgeries worldwide; URTIs are the most common reason for these visits, and cases have increased by 40% in the last decade. The problem is that the symptoms of URTIs are very similar to those of pneumonia, asthma or allergies, which can lead to frequent misdiagnoses. Even when patients do have an URTI, it can be caused by bacteria, fungi or, in most cases, a virus. An accurate and thorough diagnosis of a bacterial infection would require lengthy laboratory tests which, given the millions of patients visiting their GPs with these kinds of conditions, are often not feasible or affordable. This means that in the majority of cases, patients receive an intuitive diagnosis based on a physical examination of their symptoms. Depending on the severity of the symptoms, the patient’s age, and the presence of other illnesses that put them at higher risk, most patients will be prescribed antibiotics. In fact, some studies have estimated that only about 10% of chest infections are caused by bacteria, yet almost 50% of patients with this condition receive antibiotic treatment. This means that many millions of patients worldwide with viral infections receive antibiotics every year, even though the drugs are only effective against bacteria. When this happens, the drug will not help them get better but will still attack other, harmless bacteria in their body. This exposure to antibiotics means that more sensitive bacteria die while those that are resistant are naturally selected by “survival of the fittest”, prevailing and passing their resistance on to succeeding generations. Benign organisms can also share these resistance genes with harmful bacteria through horizontal gene transfer, thus promoting dangerous antibiotic resistance.
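The selection dynamic described above is easy to see in a toy model (an illustration of the principle only, with made-up numbers; the 99% kill rate and starting fraction are assumptions, not epidemiological data):

```python
# Toy model of selection under antibiotic exposure: each course kills
# most sensitive bacteria but none of the resistant ones, so the
# resistant fraction of the population climbs round after round.

def after_exposure(frac_resistant: float, kill_rate: float = 0.99) -> float:
    """Resistant fraction of survivors after one course of antibiotics."""
    sensitive = (1.0 - frac_resistant) * (1.0 - kill_rate)  # few survive
    resistant = frac_resistant                              # unaffected
    return resistant / (sensitive + resistant)

frac = 1e-6  # start: one resistant cell per million
for course in range(1, 6):
    frac = after_exposure(frac)
    print(f"after course {course}: {frac:.4%} resistant")
```

Even starting from one resistant cell in a million, a handful of exposures is enough for the resistant lineage to dominate, which is exactly why unnecessary prescriptions accelerate resistance.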
Despite the continued advancement of medicine since the antibiotic revolution, the overuse of antimicrobials has persisted over the years. The reasons for this primarily derive from a lack of awareness among the general public

and indeed some clinicians, and the fact that accurate diagnosis of the cause of common infections is simply not feasible. In recent years, experts have finally realised that antibiotic resistance is not only a worrying possibility, but a real phenomenon developing right now. In fact, antibiotic resistance has become one of the biggest threats to global health, food security and development. Some experts have talked of an “unfolding catastrophe” and an “apocalyptic threat” similar to that of climate change, and have said that the full economic burden in this scenario is “not only inestimable, but unimaginable”. Recent studies have shown that interventions to encourage a more rational use of antibiotics are both cost-effective and necessary to improve healthcare practice, and many Western countries have started developing national initiatives to address the issue. In the US, a National Action Plan for Combating Antibiotic-Resistant Bacteria has been developed. Here in the UK, mass media campaigns have helped to increase awareness of the issue by encouraging sustainable use of antibiotics. 2009 saw the establishment of the Transatlantic Taskforce on Antimicrobial Resistance (TATFAR), a joint effort by the US and the EU. While these are positive steps forward, bigger changes are needed to fight the problem of antibiotic overuse and its potentially dangerous consequences.


In 2013, the UK’s chief medical officer published a report highlighting the mounting threat of antimicrobial resistance. This led the Department of Health to propose a new UK five-year Antimicrobial Resistance Strategy and Action Plan. The strategy set out seven areas of focus to move towards better infection prevention, antimicrobial stewardship (a term for the promotion of more rational use of antibiotics) and, importantly, extensive research programmes to find new diagnostic tools and drugs. The plan was an important step in recognising the issue

Illustration by Alice McCall

and brought the UK to the forefront of what needs to become a global effort. Despite the looming threat, coordinated international action, particularly at the political level, was largely insufficient until recently. The WHO raised awareness of the urgency of the issue with the publication of its first global report on antimicrobial resistance in 2014. In September 2016, leaders gathered at the 71st UN General Assembly in New York to agree on a shared, sustainable strategy. This marked only the fourth time that a meeting of such calibre had been called regarding a health issue, after the HIV/AIDS crisis, non-communicable diseases and the recent Ebola outbreak. The meeting reaffirmed the global commitment to an action plan drawn up in 2015 with five goals: improve awareness and understanding of antimicrobial resistance; strengthen knowledge of antimicrobial use in humans and animals through surveillance and research; reduce the incidence of infection; optimise the use of antibiotics; and develop the economic case for sustainable and increased investment. Another interesting approach to this fight is the introduction of national and international initiatives promoting research. Various global efforts aim to support novel antibiotic research and screening, while other initiatives focus on the development of more effective diagnostic methods. These include the NIH’s Antimicrobial Resistance Diagnostic Challenge and the Longitude Prize, endowed with $20 million and £10 million prizes, respectively. Both competitions encourage the development of innovative, rapid point-of-care diagnostic tests which could make it easier for physicians to diagnose common infections such as URTIs accurately and cost-effectively, promoting better use of antibiotics and preventing the spread of drug-resistant bacteria. Despite the worrying prospect, the fact that experts have realised the urgency of this issue and begun to action a global plan is promising, since sustainable, organised strategies can make a real difference. For instance, recent efforts to manage the spread of specific antibiotic-resistant bacteria, such as methicillin-resistant Staphylococcus aureus (MRSA) and Clostridium difficile, have proven quite effective, with infection rates for these bacteria in England and Wales falling by 85% and 60%, respectively, between 2003 and 2011. As a global threat, this issue is now in the hands of clinicians, researchers and policy-makers, but also the general public. Much as with climate change, we now need to be aware of the risks and stick to the strategies in place, which may just prevent a world without effective antibiotics. Carlos Martínez-Pérez recently completed a PhD in cancer research.



Problems with protein: sustainable meat in an uncertain world

Alice Stevenson investigates the surprising complexities of a sustainable diet

Vegetarianism and reduced-meat diets, once considered the preserve of soybean-eating hippies, have spread into the mainstream. Not only does a low-meat diet have a reduced environmental footprint, but the health benefits of limiting red meat include a reduced risk of cardiovascular disease, cancers, and obesity. The rise of so-called ‘sustainable eating’ is picking up speed across the west, reflecting a change in consumer attitudes towards environmentally friendly products. Livestock take up more than double the land area of crops, and they also consume a third of the crop yield as feed. Animal products account for around 32% of global anthropogenic greenhouse gas emissions – and let’s not even get started on the indirect effects from transport, processing and agrochemical use. The main emissions are methane and nitrous oxide, both of which have global warming potentials over twenty times that of CO2. Aside from the land grabbing, gas emitting, and crop guzzling, livestock also soak up water like nobody’s business. The latest figures from the US Geological Survey show that,

Illustration by Eleonore Dambre


in 2010, the US alone used two billion gallons of water per day for livestock. With these statistics, it’s no wonder a juicy steak seems less appetizing. Yet, as the global population and urbanisation increase, so too does the demand for meat and high quality protein.

Mark Post, professor of vascular physiology at Maastricht University in the Netherlands, was one of the first to pioneer the production of cultured meat. His dream of a lab-grown burger was realised in 2013. It apparently tasted ‘almost’ like a normal burger, and took three months and more than $330,000 to produce – pretty pricey for a meal for one. The meat is produced by culturing adult-derived stem cells called myosatellite cells, usually obtained from an animal biopsy, and then inducing

muscle and tissue cell formation. However, a problem with this technique is that myosatellite cells can only divide a few dozen times, eventually wearing out as their telomeres (the protective caps on the ends of chromosomes) shorten. This could be remedied with the insertion of a tumour growth gene to encourage proliferation; however, the meat would then be classified as a genetically modified product, a label already shunned by consumers. In the production of the lab-grown burger, the muscle microfibres are then fused together and anchored to create a natural tension (without this, the synthetic muscle is weak and has too smooth a texture). By exercising the muscle with electric shocks, the fibres can grow from 100 to 800 milligrams in a matter of weeks. If that description of twitching, tumour-induced muscle fibres doesn’t make your mouth water, I don’t know what will. And hence the obvious obstacle to cultured meat: consumer perception. Whilst the hipster youth might be changing their diets, the rest of the world isn’t so likely to catch up. The cultural and personal barriers to limiting meat

consumption are strong, particularly since eating ‘real’ meat is perceived as a sign of wealth as urbanisation continues to rise. Then there’s the ‘yuck’ factor, an aversion to ‘Frankenstein food’ that is a major impediment to the widespread consumption of lab-grown meat. And that’s all before taste, looks, and nutritional value are taken into account. According to the United Nations, global average meat consumption was estimated at 43 kilograms per capita in 2016. At that rate, even with no increase in per-capita consumption, around 420 million tonnes of meat would be required to feed the population in 2030. It was with this increasing demand in mind that the new technology of in vitro cultured ‘lab-meat’ was conceived back in the early 2000s. A recent environmental impact study conducted at the University of Edinburgh by Dr Peter Alexander compared several of the top meat-free solutions capable of providing high-quality protein. A question that is becoming increasingly relevant as we delve into the age of climate change and resource strain is: which approach is the most sustainable and, importantly, viable?
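The projection is simple arithmetic: total demand is per-capita consumption times population. A minimal sketch, reading the projected figure as roughly 420 million tonnes (the 9.8-billion population below is back-calculated from those quoted numbers, not a UN projection):

```python
# Total meat demand scales as per-capita consumption x population.
def meat_demand_million_tonnes(kg_per_capita: float, population: float) -> float:
    return kg_per_capita * population / 1000 / 1e6   # kg -> million tonnes

# ~420 million tonnes corresponds to about 9.8 billion people at 43 kg a head.
print(meat_demand_million_tonnes(43, 9.8e9))   # ~421
```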

The hot contenders in the world of protein supply and demand are cultured meat, imitation alternatives (the likes of Quorn and tofu), and edible insects. Dr Alexander’s study, conducted this year, is the first to provide a direct comparison, and it focuses on the contentious issue of agricultural land use. The human diet varies throughout the world, and protein is obtained from several sources: animal products like milk and cheese, vegetables including nuts and legumes, and meat products like poultry and beef. Assuming that, based on the average global diet, the world’s population replaces 50% of its current protein consumption, the study suggests a combined approach of insect and imitation meat. Cultured meat, though dramatically more efficient than beef, falls short of any real benefit against other conventional animal products such as milk and chicken. The problem with cultured meat is that the technology is young and small-scale. The medium used to develop the muscle

cells is bovine-based, somewhat defeating its purpose, and culture media based on non-animal organisms, such as blue-green algae, are not available on a large scale. Even if the technology continues to advance, solutions to these problems might never be found: progress would depend on two novel developments to optimise the process, changing the culture medium and producing it at scale. For sustainability, an even greater problem with cultured meat is energy input. Whilst cultured meat may in theory reduce land use by doing away with most livestock, in practice this is offset by sterilisation and raw-materials processing. The result is a product with an energy input of up to 25 gigajoules per tonne, compared to a maximum of 4.5 gigajoules per tonne for conventional animal products. To make cultured meat a viable environmental and economic reality, a low-carbon, low-cost energy source is needed. Insect consumption, however, tells a different tale. Insects have been a protein source in human diets since our evolution, though they are perhaps unpalatable to some. Insectivorous diets remain present in some 119 countries, but ironically not in the countries with the highest protein consumption per capita, where the change would have the greatest impact. A study by Rumpold and Schlüter in 2013 revealed that over 2,000 edible species of insect are eaten worldwide, but only five in Europe. It may seem counterintuitive to advocate the protein value of a cricket over a cow, but insects have countless benefits. They can provide all the essential amino acids necessary for humans, and some even contain micronutrients such as iron, phosphorus, and folic acid. They are also highly efficient: 80-100% of an insect can be consumed by weight, as opposed to a mere 40% of a cow. Not only that, but insects have rapid growth rates and are poikilothermic, meaning that they don’t expend energy on heating or cooling themselves.
Additionally, insects can exploit a wide variety of feeds, including animal by-products and organic waste. If 50% of protein consumption were replaced with insects rather than beef, land appropriated for food could be reduced by 80%. The future of sustainable food could look less like a burger and more like a cricket. However, the consumption of insects might be a sticking point for the ethical vegetarian who would rather befriend the spider than eat it. Future animal rights debates could be taking a new direction

into the moral code of the insect world.
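Taking the figures quoted above at face value, the efficiency gaps are straightforward arithmetic (a sketch only; the 90% edible fraction is my assumed midpoint of the quoted 80-100% range):

```python
# Comparisons built from the numbers quoted in the text.

# Energy input per tonne of product:
cultured_gj_per_tonne = 25.0      # up to 25 GJ/t for cultured meat
conventional_gj_per_tonne = 4.5   # at most 4.5 GJ/t for animal products
energy_ratio = cultured_gj_per_tonne / conventional_gj_per_tonne
print(f"cultured meat: up to {energy_ratio:.1f}x the energy input")

# Edible fraction by weight: 80-100% of an insect vs ~40% of a cow.
insect_edible, cattle_edible = 0.9, 0.4
yield_ratio = insect_edible / cattle_edible
print(f"insects: about {yield_ratio:.2f}x the edible yield of cattle")
```

In other words, on these figures cultured meat can demand over five times the energy of conventional animal products, while insects deliver more than twice the edible mass per kilogram of animal raised.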


Another meat-free alternative, already commanding a $13 billion market in China, is tofu, a soybean curd product. Its protein conversion efficiency, from soy to tofu, is a surprisingly high 72%, the best of all the alternatives and conventional animal products considered. It also fares well in terms of land use, taking a slight lead for the lowest land-use share in the 50% replacement scenario. However, this depends on cropland suitability, which is not a certainty given the climate’s inevitably unstable future. Ultimately, no one is confident about the right path to take in order to feed an ever-increasing population and keep the planet alive in the process. Cultured meat has a long journey ahead towards becoming a viable large-scale process, let alone a truly sustainable one. And whilst an insectivorous diet offers a highly energy-efficient alternative, whether it would be accepted fast enough to have a global impact is questionable. It is no wonder that scientists and the public alike struggle to choose the best environmental solution. The most probable way to significantly reduce agricultural land use is a combination of changes: a shift in diet towards less meat, keeping only the most efficient animal products (milk and chicken); a move towards both soy-based and insect food groups; and a reduction in food waste. Consumer acceptability and attitudes need a distinct transformation for any of these approaches to succeed. One thing is for certain: more comparative research is needed before consumers and the food industry can instigate a change. Alice Stevenson is a Masters of Chemistry graduand.


focus

CRISPR bacon: future gene editing applications in agriculture

Kirsty Millar explores the role of gene editing in agriculture

Rarely is a novel technology described as revolutionary, but CRISPR/Cas9 (CRISPR is short for 'clustered regularly interspaced short palindromic repeats') has been hailed as just that. The CRISPR/Cas system is a prokaryotic adaptive immunity mechanism. Prokaryotic microorganisms have the incredibly useful ability to take DNA from invading viruses and store it in a 'genetic library', which is CRISPR. If the same virus invades again, CRISPR uses molecular guides and enzymes (Cas proteins), which act as 'molecular scissors', to cut up and destroy the invader DNA. When the same machinery is directed at a cell's own genome, the cut leaves the host DNA damaged, and the damage is then repaired by endogenous repair mechanisms found naturally in the host. It was not until scientists Jennifer Doudna and Emmanuelle Charpentier realised that CRISPR could be used to alter the DNA of any organism that the true potential of the technology was unlocked.

CRISPR represents the simplest, most precise and versatile method of gene editing

Doudna described the technology as a 'Swiss Army knife', as it can permanently switch genes on and off or temporarily halt the production of a target protein. Designing specific molecular guides allows sections of DNA to be removed, added, or altered, allowing scientists to introduce desirable changes in the genes of target organisms. CRISPR now represents the simplest, most precise, and most versatile method of gene editing, and it has been adapted for use in a wide variety of organisms. Since its demonstration as an effective gene editing tool in the model plant species Arabidopsis and tobacco in 2013, CRISPR has been used to edit a wide variety of plants for different purposes, from fungal resistance in wheat to herbicide resistance in oilseed rape. By the end of 2014, there had been a surge of


investment into CRISPR applications in crops and livestock. Earlier this year, researchers at DuPont Pioneer used CRISPR technology to generate novel drought-tolerant maize variants that showed increased grain yield under drought stress compared to the wild-type crop, with no loss in yield under well-watered conditions. Crop varieties that are drought tolerant or resistant could play a vital role in maximising water usage and reducing crop loss.

Researchers at the Roslin Institute recently used CRISPR technology in pigs to introduce resistance to Porcine Reproductive and Respiratory Syndrome (PRRS), a disease that results in major economic loss in the pig industry. The PRRS virus requires a specific host receptor to complete successful infection. The researchers used CRISPR/Cas9 gene editing to remove the specific receptor subdomain that interacts with the virus. As a result, the pigs were completely resistant to PRRS, while other normal biological functions conducted by the receptor remained intact.

The number of start-up companies devoted to gene editing using CRISPR is on the rise. Recombinetics is a company that aims to 'improve and grow agriculture' through gene editing technology. The firm has previously edited the genomes of bovine embryo fibroblasts, which were used to produce embryos carried by a surrogate cow; the calves born had no evidence of horn buds. This procedure would eliminate the need to dehorn dairy cattle, a practice carried out for the safety of cattle and their handlers. Recombinetics recently filed a patent for using gene editing to provide livestock with a phenotype that grants them superior thermoregulation and a reduced depression in milk yield in summer, reducing animal discomfort and improving profitability for farmers.

Gene drives are an innovative application of CRISPR technology.
Through this process, organisms with CRISPR-edited genomes can spread an edited gene through a wild population from generation to generation, as the drive biases inheritance so that the gene is passed to more than the usual half of offspring. Its applications in agriculture have vast potential, from reversing herbicide and pesticide resistance in weeds and insects to the control of

damaging invasive species. However, it should be noted that gene drives are limited to sexually reproducing species, and the gene would only spread significantly in fast-reproducing species.

Despite its dramatic positive impact on the agricultural world, there are downsides to CRISPR technology. Off-target effects, where the Cas proteins cut at an unintended site, are a significant pitfall. However, the frequency of these off-target effects has been falling, and they can be minimised by choosing a unique CRISPR RNA sequence. Even with improved precision, the outcome may not be as desired: the phenotypic trait that scientists are looking to achieve may be influenced by many different genes and complex environmental factors. Some DNA repair mechanisms can create major genomic changes with potentially detrimental effects, though in some cases this effect is desired, such as in gene knockouts. To help ensure the introduction of precise modifications, a synthetic repair template can be provided.

CRISPR applications in agriculture have vast potential, such as reversing herbicide and pesticide resistance in weeds

Gene editing techniques like CRISPR are described as New Plant Breeding Techniques (NPBTs), but the legislative framework in the EU is currently based on the laws for genetically modified organisms (GMOs), which are interpreted and executed based on the introduction of foreign genes into an organism. The European Academies Science Advisory Council (EASAC) produced a report in 2013 that concluded, 'the trait and product, not the technology, in agriculture should be regulated, and the regulatory framework should be evidence-based'. The EASAC has requested that legislative uncertainties regarding NPBTs be resolved and that EU regulators confirm that when the products of NPBTs do not contain any foreign DNA, they do not fall under the scope of the legislative


Illustration by Victoria Chu

framework for GMOs. Contrastingly, anti-GMO non-governmental organisations request that the EU ensure that organisms produced by NPBTs are regulated under existing EU legislation and that health and environmental safety testing for GMOs be reinforced.

There are already strict regulations in place for the health and wellbeing of animals in regard to gene editing research. The EASAC has similar recommendations for livestock gene editing as it does for crop plant gene editing: the trait should be regulated, not the technology, and there should be transparency about what is being done. Gene drives in particular may significantly alter wild populations, and therefore comprehensive biosafety measures would be needed. Laboratories creating gene drives may be required to also create a reversal drive which could restore the original phenotype of the population.

Current EU legislation for genetic engineering contrasts significantly with legislation in the US: the EU exercises far greater pre-market control, emphasising a precautionary approach, and there are fewer limitations on governmental bodies to prohibit or compel commercial speech. EU products may also require labelling. Enforcing legislation in the EU is further complicated by the

division of authority between individual member states and the governing bodies of the EU. However, there has been increasing support for autonomy at the national level of individual states.

Objections to genome editing and other forms of genetic engineering are mainly economic, political, and visceral; scientific evidence does not bear much influence. Economic concerns are largely based on distrust of corporate agriculture, aversion to intellectual property rights on seeds, and hindrance of local industries which rely on wild-type animals and plants. Political pressures, such as pressure to increase food production, could influence what legislation is passed. Some may argue that gene editing does not fit with the political goal of sustainable agriculture: overcoming herbicide resistance, for instance, may lead to increased use of herbicides, which can have a detrimental impact on biodiversity and soil quality. Ethical concerns are primarily concentrated on off-target effects, which could introduce deleterious mutations, while safety concerns include fear of unintended ecological consequences, which is particularly relevant to gene drives.

Public perception of nature also plays a significant role in the reception of gene editing technologies. Genetic modifications using man-made technologies can be seen as 'playing god', despite decades of research demonstrating no evidence of a higher risk for genetic engineering compared to conventional breeding. Humans have been selecting for desired traits for thousands of years through conventional breeding, and it could be argued that gene editing is simply speeding up this process.

Research should allow sufficient time for the safety and efficacy of gene editing and gene drives to be determined before regulatory and legislative decisions are made. Ultimately, policies should be based on the trait and product produced by the technology and not on the technology itself, as each application will have a specific mutation, organism, and ecosystem in question. The question of whether CRISPR/Cas-edited products will be used towards a more sustainable future depends largely on the attitudes of politicians and the public. Sadly, even the most comprehensive research, weighing of benefits and risks, and risk management strategies may not be sufficient to overcome political and social influence.

Kirsty Millar is an MSc Biotechnology student.



The future of nuclear power

Brian Shaw weighs the costs and benefits of nuclear power

It has not been an easy few years for the nuclear power industry across the world. Public perceptions of the sector, which have always been mixed to say the least, were further damaged following the meltdown at Fukushima Daiichi in 2011. Across Europe, phase-outs of nuclear power are underway. Switzerland recently became the latest country to announce a phase-out, following a referendum in which over 58% of people voted against any new nuclear reactors. This means that the Swiss have followed Austria, Belgium, Italy, Sweden, and even Germany towards the end of nuclear power in those countries.

As part of its Energiewende (Energy Transition), a major project to cut emissions of greenhouse gases and focus more on renewables, Germany announced its nuclear phase-out in 2002. The centre-right government of Angela Merkel reversed this decision in 2010, intending to at least delay the phase-out. However, after Fukushima, the plan was changed once again, as public attitudes shifted vigorously away from nuclear power. Germany, of course, is home to the 'Atomkraft? Nein, danke.' ('Nuclear power? No thanks.') movement.

Power is generated through the process of nuclear fission, using the isotope uranium-235

These phase-outs create plenty of other problems for authorities in their countries, not least the large energy gap that needs filling quickly to minimise disruption to people's lives. In Switzerland, the country's five nuclear plants provide around a third of its electricity. In the UK, a country that is certainly not considering a phase-out, nuclear power provides around 21% of electricity, a crucial part of the energy mix. In fact, with its commitment to building new plants such as Hinkley Point C and to research bodies such as the National Nuclear Laboratory (NNL), the UK has positioned itself as a country very much focused on its nuclear expertise in the future. The United States, a country that has long been at the forefront of nuclear


Image of nuclear ghost town in Japan via Wikimedia Commons

research, also sees a difficult future for the industry. Public support suffered after the Three Mile Island incident in Pennsylvania (1979), even though the partial meltdown was found to have caused zero deaths, injuries, or long-term negative effects in the area. That plant will be closing down in 2019, albeit for economic reasons, demonstrating another problem faced by nuclear power these days: Exelon, the company which runs the plant, has announced it is unable to compete on cost with cheap shale gas (produced by fracking). Six further plants have closed since 2013, leaving a total of around 60 operating right now. The American Petroleum Institute (API), a trade group for the oil and gas industry, has recently been portraying the nuclear industry negatively in advertisements and in political lobbying, in an attempt to end small state subsidies for nuclear power in Ohio and Pennsylvania and force the industry out of energy generation in those states. Between this and attacks from the environmental movement on the left, nuclear has had a tough time in the public eye.

In terms of the science, nuclear power has a lot of advantages, not least the high amount of energy produced from a relatively small amount of material. Indeed, the fission of just 1 gram of uranium produces as much energy as burning 3 tonnes of coal. It's also worth noting the latter's negative effects on air quality and the associated health outcomes. The high short-term cost of building a nuclear power plant (an estimated £18 billion for the Hinkley Point C site in Somerset, for example) can thus be recovered over its lifetime, with large amounts of energy produced at competitive prices. In June 2017, the Director General of the International Atomic Energy

Agency (IAEA), Yukiya Amano, announced that nuclear power is key for a low-carbon future, and also cited the energy security and highly-skilled jobs provided by the industry. It is indeed true that nuclear power is much cleaner than many other energy sources. Power is generated through the process of nuclear fission, using the isotope uranium-235. The uranium is bombarded with high-energy neutrons, which cause the fissile isotope to break up into other isotopes, thereby releasing further neutrons. These in turn collide with other fissile isotopes causing a chain reaction. The heat energy produced by this reaction is used to create steam, which drives turbines and creates electricity. Unlike coal, natural gas, and oil, at no point in this process is carbon dioxide or any other greenhouse gas emitted, making nuclear a very clean energy source in that respect. However, the most common criticism of nuclear power is not the energy creation process itself, but the waste that is created alongside it.
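The uranium-versus-coal comparison can be sanity-checked with a back-of-envelope calculation. The figures below are assumed round numbers (~200 MeV released per U-235 fission, coal at ~24 MJ/kg), not values taken from the article's sources:

```python
# Back-of-envelope check of the claim that fissioning 1 g of uranium
# rivals burning 3 tonnes of coal.
AVOGADRO = 6.022e23      # atoms per mole
MEV_TO_J = 1.602e-13     # joules per MeV
U235_MOLAR_MASS = 235    # grams per mole

def uranium_fission_energy(grams, mev_per_fission=200):
    """Energy in joules from fully fissioning `grams` of U-235."""
    atoms = grams / U235_MOLAR_MASS * AVOGADRO
    return atoms * mev_per_fission * MEV_TO_J

def coal_energy(kg, mj_per_kg=24):
    """Energy in joules from burning `kg` of typical coal."""
    return kg * mj_per_kg * 1e6

e_uranium = uranium_fission_energy(1)    # roughly 8.2e10 J
e_coal = coal_energy(3000)               # 7.2e10 J
print(f"1 g U-235: {e_uranium:.2e} J vs 3 t coal: {e_coal:.2e} J")
```

With these assumed inputs the two energies come out within about 15% of each other, so the claim is the right order of magnitude.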

The fission of just 1 gram of uranium produces as much energy as burning 3 tonnes of coal
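The waste-storage timescales discussed next follow from the standard half-life relation, N(t) = N0 x (1/2)^(t / T_half). A minimal sketch, using plutonium-239's commonly quoted half-life of roughly 24,000 years:

```python
def remaining_fraction(years, half_life_years):
    """Fraction of a radioactive sample still undecayed after `years`."""
    return 0.5 ** (years / half_life_years)

# Plutonium-239: half-life of roughly 24,000 years.
for t in (24_000, 48_000, 240_000):
    print(f"after {t:>7} years: {remaining_fraction(t, 24_000):.4f}")
```

Even ten half-lives, around a quarter of a million years, still leave roughly 0.1% of the original nuclei undecayed, which is why the storage problem is measured in geological time.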

The other isotopes that split off from uranium-235 in the fission process are themselves often very unstable, meaning they emit hazardous radiation as they decay to become more stable. The amount of time taken for the radioactivity of a given sample to reduce by 50% is called the half-life. The half-lives of isotopes

in nuclear waste can stretch as high as 24,000 years, as in the case of plutonium-239 (formed not by fission, but by the capture of neutrons by uranium). Failure to properly handle this kind of waste can lead to it seeping into the groundwater, which presents a major hazard to the populace. At the moment, spent fuel rods are kept in cooling pools for around 10 years, as they still produce heat whilst undergoing radioactive decay. At the end of this time, the rods are transferred to large steel and concrete casks for up to 40 years, with the long-term goal being deep geological storage. Alas, in spite of the large and constantly growing need for a permanent solution to nuclear waste disposal, no such solution has yet been implemented.

In the UK and France, the procedure up to now has been to reprocess nuclear waste, essentially extracting unused uranium from the waste to use again in reactors. Although this is a sustainable process in terms of getting the maximum energy from the uranium fuel, it doesn't solve the problem of radioactive isotopes of other elements, and in fact increases the volume of waste to be disposed of. On top of this, the Thermal Oxide Reprocessing Plant (THORP) in the UK will be shut down in 2018, with the UK moving away from reprocessing and towards

a deep geological storage solution. In the USA, a deep geological storage site was proposed at Yucca Mountain, Nevada, in 1987. Over the following decades, more than $31 billion was spent intensively researching the geology of the site to determine whether it was suitable for long-term storage of the waste. Important criteria included its location away from tectonic fault lines and the presence of impermeable rock to prevent seepage into groundwater. As recently as 2006 the site was looking promising, and the authorities were preparing to accept waste, with a view to a 2017 opening. However, local and political opposition to the site had been constant since it was first proposed, and in 2008, candidate Barack Obama promised to scrap the entire project if elected, a promise he kept by withdrawing its federal funding. Although this won the President friends among the progressive and environmental movements, it did little to solve the very real problem of nuclear waste disposal that continues to exist across the US, and indeed across the world, with indefinite cask storage continuing for now. However, in yet another political twist in the tale, President Donald Trump has committed another $120 million to restarting the Yucca Mountain project. As of now,

therefore, it is still very unclear what will become of the US' nuclear waste and what sort of impact this may have on the nuclear power industry in general.

Despite the negative signs for nuclear in some countries, the sector is very much on the up in others. Expansion is happening rapidly in Asia, particularly in India and China, and the UK and France remain significant nuclear nations in Europe. With the necessity of vast reductions in carbon emissions in the immediate future, and the difficulties with going 100% renewable so rapidly (due to storage issues, for example), nuclear energy will remain on the table for many decades to come. As an energy source, it is devoid of carbon emissions, safe, and very reliable, but a well-researched solution to waste disposal is absolutely crucial for the future of the industry. In that vein, we should all hope for the success of the Yucca Mountain project and similar projects in other countries. The future of the nuclear industry, and indeed of low-carbon energy generation, may well rely on it.

Brian Shaw is a PhD student looking at the chemistry of uranium.

Image courtesy of Pixabay



Nuclear risk: the monster skulking in the shadows

Alyssa Brandt lays out the current state of affairs in the field of nuclear risk research

Nuclear weapons profoundly shaped the course of the last century. No other man-made creation has posed such a global threat as nuclear annihilation. Experts in politics and warfare alike give varying estimates, from a 1% to a 25% chance, of a nuclear bomb targeting a civilian location this decade. The Future of Humanity Institute (FHI) puts the "probability of complete human extinction by nuclear weapons at 1% within the century, the probability of 1 billion dead at 10% and the probability of 1 million dead at 30%". John F. Kennedy said the risk of nuclear war during the Cuban Missile Crisis was the equivalent of putting a bullet in a three-chambered revolver. While there have only been two detonations in the name of war, the threat to human life remains very real. We are not immune to repeating the nuclear past, and we need to be sure that researchers are equipped well enough to mitigate the variety of nuclear scenarios which could occur.

Nuclear weapons profoundly shaped the course of the last century

Presently, we have an unprecedented number of high-yield warheads controlled by both stable and volatile nations. The total number of warheads has decreased since the 1960s, but the destructive power of our current weapons considerably outcompetes old technology. Of the 196 recognized sovereign states, 13 possess or have access to nuclear weapons, with the United States and the Russian Federation holding the largest numbers, despite having lowered their combined stockpile from 60,000 to 14,000 warheads over the last five decades. France, the UK, and China hold between 200 and 300 warheads apiece. Pakistan and India have about 120 each. Israel and North Korea have unknown numbers of nuclear weapons. The rest of the list share the control and storage of another 150-250 in a nuclear


sharing program, which brings the total number to roughly 16,000.

With all these weapons in hand, nuclear war has several ways of potentially unfolding: by accident, on purpose, or because someone was being stupid (perhaps both of the former are included in this). Governments are run by people, people make mistakes, and mistakes are often not realized until it's too late. The weightiness of nuclear war is that these mistakes are not on the scale of thousands of deaths, they are on the scale of millions.

Although nuclear weapons have remained a hot topic politically and scientifically since their invention, many worry the attention given to them is not enough. Several organizations like the International Campaign to Abolish Nuclear Weapons have made it their mission to decrease nuclear stockpiles globally. Organizations like FHI, the Global Challenges Foundation, the Ploughshares Fund and others encourage people to engage with nuclear risk. They promote research into all stages of nuclear war: negotiations, policy making, decreasing stockpiles, and in the worst cases, preparing for the aftermath of large-scale nuclear war. The main question is whether we are putting enough resources into studying nuclear risk, or whether we are gambling with human lives. After all, the consequences of nuclear war concern the whole of earth's population, not just those in warzones.

In the 1980s, a group of scientists modeled the damage to our climate after a nuclear attack in what is known as the TTAPS study, named after the authors' initials. Using methodology for modeling volcanic eruptions, they found that even a relatively small conflict with a total detonation of 100 megatons (Nagasaki was only 22 kilotons, but the largest warhead in existence today has a potential yield of 100 megatons) could cause subfreezing surface temperatures for months, a phenomenon called nuclear winter.
Regardless of where war would break out, the consequences of nuclear winter would disrupt food supplies globally and result in potentially millions of lives lost. And a more modern analysis of these models, published in 2007

by Robock, Oman and Stenchikov arrived at the same conclusion: the effects of a nuclear war could still lead to nuclear winter. Modern arsenals may have fewer weapons than those in the mid 20th century, but modern weapons eclipse them by megatons of detonating power. Technological advances have made it possible to cause more damage to the climate with fewer detonations, which means even a war between nations like Pakistan and India, with their limited stockpiles, could cause global devastation.
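The scale gap in the TTAPS scenario can be made concrete with the standard convention of 4.184×10^12 joules per kiloton of TNT equivalent (yield figures as quoted above):

```python
KILOTON_J = 4.184e12            # joules per kiloton of TNT equivalent

nagasaki_kt = 22                # Nagasaki yield quoted in the article, kilotons
ttaps_total_kt = 100 * 1000     # TTAPS's 100 megatons, expressed in kilotons

ratio = ttaps_total_kt / nagasaki_kt
total_joules = ttaps_total_kt * KILOTON_J
print(f"TTAPS scenario is ~{ratio:,.0f} times the Nagasaki yield")
print(f"Total energy released: {total_joules:.3e} J")
```

The modelled conflict thus involves over four thousand times the explosive energy of the single bomb that destroyed Nagasaki.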

The weightiness of nuclear war is that these mistakes are not on the scale of thousands of deaths, they are on the scale of millions.

This grim forecast is not certain, however; nuclear winter lacks precedent and is therefore difficult to study. Although the two nuclear strikes in Japan showed us the detrimental effects of radiation and the destructive capabilities of the warheads, the impact of those bombs would be small compared to the potential destruction caused by contemporary arsenals. After nuclear silos, the most likely targets for nuclear weapons are the towering metropolitan centers housing millions of people. Technological advancements in architecture and infrastructure have created a different environment compared to the cities targeted in the past, and thus there is a great need to model the damage to reflect these changes. The TTAPS and Robock studies, while important, have areas of uncertainty because their inputs were not based on the burning of cities. Because cities have a diverse mix of materials, including plastics, petroleum, and other fuels, the resulting smoke is much thicker and more absorbent than smoke from a forest


Illustration by Alyssa Brandt

fire or volcano. It is not certain how much smoke, or what mix of smoke types, would result from burning a city, which points to a hole in the current literature. Along with the ash and dust, the vast amounts of nitrogen oxides from the blasts would likely deplete the ozone layer of the stratosphere even further, increasing the amount of UV radiation reaching the earth. Combined with the drastic cooling, the climate would be completely disrupted and as desolate as a frozen desert. Nuclear winter, then, would be anthropogenic climate change, but on an astronomical scale.

Technological advancements in architecture and infrastructure have created a different environment compared to the cities targeted by nuclear weapons in the past, and there is a great need to model the damage to reflect these changes

Beyond this long-term climate change, at a human level nuclear winter

would carry a steep cost to essential food supplies. We are almost entirely dependent on food sources that are not locally grown. Immediately after a nuclear event, transportation of food to nearby areas would be nearly impossible due to radiation and the destruction of infrastructure. If stored food were relied upon, there would likely not be enough to feed the affected population for more than a year. In the following months, temperature drops and rain disruption from the smoke and ash would obliterate essential growing seasons for the entire globe. Death from starvation would not be the only risk for survivors: absence of food on this scale is expected by some experts to lead to the crumbling of society as we know it.

The next steps facing humanity in the face of nuclear risk will be challenging. Both governments and the public need to be convinced that it is valuable to study and support the prevention and mitigation of nuclear war. It may seem obvious to some, but to others it feels far-off and distant, much like the Bogeyman you were warned about as a child: a tool meant to strike fear, not change. Constant impending doom is easy to ignore once we acclimatise to the narrative. When experts discuss nuclear risk in terms of "chance" or "probability", it becomes almost harder to grasp the

severity. Humans are not very good at internalizing risk and probability, which means there is only a handful of people who understand the danger, and a smaller handful still who actually study nuclear risk. Cognitive biases, the handy mental cushions our brains set up to help us navigate the world, are often more of a hindrance. For example, it's about 430,000 times more likely that you'll die in a car collision than that you'll win the lottery, yet people still play the lottery hoping to win, and people still get in their cars every morning and expect to keep living. Nuclear war may not have broken out yet, even in the highest-tension settings, but lack of precedent does not negate threat. There will always be a risk as long as there are weapons.

The bright side of all of this is that it isn't too late. The field of nuclear risk prevention and research could use additional capable minds. The best we can do as global citizens is to use our voices to raise awareness, to participate in the democratic process if possible, to support the disarmament of nuclear weapons, and to hope our efforts win out before someone makes the greatest mistake of this century.

Alyssa Brandt is an alumna of the MSc in Integrative Neuroscience program.



Academia in a rat race

Marta Mikolajczak reveals the existing problems of academia and the future they paint

The 21st century is certainly an exciting time for scientists: we now have loudspeakers built from carbon nanotubes, we have broken the petaflop barrier of computer processing speed, developed CRISPR/Cas9 genome editing, and have confocal microscopes able to generate a detailed 3D image of tiny cells literally in minutes. However, technological innovations do not come for free, and what academia needs, now more than ever, is money. Research funding has always been scarce and difficult to obtain, but since the 2008 financial crash the constant reductions to science budgets have caused serious, worldwide problems in academia. Publication record is key to securing further research funding, generating tremendous pressure to publish among scientists. A 2014 study concluded that academic success is determined by the "number of publications held by the academic, the impact factor of the journals these papers were published in and the average number of citations

Illustration by Lana Woolford


that their publications have received". Extremely limited funding generates intense competition, not only between scientific groups across the world but also between individuals within the same institutions, and even those working in the same lab group. Typically, increased competition proves beneficial, but unfortunately not within academic research. Postdoctoral researchers (postdocs) gathered at a symposium in Boston in 2014 concluded that fierce financial competition promotes academic dishonesty, discourages creative thinking and risk-taking, and is actually "encouraging scientists to present data in the most optimistic light", rather than reflecting the objective results. The most efficient progress in science happens when people cooperate, sharing ideas and experience, instead of competing with one another. Data that are not shared with other scientists and not published, due to a lack of 'splendour', lead to unnecessary repeats of already-performed experiments. Moreover, scientists are terrified of being 'scooped', that is, competing labs publishing similar results first before they get an opportunity to do so themselves, reducing their chances of securing the next grant. Even collaborating researchers do not completely trust each other, and only share the most basic information about a given project. This equates to a hostile, limited, and very close-minded work environment - the exact opposite of what science is meant to be.

What academia needs, now more than ever, is money

It is feared that academia is slowly becoming 'industrialised'. According to some senior scientists from the University of Edinburgh, most of the grants in biology are being allocated to studies focused on new drug targets or preventing global warming. Whilst these are certainly noble causes, it seems that the pure 'passion for knowledge' has

now been taken away from academia. Importantly, academics are suffering from insecure and precarious job contracts. The University and College Union (UCU) stated in April 2016 that "54% of all academic staff and 49% of all academic teaching staff did not have stable work positions". The UCU also reported that academics are underpaid, vulnerable, and constantly facing the prospect of unemployment. Moreover, extremely long working hours and a poor work-life balance are prevalent among academic workers. Dr Meghan Duffy, a senior ecologist, describes a pervasive attitude that "if you're not working 60 or 80 hours a week you're not doing enough", and notes that it can be embarrassing to admit to a 40-50 hour working week.

The most efficient progress in science happens when people cooperate, sharing ideas and experience, instead of competing with one another

Surprisingly, it is not necessarily research that prevents academics from going home in the evening. For those who dream about being a scientist, the University of Manchester advises that "having a good PhD is not enough to secure the job and to succeed". Although principal investigators (PIs), the senior scientists who run a lab, endeavour to conduct experiments in the lab, they are often swamped with other work, including: writing draft publications or grant applications; teaching; delivering lectures; marking exam papers; attending and organising conferences; being active in their professional institutions and scholarly associations; and engaging with the general public. Therefore, most of the lab-based work is left to postdocs and PhD students.

The overload of work and the high demand for generating results, with sometimes little support from their already overworked PIs, mean that mental health issues are rising among young researchers. Elisabeth Pain, an editor at the journal Science, recently reported that "approximately one-third of PhD students are at risk of having or developing a common psychiatric disorder like depression". Sadly, little help is offered to those struggling. Additionally, upon completion of their PhDs only a fraction of students

end up staying in academic research. Those that do make it to a postdoc have to put up with a serious lack of permanent employment. Ronald Daniels, President of Johns Hopkins University in Baltimore, reports that showing preliminary data is essential to securing the funding necessary for independent research. Therefore, “the allocation of research grants clearly favours established scientists over new entrants”. Postdocs working on a project funded by an existing grant have little opportunity to gather preliminary data for their own ideas, however promising those ideas may be. They must instead rely on the generosity, goodwill, means, and ability of their superiors to help them progress further. Daniels notes that “today, more than twice as many grants are awarded to PIs who are over 65 years of age as are under 36 years”. Furthermore, in a 2015 Nature article, Kendall Powell put forward the case that the postdoctoral system is broken. She writes that postdocs are the major research backbone, but they are often ‘rewarded’ with little potential to progress in academia. The number of postdocs in science is ballooning: in the United States alone, there was a 150% increase in postdoc positions between 2000 and 2012, growth that has not been matched by the number of tenured and other full-time faculty positions, generating an academic bottleneck. Those young scientists who continue in academia often end up trapped as ‘permadocs’ or ‘superdocs’, doing multiple postdoc terms and never acquiring scientific and financial independence. According to Ushma Neill, vice-president of scientific education and training at the Memorial Sloan Kettering Cancer Center in New York City, only 15–20% of all postdocs end up in stable academic positions. Even more crushing is the fact that many postdocs suffer long periods of unemployment between positions, which only adds more misery to this already dark reality of an academic career.
Moreover, Michelle Newman, a postdoc, reports in a Nature blog that job agencies are currently “not equipped for dealing with the specialised nature of academic employment”, so postdocs are left on their own to find new positions in this increasingly competitive field. Problems in academia are seemingly endless, and include the underrepresentation of women as PIs and the fact that ‘well-known’ scientists, regardless of the quality of their work, are more likely than others to publish in a high impact factor journal. Coverage of such issues is

too vast for the scope of this article, yet they deserve to be at least mentioned. Low funding and high competition have made academia a ‘rat race’ like never before. Changes are necessary and some are already happening. Firstly, it has been proposed that funding could be delivered to universities and research institutions, which could then distribute resources to workers more fairly, allowing support for smaller groups and creating additional job positions. Secondly, Shirley Tilghman from Princeton University suggested in a 2015 Nature article that “the real solution to the postdoc problem lies in dramatically changing the composition of labs to make them smaller, with a higher ratio of permanent staff scientists to trainees”. Furthermore, the terms of grant applications could be changed to support young researchers, putting emphasis on experimental ideas rather than preliminary data. As an alternative to the ‘typical’ funding system, it has been proposed that researchers could work towards pre-agreed milestones to receive smaller chunks of money. This reduces the risk of losing the precious funding in full if the scientist is unable to deliver.

Low funding and high competition have made academia a ‘rat race’ like never before

Additionally, to reduce their current work burden, scientists should be left to do what they do best - science. High demand for public engagement could be met by increasing the number of science communicators and public outreach officers, providing new career possibilities - perhaps for the high number of currently unemployed postdocs. Currently, the academic system is not working well for most people involved. The deeply rooted problems are huge, and difficult to overcome. Yet scientists are still fighting hard for what they hold dear. One possibility is that academia will evolve in the future into some type of business-like institution, ruled by survival of the fittest. But we might still see an academic U-turn, and research may again be performed for the pure drive to discover the unknown. Only time will tell.

Marta Mikolajczak is a first-year postdoc at the University of Edinburgh.

Features

Preprinting: setting the precedent for the future of publishing

Amelia Howarth contemplates the future of preprinting in the scientific community

American author Louis L’Amour once commented that ‘knowledge is like money: to be of value it must circulate, and in circulating, it can increase in quantity and, hopefully, in value’. This insight is applicable across many disciplines, but is perhaps most rigorously observed by those within the scientific and technological fields, particularly within academia. When scientists produce results, they share them, most commonly through publication in a scientific journal specialising in their field. The point of this is to share findings, add to the field, increase knowledge and push quality research forward - all fundamental concepts of scientific study.

Knowledge is like money: to be of value it must circulate

However, it’s not as easy as you might expect; in fact, publishing is a lengthy and labour-intensive process. Once scientists have made a discovery, they write a paper about it and choose a journal to submit it to. The paper is peer reviewed, meaning other experts in the field read it, review it and comment on the validity and relevance of the data. At this stage, the paper is usually returned to the author to make relevant improvements and then passed back to the editors, after which it will hopefully continue to publication. However, in biology at least, only around 15% of papers are accepted, with the whole process averaging around 7 months. The benefit of this approach is that the final product is a peer-reviewed paper in a respected journal, meaning the content should be reliable, repeatable, and ultimately well-trusted. Despite this, there’s still a retraction rate of 0.5% every year, meaning low-quality data does still occasionally slip through the cracks. Then, even if a paper is pub-


lished and not retracted, the high costs associated with it are often difficult to swallow, with the average cost of publication reaching thousands of dollars. With a large proportion of the money directed into biomedical research coming from charities - the British Heart Foundation brought in over £140 million for UK research last year alone - fundraisers are eager to see their money put to good use, which is not reflected when so much of it is lost to sharing data through expensive journal publication. In the opinion of many, this process of academic publishing is far from ideal. Luckily, one notable alternative comes in the form of preprinting, a method of publishing which has been around since the 1990s but has only recently been gaining popularity. Preprinting is a way of making your work publicly available without going through the traditional journals. The process is slightly different in that a scientist makes a finding, writes a paper and then immediately submits it to an online repository. Papers are usually only checked to make sure they don’t contain inappropriate content, and within days, knowledge can be shared worldwide, for free. Other experts and scientists are then free to make public comments on the paper or contact the lead author, promoting an ‘online peer review’ system through direct contact between scientists. In physics and engineering, preprinting is not a new concept, and websites such as arXiv, where preprints can be submitted, have been running since the 1990s. However, the primary biology equivalent, bioRxiv, has only been running since 2013, and the idea of preprinting in this field is only just emerging, promoted by the recognition of preprints by major funding bodies.
In January 2017, the Medical Research Council (MRC), which invested over £846 million in biomedical research last year, announced that it supports the use of preprints, a statement soon seconded by the Wellcome Trust (with a research investment of over £822 million last year). The following month, both of these funders, along with the National Institutes of Health (NIH), proposed and supported the establish-

ment of a central biomedical preprint database, a move that has been largely attributed to the preprint campaigning group ASAPbio. It’s now obvious that, although somewhat slow on the uptake,

In biology… only around 15% of papers are accepted, with the whole process averaging around 7 months

huge funders are now openly and even actively supportive of preprinting. However, they are not the only important players in the publishing game. Journals have been equally vocal about preprinting biomedical papers, although their opinions are more mixed. Nature, arguably one of the most revered biological journals with a top-three impact factor of 43, was ahead of the game, setting up the preprint database ‘Nature Precedings’ in 2007. However, the contribution and growth of this database, with over 5000 papers archived, wasn’t sustainable, leading to the closing of submissions in 2012, although all content is still freely available online. Other journals have been less liberal in their stances on preprints. Cell, for example, has stated a variety of guidelines relating to preprint publications, warning against the risk of ‘bad data’, whereby submissions contain falsities or invalidated methods. This is a real concern: not only do inaccurate publications promote bad science and false results, they could also lead to a loss of credibility for scientific research, which could be devastating for funding. It is generally thought, however, that the opinions of journals should be taken with a pinch of salt – they are, after all, money-making enterprises, no matter what their founding beliefs are. It’s fair to say that the real interest in preprinting opinions lies with those on the ‘front line’ of science – the research-

ers themselves. Unfortunately, this may be the pool in which views on the subject vary more than any other. Obviously, many scientists are supportive of the notion, backing the previously mentioned ASAPbio, who describe themselves as ‘a scientist-driven initiative to promote productive use of preprints’. ASAPbio believe that the traditional, journal-orientated method of publishing is no longer sustainable, with scientists relying too heavily on publications in high impact journals to further their careers. They believe that this is unfair and that free sharing and reviewing of information through preprints is the way forward. However, many researchers have concerns over the idea, reaching beyond career worries, including clinical researcher Dr James Dear at the University of Edinburgh. He has been considering preprints recently, and has concerns over what the spread of bad data could mean for public health, whereby people who see ‘bad science’ may use it for uninformed self-treatment. He states: “(There’s a) danger that complete rubbish will be preprinted, then marketed by social media without critical review and impact on patients. There’s no filter, so it’s dif-

ficult to know if a paper is really good”.

Preprinting is a method of making your work publicly available, without going through the traditional journals

He does, however, acknowledge the benefits of preprinting, adding, “Through preprints, you can get your work out immediately, open peer review of work, and get free open source access to papers, all without any bias from journals”. With the biological scientific community remaining largely on the fence regarding preprinting, it’s difficult to see where this system will end up. It is indeed considered normal practice in some fields,

such as physics, but within the highly varied domain of science, there is no ‘one size fits all’ approach. There are doubts over whether this is the right move for the biomedical community, which has depended on journal-based publication for so long. The advantages and disadvantages of preprinting are equally convincing, but the real-life observations are clear-cut – the submissions of preprints to bioRxiv have increased in number every year since the site was launched, suggesting that support for preprints is on the up. It remains to be seen whether journal-based publishing will prevail or if preprints will stand the test of time. Dr Dear weighs in: “My personal view is that we all know the journals that our employers want us to publish in. It is not a secret. In that sense, I think the current system, while not perfect, is quite transparent. Preprinting may be useful but I don’t see it taking over completely.” It seems only time will tell if this will be the case.

Amelia Howarth is a final year PhD student at the Centre for Cardiovascular Sciences.

Image from Wikimedia Commons



The politics of geoengineering

Rachel Harrington explores geoengineering as part of a viable solution to global climate change

Most respected scientists would dread to think they have something in common with Donald Trump, but surprisingly, a small and increasing number do. In 2009, Trump, amongst other prominent business leaders including his own children, signed a letter addressed to President Obama calling for “meaningful and effective measures to control climate change”. Since then, however, his Twitter account has been ablaze with scepticism surrounding the subject, most notably stating in 2012 that global climate change was a hoax ‘created by and for the Chinese in order to make US manufacturing non-competitive’. Earlier this month, the world was left in no doubt as to Trump’s altered stance on the subject, when he announced that the USA would be pulling out of the Paris climate agreement. Whilst most scientists do not contest the existence of climate change, some do share common ground with Trump in questioning the prevailing opinion on how to respond to it. Some are daring to speak

Image courtesy of Pixabay


out against the popular opinion that the only solution is a staunch combating of greenhouse gas emissions. They suggest that perhaps, alongside the important mitigation efforts, adaptation techniques (such as geoengineering) could be used to help in our battle against long-term changes to the Earth’s climate. Global climate change is an issue that many people are aware of, but not well informed about. With that in mind, the latest facts and the announcement of a new research project may convince more that a change in direction is needed to protect the future of our planet. Global climate change is a term encompassing the long-term impact of humans on the Earth’s climate. This includes alterations to sea levels, changes in precipitation patterns and the well-known rising of global temperatures. The Earth itself naturally controls temperature through the greenhouse effect. Solar radiation reaches the Earth’s atmosphere, where some is reflected back into space, and the

rest is absorbed by the Earth, naturally heating it. In turn, this heat is re-radiated towards space. Some of it is trapped by greenhouse gases in the Earth’s atmosphere, keeping the planet warm enough to sustain life. Human acts such as deforestation, burning fossil fuels and agriculture are increasing the levels of greenhouse gases in our atmosphere, creating an increase in the Earth’s temperature. Anthropogenic (originating from human activity) climate change is driven by several greenhouse gases, and as stated by the IPCC (Intergovernmental Panel on Climate Change), humans have caused “most of the observed increase in globally averaged temperatures since the mid-20th century”, a conclusion made with greater than 90% certainty. The consequences of human activity are far reaching. As a WWF (World Wildlife Fund) article on climate change stipulates, “no matter what we’re passionate about, something we care about will be affected by climate change”. The impact

of climate change can be seen globally, from the melting of mountainous glaciers, which over a million people rely on for their drinking water and sanitation, to higher summer temperatures causing droughts, heatwaves and loss of crops. There are two main policies available to us in combating global climate change: mitigation and adaptation. Mitigation aims to reduce climate change itself, either through reducing the sources of greenhouse gases (switching from fossil fuels to renewable energy, for example) or increasing “sinks”, which are natural resources that store these gases (oceans, forests and soils). Adaptation aims to lower the risk of the consequences of our past and continued contributions to climatic changes.

We are not human beings alone, we live in a planet that has an environment and we are integrated into it

Whilst mitigation aims to reduce human interference with the Earth’s climate, even if all anthropogenic sources were removed tomorrow, greenhouse gases, and in particular CO2, can linger in the atmosphere for many years. In other words, we will be subjected to some level of climate change for the foreseeable future. As such, no single option alone is a solution, and effective implementation of a worldwide multifaceted approach is complex. As stated by NASA’s Earth Science Communications team, “climate change involves many dimensions - science, economics, politics and moral and ethical questions”. In the UK, it is clear climate change is not a priority, mitigation or otherwise. In the most recent election, both the Conservatives and Labour failed to mention climate change at all in their opening speeches and barely thereafter in their manifestos. There is no contention that national security, prevention of terrorism and survival of our emergency services should take precedence over such an issue. However, more should be done by way of funding research and taking developments such as the Paris agreement more seriously. As part of the agreement, countries are vowing to not only make changes

that would halt a global temperature rise in the future, but also deal with the impact of existing climate change. Achieving both will rely on research into both mitigation and adaptation. As the agreement leaves decision making up to the individual country, we need to be highly educated on the impact of all solutions. Time will tell what a successful balance of mitigation and adaptation will look like. Whilst critics argue geoengineering is a distraction from cutting greenhouse gas emissions, it may well be part of a viable solution. Geoengineering is the “deliberate large-scale intervention in the Earth’s natural systems to counteract climate change”. It can be divided into two categories. Solar Radiation Management (SRM, also known as albedo modification) involves masking sunlight from entering our atmosphere. Ideas for SRM can often sound far-fetched and at times verge on the extra-terrestrial, for example the “space umbrella” and “artificial clouds”. Carbon Dioxide Removal (CDR) unsurprisingly involves removing CO2, either directly through man-made devices, or by enhancing the Earth’s capacity to soak it up. One scientist willing to concede that a change of direction in our battle against climate change may be necessary is Harvard University’s Applied Physics Professor David Keith. The Keith Group, led by David himself, has announced the latest research project into the viability of geoengineering. It is an attempt to be ahead of the curve, discovering solutions to the consequences of climate change we are only just beginning to experience. In the project’s opening video, they stress their awareness that geoengineering is not to be used as a substitute for lowering greenhouse gas emissions. Professor of science and technology studies Sheila Jasanoff makes a point worth remembering: “we are not human beings alone, we live in a planet that has an environment and we are integrated into it.”
Their project focuses on research into albedo modification, namely stratospheric aerosol injection. This involves injecting particles, for example sulphur dioxide (although the project intends to test several chemicals), via a balloon into the upper atmosphere, where they scatter sunlight back into space. CO2 causes a warming effect because of its ability to absorb infrared (IR) radiation, which the sun emits amongst other forms of electromagnetic radiation. A molecule of CO2 will vibrate upon absorbing an

IR photon. After a short interval, the molecule will re-emit an IR photon and stop vibrating. Whether a particle absorbs light, like CO2, or almost perfectly reflects it, like pure sulphates or nitrates, depends on the composition and colour of the particle. It is the same logic as to why we wear white rather than black to keep ourselves cooler on hot days. The project’s first experiments aim to commence in 2018, with a propelled balloon that will travel through a volume of ‘well mixed’ air serving as an experimental beaker. The proposed aerosol particles will be deposited and their impacts on the background atmosphere and incoming radiation observed. The project is thorough, investigating the particles’ impact on sea levels, and aims to be multidisciplinary, having obtained a grant from Harvard’s Weatherhead Centre for International Affairs to address questions of a political, social and economic nature.
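The leverage that albedo modification offers can be sketched with a zero-dimensional energy-balance model. This is a back-of-the-envelope illustration, not part of the Keith Group's experimental programme; the solar constant and albedo figures are standard textbook values, and the model deliberately ignores the greenhouse effect.

```python
# Zero-dimensional energy-balance model of the Earth.
# Absorbed sunlight balances outgoing thermal radiation:
#     S * (1 - albedo) / 4 = sigma * T**4
# Solving for T gives the planet's effective emission temperature.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # solar constant, W m^-2

def effective_temperature(albedo):
    """Equilibrium emission temperature (kelvin) for a given planetary albedo."""
    return (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

baseline = effective_temperature(0.30)    # present-day planetary albedo is roughly 0.30
brightened = effective_temperature(0.31)  # aerosols scattering about 1% more sunlight

print(f"baseline:   {baseline:.1f} K")
print(f"brightened: {brightened:.1f} K")
print(f"cooling:    {baseline - brightened:.2f} K")
```

The baseline answer (around 255 K) is the Earth's effective emission temperature, well below the observed surface average of about 288 K; the gap is the greenhouse warming described above. The instructive part is the sensitivity: reflecting roughly one percent more sunlight cools the equilibrium by about 1 K, which is why comparatively modest stratospheric aerosol injections could, in principle, offset a substantial share of greenhouse warming.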

Climate change involves many dimensions - science, economics, politics and moral and ethical questions

Geoengineering is at the forefront of climate change advancement. It is a concept that could be abused in the wrong hands, and all research should be done transparently, with honourable intentions. Whereas I previously joined respected critics, including organisations as large as NASA, in their fears that it could be used as a scapegoat for reducing our emission footprint, I now stand supportive of research into its potential benefits. David Keith himself puts it best: “I’m not saying it will work, and I’m not saying we should do it, but it would be reckless not to begin serious research on it. The sooner we find out whether it works or not, the better.”

Rachel Harrington is a second-year Physics student.



‘From monkeys to men’ and other fallacies about evolution

Andrew Bease explains the process of evolution and addresses some common misconceptions

Given that lies, untruths, and ‘alternative facts’ seem to be becoming ever more prevalent in society, it comes as no surprise that science is one area in which various myths have propagated. This is partly due to the understandable lack of scientific knowledge within the general public, but also due to individuals who deliberately spread misconceptions to further their own agenda. I once had the displeasure of watching a minister argue his case for intelligent design by telling a group of impressionable children that, “If you believe in evolution, you’ll believe anything.” His view of evolution was that fish crawled onto land because they were bored of the sea and then decided to grow hair because it was cold. This is just one example of the numerous misconceptions that have surrounded the theory of evolution since it garnered interest via Charles Darwin’s On the Origin of Species. Interestingly, although Darwin is seen as the father of evolutionary theory, the concept of evolution actually dates as far back as the ancient Greeks. Here, we look at some of the common fallacies that people believe about evolution and explain how the process actually works.

His view of evolution was that fish crawled onto land because they were bored of the sea and then decided to grow hair because it was cold

One misconception that is often uttered by people who do not understand evolution is that it is only a ‘theory’, thus suggesting that there is a lack of evidence for it or that there is doubt in the scientific community about its validity. While the word ‘theory’ is used in common discourse to refer to an idea that may not have strong supporting evidence, a scientific theory is in fact based on observable evidence that can be repeatedly tested. Despite no one being around millions of


years ago to make direct observations over the ages, there is certainly not a lack of proof for evolution. The widespread increase in antibiotic-resistant infections in the last 50 years is an example of how bacteria have evolved over a relatively short space of time as a result of the increasing use of antibiotics. Similarly, the fossil record provides evidence of evolution over longer periods of time, as there are numerous examples of transitional fossils that provide links between different groups of organisms, such as Archaeopteryx (a bird-like dinosaur) and Tiktaalik (a walking fish). In short, evolution is as demonstrable as germ theory or the theory of gravitation. The actual process of evolution is quite simple, and it is important to note that evolution is something that happens to living organisms and should not be confused with abiogenesis, the process by which life originates. Evolution requires three things to occur: the replication of genetic material, random mutations in this material, and a selective pressure. In addition, the prevalence of certain gene variants can change within a population by chance - a process called genetic drift. All living organisms contain deoxyribonucleic acid (DNA), with the exception of RNA viruses, although it is debatable whether viruses should actually be considered living. DNA is often referred to as ‘the instructions of life’ and encodes all the information that gives every organism its characteristics. In order for living things to reproduce, their DNA must replicate. However, DNA replication is not perfect, and occasionally mutations will be introduced that make the new DNA different from the parent DNA. These mutations are often lethal to the offspring, but occasionally they can be beneficial, particularly if a new selective pressure affects the organisms. A selective pressure is any condition that selects for a particular trait within a population.
For instance, a new disease emerging within a population will select for individuals who are resistant to said disease. These individuals will live longer and produce more offspring that are also resistant. This is what is meant by ‘survival of the fittest’ - not

that organisms who are necessarily the physically fittest will survive. Through successive generations, the entire population will eventually be replaced by a new disease-resistant population. This process is called natural selection. The constant repetition of random mutation during replication and selection is what drives evolution, allowing organisms to gradually change and adapt to different environments. However, not all evolution is the result of natural selection, as most of the domestic plants and animals that exist today were selectively bred by humans from their ancestors, and genetic drift can result in mutations with no benefit spreading through a population or disappearing.
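The ingredients above - replication, random mutation and a selective pressure - are enough to produce adaptation on their own, as the following toy simulation illustrates. It is a deliberately crude sketch (a single two-state trait, a fixed population size, and made-up survival probabilities), not a model of any real organism.

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible

POP_SIZE = 200
MUTATION_RATE = 0.01  # chance that an offspring's trait flips during replication

def generation(population):
    """One round of selection, then replication with occasional mutation.

    Each individual is True (disease-resistant) or False (susceptible).
    Resistant individuals are twice as likely to survive to reproduce:
    this is the selective pressure."""
    survivors = [ind for ind in population
                 if random.random() < (0.8 if ind else 0.4)]
    offspring = []
    while len(offspring) < POP_SIZE:
        child = random.choice(survivors)  # replication of genetic material
        if random.random() < MUTATION_RATE:
            child = not child             # random mutation flips the trait
        offspring.append(child)
    return offspring

# Start with an entirely susceptible population and let it evolve.
population = [False] * POP_SIZE
for _ in range(100):
    population = generation(population)

print(f"resistant individuals after 100 generations: {sum(population)}/{POP_SIZE}")
```

Mutations arise blindly, but because resistant individuals leave more offspring, the trait sweeps through the population over successive generations - exactly the ‘survival of the fittest’ dynamic described above. Setting the two survival probabilities equal removes the selective pressure, and the trait then merely drifts.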

In short, evolution is as demonstrable as germ theory or the theory of gravitation

Evolution is a process and as such does not have any plans or goals. However, it is often wrongly believed that certain aspects of an organism’s biology have been designed by evolution for a particular purpose or that living things evolve because they want or need to. Confusion on these matters is often caused by the use of simplified terminology to explain specific adaptations. For instance, it is easier to say ‘Dinosaurs evolved feathers for insulation’ when what is meant is actually ‘Feathers appeared randomly and were selected for because of their beneficial insulating properties’. Evolution does not happen completely by chance though. While mutations are random, and genetic drift can account for the random spread of some mutations, natural selection plays


Illustration by Vivian Ulhir

the main role in sorting out which mutations are passed onto future generations.

In an age where we must understand the truth in order to manipulate the world for our survival, a society that rejects or misinterprets it could be disastrous

Many people also wrongly believe that evolution produces more complex organisms. While it is true that many modern life forms are more complex than life was billions of years ago, this does not mean that all life has become more complex. Bacteria and archaea are two domains of life that have remained entirely unicellular for millions of years, but this does not mean they are less well adapted to their environments than more complex life. In fact, some archaea are capable of living in some of the most extreme environments on Earth. There are also numerous examples of organisms that have become less complex than their ancestors. However, this is not caused by evolution going backwards or ‘devolution’ as it is sometimes incorrectly

referred to. Evolution doesn’t operate in reverse; rather, every change that persists in a population, including a loss of complexity, is beneficial in some way. For example, certain animals that live exclusively in caves lose their eyes because supplying a useless organ with energy is wasteful and is therefore selected against. Perhaps the most commonly believed fallacy regarding evolution is that humans evolved from monkeys or chimpanzees. In reality, humans and chimpanzees both evolved from a common ancestor that is now extinct, which in turn evolved from a shared ancestor with monkeys. This can be shown by whole-genome phylogenetic analysis, which demonstrates, through similarities in their genomes, that all living things are related and descended from a single common ancestor. A question related to this misconception, which I myself asked when I was younger, is ‘Why haven’t chimps or humans evolved further?’; the simple answer is, they have. Chimpanzees have become better adapted to their environments and have not become more humanlike because there is no evolutionary advantage in doing so, since humans already occupy the ecological niche of using advanced intelligence to manipulate the environment. Humans too have evolved. Homo sapiens were able to out-compete other early human species, such as Neanderthals, to become the dominant species on the planet, as well as the only extant

human species. Humans have come a long way, from learning to farm approximately 12,000 years ago to putting men on the Moon. Our physical appearance may not have changed drastically, but our brains have certainly evolved. The human brain has actually shrunk in the last 10,000 years, but humans have undoubtedly become more intelligent, using technology to adapt to the world instead of relying on new traits evolving. Evolution is a simple fact of life that many people, even some scientists, fail to fully understand. It may not seem important to understand evolution or to accept it as reality, but ensuring that it is taught and conveyed correctly is important for society as a whole. In an age where we must understand the truth in order to manipulate the world for our survival, a society that rejects or misinterprets it could be disastrous. There are too many misconceptions about, and fallacious arguments against, evolution to address here, and these account for only a fraction of the misconceptions about science as a whole. It is for this reason that scientists must ensure they explain their work, and the work of those who came before them, in a clear and rational manner to the public, so that in the future people can separate the facts from the fiction.

Andrew Bease is a second year Infection and Immunity PhD student at the Roslin Institute.

features

What mummy and daddy didn’t tell you Issy MacGregor investigates three of the most common questions that besiege parents There is no questioning that children bear inquisitive minds. Parents are accosted by a barrage of ‘whys’ and ‘hows’ on a daily basis, from the astute ‘Why doesn’t the moon fall down?’ to the dreaded ‘Where do babies come from?’. It is therefore not surprising that many a parent struggles to navigate their child through life’s labyrinth of curiosities and consternations. So much so that, according to a study by the Institution of Engineering and Technology, two thirds of parents admit to having fabricated an answer rather than reveal their own ignorance. Here I will address three of the most common questions posed by kids and delve a little deeper into the science behind the answers. Why is the sky blue? The blunt answer – Rayleigh scattering. A portion of solar electromagnetic radiation forms white, or visible, light. It is common knowledge that white light is composed of a spectrum of energy waves that differ in their wavelength. These waves quite literally include all the colours of the rainbow, from violets at 390 nm all the way to reds at 700 nm. Intriguingly, if white light is deflected, the different wavelengths scatter to different degrees. The shorter the wavelength, the greater the scatter, and the more of that scattered light reaches our eyes. Blues and violets are scattered over ninefold more than reds by the Earth’s atmosphere. Thus the sky appears blue. But if violet has a shorter wavelength than blue, shouldn’t we in fact be asking, ‘Why is the sky not violet?’. There are two factors accountable for this discrepancy – firstly, the colour-sensitive cone cells within our

Illustration by Yivon Cheng


eyes are less sensitive to colours at the peripheries of the light spectrum, including violets. And secondly, the intensity of the sun’s solar radiation and the extent of atmospheric absorption differ between wavelengths, causing blues to appear more prominent than violets. Are dreams real? The lion in the cupboard or the bogeyman under the bed understandably provoke cries for bedtime reassurance. However, it may not always be as simple as rocking to the mantra of ‘It was only a dream’. Lucid dreamers have the capacity to take command of their dreams, walking a fine line between reality and the dream state. Estimates suggest that half of the general population will experience at least one lucid dream during their lifetime, and younger children may lucidly dream more frequently. Parapsychologists have studied these so-called ‘psychonauts’ for decades. Doctors Keith Hearne and Stephen LaBerge demonstrated that lucid dreams correlate with rapid eye movement (REM) cycles during sleep. These REM cycles are distinct from deep sleep due to elevated rates of brain activity, random eye movements, and, critically in this case, the propensity of sleepers to dream more vividly. Lucidity was found to consistently precede REM bursts, along with heightened activity of the prefrontal cortex, the region of the brain responsible for logical thought. However, lucid dreaming remains a controversial phenomenon. Sceptics within the scientific community highlight our present naivety when it comes to distinguishing dreams, daydreams, and memory, and ultimately deciphering our own objective realities.
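The Rayleigh-scattering answer above can be sanity-checked with a few lines of Python: scattered intensity scales as 1/λ⁴, so plugging in the wavelengths quoted earlier reproduces the ‘over ninefold’ figure for violets against reds. The 450 nm value for blue is an assumption of this sketch; only 390 nm (violet) and 700 nm (red) appear in the article.

```python
# Rayleigh scattering: scattered intensity is proportional to 1 / wavelength^4.
def scatter_ratio(short_nm, long_nm):
    """How many times more strongly the shorter wavelength is scattered."""
    return (long_nm / short_nm) ** 4

violet, blue, red = 390, 450, 700  # wavelengths in nanometres

print(f"violet vs red: {scatter_ratio(violet, red):.1f}x stronger")  # ~10x
print(f"blue vs red:   {scatter_ratio(blue, red):.1f}x stronger")    # ~6x
```

The fourth-power dependence is why even a modest wavelength difference produces such a lopsided sky.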

2/3 of parents admit to fabricating an answer rather than reveal their own ignorance

Why can’t humans fly? Devastatingly, but perhaps unsurprisingly, it is mathematically impossible for humans to fly unaided. Natural flight depends on a critical tripartite balance between wingspan, strength, and body size. Even an Icarian contraption of wax and feathers would demand a wingspan of over 6.5 metres to enable the average male to take to the skies. But do not be disheartened, there may be hope yet – British inventor Richard Browning has created one of the world’s first flight suits, dubbed Daedalus after the father of Icarus. Presently confined to a modest hover, the six miniature thrusters of Browning’s ‘Iron Man’ suit are capable of hitting speeds of 200 mph and altitudes of a few thousand feet. However, your new means of daily commute does come with some hefty disclaimers – primarily the need to master controlling 130 kilos of thrust. This enormous physical demand sees Browning completing a weekly schedule of 150 km of cycling, a 40 km run, and three intensive bodyweight training sessions. The suit has led to the foundation of Project Gravity, a company specialising in human propulsion technology, meaning this may just be the beginning of flight for us mere mortals. Admittedly, following a child’s train of thought often seems impossible. Addressing their questions, however, both the fundamental and the frivolous, with patience and enthusiasm has great value. From such opportunities we can nurture the minds of the next generation, whilst simultaneously cultivating our own. Issy MacGregor is a second year Genetics PhD student at The Institute of Genetics and Molecular Medicine.

features

Owning the world Clari Burrell explores land use conflict and resolution The Dakota Access Pipeline protests in North America recently made headlines across the world. Native Americans and their supporters fought the decision to route an oil pipeline close to the Standing Rock Sioux reservation. Their protests were not enough to stop the development of the pipeline, and deep concerns remain that their water supply is vulnerable to contamination. Wide reporting raised awareness of land ownership and land use conflict. Such a clash is not an isolated incident. In 2014 the government of Ecuador faced protests over its decision to open the Yasuni National Park, a pristine area of the Amazon, to oil exploration. In the Arctic, controversial oil drilling is going ahead in spite of opposition. Agriculture also eats up more and more land to meet demand for food and resources. The production of palm oil, for example, has led to deforestation and loss of habitat for endangered species. Land is limited, and conflicts over how it should be owned, used, and managed will only increase as the human population grows. In light of this, protected areas and national parks stand as the sometimes threatened guardians of the Earth’s remaining wild places. They are invaluable for maintaining biodiversity and protecting endangered species, yet companies or governments are sometimes happy to compromise them if more immediate profits can be made from alternative land uses, such as mineral extraction or energy projects. Too often, local communities also feel robbed of land they would wish to use to support themselves. In many countries conservation areas are surrounded by poor communities. This inevitably leads to resentment when people cannot access resources such as land for crops or livestock, firewood, or water. Protecting ecosystems and biodiversity is extremely important, but if it comes at the price of human misery the lines become blurred.
Africa, as home to some of the world’s most iconic national parks, faces many land use conflicts. There is currently a severe poaching problem across the continent: up to 35,000 elephants were killed last year. Current figures from the African Wildlife Foundation suggest that the black rhino population has decreased by 97.6% since 1960. More positively, there are successful models of

land use conflict resolution out there. African Parks, an NGO that specialises in taking over the direct management of parks under long-term agreements, states that “Parks are a choice of land-use. For these parks to survive in the long term, local people need to value them, and therefore must derive benefits from the park’s existence.” African Parks has the largest counter-poaching force in Africa, equipping law enforcement teams to deal with even the most highly organised and well-armed crime syndicates. It is their community engagement work, however, which ensures long-term sustainability for the parks. Involving local people in the management of the parks results not only in job creation, but also in better networks that feed back information on potential poaching activity. Investment in education and infrastructure in surrounding communities is prioritised, as is ensuring local people have access to platforms to make their needs and concerns heard. African Parks hopes to take on responsibility for 20 parks covering an area of 10 million hectares by 2020. This would mean sustainable protection for a vast geographically and ecologically diverse area. Land use conflicts are of course not restricted to poorer countries. Over 50% of Scotland is owned by fewer than 500 people, and the words of Adam Smith still ring true today: “As soon as the land of any country has all become private property, the landlords, like all other men, love to reap where they never sowed, and demand

a rent even for its natural produce.”

As soon as the land of any country has all become private property, the landlords love to reap where they never sowed

We have a crippling housing crisis in the UK, but restrictions on building housing on rural land are strict. Tenant farmers, landowners, conservationists, and energy and leisure companies, amongst others, all have different ideas of how the land should be used. The Land Reform (Scotland) Act, passed last year, went some way towards protecting the rights of tenant farmers and providing a fund for community buy-outs, but many felt it did not go far enough in making land use fairer in Scotland. Rejected amendments to the Bill included placing restrictions on the amount of land that could be owned by any one individual, and a ban on land ownership by companies based in offshore tax havens. What is certain is that land is wealth and power, and ultimately all of us have a huge stake in how the Earth’s limited resources are used. Clari Burrell is a second year plant science PhD student.

Image from National Park UK


regulars: politics

Filter bubbles and the future of politics Teodora Aldea puts her news feed on trial and investigates the effects of the filter bubble on politics The political landscape has witnessed several turns of events recently that surprised many, from the election of Donald Trump as president of the United States, to the decision of the United Kingdom to leave the European Union. And while one side of each election will have invariably celebrated the desired result, the other side was left not only disappointed, but also baffled at how wrong the predictions had been. Why had nobody seen this coming? Some argue that the surprise came as a result of many people living in a bubble in which they are surrounded by individuals who already agree with them and consume media that already appeals to their core beliefs. This leaves little room for new information or exposure to ideas different from our own, creating an echo chamber which continually feeds our own opinions back to us. Without healthy political debate, we are prevented from forming a realistic and integrative view of the political beliefs beyond our immediate circle. While the press and television have received their fair share of finger-pointing and bias accusations, there is another, less palpable bubble that has been proposed as the culprit: the internet, and specifically social media, where a large proportion of the population find their news. Networks like Facebook and Twitter are well known to filter the content shown to users based on several algorithms, which

Illustration by Julia Rowe


take into account previous interactions with members and posts, such as likes and comments. This contributes to what Eli Pariser (founder of viral content website Upworthy and renowned political activist) once defined as the filter bubble, or “that personal ecosystem of information that's been catered by these algorithms”. These bubbles are problematic regardless of which side of the debate we find ourselves on; while they can perpetuate legitimate and professionally produced news, this information might only reach those who are already aware of it. Likewise, social media can also help disseminate misinformation or “fake news”, which invariably affects people’s perceptions and the way they vote in elections. This is especially amplified by the fact that the quality and veracity of the news content distributed on social media are not assessed in any way. Add to this the viral nature of social media and you’ve got thousands of people arguing over an out-of-context story about Hillary Clinton selling weapons to ISIS. One of the main side effects of this is that users are isolated from content they might disagree with, which in some cases prevents accurate news from reaching them, fuelling misinformation-driven propaganda. Even more worrying is the fact that the algorithms and processes through which this happens remain largely unknown and mostly unregulated, which means they can never be properly studied, understood and improved by those outside the companies which develop them. It makes sense, then, that more effort should be put into studying the filter bubble and its effect on democracy. This is no trivial task, especially since details such as the algorithms that social networks use to curate content are generally not made public, but progress is still being made. For example, Professor Philip N. Howard from the Oxford Internet Institute leads a team of academics whose main body of work is studying the impact of digital media on the global political landscape. His research has paved the way to understanding the significant impact that these platforms can have on the democratic process and is focused on concepts such as digital activism and computational propaganda, as well as the distribution of fake news aided by the use of automation in the form of bots. For instance, during the US presidential election, Professor Howard’s research shed light on the Twitter bots used for propaganda and uncovered that, while both candidates owed a percentage of their Twitter traffic to automated accounts, the pro-Trump bots accounted for nearly a third of his traffic and tweeted four times as much as the pro-Clinton bots did. Though this kind of research is in its infancy, many experts believe the phenomenon is to some degree influencing voters’ opinions and choices, and are subsequently wary of the disastrous effects it could have if political elites abuse this tool in order to engineer public opinion. Although it is essential that companies like Facebook and Twitter take responsibility and try to be transparent about their technology or try to regulate it, the power to change the status quo also lies with users themselves. Reclaiming our own feeds, engaging with people who disagree with us, and developing a more thorough fact-checking habit can only help in forming a more moderate and balanced view of the surrounding world and maybe bursting that bubble.
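The ranking logic the article gestures at can be illustrated with a toy sketch. This is emphatically not any real platform's algorithm (those remain proprietary, as noted above); the function, field names and topic labels here are all invented for illustration. Score each candidate post by how much it overlaps with topics the user previously liked, and show only the top-scoring few — even this crude rule quickly squeezes opposing viewpoints out of the feed.

```python
from collections import Counter

def rank_feed(posts, liked_topics, k=2):
    """Rank posts by overlap with the user's liked topics; keep only the top k."""
    history = Counter(liked_topics)  # unseen topics count as zero
    scored = sorted(posts, key=lambda p: -sum(history[t] for t in p["topics"]))
    return scored[:k]

posts = [
    {"id": 1, "topics": ["politics-left"]},
    {"id": 2, "topics": ["politics-right"]},
    {"id": 3, "topics": ["politics-left", "climate"]},
    {"id": 4, "topics": ["sport"]},
]

# A user who has only ever liked left-leaning and climate posts...
feed = rank_feed(posts, ["politics-left", "politics-left", "climate"])
print([p["id"] for p in feed])  # [3, 1] — the opposing view (post 2) never appears
```

Each click feeds back into `liked_topics`, so the narrowing compounds over time: the echo chamber is an emergent property of the loop, not a deliberate design goal.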
Teodora Aldea is a first year PhD student currently researching adipose tissue biology.

regulars: technology

Image from Harvard iLab

Inspire Launch Grow: innovation on the doorstep James Ozanne takes a look at some of the latest cutting edge innovations coming out of the University of Edinburgh The University of Edinburgh is internationally recognized for the high caliber of its research, and so it comes as no surprise that innovation also thrives on campus. To help capitalize on this environment and translate these ideas into the wider world, the University hosts the annual Inspire Launch Grow competition. Student and staff entrepreneurs pitch their startup businesses to a panel of judges who in turn offer expert business advice and cash prizes. This competition thus also provides the rest of us with a great showcase of the latest and greatest innovations coming out of the University. Let's take a look at three of the most interesting businesses from this year’s competition, which might soon be affecting our daily lives. The first contestant to catch my eye was biotechnology researcher Lissa Herron and her Emerging Innovation Award-winning venture, Eggcellent Proteins. Besides the fantastic name for lovers of bad puns, this spin-out aims to use chickens as bioreactors for protein production. They do this using a novel genetic engineering approach to create birds that produce vast quantities of the desired protein in their eggs. The protein can then be harvested and used in a wide range of applications, from basic research to, perhaps most excitingly, medicine. Protein-based therapeutics have been a rapidly growing class of drugs in recent years, prompted in particular by the rise of lucrative antibody-based therapies. The current methods for producing these proteins use cell-based bioreactors, which are time-consuming, expensive and difficult to scale up. In contrast, there is already a highly optimized agricultural infrastructure for chickens which, together with the potential to produce high yields of protein per egg, means that this technique is a competitive alternative. We'll have to see how this early-stage technology develops and is adopted, but if the hype is to be believed, poultry farming could one day be more than just the purview of the food industry. Highlighting the great diversity of research taking place in Edinburgh, the next contestant to draw my attention came from the geosciences. The company Sigma Tree, fronted by Murray Collins, specializes in mapping and quantifying forest biomass using sophisticated imaging techniques. In the face of climate change and a rapidly growing human population, anthropogenic damage to the environment is clearly an issue of pressing concern. Therefore, having the tools to track and subsequently manage this damage is of much importance. In Sigma Tree's case, they have developed an impressive automated image analysis system that can map deforestation and other forestry changes based on satellite images, allowing companies

and governments to efficiently manage their forestry resources over vast areas. This will be particularly valuable in the tropics of southeast Asia and South America, where poorly managed (and in some cases illegal) deforestation is of most concern. In the end, the impact of this technology will largely depend on how the stakeholders in the forestry sector use it. As awareness of environmental damage grows, we can hope that Sigma Tree's technology will be adopted and used to make a real difference. The final contestant that drew me in combined the agricultural and environmental themes of the two previous companies in his young startup, MiAlgae. Runner-up for the prestigious Innovation Cup prize, Douglas Martin's company aims to make an impact in the multi-billion pound livestock feed market. Currently, the majority of livestock feed is supplemented to provide extra essential nutrients that improve growth rates and the health of animals. However, many supplements currently come from highly unsustainable sources. These include soybeans, whose cultivation has led to mass deforestation, and fishmeal, made from pulverized dried fish, which puts even more strain on our overfished oceans. MiAlgae believe that the solution to this problem can be found in microscopic algae that can be processed to produce a supplement high in protein and essential oils. Using the company's innovative aquaculture technology, they plan to grow this super-charged microalgae at an industrial scale with high efficiency, even utilizing waste water in their system. If MiAlgae can scale up, they present themselves as the potential sustainable future of the livestock feed industry. This competition goes to show the wide range of innovative business ventures coming out of the University. Furthermore, it highlights a general trend of companies spinning out of academia and the far-reaching potential these ventures could have for our world. Whether they deliver, only time will tell.
James Ozanne is a second year Roslin Institute PhD student.

regulars: arts

Medicine and the art of representation Haris Haseeb writes of the historical and current significance of the art of representation in medical practice At the heart of any scientific enquiry is representation. In the physical sciences, for instance, the complexities of relativity are represented, both numerically and symbolically, in an equation of three constituent parts: energy, mass, and the speed of light. In the chemical sciences, the sum of atomic interaction is represented isomerically; I am fondly reminded of the familiar high-school image of cis and trans alkenes, and the nuances of their respective geometric forms. Medical science is no exception to this phenomenon, for the representation of the human body, and more specifically the form of its interior structures, is of profound significance not only in our conceptualization of medicine, but also in our application of it. The very nature of representing (and in a sense, recreating) images of the human form, however, has changed vastly across historical periods. Whilst it is clear that the art of representation has played a critical role in both our understanding and teaching of the medical sciences, reflecting upon how this has been achieved at different periods in time informs us not only of the evolution of medical practice, but also of a subtle yet significant shift in the progress of science. It makes chronological sense to consider the anatomized body of the Renaissance as the point at which the art of representation emerged prominently as a method of understanding, applying and ultimately celebrating the phenomenology of the human body. In his De humani corporis fabrica, Vesalius would not only create a compelling account of the anatomical structures of the human interior, but so too would he revolutionise the manner in which corporeality would be represented.
His groundbreaking exercise in anatomy and illustration radically reconceptualized the role of representation in medicine, aligning itself, if unintentionally, with an Aristotelian aesthetic ideology which placed ideas of truth and objectivity at the centre of the anatomical sciences. The commitment to objectively represent the body’s anatomical form laid the foundations for the growth of empiricism in the following centuries. However, where the anatomists of the 16th century explored the contents of


the human interior and represented their structures as microcosmic constellations of a theologically ordained universe, the later anatomists of the Enlightenment exchanged this divine representation for that of mechanism. Importantly, while there had been a shift in intellectual thought, exchanging theology for science, the method through which anatomical representation was achieved was largely the same - to display and present the body as it was objectively observed.

The anatomized body, dissected and displayed in distinct geometric planes, became emblemized as the great artistic symbol of medicine

This recognisably modern anatomical tradition, rooted in the art of both dissection and representation, would dominate and to an extent define medical practice for the duration of the 18th and 19th centuries. The anatomized body, dissected and displayed in distinct geometric planes, became emblemized as the great artistic symbol of medicine, whilst also remaining at the core of medical teaching. Its value, then, was simultaneously aesthetic and instructive. It was during the 20th century, a period which would see the greatest raw progress across the scientific disciplines, that medicine’s preoccupation with anatomisation would become displaced by an alternative mode of representation. Empiricism, concerned only with observable data, outgrew itself, and instead, the medical enquiry would look to the newly emerging fields of imaging and functionalization. Though it is often the X-ray that is referred to as the most revolutionary mode of representation in modern medicine, its essence is closely aligned to that of the anatomical dissection. The key distinguishing feature, of course, is the replacement of the steel knife with a less invasive (though at the time no less harmful) invisible spectrum of electromagnetic radiation. In turn, the X-ray, as in the instance of dissection, would produce accurate images of the human interior which dealt ultimately in objective truths, presenting the human form as it had anatomically been observed. It was not until the later advent of the electrocardiogram (ECG) that the medical profession would begin to resist the established precedent of actualities and instead attempt to represent the human interior as an abstraction of its anatomical self. This shift in our method of understanding functionalization radically revolutionized not only the delivery of care to patients, but also represented the beginnings of a fundamental transformation in our conceptualization of the complexities of the human form. At the heart of this conceptual shift was a growing fascination with electricity. The electrical phenomenology of the body had long been the subject of scientific enquiry, most notably in the context of galvanism in the 18th century. However, it was not until the end of the 19th century that a method emerged by which the electricity of the body, and specifically the heart, could be represented and read as a reflection of both structure and function. In 1901, inspired by the works of earlier cardiologists, the Dutch physician Willem Einthoven successfully managed to plot five electrical deflections, caused by waves of depolarisation spreading across the length of cardiac walls, on a graph of voltage against time.
The device he used to achieve this was truly remarkable; the string galvanometer, weighing in excess of 600 pounds, would exploit the predictability of the heart’s electrical rhythms, using electrodes to measure the potential difference at a given interval during cardiac contraction. As it later transpired, underpinning the five deflections were three distinct anatomical events: the depolarisation of the heart’s upper chambers, the


Illustration by Andrea Chiappino

subsequent depolarisation of its lower chambers, and finally, repolarisation - a sequence of electrically driven events essential in the ejection and delivery of blood to feed our metabolic processes.

Beyond simply representing the heart as an abstraction of its anatomical self, the ECG also provides information about its temporal orchestration

And so, not only would Einthoven’s device make clearer the succession of the heart’s electrical events, but so too would it provide information about its anatomical integrity through a mode of functional representation which,

though empirically verifiable, was fundamentally abstract - no more than a series of peaks and troughs on a page. Because of its relative infancy, Einthoven’s galvanometer could initially be used only to assess disorders of the heart’s intrinsic rhythm. However, this discovery catalyzed immense progress in our understanding of cardiac function, and as our grasp of electrophysiology improved, so too did our means of measurement; over half a century later, the ECG as we know it today became well established as an essential diagnostic tool in clinical practice. The details and complexities of the processes which explain the mechanism of the ECG are well and truly outwith my comparatively elementary understanding of biophysical science. What I am able to say, however, is that contained within the relative secrecy of its electrical deflections is a quiet, topographical correspondence between the heart’s anatomical structures and its subtler electrical processes. The result of measuring the relationship between these complex interactions is both outstanding and fundamentally abstract; five modest

deflections, and their respective pathological dysmorphisms, bring voice to an otherwise silent interior structure. But beyond simply representing the heart as an abstraction of its anatomical self, the ECG also provides information about its temporal orchestration; images of the myocardium, as it moves from relaxation to contraction, are presented to us in concurrence with the passage of time. Its representation then, does not only reflect a technological revolution within the medical profession; it also serves as a quiet reminder of the close relationship between the empirical rigor of medical science, and the intricate abstractions of its conceptualisation and application - a thought which, as scientists, we seldom acknowledge. Haris is entering his 5th year of medicine at the University of Edinburgh, and his current areas of research explore the interactions between medical science and the arts.


regulars: letters

Dr Hypothesis EUSci’s resident brainiac answers your questions Dear Dr Hypothesis, I have recently watched Interstellar and was slightly confused by how gravity slows down time. Is this science fiction? If not, is it possible that one of two twins could take a quick lap around the nearest black hole and come back to Earth to find that the other twin had aged more? Confused Callum Dear Confused Callum, No, this is far from science fiction! The gravitational fields caused by heavy objects do indeed affect the passage of time. This is a consequence of Einstein's 1915 theory of general relativity, which describes how matter and energy distort space and time around them. In this framework, space and time are knitted together to form a four-dimensional continuum called spacetime that can be stretched and curved by matter. A good analogy to help picture this is to imagine placing a rubber sheet over a pool table. This sheet represents spacetime. If you then place a pool ball in the middle of the sheet, the ball will pull the sheet down into a dip. This effect is akin to how the gravity from a massive object distorts spacetime. You may then decide to roll a Malteser across the rubber sheet. What would you see? The Malteser, here representing an object of tiny mass compared to the ball, would of course roll down into the dip created by the pool ball. Had there been no pool ball, the Malteser would have continued along its path undisturbed. This is how the presence of mass alters the shape of spacetime, which in turn alters how objects move through it.

Space and time are knitted together to form a four dimensional continuum called spacetime

In the rubber sheet analogy it is only the spatial dimension that is curving and stretching, but in the real world the time dimension behaves in the exact same way. The consequence of this is that clocks located in an area of high levels of gravity run slower. If at the time of Earth's formation someone had placed a clock at the top of Mount Everest (ignoring of course that the geological movements that created Mt. Everest hadn't happened yet!) and one at sea level, they would now be 39 hours out of sync. This time dilation effect means that the satellites used for GPS navigation have to make corrections for their clocks running quicker than those on Earth. Now to answer your question about the twins ageing differently due to the effects of a large gravitational field, this is exactly what general relativity predicts and so the science in Interstellar passes the test (on that count at least). Both you and your twin


Illustration by Lucy Southen

will have travelled an equal distance through spacetime, but you will have travelled further through the spatial dimensions while he or she will have travelled further through the time dimension. Hence your earthbound twin appears to have aged more.

Clocks located in an area of high levels of gravity run slower

Whilst all this sounds extraordinary, what I have described to you has been largely conceptual; the difficulties arise in practice. As mentioned previously, the effects of this process in a gravitational field as weak as Earth's would have caused a clock difference of only 39 hours over 4.3 billion years. So to make this adventure worth your while you will need to travel somewhere with a very strong gravitational field to slow your time down. Say we spend one year on the surface of the Sun: this would give us roughly one minute less ageing than our earthbound counterpart. This, of course, ignores the obvious problems associated with standing on the surface of the Sun, which clocks in at a toasty 5505°C! The real limiting factors here are that the human body cannot withstand living in gravitational fields more than two to three times the strength of the one found on Earth, due to its inability to pump blood to the head. Combine this with the obvious difficulties of actually getting to a place with enough gravity to elicit a noticeable effect, and the likelihood of anyone attempting this any time soon appears slim. Unfortunately, as one might suspect, the only answers to evading our genetic fate continue to lie within the realm of fiction. Dr Hypothesis' alter ego is James Hitchen, a 1st year physics PhD student at the School of Physics and Astronomy.
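Dr Hypothesis' two figures can be reproduced with the standard weak-field approximations: a clock raised by height h in surface gravity g runs fast by a fraction of roughly gh/c², while a clock on a star's surface runs slow by roughly GM/(Rc²) relative to a distant observer. A short Python sketch with round textbook values lands close to the numbers quoted above (the exact answers depend on the height and planetary age you assume):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
YEAR = 3.156e7  # seconds in one year

# Everest vs sea level: fractional rate difference ~ g*h/c^2.
g_earth, h_everest = 9.81, 8848.0
frac_everest = g_earth * h_everest / c**2
drift_hours = frac_everest * 4.5e9 * YEAR / 3600  # accumulated over Earth's lifetime
print(f"Everest clock gains ~{drift_hours:.0f} hours")  # close to the ~39 hours quoted

# Surface of the Sun: fractional slowdown ~ G*M/(R*c^2).
M_sun, R_sun = 1.989e30, 6.96e8
frac_sun = G * M_sun / (R_sun * c**2)
print(f"~{frac_sun * YEAR:.0f} seconds less ageing per year on the Sun")  # about a minute
```

The fractional effects themselves are tiny (around 10⁻¹² on Everest, a few parts per million on the Sun), which is exactly why only billions of years, or an absurdly deep gravitational well, make them noticeable.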

regulars: reviews

Review: The Handmaid’s Tale TV adaptation

The Handmaid’s Tale depicts the dystopian society of Gilead, formerly the USA, overtaken by a Christian fundamentalist group. The rapid degradation of society to form a ‘new‘ Old Testament-inspired regime, led by a military-style dictatorship, results in the removal of women’s rights. Women are no longer able to read or write, have employment or have any freedoms at all. This upheaval happens under the pretext of declining birth rates and sterility due to pollution and disease. As a consequence, the few fertile women in the population become a commodity - a national resource viewed essentially as walking wombs.

The show is a close adaptation of Margaret Atwood's novel of the same name. Though originally written in 1985, both novel and show carry a disconcerting relevance amongst the political uncertainty of our Trump era. This is precisely what makes the show so poignant. It is beautiful both in the cinematic and material contexts, yet it is equally shocking in its stark political content. The performances by Elisabeth Moss (seen previously in ‘Mad Men’), Samira Wiley (from ‘Orange is the New Black’), and Yvonne Strahovski (notable in ‘Chuck’) are particularly noteworthy. The characters are presented with such intimacy and nuance - just one of the reasons why this is one of the best shows on TV right now.

The narrative follows the Handmaid ‘Offred’, a fertile woman whose position in society is to act as a surrogate for the wives of prominent commanders in the new government and bear their children. The time period of the show is unclear, yet throughout the series flashbacks are integrated showing characters’ lives before and during the uprising. It is so close to reality it is uncomfortable. Of particular importance is not only the harsh curtailing of women’s rights, but also that of science. Individual rights, social liberty, and scientific knowledge are washed to the side in this future. Fundamentalist religion, fuelled by a distrust in science and government, results in the disregarding of doctors, scientists, and the highly educated alike. For example, it is forbidden to mention that men may be sterile.

The only downside of the show is that it doesn’t include the racial element present in the book - the curtailing of minorities. Even so, this does allow more focus on the main narrative. A must watch for women and men alike, particularly those with science backgrounds – a warning for the future!

Alice Stevenson is a Master of Chemistry graduand.

student learning development

Helping students and staff succeed in their current roles and in their future careers, by providing University wide support for teaching, learning and researcher development. More information can be found at:

- researcher skills development: research planning, communication skills, professional development, career management, business and enterprise, and more
- continuing professional development and practice sharing in teaching, learning and supervision
- support for curriculum, programme and assessment design and development



EUSci #21  