
Issue 19: Spring 2016








How can we delay the inevitable?

Why we engage in self-sabotage

Time as the ultimate agent of evolution



Contents

Focus

9 Circadian clocks: the controllers of sleep Lisa Hilferty takes a look into the world of biological clocks

10 Your grandmother’s diet and you Jade Parker explores how environmental factors affect the lifespan of your cells

11 After death Rebecca Pitkin meets with Dr Elena Kranioti to discuss the science that begins at life’s end

12 Cancer and immortality Fiona Ramage discusses the delicate balance between slowing ageing and facilitating cancer progression

14 Immortal animals Lisa Hilferty looks into the animals whose ability to live forever has left humanity striving for immortality

16 From scanning lemons to brain operations Hannah Johnston discusses the advantages and history of magnetic resonance imaging

17 Einstein and the speed of light James Cockburn discusses Einstein’s theory of Special Relativity

18 The evolution of complexity Daniel Soo questions the common assumption that complexity evolves in a straight line through time

20 A cognitive chronology Julia Turan explores how we perceive time

21 Tick tock goes the (body) clock Polina Shipkova explores the concept of social jet lag and its relevance to health

22 Procrastination nation Vicky Ware looks into the science behind putting off the things that matter most

Illustration by Eliza Wolfson

2 Spring 2016 | eusci.org.uk

24 How the Cold War clocked the birth of neurons Daniel Soo examines how the curious enlistment of a Cold War carbon isotope led to new insights on neurogenesis in our brains

40 Waves in space and time Nico Kronberg explores a whole new way for us to investigate the universe


25 Up in the air Miguel Cueva investigates how climate change affects the aviation business

41 The European experiment Ruairi Mackenzie looks at how the outcome of June’s EU Referendum might impact British science

26 At Earth’s end Adelina Ivanova shares her perspective on humanity’s race against our planet’s decline

42 John O'Keefe: a modern Renaissance man Eirini Papadaki picks the brain of a Nobel Prize winning neuroscientist

27 A zoo for the ages Angus Lowe looks at the case for extraterrestrial civilisations, where they might be, and how to test one hypothesis

43 Cyber-roaches Calum Turner explores the creation of an army of remote controlled cockroaches equipped to find survivors at disaster scenes

28 The nature of time Caroline Stillman unscrambles our understanding of the direction of time flow in our universe

44 Opening the womb of discussion on human embryonic law Alyssa Brandt makes the case for better representation in human developmental legislation


29 Communicating science in Edinburgh Polina Shipkova investigates how the Edinburgh Science Festival communicates science

30 The poultry predicament Rakhi Harne examines the production of the world’s most popular meat

32 Mission to Mars Meghan Maslen explores the science behind why Scott Kelly spent one year in space

45 GM plants - a weapon against excess carbon? Viktoria Dome discusses innovative strategies that could enhance carbon capture and storage in plants

46 Dr. Hypothesis EUSci’s resident brainiac answers your questions

46 Review: Stitchers Miguel Cueva reviews Freeform’s new sci-fi crime drama ‘Stitchers’

34 Ratted out Kerry Wolfe examines the controversial Shiant Isles Seabird Recovery Project

36 The next phage YuGeng Zhang explores how viruses may hold the answer to antibiotic resistance

38 Finding the way back Saishree Badrinarayanan explores the mechanism behind path integration and its navigational uses

Cover illustration by Elena Purlytė




News Team Shinjini Basu, Hans Sonntag-Hunting, Polina Shipkova, Mike F. Müller, Lynda-Marie Taurasi, Robb Hollis, Natasha Tracey, Mia von Scheven

Dear Readers,

Focus Team Lisa Hilferty, Jade Parker, Rebecca Pitkin, Fiona Ramage, Hannah Johnston, James Cockburn, Daniel Soo, Julia Turan, Polina Shipkova, Vicky Ware, Miguel Cueva, Adelina Ivanova, Angus Lowe, Caroline Stillman

Feature Authors Simone Eizagirre, Asimina Pantazi, Stefano Albrecht, Eleanor Spring, Lisa Hilferty, Mia von Scheven, Polina Shipkova, Rakhi Harne, Meghan Maslen, Kerry Wolfe, YuGeng Zhang, Saishree Badrinarayanan, Nico Kronberg

Regulars Authors Polina Shipkova, Meghan Maslen, Kerry Wolfe, YuGeng Zhang, Saishree Badrinarayanan, Nico Kronberg, Ruairi Mackenzie, Calum Turner, Chiara Herzog, Alyssa Brandt, Amelia Howarth, Viktoria Dome, Miguel Cueva, Eirini Papadaki

Copy-editors Miguel Cueva, Caroline Stillman, Adelina Ivanova, Niels Menezes, Polina Shipkova, Catherine Lynch, Ozioma Kamalu

Sub-editors Oswah Mahmood, Niels Menezes, Lisa Hilferty, Daniel Soo, Fiona Ramage, Shiau Haln Chen, Catherine Lynch, Holly Fleming, Polina Shipkova, Pranav Bheemsetty, Nico Kronberg, Mike Freya Mueller, Jane Brennan, Clare McFadden, Simone Eizagirre, Brian Shaw, Ashley Dorning, Owen Gwyd James, James Ozanne

Art Team Elena Purlytė, Joanne Pilcher, Jemma Pilcher, Charlotte Capitanchik, Eliza Wolfson, Ashley Dorning, Alyssa Brandt, Scott D'Arcy, Ana Rondelli, Lucy Southen, Katie Forrester, Anna Mikelsone, Lynda-Marie Taurasi, Prerna Vohra, Áine Kavanagh

Editor Alessandra Dillenburg

Editor Kerry Wolfe

Deputy Editor Meghan Maslen

Though the academic year may have ended, EUSci’s still here to bring you our latest issue. We’ve tackled a complex theme for Issue 19, so be sure to take the time to flip through our pages. Have a browse through articles that range from social jet lag to immortal animals to understanding that pesky procrastination problem most of us know all too well. Head to page 8 to explore all that and more.

Need to ease into the issue before attempting to unravel the intricacies of time? Check out our news section on page 4 to keep yourself up-to-date in the world of science. Then flip to page 6 to catch up on the latest research taking place in Edinburgh.

As usual, we’ve worked hard to fill our main sections with an assortment of fascinating features, which start on page 31. We kick off the section by giving you the scoop on science communication in Edinburgh—something we’re clearly keen on. We then explore NASA’s plan to send humans to Mars with a look at astronaut Scott Kelly’s year in space and its implications for an out-of-this-world travel experience. Our feature on gravitational waves takes us beyond our own solar system with a discussion on how this recent discovery will enhance our ability to investigate the universe.

This issue’s regular fixtures start on page 41, where we examine how the upcoming EU referendum will affect British science. We have a cuppa with John O’Keefe, the winner of the 2014 Nobel Prize in Medicine, and chat with our Dr Hypothesis about the merits of the gluten-free fad. We end the magazine with a review of Stitchers, a TV programme we’re sure you’ll be binge-watching in no time.

When you’ve finished flipping through this issue, be sure to check out our newly revamped website! We’ve given our online presence a makeover, complete with an updated news section and links to previous issues. Head to www.eusci.org.uk to have a look.

We’d like to thank the IAD for their support in printing this issue.
We’d also like to recognize our editors, writers, and illustrators—we couldn’t have done this without you all. The magazine staff is always looking for new talent to join our team, so please get in touch at euscimag@gmail.com or by visiting our website to subscribe to our mailing list. We hope you enjoy this latest issue, and look forward to bringing you Issue 20 next winter!

Alessandra and Kerry

Deputy Editor Selene Jarrett

Focus Editor Vicky Ware

News Editor Priya Hari

Layout Editor Áine Kavanagh

Layout Editor Vivian Ho

Art Editor Jemma Pilcher


News

Travel a #wildmile for autism The University of Edinburgh’s Patrick Wild Centre for Research into Autism, Fragile X Syndrome, and Intellectual Disabilities celebrated its five-year anniversary on the 16th of April 2016 with the launch of its #wildmiles campaign. The #wildmiles campaign is a yearlong initiative that encourages centre members, supporters, and members of the public to travel a mile with the aim of raising awareness about autism and intellectual disabilities. The Patrick Wild Centre brings together basic scientists and clinical researchers with the aim of understanding the neurological basis of intellectual disabilities as well as developing and testing new therapeutic interventions for these disorders. In December 2015, a research team led by developmental psychologist Dr Sue Fletcher-Watson investigated the possibility of improving social communication skills in children under six years of age diagnosed with autism using a specially designed iPad game. The centre also actively engages with people affected by these conditions and their families to help them better understand their circumstances. For example, it is hosting a ‘family in residence’, where a family affected by fragile X syndrome (FXS) is spending a year collaborating with different branches of the centre and taking an active role in commenting on the teaching and research related to FXS. The #wildmiles launch on the 16th saw a group of 25

Image courtesy of Shinjini Basu

members, friends, and family climb Arthur’s Seat; a group of students cycle to North Berwick; and students swim, followed by staff and students running at the Great Edinburgh Run on Sunday the 17th. The campaign is open to the public, so the next time you go out for a run or a cycle, consider donating your miles by sharing a picture on the centre’s Facebook page (Patrick Wild Centre) or Twitter (@PWCentre) using the #wildmiles hashtag. Shinjini Basu

Hunting for the smallest possible genome Is it possible to create a living organism from scratch? No, according to a new study from the Venter Institute in California published in the journal Science. Craig Venter and his colleagues were the first to engineer an organism with a fully synthetic genome in 2010. Building on this achievement, their new study describes an attempt to create a minimal genome that includes only genes essential for growth and survival. The end result, a bacterium labelled Syn 3.0, contains 473 genes, fewer than any naturally occurring organism. However, the difficulties encountered by the team show that we are still very far away from

Image of Craig Venter and colleagues courtesy of Steve Jurvetson, Wikimedia commons


understanding the molecular basis of life. Based on existing knowledge from the literature and limited data identifying genes essential for life, the group’s initial attempt at creating a viable organism with a minimal genome failed. Instead, they went on to systematically disrupt the function of different genes, excluding non-essential ones step by step, until they arrived at Syn 3.0. Of the final 473 genes, roughly one third were of unknown function. The initial goal of the study—rational design of a simple organism where the biological meaning of each gene is clear—remains elusive. The researchers provide some clues about the main difficulties, listing several classes of genes that proved important in designing minimal genomes. Firstly, they identified many quasi-essential genes that are necessary for robust growth, requiring them to compromise between genome size and growth rate. Secondly, they pointed to so-called 'synthetic lethal pairs'. This term describes two genes that supply the same essential function; you can delete one, but not both. While this redundancy is a useful safeguard from an evolutionary perspective, it also illustrates that understanding the relationships between different genes is still a big hurdle in molecular biology. The design of Syn 3.0 demonstrates our limited knowledge of the genome as the overriding architect of life. Many organisms require more genes than a minimal set contains, for example to adapt to different environmental conditions. In humans, more than 20,000 genes give rise to the different cell types in our bodies. If we cannot yet explain the fundamentals needed for growth and survival, how can we hope to fully tease apart the genetic mechanisms underlying human development and disease? Hans Sonntag-Hunting


New 3D bioprinter brings promise for organ transplants Scientists from the Wake Forest Institute for Regenerative Medicine in North Carolina, USA, have developed a 3D bioprinter with the potential to produce organs ready for transplant into patients. The new technology is called an integrated tissue-organ printer (ITOP), and it has certain advantages over other methods. Existing bioprinting technologies are limited in their ability to produce tissues of larger size and suitable structural integrity. ITOP overcomes these challenges through an innovative design that allows scientists to produce artificial parts of the human body. The researchers tested their technology by bioprinting bones of the skull, ear-shaped cartilage, and muscle. One of the unique features of ITOP is that it vascularises the printed parts. Vascularisation—the formation of blood vessels in tissue—is essential for supplying cells with nutrients. The bioprinted parts did not have real blood vessels; instead, microchannels embedded in them performed the same function, allowing nutrients to be transported within the artificial tissue. This means viable tissues of greater size than before can be produced. The process of bioprinting requires the desired tissue, for example ear cartilage, to be scanned in order to obtain imaging data. This information is processed to create a programme that controls the bioprinter. Bioprinters use a gel containing cells and biodegradable molecules. The programme has instructions on how to deposit layers of the gel so that the produced tissue matches the original. Normally, when multiple layers are deposited, the product can become unstable; ITOP, however, was able to address this issue.

The structural stability of the produced tissues comes from a unique feature of the methodology used in the printing process. In addition to the elements described above, another hydrogel without cells is also printed, and it provides the initial structural stability the tissue needs. This gel is characterised as sacrificial, because eventually the cells in the bioprinted tissue produce a substance which replaces the gel. This study brings us one step closer to the printing of viable tissues and contributes to the work towards creating artificial parts for transplantation into patients. The research summarised here may help bridge the gap between the demand for tissues and organs for transplants and the small number of donors, which is an important issue in our society. Polina Shipkova

Image courtesy of Wake Forest School of Medicine

Recruiting the immune system to the fight against cancer This February, some of the major UK newspapers were raving about a revolutionary breakthrough in cancer research. Some of the bolder ones were even reporting a cure. The cause of the hype: a specific type of cancer immunotherapy that reprogrammes the immune system to fight cancer. Study results presented at a conference in Washington elicited the media echo, even though the research has yet to be published in a peer-reviewed journal. It is therefore advisable to wait until the research has passed scientific quality control and all details of the study are known before drawing any public conclusions. In the meantime, it is worthwhile to look at the science behind this new approach, as well as at publications that demonstrate high remission rates in specific cancer patients using similar therapies. The immune system is designed to search out and destroy foreign elements in the body, such as cancer cells. However, tumours have evolved a variety of mechanisms to evade this destruction. The current approach uses one type of immune cell, the T-cell, a white blood cell that has the potential to destroy tumour cells if only it recognises them. These T-cells are extracted from a patient and genetically modified to express a chimeric antigen receptor (CAR). This allows the T-cells to recognise the cancer cells specifically and subsequently attack the tumour. The challenge in applying this technique is to find a surface molecule that is specific to a certain type of cancer but does not cause the T-cells to target other vital tissues. Small-scale studies have already reported severe to lethal side effects because T-cells started to attack normal tissues. The study that sparked all the media attention was designed for specific types of blood cancer in patients who have become resistant to other therapies, and it was indeed very efficient in eradicating tumour cells. A similar study published in 2014 had equally high remission rates. However, patients also suffered from serious side effects. All of them developed a severe syndrome caused by an overreaction of the immune system, so they had to be hospitalised, with some even taken into intensive care. It will be a challenge for future research to make this treatment safer and to find specific surface molecules that enable the targeting of other tumour types. While the strategy is promising, it should be regarded as a last resort for now, due to the severe side effects. Mike F. Müller

Image of T cell courtesy of NIAID, Wikimedia Commons

Research in Edinburgh

Advances in gene modification could lead to disease-resilient livestock Scientists at the Roslin Institute feel they are on the fast track to producing genetically modified pigs resistant to African swine fever. Genome editing allows scientists to make precise modifications to an organism’s genetic makeup. Genome editors are commonly known as ‘molecular scissors’ because they can be used to cut parts of DNA. There are three types of genome editor: transcription activator-like effector nucleases (TALENs), zinc finger nucleases (ZFNs), and the most recent, CRISPR-Cas9, which is currently much discussed in the media. In 2013, researchers at the Roslin Institute used both TALENs and ZFNs on pig zygotes (fertilised eggs) to compare the results, and were able to produce live genome-edited pigs. Pig 26 showed that gene editors could precisely edit the genome of a pig zygote, removing exactly one DNA base without any trace of alteration to the rest of the pig’s vast genome. This was a valuable advance in producing viable livestock resilient to disease. Nearly three years after Pig 26, Roslin scientists have progressed one step further. Spread by ticks, African swine fever is highly contagious and fatal to farmed pigs. However, the pig’s wild cousin, the warthog, is resistant to the disease, thanks to its genetic makeup. Roslin scientists altered the genetic code of farmed pigs by editing five letters of a gene, matching it to that of the warthog. They believe this genetic modification could produce pigs resistant to the African swine fever virus. Although the disease has never been found in the UK, outbreaks are rampant in Russia and Sub-Saharan Africa, causing concern among farmers that it could spread. Controlled trials are currently underway at the Roslin Institute to test whether the genetically modified pigs are indeed resistant to the disease. Lynda-Marie Taurasi

Image courtesy of Wikimedia Commons

Personalising ovarian cancer treatment Why do some patients respond well to treatment while others don’t? Why do some cancers shrink away when exposed to therapy, while others grow on regardless? In ovarian cancer, a disease which claims the lives of around 150,000 women per year worldwide, differences in survival and response to therapies are particularly striking. Some patients respond well to treatment, while others quickly succumb to their disease. Researchers in the Ovarian Cancer Translational Research Group at the Edinburgh Cancer Research Centre are working to try and uncover the reasons behind these differences between patients. The idea is to look back at previous patients and examine their disease at the molecular level. This can involve looking at exactly how certain genes are encoded by sequencing their DNA, or by looking at how the genes are being used in the tumour. Then we compare what we find to which treatments

Image courtesy of FreeDigitalPhotos.net


worked well for a patient to see if we can find groups of people that respond well to certain therapies. Once we’ve found these associations, new patients can be tested to see which of these groups they fit into. In this way, their therapy can be tailored to the biology of their disease—an idea commonly known as personalised medicine. Understanding the biology behind ovarian cancers also helps us identify new ways to target cancer cells while leaving normal cells unharmed. These new treatment options might be drugs already in use for other diseases or cancer types, or they might be completely novel. Targeted therapies can be very effective, but most tumours eventually become resistant to these agents. Understanding how this resistance develops is important for finding ways of re-sensitising disease to therapy, and for choosing new treatment strategies once a particular drug has stopped working. Looking at patients who respond exceptionally well, or who don’t respond at all, is particularly key to understanding this. Together, these research avenues point us toward new drugs for the treatment of ovarian cancer. They help us identify the patients who are likely to benefit most from new therapies, those with the highest chance of success on conventional chemotherapy, and those who may benefit from different ways of administering treatment. The key is to rapidly translate what we find in the laboratory back into clinical practice to improve the treatment and management of patients diagnosed with ovarian cancer.

Robb Hollis


The art of drinking with your bum Tiny black and brown creatures, buried in the flour of plastic lunch boxes, sitting in a warm incubator. This is what I found when Dr Barry Denholm, a lecturer at the Centre for Integrative Physiology at the University of Edinburgh, showed me his ‘fun project’. I got the feeling that it might be ‘scientific madness’ to study how little beetles are able to drink through their anus, but I realised this was a fascinating and smart system with a great vision behind it. Tribolium castaneum (the red flour beetle) and Tenebrio molitor (the mealworm beetle) are widespread and devastating pests of flour and cereal products, as they are able to survive under very dry conditions. These pests posed a problem to the Ancient Egyptians over 4,000 years ago, and an effective method to combat these beetles is still lacking to this day. In most insects, the renal tubule—the urine-producing part of the kidney—floats freely; in beetles, it sits very close to the wall of the rectum. This arrangement is called the cryptonephridial system and functions to conserve water efficiently. The urine flows in one direction, and the faeces in the rectum in the opposite direction, like a countercurrent system. The renal tubules manage to produce a very high salt concentration in their lumen, and thus draw water from the rectal contents by osmosis. This means the beetle can produce very dry faeces, which is important for the insect to preserve water. This specialisation was described in desert beetles: buried in the sand, they come to the surface before dawn, open their bums, and take up water from the moist air using this adapted system.

Arthur Ramsay last described the cryptonephridial system over 50 years ago. Dr Denholm is keen to revisit Ramsay’s findings and take them to the next level using modern molecular techniques. Identifying the genes and molecules essential for this system to work efficiently may offer opportunities for manipulation. Besides being a very interesting system for physiologists and entomologists, Dr Denholm’s work might lead to new approaches to pest control with a huge impact on our society. In addition, this system—the most powerful water-conserving system known to nature—might inspire engineers and designers in the field of biomimicry, developing sustainable solutions by imitating nature’s patterns and strategies. Mia von Scheven

Image of red flour beetle courtesy of Peggy Greb, Wikimedia Commons

HER2 breast cancers and resistance to targeted drugs The big C. The sentence everybody dreads: “I’m very sorry, it’s cancer.” In the UK, more than 350,000 people hear this every year. Of those, half die of their disease within 10 years. Sometimes, a cell has a mutation that can be either inherited or acquired during your lifetime. Every cell in your body has lots of small mutations; usually these don’t have any effect. Occasionally, one of these mutations will let the cell carry on dividing, overcoming the normal mechanisms put in place to stop uncontrolled cell division. In cancer, a mutation means the cells neither stop dividing nor die as they should, so the mass of cells continues to grow. Cancer can occur in any cell type in the body, although some cell types are more likely to acquire a mutation resulting in cancer. One of these is a

Image of purple cancerous cells in breast tissue, courtesy of National Cancer Institute, Wikimedia Commons

group of cells of the breast. Each year, breast cancer accounts for more new cases than any other cancer. The normal breast is made up of lots of fatty tissue and mammary ducts. The duct cells are where the tumours originate. Many different types of breast cancer exist. They are first classified by whether or not they carry a genetic mutation that predisposes to developing both breast and ovarian cancers. This mutation is in the BRCA1/2 genes—the same mutation Angelina Jolie has. However, most breast cancers come from mutations that are picked up throughout our lives. They are then sorted by the types of receptors on their cell surfaces. Most acquired breast cancers are driven by two hormone receptors, for oestrogen and progesterone, and are quite easily treated. The type of breast cancer I study, HER2-positive breast cancer, is caused by a receptor on the cell surface that signals the cell to carry on dividing when it shouldn’t. This type of breast cancer is more difficult to treat than hormone receptor breast cancers, even though there are many drugs that target the receptor, and most breast cancer deaths are caused by this type. The cancer cells are clever; they find ways to carry on growing without using the receptor they used to rely on, which means the targeted drugs no longer work. In my PhD in the Brunton lab at the Institute of Genetics and Molecular Medicine, I’m trying to find out how the cells become resistant to the drugs, and why they are more aggressive than other breast cancers. Natasha Tracey

Focus

Time

This issue is an absolute bumper. We’ve had so many great submissions, and writers have got to grips with a wide range of topics within this issue’s focus on time. Regardless of which area of science you’re interested or involved in, time has an impact on it and on your daily life.

If you’re interested in the psychological standpoint on time, how we perceive its passage and whether we use it effectively, check out Julia Turan’s article (p22) on perception of time or my contribution on procrastination (p24). Caroline Stillman goes one step further in exploring the passage of time by investigating whether we even know what time is (p30), and James Cockburn explores Einstein’s theory of special relativity (p19).

Unlike physicists, biologists don’t directly study the nature of time itself, but time impacts everything they do. Rakhi Harne looks at the beginning of life and the developments that happen like clockwork to allow chickens to develop from egg to embryo to hatchling (p14). Lisa Hilferty looks at nature’s daily clock, circadian rhythms, on page 9, and Jade Parker looks at what happens when the cellular clocks stop ticking in a piece on cellular senescence (p10). Rebecca Pitkin continues this theme with an exploration of what happens after the clocks stop ticking—when bodies start to decompose (p11). As Lisa Hilferty found, some animals don’t experience the ravages of ageing in the same way humans do, and are in fact immortal (p6). Hannah Johnston has looked back in time at the history of brain imaging (p18), while Fiona Ramage looked into the timescale of cancer (p12). More in the field of biology comes from Daniel Soo, who has looked at the effect of time on evolution (p20) and, in another piece, has explored how scientists determine how old things are through carbon dating (p26).
Looking at things from an environmental perspective, Miguel Cueva looks at the time scale of global warming (p13), and Adelina Ivanova investigates how long the Earth will continue to be habitable by humans (p14). Looking further afield, Angus Lowe investigates why humans haven’t yet been visited by extraterrestrial life, and ponders if aliens are ignoring us (p29).

All in all, an unprecedented collection of Focus articles for EUSci this issue, so a huge thank you to all the writers and editors who’ve made it possible. Sit back and spend some well-deserved time reading everyone’s contributions… Vicky Ware, Focus Editor

Illustration by Alyssa Brandt



Circadian clocks: the controllers of sleep Lisa Hilferty takes a look into the world of biological clocks One of the first things we all do when we wake up is reach for our phones to see if there is enough time for another five minutes of sleep. Little do we know that all our cells know exactly what time it is. All organisms, from bacteria to human beings, have an internal representation of time—they possess a biological clock. William Schwartz, a professor of neurology at the University of Massachusetts, stated that “all biological clocks are adaptations to life on a rotating world.” Biological clocks respond to external stimuli such as light and temperature. A range of experiments involving the removal of these stimuli whilst keeping the organism in constant conditions—usually dim light—have shown that biological clocks are self-sustaining and originate from inside the organism. These experiments showed that even without social and light cues, daily cycles do not become disorganised—they still last about 24 hours. These cycles remain constant for many years without external cues. There are three types of rhythms for biological clocks: ultradian, infradian, and circadian. Ultradian rhythms repeat more than once within a 24-hour period, while infradian rhythms can take days or longer to cycle. We shall focus on the third type, circadian rhythms, which last for approximately 24 hours. As the rhythm does not last exactly 24 hours, biological clocks are like a fast or slow running watch that needs to be reset daily, usually by light. One study used a hamster model to show that a mere half-second of light was able to reset the clock when the hamster was kept in constant darkness. These clocks are regulated by a centre in the brain called the suprachiasmatic nucleus (SCN). The SCN receives direct inputs from the eye, allowing it to create and synchronise rhythms based on light signals.
There are also peripheral clocks in different tissues of the body, each acting differently depending on the tissue it is in. A number of genes termed ‘clock genes’ have been discovered, and their expression changes throughout the day. The main ones are called clock, bmal1, per, and cry. A simple explanation of their interaction starts with clock and bmal1 forming a complex called clock/bmal1. This activates the per and cry genes, resulting in protein production. These proteins form a per/cry complex that stops their own production by interacting with clock/bmal1, producing what is called a negative feedback loop. Clock also influences hormone release, mood, and behaviour.
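A negative feedback loop like this is the basic recipe for a molecular oscillator: delayed repression of your own synthesis produces sustained cycles. The sketch below is a classic Goodwin-style toy model, not the published parameterisation of the mammalian clock; the variable names are loose, hypothetical stand-ins for the players named above, and the parameters were chosen simply so that the loop oscillates.

```python
# Toy Goodwin-style oscillator for a clock-gene negative feedback loop.
# NOT real clock biology: illustrative parameters only.

def simulate(t_end=100.0, dt=0.002, a=12.0, n=12):
    """Three-stage loop: x ~ per/cry mRNA (synthesis repressed by
    nuclear protein z), y ~ PER/CRY protein, z ~ nuclear PER/CRY
    that feeds back to repress transcription."""
    x, y, z = 0.1, 0.1, 0.1
    zs = []
    for _ in range(int(t_end / dt)):
        dx = a / (1.0 + z ** n) - x   # repressible synthesis, linear decay
        dy = x - y                    # translation
        dz = y - z                    # nuclear import
        x += dx * dt
        y += dy * dt
        z += dz * dt
        zs.append(z)
    return zs

def count_peaks(series):
    """Count local maxima, a crude measure of sustained rhythmicity."""
    return sum(
        1 for i in range(1, len(series) - 1)
        if series[i - 1] < series[i] > series[i + 1]
    )
```

With a sufficiently steep repression term (high `n`), the simulated nuclear protein level keeps cycling indefinitely rather than settling to a steady state, which is the qualitative behaviour the clock genes need.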

Disruption of circadian rhythms has been associated with major depressive disorder

We often hear the term biological clock in reference to a woman’s fertility, but clocks have many roles in the body. The circadian clocks control the release of hormones, such as melatonin and cortisol, as well as the nocturnal release of prolactin and growth hormone. They also help to control the timing of ovulation, and the reproductive period of seasonal breeders such as sheep.

Disruption of circadian rhythms has been associated with major depressive disorder (MDD). People with MDD have a significant reduction in the rhythmicity of cyclic genes. Circadian clocks regulate sleep, so in patients with depression, an early predictor of response to antidepressants is when sleep patterns return to normal, indicating the clinical relevance of circadian rhythms. This disruption has also been linked to addiction. Recent studies have shown that chronic exposure to alcohol and other addictive substances can lead to lasting alterations in circadian rhythms that contribute to a cycle of addiction and relapse. The clock gene has been shown to act as a negative regulator of the biochemical reward you get from drugs. These results have raised questions about the influence clock genes have across the body and on a range of other diseases, prompting more research into circadian rhythms.

Biological clocks not only inform us of what time it is, but could also be the key to understanding a range of diseases. So as you go through your daily activities, take a minute to appreciate your SCN just ticking away in the background, ensuring everything is running smoothly.

Lisa Hilferty is a fourth-year Reproductive Biology student

Illustration by Elena Purlytė


focus

Your grandmother’s diet and you
Jade Parker explores how environmental factors affect the lifespan of your cells

Ageing may sometimes feel like it hits you overnight: the discovery of a grey hair or wrinkle shocks us into rushing for the anti-ageing creams. But the process of ageing is actually very gradual, with each cell in our body possessing a biological clock that is slowly preparing the cell for old age. This clock, which is ingrained in our genetic makeup, can only go on ticking for so long. The crucial components that decide when time runs out are the telomeres. These time-keepers, repeat sequences at the end of each chromosome, shorten with each cell division. Once a telomere reaches a critically short length, usually after many rounds of cell division, the cell reaches senescence. In essence, our cells are programmed to die from the moment we are born. The age at which we will die, however, is not written in stone and is influenced by our genes, the environment, and our diet.

One way in which our lifespan can be manipulated is through foetal programming. What your grandmother ate whilst your mother was in the womb could have consequences for your health and lifespan. Dietary changes as simple as increasing the amount of protein eaten during pregnancy and eating less protein whilst breastfeeding could increase the next two generations’ lifespans. The importance of prenatal nutrition was starkly demonstrated during the Dutch famine of 1944, in which pregnant and lactating women faced an extreme reduction in their daily calorie intake. This period of maternal starvation had a profound effect on the health of the general population, with children conceived during the famine being more likely to suffer from coronary heart disease and other chronic diseases. On the flip side, the benefits of calorie restriction without malnutrition have been shown in animal trials.
In a 1935 study, researchers at Cornell University limited the number of calories rats received whilst maintaining vitamin and mineral intake at normal levels. The animals following the calorie-restricted diet lived longer and showed delayed progression of age-related diseases. It is thought that calorie restriction works by diverting the body’s energy and resources away from reproductive processes and into the maintenance of cells. A study conducted at Cambridge in 1994 went further, suggesting that genes and hormones involved in reproduction are harmful in later life and that bypassing reproduction may allow us to increase our lifespan.

Illustration by Jemma Pilcher

In essence, our cells are programmed to die from the moment we are born

One of the most talked-about hormones with regard to ageing is insulin: lower insulin levels are associated with longevity, and centenarians are more likely to carry the genetic variant that suppresses insulin-like growth factor. In a 2003 study from the University of Maryland, researchers also discovered that calorie-restricted animals had decreased insulin levels. The possibility of taking control of ageing with something as simple as insulin sent medical researchers into a frenzy, trying to design a drug that could prove to be the fountain of youth. This new era of ‘geroscience’ could effectively allow doctors to address the one common underlying factor of conditions such as cancer, dementia, and diabetes—ageing. However, the elixir of life remains a distant goal for scientists, as drug trials are still in their early stages.

There are some big questions and issues to be dealt with if scientists are to be successful in their venture. A healthy ageing population would put a major strain on society. Although the healthcare system would save millions in its fight against age-related diseases, the population dynamics of society would be severely disrupted. The rising retirement age would mean fewer jobs for the young, and the demand for basic needs would increase, as would the general cost of living. With advancing knowledge of ageing at a cellular level, scientists are creeping closer and closer to discovering the secret of bending the laws of nature to their will and ultimately delaying ageing. Hopefully, if we ever reach that point, they will also have spent some time considering how to accommodate a healthy older nation.

Jade Parker recently graduated from the Royal Veterinary College


After death
Rebecca Pitkin meets with Dr Elena Kranioti to discuss the science that begins at life’s end

Forensic anthropologist Dr Elena Kranioti is in the midst of closing a cold case for the Pathology Division of the Ministry of Greece. They contacted her in late 2015 to help identify a body that had been found on the island of Crete seven years ago. The corpse was already highly decomposed when it was discovered, and a partial skeleton is now all that remains. Dr Kranioti began a detailed examination of the remains. Analysis of the skull and pubic bone suggested this individual was male, and probably not local to the area. However, the overall condition of his teeth and skeleton made his age much harder to determine: his teeth implied he was old, yet his skeleton pointed to middle age. Luckily, small fragments of rib had been recovered, and in such circumstances, the chemical properties and architecture of the ribs can be used to estimate the age of an adult to within two to three years. This individual would have been just shy of 40.

Who we are, how long we’ve lived—and, later, how long ago we died—becomes ingrained in our flesh and bones. Biological markers, in part laid down by physical and chemical clocks that start ticking before we’re born and continue long after our death, give vital clues to our identities and personal journeys. It is a fascination with these markers and the stories they can tell that drives forensic pathologists and anthropologists to work at the line between life and death.

Such stories are not always pretty. Dr Kranioti has a particular research interest in suspicious deaths, suicides, and mass disasters, where accurate knowledge of the time of death matters a great deal. Bodies decompose in phases, undergoing physical and chemical changes which start at the moment of death.
In such circumstances, the biological clocks that begin to tick post-mortem are key to uncovering when and how someone died and what happened after his or her demise. At death, our body temperature changes as it equalises with the environment. Carbon dioxide builds up as our cells stop receiving oxygen. Within minutes, brain cells begin to die, followed by a cascade of other cells. Between six and 36 hours after death, the body experiences temporary rigor mortis as lactic acid and myosin in the muscles form a gel-like substance. At the same time, decomposition begins as microorganisms within the intestine anaerobically digest the cells in their vicinity and spread to invade other parts of the body. The bacteria putrefy the body, releasing gases which fill the body cavity, leading to a bloating that becomes visible two to three days after death.
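The rough post-mortem timeline described in the surrounding paragraphs can be collected into a small lookup table. The intervals below are taken straight from the article's figures and are illustrative only; as Dr Kranioti stresses, real decomposition timescales are highly variable, and the function name and structure are hypothetical.

```python
# Approximate post-mortem markers as described in the article.
# Intervals are (earliest, latest) in hours since death; None means
# the marker persists indefinitely once it appears. Illustrative only.
POST_MORTEM_MARKERS = [
    ("temporary rigor mortis",       6.0,        36.0),
    ("visible bloating begins",      48.0,       72.0),
    ("face becomes unrecognisable",  4 * 7 * 24, None),  # ~4 weeks on
]

def markers_consistent_with(hours_since_death):
    """Return the markers whose article-quoted window covers a given
    post-mortem interval (in hours)."""
    expected = []
    for name, start, end in POST_MORTEM_MARKERS:
        if hours_since_death >= start and (end is None or hours_since_death <= end):
            expected.append(name)
    return expected
```

For example, a body one day after death falls only in the rigor mortis window, while one at 60 hours falls in the bloating window, which is the kind of coarse bracketing that motivates the more precise microbial and chemical clocks discussed below.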

Bodies decompose in phases, undergoing physical and chemical changes which start at the moment of death

Dying cells lose their structural integrity and release enzymes, further breaking down tissues. Tissues liquefy on different timeframes, providing another useful marker. The face, for example, will become unrecognisable after approximately four weeks. The tissue melts away into the environment, leaving fairly stable skeletal remains. Or, if conditions allow, the body will start to mummify. Here, fat tissue undergoes hydrolysis, leading to the formation of a waxy substance known as adipocere, which forms a firm cast around the body that can persist for centuries. However, as Dr Kranioti is keen to point out, the timescales for these soft tissue processes are highly variable, affected by both the external environment and the condition of the body. “Most researchers shift towards analytical methods since descriptive methods are not reliable. This is a field that will always have room for more research,” she says.

Microbial clocks, which use changes within the microbiome of both the corpse and the surrounding soil to estimate the post-mortem interval (PMI), offer a promising new avenue. In a study published in eLife in 2013, Metcalf et al. found that measurements of microbial communities could be used to estimate PMI to within approximately three days over a 48-day period following death. The soil chemistry around grave sites is also an area of active research: ongoing experiments using near-infrared spectroscopic analysis of soil samples to determine PMI show particular promise, predicting PMI to within 13–16 days for corpses 1.5 to 3.5 years old.

Dr Kranioti herself is sticking with skeletons. In research still to be published, she has developed an analytic method which allows for accurate age estimation from the material properties of tiny bone fragments. For the stories that begin with death, new chapters are opening because the clocks don’t stop when we die. Each new technique helps bring us closer to knowing the journeys of others after they’re gone.

Rebecca Pitkin is an Edinburgh-based freelance science producer, writer and editor

Illustration by Eliza Wolfson



Cancer and immortality
Fiona Ramage discusses the delicate balance between slowing ageing and facilitating cancer progression

If you were asked to visualise human immortality, what would you see? Would you imagine complete invincibility? Or would you settle for a more realistic vision, perhaps immunity to illness, or even eternal youth? With modern medicine, many illnesses are now being cured, prevented, or managed at a higher rate than ever before. A reasonable degree of disease immunity may therefore be within our grasp in the not-too-distant future. Being resistant to the effects of ageing is more troublesome, but it remains conceivable, as some living creatures seem to escape the ravages of age to some degree. This idea that ageing can be prevented or slowed down is the focus of several research groups across the globe.

Cancer cells do not die of old age; they have to be killed

Some cells in our body have the innate ability to resist ageing. Stem cells are one such type, as are the germline cells that give rise to the cells responsible for sexual reproduction. Both fulfil their role of maintaining important cell populations by producing new cells, and are able to continue dividing for far longer than most other cells in our body. Unfortunately for us, these native, healthy cells are not the only ones that approach immortality: this is a common feature of cancerous cells, and it is highly advantageous to them. Because they can continue to divide almost indefinitely, cancer cells do not die of old age—they must be killed.

Inside the cells of our body, information on how the cell should develop, function, reproduce and ultimately die is contained inside DNA, which is deciphered by the cell. A lot of important information is spaced out along the DNA molecule, and its loss or damage could have devastating consequences. The structure of DNA poses certain problems; for example, when DNA is copied during cell division, the very ends of the molecule cannot be copied and are lost. Additionally, the exposed ends of the DNA molecule can be recognised by repair mechanisms as ‘damage’, which could be problematic. Telomeres act as a protective ‘cap’ at the ends of DNA, and solve these two issues: they prevent the detection of DNA ends as damage, and, crucially, are composed of ‘unimportant’ DNA, whose loss poses less of a threat to the cell.

Interestingly, it is the length of these telomeres that determines the cell’s biological age. They shorten with each cell division, and when a critical length is reached, the cell can no longer reproduce, and sometimes faces death—it enters senescence. Accumulation of senescent cells and loss of regenerative potential lead to ageing and a gradual decrease in the function of tissues and organs. In most situations, telomere shortening is inevitable and final. Telomeres are built by an enzyme called telomerase, which extends the ends of DNA molecules. Telomerase is active very early in development, and most cells lose the ability to use it shortly after this period. Some cells maintain it, but then use it only transiently. In a very limited number of cells, this isn’t the case: telomerase can be active in cells which need to reproduce regularly and in a sustained manner. In some stem and reproductive cells, telomerase activity can be sufficient to offset the ageing provoked by telomere shortening, resulting in some degree of immortality for these cells.

Wouldn’t it then be possible to reactivate telomerase to confer immortality on the cells of our body and defy ageing? Unfortunately, the answer isn’t so simple. Avoiding senescence and cell death is a common feature of cancerous and precancerous cells, and this is achieved partly through the use of telomerase to extend their telomeres. Not all cells that continue to use telomerase are cancerous, but it has been shown that bypassing telomere shortening is a crucial step towards the proliferation of cancerous cells.
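The telomere countdown described above can be captured in a toy model. The numbers below (a 10 kb starting length, roughly 100 base pairs lost per division, senescence below about 4 kb) are illustrative orders of magnitude rather than measurements; the point is the qualitative contrast between a somatic cell and one with enough telomerase to offset the loss.

```python
# Toy telomere "countdown": a fixed chunk is lost each division, and
# the cell becomes senescent below a critical length. Numbers are
# illustrative only, not real biological measurements.

def divisions_until_senescence(telomere_bp=10_000, loss_per_division=100,
                               critical_bp=4_000, telomerase_bp=0):
    """Return the number of divisions before senescence.
    telomerase_bp: base pairs re-added per division (0 = typical
    somatic cell with no telomerase activity)."""
    divisions = 0
    while telomere_bp > critical_bp:
        telomere_bp -= loss_per_division
        telomere_bp += telomerase_bp
        divisions += 1
        if divisions > 1_000_000:   # cap: effectively immortal
            return divisions
    return divisions
```

With the defaults the cell stops after 60 divisions, echoing the finite replicative lifespan of normal cells; setting `telomerase_bp=100` balances the loss exactly, and the countdown never runs out—the replicative immortality that cancer cells exploit.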
In fact, replicative immortality is one of the main hallmarks of cancer, and can be achieved partly by avoiding telomere shortening. It must be noted that avoiding telomere shortening isn’t inherently dangerous; it merely contributes to the threat posed by cancer cells. However, when combined with other defining features of tumours, it becomes menacing. One defining trait of cancer cells is their ability to sustain growth and proliferation, exceeding normal limits on cell numbers and subsequently defying normal tissue structure and function. They also evade normal growth suppression mechanisms. Cancer cells have the ability to bypass certain checkpoints in the cell-division cycle, where their progression would normally be blocked, allowing them to divide indefinitely. These cells also avoid normal tumour-suppressor mechanisms, which habitually induce cell death in damaged cells, and can therefore continue to divide despite critical damage. Finally, cancer cells can induce the formation of blood vessels to support the energy demands of tumours, and are able to invade other tissues and spread to other parts of the body. When considered together with these other hallmarks of cancer, it is easy to see why the immortality of these cancerous cells and their ability to continue dividing almost indefinitely pose a threat. Is it possible to activate telomerase in cells without promoting cancer?

Whilst ageing seems undesirable, it might be a necessary evil

Some studies in genetically modified animals have been carried out in an attempt to answer this question. It seems that animals with increased telomerase activity are more prone to tumour development. On the other hand, mice deficient in telomerase activity, or with reinforced tumour suppressor mechanisms, are more resistant to tumours. However, both also often show signs of premature ageing, and excessive senescence can be equally damaging. Other studies have shown that overexpressing tumour suppressor genes in mice whilst maintaining other normal functions can result in cancer protection without accelerated ageing, giving us hope that there is potential to limit one without affecting the other. Thus, whilst ageing seems undesirable, it might be a necessary evil until we know more about telomere lengthening and tumorigenic activities. Ageing is, in some sense, a control mechanism to prevent cancer proliferation, and even excessive proliferation of healthy cells. The fact that about 90% of cancer cells show some level of telomerase reactivation demonstrates the importance of limiting telomerase activity.

Despite this risk of accelerating ageing, is there a possibility of modifying telomerase activity in cancer to limit or reverse its development? Research into the mechanisms of cancer development and proliferation has led to many possible avenues of cancer therapy being explored, from more general therapies like chemotherapy or radiotherapy to treatments targeting very specific mechanisms of cancer cell function. Research is being undertaken to hinder all of the main hallmarks of cancer, separately or in combination. It makes sense to target the telomere-lengthening capabilities of cancer cells: if such a function is so fundamental to cancer cell survival and identity, blocking this function should limit proliferation, hopefully without affecting normal cells.

Illustration by Charlotte Capitanchik

Ageing is, in some sense, a control mechanism to prevent cancer proliferation, and even excessive proliferation of healthy cells

A number of different telomerase-targeting therapies are currently being explored, aimed at many of the different processes that regulate telomerase. Therapies have been developed targeting the part of telomerase that acts as a template for DNA synthesis, as well as the part of telomerase which adds this DNA to the ends of telomeres. Approaches using small-molecule therapies, the immune system, or gene therapy have generated mixed results. A promising avenue is modifying telomerase’s access to telomeres.

Immortality, or rather resistance to ageing, is far from being fully achieved, with cancer being one of the many limiting steps in reaching this goal. With increasing understanding of the biology of cancer and ageing, the right balance between defying ageing and avoiding cancer may be determined in the not-too-distant future, and would be of immeasurable benefit to us all.

Fiona Ramage is a student on the MScR Integrative Neuroscience programme


Immortal animals
Lisa Hilferty looks into the animals whose ability to live forever has left humanity striving for immortality

There are two things guaranteed in life: death and taxes. Although there is nothing we can do to eliminate the latter, the search for immortality has been going on for centuries. With the current leaps forward in scientific and medical research, the goal of extending lifespans indefinitely is becoming increasingly realistic. Advances in medicine have led to a number of people living beyond the age of 100, the oldest verified person being a French woman who died at 122 years old. Looking into the animal kingdom has given researchers more hope, as a number of animals that can seemingly live forever have been identified. These discoveries have given insights into methods evolved by Mother Nature that humans could exploit.

It can be cut into multiple pieces and a fully functional flatworm will develop from each section

Consider the lobster. These animals get more fertile and powerful as they age. As lobsters grow, they moult, replacing their now too-small shell with a new one. This process uses a large amount of energy, directly related to the size of the new shell. Lobsters can keep growing because they express an enzyme called telomerase, which repairs the long repetitive DNA segments at the ends of chromosomes called telomeres. Telomeres get shorter when a cell divides—a process needed to make a new shell in lobsters. If they get too short, the DNA cannot be replicated effectively, so the cell dies. The continued expression of telomerase in lobsters prevents this (in humans, telomerase is tightly regulated, and defects can cause cancer). If older lobsters tried to moult, they would die from exhaustion due to their size; instead the shell gets cracked, infected, or falls apart, causing their death. In February 1977, the largest recorded lobster to date, weighing 20.14 kilograms, was found in Nova Scotia, Canada. Determining a lobster’s age is difficult, but a lobster weighing around nine kilograms was estimated to be 140 years old, so the giant Canadian lobster could easily have been over 200 years old. Recently, a new method for ageing lobsters has emerged. Researcher Raouf Kilada and colleagues have identified areas of the lobster anatomy that do not moult. These structures, namely the eyestalks and gastric mills, accumulate rings as the creature ages—similar to other marine animals and trees—which can be used to accurately calculate the age of an individual lobster.

It is important to note that lobsters are not technically immortal; they are merely biologically immortal. Biological immortality describes creatures whose rate of cell ageing is stable or decreasing as the whole organism gets older. Biologically immortal animals can die from disease, natural disasters or predation, but they do not age. Lobsters are one example; others include turtles and tortoises. Certain turtle species do not age after reaching sexual maturity, showing no difference between juveniles and adults. A giant Galapagos land tortoise that died in 2006 in an Australian zoo was confirmed through DNA analysis to be 176 years old.

Another example is found in a species of flatworm known for its amazing ability to completely regenerate from a small set of cells. It can be cut into multiple pieces and a fully functional flatworm will develop from each section—the equivalent of a whole human regrowing from an amputated limb. This remarkable capability exists because the flatworm’s cells behave much like stem cells, each with the potential to become any cell type needed for complete regeneration.

Most animals evolved to undergo sexual reproduction to increase genetic variation. The more variation within a species, the greater the chance that the species will evolve to survive changing conditions such as global warming and disease.
However, one organism has developed a different method to achieve this. Distant relatives of flatworms called bdelloid rotifers are microscopic creatures that live in freshwater. For the past 80 million years, every member of the 400 known species of bdelloid has been female, reproducing asexually. So how have they survived for millions of years? The answer is simple—they steal genes from other organisms. A study in 2008 comparing the genetic sequence of bdelloids to those of other animals showed that 10% of bdelloid DNA comes from different species; this integrated foreign DNA originated from 500 different creatures. A range of theories has emerged to explain how bdelloids incorporate foreign DNA. One involves absorption via eating; a second is that they incorporate the DNA when repairing double-stranded breaks. Bdelloids have been shown to survive long periods (up to nine years) in very dry conditions, which can damage the creature’s DNA, and this is thought to be a major driving factor in the evolution of their unique ability to accumulate foreign DNA.

However, there is officially only one immortal animal: a type of jellyfish called Turritopsis dohrnii (T. dohrnii), just 4.5 millimetres in size. Unlike lobsters and turtles, T. dohrnii uses a method known as transdifferentiation—the ability of a cell to change into a different cell type.

Even though humans as a whole are not immortal, there is a subset of immortal cells within the human body

This process is rare and does not require the transforming cell to be a stem cell or a progenitor cell. Currently, it is unclear whether all cells in T. dohrnii have the potential to become any other cell or whether this is confined to a certain group of cells. T. dohrnii develops into an adult, reaching sexual maturity like the majority of other species. However, instead of continuing down the path of ageing, T. dohrnii reverts back to its immature state. After a period of time, the jellyfish can go through maturity again to become an adult. This cycle can be repeated indefinitely, making T. dohrnii a truly immortal creature. It is fair to say that T. dohrnii has mastered the art of avoiding the ageing process.

Humans have been working on methods to do the same. The process of freezing has been around for a long time, as has the idea of freezing human beings, leading to the emergence of a method called cryopreservation. Cryopreservation has come a long way over the past few decades and has become an area of much scientific interest. Recently, it has been used to preserve fertility in young cancer patients undergoing chemotherapy. The first child born using frozen sperm and the first child from a frozen embryo were born in 1958 and 1985, respectively. Many companies now offer to freeze deceased loved ones. Although to date there has been no confirmed case of a person ‘coming back to life’ after cryopreservation, progress in other fields such as reproductive biology shows it is theoretically possible. Research into cryopreservation also has clinical relevance: it could potentially enhance transplantation medicine. If whole organs could be frozen at the moment of an individual’s death and revived without any damage, they could be transported across longer distances to the most critical patients, rather than only to patients within the immediate area.

Illustration by Joanne Pilcher

Although the answer to how to achieve human immortality is still unknown, delving into the mechanisms used in the animal kingdom could assist a search that shows no sign of slowing down.

There is officially only one immortal animal

It is worth mentioning that even though humans as a whole are not immortal, there is a subset of immortal cells within the human body. Stem cells have attracted a lot of interest over the past couple of decades and can be classed as immortal, and cell lines commonly used for research are immortal given the appropriate factors to survive. Cancer cells are also immortal, dividing indefinitely and spreading into organs far from their point of origin. This uncontrolled growth is usually caused by the cancer switching off inhibitory factors and switching on stimulating ones.

Lisa Hilferty is a fourth-year Reproductive Biology student


From scanning lemons to brain operations
Hannah Johnston discusses the advantages and history of magnetic resonance imaging

You may remember the scene in the film Hannibal when Paul Krendler gets fed his own brain, whilst still alive, by Dr Hannibal Lecter. But thanks to immense advances in a technique known as magnetic resonance imaging (MRI), we don’t need to crack open the skull to observe all of the brain’s intricacies and mysteries. For example, we can now model finer details such as neurons in 3D to better image the progress of disease—because what do you call a skull without 100 billion neurons? A no-brainer!

MRI has come a long way since its introduction to clinical practice in the early 1980s. But first, how does MRI actually work? It is the fact that our bodies are around 70% water that makes imaging possible. The nucleus of each hydrogen atom in water is a single positively charged proton, which spins on its own axis. This generates an intrinsic magnetic field, analogous to a bar magnet with north and south poles. The protons are sensitive to an external magnetic field and behave in a similar way to a compass needle: when a magnet is applied, the protons can align either with or against the magnetic field. Most protons happily align parallel to the field, as this is the lower energy state. A pulse of radiofrequency energy is then applied, which the protons absorb, flipping into the higher energy state. Once the pulse is removed, the protons relax back, emitting the excess energy that they absorbed. The emitted energy is mathematically converted (I’ll spare you the equations!) into a signal which is then translated into an image. This is the general principle of MRI.

To put the size of the magnets used in MRI into context, the magnetic field produced can be a whopping 50,000 times stronger than the Earth’s field. An electromagnet of equivalent strength can lift a car!
The underlying phenomenon, nuclear magnetic resonance, was first demonstrated in 1946 by Felix Bloch and Edward Mills Purcell. Raymond Damadian, however, regarded himself as the true inventor of MRI, as he published the first image of a human chest cavity in 1977. He had initially volunteered to image himself but unfortunately was too fat for anything to be detected; the instrument only worked on more slender subjects.


MRI used to be termed nuclear magnetic resonance imaging, but the inclusion of the word ‘nuclear’ didn’t go down particularly well with the public. Unlike other well-known techniques such as computed tomography (CT, or ‘CAT’ scanning), MRI doesn’t actually use ionising radiation. CT excels at imaging bone, but MRI can ‘see’ through bone and provide detailed, highly resolved snapshots of soft tissue. Tumour cells appear darker than healthy tissue—but it’s not just tumours that this technique can detect.

We don’t need to crack open the skull to observe all of the brain’s intricacies and mysteries

Brain connectivity is modelled using an extension of MRI called diffusion tensor imaging (DTI). MRI scans show white matter, which is light grey in appearance due to the fat content of myelin, a layer of insulation surrounding nerve fibres. Because water favours diffusion along neurons rather than across them, DTI can track this water movement and show where the neurons are located in 3D space. It is used in patients with autism, traumatic brain injury, and schizophrenia. Furthermore, Grace O’Hare, a technical consultant for Medtronic, the world’s largest medical technology development company, explains that its main advantage over other imaging techniques is that it can scan the brain in real time. This provides an unprecedented degree of accuracy in neurosurgical planning and in navigation during an operation. Previously, a scan could only be taken before the surgery, with associated problems such as the patient moving or the brain shifting.

MRI has come a long way since the first images of a lemon, a finger and a human head were published in the mid-1970s. Now the entire body can be imaged, which has saved many lives thanks to the technique’s ability to detect diseases such as cancer and neurodegenerative disorders. Hopefully you’ll now agree that MRI is a much better, non-invasive way of imaging the brain compared with Hannibal Lecter’s methods!

Illustration by Jemma Pilcher

Hannah Johnston is a Chemistry PhD student


Einstein and the speed of light
James Cockburn discusses Einstein’s theory of Special Relativity

Albert Einstein. Unquestionably one of the greatest geniuses the world has ever known. The story goes that by the age of sixteen he had already started wondering about a question that would lead him to develop one of his most famous theories: the theory of Special Relativity. Einstein was thinking about what would happen if you tried to catch up to a beam of light. The obvious conclusion was that you would see a stationary or ‘frozen’ light wave once you matched the light’s speed. Einstein, guided by the equations that describe light waves, thought that this should not be possible. To get around this, he asserted that the speed of light is the same for all observers, no matter how fast they move. In this way, it is impossible to catch up to a light beam because you would always see it moving away from you at a constant speed—the speed of light—three hundred million metres per second. The remarkable consequence of this assertion is that it necessarily implies that time does not tick over at the same rate for everybody. In other words, the passage of time is a completely relative experience. For example, imagine you are standing at a train station. If a train zooms past you at, say, half the speed of light, Einstein’s equations tell you that time for people on that train appears to be moving at a slower rate than it is for you. But this is not a universal fact. For the people moving on the train, it is equally correct to say they are at rest, and that the station moves past them at half the speed of light in the opposite direction. Therefore, they conclude that time is moving slower for the person on the platform than for them. These two different points of view seem contradictory but are both absolutely correct in a given frame of reference—either the train or the station. We call this effect ‘time dilation’. Sound like science fiction?
It’s a tested fact and one that we use in technology today. Take, for example, the Global Positioning System (GPS). This works by having a number of satellites orbiting the Earth that we can communicate with. The satellites send and receive time signals and use this information to accurately determine the location of an object on the Earth. Because the satel-

Illustration by Scott D'Arcy

lites are moving with respect to us, we have to take into account this relativistic effect of time moving at different rates. If we do not take this into account then we cannot accurately calculate the difference in time signals between the satellite and receiver, resulting in a huge loss of positioning precision.
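The size of the effect follows from the Lorentz factor, γ = 1/√(1 − v²/c²). As a minimal sketch of the magnitudes involved: the snippet below computes the factor for the half-light-speed train, and the approximate special-relativistic drift for a satellite moving at an assumed round-number orbital speed of a few kilometres per second (the real GPS correction also involves general relativity, which this sketch ignores).

```python
import math

C = 299_792_458.0  # speed of light in metres per second

def gamma(v: float) -> float:
    """Lorentz factor: how much slower a moving clock appears to tick."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# The train thought experiment: at half the speed of light,
# clocks on the train appear to the platform to run ~15% slow.
print(round(gamma(0.5 * C), 4))  # 1.1547

# A satellite at an assumed ~3.9 km/s: a tiny per-second effect,
# but it accumulates to microseconds per day.
v_sat = 3_900.0
drift_us_per_day = (gamma(v_sat) - 1.0) * 86_400 * 1e6
print(round(drift_us_per_day, 1))  # roughly 7 microseconds per day
```

Microsecond-level clock errors matter because light travels about 300 metres in a microsecond, which is why ignoring the effect ruins positioning precision.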

It is impossible to catch up to a light beam because you would always see it moving away from you at a constant speed—the speed of light

Along with time dilation, special relativity includes an effect called ‘length contraction’. It is essentially the other side of the same coin: an observer moving relative to a stationary one will measure distances along their direction of motion to be shorter than the stationary observer would. This effect was something television manufacturers had to worry about. Before our modern-day versions, televisions worked by firing electrons at a screen using cathode ray tubes. The speed of the electrons was about 30% of the speed of light, and so relativistic effects were important. In particular, one had to think about the fact that the distance between the tube and the screen would appear, to the electron, to be considerably less than what we would measure it to be. This means that manufacturers had to configure the magnets used in such a way as to counterbalance this, and so end up with meaningful images on our television screens. Overall, special relativity is a mind-boggling and incredible theory of our universe which has so far passed every test performed upon it with flying colours. We must use it if we want to advance, whether that be technologically or scientifically; the distinction between the two is, after all, only relative.
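The contraction described above can be estimated with L = L₀√(1 − v²/c²). A minimal sketch, where the 40 cm tube-to-screen distance is an assumed figure for illustration only:

```python
import math

C = 299_792_458.0  # speed of light in metres per second

def contracted(length_m: float, v: float) -> float:
    """Length along the direction of motion, as measured by
    an observer moving at speed v relative to it."""
    return length_m * math.sqrt(1.0 - (v / C) ** 2)

# To an electron travelling at 30% of the speed of light, an assumed
# 40 cm tube-to-screen distance shrinks by roughly 5%.
print(round(contracted(0.40, 0.3 * C), 4))  # about 0.3816 m
```

A few per cent is easily enough to smear the beam off its intended spot, hence the compensating magnet configuration.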

James Cockburn is a third-year PhD student in Theoretical Particle Physics in the Higgs Centre at the University of Edinburgh


The evolution of complexity
Daniel Soo questions the common assumption that complexity evolves in a straight line through time

It’s now hard to imagine Darwin’s theory of evolution being anything less than canonical. This, however, wasn’t always the case: consider the scientific world at the dawn of the 20th century. In the intellectual ferment that followed Darwin’s On the Origin of Species, evolution had grown to become widely accepted. Its driving mechanisms, on the other hand, had never been more contentious. Unpopular from the start, Darwin’s theory of natural selection had over the previous four decades increasingly lost ground to alternative theories—so much so that in 1903, the German botanist Eberhard Dennert wrote, “we are now standing at the deathbed of Darwinism.”

Evolution was then the high road with few turn-offs and no U-turns: linear, progressive, and towards an ultimate end

Natural selection—where only the fittest by chance of random mutation survive and pass on their genes—felt too chaotic. Theories such as orthogenesis, meaning ‘straight-line evolution’, were widely popular instead, precisely because they prescribed an evolution stripped of its capriciousness. The orthogenesists believed that all organisms exercised an innate tendency to evolve ‘progressively’, marching towards a vague idea of advancing form, function, or beauty—be it in successively larger hooves, or perhaps increasingly intricate shell patterns. Evolution was then the high road with few turn-offs and no U-turns, linear and towards an ultimate end. Although orthogenesis became scientifically untenable by the 1950s, it’s not hard to see its allure. If we distil orthogenesis past its anthropocentric overtones and lack of viable explanations, we see at its core an intuition that most of us might share: that evolution ought to move forward, from simplicity to complexity. A cursory look at evolution’s overarching narrative seems to confirm such a bias. Simple cells into complex ones; single-cellular life before multicellular; sea sponges, barely much


more than a soup of undifferentiated cells, diversifying into the radial explosion of forms, structures, and behaviours we see today. It was evolution’s unwavering movement through time that orthogenesists claimed as proof. The deeper they dug into the fossil layers, the simpler the life forms they discovered. Eventually, however, subsequent fossil evidence failed to map so neatly onto this theory: the trend of progressive complexity, while still largely true, was in fact jerky, full of irregularities and sometimes even reversals. George Gaylord Simpson, an influential palaeontologist whose Major Features of Evolution in 1953 effectively rang the death knell for orthogenesis, described this trend as ‘rectilinear evolution’. Rectilinear (relating only to a straight line) evolution meant that the evolutionary tree could be shaken out and loosened: daughter branches that split off and grew eccentrically, or ‘backwards’, could now be given due recognition. The lineages that splintered away from evolution’s arrow went through what was known as regressive evolution, where the loss of an ancestral trait occurs. Such a concept in itself is not new; Darwin himself studied the degeneration of eyes in cavefish. Since his time, however, modern genetic analysis has uncovered a host of lineages so bizarre that they flout all assumptions we hold of evolution. The poster child of extreme regression, myxozoans are a group of aquatic parasites so microscopic that they were once thought to be protozoa: single-celled organisms often categorised with amoebae and slime moulds. However, these unremarkable sac-shaped cells are instead ridiculously reduced animals from the phylum Cnidaria—that of jellyfish, corals and sea anemones. Myxozoans have seemingly caromed against the edge of advancing complexity, and swung backwards in time.
Descending from a jellyfish-like ancestor capable of swimming, predation, and digestion, myxozoans now consist of barely more than a handful of cells that infect various fish and worms in order to fulfil their own life cycle. Accompanying the shrinking of

its size comes a whittling down of its genetic material. Kudoa iwatai, a species of myxozoan, has a genome size (the sum total of all its DNA) of 22.5 million base pairs—amongst the smallest animal genomes ever recorded. K. iwatai has since shed 30% of its protein-producing genes compared to its closest relative, having lost genes essential for complex multicellular life. Interestingly enough, despite its extensive genetic pruning, myxozoans have retained a distinctive family heirloom. In the myxozoan Myxobolus cerebralis, cnidarian stingers have been modified into tubes that latch onto the skin of salmon or trout. From such tubes, infective cells are then launched into the fish, where they travel to its cartilage and bones, causing cysts that deform and cripple the host. These fish are then unable to swim naturally and die early from sickness or predation—upon which legions of M. cerebralis are then released into the water.

The lineages that splintered away from evolution’s arrow went through what was known as regressive evolution

While the myxozoans may be one of the most dramatic examples of regressive evolution, they are not alone. Bumping up the roster are other internal parasites, such as various protozoa and viruses, and independently living species such as wingless insects and caecilians—limbless amphibians once thought to be snakes. In a way, even we are ‘shrinkers’ to a degree, having since lost our genes for functional tails and extensive body hair. Even stranger, though, is how certain organisms have seemingly rocked back and forth on their evolutionary track through time. It seems that modern winged stick insects first lost, then re-evolved their wings, perhaps on multiple occasions. To the orthogenesists, evolution’s vacillation in direction would have been an absurd concept. Even to us, evolution’s flirtation with complexity can seem ungoverned and chaotic—so much

so that biologist Stephen Jay Gould likened it to a drunkard’s walk, where organisms have an equal chance of stumbling towards simplicity or complexity. The notion that evolution is wholly opaque, however, is an extreme view. While mutations and genetic drift (non-selective changes in the gene pool by chance) are random, natural selection tends to follow certain conventions that can explain why simplicity and complexity often prevail in either direction. Natural selection places no premium on complexity for its own sake. Rather, organisms move towards complexity or simplicity when doing so advances their ability to pass on their genes. In lesser cases of regressive evolution, traits that are no longer vital for survival are often naturally lost over time. For example, the respective ancestors of the ostrich, rhea, and emu likely lost flight independently in response to the extinction of natural dinosaur predators, thus inhabiting a new evolutionary niche for land-bound birds. In the case of parasites, regressive evolution itself may bring about more direct advantages. In outsourcing its own needs to its host, the parasite mooches off with cheap and abundant energy; this energy can then be diverted to other aspects such as its reproduction. As selective pressures tilt towards increasing parasitism, the reduction of genome size and body plan of such parasites often increases their rate of reproduction and growth, creating positive feedback for sustained shrinking. Conversely, positive feedback also features in predator-prey and parasite-host relationships, but in the opposite direction, with each side trying to one-up the other with yet more elaborate defences and counters. Evolution is perhaps then not so much a straight path or drunken walk, but rather natural selection’s pragmatic

Myxozoans have seemingly caromed against the edge of advancing complexity, and swung ‘backwards in time’

Illustration by Alyssa Brandt

arbitration of the forces that push an organism towards complexity or simplicity. To this end, the words ‘simplicity’ and ‘complexity’ can often feel inadequate in describing the meaningful relationship between both drives. Genetic trade-offs and swaps—where downsizing is accompanied by enhancement elsewhere—continually happen, with gene and genome sizes sometimes being unreliable proxies for the complexity of the features they code for. Even M. cerebralis' insidious brand of parasitism is an example of how incredibly specialised and sophisticated behaviour can arise from even the most reduced organisms. For such reasons, the scientific community has since become increasingly wary of value-laden terms like ‘regressive’ evolution and of naive, simplistic definitions of complexity. The orthogenesists were neither unintelligent nor unscientific. They did, however, hold certain assumptions that shaped the world they saw around them. A century after the height of the movement, it seems that perhaps our

biases haven’t changed that much. It’s still easy to judge complexity in a way favourable to us, and even easier to believe that linearity and order hold steady through time. Evolution though, as the myxozoans gloriously remind us, continually refuses to be penned in by our impositions—remaining errant and unruly in all its strangeness, and marching to the beat of its own drum.

Daniel Soo is a third-year visiting Psychology student from Singapore


A cognitive chronology
Julia Turan explores how we perceive time

Time: the precious currency of life we all try to budget wisely. Our sense of time is essential not only to the practicalities of our days, but also to abilities including movement, speech, and memory. While the idea of one internal clock in the brain might be simple and appealing, the brain in fact has various different systems for time. Some keep track of time on the order of milliseconds, while others operate over decades. Our various senses also deal with timing information separately. Timing the past versus the future are different tasks, and the brain somehow manages to stitch together all these different aspects of time. Put simply, this is essential to how we construct reality. Before we can talk about the brain circuits involved, let’s first acknowledge how difficult it is to pin down a definition of time. A descriptive definition might be something along the lines of: our ability to grasp the duration, change, and order of occurrences in our experience. This includes both the rate at which time passes and how long it has been since some other event. Or, in the words of Brian Greene, physicist and professor at Columbia University, “time is that which allows us to see that something has changed.”

Time is that which allows us to see that something has changed

Is time in our brain really a separate entity from memory, attention, and decision-making, or is timing just one of the inputs into these circuits? Research thus far suggests that time is distinct—at least to some extent. For instance, the duration of a stimulus is encoded separately from other information. ‘Interval timing’, meaning the anticipation of periodic events, is specifically disrupted in certain neurological disorders and also in ageing, suggesting that this ability is separate from memory, attention, and decision-making. While we have been discussing time in the context of cognitive abilities specific to humans, we share our ability to sense time with many animals, including

Illustration by Ana Rondelli

fruit flies, scrub jays, and rats. In fact, the control of our sleep-wake rhythm is located in the same part of all mammalian brains. This cycle influences several of our bodily functions, including metabolism and sleep. Specific time-keeping genes, for instance, tell our glands to secrete a hormone called cortisol to wake us up in the morning, and melatonin to help us sleep at night. Our basic biology of timekeeping may not have changed much since our mammalian ancestors, but perhaps there is a difference in how modern-day humans view time compared to our predecessors. Around 500,000 years ago, we developed the ability to speak, a tool for moving around in time, as we can speak of the past or the future. Neanderthals, who existed around 100,000 to 40,000 years ago, already buried their dead, which suggests an anticipation of an afterlife. It was only around 6000 BC that writing, calendars, and clocks were invented. Although we have come a long way in terms of ‘cultural equipment’ to tackle time, researchers theorise that our innate ability for dealing with time hasn’t changed much since the Neanderthals. Fast-forward a few more thousand years to the 1880s, when psychology research into time began. Already in 1890, William James asked, “To what cerebral process is the sense of time due?” The field had to wait several more decades, until around the 1960s, for brain imaging to be invented. We now have the

ability to measure brain activity both spatially, asking which part of the brain is active, and temporally, asking what the time-course of brain activity is. To this day, research still hasn’t pinned down the exact brain areas that monitor time, although a few different areas have been implicated. We do know that neurons have an intrinsic ability to detect and respond to time: even when placed in a dish in a lab, they can do so. These properties suggest that most of the brain is involved in keeping time, just in different ways. Different timing systems also exist depending on the sense involved. A simple example is the speed at which we process audio versus visual input. Our auditory system is much more precise, which is why audio recordings have many more segments of music per second than videos have images per second. The brain has the ability to merge these disparate inputs into one ‘now’. This ability is disrupted in schizophrenia, and thus people with the disorder merge sights and sounds over a longer time window than those without it. As researchers forge ahead in the broad field of time research and our scientific understanding of it continues to grow, perhaps this elusive sense will start to make more sense.

Julia Turan is completing her MSc in Science Communication and Public Engagement


Tick tock goes the (body) clock Polina Shipkova explores the concept of social jet lag and its relevance to health Even for those lucky enough not to have experienced jet lag, it is probably a familiar phenomenon. This is the term for the condition a person sometimes develops after a flight, when he or she has crossed several time zones. Since jet lag can manifest itself in any traveller regardless of age, a lot of research has been devoted to understanding it. We know sleep disturbance, indigestion, difficulty concentrating, and loss of appetite are just a few of the possible symptoms. We also know the more time zones a person crosses, the more severe the symptoms become. We have a good idea of how to treat jet lag as well. The beauty of science is that there is always something new to discover. For instance, there’s another jet lag condition which you may not have heard of, called social jet lag. It occurs when a discrepancy arises between a person’s sleeping patterns during his or her workdays and days off. In other words, the different times a person falls asleep and wakes up during these days can cause social jet lag. This condition affects many people in industrialised societies, and can become chronic, meaning it occurs repeatedly throughout a person’s life. Therefore, it is essential to understand this phenomenon

Illustration by Áine Kavanagh

and how it works. Social jet lag and traveller’s jet lag are both caused by disrupted circadian rhythms, or body clocks. Every person has an internal biological clock, which is responsible for making him or her fall asleep and wake up, and is strongly influenced by our environment, specifically by light. However, circadian rhythms become much more fascinating when one learns they control more than our sleeping patterns. They are also important for appetite, digestion, body temperature, and blood pressure. These internal clocks really make our body tick. And when there is something wrong with the clock, our body does not tick properly. It was recently found that social jet lag is associated with a higher body mass index (BMI). BMI is currently the most common method of determining how healthy your body weight is: the individual’s weight in kilograms is divided by the square of his or her height in metres, and the resulting number is the BMI. A person with a BMI between 25 and 30 is characterised as overweight, and if it is over 30, the individual is considered obese. Studies suggest that this association means disruptions of our internal clocks might contribute to obesity, and partially

constitute the reason behind the global obesity epidemic. Obesity may not be the only condition to which social jet lag contributes. Researchers have also noticed a correlation between social jet lag and both depressive symptoms and the probability of smoking, which was particularly strong in young people who fall asleep late in the evening. This means treating sleep disturbances and social jet lag could be helpful in preventing smoking and depressive symptoms.
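The BMI arithmetic and bands described earlier can be sketched in a few lines (the function names here are my own, for illustration):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight divided by the square of height."""
    return weight_kg / height_m ** 2

def category(bmi_value: float) -> str:
    """The bands described in the text: 25-30 overweight, over 30 obese."""
    if bmi_value >= 30:
        return "obese"
    if bmi_value >= 25:
        return "overweight"
    return "not overweight"

# An 80 kg person who is 1.75 m tall:
print(round(bmi(80, 1.75), 1))   # 26.1
print(category(bmi(80, 1.75)))   # overweight
```

Note that BMI is a crude population-level measure; it says nothing about body composition for any one individual.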

Treating sleep disturbances and social jet lag could be helpful for preventing smoking and depressive symptoms

Social jet lag, specifically its sleep disturbances, may also contribute to cognitive issues. A particular issue is achieving less in school, which could potentially have long-term consequences such as achieving less in life. Smoking, drinking, and stimulants are often used to cope with these irregular sleeping patterns, but they can have further negative effects on health. So lesson learnt: we should take care of our biological clocks. There is a serious problem, though. Some people’s clocks tell them to fall asleep earlier in the evening, while others’ do so later. However, many jobs nowadays start at 9:00 in the morning and finish at 5:00 in the evening, so they only suit early sleepers’ schedules. Scientists have recommended that work schedules become more flexible, so that people with different sleeping patterns have the opportunity to adjust their work schedules and lessen the effects of social jet lag. In any case, more research on social jet lag and its effects on human health is needed. Until then, we can try to adjust our body clocks ourselves in an effort to prevent social jet lag. With this in mind, I know I will be trying to get up every day at the (relatively) same hour.

Polina Shipkova is an MSc Science Communication and Public Engagement student


Procrastination nation Vicky Ware looks into the science behind putting off the things that matter most It’s 7.15am and I’m up early to ensure that today, today is going to be efficient and productive. Thirty minutes later I’m eating breakfast, checking social media, and chatting with flatmates. An hour later, a feeling of inadequacy has begun to creep into the pit of my stomach. I bat it away and continue cleaning the toaster—someone’s got to do it. Later, on closer examination, I’m not sure what that feeling is—self-loathing? Shame? Or a mixture of the two? I’m even less sure why I don’t heed it and get on with some work. All I want is to do some work. Actually that’s not true. All I want is to do some really, really good work. All I want is to pass my exams and get good grades, so I don’t feel like a failure. It won’t be the results that are failures, but me. There is nothing worse than the emptiness of the evening after a misspent day. I haven’t done the work I need to do, I haven’t had fun either, and it's one day closer to my deadline. I’m drained, but haven’t achieved anything. I’m wasting my time.

The underlying causes of procrastination go beyond being lazy or not feeling like doing work

But why does this cycle of avoidance and self-loathing occur? And why do some people seem immune to it? Procrastination has become somewhat of a buzzword in recent years—everyone on social media claims to be doing it. While research has shown that most people procrastinate at some point, not everyone is a ‘procrastinator’. Much like people who claim to be ‘just sooo OCD’ because they clean the kitchen once a semester, it is a disservice to people who actually suffer from procrastination to claim to have the same problem. A 2016 study published in Death Studies found procrastination to be positively correlated with suicide—especially in women—hinting that the underlying causes of procrastination go beyond being lazy or not feeling like doing work. The first research into procrastination was done on university students


in 1997, and published in Psychological Science. It found that procrastinators were, at first, less stressed than their non-procrastinating peers, but as time went on, they suffered much more stress than the people who steadily worked their way through the semester. Their delay tactics didn’t pay off in overall stress scores, and their work suffered too. Another insight into the reasons behind procrastination comes from a study published in 2000 in the Journal of Research in Personality, which found that procrastinators only delay preparing for a task when told it is going to be a test of their intelligence. When told a task is just for fun, procrastinators read the material given to them that would prepare them for it. To test whether procrastination results in negative outcomes, or is merely another form of time management, social scientists published research in the Journal of Social Behaviour and Personality in 2000, tracking students’ emotions and levels of procrastination eight times per day in the final five days before a deadline. When students were putting off work, they felt guilt. This suggests an experience similar to mine—they weren’t merely deciding to work later and enjoying their time not working; they were experiencing negative emotions related to not working. Normally, people learn from their mistakes. Leaving it too late to study for an exam or work on an essay results in a worse grade and more stress, so next time you might start working earlier. Procrastinators seem unable to learn from the negative consequences of delaying work. This might be because they don’t delay out of a belief that it’s the best course of action, but because of their emotions surrounding the work, and what the work they do says about them as a person. Research has shown that procrastinators actually reduce the chance of learning from their mistakes by trying to make themselves feel better about their behaviour in the present.
Research at Bishop’s University in Canada found that procrastinators were more likely to use language that saw the positives in their procrastination, such as “At least I went to the doctor before it got worse,” when talking about having put off a doctor’s visit over a health worry.

Non-procrastinators, by contrast, say, “If only I had gone to the doctor sooner.” The procrastinators’ positive spin reflects an inability to accept their error, and by being unable to accept their mistake, procrastinators also fail to accept that they need to change their behaviour.

Procrastinators seem unable to learn from the negative consequences of delaying work

Procrastinators are not able to accept their error because they need to feel good in the moment. Research from Bishop’s University, published in Social and Personality Psychology Compass, suggests a two-part theory of why procrastinators function as they do: they comfort themselves about their current behaviour by thinking they’ll be better able to deal emotionally with the task they’re putting off later. They can’t cope with the stress of re-structuring an essay now, but in the future they’ll be totally capable of doing a perfect job. As the deadline approaches, there is a tipping point after which the need to do something overrides the negative emotions surrounding the work. Is there anything in the research about what you can do about being a procrastinator? Happily, yes, and while there are no true quick fixes, there are some things that could help you in the short term. A 2010 study published in Personality and Individual Differences found students who forgave themselves after procrastinating on studying for a first exam were less likely to delay studying for subsequent exams. There is a subtle but important difference between this and the doctor-visiting scenario above. Self-forgiveness doesn’t mean denying that the behaviour was wrong, in the way the procrastinators who delayed their health check did; it means accepting you made a mistake, but that the mistake doesn’t make you a bad person, and moving on. Breaking study down into achievable chunks is another way to spread work out over a more reasonable period of time, as is blocking access to distractions by going to the library to study or cutting off your access to social media.


Illustration by Lucy Southen

Unfortunately, research suggests that doing this may take more self-regulation than procrastinators generally possess.

Students who forgave themselves after procrastinating on studying for a first exam were less likely to delay studying for subsequent exams

Finding something positive about the process, not just the outcome, is another method to stop procrastination. Rather than thinking about studying as purely a means to an end—passing an exam or getting a degree—find something that you actively enjoy or are getting out of doing the study itself. Setting pre-deadline personal deadlines has also been found to be somewhat effective, research in Psychological Science suggests, although not as effective as an official deadline. Writing down that you have to finish a piece of work, or finish studying for an exam, a week before you

actually do results in procrastinators doing more work than they would otherwise have done. If procrastination is a serious issue for you, affecting your grades and life in general, counselling might be a good way to go. Procrastination is a complex issue and just when you think you’ve got it sorted, you realise it has found a way back in. You might be in the library, but you’ve just spent the entire morning colour coding your agenda, when you really need to write that essay due next week. Counselling can be helpful in deciphering the reasons for your behaviour and helping you help yourself do something about it. If you are a student at The University of Edinburgh, you have access to a free student counselling service. There are also tactics to deal with people you know who are procrastinators. In a 2011 paper in Psychological Science, researchers found people are more likely to procrastinate on a task if they believe their partner will ultimately help them with it. So make sure you’re not enabling someone else’s procrastination. If you want your partner, brother, or anyone else you care about to

do something, you have to let them do it rather than enabling their procrastination by taking away the consequences of their failure to start work sooner. These days, I’m much less of a procrastinator, but it does creep in sometimes. I’ve learnt to be aware and actually see it as a sign that something is really important to me. The more I care, the more pressure I feel and the more likely I am to put it off. Researching this article has given me a better understanding of why I do it and what I can do to stop it. It’s an on-going practice, not a pursuit of perfection. I’m only human, after all.

Vicky Ware is a distance-learning MSc Next Generation Drug Discovery student

Focus

How the Cold War clocked the birth of neurons

Daniel Soo examines how the curious enlistment of a Cold War carbon isotope led to new insights on neurogenesis in our brains

Even decades after its waning, the Cold War still retains a haunting quality: a reminder of mankind on the brink of its own nuclear destruction. Ironically enough, it seems that it has also given us a novel way to study life—by clocking the birth of our cells through a technique called bomb-pulse dating.

During the Cold War, rampant nuclear testing resulted in the formation of carbon-14 (14C), a radioactive isotope with two more neutrons than the abundant, run-of-the-mill carbon-12 (12C). As 14C rarely forms in nature, this testing almost doubled its concentration in the atmosphere. Subsequently, as 14C steadily leached into the oceans, it was found that this decreasing trend—a dying ‘pulse’—was so global and predictable that decreasing concentrations of 14C could be banded into respective years. As 14C is absorbed into the food chain, first through photosynthesis and then through subsequent predation, it eventually makes its way up to us, with its specific 14C concentration becoming incorporated into the DNA of our newest cells. Our bodies then function somewhat akin to well-serviced cellars: each new batch of cells carries its unique isotopic imprint of 14C—a mark of its vintage. Bomb-pulse dating is then the candle that illuminates the etchings on these casks.

Not all parts of our bodies renew themselves at the same rate. For one, the adult human brain has traditionally been thought to be incapable of renewing itself. Bomb-pulse dating, however, has so far provided the most convincing evidence otherwise, leading to a new surge of research in human neurogenesis—the generation of new brain cells called neurons. While cell-dating technologies are not new, bomb-pulse dating presents two big advantages over other methods: safety and precision.
Unlike other procedures that label the building blocks of DNA in new cells, bomb-pulse dating carries no risk of inducing toxicity or mutation. More impressively, while previous techniques could only point to evidence of neurogenesis, bomb-pulse dating has proven so accurate that neuron turnover rates of specific zones in the brain can now be studied.
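The banding of atmospheric 14C concentrations into calendar years can be sketched as a simple lookup. The calibration values below are entirely made up for illustration (real bomb-pulse work uses measured atmospheric records); only the matching logic reflects the idea described above.

```python
# Toy sketch of bomb-pulse dating: match a cell's 14C level against a
# year-by-year atmospheric record to estimate the cell's birth year.
# These calibration values are illustrative placeholders, NOT real data.
ATMOSPHERIC_14C = {
    1965: 1.90, 1970: 1.55, 1975: 1.35, 1980: 1.25,
    1985: 1.18, 1990: 1.13, 1995: 1.10, 2000: 1.08,
}  # 14C level as a multiple of the pre-bomb baseline

def estimate_birth_year(measured_level):
    """Return the year whose atmospheric 14C level is closest to the
    level locked into the cell's DNA when the cell was created."""
    return min(ATMOSPHERIC_14C,
               key=lambda year: abs(ATMOSPHERIC_14C[year] - measured_level))

print(estimate_birth_year(1.36))  # closest to 1975's value of 1.35
```

Note that as the real curve flattens, neighbouring years become indistinguishable in exactly this kind of lookup, which is why the method loses precision over time.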


Such precision has spurred fascinating comparative studies between us and our mammalian relatives. It turns out that humans are somewhat unique, especially with regard to where neurogenesis takes place. In most mammals, the creation of new neurons is often restricted to two areas of the brain: the hippocampus and the sub-ventricular zone. Bomb-pulse dating has found that approximately 700 new hippocampal neurons are created each day in adult humans, resulting in an estimated 1.75% annual turnover rate. While this rate seems to be comparable in other mammals such as mice, neurogenesis in the sub-ventricular zone seems to play out rather differently.
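As a sanity check, the two figures quoted above are mutually consistent only for a renewing neuron pool of a particular size. A rough back-of-the-envelope calculation (assuming the 1.75% turnover is per year):

```python
# Rough consistency check on the figures above: 700 new neurons per day
# at ~1.75% annual turnover implies a renewing hippocampal pool of:
new_per_year = 700 * 365           # 255,500 new neurons per year
pool_size = new_per_year / 0.0175  # neurons participating in renewal
print(f"{pool_size / 1e6:.1f} million neurons")  # 14.6 million neurons
```

That is only a small fraction of the hippocampus's total neurons, consistent with the finding that renewal is confined to specific subpopulations.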

Bomb-pulse dating is then the candle that illuminates the etchings on these casks

In mammals such as mice, rats, and non-human primates, new sub-ventricular neurons mostly migrate towards the olfactory bulb, which controls the sense of smell. In humans, however, the majority of such neurons likely migrate to the striatum instead, a part of the

Illustration by Jemma Pilcher

brain that regulates motor behaviour and responses to reward or punishment. Scientists speculate that the evolutionary departure of neurogenesis migration from the olfactory bulb to the striatum in humans parallels our lessening reliance on smell, in favour of the cognitive robustness and flexibility that neuron renewal in the striatum may bring.

As bomb-pulse dating continues to help us understand our brain's natural capacity for renewal, scientists are hopeful that inducing adult neurogenesis may one day be used in clinical therapies. Given that recent studies show that new neurons migrate differently in a diseased brain, perhaps one day site-specific neurogenesis could replace cells lost in brain areas affected by Alzheimer's or Parkinson's diseases.

Despite this surge of optimism, it seems that bomb-pulse dating may become unusable by 2050. As levels of 14C continue to drop in the atmosphere, its ‘pulse’ and gradient will become so flat that any bomb-pulse dating would be too inaccurate for use. Faced with such transience, science can only strive to beat the clock.

Daniel Soo is a third-year visiting Psychology student from Singapore


Up in the air

Miguel Cueva investigates how climate change affects the aviation business

Time is certainly important when you travel. For most of us who have travelled by plane, we know that being on time is essential. Arriving early at the airport, checking in, going through airport security, waiting to board the flight, and finally departing are all part of the routine. It can be cumbersome and time-consuming, but once you are in the air, you might think your travel time is set in stone. However, this is not always true. Aircraft fly through an atmosphere whose meteorological characteristics change daily. Extreme weather frequently delays flights, and in some cases even shortens them. Airlines certainly want their customers to arrive on time, but they're still at the mercy of the weather, which has significantly shifted due to climate change.

Modern aviation has always contributed to climate change, but has climate change had an impact on aviation? Meteorologist Paul Williams from the Department of Meteorology at the University of Reading agrees that, "it is becoming increasingly clear that the interaction is two-way and that climate change and global warming have important consequences for aviation." Climate change has intensified turbulence, caused the implementation of restrictions on take-off weight, extended flight routes, and, most importantly, lengthened journey times.

The influence of climate change on flight routes and travel times is seen across the globe. Transatlantic flights from Europe to America are becoming longer, while journeys in the opposite direction are shorter. Additionally, due to an unprecedented variability in wind patterns, flight journey times are very inconsistent. A well-publicised transatlantic crossing from New York to London on 8 January 2015 took a record time of five hours 14 minutes, whereas the average journey in that direction takes around six hours 40 minutes.
It is ironic how one of the biggest emitters of anthropogenic greenhouse gases is affecting its own business. One example of this inconsistency was seen on an eastbound transatlantic flight from New York to Edinburgh. The expected flight time was around seven hours and 15 minutes, but the flight time suddenly changed. The pilot announced to the crew and passengers that due to a

Image courtesy of Wikimedia Commons

strong tailwind, the plane would land in Edinburgh 40 minutes earlier than scheduled. The massive Boeing 757-200, with a 100-200 knot tailwind, can accelerate to above 650 miles per hour, making flight times significantly shorter. Airline customers do not mind arriving earlier than expected. However, they do mind when flights are delayed or extended. On a westbound transatlantic flight from Edinburgh to New York City, the expected flight time is around eight hours, a difference of 45 minutes that is mostly due to wind. The Boeing 757-200, facing a 100-200 knot headwind, loses ground speed and has to consume more jet fuel to counteract such a wind pattern.
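The ground-speed arithmetic behind those figures is simple: a tailwind adds to airspeed, a headwind subtracts. A minimal sketch (the 530 mph cruise airspeed is an assumed round number for a Boeing 757-200, not a figure quoted in the article):

```python
# Ground speed = airspeed +/- wind. Winds are quoted in knots in the
# article, so convert; the cruise airspeed below is an assumption.
KNOTS_TO_MPH = 1.15078

def ground_speed_mph(airspeed_mph, wind_knots):
    """Positive wind_knots means a tailwind, negative a headwind."""
    return airspeed_mph + wind_knots * KNOTS_TO_MPH

cruise = 530  # assumed typical cruise speed in mph
print(round(ground_speed_mph(cruise, 150)))   # tailwind: 703 mph
print(round(ground_speed_mph(cruise, -150)))  # headwind: 357 mph
```

Under these assumptions a 150-knot tailwind already pushes the ground speed well past the 650 mph mentioned above, while the same wind as a headwind costs over 300 mph.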

Due to an unprecedented variability in wind patterns, flight journey times are very inconsistent

The business of commercial aviation is changing. It wasn't until recent decades that flying became widely accessible. More flight routes are created every year and more passengers are able to fly to any destination they want. Because of this, more jet fuel is being consumed than ever before. Ultimately, this booming business is set to generate higher concentrations of greenhouse gases, thus contributing to the already

dire consequences of climate change. A vicious cycle has been created: climate change is causing flight times to get longer, leading to airplanes consuming more jet fuel, causing carbon dioxide (CO2) emissions to rise. Geophysicist Kristopher Karnauskas from the Woods Hole Oceanographic Institution in Massachusetts stated, "In 2014, there were an estimated 49,871 commercial airline routes, with 102,470 flights per day, with an overall variability in round-trip flying time of one minute, a variability that is costing airlines around $3 billion in fuel costs and emitting 10,000 million kilograms of CO2 per year." With a growth in aviation and an increased variability in wind patterns that affect air times, it is fair to assume that aircraft will collectively be airborne for an additional estimated two thousand hours each year.

It's a vicious cycle our society will have a hard time breaking out of. The future of the aviation business, and our societal dependence on it, will ultimately aggravate our planet's ongoing climate crisis. Additional research, debate, and policy-making, as well as changes to societal attitudes, are needed to minimise the detrimental effects of commercial aviation on climate change.

Miguel Cueva is a first-year PhD researcher at SynthSys, School of Biological Sciences, University of Edinburgh


At Earth's end

Adelina Ivanova shares her perspective on humanity's race against our planet's decline

The most reassuring thought in any competition is the knowledge that the finish line sits at a definite place. But what if you are running a race where the finish line is constantly moving closer or further away? This is humanity's reality, and the end of the race means the end of life as we know it. We may not realise it, but we are competing against the decline of our planet as Earth is spiralling towards becoming uninhabitable.

We may not realise it, but we are racing against the decline of our planet

Earth is declining on multiple levels. Earth's tectonic plates will eventually cease to move and slowly level off the landscape. No new mountains will be formed and erosion will gradually diminish the existing ones. Eroded material deposited into water basins will cause flooding and contamination that will lead to the extinction of most marine life. The progressive brightening of the Sun is an even bigger threat to Earth's habitability. In the next 4.8 billion years, the Sun will become more than 67% brighter than it is now. In half a billion years, the hotter Sun will produce more rainfall, reducing

Illustration by Scott D'Arcy


atmospheric carbon dioxide levels to lower than ten parts per million, the threshold below which plants can no longer photosynthesise. Even if some plants manage to adapt to lower carbon dioxide levels, the decrease in water due to vigorous evaporation caused by the hotter Sun will likely lead to the extinction of most of Earth's flora and fauna. Although these events were set into motion long ago, the rate at which they are progressing is concerning, because it almost surely dooms the Earth to eventually become uninhabitable.

Humans are also contributing to Earth's decline. Between 46 and 58 thousand square miles of forest are lost to urbanisation and industrial development each year. This contributes to a loss in our natural source of oxygen and poses a substantial threat to our floral biodiversity. Pollution in water basins is reducing the availability of clean water. Extracting fossil fuels as our primary energy source disturbs natural landscapes.

However, humans are trying to outrun Earth's demise. Extensive research is being carried out to implement sustainable approaches that reduce the need to drill for fossil fuels and in turn decrease pollution and emissions. Sustainable inventions are becoming more popular with the introduction of electric cars, solar panels, and renewable energy sources. Although we are trying

to counteract the damage humans have already done to the planet, there is a good chance it may be too late to save Earth. Therefore, we need to look for a home elsewhere.

It is our duty to save the home we have been abusing for centuries

Other research focuses on finding ways to live in space. Dr Al Globus, a contractor for the NASA Ames Research Center in California, supports the idea that humans may move to live in space colonies orbiting the Earth by 2100. The Mars One initiative plans to send the first unmanned mission in 2020 and the first crew of permanent settlers by 2026, with the aim of designating Mars as our next home planet. Whilst establishing a human population in space seems quite appealing, like science fiction set in reality, it does not seem like a viable solution. Establishing permanent settlements and colonies on Mars or the Moon would still not save us from the threat of the continuous brightening of the Sun. The only absolute solution would be moving to a new solar system. This requires a level of technological development humanity has not yet achieved.

The main thing to remember through all of this is that our planet is not what we should be racing against. It provides the perfect conditions to sustain all forms of life. Implementing sustainable practices is a good start to embracing a way of living that supports our planet, but research should also be focused on developing ways to deal with the biggest threat of all—the brightening of the Sun. Earth is not our enemy. The race is not lost yet. There is still time to right our wrongs. It is our duty to save the home we have been abusing for centuries. Humans have enormous potential, and now is the time to focus it in the right direction.

Adelina Ivanova is a first year Chemistry student


A zoo for the ages

Angus Lowe looks at the case for extraterrestrial civilisations, where they might be, and how to test one hypothesis

Why is space so silent? By all accounts, the number of stars in the universe is either infinite or so large it might as well be. According to even the most conservative speculations, a similarly unfathomable number of habitable planets must also exist. In the Milky Way alone, lower estimates suggest a population of 100 billion stars. What's more, life on Earth has existed for an infinitesimal fraction of the universe's 13.8 billion years of existence, and intelligent life for an even smaller fraction. Considering this enormous cosmological timescale, civilisations on our planet emerged in a relative blink of an eye after the first organisms came into existence. It therefore seems as though other worlds have had plenty of chances to cultivate an intelligent civilisation, and more than enough time for all of these to become far more advanced than our own.

Sophisticated quantitative and categorical models have been constructed to try to define hypothetical civilisations. The Drake equation, for instance, gives the number of active, communicating civilisations in the galaxy, while the Kardashev scale classifies civilisations according to their ability to harness resources (the Earth is currently at 0.7 out of 3 possible levels of complexity, according to Carl Sagan). For some astronomers, like those involved in SETI (the Search for Extraterrestrial Intelligence), aliens are not reserved for the realm of science fiction: they are waiting to be discovered.
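The Drake equation itself is just a product of estimated factors. A minimal sketch, where every parameter value below is an illustrative guess rather than an accepted figure:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* * fp * ne * fl * fi * fc * L: the star-formation rate,
    the fraction of stars with planets, habitable planets per system,
    the fractions of those developing life, intelligence, and
    detectable technology, times the lifetime of a signalling
    civilisation, giving the number of detectable civilisations."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# One moderately optimistic set of guesses (all values are assumptions):
N = drake(R_star=1, f_p=0.5, n_e=2, f_l=0.5, f_i=0.1, f_c=0.1, L=10_000)
print(N)  # roughly 50 civilisations under these particular guesses
```

The equation's value is less in its output than in making explicit which factors dominate the uncertainty; with plausible pessimistic guesses the same product drops below one.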

It is the idea that intelligent aliens are out there, listening to us, but choosing to ignore us

This is the essence of the Fermi Paradox, which refers to the apparent emptiness of our galaxy despite an overwhelmingly large number of opportunities for civilisations to develop. A variety of theoretical solutions to the paradox exist, including the simple idea that we might actually be alone or special. Another solution could be that a so-called ‘Great Filter’ is responsible for this startling lack of life: there may come a point in the development of civilisations at which most fail to survive, and it is thus

Illustration by Katie Forrester

quite rare—or impossible—for a species to become substantially more advanced than our own. Perhaps the most exciting and testable of all these solutions is the idea that intelligent aliens are out there, listening to us, but choosing to ignore us. This explanation is called the Zoo Hypothesis, which suggests there are civilisations aware of our planet that either cannot be bothered to contact us or are making an active decision not to do so for unknown reasons.

One way to test the Zoo Hypothesis is to search for these aliens by sending messages to them. The idea, proposed by Dr João Pedro de Magalhães from the Institute of Integrative Biology at the University of Liverpool, is that if aliens are indeed listening in on our communications, we could invite them to respond to us. The message would be collaborative and open to input from people of all backgrounds. Speaking to EUSci, Dr Magalhães says, "We would like to have as many people as possible aware of—and thinking about—our project and proposal."

The Zoo Hypothesis is one solution to the Fermi Paradox, but it is by no means the most probable. Extraterrestrial communication might just be so fundamentally different from our own

that sending a message would be impossible, or the recipients might not exist in the first place. Even if civilisations able to comprehend the message are out there, there is no guarantee they are close enough for a radio signal to reach them intact. Moreover, there may be negative consequences of successfully contacting a civilisation far more advanced than our own. If they do care about us, would they be friendly? Although Dr. Magalhães admits that a response from an alien source would be unlikely, he believes that the chance of establishing contact—no matter how improbable—makes the project worthwhile. Any civilisation which has survived longer than our own will undoubtedly be able to impart wisdom worth sharing, as our species’ capacity for self-destruction remains a threat through nuclear warfare, biological weapons, and global warming. If a ‘Great Filter’ explains the empty galaxy, how long will it take to discover what wiped everything else out? If it’s the Zoo Hypothesis, when do we get to meet our spectators?

Angus Lowe is a first-year Physics and Computer Science student


The nature of time

Caroline Stillman unscrambles our understanding of the direction of time flow in our universe

We are all familiar with the concept of time. Its passage defines our lives. Yet time is an exceptionally difficult thing to define—there is still no universally agreed-upon definition. That said, the widely accepted physical interpretation of time is that it serves as one of the four dimensions of the universe, an inherent part of the spacetime in which we exist. In keeping with his theory of special relativity, Einstein postulated that as you approach the speed of light, time slows down, implying that anything travelling at the speed of light would experience no passage of time at all.
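Einstein's time-dilation claim can be made quantitative with the Lorentz factor, gamma = 1/sqrt(1 - v^2/c^2); as v approaches c, gamma diverges, which is the 'timelessness' limit mentioned above. A small illustrative sketch:

```python
from math import sqrt

C = 299_792_458  # speed of light in m/s

def dilated_time(proper_seconds, v):
    """Seconds elapsed for a stationary observer while a clock moving
    at speed v (in m/s) ticks off proper_seconds."""
    gamma = 1 / sqrt(1 - (v / C) ** 2)
    return proper_seconds * gamma

# At 99% of light speed, one second on the moving clock corresponds to
# about seven seconds for the stationary observer:
print(round(dilated_time(1.0, 0.99 * C), 2))  # 7.09
```

At everyday speeds gamma is indistinguishable from 1, which is why we never notice the effect; the function raises an error at v = C itself, mirroring the fact that massive objects cannot reach the speed of light.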

Time must move forward as the egg goes from unbroken to scrambled

However, this does not explain why time only travels forwards and never back. It clearly has only one direction in this universe, but it is not immediately obvious why this is. The surest way to measure the direction of time is to look at the entropy of a system. Entropy is a measure of

disorder: an unbroken egg is low in entropy, as there are relatively few ways to organise the egg particles to form an unbroken yolk surrounded by unbroken white. A scrambled egg is high in entropy; there are many more ways to organise the particles now, as white and yolk can intermingle freely. We can then measure the direction of time by looking at when the egg was in each state, and we know that time must move forward as the egg goes from unbroken to scrambled. We have learned from experience that one cannot unscramble an egg and return it to its original state, as that would reduce the overall entropy. This gives us a solid interpretation of the arrow of time: it points in the direction in which entropy increases.

However, we have no current theory that explains why the universe began in a state of low entropy. Richard Feynman, noting that the universe once had a very low entropy for its energy content, said, "[time] cannot be completely understood until the mysteries of the beginnings of the history of the universe are reduced still further from speculation to scientific understanding." One possible answer to this conundrum was posed in the nineteenth

Illustration by Joanne Pilcher featuring image from Wikimedia Commons


century in the form of Boltzmann’s multiverse. He postulated that our universe exists merely as a fluctuation inside some greater multiverse, which exists at high entropy and in thermal equilibrium. If such a fluctuation were to produce a region of low entropy, our universe could exist in it. This would explain why our universe experiences the passage of time, as time would pass until the entropy reached some maximum, which would signify the fluctuation in the larger universe disappearing. The major flaw in this theory is that it relies on a fluctuation producing a low entropy situation. This is of course possible, but it seems unlikely that this level of order should exist for our entire observable universe, as it does.
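The egg argument above is ultimately a counting argument: high-entropy macrostates correspond to astronomically more microscopic arrangements than ordered ones. A toy version with particles in a two-halved box (a stand-in for yolk and white; the numbers are purely illustrative):

```python
from math import comb, log

N = 100  # particles free to sit in the left or right half of a box

ordered = comb(N, 0)     # all particles on the left: exactly 1 arrangement
mixed = comb(N, N // 2)  # evenly split: around 1e29 arrangements

# Boltzmann entropy S = ln(W) in units of k_B: far higher for the mixed
# macrostate, so random shuffling overwhelmingly drives the system
# towards it - the statistical arrow of time.
print(ordered, f"{log(mixed):.1f}")  # prints: 1 66.8
```

Even for just 100 particles the mixed state outnumbers the ordered one by 29 orders of magnitude; for the roughly 10^23 particles in a real egg the imbalance is unimaginably larger, which is why unscrambling never happens in practice.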

Time serves as one of the four dimensions of the universe, an inherent part of the spacetime in which we exist

There are many other theories that also attempt to define time. One is the so-called ‘static interpretation of time’. This theory states that our universe is a static, unchanging physical object that our consciousness plays out for us, as a reel of film is played out on a screen. The past, present, and future are already set, and we simply experience this predetermined set of events. This theory is perhaps more interesting from a philosophical point of view, as it opens the discussion on whether we have free will, and it is also exceptionally difficult to prove. Boltzmann's multiverse theory at least offers us a possible explanation for time existing as we know it in our universe. Whether there is some greater universe of which we are only a fluctuation remains to be seen. Is time universal? As of yet, we cannot be sure. Only time will tell.

Caroline Stillman is a third-year Physics student

Features

Communicating science in Edinburgh

Polina Shipkova investigates how the Edinburgh Science Festival communicates science

If we play a game of associations and I say "science in Edinburgh," chances are one of the first things that comes to mind would be Dolly the Sheep, the first cloned mammal. She was famously born in Scotland's capital 20 years ago. However, her creation is not the only scientific achievement that came from this city. Edinburgh has a long and significant history of contributions to science, whether from people born here, from graduates, or from those working at the University. Names like Alexander Fleming, James Clerk Maxwell, Charles Darwin, Alexander Graham Bell, Joseph Black, and many others speak for themselves.

Today Edinburgh is still a hub for scientific work. It is also a place where an interest in science is found throughout the public. The relatively new field of science communication flourishes in the Scottish capital. In my experience, when I mention the term ‘science communication’, people seem to have a sense of what I am talking about, but do not have a clear idea of what it actually is. Science communication, public engagement, and science outreach all refer to the practice of bringing science and society closer together. This can take many forms, ranging from science festivals, centres, and museums to scientific writing, media, education, and even policy. It can come from active researchers, science communication practitioners, or anyone generally interested in science. The breadth of this field makes it possible to pursue different types of science communication and therefore engage with people of all ages and backgrounds.

This is why I want to practise science communication. However, when I mention this, I am often faced with the question, ‘what sort of job can you get?’. Normally I answer by explaining that I can work in any of the areas mentioned above.
But let us be more specific and take a look at the exciting and diverse work being done at Edinburgh's most prominent science festival. The Edinburgh International Science Festival (EISF) is one of the largest in Europe. Its huge brochure outlines the dozens of workshops, discussions, drop-ins, special events, and exhibitions available for families and adults. The activities cover all aspects of science and are unified around a central theme each year. It is truly wonderful to see so many

Image from EISF website. Credit: Chris Scott, EISF

people attending these events. Many families take a couple of hours or even a whole day to bring their children to learn about science and hopefully develop a real interest in it. The festival happens every spring over the course of two weeks.

Edinburgh is a place where an interest in science is found throughout the public

EISF has taken its mission a step further by initiating several educational projects to reach even more children and engage them with science. Its biggest project is called Generation Science. Last year it delivered science shows and workshops in over 600 primary schools around Scotland. Hands-on interactive sessions are a great way of sparking children's interest in science. Importantly, when it is done as early as primary school, this interest can build and develop through the years.

Whereas Generation Science has run for several years, new projects are also being developed, such as Careers Hive, which had its pilot run at the end of February and beginning of March 2016. The idea behind it is to draw secondary school students' attention to science, technology, engineering, and mathematics (STEM) and show them the variety of job opportunities this field offers. Another relatively new project is called Fuselab Go, where in 2015 teenagers aged between 15 and 18 worked on ideas about sustainable development on a new planet. This is an excellent example of a creative and captivating event about interdisciplinary science, which I would definitely attend if it were not for the age limit.

Even beyond all this work in Scotland, EISF operates internationally as well. It has been acting as a partner for the Abu Dhabi Science Festival since 2011 and has cooperated in its execution. EISF has also delivered workshops in India, Beijing, and Germany. The focus of this article has been on EISF as the 2016 festival starts very soon. However, many other excellent examples of science communication exist in the United Kingdom. If I were to write about all of them, I would have to publish a book. Science communication is important, and passionate people are needed for the field to grow.

Polina Shipkova is an MSc Science Communication and Public Engagement student


The poultry predicament

Rakhi Harne examines the production of the world's most popular meat

With the world population increasing at an alarming rate—it is estimated to exceed nine billion by the year 2050—the consequences of failing to increase food production at an equal rate are serious. The majority of the world's population depends upon livestock as a source of protein, and poultry—which includes chickens, quails, turkeys, and geese—has an important role. Chicken alone accounts for 35% of all meat eaten globally, and its consumption will soon surpass that of pork. There are more chickens in the world than any other species of bird or domestic animal.

There are more chickens in the world than any other species of bird or domestic animal

Globally, around 52 billion chickens are reared per year. According to the International Egg Commission, five billion hens are set aside to produce eggs. In Britain, 95% of people eat chicken at least twice per week. Consuming chicken has numerous nutritional benefits, as it is a source of meat and eggs and contains a high proportion of protein and little to no fat.

Humans have domesticated chickens since 2000 BC. Today's breeds are mainly descended from the wild red jungle fowl or grey jungle fowl of Southeast Asia. After centuries of selective breeding, chickens now exist in many colours, sizes, and shapes. People now keep them for meat, eggs, feathers, and even as pets.

Today there are two industries that produce chickens: the purebred exhibition industry and the commercial industry. The two differ in their methods of developing various types of domestic fowl. While the purebred exhibition industry continues to select and breed fowl for standard conformations and feather colours, the commercial industry develops specialised hybrids for meat (broilers) and egg (layers) production. The purebred fowl of today are basically the same as they were 100 years ago and are mainly raised as a hobby, whereas the commercial poultry industry has developed into a science that aims to produce


highly nutritious meat and eggs with extreme efficiency. Today's commercial breeds are far bulkier than those of a century ago. Broiler chickens take less than six weeks to reach slaughter size. Layers can produce up to 300 eggs a year, after which the flock becomes unviable. Although this mode of mass production is efficient and keeps up with demand, these birds develop bone defects, have brittle skeletons, suffer from heart stress and, most importantly, are not disease resistant. They are extremely susceptible to bacterial, viral, and parasitic pathogens. It takes no time for a disease to become endemic, which raises concerns about transmission to humans.

One possible solution for producing healthier chickens is selective breeding: mating birds with beneficial traits, identified with the help of their genome sequences. Some of these traits include the capacity to form strong bones, low feed consumption, disease resistance, increased egg-laying ability, enhanced blood oxygenation, and a high growth rate. Researchers at The Roslin Institute in Edinburgh are currently investigating some of these desirable traits.

A result of this sort of scientific work is the creation of genetically modified chickens. Some of these birds are undergoing tests to determine their potential for treating liver damage. Other chickens carry genes that interfere with their ability to transmit the virus that causes bird flu, which in turn would reduce the risk of passing the disease to humans. Genetic modification can also be used to mark cells of the immune system, allowing researchers to study the chickens post-mortem to see where invading microbes have clustered.

In another study, led by Isao Oishi of Japan's National Institute of Advanced Industrial Science and Technology and Takahiro Tagami of the National Agriculture and Food Research Organization's Institute of Livestock and Grassland Science, the team reported it had successfully created a genetically manipulated chicken model. In these chickens, a gene is silenced in a way that causes the eggs of these birds to lack the protein ovomucoid, which many people are allergic to. The researchers are also attempting to remove the protein ovalbumin, another allergen abundantly found in egg whites. The authors mention their results could have agricultural and industrial applications, particularly in reducing allergy concerns for people who might have an immune response to foods and vaccines containing egg whites.

In Britain, 95% of people eat chicken at least twice per week

None of these breeds have yet been commercialised for consumption, in spite of the evident advantages. In-depth research is required to study any long-term health effects of consuming these genetically modified chickens.

The commercial poultry industry has developed into a science that aims to produce highly nutritious meat and eggs with extreme efficiency

Rakhi Harne is a first year PhD student studying Developmental Biology at The Roslin Institute

features

Illustration by Ashley Dorning

Spring 2016 | eusci.org.uk 31


Mission to Mars Meghan Maslen explores the science behind why Scott Kelly spent one year in space You may have heard that NASA astronaut Scott Kelly recently returned from almost a year in space. His time spent at the International Space Station (ISS) was part of a set of initiatives proposed by the NASA Authorisation Act. This act outlines several objectives, one being to maximise the presence of astronauts in space to prepare for future flights to Mars in the 2030s. Now, as you can probably imagine, a round-trip journey to Mars is no small feat. Many years of scientific study are necessary before NASA's maiden voyage. We not only need developments in the technologies and infrastructure to support this unprecedented 140 million mile journey, but must also dedicate research time and money to understand the toll a long duration in space might have on the human body. Orbiting 230 miles above earth, the ISS is a microgravity laboratory that serves as the perfect platform for assessing the physiological effects of the constant weightlessness and increased radiation exposure associated with long-term space travel. To date, the most common problems experienced by astronauts are significant bone loss, muscle atrophy, headaches, nausea and vision changes. Certain practices have already been implemented to counteract these adverse events, but the benefits to astronauts have only been measured for shorter three- to six-month missions. Given that a journey to and from Mars is estimated to take three years, NASA scientists appreciate that a longer-term study of the health effects, and of the benefits of the potential solutions already in place, is desperately needed. This is where Scott Kelly comes into play. Scott Kelly was chosen from several astronauts to become NASA's test subject for one year in space. Now, it is important to note that Scott Kelly is not just any astronaut. He has an identical twin, Mark, a retired NASA astronaut.
This provided NASA with a unique opportunity: not only could they explore how long stays in space affect Scott, but they could also compare findings to those collected from their control subject, Mark, back on earth. Given their almost identical genetics, the only difference or “variable” is their living environments. Scientifically, this is especially powerful because


it means that they could use these two men to define the genetic basis for how the body reacts to the different stresses encountered in space. Using various techniques, they sought to assess the body's response to spaceflight at different biological levels, from minute alterations in gene expression all the way up to larger physiological differences. This established the pioneering Twins Study, defined by NASA as "a multifaceted national cooperation between universities, corporations and government laboratory expertise". Both Scott and Mark were required to supply urine, blood and faecal samples, collect skin swabs, and take a multitude of tests: first when both were at peak physical condition before Scott's journey, then at several time points throughout the year in space, and at various times after Scott's return to earth. In short, the twins became the experiment.

Many years of scientific study are necessary before NASA's maiden voyage

So what exactly have they been investigating? Well, this large-scale initiative allocated three years of funding for ten individual investigations, all focused on evaluating health changes in four categories. These took into account the effects of space on human physiology as a whole, on behavioural health, on the body at the molecular level and, lastly, on the human microbiome. Rather than detailing all ten studies, here are four examples, one from each category, to give an idea of the extent of scientific knowledge that will be gained from Scott's year in space. What is one problem the majority of astronauts experience? Significant changes in vision. Scientists hypothesise that the shift of fluid towards the head that results from weightlessness causes increased pressure in the brain. This may push on the eyes, causing them to change shape and consequently hamper the astronaut's eyesight. One physiological study therefore set out to assess the impact of long-duration space travel on internal fluid distribution. The goal throughout the year was to measure fluid movement at the cellular and whole-body levels in order to understand why this might be happening. One method tested on this mission to counteract this headward fluid shift involves the astronaut wearing a suit that applies pressure to his or her lower extremities. Scott's year in space will tell us whether he benefitted from this approach and will also provide recommendations for other preventative measures that may be feasible for future missions.

Another study, which evaluated behavioural health, looked at the effect of space travel on an astronaut's cognition. A three-year journey to and from Mars will require extensive skill, focus and consistent cognitive ability to complete the mission effectively. Taking into account the number of environmental stressors unique to the space environment, scientists designed an array of cognitive tests to measure Scott Kelly's neurological performance at different time points. Briefly, these tests measure comprehension, emotion recognition, abstract reasoning, risk decision making and spatial orientation. The results will guide the development of protocols that will help astronauts maintain cognitive ability during future space exploration.

Now to delve into one of the more molecular studies. The cells that make up the tissues of the human body house chromosomes, rope-like structures that contain all of our genetic information. The ends of these chromosomes are protected by caps of repeating DNA sequences called telomeres. With every cellular division these telomeres shorten, aging the cell over time. This characteristic means that each cell has only a finite number of divisions before it dies. The telomere can therefore be viewed as the cell's aging clock and serves as a biological marker of aging. Interestingly, the rate at which these telomeres shorten can be accelerated by stress.
Illustration by Lynda Marie Taurasi

Since a trip to Mars has its own inherent stresses, it is thought that prolonged exposure to the many insults of space might influence telomere shortening and consequently speed up biological aging in the astronaut. Furthermore, this could predispose the astronaut to developing age-related illnesses like cardiovascular disease. So, could it be that Scott is now older than Mark? Looking at Scott's telomeres may be able to tell us. If a correlation is identified between spaceflight and aging, this investigation could highlight what steps should be taken to mitigate these effects in the future.
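The "finite number of divisions" arithmetic behind the telomere clock can be made concrete with a toy model. The telomere lengths and per-division losses below are illustrative assumptions chosen for round numbers, not measured values from the Twins Study:

```python
def divisions_until_senescence(telomere_bp, loss_per_division_bp, critical_bp):
    """Count how many divisions a cell can undergo before its telomere
    drops below a critical length (a toy model of the aging clock)."""
    divisions = 0
    while telomere_bp - loss_per_division_bp >= critical_bp:
        telomere_bp -= loss_per_division_bp
        divisions += 1
    return divisions

# Illustrative numbers only: a 10,000 base-pair telomere losing 100 bp
# per division, with senescence below 5,000 bp, allows 50 divisions.
print(divisions_until_senescence(10_000, 100, 5_000))   # 50
# Stress that accelerates shortening (say, 125 bp per division)
# uses up the clock sooner.
print(divisions_until_senescence(10_000, 125, 5_000))   # 40
```

The point of the toy model is simply that a faster per-division loss, such as one accelerated by stress, exhausts the same telomere in fewer divisions.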

They could use these two men to define the genetic basis for how the body reacts to space

The final investigation focused on the human microbiome, which relates to the bacteria and yeasts that live on and in the human body. Though it is slightly discomforting to know that we share our bodies with trillions of microbes, some of these species are critical to maintaining our health, helping to regulate biological processes such as proper digestion and battling infection. Previous studies, however, suggest that astronauts experience changes in their microbiomes during space travel, presumably in response to the extreme conditions encountered. This is a concern because it may involve the loss or replacement of so-called "good" microbes with "bad" opportunistic pathogens, compromising health and increasing an astronaut's susceptibility to infection throughout the journey. Findings from this experiment will direct the development of therapies used to combat any microbiome changes that might negatively impact human health.

So what did space do to Scott? Unfortunately, because Scott returned to earth only a few weeks ago, there is little data available to show how his body responded to the year in space or whether it has compromised his health in any specific way. Details will probably not be publicly available for a while, given that Scott and Mark will be continually monitored for an additional two years to assess how Scott's mind and body adjust to life back on earth. We do know, however, that continuous weightlessness stretched Scott's vertebrae, making him taller than when he left. Though he may have been excited by the prospect of gaining a couple of inches on his brother, being reacquainted with gravity for a few days soon returned him to his normal height.

The beauty of the Twins Study is that it could highlight how an individual's biology reacts to space travel. It has the potential to guide what advancements in treatment regimens, exercise protocols, living arrangements and other countermeasures need to be made to lower health risks and improve how astronauts tolerate the experience. By participating in this study, Scott and Mark have very likely taken NASA one step closer to achieving the goal of launching missions to Mars.

Meghan Maslen is a second-year PhD student


Ratted out Kerry Wolfe examines the controversial Shiant Isles Seabird Recovery Project When Tom Nicolson, owner of the Shiant Isles, forgot a part of his tent while camping on one of his islands last summer, he knew he was in for a rough night. He settled into the main island's small house—his first time ever sleeping inside the snug 30-foot-by-18-foot bothy in the 26 years he's been visiting the area. When he awoke in the middle of the night and found the door blown open, he understood at once that his fears had come true. Something rustled below him. The quiet tapping of tiny toenails filled the confined space. Nicolson peered above the head of his sleeping bag and saw the source of the noise. A black rat sat atop his extra mattress, gnawing on an apple. Others scurried about, unbothered by his presence.

Image from Wikimedia Commons

Nicolson won't have to worry about such encounters in the future. The Royal Society for the Protection of Birds (RSPB) launched the fieldwork phase of its four-year rat eradication initiative on the Shiant Isles over the winter of 2015/2016. The £900,000 Recovery Project, a joint initiative between the RSPB, Scottish Natural Heritage, and the Nicolson family, aims to eradicate the area's black rat population. But the RSPB didn't spearhead the project for the sake of Nicolson or other humans on holiday. They are killing the rats to save the birds.

The Shiant Isles are an important nesting and breeding site for seabirds. These uninhabited pockets of land lie just under five miles south of the Isle of Lewis in the Outer Hebrides, making them a fairly popular spot for bird-watching enthusiasts willing to brave the elements. Nigel Nicolson bought the islands in 1937, just before World War II, then later passed them on to his son Adam, who in turn passed them on to his son Tom. They are a sentimental space, rough and rugged, yet a welcome respite from a hectic urban life. "We're very proud to have them," the youngest Nicolson says. On nice days, the Shiants are a peaceful place, where swatches of green grass carpet the rugged cliffs while windswept waves crash ashore below. But conditions aren't usually ideal for human life. The isolated location and often turbulent weather make forging a living rather difficult. The rustic bothy—with no electricity, running water, or toilet—offers few human luxuries. The last people to permanently reside on the main island left over a century ago, creating 900 acres of prime real estate for the area's local avian settlers.

They are killing the rats to save the birds

Thousands of birds flock to the area, as the lonely landscape provides an optimum respite from life at sea. Species such as razorbills, guillemots, and puffins—10% of the UK's puffin population nests on these islands—use the Shiants as a refuge to secure a mate, nest, and raise their young. The breeding colonies converge on the cliffs and surrounding grassy areas each summer, peppering both the land and the surrounding sky with their presence. According to the RSPB, the islands are one of Europe's most important seabird breeding sites. The organisation surmises that the Shiants could potentially offer a suitable habitat for species such as storm petrels and Manx shearwaters, which are believed to have once inhabited the area. But the birds aren't the only ones to have found their island oasis on the Shiants. Black rats, scientifically known as Rattus rattus, carved their own niche in the area after swimming ashore in the 19th century. These shipwrecked squatters have since formed one of the UK's only black rat colonies. Their numbers have historically fluctuated throughout the seasons, with winter seeing their population drop to between 1,000 and 3,000 individuals. Though relatively small in size—a black rat's body length ranges from 10 to 24 centimetres—these rodents are known to have a mighty influence on local ecosystems in many parts of the world. They are stalwart survivors, capable of forging a living in even the starkest circumstances. Their tendency to prey upon eggs and chicks, however, does not bode well for burrowing bird species. Island areas are particularly vulnerable to the effects of a rat invasion, as the native species evolved without the threat of such predation. It's because of this that people continually classify ship rats as pests, even though they're among Britain's rarest animals. The RSPB blames them for an overall decline in seabird numbers. This is why it has undertaken the Shiant Seabird Recovery Project. By eradicating the entire rat population on these islands, the organisation hopes to create a safe haven for birds. With Nicolson's permission, the organisation partnered with Wildlife Management International, a pest control team from New Zealand, to rid the area of its resident rodents. After nearly three years of planning and securing funds, workers finally braved the winter weather in one of Britain's windiest places to undertake the job. Winter is the best season for eradication, as the rat population naturally decreases in autumn. The pest controllers prepped themselves with proper first aid and emergency procedure training and stocked up on food supplies before hunkering down for a wet winter on the isles. "You've got to be very well prepared to just survive in that sort of place in the middle of winter, never mind to go out and actually do work," says Stuart Benn, Communications Officer for RSPB's North Scotland region. The eradication team spent months scouring the three islands that comprise the Shiants, climbing cliffs and scrambling over scree to plant poisoned bait in any crevice that could possibly contain a rat. It was a daunting task, but one with results that have left the RSPB "guardedly optimistic". The rats, it seems, are gone. It's too early to say for certain, but it appears the project has been a success. The RSPB hoped to kill all the rats in one go, as even a small handful of survivors could easily cause a population rebound. The organisation will continue to monitor the area for any signs of rat inhabitants. It will also broadcast the calls of the birds it hopes to attract to the islands, should the desired species fail to migrate there of their own accord. Yet though the rats are gone, controversy still lingers.
Critics of the plan lament the anthropogenic extirpation of such a locally rare species. Killing the rats to save the birds is an act that raises ethical questions about the roles humans play in environmental management. It's a heated topic, one with no right or wrong answer. It also calls into question the weight of human values, highlighting our tendency to favour more aesthetically pleasing species over ones we label as pests. Some even argue that the concept of interventional conservation takes the 'natural' out of 'nature'. The science that the RSPB used to make its case has also come under scrutiny. It cites a study that found nearly 70% of rats captured on the Shiants several years ago had consumed feathers and quills. However, it is unclear whether the rats had predominantly preyed upon live birds or had instead scavenged on corpses. The RSPB also used storm petrels and Manx shearwaters as its flagship species when attempting to garner support for the project. But there is no record of storm petrels ever using the Shiants. Furthermore, the only evidence of the latter species once being present on the islands stems from a humerus bone 'almost certainly' from a Manx shearwater, found in a shell midden, which is the byproduct of human domestic waste and not animal predation.

Purging the islands of their resident rodents was the easy option, as changing human behaviour is far more difficult than strategically planting poisoned bait

Even with the black rats eradicated, there's no guarantee the seabird populations will swell to their former grandeur. Factors such as anthropogenic climate change, pollution, unsustainable fisheries, and inappropriate or unregulated human development continue to threaten their well-being. Purging the islands of their resident rodents was the easy option, as changing human behaviour is far more difficult than strategically planting poisoned bait. It's a controversial issue, but the decision made cannot be undone. The rats, those smart shipwrecked survivors, are gone. The birds can now rest easy upon the rugged shores. We've killed one species to save another, adding a chapter of anthropogenic annihilation to the islands' history.

Kerry Wolfe is an Environment, Culture, and Society MSc student


The next phage YuGeng Zhang explores how viruses may hold the answer to antibiotic resistance From the early days of penicillin, discovered in 1928 by Alexander Fleming, antibiotics have been an effective, reliable medication in the fight against our bacterial nemeses. However, in a few decades this may all change. Overuse of antibiotics in recent years is rendering many of these drugs useless. Continued bacterial exposure to antibiotics increases the selection pressure on microbes to evolve resistance, allowing resistant bacterial strains to survive antibiotic treatments. Fleming himself warned of microbial resistance in his Nobel speech in 1945. In November 2015, his worst fears were brought to life when researchers in China identified certain strains of the bacterium Escherichia coli (E. coli) that exhibited resistance against colistin, an antibiotic of last resort. The worrying aspect of this particular development was that colistin resistance was found to be rapidly exchanged amongst members of a bacterial population by horizontal gene transfer: the passing of genetic material from one bacterium to the next, a bit like a deadly form of pass the parcel. Resistance can then spread through whole bacterial colonies, populations and species within a relatively short timescale. Therefore, it seems the fate of humankind inevitably lies in fulfilling the biggest cliché in current science reporting: being consigned back to the pre-antibiotic 'dark ages'. But before we all don our plague doctors' apparel, we might wish to consider an alternative to antibiotics in a little-known treatment called phage therapy. Phages, or bacteriophages, are viruses that specifically target bacteria. Despite being relatively low-profile in the public domain, phages have been studied and used extensively in bio-research for at least a century.

Illustration by Eliza Wolfson

Phages look a lot like something from a sci-fi novel

Structurally, phages look a lot like something from a sci-fi novel. They possess a regularly shaped head structure, containing the DNA, which is connected to a long shaft (the tail sheath) that ends at a baseplate structure from which a number of spindly tail fibres extend like the legs of a spider. The virus gets to work by first establishing a good handshake with the host: the tail fibres bind specifically to binding sites on the surface of the host cell. Unlike animal viruses, which at this stage would enter the host cell, the phage instead sits down on the surface of the bacterium and punctures a hole in the bacterial cell membrane, through which phage DNA enters the host. This allows the phage to add its own bundle of instructions, in the form of its DNA, to the bacterial genome. When the viral genes are expressed, the instructions to make new phages are read and the phage's work is complete. Eventually, the host bacterium will burst under the internal pressure of producing swarms of new viruses and release them into the surrounding environment. This allows the infection cycle to start afresh.

Overuse of antibiotics in recent years is rendering many of these drugs useless

So what advantages do phages bring over antibiotics? Phages operate by a totally different mechanism to antibiotics, which is preferable in the present situation of increasing antibiotic resistance. It means antibiotic-resistant bacterial strains will be just as susceptible to phage infection as non-resistant bacteria. However, the idea of administering viruses to an already ill patient feels rather counterintuitive. The obvious question some might ask is: what if we humans become infected by the phage? We won't. Phages have evolved over millions—if not billions—of years to become extremely efficient at infecting bacteria. They cannot just turn around one day and infect a human cell. For starters, human cells do not possess the receptors required for phages to begin the initial stages of infection. Even though your own body cells are not likely to be infected by phages, a more reasonable concern might be that the populations of microbes living as part of the human gut flora will be adversely affected. After all, there are more 'friendly' bacterial cells in your body than human cells. These 'friendly' bacteria, which reside mainly in the intestinal tract, perform a number of essential functions such as aiding digestion and activating the immune system. It might therefore be expected that using phages to combat 'unfriendly' bacterial infection would kill off some of our 'friendly' bacteria as a side effect. This raises the topic of host specificity in phages. Unlike broad-spectrum antibiotics, phages can be extraordinarily picky about the bacteria they attack—so much so that some phages are known to operate at the strain-specific level. This arises due to the great variability in binding sites on the surface of bacterial cells. For example, phage T4 might be able to attach to a binding site consisting of two consecutive glucose molecules present on one particular strain of E. coli but would not be able to attach to the surface of a different strain of E. coli that possesses a binding site consisting of a different combination of sugar molecules. This potentially allows phage therapy to target only pathogenic bacterial strains, leaving your gut bacteria unaffected. Sounds good so far. But what if bacteria eventually evolve resistance to phages, as they have with antibiotics? The key difference between phages and antibiotics is that one is a biological system and the other is a static chemical. The advantage of using a biological system, such as a phage, to fight another biological system (i.e. the bacteria) is that co-evolution can take place, whereby both systems are constantly adapting ingenious ways to outcompete each other. To use the analogy of the Red Queen from Lewis Carroll's Through the Looking-Glass, phages can keep running, staying ahead of the bacteria, whereas antibiotics cannot run at all and are eventually caught. When resistance begins to emerge in bacteria against a certain type of phage, an evolutionary pressure is exerted on the phage population, positively selecting random mutations that benefit the phage. By chance, some of these mutations may give some phages the ability to infect previously resistant bacteria.
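As a loose illustration of that Red Queen dynamic, here is a toy model (not drawn from any real phage study; all rates are made up): each generation the bacteria may gain a resistance level, the faster-mutating phage may gain a counter-adaptation level, and the phage remains effective whenever its adaptation keeps up.

```python
import random

def phage_effectiveness(generations, host_mut, phage_mut, rng):
    """Toy Red Queen race: each generation the bacteria may gain a
    resistance level and the phage may gain a counter-adaptation level.
    Returns the fraction of generations in which the phage could still
    infect (adaptation level >= resistance level)."""
    resistance = adaptation = 0
    effective = 0
    for _ in range(generations):
        if rng.random() < host_mut:
            resistance += 1
        if rng.random() < phage_mut:   # phages mutate faster
            adaptation += 1
        if adaptation >= resistance:
            effective += 1
    return effective / generations

rng = random.Random(42)
# A phage mutating five times as often as its host keeps up most of the
# time; a static 'antibiotic' (mutation rate 0) falls behind and stays behind.
print(phage_effectiveness(1000, 0.01, 0.05, rng))
print(phage_effectiveness(1000, 0.01, 0.00, rng))
```

The asymmetry in mutation rates, not any single adaptation, is what lets the running contestant win in this sketch, which is the article's point about biological versus static chemical treatments.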
In addition, mutation rates in phages are much higher than in bacteria, allowing phages to stay one step ahead in this evolutionary race. By the same token, resistance would not really be much of an issue with phages; at least, certainly not as much as it is for antibiotics. In light of these advantages, it is perhaps no surprise that the idea of phage therapy is not a new one. It has been used for nearly 90 years in Georgia and many of the former Soviet states. So why are people not being treated with phages in the UK? A main issue with using phages in clinical treatment is the question of how they should be administered. Oral uptake will quickly deposit the phages into the extremely acidic conditions of the stomach, rapidly inactivating them. Intravenous injection does not seem plausible either, as this would simply expose the phages to the human immune system, which would immediately launch a response to neutralise what the body sees as another outside threat. This problem can be partially overcome by applying phages directly onto the skin to treat infections localised to the outer layers of the body. However, this practice severely limits the range of bacterial infections that can be treated using phages and would be useless in most cases where the pathogenic bacteria are present within the body.

Eventually, the host bacterium will burst under the internal pressure of producing swarms of new viruses

In recent years, clinical studies of phage therapy have produced little concrete evidence of the treatment's overall efficacy in tackling bacterial infection. For the past few decades, antibiotics have served our needs so well that there was not much interest in alternative treatments, which were seen by most clinicians as something that 'may or may not work'. But now, it may be worth exploring phage therapy afresh as our cherished antibiotics fall victim to microbial resistance one by one. More research into the clinical viability of phage therapy is therefore desperately needed. Together with some genetic tinkering of phages, this may be just what is required to avoid a post-antibiotic cataclysm in the near future.

YuGeng Zhang is a sixth-year student at George Heriot's School in Edinburgh


Finding the way back Saishree Badrinarayanan explores the mechanism behind path integration and its navigational uses During his expeditions to the Polar Sea, Ferdinand von Wrangel, a celebrated German explorer of the early 19th century, observed something peculiar about the people of the tundra region. He noticed they possessed an infallible instinct that helped them trace their way back to a particular settlement. He was further amazed by the precision with which they manoeuvred their way home, despite traversing an environment devoid of visual landmarks and prone to severe climatic changes. Fascinated by von Wrangel's observation, Charles Darwin suggested that this ability may not be unique to any one people, but was instead an intuitive form of navigation used by humans and nonhuman animals to find their way. It isn't surprising that the iconic biologist suggested something significant about the workings of the brain long before an attempt had been made to show it with experiments. It is well known that navigation is essential for the survival of almost all species. Human and nonhuman animals travel far and wide in search of food and shelter and still maintain the ability to return to their initial location by the shortest distance possible. As this process requires one to learn and remember a location, many scientists have proposed that the brain is capable of combining external indicators, such as visual information, with self-motion cues to help an animal navigate. In the absence of external signals, animals like pigeons, desert ants, and bats have shown the ability to integrate sensations from the body and feedback commands from the motor system, collectively called self-motion cues.

Illustration by Jemma Pilcher


This provides an effective mechanism for them to travel back to a starting location. An animal's ability to use self-motion cues to return to a starting point without the use of landmarks or external sensory input is known as path integration.
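In computational terms, path integration amounts to keeping a running sum of self-motion displacement vectors: the homing vector is simply the negative of that sum. A minimal sketch of this idea in Python, with made-up step data (this is an illustration of the principle, not a model from any of the studies described here):

```python
import math

def path_integrate(steps):
    """Accumulate self-motion (dx, dy) steps into a net displacement,
    then return the straight-line homing vector and its length."""
    x = y = 0.0
    for dx, dy in steps:
        x += dx
        y += dy
    home = (-x, -y)              # vector pointing straight back to the start
    distance = math.hypot(x, y)  # length of the shortest route home
    return home, distance

# A winding outbound foraging path (hypothetical step data).
outbound = [(2.0, 0.0), (1.0, 3.0), (-1.5, 2.0), (3.0, -1.0)]
home, dist = path_integrate(outbound)
print(home, round(dist, 2))      # (-4.5, -4.0) 6.02
```

However meandering the outbound path, the animal only needs the accumulated total, not the full route, to head straight home, which is why noisy self-motion cues alone can support the behaviour.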

While ants make use of celestial mapping as a compass to help them find their way, the story is a little different in other animals

The first thorough description of path integration in mammals was provided by Mittelstaedt in 1980. In this study, a female gerbil was separated from her offspring in a circular arena. The task required the gerbil to find her young and return to the starting point. Given that these animals possess a strong motivation to rescue their offspring, the mother would set out on an elaborate path in search of them. The researchers were intrigued to notice that, on her way back after rescuing her offspring, she would return by a straight route to the starting point. This ability was consistent even in complete darkness. Using different parameters to assess it further, they found that the gerbil employed self-motion cues to track her movement from the initial point. Finally, Mittelstaedt hypothesised that this ability to path integrate could be critical for vertebrate navigation.

Imagine you are an ant in the Sahara desert. In terms of size, you are no larger than a thumbnail, but you can still cover a distance of one metre in one second. At temperatures as high as 41ºC at ground level, you rise from your burrow in search of the corpses of other insects. In pursuit of your prey, you travel around 200 metres from home, following a harrowing route that involves frequent turns and stops. Once you find your prey, you promptly take the straight route back home. As only a few landmarks are present on this landscape, your ability to navigate is a great mystery to many. What could this mighty ant be using as an internal map? A study at the University of Zurich suggested ants may use polarised light as a compass. Thanks to receptors sensitive to polarised light, the ants are able to lock onto the pattern of polarisation on their outbound journey and use it to find their way back, much as sailors once navigated by the constellations and patterns present in the sky. While ants make use of celestial mapping as a compass, the story is a little different in other animals. With recent advances in spatial navigation research, we now know that a complex circuit comprising the hippocampus (a seahorse-shaped structure) and the entorhinal cortex (the neighbouring brain region) underlies our ability to navigate. Spatially tuned cells in these regions build a mental map of our surroundings. Place cells, present in the hippocampus, were first identified by John O'Keefe in the 1970s. He noticed that when an animal was in a particular location, or passed the same spot again, these cells would fire. Grid cells, present in the entorhinal cortex, were discovered by May-Britt and Edvard Moser in 2005. They noticed these cells activated in a distinctive pattern when an animal entered an area. Later on, they realised grid cells also track the position, direction, and translational motion of an animal's movement.
The three scientists were awarded the Nobel Prize in Physiology or Medicine in 2014 for their discovery of these cells. To test the influence of visual and external cues on the firing properties of these cells, researchers recorded signals from place and grid cells in the presence of a visual stimulus and in darkness. They found that grid cells were able to sustain their firing in complete darkness, while place cells showed reduced activity. This finding led researchers to believe grid cells could be potential path integrators, as these cells were still actively computing the animal’s position. Another group of cells in the brain, known as head direction cells, acts as an internal compass. Using sensory feedback and equilibrium signals from the vestibular system, these cells encode the direction in which the animal’s head is facing. They combine information about the animal’s heading relative to a landmark and maintain their directional tuning in darkness based on self-motion cues. All these cells play an integral, albeit complex, role in the spatial circuit, but one question remains unresolved: how do they help us path integrate? Researchers around the world are investigating this mechanism using cutting-edge techniques. While computational models have been developed to understand the process of path integration, studies on humans are being conducted to delineate the brain structures and mechanisms that allow us to path integrate. These studies have revealed that the medial temporal lobe could be responsible for path integration. In a task that required participants to track their way back to a start point, patients whose medial temporal lobe had been removed (usually as a result of a surgical procedure to control epilepsy) found it relatively difficult to navigate back.

Illustration by Anna Mikelsone

Studies have revealed that the medial temporal lobe could be responsible for path integration

To further test the contributions of specific brain regions to positional tracking, researchers at Boston University conducted a functional magnetic resonance imaging (fMRI) study in healthy individuals. Each participant traced their way back to a start location in a virtual reality setup. The sparse virtual environment presented to the participants moved in a loop trajectory, and the nature of the visual stimuli made it difficult for the participants to use landmarks to navigate. The fMRI scans enabled the researchers to identify the active brain regions involved in tracking; they concluded that the hippocampus and another brain structure known as the retrosplenial cortex were active during this task. In a comprehensive experiment conducted at University College London, scientists examined the role of visual information and motor feedback in path integration. In two separate experiments that required healthy participants to perform a path integration task in a virtual reality setup, the researchers concluded that visual input and motor feedback combine into a single representation to help us navigate in darkness. They also believe this mechanism might be responsible for generating visual imagery when we try to remember a particular place. So why do we have such intricate space-mapping mechanisms? One theory is that, in addition to supporting navigation, this circuit could be responsible for helping us store memories. It has been suggested that the ability to recall experiences in reference to a place employs mechanisms similar to path integration, and it is interesting to note that the medial temporal lobe, which performs path integration in humans, is also the seat of memory storage. Studies in support of this hypothesis are growing, and many researchers have begun conducting navigational experiments in patients with Alzheimer’s disease. Understanding spatial navigation and path integration will provide an insightful model for probing the neural mechanisms responsible for psychological phenomena such as memory.

Saishree Badrinarayanan is an MSc Integrative Neuroscience student

Spring 2016 | eusci.org.uk 39

features

Waves in space and time

Nico Kronberg explores a whole new way for us to investigate the universe

On 11 February 2016, the Laser Interferometer Gravitational-Wave Observatory (LIGO) announced the discovery of gravitational waves. Physicists all over the world were excited to hear that another of Albert Einstein's predictions, made in 1916, had been proven correct. But before we discuss why this discovery is so exciting, let’s talk about what gravitational waves are. Einstein's theory of general relativity describes how objects exert a gravitational pull on each other. It describes this pull as a bending of spacetime: a distortion of length and time scales that gets stronger the heavier its source object is and the closer you approach it. Imagine two objects held together by their mutual gravitational attraction: this could be the Earth and the Moon, or much heavier objects such as two stars or even two black holes. As they spin around each other, they continue to distort space and time around them. Their motion, however, means that these distortions, rather than being constant and static, travel away in waves. As a gravitational wave travels through the universe, space and time are periodically stretched and compressed in any region the wave passes through.

Image courtesy of NASA via Wikimedia Commons

That’s exactly what made LIGO's discovery possible. LIGO is made up of two detectors 3000 kilometres apart. Each detector is composed of two perpendicular arms, each four kilometres long, traversed by laser beams; a gravitational wave passing through the Earth creates a characteristic pattern, stretching and compressing one arm of the detector relative to the other. So, if we can use lasers to measure the length of the detector, and if gravitational waves leave an unmistakable signal of lengthening and shortening, why did it take us a hundred years to finally confirm Einstein's prediction? As it turns out, the distinct, periodic stretching caused by a gravitational wave is extremely small. In fact, the effect of the wave detected by LIGO was so small that it stretched the entire planet Earth by less than the diameter of an atom. This is why it has taken physicists, engineers, and materials scientists until now to develop sensor technology and shielding techniques sensitive enough to pick up so weak a signal.
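That claim about the size of the effect can be checked with back-of-envelope arithmetic. The peak strain below (about one part in 10^21) is a representative figure for the detected signal and is not taken from this article, so treat the numbers as an order-of-magnitude sketch:

```python
# Strain is the fractional length change produced by the passing wave;
# ~1e-21 is an assumed order-of-magnitude value for the detected event.
PEAK_STRAIN = 1.0e-21
EARTH_DIAMETER_M = 1.27e7   # metres
ATOM_DIAMETER_M = 1.0e-10   # metres, typical atomic scale

stretch = PEAK_STRAIN * EARTH_DIAMETER_M
print(f"Earth stretched by roughly {stretch:.1e} m")  # roughly 1.3e-14 m
print(f"Smaller than an atom by a factor of about {ATOM_DIAMETER_M / stretch:.0f}")
```

A change of order 10^-14 metres across the whole planet is indeed thousands of times smaller than an atom, which is why the detector optics need such extreme isolation and precision.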

Light can be blocked. Gravitational waves cannot

Now that we have some idea of what gravitational waves are and why it is so difficult to see them, the question remains: what's so great about being able to detect these tiny spacetime waves anyway? It is difficult to think of any practical applications for gravitational waves. The masses and energies involved in rippling spacetime to a noticeable degree are unimaginable and, more importantly, unattainable. The event detected by LIGO involved two black holes with 36 and 29 times the mass of the sun. Just before they merged into one, the power they emitted as gravitational waves was equivalent to the combined power output of all the stars in the observable universe. And even that was incredibly difficult for us to detect. So why are scientists so excited about this discovery? Gravitational waves are interesting largely because they are created under such extraordinary conditions. We expect to see gravitational waves produced when a star explodes as a supernova, when neutron stars or black holes orbit one another and eventually collide, or even when two supermassive black holes, the kind that form the centre of galaxies like our own, pass by each other or merge. Many of those environments have never been accessible to us before. For the entire history of human astronomy, we have observed the sky using light—but light can be blocked by dust, gas, or the outer layers of a star. Gravitational waves, on the other hand, cannot. They travel just as easily through a neutron star or the disc of plasma spiralling into a supermassive black hole as they do through empty space. By analysing the gravitational waves that reach us from the most extreme of places, we hope to understand what happens there and whether our theories of matter and gravity hold true in those conditions. We hope to learn about the properties and internal structures of supernovae and neutron stars, the behaviour of black holes as they approach and ultimately collide with each other, and the nature of gravity itself. We will be able to test our understanding of space and time in settings where the effects of gravity overshadow everything we have ever seen. Where we have previously relied on waves of light to look at the sky, we are now entering an era of exploring the universe using waves of gravity. We are opening up a whole new window on the universe, and we have no idea what we’ll find on the other side.

Nico Kronberg recently completed a PhD in Cosmology

regulars: politics

The European experiment

Ruairi Mackenzie looks at how the outcome of June’s EU Referendum might impact British science

In June, voters throughout the UK will be asked whether or not they want to remain part of the European Union (EU) in a national referendum—the outcome of which will undoubtedly have ramifications across Britain, including the science sector. In the past several weeks, a large group of Science, Technology, Engineering and Mathematics (STEM) workers, led by the megastar physicist Stephen Hawking, have been warning that leaving the EU would be nothing short of a disaster for UK science. On the other side of the argument, pro-Brexit groups such as Scientists for Britain forecast a bright future for an independent UK science sector. With such diametrically opposed viewpoints, it’s likely that the truth lies somewhere in the middle. Though the EU is primarily a political body, it exerts control over science in two ways: money and manpower. The EU’s central science budget, Horizon 2020, is worth a rather staggering €67 billion (£52 billion) over seven years, whereas the UK’s annual science budget is £4.6 billion. While anti-EU scientists regularly highlight the superior academic standing of UK universities compared to their continental counterparts—six UK universities rank in the Times Higher Education’s top 25, as compared to just one from the rest of Europe—the UK’s consistently impressive performance in rankings also means it receives a disproportionate share of this honey pot. The UK contributes 11% of the EU’s total budget but receives 16% of total EU science funding in return. Whilst countries outside the EU can still benefit from Horizon 2020, it’s clear that the UK currently has an excellent deal in terms of monetary funding. Groups like Scientists for Britain claim that a Brexited Britain would not necessarily see a reduction in funding from Horizon 2020, but this seems to be hopeful thinking, given that an independent UK would sacrifice any influence it has over how the EU distributes these funds.

Image courtesy of Pixabay

The UK contributes 11% of the EU’s total budget but receives 16% of total EU science funding in return

In purely monetary terms, staying in the EU seems the more attractive option. However, money alone cannot produce good science; the quality of scientists in research institutions is just as important. The collaboration of researchers across borders is at the core of scientific progress, and movement across those borders should be made as simple as possible. Currently, 15% of UK academics originate from other EU countries, as many as from every non-EU country combined. Any scientist who has interacted with the research visa system for non-EU researchers will know it

can be a painfully slow process, and the likelihood is that such a system would have to be introduced for EU workers if Brexit were to take place. These changes would apply equally to students as well as researchers. EU students currently benefit from low tuition fees, which would likely be scrapped upon Brexit, and the future stability of programmes such as ERASMUS would be at risk if Britain left the EU. The debate around this referendum has been characterised by a complete unwillingness on either side to give any quarter to the arguments of the opposition. The points above might be addressed through diplomatic channels, by renegotiating treaties and accords, but the UK Science Minister, Jo Johnson (officially pro-EU), has been unwilling to present any contingency plans for UK science in the event of Brexit. He has repeatedly refused to go beyond saying, “we are focused on making the most positive case for Britain’s future in a reformed European Union, and all efforts are going on that.” This means that the Brexiters’ arguments remain effectively untested, and they are allowed to conjure a rosy post-EU future without being undermined by realistic analysis. What is undeniable is that leaving the EU moves the fate of a large part of British and European science into the hands of diplomats and politicians. Equally clear is that interactions between scientists go far beyond shared budgets—it is the sharing of ideas that really stokes the fires of progress. While Brexit doesn’t directly threaten this collaborative spirit, it will certainly weaken it. Admittedly, the way each person votes in this referendum will be affected by varied and complex factors, and the arguments about the political and social benefits of leaving the EU are sure to continue long after the polls have closed. But it seems that if one’s priority is the advancement of science, for both the UK and the EU as a whole, then voting to leave the EU is a step in the wrong direction.
Ruairi Mackenzie is a fourth-year BSc Neuroscience student

regulars: interview

John O'Keefe: a modern Renaissance man

Eirini Papadaki picks the brain of a Nobel Prize winning neuroscientist

Professor John O’Keefe is the Director of the Sainsbury Wellcome Centre for Neural Circuits & Behaviour and Professor of Cognitive Neuroscience in the Department of Cell & Developmental Biology at University College London. O’Keefe was awarded the Nobel Prize in Physiology or Medicine in 2014 for his work on hippocampal place cells—a specific set of neurons in a part of the brain that are activated based on the spatial location of an animal. O’Keefe also had a major role in founding the British Neuroscience Association (BNA), which started as an informal meeting of scientists on the upper floor of a London pub and is now a massive organisation, roughly 50 years old, that brings together scientists from varied backgrounds all over the UK. O’Keefe has had an unusual journey from engineering student to neuroscientist, and finally to Nobel Laureate—and I had the privilege of speaking to him about his life’s work.

Eirini Papadaki (EP): Did you ever have doubts about what you wanted to study?

John O’Keefe (JO): I was one of those people who took time to decide what I wanted to study. I actually started in secondary school doing classics, and after high school I went to work. Eventually I decided to study aeronautical engineering. Keep in mind this was in the 1950s, the era of Sputnik. Eventually I ended up as an engineer in an aircraft company. Then I decided I wanted to study again. I did philosophy and psychology, and realised that a lot of the internal questions philosophers were interested in would probably be addressable if we studied the brain with techniques that were just then becoming available. So I started to concentrate on studying the brain. At that time, there was no field called neuroscience. I was very lucky to get a position at McGill University, which was one of the few universities where you could study the brain, and eventually I moved to London, where I’ve stayed ever since.


Image courtesy of Wikimedia Commons

EP: In the era of overspecialisation, do you think it would be beneficial to approach neuroscience in a more interdisciplinary manner?

JO: Well, this is a very important question. It is true I was very fortunate that, since I had a background as an engineer, I could use and exploit some of the advances in electrical engineering right at the early stages of my career. This enabled me, for example, to design and build amplifiers which would allow me to record from cells of freely moving animals. I’m a great believer that neuroscience in particular relies heavily on bringing many disciplines together and using the strengths of each of them to answer questions which no one discipline could address by itself.

EP: What do you think of the ‘publish or perish’ notion?

JO: Part of the problem is that neuroscience attracts people from multiple fields while there are limited positions. Inevitably, that puts a lot of pressure on everyone to publish in high impact factor journals. What I am trying to do in the new institute is to give people the freedom to explore and make mistakes. After all, science is the art of making good mistakes, and making better mistakes next time. But at the end of the day, a good paper in a high impact factor journal is the most straightforward way for one to be judged by one’s peers, and there is a limit on how much one can mitigate that pressure. I think the best we can hope for is to reduce the number of publications by avoiding the ‘salami-slicing’ technique of trying to produce as many papers as we can, and instead aim for publications that explore a topic in depth.

EP: Now that you’ve won a Nobel Prize, do you feel fulfilled after years of scientific research?

JO: In some ways you become more vulnerable! I stay motivated to keep going. I get enormous pleasure from working in the laboratory. Sometimes I sneak into the lab when other people are going home and do some of my own experiments myself. I will continue to do that until I cannot anymore, and then I’ll just walk off into the sunset.

Needless to say, this interview was an amazing opportunity. He may not have liked the herbal tea I offered him, but he had a warm smile and was humble despite his reputation. The message that struck me most from my time with John O’Keefe is that everyone should take their very own path to discover what really defines them, and that this journey can sometimes be long and unconventional.

Eirini Papadaki is a third-year PhD student

regulars: technology

Cyber-roaches

Calum Turner explores the creation of an army of remote-controlled cockroaches equipped to find survivors at disaster scenes

There are few environments more challenging than those following a natural or man-made disaster. Often constricted by rubble and debris, and potentially contaminated by chemicals or radiation, it is difficult to imagine an environment more inimical to humans. However, such areas can be populated by survivors in need of rescue. The problem of locating victims of catastrophe has driven the creation of swarms of robots, but these designs are often too large and clumsy to fully explore the aftermath of a disaster. The difficulty of creating centimetre-scale synthetic robots has led several research groups to develop designs based on insects, which have been optimised by millions of years of evolution to navigate constricted environments. This line of thought has recently advanced through investigating the potential of controlling insects directly. Though this development may seem more suited to a superhero blockbuster, researchers at North Carolina State University have been able to remotely influence the behaviour of insects. By perturbing the locomotory control systems of Madagascar hissing cockroaches (Gromphadorhina portentosa) in a manner analogous to the reins and bridle controlling a horse, scientists have been able to control their motion and create biological robots, or 'biobots'. This feat was achieved by grafting electrodes to the antennae of anaesthetised cockroaches. The application of a small voltage to the antennae influenced the direction of the cockroach, allowing researchers to induce the insects to follow S-shaped tracks by manually controlling the voltage. This approach was refined by the creation of a computer-controlled system which automatically detects the position of the cockroaches and adjusts their motion accordingly. This more refined mechanism will hopefully allow researchers to adapt the biobots for increasingly dynamic scenarios. The most recent development has been to equip each cockroach biobot with three microphones on circuit board 'backpacks'. The microphones allow the direction of a sound to be established using a complex algorithm. When combined with the computer control system, the cockroaches are able to automatically locate a sound source and move towards it. This breakthrough is key to the development of search and rescue biobots, which is the ultimate goal of the researchers.
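The 'complex algorithm' mentioned above is, at heart, time-difference-of-arrival (TDOA) triangulation: a sound reaches the three microphones at slightly different times, and those delays pin down its direction. The sketch below is a generic far-field model, not the NCSU group's actual code; the microphone spacing and the delay convention are assumptions made for the example.

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second in air

def doa_from_tdoa(mics, delays):
    """Estimate a 2-D unit vector pointing toward a far-field sound source.

    mics   : three (x, y) microphone positions in metres
    delays : [tau_1, tau_2], where tau_i = t_0 - t_i is how much earlier
             mic i hears the sound than the reference mic 0
    """
    (x0, y0), (x1, y1), (x2, y2) = mics
    # Far-field model: (p_i - p_0) . u = c * tau_i, a 2x2 linear system in u
    a11, a12 = x1 - x0, y1 - y0
    a21, a22 = x2 - x0, y2 - y0
    b1, b2 = SPEED_OF_SOUND * delays[0], SPEED_OF_SOUND * delays[1]
    det = a11 * a22 - a12 * a21
    ux = (b1 * a22 - a12 * b2) / det   # Cramer's rule
    uy = (a11 * b2 - b1 * a21) / det
    norm = math.hypot(ux, uy)          # renormalise against measurement noise
    return ux / norm, uy / norm

# Simulate a source 30 degrees off the x-axis, with mics 2 cm apart,
# roughly the scale of a circuit-board backpack (assumed spacing).
mics = [(0.0, 0.0), (0.02, 0.0), (0.0, 0.02)]
theta = math.radians(30.0)
u_true = (math.cos(theta), math.sin(theta))
delays = [((px - mics[0][0]) * u_true[0] + (py - mics[0][1]) * u_true[1])
          / SPEED_OF_SOUND for (px, py) in mics[1:]]

ux, uy = doa_from_tdoa(mics, delays)
print(round(math.degrees(math.atan2(uy, ux))))  # 30
```

With ideal, noise-free delays the two-equation system recovers the simulated 30-degree bearing exactly; a real backpack would first estimate the delays by cross-correlating the noisy microphone signals.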

Researchers have been able to remotely influence the behaviour of insects

One of the key players in the research group is Dr. Alper Bozkurt, Assistant Professor of Electrical and Computer Engineering at North Carolina State University. Dr. Bozkurt explained his research in a press statement. “In a collapsed building, sound is the best way to find survivors...The goal is to use the biobots with high-resolution microphones to differentiate between sounds that matter—like people calling for help—from sounds that don’t matter—like a leaking

Cockroach biobot. Image courtesy of Alper Bozkurt (NCSU)

pipe.” He also proposes that “once we’ve identified sounds that matter, we can use the biobots equipped with microphone arrays to zero in on where those sounds are coming from.” Though Dr. Bozkurt's research may have an admirable end goal, some have questioned the ethics of biobots in general. For example, the commercially available kit, RoboRoach, has been called into question because it allows users to control their own cockroaches after attaching the electrodes themselves. The laboratory researchers anaesthetised the insects before surgically removing sections of antennae, but the facilities involved in this procedure are specialised and not included in the commercial kit. Commentators on the science news website, Live Science, have pointed out that the instructional period that professional neuroscientists undergo would not be completed by those buying the kit. Therefore, the cockroaches are likely to be treated inhumanely. Despite some ethical objections, research into the biobots continues, with researchers recently pioneering an 'invisible fence' which constrains the motion of the biobots to within a certain area. This system is under development and will eventually allow the cockroaches to remain within a certain distance of each other and even autonomously move towards light sources to recharge solar cells and allow extended deployment. The researchers point out in the paper that the surgical implantation of electrodes was “performed adhering to appropriate ethical standards,” citing previous work on the ethical treatment of invertebrates and helping to allay fears of inhumane treatment. Though this is an emerging field of study, rapid progress has been made in the field of biobotics. The ethical problems of the field remain to be resolved, and much work is needed to refine the cockroaches before field-testing or actual deployment, but the idea of controlled insect armies may soon leave the realm of science fiction and become an established science fact. 
Calum Turner is a fourth-year Astrophysics student

regulars: sciatribe

Opening the womb of discussion on human embryonic law

Alyssa Brandt makes the case for better representation in human developmental legislation

A tiny hand twitches, floating in a dark, warm, liquid expanse. It has been developing for nine weeks. It has eyes, a slowly shaping brain complete with lobes, and a beating heart. The fingers are no longer webbed and its tongue is developing taste buds. Though its whole body is only 25 millimetres long, the embryo is undeniably starting to transition from its previously reptilian appearance and beginning to resemble a human infant. One of the heaviest questions to answer is: when do we become human? Is it immediately after conception? Is it a few weeks in, perhaps when there are fully formed eyes, or a heart pumping blood through minuscule vessels? Is it when an infant takes its first unassisted breath? When does a clump of cells become something capable of feeling and learning? Innumerable policies, protests, and writings have hammered the issue relentlessly. Life has been said to begin before fertilisation, when the gametes are formed, suggesting that the potential for life has the same consequence as life itself. Some suggest life begins at conception, whilst others say it is when the organs begin to form, or when it becomes a ‘foetus’ rather than an ‘embryo’ at 10 weeks old. The varying opinions span millennia, which can often complicate the discussion. As society and technology evolve, questions of when humans become human oddly grow more socially relevant.

Image courtesy of Wikimedia Commons

Recent research on human embryos has opened new doors to eradicating genetic diseases and changing the face of human reproduction. However, the research was met with cautious criticism and ethical concerns. Were the researchers tampering with human life? If they were, was this human life equivalent to a breathing human? In the same vein, the wilful termination of a pregnancy is a constant in the struggle to define when a human life begins, reflected in the undulating policies across the world. Whether we like it or not, humans are biological machines. Biology will always be inextricably linked to society. Our challenge is to make sure that the policies we create reflect the population’s values while still upholding basic human rights. Often, our political endeavours around reproductive and embryonic law have a gaping hole, as half the population is not well represented. Women account for 51% of the global population, and they are the ones who bear the responsibility of giving birth to the next generation of humans. Despite this, they make up a meagre proportion of the vast majority of governing bodies. Rwanda is an exception, with 63.8% of parliamentary positions held by women; only one other country, Bolivia, reaches at least 50%. According to the United Nations, a figure of 30% is considered an ‘important benchmark’ for female representation. Scotland seems to be doing a little better, with 35% of Scottish Parliamentary positions going to

women. Thankfully, many governments have been breaking records in the past few years for the number of women in government positions. We may be making progress, but when it comes to reproductive legislation, women are not only underrepresented in the creation process, but often silenced when trying to maintain an open dialogue. They are the ones most directly affected by the creation or modification of such legislation.

Whether we like it or not, humans are biological machines. Biology will always be inextricably linked to society

In addition, science and politics homogenise with difficulty. The southern region of the United States illustrates how a lack of scientific literacy can affect politics, even in a developed nation. When lawmakers do not know how abortion works, or do not possess basic knowledge of human foetal development, laws regarding these issues become inadequate. This can hamper scientific progress and personal freedom. Recognising the importance of different yet relevant perspectives, of scientific evidence, and of the potential benefits involved is the first step in bringing positive change. In the case of human embryonic legislation, whether it concerns scientific inquiry or reproductive health, it is more important than ever to embrace the knowledge and opinions of those directly involved: the scientists and the women (not mutually exclusive) whose work and bodies are being regulated. This is not just an issue of ‘pro-life’ or ‘pro-choice’. It is an issue of making sure that we as humans acknowledge the convoluted nature of biology while ensuring adequate representation.

Alyssa Brandt is an Integrative Neuroscience MSc student studying neural development

regulars: innovation

GM plants - a weapon against excess carbon?

Viktoria Dome discusses innovative strategies that could enhance carbon capture and storage in plants

Human activities, chiefly the use of fossil fuels, are responsible for the emission of nine gigatonnes of carbon into the atmosphere annually. Five gigatonnes of this anthropogenic carbon release can be managed by terrestrial and oceanic systems, while the rest stays in the atmosphere in the form of carbon dioxide and methane, driving the greenhouse effect. To combat this, numerous strategies and innovative technologies are being developed worldwide that could potentially reduce atmospheric carbon. Yet the most promising solution still rests with the natural ability of plants to store carbon—a process perfected over 350 million years of evolution. The distribution of carbon between the atmosphere and land depends largely on the photosynthetic uptake of carbon dioxide by plants. The annual uptake of the global terrestrial system is 123 gigatonnes, of which 120 gigatonnes are returned to the atmosphere almost immediately through plant and microbial respiration, land use, fires, and other disturbances. The residual three gigatonnes account for long-term carbon storage, known as biosequestration. Christer Jansson and his colleagues at the Lawrence Berkeley National Laboratory have proposed increasing this biosequestering capacity of plants. These plants would then act as enhanced carbon sinks, with the aim of doubling the amount of carbon they remove from the atmosphere. To achieve this, Jansson suggests that genetic engineering of plants should be used not only for biofuel production, but also to increase the plants’ capacity to store carbon. During photosynthesis, plants convert carbon dioxide into carbohydrates, which are transported from the leaves (source organs) to branches, stems, seeds, and roots (sink organs) for storage, growth, and cell-wall synthesis.
The carbon absorbed by plants has the potential to stay in the ecosystem for decades or even centuries, representing a powerful tool in climate change mitigation by removing residual atmospheric carbon. To achieve this, researchers need to develop plants that are even more efficient in harvesting light energy and converting carbon dioxide to carbohydrates.
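The mass balance behind these figures is worth making explicit. The sketch below simply re-derives the article's numbers, then adds one illustrative assumption: that the doubled biosequestration Jansson proposes would come on top of today's sinks rather than replacing part of them.

```python
# All figures in gigatonnes of carbon per year, taken from the article.
emissions = 9.0              # anthropogenic release
absorbed_by_sinks = 5.0      # managed by terrestrial and oceanic systems
gross_plant_uptake = 123.0   # annual photosynthetic uptake on land
returned_to_air = 120.0      # respiration, land use, fires, disturbances

biosequestration = gross_plant_uptake - returned_to_air
airborne = emissions - absorbed_by_sinks

print(biosequestration)  # 3.0 Gt stored long-term each year
print(airborne)          # 4.0 Gt left accumulating in the atmosphere

# If engineered plants doubled biosequestration (an extra 3 Gt/yr) and
# existing sinks were unaffected, the annual accumulation would fall to:
print(airborne - biosequestration)  # 1.0 Gt
```

The final number is only an illustration of scale; in reality extra plant uptake would interact with the other sinks rather than simply adding to them.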

In this regard, several objectives have already been identified. One example targets source organs: an attempt has already been made to increase the photosynthetic rate of ‘C3’ plants (such as rice) by introducing genes encoding the photosynthetic pathways of ‘C4’ plants (such as maize). C4 plants have evolved a photosynthetic pathway that enables them to concentrate carbon dioxide, thereby eliminating wasteful reactions with oxygen molecules. Hence, C4 plants are able to fix and reduce carbon dioxide more efficiently than C3 plants. Though early attempts at applying this technique to rice have met with little success, the introduction of C4 genes into C3 plants could potentially increase their rate of photosynthesis and help tackle climate change.

The most promising solution still rests with the natural ability of plants to store carbon

Another potential strategy identifies sink organs as a target. It has been suggested that the rate at which sink organs accept carbon from source organs affects the efficiency of photosynthesis and carbon partitioning, with sink organ strength governing the rate

of carbon transport into storage. Genetic engineering could target enzymes that increase the capacity of plants to transfer carbon by increasing their turgor pressure. Alternatively, engineering transcription factors and other regulatory proteins is an another strategy to increase the sink strength of plants. Finally, plant hormones are important in determining wood development and biomass formation and could be potentially modified to increase the carbon storage of plants. The vast accumulation of excess carbon in the atmosphere causes global warming. Action must be taken to prevent this and the associated physical impacts from happening on our planet. There are many ongoing debates about the best way to capture and store the residual carbon we produce. However, by studying plants, which have naturally evolved an efficient ability to do so, we might find the best solution we need. The introduction of new traits into plants not only provides an opportunity to better understand the process of photosynthesis and carbon transfer, but it could also play a key role in climate change mitigation. There are many strategies being developed to tackle this issue and further research into the field of plant engineering could uncover a promising solution.

Viktoria Dome is a second year Ecological and Environmental Science student with an interest in Plant Sciences

Illustration by Prerna Vohra

Spring 2016 | eusci.org.uk 45

regulars: letters

Dr Hypothesis
EUSci's resident brainiac answers your questions

I have recently switched diets and am trying to be the healthiest I can be. I noticed gluten-free food seems to be a big thing now. Will my health benefit from leaving out gluten?
Healthy Heather

Dear Healthy Heather,

Because the concept of gluten is rather elusive to the public, we need to go back and understand what gluten actually is in order to fully understand the whole story. Gluten is a mixture of proteins found in grains such as wheat and rye. When combined with water, it forms the scaffold for dough and is the ingredient that allows it to rise. It is also gluten that gives bread and cakes that nice, chewy texture. So there is nothing bad about gluten per se. However, the way your body processes gluten can interact harmfully with your immune system if you suffer from a certain condition called coeliac disease.

Proteins in your food are cut by enzymes called proteinases and peptidases in your stomach and bowel, respectively. This is a crucial step for taking up amino acids, the building blocks of proteins. These enzymes do not cut randomly. Instead, they cut at specific sites, but are unable to carry out this function if a protein contains a high proportion of particular amino acids, called prolines and glutamines. This is the case for gluten, which is therefore cut incompletely. The resulting so-called gluten peptides may cause havoc if they fall into the wrong hands.

Our immune system constantly monitors our bodies and recognises molecular patterns such as foreign DNA or amino acid sequences. Specialised cells present these foreign molecular patterns to the naïve cells of our immune system, essentially briefing them for war wherever they find that pattern. This

Image courtesy of FreeDigitalPhotos.net


usually doesn't happen with the food we eat, because proteins are fully broken down into single amino acids that our body reuses. However, people with coeliac disease have an immune system that is activated against gluten peptides. A full-blown immune response against gluten peptides is detrimental because it generally takes place within the bowel, causing chronic inflammation if gluten consumption is continued. This results in diarrhoea, lethargy and, at young ages, growth deficits. If undiagnosed, coeliac disease can lead to a four-fold increased risk of death and a two-fold increased risk of developing cancer. Currently, the only therapeutic option is a strict, lifelong gluten-free diet. It is great that the food industry is providing a multitude of gluten-free options, allowing patients with coeliac disease to enjoy food they would otherwise have to refrain from.

The interaction of gluten and your immune system can be detrimental for you under a certain condition—coeliac disease

However, the prevalence of coeliac disease is only about 1%, yet there is a growing global trend of avoiding gluten even among non-coeliac sufferers. A new disorder called non-coeliac gluten sensitivity (NCGS) is on the rise. While NCGS sufferers report that their symptoms worsen when they ingest gluten, most of these cases are self-reported, casting doubt on the evidence that gluten itself causes the symptoms of NCGS; other components of wheat, not just gluten, may be responsible. Furthermore, a rigorous cross-over study by Peter Gibson's group at Monash University found that symptoms such as pain, nausea, and bloating in patients with self-reported NCGS worsened when they were placed on a 'treatment diet' regardless of whether it contained gluten. In a second experiment in the same study, symptoms worsened even when the treatment diet was identical to the baseline, gluten-free diet. These findings suggest that the symptoms of NCGS may originate from the nocebo effect rather than from an actual physiological response to gluten. The nocebo effect can cause adverse reactions purely through the patient's expectation of how a treatment will affect them, even if it is a sham.

Overthinking your food may be bad for you, but gluten itself is not, unless you suffer from coeliac disease (and then, really do stay away from it!). However, if you do enjoy gluten-free food, there's obviously nothing holding you back from having it.

Dr Hypothesis’ alter ego is science enthusiast and Neuroscience PhD student Chiara Herzog

regulars: reviews

Review: Stitchers
Miguel Cueva reviews Freeform's new sci-fi crime drama 'Stitchers'

What if you couldn't tell time? What if investigators could peek into victims' memories? Freeform's new science fiction crime drama series, 'Stitchers', explores these very questions. The main character, Kirsten Clark, a highly intelligent doctoral student at Caltech, is afflicted by a fictional condition called 'Temporal Dysplasia', which makes her unable to sense the passing of time. Kirsten gets recruited into the CIA's secret 'Stitchers Programme', which hacks into the brains and subconscious of the recently deceased with the hope of using their memories to solve crimes.

The Stitchers Programme is composed of other equally intriguing characters, including Cameron Goodkin, the brilliant neuroscientist who developed the 'stitching' technology, and Linus Ahluwalia, a bioelectrical engineer who specialises in technical communications. There is also Camille Engelson, a talented computer science graduate student and Kirsten's pseudo best friend. These brilliant and very capable characters follow the orders of a former CIA operative and leader of the programme, Maggie Baptiste. Without any formal investigative training, the gang always ends up getting into trouble and hindering the investigation, which adds a comedic undertone. It's always refreshing to see how geeks and nerds try to solve crimes, not with brawn but with wit. The gang escapes perilous situations with ease, sleuthing their way into solving crimes with tidbits of inside information obtained from a corpse's memory.

The main characters are true nerds and geeks, who regularly make science fiction, fantasy, comic book, and pop culture references throughout the series. Whilst the continuous referencing seems forced at times, it makes the characters more endearing to the audience. Throughout the first series of 'Stitchers', they solve some bizarre crimes, including a serial bombing, a mugging gone awry, a mysterious car accident, an epidemiologist's suspicious death and a dodgy suicide.

'Stitchers' is a crime drama that relies heavily on science fiction, and not so much on actual science, to solve crimes. It is a unique and unconventional take on a well-established genre, which is especially appealing for those who enjoy crime dramas but are after something a bit different. This amusing and entertaining show is easy to binge watch, and the socially awkward yet endearing characters make it hard for a sci-fi fan not to love. Stitchers Series I and the ongoing Series II are available through Amazon.

Miguel Cueva is a first-year PhD researcher at SynthSys in the University of Edinburgh

student learning development

Helping students and staff succeed in their current roles and in their future careers, by providing University-wide support for teaching, learning and researcher development:

researcher skills development: research planning, communication skills, professional development, career management, business and enterprise, and more
continuing professional development and practice sharing in teaching, learning and supervision
support for curriculum, programme and assessment design and development

More information can be found at: www.ed.ac.uk/iad



EUSci #19
Issue 19 of the Edinburgh University Science Magazine