
Cambridge University science magazine


Lazy Universe Bioterrorism . Bicycles . Dinosaurs Ethics . Nomenclature . Art

Lent 2013 Issue 26

JFM 1956 - 2013

Journal of Fluid Mechanics

Vital research from the definitive source The University of Cambridge has online access to the Journal of Fluid Mechanics, from Volume 1, Issue 1 until the most recent volume. From January 2013, JFM Rapids will also be available, for short, high-impact papers across the full range of fluid mechanics research.

Access today at






Contents

On the Cover, News, Reviews

Living in Fear: Sarah Smith examines biological terrorism and its effect on science

Senses in Symphony: Shi Khoo and Vanda Ho take a look at the cognitive perspective of synaesthesia

The Journey of the Bicycle: Karsten Koehler explores the history of the bicycle and our understanding of the physics of cycling

One to Another: Alessandro Bertero looks at our increasing ability to change the fate of our cells

Digging for Dinosaurs: Amelia Penny discusses the importance of the fossil record and the impact of fossil-hunters

FOCUS, Lazy Universe: BlueSci explores the universal principle of energy minimisation across the sciences

Science and Policy: Nicola Love looks into the science and ethics of Mitochondrial Replacement Technology

Behind the Science: Amelia Penny explores the expedition of the HMS Challenger

Matthew Dunstan looks back at the history of the naming of elements

Jordan Ramsey looks at the life of one of modern science’s most divisive figures

Arts and Science: Zac Kenton discusses the mathematical basis of the great artist M.C. Escher

Weird and Wonderful

About Us...

BlueSci was established in 2004 to provide a student forum for science communication. As the longest-running science magazine in Cambridge, BlueSci publishes the best science writing from across the University each term. We combine high-quality writing with stunning images to provide fascinating yet accessible science for everyone. But BlueSci does not stop there: online, we have extra articles, regular news stories, podcasts and science films to inform and entertain between print issues. Produced entirely by members of the University, our diversity of expertise and talent combines to produce a unique science experience.


Committee
President: Jonathan Lawson
Managing Editor: Felicity Davies
Secretary: Lizzie Bateman
Treasurer: Tim Hearn
Film Editors: Nick Crumpton & Alex Fragniere
Radio: Anand Jagatia
Webmaster:
Advertising Manager: Fiona Docherty
Events & Publicity Officer: Jordan Ramsey
News Editor: Joanna-Marie Howes
Web Editor: Vicki Moignard



Issue 26: Lent 2013
Editor: Nathan Smith
Managing Editor: Felicity Davies
Business Manager: Michael Derringer
Second Editors: Jonathan Lawson, Philipp Kleppmann, Jannis Meents, Luke Burke, Nicola Love, Rebecca Buckley, Vicki Moignard, Oliver Marsh
Copy Editors: Jonathan Lawson, Luke Burke, Jannis Meents, Theodosia Woo, Robyn Cooper
News Editor: Joanna-Marie Howes
News Team: Chris Creese, Milly Stephens, Ruth Waxman
Reviews: Oliver Marsh, Graham Prescott, Laura Pearce
Focus Team: Zac Kenton, Matt Dunstan, Hinal Tanna
Weird and Wonderful: Laura Burzynski, Laura Pearce, Jonathan Lawson
Pictures Team: Robin Lamboll, Philipp Kleppmann, Jannis Meents, Robyn Cooper, Laura Pearce, Theodosia Woo, Laura Burzynski
Production Team: Robin Lamboll, Philipp Kleppmann, Jannis Meents, Robyn Cooper, Laura Pearce, Theodosia Woo, Laura Burzynski
Illustrators: Alex Hahn, Aleesha Nandhra, Christos Panayi, Nicola Kleppmann
Cover Image: Dr Jim Haseloff

ISSN 1748-6920

Varsity Publications Ltd
Old Examination Hall
Free School Lane
Cambridge, CB2 3RF
Tel: 01223 337575

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License (unless marked by a ©, in which case the copyright remains with the original rights holder). To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/ or send a letter to Creative Commons, 444 Castro Street, Suite 900, Mountain View, California, 94041, USA.

Editorial

Sticks and Stones

As long as there have been scientists, there has been conflict. From

the denial of supernatural causes of disease by Hippocrates to the open clashes between Stephen Hawking and Leonard Susskind about the nature of black holes, arguments seem to be an inevitable part of scientific progress. Even such enigmatic figures as Newton and Einstein have been drawn into public feuds. Newton famously headed an investigation into the dispute between himself and Leibniz over who was first to discover calculus, declared himself the sole and true discoverer, and proclaimed Leibniz a fraud.

Often conflict within science is more underhand than open, particularly in the apportioning of credit for scientific discoveries. It is almost commonplace for key figures in the development of a technology or theory to be dismissed in favour of other individuals whose input is questionable at best. Alexander Fleming, known to many as the ‘Father of Antibiotics’, was a gifted bacteriologist, but he was not the first to describe the action of penicillin, and though he contributed to knowledge of the fungus, he had long since abandoned the study of penicillin by the time its antibiotic properties were successfully established by a team in Oxford. Following the joint success of Florey, Chain, Heatley and others at Oxford in isolating and purifying penicillin, Fleming re-emerged in the media, credited as the sole discoverer of penicillin. The work of the Oxford lab remained relatively unknown, at least in the public eye. In 1945, after much internal debate, Florey, Chain and Fleming were jointly awarded the Nobel Prize in Physiology or Medicine.

Conflict also presents itself as conflict of interest. In 2008, Harald zur Hausen was awarded the Nobel Prize for discovering that the human papillomavirus is responsible for certain forms of cervical cancer. Whilst this was an important discovery, the award was overshadowed by controversy, as parts of the Nobel Foundation were sponsored by a company that produced HPV vaccines.
In this issue of BlueSci, we look at the life of Craig Venter, who continues to court controversy and conflict with many in the scientific world. We also explore the nomenclature of the chemical elements and the history of disputes over naming rights, and we examine conflicting views on the publication of material that could benefit terrorism and on the ethics of mitochondrial replacement therapy. Conflict is the basis of developing scientific knowledge, as one model, supported by new evidence, is debated and often supersedes that which went before. Scientific disputes are important in engaging public interest, gaining media coverage and bringing emotion to what can otherwise be a rather dry and formal area. I hope the following articles capture your imagination. As always, if you would like to get involved, please get in touch.

Nathan Smith
Issue 26 Editor

Synthetic Biology

Haydn King explains the scientific discipline behind this issue’s cover image


Its compact genome makes the liverwort a promising chassis for SynBio


Shown in the image are the repeating air pore structures from the leaves of Marchantia polymorpha, a species of liverwort. The air pores, which are composed of only a few dozen cells, make an interesting study of cell morphology (structure), but it is not these that may bring this common weed into the limelight in the near future. Despite being a nuisance to horticulturalists everywhere, this lower plant has the potential to become the next chassis for an exciting new field: synthetic biology (SynBio).

Synthetic biologists apply common engineering principles such as standardisation to biology. Although these principles have been used to great effect in more traditional disciplines such as structural or mechanical engineering (where the standardisation of parts like the bolt spurred on the industrial revolution), they have not been applied to biological systems in the past. Indeed, had the name not already been taken, SynBio might well have been called genetic engineering. While the two fields sound similar, their philosophies are radically different. English comedian Simon Munnery effectively summed up the distinction by saying that “genetic engineering isn’t really engineering at all. The engineering equivalent would be to throw a load of steel and a load of concrete into a river and, if someone managed to walk over it, calling it a bridge”. Genetic engineering uses genes in new contexts, but makes no attempt to understand the underlying scientific principles: you throw everything you can at the problem and see what sticks. In SynBio, engineers find and characterise interesting and useful biological functions from a wide range of organisms, such as the bioluminescence of the firefly or the arsenic detection systems found in many bacteria. With these parts, new ‘programs’ can be written in DNA code to perform novel functions.
An important example is the production of artemisinic acid, a chemical precursor to artemisinin, the most effective antimalarial available. Although Artemisinin Combination Therapies (ACTs) are highly effective, malaria is still a major killer, with between 2,000 and 3,000 people dying of the disease each day in 2010. Part of the reason why ACTs have not yet proved fully effective is that artemisinin is rather hard to come by. It is found naturally in sweet wormwood (Artemisia annua) and can be extracted directly from the plant or synthesised chemically, but

both methods are rather time-consuming, expensive and unscalable. In 2006, a team of synthetic biologists successfully introduced the metabolic pathway responsible for biosynthesis of artemisinic acid into yeast, a faster-growing and easier-to-harvest organism. When the resulting drug becomes available next year, it will be produced on an industrial scale, in a process not unlike brewing, at a cost per treatment of only $0.25.

So how does our cover come into this? Plants have a natural advantage over yeast: yield. When deciding how much energy to put into the artemisinic acid pathway, a delicate balance had to be struck: too much, and the added effort would inhibit growth of the cells; too little, and there wouldn’t be enough artemisinin produced to make the process worthwhile. It is hoped that by using plant chloroplasts as a host this will become less of a problem. Chloroplasts are organelles (simpler compartments within a cell) which perform photosynthesis and produce large amounts of energy for the cell from sunlight. Despite their comparative simplicity, chloroplasts are clearly well suited to producing things in bulk; for example, the enzyme RuBisCO, found in chloroplasts, is probably the most common protein on Earth. Each cell can contain hundreds of chloroplasts, each one containing up to a thousand copies of the chloroplast genome. In this way, a single cell could carry thousands of copies of the artemisinic acid pathway, each one directly powered by photosynthesis. Exploiting this natural ability would be of key importance to projects like the Artemisinin Project and could make mass-producing important biological substances significantly cheaper than current methods.

Haydn King is a 4th year undergraduate in the Department of Engineering

Manfred Morgner



News

Does Earth need planetary sun block?

Climate scientists are constantly trying to come up with new ways to delay or reduce climate change. ‘Solar dimming’, for example, has been put forward as a way to delay temperature rises and could be achieved by brightening clouds or putting mirrors into space. However, computer simulations at the Department of Energy’s Pacific Northwest National Laboratory suggest that the success of this strategy would depend on the Earth’s ‘climate sensitivity’ to increased carbon emissions. For the first time, solar reduction methods have been explored in a computer model which takes into account the effects of carbon emissions on climate change. This model has shown that the change in the Earth’s temperature, and therefore the need for solar dimming, is intrinsically connected with the sensitivity of the Earth to an increase in CO₂ concentration. The researchers compare this to the sensitivity of humans to sunlight: some people may be fine without sun block whilst others will burn in minutes. If the Earth is only moderately sensitive, solar dimming may be unnecessary. Conversely, if the Earth is highly sensitive to carbon emissions, solar dimming may provide an effective means of reducing temperature rises. The researchers have now devised a metric to quantify how much solar radiation management would be needed to keep warming under a particular temperature change threshold. This can then be used to assess the potential success of any solar dimming attempts. DOI: 10.1007/s10584-012-0577-3 MS
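The relationship the model describes, that the amount of dimming needed scales with how sensitive the climate is, can be illustrated with a toy zero-dimensional energy-balance sketch. The linear warming formula and all parameter values below are illustrative assumptions for this sketch, not figures or methods from the PNNL study:

```python
# Toy sketch: warming is approximated as dT = sensitivity * net_forcing,
# where net forcing is the CO2 forcing minus whatever solar dimming offsets.
# All numbers are illustrative assumptions, not values from the PNNL model.

def dimming_needed(co2_forcing, sensitivity, target_warming):
    """Solar forcing reduction (W/m^2) needed to cap warming at the target (deg C)."""
    required = co2_forcing - target_warming / sensitivity
    return max(required, 0.0)  # no dimming needed if the target is met anyway

CO2_FORCING = 3.7  # roughly the radiative forcing of doubled CO2, W/m^2
TARGET = 2.0       # warming threshold, deg C

# A weakly sensitive Earth needs no dimming; a strongly sensitive one needs a lot.
for sensitivity in (0.5, 0.8, 1.2):  # deg C per W/m^2, hypothetical values
    print(sensitivity, round(dimming_needed(CO2_FORCING, sensitivity, TARGET), 2))
```

The sketch mirrors the researchers’ sunburn analogy: the same CO₂ forcing demands very different interventions depending on the sensitivity parameter.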

Follow @BlueSci on Twitter for regular science news and updates

Beluga whale mimics human speech

In Current Biology, Ridgway

and colleagues report the startling finding of human speech mimicry by a beluga whale named NOC. Before reaching sexual maturity, NOC learned to imitate the phrases of his human handlers. The discovery was made when a diver thought someone had instructed him to get out of the water, but it was in fact NOC repeating the word “out”. To investigate, Ridgway and team compared NOC’s calls with human speech patterns and discovered that they shared the same amplitude rhythm and frequency, and that NOC’s mimicking calls were several octaves lower than normal whale calls. Whales make sounds by passing air through ‘phonic lips’, a structure similar to the human nasal cavity. When the membranes of the phonic lips touch, they produce vibrations in the surrounding tissues that can be controlled with great sensitivity. The air is then sent to the vestibular sac located near the blowhole for later re-use or release. Experiments suggest that NOC varied the pressure in his nasal tract and over-inflated his vestibular sacs in tandem with muscular adjustments to his vibrating phonic lips to effectively mimic human speech. This study builds on earlier anecdotal reports of wild whales making calls that sounded like human speech. The question is now whether this is an attempt to communicate with humans or simple mimicry. DOI: 10.1016/j.cub.2012.08.044 CC

Katy Silberger

Fighting cancer with exercise

Exercise has been shown in the past to correlate with a reduced risk of both primary and secondary incidents of cancer, as well as to improve the prognosis of cancer patients. However, until now there has been little insight into the potential causes behind this interesting correlation. Dr. Bilek and colleagues at the University of Nebraska Medical Center and a team at the Rocky Mountain Cancer Rehabilitation Institute have analysed the immune response cells in the blood of cancer survivors both before and after a 12-week exercise programme. The results show that following exercise, a significant number of the cells had transformed from a senescent form into a naïve, more readily active form that is quicker to respond to invading infections. Dr. Bilek reports that post-chemotherapy analysis of blood samples often shows a high number of these immune cells in the senescent form, making it harder for the immune system to fight off future cancer. Accordingly, if exercise helps to rebuild the population of responsive immune cells ready to fight infection, this could contribute to the inverse association we see between exercise and the risk of developing cancer. For-the-Press/releases/12/39.html RW

Microbe World


Reviews

How the Hippies Saved Physics – David Kaiser

W. W. Norton & Co, 2012, £12.99

What do Bell’s non-locality theorem, hot-spring bathing, and the spirit of Harry Houdini have in common? According to science historian David Kaiser, they all played a part in a major shift in thinking amongst quantum physicists, one which leads right to our contemporary view of the subject. How the Hippies Saved Physics deals with a period in American science beginning towards the end of the Cold War and lasting up until the present day. Kaiser charts the increasing influence of the ‘Fundamental Fysiks Group’ at Berkeley within a physics profession dominated by a non-philosophical ‘shut up and calculate’ mindset. Through ever-expanding social connections with prominent public groups, CIA figures enquiring about the military potential of telepathy, plus the rising sales figures of textbooks such as The Tao of Physics, their major questions began to pervade the consciousness of the public and mainstream physicists alike. While on one level this is a piece of exemplary historical research, it also offers insights into key questions pervading studies of science: the border between acceptable and pseudo-science; the impact of unconventional thinking; the role of non-scientists’ networks of communication. For anyone looking for a highly intriguing and very entertaining depiction of the growth of an abstract science within real-world settings, Kaiser has provided an exemplary study. OM

Wild Hope – Andrew Balmford

University of Chicago Press, 2012, £17.00

One of the major successes of the conservation movement has been to create mass awareness of the biodiversity crisis we have created through our actions. In Wild Hope, Andrew Balmford argues that while the creation of awareness has undoubtedly been important, there is a risk of convincing people that the situation is so severe that any actions to improve it will be futile. He addresses this balance by highlighting examples of conservation success stories, with the aims of both inspiring hope and analysing why some conservation projects succeed where so many others fail. By considering a range of examples, from community-led conservation of cloud forests in Ecuador (which saves both rare species and water supplies) to protecting India’s rhinos with armed guards, or reforming environmental protection laws to persuade previously hostile American landowners to promote threatened woodpeckers on their properties, Balmford shows that solutions will need to be carefully tailored to each location and problem. There are some important constants, such as leadership, tenacity, bold thinking and good research, that are present in successful conservation movements. This book is everything one could want from a popular conservation book: well written, informative, passionate, and inspiring. GP

It’s Not Rocket Science – Ben Miller

Sphere, 2012, £12.99

What is the Higgs particle? What is the science behind baking? And is there intelligent life elsewhere in our galaxy? These are just some of the questions Ben Miller tackles in his book It’s Not Rocket Science. This book is brimming with detail and will enlighten even the most able-minded. Perhaps best known as one half of the comedy duo ‘Armstrong & Miller’, Miller covers topics ranging from the workings of the Large Hadron Collider (“a circular underground racetrack for protons”) to cosmology, evolution and genetics, food science and climate change. Although the ideas and theories behind the science are complex, Miller manages to make them seem simple and accessible, using hand-drawn diagrams, references to popular culture, and humour (as one might expect!). He also discusses the phenomena we still don’t know or understand and conveys how important and exciting scientific research is. The chapter on climate change is a highlight. He presents the evidence for and against man-made climate change in a balanced manner, as opposed to the polarised arguments often heard in the media. In addition, the author’s anecdotes make for particularly entertaining reading. His passion and enthusiasm for science is obvious from the outset and makes reading this book a pleasure. LP





Living in Fear

Sarah Smith examines biological terrorism and its effect on science

“Don’t panic” – these cautionary words, made famous by Douglas Adams in his book The Hitchhiker’s Guide to the Galaxy, should perhaps be engraved above the door of the National Science Advisory Board for Biosecurity (NSABB). Its knee-jerk reaction to censor the publication of two papers, each outlining how avian influenza, or bird flu, might be engineered to spread more easily between humans, came close to causing a ‘panic pandemic’.

Highly pathogenic avian influenza (HPAI) H5N1 virus can infect humans and even causes death in some cases. However, this virus is currently unable to transmit between humans in the way that normal flu can (through coughing or sneezing) and has only been transmitted to people who have had contact with infected birds. Several research groups have been investigating the spread of flu in ferrets (the best model of flu in humans) with the goal of understanding the likelihood of a bird flu pandemic.

Cynthia Goldsmith

The H5N1 virus (shown in gold) has caused the death of thousands of people and millions of birds

Ron Fouchier, principal investigator of one of these groups, won the battle to report his recent findings when his paper “Airborne transmission of influenza A/H5N1 virus between ferrets” was published in full in Science, overriding the NSABB recommendation that the manuscript be published in a redacted form. By withholding key details that would allow the paper

to be replicated, the cornerstone of modern scientific research would have been compromised: all findings and methods are published in full in a peer-reviewed journal to encourage professional scrutiny, critique and replication, thereby ensuring scientific accuracy and progress.

The NSABB is a federal committee composed of representatives of the scientific community, who unanimously voted for censorship due to the potential to misuse the findings. The NSABB maintained that it was possible for terrorists to use this research to create a highly infectious influenza strain that could be used in a bioterrorist attack. The committee was particularly cautious in this case because HPAI virus is reported to kill over half of the people it infects, in contrast to 1 in 1000 for a typical flu infection. This is comparable lethality to the ‘Spanish’ flu outbreak of 1918, which killed more than 20 million people. Estimates of lethality are, however, likely to be over-estimates, as they only include patients who attended hospital with acute symptoms, thus discounting all those who have mild responses to the virus.

The Centers for Disease Control and Prevention have produced a list of agents that could be abused by bioterrorists, including infective viruses and bacteria, such as influenza, which spread through a population infecting many people long after the initial attack, and isolated toxic chemicals, for example Botulinum neurotoxin, which only affect those directly exposed.

Surprisingly, bioterrorism has been around for a long time. Hannibal, one of the most successful military commanders of antiquity, was known to fill pots with venomous snakes and throw them onto the decks of rival boats during naval warfare. In the 14th century, Tatar forces hurled corpses infected with plague (Yersinia pestis) over the city walls of Kaffa during a siege.
By contrast, there have been only four incidents of bioterrorism in the ‘modern’ era, two of which resulted in fatalities, both from the use of anthrax (Bacillus anthracis). In 1979, at a Soviet military facility, a technician forgot to exchange a crucial filter on a machine holding powdered anthrax. The machine was switched on and dispersed a cloud of spores over the city of Sverdlovsk, killing 100 people. Subsequently, in 2001, letters laced with anthrax were sent through the mail to a number of media offices and US senators by a rogue employee of Fort Detrick, a US defence laboratory, causing five deaths. In both cases the release of anthrax was from an officially sanctioned facility that was permitted and equipped to safely contain and handle the bacterium. Without this, it would have been unlikely that the bacteria could be grown without infecting the person handling them.

Biological attacks can fail to incite terror when they are mistaken for natural events

The two remaining, non-lethal attacks were carried out by terrorist groups; the only reason we know this is because they confessed afterwards. In a ploy to prevent people voting in the 1984 Wasco County elections, followers of the Rajneeshee group deliberately caused food poisoning in 751 people by contaminating salad bars in 10 Oregon restaurants with bacteria (Salmonella enterica). Although 45 people were hospitalised, none died, and the authorities assumed the outbreak was accidental until a year later. This case highlights a key flaw in bioterrorism: attacks can be mistaken for natural events, negating the desired effect on the public psyche. Nine years later the Japanese cult Aum Shinrikyo attempted an attack using airborne anthrax. The attack failed because the cult had used the strain of the bacterium used for vaccinations, which cannot cause disease. This blunder demonstrates that a bioterrorist needs a strong scientific background to carry out an attack successfully. That makes a total of 105 deaths known to be caused by biological agents since the turn of the 20th century. The comparisons between this figure and the lives lost in terror attacks such as 9/11 need not be laboured to make a point. These examples illustrate the difficulties of manipulating biological agents. Without secure handling facilities and specialist knowledge,
potential bioterrorists are likely

to accidentally harm themselves without infecting others, and certainly without making a political point or inciting terror.

Previous research into the ‘Spanish’ influenza virus further emphasises the irrational nature of the NSABB’s response to bird flu research. Several key papers on ‘Spanish’ flu have been published in full, although it ostensibly poses a similar bioterror threat, if judged by the NSABB’s standards. Censorship only attracts attention to the potential hazards and further incites public fears regarding the possibility of insidious use of scientific research. By blocking communication of scientific ideas we risk being unprepared for the natural progression of influenza towards a highly lethal and contagious form, which is much more probable than a successful bioterrorist strike.

Paul Keim, director of the NSABB, controversially said, “I can’t think of another pathogenic organism that is as scary as this one”, in reference to HPAI. This is a typical response to the media hype that has surrounded new forms of flu virus in recent years, inciting public fears. Understanding how these viruses spread is crucial to preparing for a pandemic if one happens. For this approach to work, all the information needs to be available to scientists. This need is greater than the remote chance that the information will be used in destructive acts.

Were the NSABB’s concerns about “Airborne transmission of influenza A/H5N1 virus between ferrets” justified? Three months after the call for censorship, the board reversed its decision, agreeing that revised versions of the papers should be published in full. Perhaps they allowed their initial fear to cloud their judgment when weighing the benefits of this research against the actual threat of bioterrorism in the modern age.

Sarah Smith is a 3rd year PhD student studying virology at the Wellcome Trust Sanger Institute






Senses in Symphony

Shi Khoo and Vanda Ho take a look at the cognitive perspective of synaesthesia


Synaesthesia has been likened to the psychedelic hallucinations experienced after taking drugs such as LSD


To musician Edith, sound is a gastronomic experience. A major third literally tastes sweet, a perfect fourth like mown grass. But if tasting musical intervals were not strange enough, she also sees notes as colours: a C looks red, an F-sharp looks violet. Edith has a condition known as synaesthesia, derived from the Greek for “union of the sensations”. It means that sensations in one modality cause additional experiences in a specific second modality that was not directly stimulated.

Synaesthesia has been described in the scientific literature for over 300 years. In 1689, John Locke described “a studious blind man [… who] bragged one day that he now understood what scarlet signified. [...] It was like the sound of a trumpet”. It was only in 1883 that Francis Galton investigated the experiences of synaesthetes for the first time. The majority of individuals had similar histories: they experienced synaesthesia from an early age, but descriptions of such experiences were often met with ridicule, discouraging self-expression even though the experiences persisted. Within the scientific community, research into synaesthesia was neglected when the study of behaviour fell out of favour and neurophysiological studies took centre stage, since the only available data on which to base the research were the subjective accounts of those who reported their experiences.

A breakthrough came in 1987, when Baron-Cohen devised a Test of Genuineness (TOG) to distinguish synaesthetes from controls based on the stable pairings of synaesthetic inducers and concurrents. Synaesthetes show a 70% consistency in cross-modal percepts compared to 20-30% in controls, providing for the first time a reliable diagnosis for synaesthesia and establishing its concrete existence.

Before considering the aetiology of synaesthesia, it is important to recognise this phenomenon not

as a curious abnormality, but as a variant of normal human perception. A normal everyday example of such cross-modal perception is the perception of our body in space, where visual, somatosensory and vestibular signals, transduced by anatomically separate sensory systems, are combined into a unitary, non-ambiguous representation. Like normal percepts, synaesthetic percepts feel real; they are not mere metaphorical comparisons of one modality with another.

Perhaps because of the vividness of the percept, parallels have been drawn between synaesthesia and hallucinations, commonly experienced in psychotic conditions or under the influence of psychedelic drugs like LSD. Synaesthetic percepts have also been compared to visual after-images, the illusion of an image that persists after exposure to the stimulus has ceased. Imaging studies, however, have found that the primary sensory areas of the brain involved in colour vision are not activated in synaesthetic experiences, making it unlikely that the neural substrate of synaesthesia is a true colour perception.

In keeping with the notion that synaesthesia is a variant of perception, it can manifest itself in different ways. For example, synaesthetes show a spectrum of phenotypes in the vividness of the percept: ‘projector’ synaesthetes view their percepts in the external world, e.g. they see a splash of red when hearing the note C, while ‘associators’ see them in the mind’s eye, only ‘knowing’ that C is red.

The very concept that one sensory experience can trigger another implies that sensory integration occurs at some point in the neural pathway that gives rise to the abnormal perception. The question of how synaesthesia arises can thus be divided into two parts: what are the differences in the brain that lead to synaesthesia, and how do these differences arise? Differences that lead to abnormal sensory integration can be due to either anatomical or functional causes


Some synaesthetes experience colour sensations when they perceive sounds and listen to music

Derek John Lee

– there are either aberrant physical connections in the brain, or atypical functional activity present in normal cerebral circuits. Theories concerning the anatomical basis propose cross-connections at different levels of the neuronal signalling pathways that process sensory information. These can occur locally between areas processing different aspects of a stimulus (for example colour versus shape), at a site where modalities converge (a ‘multisensory nexus’), or by feedback from a higher to a lower centre of the same modality. Brain connections can be measured with a technique known as Diffusion Tensor Imaging, where detection of coherent diffusion of water molecules indicates better connectivity. The evidence so far suggests that synaesthesia is, at least partially, caused by increased anatomical connections.

An ideal example to illustrate this is grapheme-colour synaesthesia. The sites that process graphemes (word shapes) and colour lie adjacent to each other. Functional magnetic resonance imaging (fMRI) studies have shown increased connectivity between these regions in synaesthetes, suggesting the involvement of local cross-connections. At the same time, the parietal lobe - a multisensory nexus - also shows increased cross-connectivity. Evidence for the involvement of this region comes from the fact that inhibiting it can attenuate the interference of synaesthetic colour in naming the real colour of a grapheme.

Theories about anatomical connections are appealing because it is easy to visualise how cross-connections at different levels in the processing stream could explain two different phenotypes of the phenomenon: higher and lower synaesthesia. For higher synaesthetes, the conceptual properties of a grapheme trigger colours, whereas for lower synaesthetes it is the physical properties that do so. This means that a higher synaesthete would see both the digit 4 and the Roman numeral IV in the same colour, while a lower synaesthete would not.
Differences in anatomical connections could even explain the variations in vividness of synaesthetic percepts: projectors have been shown to have greater connectivity in the parietal lobe compared to associators.

The strength of the evidence for anatomical theories does not rule out the possibility that functional abnormalities also contribute to synaesthesia. Recent research proposes that consciousness arises when a threshold number of neurons process the same information. This could mean that when processing pathways are hyperactive, additional modalities that are not normally registered could become conscious, leading to synaesthesia. One such example is mirror-touch synaesthesia, where tactile sensations are experienced when watching another person being touched. In these individuals, “empathy neurons”, or “mirror-neurons”, are hyperactivated. This might suggest that synaesthesia is not a consequence of abnormal neuronal pathways but of a malfunction in normal pathways that are common to everyone.

Despite these controversies and as-yet unexplained observations, one question remains: how does synaesthesia develop? One current hypothesis is that it has developmental origins and is due to genetic mutations which affect neuronal pruning. This regulatory process lasts from birth to sexual maturation and fine-tunes the neuronal structure of the brain by reducing the number of neurons and synapses, thus regulating anatomical and functional connections. The idea is that all babies are born synaesthetic, and that defects in neuronal pruning lead to synaesthesia persisting into adulthood.

While the shroud of mystery that once surrounded synaesthesia may have been lifted, our understanding of it is far from complete. Firstly, it seems far too simplistic to suggest that the diversity of synaesthetic experience can be explained by a single mechanism. On the other hand, the fact that different members of the same family can inherit different forms of synaesthesia could mean that some forms share common neurological mechanisms. Further exploration of the genetic basis of synaesthesia will clarify the different hypotheses, and promises to unravel some of the secrets of our consciousness, perception and cerebral development.

Shi Khoo is a 3rd year student in the Department of Physiology, Development and Neuroscience. Vanda Ho is in Stage 1 of the Clinical School

Aleesha Nandhra

The Journey of the Bicycle

Karsten Koehler explores the history of the bicycle, and how our understanding of the physics of cycling has developed over time

For centuries, inventors have searched for a swift carriage that frees man from relying on animals. In 1813, more than 70 years before the invention of the modern automobile, Karl von Drais, a forestmaster from Mannheim (in modern-day Germany), built a four-wheeled vehicle seating up to four people and powered by winding cranks using arms and legs. Believing he had found the solution for human-powered transport, he felt confident enough to demonstrate it before the state leaders assembled at the Congress of Vienna. Patent examiners were less impressed, and denied that any ground had been gained over walking.

But then something unexpected at the other end of the world gave prominence to the need for new means of transport. In April 1815, the volcano Tambora, in the Sunda Islands of Indonesia, erupted, killing tens of thousands and throwing tens of millions of tons of sulphate and 160 cubic kilometres of volcanic ash up to 43 kilometres into the stratosphere, where some of the finer ash particles remained for years. This blocked sunlight from reaching Earth’s surface, and could well explain the global drop in temperature of 0.4-0.7 °C recorded in the summer of 1816. The cooling


A two-wheeler seating the rider on a saddle


was strongest on the North American east coast and in Western Europe, with summer temperatures 2 °C below average, resulting in 1816 being dubbed “the year without a summer”. Frosty springs caused widespread crop failures, and grain prices in Strasbourg (over the Rhine from Mannheim) almost tripled between 1815 and 1817. Grain scarcity not only triggered widespread famines in Europe and North America, but also a transport crisis, as land transport almost entirely relied on grain-fed horses.

Following the failure of his design, Drais became convinced that kick propulsion was not only the natural, but also the most efficient way to use human power, making mechanical drives unnecessary. Besides, all contemporary drives were not only cumbersome but also inefficient at harnessing human power: they were built for hand-propulsion, as it was not yet known that the legs are far more powerful at exerting a circular motion. Drais’ insight was that a much lighter construction would reduce the effort required to move it so greatly that a mechanical drive would no longer be needed, and moving about would be more fun. No longer needing a heavy frame to carry a mechanism around, the machine could be built much lighter, and consequently the whole paradigm shifted to a completely new design: a two-wheeler seating the rider on a saddle, who moved the vehicle forward in a running motion. This velocipede (“swift foot”) used up-to-date carriage technology and was constructed from well-seasoned ash with iron-shod wheels, brass bushings in the wheel bearings, and even a brake (a sail was available as an accessory), and at 22 kilograms was not much heavier than some of today’s bikes.

Though believing in a “natural” mode of propulsion, Drais’ new invention introduced a principle unseen in nature: a machine that is self-righting when set in motion! It could be stabilised by steering the front wheel, which was even possible


1816/17, but increased demand for human-powered transport might have given additional economic impetus: in 1817, the Dresdner Anzeiger “hoped” that with the reduced dependence on horses brought about by the running-machine, “the price of oats will fall in the future”. Knock-offs of the running-machine were produced abroad, such as the British-built “dandy-horse”. As contemporary roads were unpleasantly bumpy, urban riders resorted to sidewalks, endangering pedestrians (especially when riding without brakes). Consequently, riders were soon banned from urban sidewalks from Philadelphia (1819) to Calcutta (1820). Moreover, as people were not generally familiar with balancing such a machine, several public demonstrations of the Draisine ended in broken bolts and subsequent derision from disappointed users and caricaturists. By 1818, grain prices had fallen back to pre-crisis levels, reducing the economic impetus. Aspiring engineers now looked to more promising targets, such as railways, and Drais was among them; the two-railed human-powered handcar is also named the Draisine after its inventor.

The velocipede remained in obscurity for forty years, until French mechanics Michaux and Lallement added pedals to the front hub, and by 1890 bikes had already adopted the diamond shape that remains their most common silhouette today. A twist of history is that an invention inspired by the need to combat the effects of global cooling might be useful in fighting global warming. Though Drais’ invention failed to raise general interest, he can be credited with having introduced the principle of a self-stabilising two-wheeler, a concept of balance not found in nature. But despite nearly 200 years of cycling history, the mechanisms keeping a bicycle upright are still not completely understood.


Public Domain

with both feet off the ground. A two-wheeler cannot stand upright unsupported, and so has low static stability compared to a four-wheeler. However, once it starts moving forwards (at about 21 km/h on a modern bike), the two-wheeler self-stabilises and gains dynamic stability, even if the rider does not hold on to the handlebars, or is not sitting on it at all, as seen in Jacques Tati’s film Jour de Fête (1949). Moreover, a two-wheeler has much higher dynamic stability in sharp turns than a four-wheeler, as the cyclist can balance by leaning into the turn. It is unknown how Drais got the idea that two-wheelers self-stabilise, but he might have learned balancing while ice skating. He might also have learned intuitively to set his velocipede in motion slowly by driving in wavy lines, falling from the left foot to the right and back, until the machine gained enough speed to stabilise itself, just like a child on a balance bike today.

The physics of bicycle self-stabilisation was first described analytically in the 1890s by French mathematician Emmanuel Carvallo and Cambridge undergraduate Francis Whipple. The first effect held responsible for bicycle stability is the gyroscopic effect: the angular momentum of the spinning wheels causes the main axis of the bike to return to its initial position after disturbances. Another stabilising force, also involved in making turns, is the trailing effect: the front wheel contacts the ground behind the steering axis of the bike and automatically aligns itself with the direction the bike is travelling in. However, neutralising both these effects is not enough to build an unrideable bike, as David E. H. Jones’s tortuous efforts showed (using counter-spinning wheels against the gyroscopic effect, and a trail-less fork). In 2011, Dutch researchers demonstrated that the same is true even for a riderless bike.
Altogether, at least 17 different parameters are crucial for keeping a bike upright, from each wheel’s radius and mass, to the position of the centre of mass, to the angle of the steering axis — and this does not even take the correcting actions of the rider into account.

On Thursday, 12th June 1817, the velocipede was demonstrated to the public: Drais left his home in Mannheim and made his way towards Schwetzingen. After about 7.5 km on the best-paved road of Baden, he turned back home, taking little longer than one hour. The running-machine offered two to three times the speed of a pedestrian for the same effort, as 4-5 metres could be covered with each step. One contemporary noted: “The rider pushes the wheels along when they won’t go alone and rides them when they will.” We don’t know if the invention of the running-machine was directly inspired by the food crisis of

Karsten Koehler is a Biochemist and Postdoctoral Researcher in the Department of Pathology

Aleesha Nandhra

One to Another

Alessandro Bertero looks at our increasing ability to change the fate of our cells

Cell nuclei from mature frogs can be placed into tadpole eggs


Despite being a theme in many works of science fiction, metamorphosis (from the Greek meta, “change”, and morphe, “form”) is hard to find in any human biology textbook. We are all familiar with the notion of a caterpillar transforming into a butterfly, but we firmly believe that such wizardry is peculiar to bizarre insects, exotic fishes and monstrous superheroes. While this holds true for the normal life cycle of a human being, recent advances in the fields of stem cells and developmental biology suggest that we are far less fixed than we previously realised.

It is well known that the many hundreds of different cell types that provide the building blocks of our body all possess identical genetic material. Unsettling as it was at the time of its discovery, this apparent paradox was eventually reconciled by the strenuous work of two generations of geneticists, who proved that tight regulation of the expression of our genetic code is the key to the many observable differences in cell identity and specialisation. A vast array of proteins silently pulls the strings of our cellular destiny by interacting with our DNA, sculpting its structure in order to

“hide” the undesired pieces of genetic information. As cells progress from the primitive state of the early embryo through the different stages of development, this process gradually defines the different colours, shades and tints of our cellular palette, restricting the expression of genetic traits down to the minimum required for each specific cell function. This elegant hierarchical model implies that each cell gradually loses its ability to become many different kinds of cell, by choosing to proceed down a particular branch of development at each crossroad of the cell-fate highway.

However, as with every good rule, there are exceptions. Curiously enough, the first and most notable of these incongruities was observed even before the establishment of the model itself, when John Gurdon completed the first successful cloning experiment. He showed in 1962 that the fate of an otherwise lifeless, immature tadpole egg whose nucleus had been removed could be rescued by implanting the foreign nucleus of a mature frog cell, thus artificially creating life. The profound implication of this experiment was that the genetic information previously “hidden” in the mature donor could be successfully utilised by the DNA-deficient tadpole egg, showing that the status of the adult cell is not irreversible. More recently, this notion was expanded by the work of Shinya Yamanaka, who showed how a similar

Geoff Gallice

Vicky Brock

Butterflies metamorphose, but can these changes happen in human cells?



as the scar is not able to contract, which imposes an increased burden on the rest of the muscle. This situation gradually evolves into heart failure, as the oxygen supply to peripheral organs becomes insufficient. Unfortunately, this is relatively common in Western countries, and a heart transplant is the only radical solution to the problem. Any development that aims at regenerating the damaged muscle in the early phases after a heart attack would therefore be very useful.

Deepak Srivastava’s and Eric Olson’s research groups believed that if they could make the cells in the scar “transform” into tissue that could contract, the lifespan of the heart after a heart attack would be increased. To this end, they genetically engineered modified (non-infectious) viruses to carry the genetic information required for the expression of several “muscle-transforming” proteins, and injected them into the affected areas of the hearts of mice. Strikingly, not only did they observe some degree of transformation of scar cells into contractile muscle tissue, but this was sufficient to reduce the cardiac dysfunction in the mice.

Micrograph of the heart after a heart attack


“reprogramming” of adult cells to a primitive state can be achieved by exposing a cell to a cocktail of only four proteins. Notably, Gurdon and Yamanaka were awarded the 2012 Nobel Prize in Physiology or Medicine, an acknowledgement of the tremendous impact of their discoveries.

While Yamanaka’s work has established the current paradigm of cell-fate manipulation by the use of only a few defined factors, there are many other, less famous examples of similar processes. As many as eighteen years before Yamanaka’s breakthrough, the pioneering work of Andrew Lassar proved that the expression of a single protein called MyoD was able to transform cultured human skin cells into muscle — a process we could call “metamorphosis”. More recently, fuelled by the hopes raised by previous studies, an army of scientists rushed towards the discovery of a medical philosopher’s stone: the ability to convert skin cells into clinically desirable cell types. Unlike their ancient counterparts, these modern alchemists were successful in their attempts, and the metamorphosis of skin into blood, liver, pancreas, brain and heart is no longer a fantasy of science fiction writers.

The discovery of cell-fate manipulation is certainly Nobel Prize-winning, but does it have any practical uses beyond the creation of frogs and Frankenstein? The answer is unequivocally yes, with applications extending throughout the medical and biological community. Firstly, cell transformation allows scientists to create experimental tissues in which to study human diseases, circumventing the difficult task of acquiring precious samples from human donors. In addition, these same cells could be transplanted into patients to regenerate damaged tissues, with scientists currently investigating the possibility of introducing insulin-producing cells into the pancreas of diabetics and neurons into the brains of Parkinson’s patients.
Finally, an even more ambitious approach would be to instruct the cells of our own body to transform into a desired tissue that needs to be regenerated. This would bypass the need to isolate and cultivate the cells in a laboratory, dramatically reducing the time, cost and impact on the patient. Despite sounding like the plot of a science fiction movie, this kind of medical treatment might be closer than we would ever have dreamed. Earlier this year, two separate studies independently reported the successful in vivo reprogramming of cardiac non-muscle cells into contractile tissue. After a heart attack, the heart muscle that suffered a prolonged lack of oxygen dies and is replaced by cells of the connective tissue, which form a scar. This mechanism preserves heart function over the short term, but eventually leads to dysfunction

Is this the dawn of a new generation of regenerative medicine-based treatments? Is this the first step towards human immortality? To the first question we can probably answer yes, though scientists stress that even the most optimistic estimates for this kind of treatment to make the leap from laboratory bench to patient’s bedside cannot be shorter than ten or twenty years. Much more research will be needed before we can properly understand and control cellular reprogramming in a safe and efficient way. As for human immortality, we can leave it to the words of the famous science fiction novelist Arthur C. Clarke: “The only way of discovering the limits of the possible is to venture a little way past them into the impossible”.

Alessandro Bertero is a 1st year PhD student in the Department of Surgery


Digging for Dinosaurs

Amelia Penny discusses the importance of the fossil record, and the impact of fossil-hunters on our historical knowledge

In May of this year, a dinosaur fossil hit international headlines. The specimen, a beautifully preserved Mongolian tyrannosaur called Tarbosaurus bataar, had sold at auction in Manhattan, New York for US$1.05 million. Amid an outcry from palaeontologists, Mongolian president Elbegdorj Tsakhia intervened, alleging that the Tarbosaurus had been collected illegally, part of a booming trade in stolen and smuggled fossils. The case, still ongoing, is a window into a thriving industry which is little noticed by most people, but a constant menace to our understanding of past life.

The problem of irresponsible fossil collectors is nothing new. Some Victorian collectors, taking their cue from big game hunters, would routinely hack the skull off an impressive fossil, leaving the rest, now headless, in the rock. The glamour of big species, particularly carnivorous ones, is still a powerful driver in today’s fossil markets. A single tooth from a Tyrannosaurus rex found in Montana in 2011 sold for £36,000, and another T. rex, nicknamed ‘Sue’, became the world’s most expensive dinosaur fossil when it sold to the Field Museum in Chicago for US$8.4 million. But increasingly, private collectors


Mighty beast: Tarbosaurus bataar was up to 12 metres long


want more scientifically important fossils, which often requires illegal collecting. Famous invertebrate localities such as the Ediacara Hills in Australia, or the Burgess Shale in Canada, are regularly targeted too. Losing dinosaurs to private collectors is unfortunate, but the little-publicised loss of these more obscure fossils is an equal scientific tragedy.

The rocks of the White Sea-Arkhangelsk region of Russia hold some of the earliest traces of animal life on Earth, preserving an aquatic ecosystem over 550 million years old. Increasing scientific interest has come at a predictable price: there has also been an increase in poaching, evidenced by the tools, rubbish and disordered dig sites left behind. Since 2005, huge illegal excavations have been made in the area, removing hundreds of cubic metres of fossil-bearing rock. Worse, the nature of the fossil deposits means that widely varying communities of mysterious Ediacaran organisms are localised in very small areas, each with its own unique fossil fauna. A single poaching expedition, targeting one of these tiny communities, could remove dozens of species from the fossil record forever. If we’re lucky, they may be rediscovered once they go on sale – new species from the area have shown up at international fossil fairs – but they are likely to be just a taster of the ancient biodiversity lost on the illegal market every year.

Governments have forged a variety of laws to protect their most important fossils. Heavy fines or imprisonment are the usual penalties, but whatever the law says, all governments hit the same major problem: enforcement is almost impossible. Protecting fossil-rich areas often requires sending patrols over a huge, remote area. In the USA, the National Parks Service employs a network of monitoring staff to repeatedly survey important fossil sites for signs of poaching.
If signs are found, palaeontologists may carry out an ‘emergency excavation’, involving the rapid removal of all the fossils in the area. Emergency excavation


Hundreds of thousands of palaeo-tourists flock to Dinosaur Provincial Park in Canada each year

Amelia Penny is a 4th year undergraduate in the Earth Sciences Department

A sabre-toothed cat; one of the many fossilised animals found in the mudstone of the Badlands



safeguards some fossils, but it doesn’t catch poachers, and can only work after poaching has begun, so some of the fossils have already been lost. Could scientists do more to combat the problem?

The problem of conserving Earth’s palaeodiversity has obvious but under-recognised parallels with our efforts to conserve modern ecosystems. As with modern conservation efforts, better communication with communities living near important sites could work wonders. Canadian fossil hunter Michael Ryan, describing work in the Gobi desert, articulates the problem well: ‘They see us driving these big fancy trucks and taking the bones away. As rich Europeans and North Americans coming in there, it’s hard to say, “Thou shalt not do these things,” because that’s what it appears we’re doing.’ Besides, not all fossil poachers are out to make big profits. Whether palaeontologists like it or not, fossils represent a stunning economic resource for poorer people who happen to live in fossil-rich areas. When even small fragments of bone or teeth can fetch hundreds of dollars, the fossil record begins to look like a natural resource just like any other. Casual collecting on protected land can also damage the record, though collectors do not always realise it. In cases like this, a little more dialogue between palaeontologists and local communities could be all that’s needed to reduce losses.

When important fossils are discovered, the findings too often go into the scientific literature and are not shared with a wider audience. Well-publicised fossil localities can be an important draw into an area, and in some cases even generate a tourist industry of their own. North America is world-leading at this, drawing hundreds of thousands of palaeo-tourists every year to exceptional fossil localities such as the La Brea Tar Pits and Mammoth Hot Springs in the USA, and Dinosaur Provincial Park in Canada.
Yoho National Park, the home of the Burgess Shale, also cashes in on its palaeontological heritage, running popular guided hikes up to the important outcrops. This approach clearly won’t work everywhere, but with enough resources, good fossils can be exploited without harming scientific endeavour. There will always be a need for patrolling and straightforward law enforcement, but with a little imagination,

people can benefit from their fossil resources without destroying them. In the summer of 2010, I was looking for fossils in the Badlands National Park, South Dakota. For weeks, I and the field team I was working with had been finding wonderful fossils in the crumbling mudstone of the Badlands - sabre-toothed cats, rhinos, early dogs, horses, camels, turtles and alligators. Alongside the US National Parks Service, we had been prospecting in remote areas which had been closed to palaeontologists for decades. The rocks were prolific and the fossils often fascinating, too numerous to collect. As I worked, I began to construct a mental picture of these creatures, visualising how they might have moved, and the landscapes where they might have lived. The South Dakota of 35 million years ago began, very gradually, to take shape. A week or so later, things had changed. We had moved to a new locality in the same rocks, this one an easy stroll from the highway, a long blue ribbon through the barren landscape. In a day of searching under the huge, white sun, we found nothing. Not a splinter of bone, nor a solitary tooth, remained. This ancient and bizarre fauna had been entirely erased. In researching this article, I found fossils from those Badlands rocks on sale online for hundreds or thousands of dollars. We might not realise it immediately, but those figures have terrible implications for fossil conservation, as the market for illegal fossils wipes life’s history clean. New approaches could yet help us to save it.


Walter Corno

Lazy Universe


BlueSci explores the universal principle of energy minimisation across the sciences

One of the most central principles in all of modern physics is the Principle of Least Action. As the title suggests, the statement roughly translates to “energy is minimised” or, alternatively, “physics is lazy”: the basis of all physics lies in minimising the use of energy.

To take an example of how we use the principle, consider a soap bubble on a loop of wire. The shape of this bubble can be predicted using the principle of least action: it is the form with the lowest energy, and altering the shape of the bubble would require more energy. This is why bubble surfaces are always smooth, like spheres, and why you cannot create one with sharp edges, like a pyramid or a cube; doing so would violate the principle.

Furthermore, we can address a question that has puzzled mankind for millennia: why does light travel in straight lines? The principle of least action gives us the answer: a straight line is the fastest and easiest way to get from one place to another, and for light this is the path of least action.

But the principle goes much deeper. It can be used to calculate the equation of motion, which determines how objects move — how everything will proceed from start to end, accounting for objects colliding with each other and all the physical forces at work. Before this principle was established, the laws that govern an object’s evolution had to be generated by hand, relying primarily on observations and good guesses. One of the most famous examples of an equation of motion is Newton’s 2nd Law, “F=ma”. This says that when a force (F) is applied to an object with a certain mass (m), it will accelerate (a) in proportion to the size of the force. This is a mathematical expression of the obvious statement that things accelerate more if you push them harder, and that lighter things accelerate faster than heavier things.
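The idea that nature picks out the path of least action can be made concrete with a toy computation. The sketch below is purely illustrative (it is not from the article): it discretises the path of a free particle between two fixed endpoints, takes the action of each candidate path to be its summed kinetic energy, and minimises it by simple gradient descent. The path it settles on is uniform, straight-line motion.

```python
# Illustrative sketch of the principle of least action: a free particle
# must travel from x=0 at t=0 to x=1 at t=1. With no potential energy,
# the (discretised) action is just the kinetic energy summed along the
# path, and minimising it should recover straight-line motion.

N = 11                  # number of points along the discretised path
dt = 1.0 / (N - 1)      # time step between neighbouring points
m = 1.0                 # particle mass (arbitrary units)

def action(path):
    """Discretised action: sum of (1/2) m v^2 dt over each segment."""
    total = 0.0
    for i in range(len(path) - 1):
        v = (path[i + 1] - path[i]) / dt
        total += 0.5 * m * v * v * dt
    return total

# Start from a deliberately crooked path with fixed endpoints 0 and 1.
path = [0.0] + [0.1 * (i % 2) + i / (N - 1) for i in range(1, N - 1)] + [1.0]

# Gradient descent on the interior points only (endpoints stay fixed).
for _ in range(2000):
    for i in range(1, N - 1):
        # dS/dx_i for the discretised action above
        grad = m * (2 * path[i] - path[i - 1] - path[i + 1]) / dt
        path[i] -= 0.05 * grad

print([round(x, 3) for x in path])
```

With these particular numbers, each gradient step happens to replace a point by the average of its neighbours, so the crooked starting path relaxes to the straight line x(t) = t, exactly as the principle predicts.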



The shape of canyons is defined by the principle of least action


ESA/Hubble & NASA

The principle of least action can be used to explain the behaviour of everything, from atoms to galaxies

Equinox Graphics ©

The principle of least action means that you can derive this equation of motion, and hence say exactly what will happen, without any hocus-pocus guesswork. Furthermore, whilst F=ma only works on our human, everyday scale, the principle of least action is so broad that it covers all areas of physics — from the way planets and galaxies move around, to the subatomic collisions of electrons and quarks.

Action is similar to energy in many respects, but there is a subtle difference, which is part of the magic of the principle of least action. In physics, we define the total energy of an object to be the kinetic energy (due to the object’s motion; faster-moving objects have more kinetic energy) plus its potential energy (due to a force, for example gravity; heavier objects have a larger potential energy). In contrast, we define the action of an object to be the kinetic energy minus the potential energy. Notice the minus, which makes the action subtly different from the total energy. When we push a box along the floor, we can weigh it, and we can measure its position and speed, but we can’t directly measure its total energy. Instead we calculate the total energy from these quantities. Similarly,

we have to calculate the action, rather than observe it directly. This puts energy and action on an equal footing — the only difference is that we’re more used to the concept of energy, for sociological rather than scientific reasons. Having defined the action, we can now finally state what the principle of least action says: “objects move on paths that minimise the action”.

On the human, day-to-day scale, we can use the principle of least action to derive Newton’s 2nd law, and this in turn tells us everything we need to know. But physics has moved on since the days of Newton. We now know that on the largest scales of black holes, solar systems and galaxies, we need Einstein’s General Relativity rather than ordinary Newtonian gravity (which gives us apples falling from trees). Conversely, on the smallest scales, gravity has little or no effect and Quantum Mechanics comes into play; other forces must also be considered, such as the electromagnetic and nuclear forces that hold atoms together, which are important at this tiny scale. Although these are completely different areas of physics, operating on vastly different scales, remarkably we can use the same principle of least action to derive their vastly different equations of motion. This means we only have to make one simple assumption — the principle of least action — rather than assuming all the equations of motion and putting them in by hand. Indeed, the equations of motion that physicists of the past worked out by good guesswork turn out to be just specific instances of it. Philosophically, assuming only the principle of least action is much neater, and it makes physics much simpler.

Chemistry is, at its heart, a study of energy. Nature as we observe it arises from energy minimisation at an atomic and molecular level.
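For readers who want the mathematics, the step from the principle to Newton’s 2nd law can be sketched in symbols (the notation below is standard physics notation, not the article’s). Writing the action as kinetic minus potential energy integrated along the path, and demanding that the action be stationary, gives the Euler-Lagrange equation, which for a single particle is exactly F = ma:

```latex
% Action: kinetic minus potential energy, integrated along the path
S[x] = \int_{t_1}^{t_2} \Big( \underbrace{\tfrac{1}{2} m \dot{x}^2}_{\text{kinetic}}
     - \underbrace{V(x)}_{\text{potential}} \Big)\, \mathrm{d}t

% Requiring the action to be stationary (the principle of least action)
\delta S = 0 \quad\Longrightarrow\quad
\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot{x}}
  - \frac{\partial L}{\partial x} = 0

% For L = \tfrac{1}{2} m \dot{x}^2 - V(x), this reduces to Newton's 2nd law
m \ddot{x} = -\frac{\mathrm{d}V}{\mathrm{d}x} = F
```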
As such, in order to predict what happens in a chemical reaction, we simply need to consider the relative energies of all of the chemicals involved and those with the lowest total energy will be the products we obtain. The ability to understand these interactions lies behind a plethora of scientific advancements, such as the development of new drugs, designing new and more efficient ways
to produce energy, and the modelling of natural processes like the climate. What might seem like a very simple fact – that chemical entities, given enough time and energy, will always tend towards the state and structure with the lowest total energy – actually hides a wealth of detail and complexity that has enabled chemistry to advance so far in the last century. Crucially, time and time again we observe, in nature and in the laboratory, materials and molecules that don't seem to make sense because they don't occur in their lowest energy state. Trying to understand these many exceptions is what leads us to advance our knowledge of the universe. With the advent of more powerful computing, chemistry has begun employing new high-tech tools to help researchers more accurately predict the complex and detailed structures and properties of inorganic materials – broadly, those whose chemistry is not based on carbon-hydrogen bonds. The Materials Project (formerly the Materials Genome, before the name was requested by President Obama for a policy initiative) is an exciting new pursuit at the Massachusetts Institute of Technology. Prof. Gerbrand Ceder and his team are building a massive database that aims to catalogue every known inorganic material. Using computational approaches developed in the past decade, the optimal structure and energy of each material can be determined using a concept known
as density functional theory (DFT). This technique can be used to refine and improve predictions of how a material is structured, based upon the concept of energy minimisation. Given a rough structure approximated from experimental data, the computer can construct a much more detailed picture of how the chemical is arranged at an atomic level. This should, in theory, give the most accurate prediction for the actual structure of the substance being studied, which can then be stored and recorded for use in experimental models.

Once you know the energy of a material, you can predict a lot about how it will behave in various chemical situations. With a little more work, the Materials Project will be able to calculate various thermodynamic and electrical properties of a material, and add these to the database too. Our understanding of chemicals is limited only by our knowledge of their energy and structure, and while DFT is still not completely accurate, it is accurate enough to provide a stable theoretical grounding for new experimental investigations. So far, the Materials Project includes over 30,000 materials, complementing existing collections of chemical structures such as the Inorganic Crystal Structure Database, and more are added every day as the project gains new features and results. A key benefit is that the Materials Project database is open source, meaning anyone can access information on materials from the database (try it out at http:// Interested members of the public can make use of this fantastic new tool to learn more about the world around them, giving it the potential to join other successful popular science initiatives such as Foldit or Galaxy Zoo. Researchers, meanwhile, can be given access to the nuts and bolts of the database, allowing them to modify it and run their own sophisticated models to predict the results of experiments that may be industrially, commercially or medically significant. The database can also be used to find the most efficient way to produce new chemicals from easily available substances; one prominent application is the search for better lithium-ion batteries for more efficient and longer-lasting energy storage. You can even use the structures of known materials to estimate the properties of a material that hasn't been made yet.
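To make 'energy minimisation at an atomic level' concrete, here is a toy sketch in Python: it finds the lowest-energy separation of two atoms interacting through a Lennard-Jones pair potential, using plain gradient descent. This is an illustrative assumption on my part, not the Materials Project's actual code; DFT minimises a far richer energy functional, but the underlying idea of stepping downhill in energy until the forces vanish is the same.

```python
# Toy energy minimisation: two atoms interacting via a Lennard-Jones
# pair potential. Real DFT codes minimise a much richer functional,
# but the principle -- step downhill in energy until the gradient
# vanishes -- is the same.

def lj_energy(r, eps=1.0, sigma=1.0):
    """Lennard-Jones potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def lj_gradient(r, eps=1.0, sigma=1.0):
    """Analytic derivative dV/dr."""
    return 4.0 * eps * (-12.0 * sigma**12 / r**13 + 6.0 * sigma**6 / r**7)

def minimise(r0=1.5, lr=0.005, steps=5000, tol=1e-10):
    """Gradient descent on the pair separation r."""
    r = r0
    for _ in range(steps):
        g = lj_gradient(r)
        if abs(g) < tol:   # converged: the force is (almost) zero
            break
        r -= lr * g        # move downhill in energy
    return r

r_min = minimise()
# The analytic minimum sits at r = 2**(1/6) * sigma, about 1.1225
print(f"equilibrium separation ~ {r_min:.4f}")
```

Swapping in a different potential, or many atoms, changes nothing conceptually: the predicted structure is simply the geometry at which the total energy stops decreasing.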
Whilst it may seem the domain of the physical sciences, energy minimisation has been, and continues to be, fundamental to the evolution of almost all living things. In the process of natural selection, the mechanism of evolution first described by Charles Darwin, those that survive and successfully pass on their genes to the next generation are the 'fittest'. Energy is the currency in this competition for fitness and, used efficiently, will lead to the survival of a species. For most wild animals the supply of energy is limited and highly fought over, so many different methods of energy minimisation have evolved to suit different environments. Energy saving is practised by even the simplest life forms, such as bacteria, which have especially small genomes. The smaller the genome,



we can predict the product of any chemical reaction by considering the relative energies of all chemicals involved

the complex structures and properties of crystals are defined by the concept of energy minimisation


clownfish form a symbiosis with poisonous sea anemones. While the latter offer protection from predators, the fish remove parasites from the anemone




the seeds of the strangler fig start life as a parasite high up atop a host tree. They grow air roots downwards, completely enveloping and eventually strangling the host tree

the less energy and time are used in making copies of DNA and producing new bacteria, allowing them to multiply much more rapidly than larger organisms. Evolutionary studies have suggested that the first bacteria were highly independent organisms, some of which later evolved to adopt two key methods of energy conservation in ecology, symbiosis and parasitism, whereby they depend upon other living things. Both methods involve the direct dependence of one organism on another. In symbiosis, both partners gain something from the relationship and are generally made stronger; by contrast, a parasite gains from its host but gives it nothing in return, which often weakens, and sometimes even kills, the host. The genomes of parasitic and symbiotic bacteria are even smaller and more efficient than those of other bacteria. This is because such bacteria tend to acquire deletion mutations: they lose parts of their genome very easily, so the genomes of parasites and symbionts (organisms that live in a symbiotic relationship with another living thing) began to shrink. The bacteria don't need the genes they lose, because the products of these genes are provided by the host they depend on; this is what allows parasites and symbionts to exist at all. Why waste your own energy to duplicate a large genome and produce the molecules that you need to survive, if you can get them from your host? In fact, this is how mitochondria and chloroplasts, the energy-processing powerhouses of more complex organisms like plants and animals, are thought to have evolved: through bacteria living and growing in co-operation with each other, an idea known as the endosymbiotic theory. The endosymbiotic theory was first suggested and described by the Russian botanist Konstantin Mereschkowski in 1905, based on the morphological similarities between chloroplasts (found inside plant cells) and cyanobacteria (free-living blue-green algae), both of which make energy by photosynthesis.
However, the theory was not taken seriously until the 1960s, when it was discovered that mitochondria (thought to have evolved from proteobacteria) and
chloroplasts possessed their own DNA, separate from the genome of the rest of the cell. This very small amount of DNA is not enough for chloroplasts or mitochondria to be able to live alone. It is now thought that over millions of years of co-operative living, the majority of genes were transferred to the host cell, and the symbiotic relationship between bacterial cells gave rise to a new, more complex form of life: eukaryotic cells (the basic units that make up plants, animals and fungi). The eukaryotic cell produces and provides most of the gene products required by the mitochondria and chloroplasts. These ancient bacteria are no longer organisms in their own right, but rather internal cellular structures (organelles) that are functional parts of eukaryotic cells. This is the ultimate achievement when it comes to 'fitness': surviving and passing your genome onto the next generation. The cyanobacteria and proteobacteria succeeded through expert energy minimisation, and their genes are now copied and conserved in all complex living things. These higher organisms are also subject to the same concepts of energy reduction. In the animal kingdom, access to easily available energy, such as food, varies depending on the time of year, the weather, the environment and the number of individuals competing for that energy. Not all the animals in a population will have an equal share of resources, or an equal need for them, so they have to compete. Most of the energy they have is needed for maintenance, fighting off disease and basic survival. The rest is used for growth and reproduction, or stored as fat, which aids future survival. Some species have offloaded much of this expenditure onto others. For example, cuckoos have become masters of disguise, able to hide their eggs in the nests of other bird species. The cuckoo eggs avoid detection because they have evolved to mimic the colour and pattern of their favoured hosts' eggs.
Amazingly, if the host fails to detect and reject the cuckoo eggs, the cuckoo chick, once hatched, will push the other eggs over the edge of the nest; this ensures that the newborn cuckoo survives in preference to the offspring of the host bird. The bird that made the nest and laid the other eggs will happily feed and defend the cuckoo chick, despite the fact that the chick often differs widely in appearance from its adopted mother. In this ruthless, parasitic way cuckoo species reduce the energy cost of reproduction by tricking other birds into rearing their young, whilst the cuckoo parents use their energy on their own survival. In the ocean, there are several mutualistic symbiotic relationships that reduce energy expenditure in both species. The remora fish swim alongside sharks, eating parasites off the sharks' bodies and helping them
survive. The fish receive protection from predators and bits of food when the sharks feed. In this way, the sharks use less energy on fighting disease, and the fish save the energy they would otherwise have to use defending against predators and finding food. While such relationships are fruitful ways to save energy, the physiology of many animals has also evolved so they can conserve energy on their own. Keeping our bodies warm (thermoregulation) in the winter is energy-expensive. While we wear thick woolly jumpers and put the heating on, other animals have developed sophisticated ways to save energy on heating. Fat tissue is an ideal body insulator: with less water and blood than other tissues, it conducts heat less easily. Mammals such as whales, seals and polar bears have a thick layer of fat (known as blubber) under their skin to keep warm in the cold oceans around the North Pole and Antarctica. In much the same way, most land mammals have a coat of fur that traps air, another very effective insulator. Thermoregulation is not the only challenge posed by winter; it is also very difficult to find food. To make precious fat stores last the season, many animals go into hibernation, an inactive state with lower body temperature and reduced metabolism, in which no energy is wasted on moving around, hunting, eating or reproducing. Although mostly associated with mammals in the winter months, a small number of animals hibernate in the summer, a state called aestivation. Practised by molluscs, arthropods, reptiles, amphibians and even a couple of mammals, it allows them to conserve water and energy in the heat.



The law of conservation of energy in physics states that the total amount of energy in an isolated system remains constant over time: energy can change form and move within the system, but it can be neither created nor destroyed. The survival and fitness of a species depend on it making the best use of the limited energy available to it. Though this necessity may seem obvious, it is fascinating to observe the different mechanisms of energy conservation that different species have evolved, allowing them to exist today. We have seen the power of the principle of least action to reproduce the physics we know about, but physicists are now exploring how changes to these actions can lead to dramatic changes in the theory that follows. This is a way of predicting new and exciting physics that we have never even dreamt of! One thing almost all physicists agree on is that the principle of least action should appear in a fundamental way in the Universal Theory of Everything. Once we uncover the theory of everything, we may indeed become masters of the universe, and the key to this will undoubtedly be related to the minimisation of energy.

remora fish swimming alongside a shark while eating parasites off its body. In return, the much smaller fish receive protection and bits of food when the shark feeds

a Shining Bronze Cuckoo being fed by its unsuspecting foster parent, the much smaller Brown Thornbill

Hinal Tanna is a 2nd year PhD student in the Department of Oncology
Matt Dunstan is a 2nd year PhD student in the Department of Chemistry


Zac Kenton is a 4th year Undergraduate studying Mathematics



Babies with Three Parents?


Nicola Love looks into the science and ethics of Mitochondrial Replacement

Mitochondria are organelles important in the process of respiration

Inside our cells we have the result of an unexpected event that allowed us to evolve from single cells to the complex multicellular organisms we are today. This tiny, life-changing structure, or organelle, is the mitochondrion, often described as the power plant of the cell because it is responsible for providing the energy a cell needs to carry out a number of important functions. Scientists believe that mitochondria evolved from a free-living bacterium that became engulfed and sustained by a larger single-celled organism, in a process known as endosymbiosis. In return for food and shelter, the smaller organism provided the larger cell with energy, allowing the cell to become more complex over many generations until it became the cells we have today. Over time mitochondria lost many of their genes and became dependent on the host for their replication. Now just an organelle, the mitochondrion retains a set of 37 genes that help it to operate. Mutations in mitochondrial genes, or in nuclear genes that encode mitochondrial proteins, can lead to disease and even death. Unlike most genetic diseases, where errors in the nuclear genes can be inherited from either parent, defects in mitochondrial DNA are inherited strictly from the mother. Mitochondria are present in the unfertilized egg, and as the fertilized embryo divides it replicates them, so a mutation in the egg's mitochondria leaves almost every cell in the body with defective mitochondria. Mitochondrial diseases – of which around 50 are known – affect one in every 6,500 people. Although many sufferers have mild or no symptoms, defective mitochondria can cause severe health problems affecting a number of organs, particularly the organs


Egg cells contain the mitochondria that will, in the event of fertilisation, be passed onto the zygote


that require the most energy, like the brain, heart and muscles. Despite ongoing research, no treatment is currently available; even in the best-case scenarios, clinicians can only manage the symptoms that arise. Scientists in both the United States and the United Kingdom have been working on new techniques based on In Vitro Fertilization (IVF) technology to prevent mitochondrial diseases that are caused by faulty mitochondrial DNA (mtDNA). The first technique developed, pro-nuclear transfer (PNT), is performed immediately after fertilization. Scientists remove the pronucleus, which is derived from the nuclei of the egg and sperm, from an embryo with unhealthy mitochondria and transfer it into a donated embryo with healthy mitochondria that has had its own pronucleus removed. This new embryo contains nuclear DNA from the intended father and mother, and healthy mtDNA from a third donor. A second technique, Maternal Spindle Transfer (MST), involves the transfer of DNA from the mother's egg to an enucleated egg with healthy mitochondria before fertilization. MST differs principally from PNT in that eggs, rather than embryos, are destroyed. Mitochondrial replacement techniques hold great promise for women who carry genes that cause mitochondrial disease but who want to have children, as they eliminate the risk of the disease being passed down to future generations. Although these techniques have had success in the lab, with healthy 'three-genome' primates being born, their approval for use in the clinic is still a long way off. Mitochondrial replacement would be a medical first: a modified embryo used to create human life. This raises a number of ethical questions that need to be addressed before the technology can be routinely used.
The main issue arising from mitochondrial replacement is that the resulting child will have DNA from the mother and father, which will give it its characteristics, but it will also have a tiny amount of mitochondrial DNA from a third-party female donor. Even though this foreign mtDNA would comprise only 0.2 per cent of the child's DNA, the use of extra genetic material has led to sensationalist headlines about "three-parent babies". This term is misleading because it suggests the resulting child would exhibit external characteristics of the donor, which is not really the case.

the UK is the closest to being able to offer MST and/or PNT to patients

Even so, the child would have mtDNA from a third person, and opponents of MST and PNT worry that this may affect the child's sense of identity and alter society's perception of parenthood. Crucially, this would be the first genetic modification of a human to be incorporated into the germ line, the DNA that passes on to future generations, something that is currently banned in the UK. There is also the concern that changing the heritable characteristics of a person may lead to a "slippery slope", where it may come to be seen as morally justifiable to modify embryos in more trivial ways, for example by altering the genes that determine characteristics such as height and hair colour. Questions have also arisen about the safety of the technique, not only for the baby who develops from the egg but also for the child's descendants, as opponents of the procedure fear that any biological faults introduced by the technology may not become apparent for a number of generations. A number of groups, including the UK's leading centre for bioethics, the Nuffield Council on Bioethics, believe the treatment to be ethical if it is found to be safe and effective, something the scientists carrying out the research at Newcastle University hope to show in the next five years. There will inevitably be risks if and when the procedure is tried in humans, but these must be weighed against the risks associated with mitochondrial disease. As with any new technology dealing with the creation of life and genetic modification, this is an emotive issue, and the regulatory body, the Human Fertilisation and Embryology Authority (HFEA), has

held a public consultation, the results of which will be available in early 2013, to gauge public attitudes prior to a parliamentary vote on changing the law to allow three-genome embryos to be created. The debate on whether MST and/or PNT should be allowed is centred in the United Kingdom, as no other country is as close to being able to offer this technology to patients. Other countries conducting research in mitochondrial replacement, such as the US, have strict laws and funding restrictions that govern the use of human embryos. Although many feel that creating genetically modified children is a line we should not cross, for many parents affected by mitochondrial disorders this may be their only chance to have a child free of a potentially fatal genetic disease. Whatever the politicians decide in the parliamentary vote, the debate on mitochondrial replacement will continue with advances in reproductive technology.

Nicola Love is a PhD student in the Department of Physiology, Development and Neuroscience


some fear that biological faults arising from mitochondrial replacement may affect not only the child but also future generations.



Anything but Elementary

Matthew Dunstan looks back at the history of the naming of elements

Poisonous antimony was known as the ‘monk-killer’

Gold is thought to be the first element to be named


Take, for example, trying to identify chemicals through a confusing mix of common names and so-called 'scientific' names. When you're trying to make a good cake, would you prefer some baking soda, also known as sodium bicarbonate, or some bicarbonate of soda, also known as sodium hydrogen carbonate? In fact, all four names refer to one and the same compound, an ambiguity that does nothing to help the success of your cooking. This is not to say that naming is irrelevant to chemists. In fact there is an entire organisation, the International Union of Pure and Applied Chemistry (IUPAC), which is in charge of developing standard practices within all aspects of the field, from chemical names to the formats of chemical publications. Despite this very broad remit, most members of the public (and even chemists) are only likely to hear about the IUPAC for one reason: the naming of new elements. The names of many elements were known and accepted long before the IUPAC was officially founded in 1919. The etymology of these names varies wildly, owing to the correspondingly varied circumstances in which the elements were first isolated or recognised by scientists. The oldest name for an element is believed to be gold, thought to derive from the Proto-Indo-European word ghel, meaning yellow or bright; the language has been conservatively estimated to have been spoken five thousand years ago. Certainly, not all elements were as easy to identify as gold. As scientific techniques and theories developed, so too did the understanding of what elements actually were. This led to a clearer need to give each a unique name, and as we have slowly filled in more and more of the gaps in the periodic table left by Dmitri Mendeleev, so have the varied stories behind each of these names grown. While the connection between lead's chemical symbol, Pb, and plumbing (both being derived from the original Latin 'plumbum') can be easily deduced, many entries hold far more interesting stories.
Take antimony for example, with symbol Sb. It was known to the Ancient Egyptians as an important ingredient in kohl, an ancient version of eye cosmetics. They were even able to distinguish between the native metallic form and the sulphide, and called the former msdmt (in hieroglyphs) which became sdm and then stimmi after it was adopted into Greek. This in turn evolved into the Latin stibium, which was used as the basis for its symbol after its adoption by the Swedish chemist Jons Jakob Berzelius in his writings in 1813 and 1814.

The origins of the actual name antimony are a little more muddled. Even though the name can be traced back to Medieval Latin, where it was known as antimonium, the origins of this word are unclear, with two main competing theories offering radically different explanations. The first derives from the name's similarity to the French word antimoine, literally meaning 'monk-killer'. Considering that many early alchemists were monks, and antimony is poisonous, this sounds like a likely source. But an equally convincing explanation comes from the Greek construction 'antimonos', meaning 'never alone', referring to the fact that natural antimony deposits never occur by themselves, but are instead always found mixed in with other ores. Neither theory has been conclusively accepted, and the ambiguity remains in the periodic table to this day. More recent controversies have arisen from the convention that the discoverers of an element have the right to name it. This system becomes problematic when the discovery of an element is contested, either because another scientist believes they discovered it first, or because the original observation is believed to be incorrect. In the 18th and 19th centuries the latter was more common: modern characterisation techniques were yet to become available, and errors were often made in determining whether new compounds were in fact new elements or simply new forms of existing elements.



chemistry and language don’t always mix well.

This is best exemplified in the controversy surrounding the discovery of niobium and tantalum, the former of which was first reported to the Royal Society in 1801 by Charles Hatchett. He suggested the name 'columbium', as he had discovered the element in some samples given to the British Museum from Massachusetts and wanted to acknowledge this source in his choice of name. The very next
year, in 1802, Swedish chemist Anders Gustaf Ekeberg reported that he had also discovered a new element with very similar properties to Hatchett's columbium, which he named 'tantalum' in reference to the mythical King Tantalus, whose torture – standing in a pool of water which would constantly recede when he stooped to drink – was deemed comparable to the great difficulty Ekeberg found in trying to dissolve tantalum in water. It was subsequently thought that both claims referred to the same element, and it wasn't until 1848 that the matter was settled. The German mineralogist and chemist Heinrich Rose tested a sample of Hatchett's columbite and found that it contained two distinct elements: the first was tantalum, and the second he called niobium (after Niobe, the daughter of Tantalus). Despite Hatchett's prior claim, the IUPAC finally adopted the names proposed by Rose in 1950, ending over a century of controversy (although niobium is still sometimes called columbium in the American mineral community). In the past twenty years, the discovery of new elements has shifted to the heavy transuranic elements, which never occur naturally and are only glimpsed for minute fractions of a second in the laboratory. Due to the great cost and complexity of these experiments, only a few research groups in the world have been able to work in this area, and the delay from initial discovery, to replication, to verification can last decades. Given the time taken, it is not surprising that the discovery of an element is sometimes claimed by two different laboratories simultaneously. Such a competition occurred in the latter part of the 20th century between groups at the Joint Institute for Nuclear Research in Dubna, Russia and at the University of California, Berkeley, which disputed the discovery, and hence the naming, of elements 104 to 106.

Each group preferred different names. In particular, the Russians wanted element 104 to be called kurchatovium after Soviet nuclear physicist Igor Kurchatov, while the Americans preferred rutherfordium after Ernest Rutherford. The Americans also wanted to call element 106 seaborgium after Glenn Seaborg, who had pioneered the discovery of many earlier transuranic elements, but this contradicted the policy of the IUPAC not to name elements after people who were still alive. In 1994, the IUPAC declared that the labs should share the credit for the discoveries, and hence attempted to come up with a compromise over the names, proposing dubnium for element 104 to please the Russians in return for allowing the Americans to name element 106 rutherfordium (at the same time, they proposed the name joliotium for element 105 in honour of French physicist Frederic Joliot-Curie). After further objections from the Americans, who wanted to be able to name element 106 whatever they wanted (as the Russians had not proposed a name), in 1997 the IUPAC finally accepted the names rutherfordium, dubnium and seaborgium for the three elements respectively. Our knowledge of the contents of the periodic table is certainly greater than that of Mendeleev and his contemporaries when they were first trying to systematically identify and organise the elements. However, with the confirmation of flerovium and livermorium as the names of elements 114 and 116 respectively by the IUPAC only in the past year, we should always remember that while the elements themselves may be distinct and immutable, the language we use to describe them is far from constant. Matthew Dunstan is a 2nd year PhD student in the Department of Chemistry


the first widely accepted periodic table was published by Dmitri Mendeleev in 1869



HMS Challenger

Amelia Penny explores the expedition of HMS Challenger, which marked the beginning of oceanography

An illustration from the scientific results of the voyage of HMS Challenger

In the mid-nineteenth century, the deep ocean was a great blue blank on the surface of the globe. Naval expeditions such as the voyage of HMS Beagle in 1831-6 had brought back important observations of the oceans, but were concerned mainly with the surface waters, being as much military exercises as scientific ones. Azoic Theory, the idea that there was no life in the ocean deeper than 300 fathoms (about 550 metres), had also taken hold, leading many to assume that the deep ocean was a submarine wasteland not worth exploring. The expedition which finally blew open the deep oceans for research was that of HMS Challenger, planned by the influential scientists Charles Wyville Thomson and William Carpenter and led by George Strong Nares, who was later to become a distinguished Arctic explorer. Built in Woolwich in 1858, the year that Darwin presented his theory of evolution, Challenger was to become the first oceanographic ship that ever sailed. Originally a warship, Challenger was painstakingly converted into a research vessel, most of her cannon removed to make space for laboratories and storerooms for specimens and scientific equipment. She set sail from Portsmouth in 1872, and travelled almost 69,000 miles around the world over the next four years, amassing thousands of samples of rocks, sediment and deep-sea organisms. These would be the starting point for the study of this greatest and most neglected of Earth's landscapes.

Beryx decadactylus, one of the deep-sea fish collected by HMS Challenger

Challenger's first discoveries came almost immediately. The first sediment samples dredged up were, as expected, pale grey Globigerina ooze. Composed almost entirely of the calcite shells of foraminifera, a group of complex unicellular organisms, this ooze is now known to be an important carbon store in the oceans. As the ship moved south, however, they discovered something entirely new – a red clay composed of radiolarians,
which have silica skeletons. This apparently esoteric discovery has major implications for the way the oceans respond to changes in chemistry and climate. John Murray, one of the ship's naturalists, had shown that Globigerinae are ubiquitous in surface waters all over the world's oceans, so there should be a constant rain of their shells into the deep oceans. Yet over vast areas, their shells are absent from the sediment on the sea floor. So where have all these shells gone? This observation was the first evidence of the now well-recognised Carbonate Compensation Depth, the level at which calcium carbonate shells sinking into the deep ocean dissolve at the same rate as they rain in. We now know that this depth has fluctuated with time, changing the amount of carbon which ocean sediments store, and the pH of the oceans. Ocean acidification and the storage of CO2 in ocean sediments have become important issues in modern oceanographic studies, and are important parameters in our models of how the oceans might respond to future environmental changes. Since the Challenger expedition, huge ocean drilling programmes have recovered many cores of deep-sea sediment, and studies of oxygen isotopes in the calcite shells have shown us how ocean temperatures have changed over geological time – an excellent source of empirical information on the history of Earth's climate. Challenger's findings also conclusively disproved the Azoic Theory, which had stifled research into deep-sea biology for so long. Dredging and sounding expeditions by Wyville Thomson on HMS Lightning and HMS Porcupine had already found some


HMS Challenger on her 1872–1876 expedition around the world

evidence that there was life in the oceans below 300 fathoms, the depth limit set by the Azoic Theory, but Challenger smashed those depth records, recovering annelid worm tubes from 3,000 fathoms. These early hints at a deep-ocean ecosystem have blossomed in recent years into a host of new discoveries by deep-sea rovers such as the Alvin submersible. The extent of deep-water ecosystems, and their need for protection as the environment above them changes, are still poorly understood, and dives regularly turn up entirely new species even today. The discovery of life around hydrothermal vents in the ocean floor in the 1970s revolutionised our ideas of what life needs to survive. Guided by our sunny surface ecosystems, which rely on energy from photosynthesis, scientists were confronted with ecosystems independent of sunlight, relying on chemical energy instead. Consequently, we have become far more open to the possibility of life on other planets, or in areas previously considered sterile.

Another of Challenger’s objectives was to measure the depth of the ocean as it journeyed around the world. Its results revealed some tantalising clues about the underlying landscape of the ocean floor. While crossing the Atlantic, the scientists on board began to notice a pattern in the water temperatures they were recording, with a ‘cold stream’ to the west and a ‘warm stream’ to the east, part of the North Atlantic ocean circulation which includes the Gulf Stream. George Nares, puzzled by how water bodies of two different temperatures were kept separate over such great distances, speculated that they could be kept apart by a great line of shoals stretching from north to south across the Atlantic. This was the first inkling we had of the existence of the Mid-Atlantic Ridge, where the North American and Eurasian plates move slowly apart as new oceanic crust forms between them. In 1973, almost one hundred years after Challenger crossed the ridge, the French deep-sea submersible Archimède made the first descent to it and discovered the narrow zone where new crust forms. The theory of plate tectonics was then only newly accepted, still finding its feet; studies of the deep-ocean features first surveyed by Challenger helped to establish its place in the realms of scientific fact.

On their return to England in May 1876, the Challenger researchers and crew were a depleted and exhausted party. Under the stress of cramped and unpleasant conditions on board, and strenuous work which few understood, around a quarter of her original crew of 269 had deserted or died. Challenger was eventually broken up in 1921, but by this time the legacy of the expedition was established. The huge quantity of sediment samples, bottled specimens and oceanographic data collected on the expedition took over 20 years to analyse, and the resulting report ran to 50 volumes. The deepest point in the ocean, about 10.9 km below the surface at the southern end of the Mariana Trench, is named the Challenger Deep after the expedition. It remains a great frontier in ocean research to this day, in the spirit of its ocean-going namesake.

Amelia Penny is a 4th year undergraduate in the Department of Earth Sciences

Craig Venter vs The World

PLoS Biology

Jordan Ramsey looks at the life of one of modern science’s most divisive figures.

John Craig Venter is one of the most controversial characters in modern biology


Badass scientists around the world look to one man for inspiration. In fact, the scientific community has seen few recent controversies to rival that created by one John Craig Venter. It’s a common story in science, really: sick of the limitations of publicly-funded research, Venter left a position at the National Institutes of Health (NIH) to find his own capital and gain the freedom to pursue things his way. But when Venter became president of Celera Genomics back in 1998, his aim was not just to go about quietly conducting his own avenue of research. Instead, he challenged the Human Genome Project to a race to sequence the human genome first.

The Human Genome Project aimed to identify our genes and determine the sequence of the base pairs that form our DNA, which contains all the genetic instructions to make us who we are. Often labelled an egomaniac, Venter infuriated many by suggesting that the $3-billion Human Genome Project, backed by the US Department of Energy and the NIH, turn its attention to sequencing the simpler mouse genome instead. He was confident that his “whole-genome shotgun sequencing” approach would reduce costs and accelerate the pace of discovery, while scientists involved in the Human Genome Project believed it would introduce more inaccuracies than their “clone-by-clone” method. Unlike the clone-by-clone method, whole-genome shotgunning breaks DNA up at random and relies on computer software to work out where each piece should go. Without a map for each bundle of DNA before sequencing it, the worry went, the software would get confused and be unable to establish where the DNA fragments belonged. James Watson, co-discoverer of the structure of DNA and for some time head of the NIH branch of the Human Genome Project, was of the opinion that Venter’s shotgun sequencing “could be run by monkeys”, and didn’t consider it science.
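The idea behind shotgun assembly can be seen in miniature: chop a sequence into short overlapping reads, then let software stitch them back together by finding the overlaps. The greedy merge below is a deliberately simplified sketch of that principle, not Celera’s actual assembler (real assemblers must cope with sequencing errors, repeats and billions of reads):

```python
def overlap(a, b):
    """Length of the longest suffix of read a that is a prefix of read b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads):
    """Toy shotgun assembly: repeatedly merge the pair with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, 0, 1)
        for i in range(len(reads)):
            for j in range(len(reads)):
                if i != j:
                    n = overlap(reads[i], reads[j])
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:  # no overlaps left; cannot merge further
            break
        merged = reads[i] + reads[j][n:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads

# Three overlapping reads reassemble into the original sequence:
print(greedy_assemble(["ATGGCG", "GCGTGC", "TGCA"]))  # ['ATGGCGTGCA']
```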
Another concern was what Venter would do with a monopoly over the important information from the sequenced genome once it was finished. It was certainly a ballsy move on Venter’s part to take on such an endeavour and to make such outrageous claims in the process. Where exactly did he get the guts to be so brash and so bold?

Venter was born in 1946 in Salt Lake City, Utah, and grew up in a working-class suburb near San Francisco, California. He was reportedly a poor student, preferring to spend his time on the beach. Hoping to avoid being drafted into the Vietnam War, Venter enlisted in the Navy, but ended up in Vietnam anyway after being court-martialled for disobeying a superior officer, who also happened to be his girlfriend at the time. The posting was good for neither him nor, likely, his relationship: Venter’s six-month stint working in a hospital in Da Nang meant witnessing thousands of soldiers dying, a rocket blasting through his sleeping quarters, and his own attempted suicide by swimming out to sea. Returning to the States, Venter married and attended community college, before studying biochemistry and then physiology and pharmacology, graduating with a PhD from the University of California, San Diego in 1975. In the Mercedes-driving, bell-bottom-wearing years that followed as a faculty member at the State University of New York, he divorced his wife and remarried, this time to one of his students.

Venter joined the NIH in 1984, where he presented his idea of using whole-genome shotgunning to accelerate the sequencing of the human genome. The NIH rejected it, so he set about finding venture capital to try it out himself. The bacterium Haemophilus influenzae was his first project and a promising start, becoming the first organism to have its genome mapped, back in 1995.

So began the drama for which Venter is now renowned. He was made president of the new company Celera and boldly pledged to finish mapping the human genome in three years, four years ahead of the projected finish date of the Human Genome Project. Watson’s response, as former head of the NIH branch of the Human Genome Project, was to proclaim “He’s Hitler”. And indeed, Venter was an easy enemy: egomaniacal, seemingly capitalistic, and apparently unconcerned about the quality of his data (though shotgun sequencing has since become standard practice).
A close, bitter race ensued and the Human Genome Project stepped up its efforts, resulting in a tie announced in 2000. In the end Venter’s pledge was broken, Celera’s stock plummeted, and he was driven out of the company. Disheartened, he told a journalist, “My greatest success is that I managed to get hated by both worlds.”

For some, this defeat would be enough. He had money from Celera, he had made his mark on genomics, and he’d had more than fifteen minutes of fame (or infamy). But after licking his wounds, Venter re-emerged, and with his considerable earnings founded the J. Craig Venter Institute in Maryland. The controversial figure continued his work in genomics with a number of projects. In his Global Ocean Sampling Expedition, Venter climbed aboard his transformed luxury yacht, the Sorcerer II, and proceeded to circumnavigate the globe, picking up microbial species along the way to be shipped back and sequenced. In 2007 Venter published the results of the exploration in PLoS Biology, highlighting the tremendous genetic diversity of the marine microbial community. Even this expedition raised doubts, with some questioning the merit of blindly sequencing the genomes of unidentified microbial species without a specific scientific question in mind. Venter hopes that his trip will eventually be looked upon as a Darwin-style adventure (a voyage to the Galapagos Islands led Darwin to form his revolutionary theory of evolution), potentially by publishing his sequences in a freely accessible public database.

In other synthetic biology projects, Venter is dedicated to developing new sources of fuel. He first seeks to find a ‘minimal genome’ that will provide a base for inserting engineered genes to perform a desired task – in this case, making cellulosic ethanol to be used as fuel. Of course, this engineered genome is no use without a cell in which to perform its functions. In a Science publication in May 2010, Venter’s team reported being the first to create synthetic life, when they inserted man-made DNA into cells that were capable of self-replicating. After ten years of work by twenty scientists, at an expenditure of approximately US$40 million, Venter could check this off his to-do list. The DNA was complete with “watermarks” written into it, including an encrypted alphabet and punctuation, along with an email address to contact for anyone capable of cracking the code. The creation of synthetic life and its ramifications are, in true Venter fashion, controversial to say the least.

The life and achievements of J. Craig Venter, still ongoing, are indeed remarkable. From challenging the US government in a race to map the human genome to creating synthetic life, Venter is a man who dares do the unthinkable. His ego may have created a few enemies for him along the way, and he seems to take considerable pleasure in creating controversy, but he has somehow made science a more human endeavour. Scientists see internal disagreements all the time: in the literature, in lab meetings, and in the comments we get back from the reviewers of our manuscripts. The public often has a very different view of scientists, as a cooperating entity whose sole directive is to make discoveries to better the human condition. But with Venter, they see the claws come out and the drama unfold, with all the ugly human ego and selfishness science can entail in reality. Maybe he is just what we need to shake up the public’s perception of scientists, and perhaps even of ourselves.

The Human Genome Project aims to identify and map the many genes which form the human genome

Jordan Ramsey is a 2nd year PhD student in the Department of Chemical Engineering and Biotechnology

References

Features
Living in Fear
Senses in Symphony: Cytowic, R. (2002). Synesthesia: A Union of the Senses. MIT Press.
The Journey of the Bicycle
One to Another: Song, K., et al. (2012). Heart repair by reprogramming non-myocytes with cardiac transcription factors. Nature 485(7400), 599–604.
Digging for Dinosaurs

Regulars
Babies with Three Parents
Anything but Elementary
HMS Challenger: Report on the scientific results of the voyage of H.M.S. Challenger during the years 1873–76, under the command of Captain George S. Nares, R.N., F.R.S. and Captain Frank Turle Thomson, R.N. (1887).
Craig Venter vs The World
Art, Maths and the Universe: Penrose, R. (2006). The Road to Reality: A Complete Guide to the Laws of the Universe. Vintage.


Art, Maths and the Universe

Zac Kenton discusses the mathematical basis of the art of M.C. Escher


M.C. Escher is famous for his mathematical artwork, using tessellating shapes, warped perspectives and varying geometries

From the puzzling works of M. C. Escher to the grand architecture that rises above our cities, the deep relation between geometry and art underlies much of what we find beautiful. The artistic construction of geometric ideas has inspired breakthroughs in mathematics, cosmology and physics; conversely, ideas from geometry are applied in modern art to construct images that we find naturally beautiful. Many mathematicians and physicists believe that beauty is an indicator of truth when it comes to the equations that govern our world. A defining example of this interaction can be found in the work of M.C. Escher, a Dutch artist of the early twentieth century. Escher combined his appreciation of geometry with his creativity to construct works of art that fool our mind’s sense of space, creating pieces of great intrigue and beauty.

M. C. Escher was born in Leeuwarden, in the Netherlands, in 1898. He was a sickly child who failed to excel at school, despite obvious artistic talents. In 1919 he went to art college; he was briefly a student of architecture, but quickly changed to the decorative arts, having realised that life as an architect was not for him. Escher produced his first notable work in 1937, inspired

by the mathematical, geometric shapes he had seen while travelling through Spain and Italy. He kept records of mathematics in his notebooks, which he hoped would help him to forge a union between mathematics and his art. Geometry was a great inspiration to Escher, and many of his artworks are based on intricate geometric shapes. But do we really understand what we are seeing when we are struck by the beauty of Escher’s artwork?

Geometry is the study of the relationships between lines, points and surfaces: the mathematics of shapes. Whether we know it or not, we are all familiar with what’s called Euclidean geometry, with its straight lines, angles and circles, studied in maths classes around the world. Euclid stated five rules from which all other results can be derived. One of these is the Parallel Postulate, which states that, given any line, we will always be able to draw exactly one parallel line through a point not on the given line. But we can relax the Parallel Postulate, constructing new geometries which differ from Euclidean geometry, each with its own intrinsic beauty. If there are no parallel lines, we can build what is known as elliptic geometry. Alternatively, if more than one parallel line can be drawn, we can construct a hyperbolic geometry.

Elliptic geometry is best understood by imagining the surface of a sphere. As our geometry has changed, our definition of a straight line must change with it. Imagine the path of an aeroplane around the globe: in elliptic geometry, this is what straight lines are like. Straight lines are represented as great circles wrapped around the sphere. Aircraft follow these circles when flying across the globe, as they are the shortest distance between two points


Escher used more than one kind of geometry in his art: 1. Euclidean, 2. Elliptic, 3. Hyperbolic
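The ‘shortest distance on a sphere’ idea is easy to check numerically. Below is a minimal sketch of the great-circle distance via the haversine formula; the function name and the sample coordinates are illustrative, not from the article:

```python
import math

def great_circle_distance(lat1, lon1, lat2, lon2, radius=6371.0):
    """Length of the 'straight line' of spherical geometry (a great-circle arc)
    between two points, via the haversine formula.
    Coordinates in degrees; radius defaults to Earth's mean radius in km."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius * math.asin(math.sqrt(a))

# A quarter of the way around the equator is a quarter of the circumference:
d = great_circle_distance(0.0, 0.0, 0.0, 90.0)  # ~10,007 km
```

This is the distance an aircraft would ideally fly; the straight chord through the Earth, or a straight line drawn on a flat map, is always shorter or longer respectively.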



on the sphere. Many of Escher’s works help to visually illustrate the differences between elliptic geometry and Euclidean geometry. These differences were ones he made use of artistically, to create new and innovative patterns such as the bulging section of his work Balcony.

In contrast, hyperbolic geometry cannot be thought of in terms of aeroplanes and the Earth’s surface. Instead, we can represent the entire hyperbolic space as the interior of a circle in the Euclidean plane – the interior of what you think of as a normal 2D circle. The boundary of the circle represents infinity. Straight lines are represented by either an arc of a circle which meets the boundary at right angles, or a straight line through the centre. After discussions with the geometer H.S.M. Coxeter, Escher produced the beautiful Circle Limit I. Escher highlights some of the straight lines in this geometry, which coincide with the spines of the fish. A square in this geometry has angles which sum to less than those of a square in Euclidean geometry. All of the fish are the same size in the hyperbolic geometry, yet in this representation in a Euclidean plane they have been distorted and appear smaller.

Not only is hyperbolic geometry beautiful, it also has physical applications. When formulating his laws of relativity, Einstein’s great insight was that nothing can travel as fast as the speed of light. Suppose we conduct our own thought experiment, as Einstein did: what happens if you’re flying at half the speed of light, relative to someone standing still, and you throw a shotput out in front of you at half the speed of light again? Surely the observer would see the shotput travelling at the speed of light – but Einstein’s insight was that nothing could travel that fast, so either we’re wrong, or Einstein was wrong. Unsurprisingly, it’s not Einstein, although the explanation as to why we are wrong requires help from hyperbolic geometry. Inside the hyperbolic geometry of Escher’s Circle Limit I, each fish is the same size, so adding two fishes end to end gives two fish lengths. But from outside the hyperbolic geometry, the Euclidean length is less than two fish lengths. If the addition of speeds functioned according to hyperbolic geometry, say by letting one hyperbolic fish length represent half the speed of light, then from our exterior Euclidean perspective the total speed of the shotput is still less than the speed of light. So Einstein was right.

Hyperbolic geometry has a huge range of uses other than inspiring art and explaining relativity. But there’s another question we would like to ask: what type of geometry does our universe have, on a cosmological scale? Euclidean geometry might be enough to tell you the distance to the shops, or even the length of a train journey. But if we go up a scale, to the geometry of planet Earth, we see we need spherical geometry. If we go up another scale, to the universe as a whole, we lose our ability to make assumptions altogether. We can’t just assume the universe is totally flat and Euclidean; perhaps it exhibits spherical geometry, or the beautiful hyperbolic geometry above. This is one of the unresolved problems of cosmology, and physicists are conducting research to determine the curvature of the universe as a whole. Perhaps surprisingly, so far the signs show the universe to be more or less Euclidean, but with advancements in technology allowing us to see ever further into space, who knows what shape the universe will eventually reveal itself to be.

Zac Kenton is a 4th year undergraduate studying Mathematics

All M.C. Escher works © 2010 The M.C. Escher Company – the Netherlands. All rights reserved. Used by permission.

Escher’s Balcony illustrates the beauty of elliptic geometry

Circle Limit I uses hyperbolic geometry. Could this be the shape of the universe?
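The velocity-addition argument above can be checked numerically. In special relativity, speeds combine as w = (u + v)/(1 + uv/c²), and this really is hyperbolic addition: each speed corresponds to a hyperbolic angle (a ‘rapidity’), and rapidities add linearly, just like the equal fish lengths inside Circle Limit I. A minimal sketch (function names are illustrative):

```python
import math

def add_velocities(u, v):
    """Relativistic velocity addition, with u and v as fractions of c."""
    return (u + v) / (1 + u * v)

def add_via_rapidity(u, v):
    """The same sum done 'hyperbolically': rapidities (artanh of speed)
    add linearly, then tanh converts back to a speed."""
    return math.tanh(math.atanh(u) + math.atanh(v))

# Half the speed of light plus half the speed of light:
w = add_velocities(0.5, 0.5)  # 0.8 -- still below the speed of light
```

However large u and v are (below 1), the result never reaches 1, which is exactly the point of the fish argument: lengths that add linearly inside the hyperbolic disc never reach its boundary.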


Weird and Wonderful A selection of the wackiest research in the world of science

Bottoms Up

Rarely does progression in scientific research require a level of self-sacrifice that borders on the utterly disgusting. However, in 1983 Soviet virologist Dr Mikhail Balayan used his own body as an incubator for hepatitis E. Previously defined as epidemic non-A, non-B hepatitis by Dr Robert Purcell, this virus was noted for its vicious outbreaks, especially amongst pregnant women, a characteristic not seen in other forms of hepatitis. It also proved notoriously difficult to examine due to the relative scarcity of patient samples to test on, a challenge which prevented further identification.

Whilst Balayan was studying an outbreak in Soviet-controlled central Asia, he wanted to take some samples back to the laboratory for further study, but lacked the refrigeration necessary to transport them. Undeterred, he made a smoothie from yoghurt and stool samples and drank it, travelled back to his lab, and waited until he became ill. Using his own stool samples he was able to show that epidemic non-A, non-B hepatitis was similar to hepatitis A and yet a distinct disease. The virus was subsequently sequenced in 1990 and given the new name hepatitis E. China has recently announced the development of a new hepatitis E vaccine, meaning that eradication of this disease, which claims 70,000 lives a year, can finally begin to be realised. Balayan’s contribution was vital to furthering knowledge of the virus, but it was probably some time before he could look at a smoothie again! LB

Sweet Victory






Does eating chocolate increase your chances of


winning a Nobel Prize? This was the question asked by Franz Messerli at St Luke’s-Roosevelt Hospital in New York. Dietary compounds called flavanols, which are found in cocoa, green tea and red wine, are known to improve cognitive performance. He investigated whether chocolate consumption had an impact on a population’s cognitive function, as measured by the number of Nobel Laureates from particular countries. Of the 22 countries studied, Switzerland consumes the most chocolate (11.9 kg per person each year) and also

has the highest number of Nobel Laureates per capita. A similar positive correlation was observed for the other countries, and Messerli estimated that it would take an extra 0.4 kg of chocolate per person per year to increase the number of Nobel Laureates in a given country by one. Of course, other factors were not taken into account, such as differences in the level of science funding and the social and cultural factors within each country. In addition, the specific chocolate intake of past and present Nobel winners is not known. Nonetheless, if you dream of winning the Nobel Prize, perhaps you should start eating more chocolate! LP
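The ‘0.4 kg per extra laureate’ figure is essentially the slope of a least-squares line fitted through the country data. A minimal sketch of such a fit, using made-up illustrative numbers rather than Messerli’s actual data:

```python
def least_squares_slope(xs, ys):
    """Slope of the ordinary least-squares regression line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical illustrative data: chocolate consumption (kg/person/year)
# vs Nobel Laureates per 10 million people -- NOT Messerli's figures.
chocolate = [2.0, 4.5, 6.3, 8.8, 11.9]
laureates = [5.0, 11.0, 16.0, 22.0, 30.0]
slope = least_squares_slope(chocolate, laureates)  # laureates gained per extra kg
```

As the article notes, a slope like this only describes a correlation; it says nothing about whether chocolate actually causes Nobel Prizes.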

The Fern Monster

A lot of biology is about giving names to things,

so much so that it has almost become a competition between some researchers to discover new species, proteins and genes and give them the wackiest names they can imagine. A trend that makes the news every so often is to name a new species in honour of a celebrity. David Attenborough is a particularly popular target, with six species named in his honour, ranging from extinct plesiosaurs to spiders and pitcher plants. Other celebrity-named species include the Schwarzenegger beetle, the Obama lichen and the Pratchett turtle.

In a recent paper, a team from Duke University revealed their decision to name not one species but a whole group of 19 Central and South American fern species in homage to Lady Gaga. The new Gaga genus, a group of closely related species, is made up of 17 known species that have been renamed following detailed scientific investigation, plus two newly reported species: Gaga germanotta, named after the music star’s birth name, and Gaga monstraparva, meaning ‘little monsters’. The team presents many reasons for the association, including a highly characteristic GAGA sequence at a key position in the genetic code common to all 19 species, as well as the resemblance between the juvenile form of these plants and Lady Gaga’s iconic dress from the 2010 Grammy Awards. However, the final decision was made to show support for Gaga’s empowering attitude and her support for outcasts and minorities everywhere. JL

Write for the Cambridge University science magazine


Feature articles for the magazine can be on any scientific topic and should be aimed at a wide audience. The deadline for the next issue is 3rd February 2013. Email complete articles or ideas to

We need writers of news, feature articles and reviews for our website. For more information, visit

For their generous contributions, BlueSci would like to thank the School of Technology. If your institution would like to support BlueSci, please contact

Upload your CV at Naturejobs and let’s take your career sky high Ready to take your job search to the next level? Upload your CV and cover letter at the new improved and take full advantage of the world’s largest science jobs board. Your saved information will be immediately available, so you can quickly and easily apply for one of the 10,000+ science vacancies found at

To find out more, visit:

Search jobs on the go.

Download the new Naturejobs mobile app

Follow us on:


BlueSci Issue 26 - Lent 2013