Cambridge University science magazine
Michaelmas 2009 Issue 16
Regulars Periodic Table, NASA, Meet Adam, Physics Busking, African Fruit Bats
The Manhattan Project The story behind the bombs and an interview with Lorna Arnold.
Get your article published in BlueSci. Email: email@example.com. Submitting articles: the deadline for article submissions is 30 October 2009, and articles should be ~1200 words. Send completed articles or get in touch with potential ideas. More details can be found on our website.
[Collage of front covers and headlines from previous issues of BlueSci, reproduced to mark the magazine's fifth anniversary.]
Contact firstname.lastname@example.org to get involved with editing, graphics or production www.bluesci.co.uk
The Extinction of Physics?
On the Cover
News
Pavilion
Evolution Inside Us
Frederik Floether examines whether a Theory of Everything could lead to the demise of a discipline
Robert Williams looks at the incredible feat of B lymphocytes
Seeing the Invisible
Bárbara Ferreira tackles the misconception surrounding black holes and describes how scientists can ‘see’ them
The Protecting Virus
Revolution in Substance
This issue’s FOCUS examines the Manhattan Project and the development of Britain’s nuclear programme
Sahil Kirpekar and Ali Ansary introduce their innovative strategy for tackling drug delivery for recovering tuberculosis patients
A Day in the Life of...
Away From the Bench
Samuel Wright recounts his weekend as a magician at the Green Man Festival
Arts and Reviews
Rose Spear meets Adam, the first of a new generation of robotic scientists
Book Reviews
Dr Derisive
Nicholas Gibbons explains how metamaterials can reveal what we can’t see and make what we can see invisible
Chih-Chin Chen discusses the latest hope to combat the flu virus
The Atom Bomb
Chris Adriaanse and Sonia Aguera talk to Jim Bagian about becoming an astronaut
Alison Peel braves the wilds of Africa to look at the spread of viruses in bats
Amy Chesterton looks at the mathematics behind a perfect tune
Lindsey Nield tracks the evolution of the periodic table
Issue 16: Michaelmas 2009
Editor: Katherine Thomas Managing Editor: Amy Chesterton Business Manager: Michael Derringer
AS THE days get shorter and the nights colder, we will all be wrapping up warm and trying our hardest not to get sick. As well as the continually evolving seasonal bugs, this year we also have swine flu to contend with. But fear not: this issue of BlueSci explores the daily evolution going on within our immune system to halt the invading pathogens and keep us healthy. We also look at the development of a new type of flu vaccine which would protect us not only from the current circulating flu strains, but also against future mutations of the virus which may come our way. This issue marks the retirement of Dr Hypothesis, who has been answering your scientific
Sub-Editors: Matthew Levin, Thomas Kluyver, Cat Davies, Jon Heras Second Editors: Daniel Shanahan, Shaenandhoa García-Rangel, Raliza Stoyanova, Harriet Dickinson, Sonia Aguera, Rose Spear News Editor: Swetha Suresh News Team: Rachel Swain, Lindsey Nield, Thomas Kluyver Book Reviews: Swetha Suresh, Djuke Veldhuis, Amy Zhang Focus Team: Cheng Chong, Tristan Farrow, Jennifer Moore Dr Derisive: Mike Kenning
Pictures Editor: Ian Fyfe Cartoonist: Adam Hahn Cover Image: Pola Goldberg-Oppenheimer Distribution Manager: Samuel Wright Publicity: Matt Child
Chris Adriaanse President
EACH TERM we bring you science
Varsity Publications Ltd Old Examination Hall Free School Lane Cambridge, CB2 3RF Tel: 01223 337575 www.varsity.co.uk email@example.com BlueSci is published by Varsity Publications Ltd and printed by HSW Print. All copyright is the exclusive property of Varsity Publications Ltd. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, without the prior permission of the publisher.
queries for the last five years. We are delighted to welcome his replacement Dr I.M. Derisive who is here to answer all your science related questions, whatever they may be. Whether you are interested in novel materials that could be used to make the invisibility cloak a reality, have always wondered what it would be like to travel in space or want to find out about robots who can conduct your experiments, analyse the data and even provide the conclusions, BlueSci has it for you. So take five minutes out, sit back and have a read. KT
stories written by students ranging from local research to ground breaking discoveries. This term marks the 5th anniversary of BlueSci magazine. To mark the occasion and gear up for another five successful years, we’ve given the magazine a fresh look. We hope you like it. BlueSci started with the aim of providing a forum for scientists to share their work and the training to make it possible. This still remains the focus of the society through our magazine, website and events. There are ample opportunities for new and current students, undergrads and postgrads: writing and editing for the website or
magazine, filming and audio for videos and podcasts, as well as graphics and design. Join our projects or start your own. We offer advice and training to get you started, and throughout the term we organise a series of talks and workshops on various aspects of science communication. So get involved. Submit an article about your research. Join our news team for some regular writing experience. Or make a film about the science that fascinates you. Who knows, maybe you’ll end up joining some of our alumni at the BBC, Nature or New Scientist. We look forward to hearing from you. CA Email firstname.lastname@example.org for more information
Unstable Research
Katherine Thomas on the physics behind the cover image
COATINGS, SOLAR CELLS, SENSORS
and medical implants all exploit properties of thin film polymers including their viscous and elastic nature, morphology and optical behaviour. Despite their widespread use, questions still remain about the stability of polymers, particularly in thin films, which often exhibit properties differing substantially from those of the bulk. Pola Goldberg-Oppenheimer is trying to answer some of these questions during her PhD in the department of physics. Thin films can crack, develop holes or even destabilise to form droplets. Most of the time smooth even surfaces are required, so those developments are unwelcome. However, in some cases, by exploiting the destabilisation process, precise control can be gained over the instabilities that form and novel structures produced. Pola has been looking at what happens when electrostatic forces – the forces between particles resulting from their electric charge – are applied to thin films. These forces are able to destabilise smooth films. The resulting instabilities, known as electrohydrodynamic instabilities, have a characteristic spacing that depends on film thickness, air gap, polymer properties and the applied electric field. By adjusting these parameters, the dimensions of the structure can be predetermined. The experiment is conducted by sandwiching the polymer film in a capacitor-like device between two electrodes with a small air-gap. The sample is heated close to its transition temperature where it changes from having a ‘glassy’ or solid-like nature to a liquid-like one. A small voltage is applied across the two electrodes resulting in a high electric field and a buildup in the electrostatic pressure at the air surface. This destabilises the film and generates an undulating surface. Over time, these undulations grow until they span the air gap and reach the upper electrode. The final morphology consists of a hexagonal array of
Optical images of instabilities: completely formed regular columns (left), Saffman-Taylor instabilities (middle) and irregular columns (right).
Katherine Thomas is a PhD student in the Department of Physics
pillars with well-defined spacing. Unfortunately, things don’t always go according to plan and the image on the front cover shows what can happen when the capacitor device is not quite correctly assembled. Here, finger-like instabilities have formed due to the upper electrode and polymer film coming into contact and trapping air. These are known as Saffman-Taylor instabilities and occur when one liquid is replaced with another of lower viscosity. Saffman and Taylor originally observed these instabilities when they were looking at water-oil mixtures between two parallel plates. During the 1930s ‘water flooding’ was assumed to be an efficient method of extracting oil from wells. However, Saffman and Taylor observed destabilisation of the normally well-defined water-oil interface. In closely spaced parallel plates, instabilities such as fingers, fractal trees and dendrite or branched patterns were observed. For the oil companies, this meant that after a certain time only water would be recovered from the well. Pola has also been looking at electrohydrodynamic instabilities that form when the upper electrode is replaced by one which is no longer flat and smooth, but contains some sort of patterning. The instabilities that form are drawn towards the protrusions, resulting in the formation of a positive replica of the imposed structure. These methods can produce and replicate patterns on sub-micron, possibly sub-tenth of a micron, length scales and are a promising low-cost alternative to conventional optical-patterning techniques. Pola is particularly interested in using electrohydrodynamic instabilities to pattern novel systems and materials, in an attempt to make the pattern formation process faster and the resulting structures smaller, thus making this technique more practical, reliable and robust.
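The dimensions of these patterns are set largely by the electric field inside the film, which depends on the applied voltage, the film thickness, the air gap and the polymer's dielectric constant. As a rough illustration of how those parameters trade off (not the analysis used in this research), the film and air gap can be treated as two capacitors in series; every numerical value in the sketch below is invented for illustration.

```python
# Illustrative only: field inside a dielectric film under an air gap,
# treating film and gap as two capacitors in series (all values made up).

def field_in_film(voltage, film_nm, gap_nm, eps_polymer):
    """Electric field (V/m) inside the polymer film.

    Continuity of the displacement field across the interface gives
    E_polymer = U / (eps_p * d_air + d_film) for a film of thickness
    d_film under an air gap d_air, with voltage U across the plates.
    """
    d_film = film_nm * 1e-9
    d_air = gap_nm * 1e-9
    return voltage / (eps_polymer * d_air + d_film)

# Hypothetical numbers: a 100 nm polystyrene-like film (dielectric
# constant ~2.5), a 100 nm air gap and a few tens of volts.
for volts in (20, 50):
    e_p = field_in_film(volts, film_nm=100, gap_nm=100, eps_polymer=2.5)
    print(f"{volts:>3} V, 100 nm gap -> field in film ~ {e_p:.1e} V/m")

# Halving the air gap concentrates the field, one of the 'knobs' the
# article says can be used to tune the dimensions of the pattern.
e_p = field_in_film(50, film_nm=100, gap_nm=50, eps_polymer=2.5)
print(f" 50 V,  50 nm gap -> field in film ~ {e_p:.1e} V/m")
```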
News
The latest Cambridge news and research. Check out www.bluesci.co.uk for weekly science news updates during term.
Cambridge-designed solar car unveiled
Unveiled by Jenson Button at the Goodwood Festival of Speed in July, Endeavour
is one of the projects marking the 800th anniversary of the University of Cambridge. The car’s power comes entirely from solar energy captured by a six metre squared covering of high-efficiency silicon cells. Underneath this skin is an ultra-efficient electric vehicle weighing just 170 kilograms. Able to cruise at 60 miles per hour using the same power as a hair dryer, up to fifty times less than a normal petrol-fuelled vehicle, designers say it could provide a model for other forms of green transportation. Computer simulations allowed the car’s aerodynamics, rolling resistance, weight and electrical efficiency to be optimised for minimum energy requirements. The car is fitted with a control system that provides battery management and an electric braking system which regenerates energy. In addition, the four student drivers who will pilot the car during the race only need to steer, as an advanced cruise control system automatically adjusts speed according to road conditions and weather forecasts. Armed with these innovations, Endeavour is being touted as Britain’s brightest hope in October’s exhibition of ecologically friendly vehicles. ln
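As a quick plausibility check on the 'power of a hair dryer' claim, the sketch below compares the energy needed to cruise at 60 miles per hour on roughly 1.5 kilowatts with the energy in the petrol a typical car burns over the same distance. The hair dryer power, petrol energy density and fuel-consumption figures are common textbook values assumed for illustration, not numbers from the article.

```python
# Rough plausibility check of the 'power of a hair dryer' claim.
# Assumed figures (not from the article): a hair dryer draws ~1.5 kW,
# petrol holds ~34 MJ per litre, and a typical petrol car uses ~7 L/100 km.

HAIR_DRYER_W = 1500.0
SPEED_KMH = 96.6             # 60 miles per hour
PETROL_MJ_PER_LITRE = 34.0
PETROL_CAR_L_PER_100KM = 7.0

# Energy the solar car needs per 100 km at cruising speed.
hours_per_100km = 100.0 / SPEED_KMH
solar_mj_per_100km = HAIR_DRYER_W * hours_per_100km * 3600 / 1e6

# Energy in the fuel a typical petrol car burns over 100 km.
petrol_mj_per_100km = PETROL_CAR_L_PER_100KM * PETROL_MJ_PER_LITRE

print(f"solar car:  ~{solar_mj_per_100km:.1f} MJ per 100 km")
print(f"petrol car: ~{petrol_mj_per_100km:.0f} MJ per 100 km")
print(f"ratio:      ~{petrol_mj_per_100km / solar_mj_per_100km:.0f}x")
# With these assumptions the ratio lands around 40x, broadly consistent
# with the 'up to fifty times less' figure quoted in the story.
```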
Oh you fat rat!
Eating a high fat diet may do more than just make you pile on pounds. Scientists led by Dr Andrew Murray have found that, compared to rats on a low fat diet, those fed on high fat food ran 35% less distance on a treadmill, had less efficient hearts and poor memory. What’s more, they observed this difference after just nine days. In the high fat diet regime, 55% of the daily calorie intake came from fat – similar to the Atkins diet, which is high in fat and low in carbohydrates – whilst the low fat diet would be similar to us eating nothing but muesli. The team believe that some of the differences in performance may be due to increased levels of an uncoupling protein in heart and muscle cells, which makes conversion of food to energy less efficient. Some athletes choose a high fat diet, thinking it gives them more energy; however, this study shows that, even in the short term, a low fat diet is more beneficial for high performance. There are also implications for diabetics, where high levels of fat accumulate in the blood, and for people on diets similar to Atkins. rs
Chlamydia tests in a jiffy
Chlamydia is currently detected by sending samples off to a lab and waiting days for the result. A new method,
developed by the Diagnostic Developments Unit at the University of Cambridge, gives accurate results in just an hour, with no special equipment. Chlamydia is a sexually transmitted bacterial infection, which often lingers symptomless, in both men and women, causing them to spread the disease unawares. In extreme cases it can also cause infertility. Rapid screening methods for women have already been developed, but as lead researcher Dr Helen Lee put it, “that only tackles half the problem.” A key part of the test is ‘FirstBurst’, a device which collects only the first 5 ml of urine. This sample carries large numbers of Chlamydia, making them easier to detect. Even so, the technique only identifies about 80% of infections, so more sensitive laboratory tests will still be useful. While the ‘silent epidemic’ of Chlamydia remains a large problem in developed countries, in developing countries where people live many days from a hospital, a quick result is essential. tk Michaelmas 2009
THE GOLDEN RATIO
The golden ratio is an irrational number that is said to reflect an inherent aesthetic preference. It is often claimed to appear in paintings by Leonardo da Vinci, the violins of Stradivarius, the Pantheon, the Great Pyramids of Giza, Stonehenge, your body and all of nature.
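For reference, the ratio itself has a simple closed form; the definition below is standard mathematics rather than anything specific to the artwork.

```latex
\varphi \;=\; \frac{1+\sqrt{5}}{2} \;\approx\; 1.618\,,
\qquad \text{where } \frac{a+b}{a} \;=\; \frac{a}{b} \;=\; \varphi
\ \Longleftrightarrow\ \varphi^{2} = \varphi + 1 .
```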
Anne E Thomas www.ann-thomas.com
IAN FYFE
The Extinction of Physics?
Frederik Floether examines whether a Theory of Everything could lead to the demise of a discipline MANY PHYSICISTS DREAM of completing a Theory of Everything: a fundamental concept, based on which all physical phenomena in the universe could be explained. Ironically, such a theory could threaten the existence of physicists themselves. How close are we to developing this theory and will it indeed make physicists redundant? In order to understand why physicists believe a Theory of Everything is a real possibility, it is imperative to look at the history of the subject. Put simply, the study of physics has consisted of describing the four fundamental forces – electromagnetism, gravity, the strong force and the weak interaction – and linking seemingly disparate phenomena by unifying different forces. Work by Faraday, Maxwell and Einstein demonstrated that electricity and magnetism are two different manifestations of a single electromagnetic force. Similarly, Weinberg, Glashow and Salam showed that both the electromagnetic and weak force could be unified in a single electroweak interaction. More recently, researchers have tried to demonstrate that, at high energies, there is a unification of the electroweak and strong force as part of a so-called ‘Grand Unified Theory’. Ultimately, theorists such
as Leonard Susskind, one of the fathers of string theory, aim to include gravity, the one force which has proven particularly resistant to unification, in a Theory of Everything. There are two different paths for the future of physics and both need to be considered to evaluate whether physicists might soon become an endangered species. On one hand, we could assume that ‘the Theory of Everything’ will be found based on the history of physics. One of the candidates is the aforementioned string theory. In brief, string theory argues that all particles are created as a result of the vibrations of extremely tiny strings. Although perhaps the most promising of any Theory of Everything candidate, controversy rages over whether the theory makes testable predictions. If string theory, or any of its successors, proved to be entirely successful, could one then dispense with all physicists and use their generous research funding for more worthwhile pursuits? For many reasons, the answer is an emphatic no. First, only experienced physicists will be able to apply such a fundamental theory in real-world calculations. Quantum mechanics, for instance, yields few analytic solutions. The celebrated Schrödinger Equation,
For weekly science stories, videos, podcasts and magazine back issues check out the BlueSci website
www.bluesci.co.uk
describing the wave-like behaviour of one or more particles, can only be solved analytically in a few idealised situations. The most fundamental theory in physics will presumably consist of equations that are even more difficult to solve. A Theory of Everything would also open up questions of enquiry that we cannot yet fathom and would undoubtedly shed new light on previously ‘solved’ problems. For example, when the American physicist Julian Schwinger developed a mathematically successful theory of the motion of charges (i.e. quantum electrodynamics), his colleague Richard Feynman was able to illuminate the problem from a more visual perspective by inventing the now-ubiquitous Feynman diagrams. Furthermore, although they may be reluctant to admit it, many physicists have a strong philosophical bent. For instance, despite the fact that quantum mechanics was developed during the early 20th century, interpretations of how it shapes our understanding of ‘reality’ continue to be proposed to this day. A Theory of Everything is likely to be at least as challenging to comprehend. Another scenario is that a Theory of Everything does not exist. This is not implausible. Apart from past experience and perhaps an intuitive preference for fundamental simplicity, no significant evidence points in the direction of a Theory of Everything. One alternative, corroborated to some degree by the continual discovery of smaller and smaller particles, over the last 100 years, is that the discipline will unfold much like an onion. This would mean that modern, ultra-sensitive experiments would show discrepancies between current theories and the way the universe actually works. New theories would have to be developed to account for the deviations; alternatively, empirical verification could come after Michaelmas 2009
the proposal of such theories. This process would continue ad infinitum. Mao Zedong was a strong proponent of the view that finding smaller and smaller particles would be a never-ending process. Indeed, the idea that physics will continue to unfold indefinitely is consistent with Thomas Kuhn’s ideas about the nature of scientific progress. In his seminal contribution The Structure of Scientific Revolutions, Kuhn argues that science does not progress in a linear manner but rather by paradigm shifts, effectively excluding the possibility of an ultimate Theory of Everything. Kuhn’s ideas have been borne out by the history of science. At the dawn of the 20th century, for example, physics was considered to be complete: Maxwell’s equations elegantly accounted for electromagnetism, Newtonian mechanics described the general motion of all bodies and Boltzmann statistics effectively handled multi-particle systems. However, the advent of both Einstein’s Theory of Relativity and quantum mechanics made it clear that profound physical concepts had so far been neglected. If Kuhn’s thesis is correct, such paradigm shifts will never cease. Interestingly, even if future developments continue to bring physics ever closer to ‘completion’, the subject may never be fully consistent. Ultimately, physicists have disagreed widely on whether physics could be unified in a single theory. While two giants of 20th century physics, Murray Gell-Mann and Albert Einstein, played prominent roles in the development of a Theory of Everything, another giant, Richard Feynman, argued that such theories are not falsifiable. Whatever the final truth may be, physicists are probably not out of a job.
Frederik Floether is a Natural Sciences Tripos Part II student in the Department of Physics
Evolution Inside Us
Robert Williams looks at the incredible feat of B Lymphocytes
The structure of an antibody showing the location of the variable region coded for by the VDJ units (right)
Antibodies are an astonishing product of the immune system. These
Y-shaped proteins, which are made by a subset of white blood cells, are central to the destruction and removal of infectious diseases. The function of an antibody is simple: to recognise and stick to a specific molecular ‘label’ – called an antigen – on the surface of an unwelcome visitor in the body. These pathogens, which may be bacteria, viruses or parasites, are unlike human cells and each pathogen will have its own unique antigen. Once antibodies are stuck to the pathogen, it can usually be destroyed or neutralised by other components of the immune system. Does this mean that our genes carry the code for every antibody we may ever need, to deal with any of the countless pathogens that we may encounter during our lives? The answer is no, because the immune system has developed a far better way to generate the antibodies
it needs: evolution on a molecular scale. This evolution, as we might expect, occurs through genetic mutation and promotes survival of the fittest – but the process takes weeks rather than centuries. The result is the expression of strongly binding, highly specific antibodies against any pathogen that has been detected. The first ingenious part of this antibody evolution comes from the requirement for diversity. For the most strongly-binding antibodies to be preferentially selected, there has to be a massive range of antibodies to choose from, each with a slightly different antigen shape preference. B lymphocytes (B cells), the cells that make antibodies, are indeed created in a way that means each expresses one of an enormous repertoire of antibodies. This is achieved by the unique design of the DNA sequences that code for each different antibody protein. The DNA sequence for every antibody is identical except for the small part that determines the antigenbinding region found at the tips of each Y-shaped protein molecule. This section, called the ‘VDJ’ unit, is formed by selecting three units from three different regions along the gene (called V, D and J). In humans, there are around 65 V units, 27 D units and 6 J units and any combination is possible – a sort of genetic pick ‘n’ mix. But that’s not all; the complete antigenbinding site is constructed from two separate protein chains whose antigen-binding regions are coded for by different VDJ units – in other words, a combination of two already random pick ‘n’ mix selections! The number of possible sequence combinations is enormous. It is estimated that B cells express as many as 100 billion different antibodies, with each cell rearranging its DNA to randomly encode one unique antibody with its own unique binding site. Following an infection, a selection process takes Michaelmas 2009
place. The B cell initially expresses its antibodies on its surface where they act as receptors. This allows the cell to first find out whether its antibody is of any use, by seeing if it will recognise any of the pathogenic antigens. This initial step takes place in the spleen and also in the lymph nodes. Antigens from invading pathogens are carried by cells, conveniently named antigen-presenting cells (APCs), from infection sites to these locations. APCs display antigens to the antibody receptors on vast numbers of B cells, in the hope that a small number have antibodies with a good fit. Those B cells fortunate enough to meet this requirement receive an activating signal from another type of lymphocyte, called a T cell. These signals promote the survival, growth and division of the B cells to create more copies of only those that express the correct antibody in a process called clonal expansion. The army of activated, antigen-specific B cell clones forms a cluster called a germinal centre, where they use a further trick to generate even better fitting antibodies. Somatic hypermutation occurs in germinal centre B cells that have already proven their worth by activation upon binding to an antigen. Within these cells, the activation initiates a process in which the DNA is broken in two, at the V regions described above. The join is then repaired, but in an error-prone way, which leads to point mutations in the DNA sequence. These tiny sequence changes alter the shape of the antigen-binding region in the encoded antibody – some changes improve binding, while others make the antibody worse than it was before. The thought of mutation occurring in our bodies probably brings about images of cancer or deformities – far from desirable consequences. Yet evolution has encouraged high mutation rates in B cells to provoke a rapid natural selection process.
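To get a feel for how much diversity the genetic pick 'n' mix described above can generate, the sketch below multiplies out the heavy-chain segment counts quoted in the article and pairs them with a light chain. The light-chain counts are approximate textbook values rather than figures from the article, and junctional imprecision is ignored, so the true repertoire is far larger.

```python
# Rough combinatorial arithmetic for antibody diversity (illustration only).
# Heavy-chain segment counts are those quoted in the article; the kappa
# light-chain counts below are approximate textbook values, not from the
# article, and the lambda light-chain locus is ignored for simplicity.

V_HEAVY, D_HEAVY, J_HEAVY = 65, 27, 6
heavy = V_HEAVY * D_HEAVY * J_HEAVY   # 10,530 heavy-chain VDJ combinations

V_KAPPA, J_KAPPA = 40, 5              # light chains have no D segments
light = V_KAPPA * J_KAPPA             # 200 light-chain VJ combinations

paired = heavy * light                # each B cell pairs one of each chain

print(f"heavy-chain combinations: {heavy:,}")
print(f"light-chain combinations: {light:,}")
print(f"paired combinations:      {paired:,}")   # ~2.1 million
# Imprecise joining at the segment boundaries (junctional diversity) multiplies
# this by several further orders of magnitude, which is how the repertoire
# reaches the ~100 billion distinct antibodies mentioned in the article.
```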
The process of selection to produce highly specific antibodies against the antigen (above)
Robert Williams is a PhD student in the Babraham Institute
B cells fight amongst themselves in a competition for survival signals, with the lottery of somatic hypermutation adding even more variation to their antibody repertoires. The few whose antibodies have been improved by this process are preferentially selected. Only the fittest cells survive, and in just a couple of weeks after an infection, this induced natural selection process results in long-living B cells which secrete their specific antibodies into our blood, sometimes for the rest of our lives. It is this antibody generation that provides us with life-long protection against potentially life-threatening pathogens and is the process that is stimulated by vaccinations. It is amazing to think that the process of evolution through natural selection not only underlies the adaptation of species to their environments, but is also the mechanism used by our immune system to keep us alive.
Seeing the Invisible
IAN FYFE
Bárbara Ferreira tackles the misconception surrounding black holes and describes how scientists can ‘see’ them
This artist’s concept depicts a supermassive black hole exploding at the centre of a galaxy. (below)
In the recent Star Trek movie, a flash-forward shows an older and wiser Spock embarking on a mission to save planet Romulus from a deadly supernova explosion. His solution is the mysterious ‘red matter’, a substance capable of creating a black hole that would absorb the impact of the exploding star. It is clear from this inaccurate depiction in J.J. Abrams’ blockbuster that, while the nature of stars and planets is widely known, audiences are less familiar with black holes and the material that surrounds them, which allows us to detect these otherwise invisible objects. When a massive star has burnt all the light elements in its core, it reaches the end of its life cycle. The end is not peaceful. The collapse of the nucleus generates a shock wave that causes the external layers of the star to burst into a violent supernova explosion. The centre collapses into an extremely dense core, called a neutron star.
However, if the star is heavy enough, the core will keep on shrinking until it becomes a black hole. Observational astronomers usually detect and study objects using the light emitted or reflected from their surface. But in the case of black holes, light can not escape from them. So how do we know that they exist? In the same way that one can perceive a person completely covered in bed because of how their body shapes the sheets, a black hole can be detected by studying the matter surrounding it. After a supernova explosion, remaining gas and dust swirls around the black hole. The gas and dust are close enough to feel the gravitational attraction but far away enough to resist being completely sucked in. This matter eventually settles down in a disc-like shape that orbits the central object, creating an ‘accretion disc’. These messy structures are crucial for scientists, as they shed light on the otherwise invisible dark bodies. Unlike the bodies they surround, accretion discs are neither black nor nearly invisible. Light is reflected and emitted from them, providing essential and palpable information about the vicinity of black holes. As the matter in the disc gets closer and closer to the central object, slowly being transported inwards, it heats up and releases energy, in the form of X-rays. Astronomers can detect this highly energetic radiation, and provided they have an idea of how heavy the object is, they know they are in presence of an extremely compact body; just like one can see the undulations in bed sheets. But your eyes can mislead you into believing that a person is underneath the sheets when it is really Michaelmas 2009
The distortion of space-time by a neutron star (left) and black hole (right)
a pile of pillows. Neutron stars are like the pile of pillows. With a huge mass and small volume, these compact objects have a gravitational pull almost as strong as that of a black hole and affect their surroundings in a similar way to their dark siblings. So how do we tell the difference? Einstein’s theory of general relativity predicts the existence of black holes and brings about an assortment of associated concepts which can help distinguish between neutron stars and black holes. An example is the notion of an event horizon, a region inside which nothing, not even light, can escape. Further away from the hole lies another important boundary, the marginally stable orbit, inside which matter can not maintain stable orbits (unlike, for example, the planets of the solar system, which have stable orbits). Instead, it rapidly spirals around the central object, being pulled in until it eventually reaches the event horizon and disappears into the black hole. Therefore, an accretion disc around a black hole does not reach the compact object but has an ‘inner edge’ as the marginally stable orbit. Neutron stars have a slightly weaker gravitational pull and, as a consequence, their accretion discs can extend to the surface of the star and interact with it. Since the result of such interplay can be detected with modern telescopes, astronomers are able to determine whether the object under investigation is a neutron star or a black hole; they can distinguish between a pile of pillows and a body underneath the bed sheets. But this is not where the story ends. In the same way that one can tell if a person completely covered by sheets is fat or thin, tall or short, one can use accretion discs or orbiting stars to study the properties of black holes. While numerous quantities (mass, radius, luminosity, to name a few) Michaelmas 2009
are required to characterise a star, black holes can be fully described by only two parameters: mass and spin. Black holes with different masses have distinct effects on bodies that orbit them, even if at large distances. Indeed, the difference is so evident that a closer look is not essential; accretion discs are not necessary to determine how heavy the black hole is. Simply by measuring the orbital period and velocity of a circulating star, astronomers can find a good approximation for the mass of the black hole. Spin is much more complicated to measure than mass because its effects are subtle and only noticeable extremely close to the black hole. In fact, fast and slow spinning black holes both influence an orbiting star in exactly the same way. The difference can only be noticed in the accretion disc, whose inner edge is closer to the hole if that hole is spinning faster. Therefore, astronomers can use the diameter of the marginally stable orbit, to estimate how fast the black hole rotates; the smaller the diameter, the larger the spin. Although often overlooked alongside the mysterious objects they orbit, black hole accretion discs are crucial to the labelling and study of their central bodies. They are the structures that allow scientists to design accurate models of the vicinity of compact objects. Bárbara Ferreira is a PhD student in the Department of Applied Mathematics and Theoretical Physics Seeing the Invisible 11
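The mass estimate described above follows from Newtonian gravity once the orbit of a companion star is known: for a circular orbit, M ≈ v³P / 2πG. The sketch below applies that formula and also evaluates the event horizon and the marginally stable orbit for a non-spinning black hole (6GM/c²); the orbital numbers fed in are invented for illustration rather than measurements of any real system.

```python
# Illustrative black-hole bookkeeping; the orbital inputs are invented.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def mass_from_orbit(v_kms, period_years):
    """Keplerian estimate M = v^3 P / (2 pi G) for a circular orbit."""
    v = v_kms * 1e3
    period = period_years * 365.25 * 24 * 3600
    return v**3 * period / (2 * math.pi * G)

def schwarzschild_radius(mass_kg):
    """Event horizon radius of a non-spinning black hole."""
    return 2 * G * mass_kg / C**2

def isco_radius(mass_kg):
    """Marginally stable (innermost stable circular) orbit, non-spinning case."""
    return 6 * G * mass_kg / C**2

# Made-up example: a star circling an unseen companion at 300 km/s
# with a ten-year period.
mass = mass_from_orbit(v_kms=300, period_years=10)
print(f"implied mass:            {mass / M_SUN:.1e} solar masses")
print(f"event horizon radius:    {schwarzschild_radius(mass) / 1e3:.0f} km")
print(f"marginally stable orbit: {isco_radius(mass) / 1e3:.0f} km")
```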
IAN FYFE
The Protecting Virus
Chih-Chin (Kevin) Chen discusses the latest hope in the fight against the flu virus
Pandemic flu – a global outbreak of the virus – has been a constant presence throughout human history and occurs with a regularity that is chilling. This year it was swine flu; before that, we had fears of an avian flu outbreak. This time the effects have been particularly devastating. The World Health Organisation declared swine flu a global pandemic and there have been over 180,000 cases and 1,400 deaths in more than 100 countries. Even excluding pandemics, each year up to 500,000 people will die from seasonal flu. Although we are unable to halt outbreaks of flu, the latest research may provide us with a weapon to help combat it. Endemic diseases such as bubonic plague, smallpox, cholera and scarlet fever have almost all been eradicated by modern medicine. Science has also helped to prolong the lives of sufferers of deadly diseases such as cancer and AIDS. So why has a cure for flu proved so elusive and why is it so hard to prevent epidemics?
Flu Virus invading the cells in the respiratory tract (right)
The answer lies in the fact that flu carries its genetic information as RNA. The virus lacks an enzyme to check for errors in its genome, so there is no mechanism to ensure that replication of its RNA remains accurate. The resultant high mutation rate in the RNA allows the virus to continually evolve and hence evade our immune system. In an infected cell, the virus can also exchange genetic material with other flu viruses infecting the same cell. When this occurs on a large scale, completely new viral strains may emerge to which most people will have little or no resistance, as was the case with swine flu. Despite these difficulties, science has made some progress in the fight against flu with preventative vaccines – inactive or non-infectious versions of particular strains of the virus. When injected, these cause the immune system to generate antibodies to these weakened strains, priming it to respond quickly to infection by active forms of these same strains and thereby reduce the severity of the
symptoms. However, the vaccine only works for the strains present in the shot, which are chosen from among those circulating at the time. The rapid evolution of the virus means that new flu strains constantly emerge, so that the winter flu virus changes composition from year to year and the vaccine administered one year may offer little protection in the next. Wouldn’t it be better if we could produce a single vaccine that would provide long-term protection against not just current strains, but also new strains of flu? Such is the hope of new research at Warwick University. Nigel Dimmock, professor of virology, has proposed the use of a ‘protecting virus’ to counter flu infections – a synthetic non-infectious virus strain that could protect us from flu for the rest of our lives. The idea sounds a bit like a fantasy: how exactly would wilfully infecting yourself with one virus protect you from another? Dimmock’s new method effectively transforms an active, infectious virus into a vaccination by shortening its genome. A normal influenza virus contains eight separate RNA strands. The protecting virus is identical except for an 80% base deletion on strand one, which renders it harmless: it cannot replicate its RNA or synthesise its protein coating and lacks the genes which code for proteins responsible for infectious symptoms and viral replication. When a harmful strain of virus is present in the same cell, the genomes of both strains of the virus can be replicated. However, the genome of the protecting virus is shorter than the regular flu virus and its rate of replication consequently higher (the exact ratio remains unclear and may vary from organism to organism). This is the key to preventing infection. The higher replication rate of the protecting virus limits the amount of the harmful virus and the number of cells it infects. Amazingly, the body is protected from all present and future flu strains, because the protecting virus is always replicated faster than any harmful flu virus. It is therefore much more effective than an ordinary vaccination, which only offers protection against specific strains. Despite its remarkable properties, it is unclear how long such a protecting virus can remain in the body. Experiments carried out by the team at Warwick have shown that the protecting strain is effective even when given up to six weeks before infection. A single dose of current anti-viral drugs like Tamiflu and Relenza gives at most 24 hours of protection. The protecting virus can also work if administered 24 hours after exposure to the active flu virus and may do so following even longer delays. Furthermore, the experiments so far suggest that the infectious virus does not undergo mutation
The protecting virus (green) has a much shorter RNA strand than the normal virus (red) (top).The traditional flu vaccination is prepared using chicken eggs (bottom)
C-C Chen is a Part 1b student in the Natural Sciences
when a protecting virus is present. The reason for this is not entirely clear. However, it does suggest that, even if we were to administer the protecting virus widely, it is unlikely that a superbug would emerge through excessive drug usage. The protecting virus opens up a new path towards fighting flu and could make it much easier to reduce the chances of future global pandemics occurring. Some flu viruses can cross the species barrier and be carried by birds, pigs or humans. Currently, treating animals, particularly wild ones, is much harder than treating humans. Simply adding the protecting virus to the food and water of animals would massively reduce the cost and effort required to fight infection. Animals that may carry the virus need not be slaughtered and the spread of the virus from country to country could be slowed down or even halted. Trials still need to be carried out in birds and humans. However, the same concept may help in the fight against similar fast-mutating diseases caused by RNA viruses, such as rubella, dengue fever or even hepatitis C, if a suitable form of the protecting virus can be made for each one.
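The competition argument above (a shorter genome is copied faster, so the protecting virus crowds out the full-length one in a co-infected cell) can be made concrete with a toy calculation. The sketch below is purely illustrative: it assumes the copying rate scales inversely with segment length and that the two templates share a fixed copying budget, neither of which is a claim made by the Warwick group.

```python
# Toy model: competition between a full-length flu genome segment and the
# protecting virus's deleted version of that segment inside one cell.
# Assumptions (illustration only): copies are made at a rate inversely
# proportional to segment length, and the two templates compete for the
# same fixed copying capacity.

SEGMENT_FULL = 2341                         # approx. length of flu segment 1, bases
SEGMENT_DELETED = int(0.2 * SEGMENT_FULL)   # ~80% of the segment removed
TOTAL_COPIES = 10_000                       # arbitrary copying budget for the cell

rate_full = 1 / SEGMENT_FULL
rate_deleted = 1 / SEGMENT_DELETED
frac_deleted = rate_deleted / (rate_deleted + rate_full)

print(f"deleted (protecting) copies:  {round(TOTAL_COPIES * frac_deleted)}")
print(f"full-length (harmful) copies: {round(TOTAL_COPIES * (1 - frac_deleted))}")
# With these numbers the defective segment takes roughly five copies out of
# every six, starving the harmful strain of the segment it needs - the
# article's 'higher replication rate' argument in miniature.
```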
A Revolution in Substance
Nicholas Gibbons explains how metamaterials can reveal what we can’t see and make what we can see invisible
Metamaterials can be used to bend electromagnetic radiation around objects as though they aren’t there (right)
The Greek root ‘meta’ means ‘beyond’ and, in the context of a metamaterial, it refers to a man-made substance that has extraordinary properties not seen in any natural material. It is well known that the chemical structure of a material directly shapes its optical properties. More recently it has been recognised that the physical structure of a material can also play a crucial role. This discovery has led to a surge of research into ‘next generation materials’. By exploiting the influence of physical structure, metamaterials provide an unmatched freedom over the manipulation of light. A metamaterial is composed of an array of tiny building blocks designed to mimic the atoms of naturally occurring materials. These elements can be thought of as ‘artificial atoms’ and are carefully designed so that they resonate at a specific frequency, giving the material a unique electric and magnetic response. These ‘artificial atoms’ must be significantly smaller than the wavelength of the incident light. Consequently, the incoming light acts as if it is short-sighted – it cannot distinguish the individual elements and ‘sees’ instead a homogeneous medium with well-defined optical properties. The behaviour of the light is determined by the collective response of all the individual elements within the metamaterial. Designed appropriately, metamaterials can have various distinctive and peculiar optical properties. Perhaps one of the most interesting is a negative refractive index. Light is refracted or bent on entering a material and the refractive index dictates both the direction and the angle of the bending. A material with a negative index of refraction will bend light in the opposite direction to that of conventional
materials, which all have a positive index. Although this may appear innocuous, it lays the framework for a startling effect known as electromagnetic cloaking, or more colloquially as invisibility. By covering an object with an appropriate ‘meta-layer’, it can be completely shielded from incident light, making it invisible to an observer. To understand how this works, consider how an everyday opaque object is viewed. Imagine a spherical object such as an apple, resting on a table. Light hitting the apple will be scattered from it. The manner and angle at which it is scattered provides information regarding its position and texture. An observer can then collect this light and process it to give detailed information on the object. However, with our electromagnetic cloak, light arriving from behind the object is bent around it due to negative refraction in such a way that it arrives at the eye without any apparent change in path. There is no evidence of any obstacle in its path. The object doesn’t even cast a shadow. This effect was recently
demonstrated successfully with infrared radiation using a nano-porous carpet structure constructed entirely of silicon. The carpet cloak was placed over an object and by mimicking the reflection of a completely flat surface, the object was successfully hidden. The ultimate goal is to extend the effect across the entire visible light spectrum and towards the famed ‘invisibility cloak’. Metamaterials also hold the key to a powerful imaging tool, popularly known as the ‘super-lens’. These devices have demonstrated an ability to far surpass the resolution of conventional lenses giving unmatched detail. Current lenses work by collecting the light emitted or reflected from an object and focusing it into a clear image. However, there comes a point where details are simply too small to be resolved further. In physics, this point is known as the diffraction limit and for a long time it was believed to be a fundamental barrier arising from the wave-nature of light. Surprisingly, this restriction can be neatly sidestepped through the use of metamaterial lenses, offering unparalleled imaging power. In the case of a standard lens, the majority of an image passes though, but the finest details decompose rapidly after leaving the object and are lost forever. On the other hand, an appropriately designed ‘metalens’ is able to sustain and even amplify these elusive components until the final image is formed, including all the tiniest details. The super-lensing effect was predicted at the start of the century and since then such lenses have been successfully constructed, achieving accurate resolution down to 50 nanometres, ten times greater than the best glass lens can manage. This has particular implications in microscopy, where such lenses could be used to image tiny objects such as DNA and single molecules, which were previously too small to be observed directly. They could also be used to optically write patterns with the high level of detail currently only possible using techniques such as electron beam lithography, one use of which is to pattern nanoscale circuits.
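To put the 50-nanometre figure in context, the resolution of a conventional lens can be estimated from the Abbe diffraction limit, d ≈ λ / 2NA. The snippet below evaluates it for green light through a good oil-immersion objective; these are standard textbook values, not numbers from the superlens experiments described here, and the exact factor of improvement depends on the wavelength and aperture assumed.

```python
# Abbe diffraction limit for a conventional lens: d ~ wavelength / (2 * NA).
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Smallest resolvable feature size, in nanometres."""
    return wavelength_nm / (2 * numerical_aperture)

# Green light (~550 nm) through a high-quality oil-immersion objective (NA ~ 1.4).
conventional = abbe_limit_nm(550, 1.4)
print(f"conventional far-field limit: ~{conventional:.0f} nm")  # roughly 200 nm
print("superlens resolution quoted in the article: ~50 nm")
# The exact improvement factor depends on the wavelength and numerical
# aperture being compared, which is why quoted figures vary.
```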
The top image was created by photolithography onto an organic polymer using a silver superlens, while a conventional lens was used for the bottom image
Many of the initial successes with metamaterials were for manipulating light in the microwave region of the electromagnetic spectrum. Metamaterials must be composed of elements significantly smaller than the wavelength of the light, which has hindered their fabrication. The first successful demonstration of negative refraction, in 2001, involved a two-dimensional array of split copper rings. These copper rings were designed to respond to microwave radiation, with a wavelength of the order of centimetres. The rings therefore only needed to be on the scale of millimetres, which was no huge engineering challenge. The experiment proved for the first time some of the unique properties which had already been predicted by theory. Metamaterials developed rapidly from this initial success and within a few years there were promising advances towards ‘optical metamaterials’, which resonate with infrared and visible light, hugely expanding their potential uses. The challenge lies in the fact that as we slide down the electromagnetic spectrum from microwaves to the visible, the wavelength of the light decreases by a factor of 10,000. In the optical region of the spectrum, light has a wavelength of around a micrometre, so the structural elements must be on the scale of nanometres. This has inspired a plethora of creative and ingenious designs. Metamaterials now take many shapes and forms; from intricate arrays of silver nanospheres, to precisely coupled gold nanorods and even repeating stacks of ‘nano-sandwiches’. The manipulation of light has always been vastly important across the breadth of scientific research, from astronomy right through to biology and medicine. Conventionally, natural materials have been chosen and optimised to achieve this goal to the best of their abilities. Metamaterials can now offer a power and flexibility which has previously existed only in the dreams of science fiction. At this rate, when the last instalment of Harry Potter finally arrives on our cinema screens, his much-coveted invisibility cloak may already have become rather passé. Nicholas Gibbons is a PhD student in the Department of Physics
The Manhattan Project
As the UK government continues discussion on the renewal of Trident, our missile-based nuclear weapons arsenal, BlueSci looks back on the only two nuclear bombs ever to be used in war and Britain’s role as a nuclear power.
HIROSHIMA, a western city on the largest island of Japan, was destroyed by a single bomb whose core was the size of a baseball. Nagasaki, on one of the smaller islands, followed. The atomic bomb project, which became known as the “Manhattan Project”, could be said to have started in Nazi Germany in 1939, when German chemist Otto Hahn proved that uranium could be split by a neutron, releasing a significant amount of energy. If that energy could be harnessed in a uranium chain reaction, the power would be staggering. Spurred on by the fear of such a weapon in enemy hands, three scientists working in America – Leo Szilard, Edward Teller and Eugene Wigner – authored a letter to the then President Roosevelt. It urged America to accelerate nuclear research with
the aim of ensuring that the Allies developed the bomb first. To give their letter the required force, they convinced noted pacifist, Albert Einstein, to put his name to it. He agonised for months over the decision. Later he was a vociferous campaigner against nuclear weapons co-authoring the Russell-Einstein manifesto, whose offspring is the ethical science society Pugwash. In light of these recommendations, in October 1939 Roosevelt set up an advisory council on uranium, allocating it $6000 for materials. By the end of the war the project had the full weight of government and military funding behind it. The project was enormous. It employed roughly 500,000 people over four main sites: uranium separation at Oak
Ridge, Tennessee, with electromagnetic separation led by Ernest Lawrence; plutonium processing via a nuclear reactor at Hanford. Arthur Compton was in charge of the initial chain reaction work led by Enrico Fermi at the Metallurgical Laboratory in Chicago. The bomb design and construction team at Los Alamos was led by Robert Oppenheimer. In order to succeed, they had to overcome significant technological barriers. Firstly, they needed enough fissile material. Natural uranium contains 99% U-238. Since only the rarer isotope, U-235, is susceptible to the chain reaction, it was important to find a process to separate U-235 from the more abundant U-238. The work at Lawrence’s laboratory exploited the mass difference between the isotopes. A strong magnetic field would deflect the electrically charged uranium by different degrees, depending upon the mass, separating and purifying the U-235. Two other methods of separation were also used: gaseous diffusion and liquid thermal diffusion. The former uses a porous barrier through which the lighter isotope in a gas will pass more easily, whilst the latter is a convection-based process. The engineering of these projects was huge and required considerable skill – the coils required to make the magnets for separation used 14,700 tons of silver. The building constructed to house the gaseous diffusion equipment was the largest in the world at that time. Yet despite this, the town and site remained relatively secret. Each stage required additional construction and expertise. In order to achieve the first self-sustained chain reaction, Fermi and Szilard designed and built the world’s first nuclear reactor, Chicago Pile-1. They arranged uranium pellets, graphite blocks to slow down the
neutrons and cadmium-based control rods to absorb neutrons released by the uranium and slow down the reaction, in a carefully calculated structure. On 6 June 1944 the Allies launched the Normandy invasion. Germany surrendered on 7 May 1945 without completing its own atomic bomb project. The scientists and military were deeply divided about what to do with the completed bomb. The war against Japan was growing fiercer. The longer the war continued, the more lives would be risked. This was the weapon to end all wars. However, some saw the bomb’s purpose as complete. Those at the Chicago Laboratory voiced their opinions in the Jeffries Report of November 1944. Its contents asked that the American people be told about the Manhattan Project, the destructive potential of the bomb and its implications for future international relations. Scientists such as Szilard, integral in making the bomb, were
The Fat-Man bomb detonated over Nagasaki (above). The key players in the building of the bombs (below)
determined that the bomb should never be used. It should remain a deterrent. Szilard collected 67 signatures of eminent scientists in a petition urging the new President Truman not to use the bomb. The bomb was tested on 16 July 1945 and the surrender of Japan demanded. Japan rejected the Allies’ proposal and on August 6th the first U-235 bomb was dropped, on Hiroshima. Hiroshima was chosen because it was flat, one of the few Japanese cities not to have been totally devastated by previous US bombing and the headquarters of the Second Japanese Army. This would be a true demonstration of the weapon’s power. However, civilians were by far the greatest victims of the bomb, outnumbering soldiers by six to one. When the bomb was detonated there was an immense burst of light and heat, lasting a fraction of a second,
with the temperature at the target point reaching 4000 degrees Celsius. Those within half a mile were incinerated; further out, the heat blistered and tore the skin from people’s bodies. The blast wave travelled across the city at two miles per second, destroying everything in its path. Those who survived exhibited painful symptoms including nausea, fever, ulceration and bleeding of the mouth, eyes and lungs – symptoms of radiation sickness. For years to come, children
showed abnormal growth and radiation-induced disorders. Three days later, the plutonium bomb ‘Fat Man’ was dropped on Nagasaki. As before, the effect of the bomb was horrific. 70,000 people died in the city that day and more over the ensuing years. Japan surrendered on August 14th 1945, signalling the end of the Second World War. With its end, the world was confronted with a weapon whose destructive power was unprecedented. Its
advocates had hoped that by unleashing its power the world would be terrified into lasting peace. However, the political landscape had changed. In 1943 Klaus Fuchs, the Soviet spy within Los Alamos, had already started contacting Soviet agents with details of the bomb. America would not remain the only nuclear-armed superpower for long.
How Does a Nuclear Bomb Work?
Let's start at the beginning: with an atom. An atom consists of three subatomic particles: protons, neutrons and electrons. Protons and neutrons form the nucleus of the atom while the electrons orbit around them. In nature, atoms of the same element, but with different numbers of neutrons, exist. These are called isotopes. Uranium, which is used to make atomic bombs, has three naturally occurring isotopes: U-234, U-235 and U-238, where the number indicates the atomic mass. Isotopes of uranium are radioactive and unstable. They decay to form new elements and in doing so release energy. The half-life of U-235, however, is 704 million years, which hardly makes it an ideal energy source. So how is uranium used to create such a powerful bomb? To derive a large amount of energy from uranium in a short period of time, the nucleus can be artificially decayed: split into smaller pieces by bombarding it with neutrons. This process is called induced fission and effectively mimics the natural decay, but on a much shorter time-scale. The neutron is absorbed by the uranium nucleus, increasing its atomic mass by one. The nucleus then becomes more unstable and rapidly decays, splitting into two lighter, fast-moving atoms plus two or three free neutrons. These new neutrons go on to repeat the process, causing a chain reaction and a rapid increase in the number of fission reactions taking place.
So how much energy can you get from the fission of uranium? Let's imagine we have one kilogram of uranium (the bomb dropped on Hiroshima carried 64 kilograms!). When one atom is split, the energy released is around thirty trillionths of a Joule (roughly 200 MeV). Not much, except that there are approximately 2.6 x 10^24 atoms in one kilogram of U-235. If every one of them fissioned, the total energy released would be about 80 trillion Joules. In comparison, the explosion of one kilogram of TNT releases only about four million Joules. For a self-sustaining nuclear chain reaction there must be a critical mass of material. To prevent premature fission, the nuclear materials are therefore stored in sub-critical amounts, which must be brought together for the reaction to take place. In a gun-triggered nuclear bomb, a gun fires one piece of sub-critical mass into another to form the supercritical mass. This method was used in the 'Little Boy' bomb dropped on Hiroshima on 6 August 1945; the explosion was equivalent to about 14 million kilograms of TNT. In an implosion bomb, a shock wave compresses the sub-critical pieces together to form the critical mass. This method was used in 'Fat Man', detonated over Nagasaki on 9 August 1945, creating an explosion equivalent to about 23 million kilograms of TNT. As well as fission bombs, there are also fusion bombs. Instead of splitting an atom in two, fusion involves binding two light nuclei together to form a heavier atom. The process requires extremely high temperatures, giving these weapons the name 'thermonuclear bombs'. The whole fusion process in such a bomb takes about one 600 billionth of a second, and the destructive force is many times that of a fission bomb, typically equivalent to billions of kilograms of TNT. The largest ever tested, the Soviet Union's 'Tsar Bomba' of 1961, yielded an explosion more than two thousand times greater than that of Fat Man.
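As a rough check on these figures, the arithmetic can be written out explicitly. The sketch below uses the approximate textbook values quoted above (about 200 MeV per fission, Avogadro's number, and roughly 4.2 million Joules per kilogram of TNT); it is a back-of-the-envelope illustration, not weapon data.

```python
# Back-of-the-envelope fission energy estimate (illustrative textbook figures only).
AVOGADRO = 6.022e23              # atoms per mole
MOLAR_MASS_U235 = 235.0          # grams per mole
ENERGY_PER_FISSION_J = 3.2e-11   # ~200 MeV released per U-235 fission
TNT_JOULES_PER_KG = 4.2e6        # energy released by 1 kg of TNT

atoms_per_kg = 1000.0 / MOLAR_MASS_U235 * AVOGADRO    # ~2.6e24 atoms in 1 kg
energy_per_kg = atoms_per_kg * ENERGY_PER_FISSION_J   # ~8e13 J if all atoms fission
tnt_equivalent_kg = energy_per_kg / TNT_JOULES_PER_KG

print(f"Atoms in 1 kg of U-235:  {atoms_per_kg:.2e}")
print(f"Energy if all fission:   {energy_per_kg:.2e} J")
print(f"TNT equivalent:          {tnt_equivalent_kg / 1e6:.0f} million kg")
```

The answer, roughly 20 million kilograms of TNT per kilogram of U-235, also explains why 'Little Boy' yielded 'only' about 14 million kilograms from its 64 kilograms of uranium: only a small fraction of the material fissioned before the bomb blew itself apart.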
Cheng T. Chong is a PhD student in the Department of Engineering
Nuclear fission of Uranium (below)
Jennifer Moore is a PhD student in the Department of Physics
The Mushroom Cloud RISING THOUSANDS OF FEET HIGH,
the mushroom-shaped cloud formed after a nuclear explosion is very distinctive. The bomb detonates just above the ground, releasing an immense amount of heat. At the centre of the explosion the temperature can reach several million degrees centigrade. The result is the sudden formation of a large fireball near the ground containing hot, low-density gas. Density differences between the hot air in the fireball and the cold surrounding
air cause the fireball to rise. As it rises, air, weapon debris and dust are sucked inwards and upwards, creating a strong updraft and inwardly flowing winds. This forms the distinctive mushroom stem. Inside the mushroom head, the hot gas rotates in a doughnut-like shape. The fireball grows in size but gradually starts to cool. As the temperature drops, vapour condenses, forming visible clouds that contain water droplets along with debris and nuclear materials. The mushroom head continues to grow in diameter as it rises, stopping only when the cloud is no longer less dense than the surrounding air.
The cloud starts out a red or reddish-brown colour due to the presence of nitrous acid and nitrogen oxides. As the fireball cools and water condenses into droplets, the colour changes, turning white.
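The rise of the fireball can be put in rough numbers using the ideal gas law: at a fixed pressure, density falls in proportion to temperature, so a very hot parcel of gas is strongly buoyant. The temperatures below are illustrative choices rather than measured values, and drag and mixing with the surrounding air are ignored.

```python
# Illustrative buoyancy estimate for a hot gas parcel (ideal gas at constant pressure).
# At fixed pressure, density scales as 1/temperature, so the hot fireball is far
# less dense than the surrounding air and accelerates upwards.
g = 9.81              # m/s^2, gravitational acceleration
T_ambient = 290.0     # K, surrounding air (illustrative)
T_fireball = 3000.0   # K, cooling fireball some time after detonation (illustrative)

density_ratio = T_ambient / T_fireball            # rho_fireball / rho_ambient
buoyant_accel = g * (1.0 / density_ratio - 1.0)   # g * (rho_amb - rho_fb) / rho_fb

print(f"Fireball density relative to air: {density_ratio:.2f}")
print(f"Initial buoyant acceleration:     {buoyant_accel:.0f} m/s^2")
```

Once the cloud has cooled and mixed enough that its density matches that of the surrounding air, the buoyant acceleration falls to zero and the mushroom head stops rising, as described above.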
Britain and the bomb How Britain learned to love the bomb but never stopped worrying – an interview with Lorna Arnold, Britain's official nuclear historian "IT WAS THE FIRST BIG nuclear reactor accident in the world. There had been accidents on reactor sites before, but none of them had gone beyond a small area. This one was an accident which could have devastated the whole of north-west England, created a vast fallout and killed a lot of people." That is how Lorna Arnold describes the fire that broke out at the military reactor at Windscale, Cumbria, in 1957. Later renamed Sellafield, the site still operates today as a nuclear fuel reprocessing plant. Few people can claim to know as much about the development of British nuclear weapons as 93-year-old Lorna Arnold, official historian of the UK Atomic Energy Authority. In the position since the 1960s, and a former Cambridge student, Lorna was among a handful of individuals throughout the Cold War to have access to key atomic scientists and to secrets so sensitive that they were never written down. After leaving Cambridge, Lorna found herself working in the War Office during the Second World War and then in the British sector of a ravaged Berlin. In 1946 she moved to Washington,
where she found herself the only female diplomat and Britain’s first woman in the Foreign Service. From the Pentagon in Washington she coordinated shipments of food and goods that kept Germany from starving. With a modesty that seems anachronistic in today’s society, she puts her extraordinary trajectory down to good luck and all the very nice people she was fortunate to meet. Here she brings back to life with great passion and energy events long boxed up in dark and dusty recesses of Whitehall archives: “It was a very tricky time for [Windscale] to happen. The British government had just reached a point where they had arrangements set up for negotiations with the Americans to start a programme. The close wartime and nuclear relationship had been cut off absolutely dead in 1946 by the Americans; they were not going to give information to any other country at all on nuclear matters. So the British, who had been very important in the American nuclear programme and in fact had initiated the whole thing, had been forever trying to get back into the partnership. In 1957 they were in sight
of negotiations with the Americans to renew the partnership. They had also just completed a big round of hydrogen bomb tests that established Britain's credentials as a serious nuclear power." Lorna gives short shrift to rumours that Britain never really did develop a genuine hydrogen bomb. "There was a warhead that worked, but it was never engineered for service use because we went into partnership with the Americans and took over an American design. [The bomb] had a British core but an American design. "Anyway, this was a very tricky point in the negotiations. The Americans might say, 'Not likely, you're too
incompetent, look at this awful accident [at Windscale], you don't know what you're doing.' They were very afraid that all hopes of the American partnership would be ended. "At the same time as this was going on, the British government had just announced a national programme of civil nuclear power stations. The plan was to build 12 nuclear power stations over ten years, providing a very considerable proportion of the country's electricity needs. It was the first civil nuclear programme in the world. "They were so afraid that this accident would set that back and there would be an immediate outcry, 'We cannot possibly have a civil nuclear power programme because these reactors are so dangerous.' " The reactor that caught fire was not a power reactor at all. The first two reactors built in Cumbria at Windscale were started in 1946 and built purely to produce plutonium for weapons. The official inquiry that followed faulted the engineers who were working when the reactor fire broke out. The report was not without consequence for their careers. In her book, 'Windscale, 1957: Anatomy of a Nuclear Accident', published in 1995, Lorna set the record straight, in spite of pressure to toe the official line. The inquiry blamed the very people who had in fact averted a terrible disaster by taking huge risks themselves. The actions of these people were beyond praise, says Lorna. What about the international context? "It is very difficult to remember now the extraordinary sense of urgency and fear in the early Cold War days. The Americans were more panicky than we were, even
though they were geopolitically so secure. They were virtually immune to Russian attack, but we were open to attack, particularly in the very early days after the war, when we had nuclear-capable American airfields all over eastern England. The chiefs of staff said that we were now the prime target. If Russia wanted to attack she couldn't reach America but could reach American airfields in the UK. So the two reasons that gave the government a sense of urgency to have a nuclear weapons capability were partly to have a deterrent against Russian attack, and partly the feeling that we couldn't rely on the Americans to defend us if the Russians attacked. There was the feeling that the Americans would pay more attention and be more cooperative if we had a nuclear capability. We didn't want to be treated just like a client state. It was very much thought that the Americans didn't think much of us, but they were very impressed by the extreme efficiency of the British weapons programme. Though this was not said publicly to be its purpose, I have no doubt at all that it was to have influence and status with the Americans." What was the deterrence value of nuclear weapons? "Things are different now; I don't think anybody would have the nerve now to say it was a deterrent. You can't make it stand up. If we hadn't got nuclear weapons and we decided not to develop them, that would be one thing, but being a nuclear power, the third nuclear power in the world, to give them up would be stepping down. "I've spoken to two Russian weapons scientists right at the heart of the [Soviet] weapon program. I asked one, German
Goncharov, what the Russian view of the British nuclear deterrent had been during the Cold War. What effect did it have on Soviet thinking? Did it influence you in any way? He said: 'I don't think we ever gave it any thought. It was of no significance; we were interested in your program because it was scientifically very clever. If we could gain anything by espionage that would be one thing, but we weren't influenced by your British deterrent.' Then I asked a colleague, a Russian specialist at Stanford University: 'While you're in the Moscow nuclear archives, do look through for any reference at all to the British nuclear deterrent and what the Soviet authorities think about it.' When he came back, he said: 'Well, I've found one reference to the British nuclear deterrent by an old Soviet general who mentioned it very much in passing. Otherwise I found no other reference to it at all.' Goncharov said to me, 'Why should we take it seriously? We were concentrating on the Great Satan across the water. Your little deterrent was just an irrelevant appendage to it.' " What about Barack Obama and nuclear disarmament? "I think he will get somewhere but I am not too starry-eyed about it because it's only a question of reducing the stockpiles by so much. They could reduce them by that much and what they've got left is still overwhelming. Obviously it's better than increasing the stockpile; every reduction is very welcome, but nobody could call it disarmament. They will still be left with something like 1,000 warheads each, and if you think what a single H-bomb can do – obliterate a whole city and cause long-lasting devastation. Only six H-bombs would be enough to wreck this country. I think it will be difficult for Obama. One has to start somewhere. It's obviously better to start with downsizing than going for what you know you can't get, which is complete disarmament. I wish him well. [Former American president] Reagan did well on downsizing the stockpiles but both the superpowers have got enormous stockpiles and what good it has done them I am really unsure." Tristan Farrow is a PhD student in the Department of Physics
Tackling Tuberculosis Sahil Kirpekar and Ali Ansary introduce their innovative strategy for tackling the challenges of drug delivery for recovering patients. Tuberculosis is a continuing public health threat and is close to becoming a global emergency. This infectious respiratory disease is most commonly seen in people with lowered immunity and is easily transmitted through the air from person to person during close contact. In most cases of tuberculosis (TB) the inhaled bacteria infect the lungs, although they can spread to other parts of the body. Tuberculosis is treated with long courses of antibiotics which usually last for many months or even years. Although this is an effective therapy, the prevalence of TB is rising because patients do not always complete their prescribed treatment. Three million die from TB annually and each year more than 400,000 new cases with antibiotic resistance are diagnosed. These new drug-resistant forms of the bacteria generally stem from poor patient compliance, where sufferers do not follow the required course of treatment, compounded by the fact that patients often feel better long before their treatment is complete and so stop taking the drugs. Ensuring patients complete a full course of treatment is critical to preventing the outbreak of new drug-resistant strains. The reasons for non-compliance amongst TB sufferers vary. Patients find it troublesome to take a combination of pills and, as symptoms improve, they may forget or avoid taking their drugs. Furthermore, some patients do not ever feel physically unwell. Current treatments for TB are only effective when patients take their medication as instructed. To address compliance issues, the World Health Organization has established a control programme to monitor and evaluate the disease regularly; this is known as Directly Observed Treatment, Short-course, or DOTS. Strategies such as DOTS aim to ensure patient compliance and avoid drug resistance, yet problems remain. Regular monitoring proves difficult in areas where the availability of healthcare staff is limited or patients need to travel long distances to clinics. Those working at the community level are still seeking an assured way of increasing compliance in order to reduce the number of patients developing resistance and ultimately improve the overall results of tuberculosis control programmes. To optimise the measures taken by the
Mycobacterium tuberculosis bacteria under a scanning electron microscope.
WHO, Inderm plans to collaborate with existing programmes such as DOTS. In order to address this public health problem, Inderm, a start-up company founded by four Cambridge students, is developing a drug-delivery device for TB. Inderm addresses the need for a solution that enforces therapeutic compliance by creating a biodegradable, subdermal drug-delivery device that can release the appropriate therapy over the full treatment period. The system will release the required drugs at a controlled rate, assuring proper treatment of affected patients at a target cost competitive with current treatment costs. Since significant non-compliance becomes an issue once patients begin to feel better, the Cambridge team is focusing on patients during the last four months of treatment. Currently Inderm is working to raise funds to continue developing their prototype device. The work is focused on the drug-release profile and patient attributes, assuring patient safety, effectiveness and acceptance. The goal is that Inderm can store and release the TB drugs in a way that matches the therapeutic profile, without degradation, toxic accumulations, irregularities in the release profile or negative interactions with the delivery system or the patient. Sahil Kirpekar and Ali Ansary recently completed an MPhil in Bioscience Enterprise at the Department of Biotechnology and Chemical Engineering
The Great Beyond Chris Adriaanse and Sonia Aguera talk to Jim Bagian about becoming an astronaut
James Bagian posing for his official NASA photo
Like many children, Jim Bagian dreamed of going into space. By the age of 12, he realised that perhaps it wasn't the most realistic of aspirations. "It was like wanting to be president of the United States, or maybe as unrealistic for me as becoming the British Prime Minister." Having outgrown his desire to become an astronaut, Jim attended engineering school followed by medical school and trained to become a doctor. It was here, while between surgeries, that he saw a NASA advertisement calling for applications for astronauts. Despite being discouraged that the flying experience he had gained at engineering school was not the 'right kind', Jim applied anyway. To his surprise he was selected to become a space shuttle astronaut and, at the time, the youngest person to be selected. Jim joined NASA in 1980 and went into space on two separate occasions, for five and ten days respectively, in March 1989 and June 1991. The main task during Jim's first mission was to place a satellite into a geosynchronous orbit. Simultaneously the crew were also conducting a number of science experiments. Jim was looking at a treatment for space motion sickness – a potentially debilitating and dangerous consequence of weightlessness. "The symptoms sound like motion sickness that you might get on a boat or car or whatever, but it had never been successfully treated. It was a major problem. About 75% of all first-time flyers get sick to one degree or another. Sometimes it's just minor, stomach pains or such, but others can be vomiting every ten minutes." Jim wanted to try a drug already used on Earth, arguing that possible side-effects were minimal compared to the potential symptoms. During his flight, the drug was tested on several fellow astronauts and it worked with nearly a 100% success rate. "Space motion sickness had a lot of impact on how we did missions because we had to allow for people being sick. Now if they're sick, you inject them and in twenty minutes they're good to go." Jim's time in space was extremely tightly choreographed. Every five minutes were accounted for except for bathroom breaks that "you had to squeeze in" somehow. The crew's schedules were also intertwined, so being off yours could affect others. "Everything is accounted for: when you're doing this experiment, when you're doing that experiment, when you're using the computer, when you're downloading
data, when you're uploading data, how much power you're pulling on your experiments – because if all the high-powered experiments ran at the same time, you couldn't supply them. "You can't be like 'Well, maybe I'll do that experiment later rather than now,' because that will have huge ramifications. Will you be able to get the data stream downloaded and get the scientists the information on the ground? Will you have enough power to even do it? So you weren't sitting around thinking 'maybe I'll look out the window a little bit more' or 'maybe I'll have tea now' – that wasn't happening. You were very busy the entire time." Jim describes weightlessness as like being in a swimming pool whose water is the same temperature as your skin – essentially it feels like nothing, and you quickly adapt your behaviour. Your reflexes when you 'drop' something become redundant and you learn that when you put something 'down', you have to just let it float so that when you turn around to retrieve it, it may only have moved an inch. By all accounts, the view is quite spectacular: "You can see a tremendous amount of detail. Looking at the sea with your eyes, it's a bunch of different blues. You can see cold water upwellings. You can see a ship's wake a thousand miles long and see the ship up at the front. You can see in some cases roads with your naked eye.
Filming the Earth's surface (top). Monitoring Pilot John E. Blaha's blood flow during his first mission (middle). Floating through the Spacelab Life Sciences module aboard the Earth-orbiting Columbia (bottom).
I could even see the football field two blocks from my parents' home." This spectacular clarity comes from the lack of atmosphere. When you're looking down from orbit, you're not looking through a lot of atmosphere, which is already relatively thin only a mile up. In contrast, when you're looking horizontally, you're often looking through miles of atmosphere that seriously impedes your ability to see objects far away. The difference is incredible. According to Jim, from 180 miles up and with a regular pair of binoculars "you can see the surf break over Polynesian islands and the palm trees on the beach." Perhaps the oddest thing to happen to Jim occurred during the first couple of hours of his first mission. The shuttle had just settled into orbit, opened the payload doors and was approaching the night-time side of Earth, when he saw something rather strange. "I saw what appeared to be a red flashing light – like a rotating red beacon on an aeroplane – and it seemed to be about 100 metres away. I was like, 'what am I seeing here? Is this a UFO?' It was there for about 30 seconds, flashing, and then it went out." Perplexed, Jim kept quiet, but then 45 minutes later it appeared again as they reached the daytime side of the Earth. "And when it got to full daylight, I saw that it was a washer – the little metal disk with a hole in it – and it had probably been left in the payload bay by a mechanic and had floated out into space and was in orbit with us and rotating, so at sunrise it would flash red as it caught the sunlight, flashing every second." The story gets stranger. When the crew returned to Earth he discovered that there was a bogus press report from someone claiming to have intercepted radio messages from the space shuttle saying that they had berthed a UFO in their payload bay. Jim had been identified as the person who had made the transmission. Years, even decades later, Jim would get calls from Hollywood producers keen for an insight into how the UFO had docked. Jim has some solid advice on becoming an astronaut: "The fact is that you have to apply! I was 24 years old when I submitted my application and really didn't think I stood a chance." "It's very challenging. Most of it is not flying but getting ready to fly. So if you're someone who doesn't like doing some of the development work and you don't like engineering, then you're not going to like being an astronaut, because that's what most of your work is. Flying to me is the anti-climactic part. It's a nice experience and all, but the satisfaction comes from making sure that the mission is possible to do." In all seriousness, Jim stresses that you should pursue your interests. His had inadvertently prepared him for a career as an astronaut. "I had done different things because I had a passion for them. Trying to fill out my resume and become an astronaut wasn't on my mind.
Yet all those things are what led to me being selected. I wasn't doing it to earn merit badges. If you're doing things that you wouldn't normally do, then maybe you're preparing yourself for the wrong career." Post-NASA, Jim became the chief patient safety officer and director of the National Center for Patient Safety at the Department of Veterans Affairs, and was given carte blanche at a time before patient safety was an established medical discipline. Medical errors can have serious consequences for patients and Jim has been able to use his skills as an engineer and physician to make dramatic improvements. "This job didn't exist until I had it. Nobody in the world had a job like this, and when the person that ran it said, 'safety is an issue, will you come and look at it?', I thought absolutely, because I always chafed at the way that medicine was run in that regard. This was an opportunity to make a difference." Chris Adriaanse is a PhD student in the Department of Chemistry and Sonia Aguera is a PhD student in the Department of Pathology.
Into Africa ALISON PEEL
Alison Peel braves the wilds of Africa to look at the spread of viruses in bats
wading thigh-deep through a murky swamp at 4 am,
with reeds reaching far above my head and vegetation squishing through my waterproof walking shoes isn't my ideal Sunday morning, but it certainly beats lab work! This particular adventure took place in a small patch of mushitu forest in Kasanka National Park. As one of Zambia's smallest national parks, it offers an extraordinary diversity of bird life and unique opportunities to witness species that are rare in other parts of Africa. But I was there for different reasons. Every year, from a particular day in late October until the beginning of January, several million straw-coloured fruit bats (Eidolon helvum) descend on this small patch of mushitu forest within the park. These fruit bats are widely distributed across sub-Saharan Africa, with easily recognisable colonies: it's hard to miss a squabbling roost consisting of millions of individuals, especially since they are often found in the busiest parts of major cities. It's thought that they migrate seasonally to make the most of variations in fruit availability. The colony at Kasanka is the largest known. It's an awe-inspiring sight to watch the bats leave the roost to feed. Shortly before dusk falls, black silhouettes pepper the African sky as far as the eye can see.
The colony of fruit bats at Kasanka (above) and Alison taking biological samples from a fruit bat (below)
I was there to collect blood and genetic samples from a few of the vast population. Each morning, my guide Changwe and I waited for the colony to return from their night-time feeding, before the heat of the day arrived and while the bats were weary from their night-time activities. I am part of a collaboration investigating the role of the fruit bat as a host for viruses that can infect humans. Lagos Bat Virus (a rabies-type virus) had been found in this species before; in colonies in Ghana, Malawi and Zambia we have now also found a Henipavirus (a new genus of viruses in the same family as measles). These viruses had only been found previously in Australia, Asia and Madagascar, where transmission from bats to humans has resulted in fatalities. Human cases have not been reported in Africa, even though the colonies are in close proximity to humans and are a common source of bush meat. However, this could be due to misdiagnosis and is an important area for future research. To truly understand how the viruses circulate within and between colonies, more information on the migratory patterns and connectivity of colonies across Africa is needed. For example, little is known about where the individuals in the Kasankan colony come from, or how they manage to arrive with such remarkable accuracy year on year. My research addresses this by comparing genetic samples from colonies across the whole continent. Results so far point to one freely mixing population across continental sub-Saharan Africa. However, some isolated island populations in the Gulf of Guinea appear to be genetically distinct from the mainland population. Comparing the infection status of these colonies with those on the mainland could give us vital insight into the circulation of the viruses. It appears that fruit bats have evolved with these viruses over millions of years and that 'spillover' transmission to other species, including humans, may result from an ongoing loss of their natural habitat and other man-made environmental changes. Further research will help us to understand the enigmatic lifestyle of these fascinating fruit bats, so that we can minimise human health risks, whilst maintaining the natural and crucial role that these bats play in the ecosystem. Alison Peel is a PhD student in the Department of Veterinary Medicine
The crowd enjoying the music at the Green Man festival
music festivals are generally places where you can go to escape normality for a few days. The last thing you would probably expect to find is a group of scientists demonstrating experiments. This year I was part of a team of physicists who stepped up to the challenge at the Green Man Festival in Wales by becoming 'physics buskers'. We were there as part of the Physics in the Field scheme, which is run by the Institute of Physics. Our goal was to perform physics tricks and raise awareness of physics amongst the festival-goers, the majority of whom do not have scientific backgrounds and would not normally seek science out. Our team of five was made up of four student volunteers and the coordinator Zbig. As an incentive to volunteer we were given free tickets for the festival and five meals a day. I met the other volunteers bright and early on the first morning, with no idea what to expect, as I had never taken part in physics busking before. We were all a little nervous, and the driving rain did not help! We were presented with a box full of balloons, bottles, film canisters, skewers, and a plethora of other common household artefacts. Who would we appeal to, if indeed we appealed to anyone? Or would we disappear into the backdrop of the festival, only visited by the odd drifter on a mission to find a clean toilet? After all, could physics buskers really hope to compete with the musically acclaimed folk-tronica that had attracted the visitors in the first place? The answer, to our amazement, was yes. We were at times overwhelmed by the number of people
Samuel Wright recounts his weekend as a magician at the Green Man Festival
stopping by to catch a bit of science, in between the other amusements on offer. The response to our demonstrations was consistently positive from our onlookers, young and old. But our biggest fans were the youngest of our visitors. There were a large number of children at the festival and our magic tricks struck a chord with their imaginations. Our job as physics buskers was to demonstrate a physical principle using basic everyday objects. We performed a little trick and then explained the science behind what we did. It was a real buzz to see the sheer bewilderment on the faces of our patrons as they pushed a skewer through a balloon without popping it, heard the drone of a vibrating coat hanger amplified by a string, or set off a rocket using just a tablet of Alka-Seltzer. I really enjoyed my weekend of physics busking. I have yet to meet a graduate student that has not, at some point, become disenchanted with their research, if not science as a whole. Taking part in schemes like this not only promotes and popularises our work, but also helps to remind us why we became interested in science in the first place. If the opportunity arises to take part in anything like this, then I recommend you give it a try. You won't regret it. More information about the Physics in the Field scheme, as well as a detailed explanation of all the tricks and example movies, can be found at www.physics.org/events Samuel Wright is a PhD student in the Department of Physics
The Perfect Melody Amy Chesterton looks at the mathematics behind a perfect tune beautiful music is an art form.
The European Youth Orchestra playing at the Albert Hall (right).
It is appreciated across all cultures and is an important part of social life. Those who can play music are considered gifted. Those who compose are geniuses. Music stems from a passion, channelled through artistic flair, seemingly beyond the creativity of most people's conscious thought. Frederick Delius, the English composer, described music as an 'outburst of the soul' while Ludwig van Beethoven described it as 'the mediator between the spiritual and the sensual life'. So what is it that makes music sound good? Mathematics holds the answer. There are many ways in which mathematics and music are related. Both have abstract concepts and use their own set of symbols. Both are associated with high intelligence and can be somewhat perplexing to a novice. There are aspects of music that are obviously maths-related: the timing of a beat or the length of the notes. But the similarity stretches further and is a hot area of research for physicists and mathematicians alike. The first link between mathematics and music was made in the 6th century BC by the Greek mathematician Pythagoras. Legend has it that when going about his daily business, Pythagoras walked past a busy blacksmith's. As he listened to the repetitive bashing of hammers on anvils, he noticed something interesting. All the anvils sounded harmonious, apart from one, which thudded in between the gentle ringing. Closer inspection revealed that the anvils' masses were all in simple ratios, one twice the mass of another, another three times the mass and so on. The mass of the anvil which stood out had no simple relationship to the others. It is the idea of ratios which holds the key to musical appreciation. Sound travels as compression waves with oscillating pockets of high and low air pressure. The rate at which these air pockets reach our ear drum determines the note we hear. A higher frequency of air pockets produces a higher-sounding pitch while a lower frequency of air pockets produces a lower-sounding pitch. The secret to sweet music is to line up these air pockets in orderly or interesting ways. Take the note 'middle C', approximately the centre key on a piano. Middle C has a frequency of 262 Hertz. This means that 262 pockets of air
hit your ear drum each second. A note guaranteed to sound great alongside middle C is the note with double its frequency, just as the anvil and one with twice its mass sounded good together. The note with double the frequency is defined as being an 'octave' higher and is given the same name, C. Distinguished as 'High C', it has a frequency of 523 Hz. Every second air pocket from high C arrives with an air pocket from middle C; the ratio is 1:2 and this sounds great! This natural phenomenon of octaves has been referred to as the 'basic miracle of music' and is the basis of the entire note system for many musical systems, including ours. The Ancient Greeks defined there to be five equidistant notes within the octave and played their music accordingly. In western culture the octave is split into 12 equally spaced semitones. These are C, C sharp, D, E flat, E, F, F sharp, G, G sharp, A, B flat, B, High C. Playing two notes that are an octave apart, one after the other, always sounds good. The first two notes in Somewhere Over the Rainbow are one octave apart, as are the first two in Singin' in the Rain. This simple fraction is nice but it can sound hollow and boring. To create interesting music we must explore other relationships. C and G have a ratio of 3:2. Every second pocket of air from the note C is accompanied by the third from G. This frequent meeting of air pockets guarantees that C and G complement rather than clash. These two
notes are found together at the opening of the Star Wars theme tune. By contrast, C and F-sharp are not quite so nicely related. The pulses (almost) line up at the seventh and fifth air pockets, but this is too long a time period and too infrequent for the combination to sound good. As a result they are rarely found together. As well as being important in melodies, these ratios also apply to chords. In a chord, combinations of notes are played simultaneously. To sound harmonious, the frequencies of the notes must be related by ratios of small whole numbers. 'C major' is one of the most popular chords, combining C, E and G. All three frequencies line up, almost perfectly, every 0.0155 seconds, giving a sweet, harmonious and happy sound. The symphonies in C major of Beethoven, Mozart and Schubert all used this combination, as does the popular British nursery rhyme Row, Row, Row Your Boat. The E major chord comprises E, G-sharp and B, which line up every 0.012 seconds. This sounds just as sweet, but a little higher, because it is made up of higher-frequency notes. Musical instruments are tuned to play notes exactly. Even a slight error in frequency could ruin a musical piece. Instruments are designed to very accurate specifications, which for many have remained unaltered for centuries. Stringed instruments use string lengths to create notes; to double the frequency the length is halved. Wind instruments use the positions of holes to the same effect. The material of choice is also important, as each material resonates with its own natural frequency. Materials are chosen for each instrument to reinforce certain harmonics and distinguish it from other instruments. This is why the violin is clearly distinguishable from the piano,
Amy Chesterton is a PhD student in the Department of Chemical Engineering
Despite being a popular instrument for over 500 years, the violin in its present form has remained practically unaltered. (below)
even though both may be playing the same note. Keys also make use of maths, where the key defines which scale the piece is based on. Changing the key of a song involves changing every note by a certain amount. Each note is raised or lowered by some fixed value, usually so it can be played more easily on a given instrument. Changing key does not affect the song. Although the pitch of the notes has changed, the pattern is still the same. The song is just as recognisable and catchy and the careful maths you're listening to remains the same. The theory of mathematics in music is of course an incredibly complex topic and one which can only be truly appreciated by those proficient in both. Scientists have long been known for their appreciation of mathematics and it seems now that the same can be said for artists. Whether conscious or not, mathematics sounds good. So if you plan to create a musical masterpiece, you'd better make sure you have your calculator handy!
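For readers who want to see the numbers, here is a minimal sketch of the arithmetic described above, assuming twelve-tone equal temperament anchored at the article's middle C of roughly 262 Hz; the note names and starting frequency follow the article, while the code itself is purely illustrative.

```python
# Equal-temperament semitone frequencies built from middle C (~262 Hz),
# plus the near-3:2 ratio between C and G that the article describes.
MIDDLE_C = 262.0   # Hz, as quoted in the article
NOTES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "G#", "A", "Bb", "B", "High C"]

# In equal temperament each semitone multiplies the frequency by 2**(1/12),
# so twelve steps give exactly a doubling (the octave).
frequencies = {name: MIDDLE_C * 2 ** (i / 12) for i, name in enumerate(NOTES)}

for name in ("C", "G", "F#", "High C"):
    print(f"{name:>7}: {frequencies[name]:6.1f} Hz")

print(f"G / C ratio:  {frequencies['G'] / frequencies['C']:.4f}  (close to 3/2 = 1.5)")
print(f"F# / C ratio: {frequencies['F#'] / frequencies['C']:.4f}  (near 7/5, a much rougher fit)")
```

The equal-temperament G comes out at about 1.498 times C rather than an exact 3:2; that tiny compromise is what allows a fixed set of twelve notes to be transposed into any key, as the paragraph above describes.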
Lindsey Nield tracks the evolution of the periodic table no chemistry textbook or laboratory wall would
be complete without a prominently placed periodic table. This seemingly simple arrangement contains information about the atomic elements and is the culmination of the work of many scientists over more than a century. The table's discovery is usually attributed to Dmitri Mendeleev, but he was not the only one to puzzle over the periodicity of the elements, nor the first to look for a way of arranging them. Indeed, Mendeleev's table bears only a passing resemblance to the current form as subsequent discoveries have initiated changes. The periodic table contains over 100 elements, 92 naturally occurring, the others produced in nuclear reactors or particle accelerators. The elements are listed in ascending order of atomic number, that is, the number of protons contained in a single atom, but arranged 'periodically' with elements of the same valence (describing their bonding power), which exhibit similar properties, being grouped together. The first accepted periodic table was developed before the configuration of the atom was known; so how did we arrive at this design? It began with the concept of an element – a substance that cannot be broken down into any simpler substances by chemical reaction. Aristotle believed there were just four: earth, air, fire and water, but this idea was surpassed when the first chemical element was recognised. Many of what we would now call elements were known from pre-history. We cannot know who first used gold for ornamentation, or who ushered in the Iron Age by identifying iron. We do know that German alchemist Hennig Brand was the first to discover an element by experimentation, in 1669. Whilst searching for the Philosopher's Stone, a mythical object that would turn
other metals to gold, he instead isolated phosphorus. During the next 200 years, a vast body of knowledge concerning the elements was acquired and by 1869 a total of 63 had been discovered. Scientists began to recognise patterns in the physical and chemical properties that the elements displayed. The atomic weight of the element (how heavy its atoms are in comparison to an atom of hydrogen, the lightest element) came to have particular significance. The German chemist Johann Döbereiner noticed that strontium's atomic weight (88) lies midway between those of calcium (40) and barium (137); elements that possess similar properties such as high reactivity with air and hydrogen gas production on contact with water. In 1829, after discovering two further groupings – the alkali metals lithium, sodium and potassium, and the halogens chlorine, bromine and iodine – he proposed that such elements exist in triads, with the properties of the middle element being an average of the other two: the Law of Triads. Scientists soon found that these groupings extended beyond triads and began to look for ways to arrange the elements. Alexandre-Émile Béguyer de Chancourtois, a French geology professor, used their atomic weights to plot the elements as a continuous spiral around a cylinder. His 'telluric screw' had some success in lining up elements of similar properties but on publication in 1862 his contemporaries found it difficult to understand. Scientists in other countries only became aware of his work years later. One year later English chemist John Newlands noticed that when arranged in order of increasing atomic weight, elements separated in the list by an interval of eight had similar properties. Comparing this trend with the octaves of music, he referred to it as the Law
Flame colours produced by the compounds of various elements when burnt in methanol. From left to right: potassium, copper, caesium, boron and calcium.
Bringing Elements to the Table
of Octaves. However, this law broke down for elements with atomic weights higher than that of calcium, and when presented to the Chemical Society (one of the forerunners of the Royal Society of Chemistry) his work was criticised and his contribution was not recognised for another two decades. Building on the foundations of these scientists, the competition came down to two men: German chemist Julius Lothar Meyer and Russian chemistry professor Dmitri Ivanovich Mendeleev. Both men produced remarkably similar results at the same time, while working independently of one another. Meyer constructed a table listing elements in increasing weight order, with those of the same valence appearing in the same column. The periodic nature of the elements was clearly demonstrated by plotting a graph of atomic volume against atomic weight. Unfortunately for Meyer, when he published his table in 1870 it was a year too late. Available to the scientific community since 1869, Mendeleev's table claimed priority, leaving Meyer's to only confirm the discovery. Mendeleev's table listed the elements in the same arrangement but with some superior features. Knowing atomic weights were not always accurate, he placed certain elements out of order to preserve group properties. As more accurate weights became available, his order proved to be correct. He had such confidence that he used his table to predict the properties of missing elements, and was proved correct with gallium, germanium and scandium all discovered within his lifetime. One thing Mendeleev did not foresee was the existence of a whole new group of elements: the noble gases. Sir William Ramsay and Lord Rayleigh published their discovery of argon in 1895. There was no space in the periodic table for this new element, but after Ramsay went on to discover neon, krypton and xenon, group 'zero' was added to the table, so-called due to the zero valencies of the elements.
Mendeleev's table was still not quite perfect. In 1914, the English physicist Henry Moseley determined the number of protons in an atom of each element by bombarding them with X-rays. Arranging the elements in order of increasing atomic number rather than atomic weight, he eliminated the remaining inconsistencies. The last major changes to the periodic table resulted from the work of Glenn Seaborg. Beginning with plutonium in 1940, he discovered all the transuranic elements from 94 to 102 through bombardment experiments. Element 106 is named seaborgium in his honour. Bombardment experiments continue to produce new elements; in June 2009 the heavy element 112 was officially recognised and will most likely appear with the name 'copernicium', after the astronomer Copernicus. The German team who made the discovery already have their sights on finding element 120. Thus we arrive at the modern periodic table. The elements are the building blocks of matter and knowledge of them is fundamental to understanding everything from the stars and planets to life on Earth. The periodic table represents a complete map of the elements and is an indispensable guide to the universe around us.
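The difference between ordering by atomic weight and ordering by atomic number can be seen with a couple of textbook element pairs: tellurium and iodine (and cobalt and nickel) swap places depending on which property you sort by, which is exactly the kind of inconsistency Mendeleev had to fudge and Moseley's atomic numbers resolved. The values below are standard atomic weights and atomic numbers; the snippet is simply an illustration of that point.

```python
# Ordering elements by atomic weight versus atomic number (textbook values).
# Tellurium is heavier than iodine, yet chemically it must come first:
# the classic inconsistency that Moseley's atomic numbers resolved.
elements = [
    ("I",  53, 126.90),   # (symbol, atomic number, atomic weight)
    ("Te", 52, 127.60),
    ("Ni", 28, 58.69),
    ("Co", 27, 58.93),
]

by_weight = sorted(elements, key=lambda e: e[2])
by_number = sorted(elements, key=lambda e: e[1])

print("By atomic weight:", [e[0] for e in by_weight])   # ['Ni', 'Co', 'I', 'Te']
print("By atomic number:", [e[0] for e in by_number])   # ['Co', 'Ni', 'Te', 'I']
```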
Mendeleev painted by Russian artist Ilya Yefimovich Repin (left).
Lindsey Nield is a PhD student in the Department of Physics
Rose Spear meets Adam, the first of a new generation of electronic scientists
Adam stood behind one of his many makers (above)
meet adam, the first of a new generation of electronic scientists. Adam lacks your dexterity, creativity and fleshy looks, but this robot is the first to demonstrate the abilities many rely on to conduct their research. Created by researchers from Cambridge and Aberystwyth, Adam has the ability to conduct experiments, analyse results and develop hypotheses. You might say that any researcher could complete these activities. However, can any researcher work non-stop, devoting all their attention to the research? Adam can work continuously and progress automatically through every step of the scientific process. Adam is not the first robot to assist with scientific research. Robots with a range of functions are already used in a variety of disciplines, each designed to make scientists’ research faster, easier and more cost-effective. The basic functions of these robots include automated pipetting, sampling and measurement; repetitive activities that prove tedious for humans. For instance, the introduction of automated chromatography has improved the speed of compound purification: accelerating chemical synthesis and pharmaceutical drug development. Using robots allows scientists to run multiple experiments at once, maximising output and advancing the pace of research. Robots with more advanced functions are used where conditions prevent investigation by human scientists. Robots are currently conducting research in a number of extreme environments, including the surface of spaceships, the inside of volcanoes and the bottom of oceans. Examples include Dante I and II for exploring volcanoes in Alaska, the Mars Lander and Sentry for deep-sea investigations. All previous robots – from the very basic automated pipetter to the advanced Mars Rover – have two main
limitations: they cannot interpret the significance of their results or plan new experiments. At most, these robotic assistants collect data for humans to analyse. This has led to a bottleneck, particularly in the biological sciences, where robots are generating new data faster than human scientists can retrieve and analyse it. Enter Adam and his kin. With the ability both to conduct experiments and to draw conclusions from the results, Adam is a significant technological breakthrough. Previously separate functions of robots and humans can now smoothly proceed in one continuous process, sidestepping the human bottleneck. This revolution in automation is made possible by a process termed 'active learning'. Active learning involves a cycle of experimentation and analysis, where each successive experiment is derived from the previous results. With each new set of data, the robot scientist is able to generate new hypotheses to explain the experimental results. Subsequently, the robot scientist uses the same logical programming to determine which experiments have the highest chance of falsifying the greatest number of potential hypotheses. In the case of Adam, this iterative logic process was applied to the field of genetics. Adam investigates the effects of different growth conditions on the proliferation of selected yeast strains. This data is then used to match genes from the yeast Saccharomyces cerevisiae with the enzymes that they encode. During experimentation, Adam performs five basic operations, from the retrieval of selected yeast strains from frozen storage to the measurement of growth using an automated plate reader. Similar to a student gaining background knowledge on their research, Adam required the input of background information on various topics, including knowledge of S. cerevisiae metabolism, before he tackled the problem of yeast genetics. With this background knowledge and advanced software for hypothesis creation and experiment design, Adam was able to hypothesise that one enzyme was encoded by three distinct genes: a conclusion subsequently verified by results obtained manually by human colleagues. The result of Adam's experimentation was understandably a modest discovery, yet it hints at a future where humans and robots work as a team to investigate some of the most time-consuming challenges in science. Rose Spear is a PhD student in the Department of Materials Science and Metallurgy
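The active-learning cycle described above can be sketched in a few lines. The toy below invents a tiny gene-to-enzyme matching problem: the gene and enzyme names, the 'true' mapping and the experiment-selection heuristic are all made up for illustration, and are not Adam's actual software or data.

```python
# Toy active-learning cycle of the kind the article describes: candidate
# hypotheses are possible gene -> enzyme assignments, experiments are gene
# knockouts, and each result eliminates the hypotheses it contradicts.
from itertools import permutations

genes = ["geneA", "geneB", "geneC"]
enzymes = ["enzyme1", "enzyme2", "enzyme3"]
true_mapping = dict(zip(genes, enzymes))   # hidden ground truth (invented)

# Every one-to-one assignment of genes to enzymes is a candidate hypothesis.
hypotheses = [dict(zip(genes, p)) for p in permutations(enzymes)]

def run_experiment(gene):
    """Knock out a gene and observe which enzyme activity is lost (simulated)."""
    return true_mapping[gene]

remaining_experiments = list(genes)
while len(hypotheses) > 1 and remaining_experiments:
    # Simple heuristic: pick the experiment whose outcome is least settled
    # among the surviving hypotheses, i.e. the one expected to rule out most.
    def ambiguity(gene):
        return len({h[gene] for h in hypotheses})

    gene = max(remaining_experiments, key=ambiguity)
    remaining_experiments.remove(gene)

    lost_enzyme = run_experiment(gene)                          # analyse the result...
    hypotheses = [h for h in hypotheses if h[gene] == lost_enzyme]  # ...and prune

print("Surviving hypothesis:", hypotheses[0])
```

In this toy, two knockouts are enough to leave a single surviving hypothesis; Adam runs the same generate-test-prune loop with real yeast cultures, a plate reader and a far larger hypothesis space.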
Books The Lives of Ants THEY ARE EASILY STEPPED ON, robotic and seemingly forever busy.
OUP, 2009, £16.99
In The Lives of Ants, evolutionary biologist Laurent Keller and science writer Elisabeth Gordon team up to show us that there is more to these tireless workers than simply being highly cooperative earth and food movers. The book manages to present fascinating yet obscure morsels of information on almost every page, and it is the lesser-known facts held by myrmecologists (ant specialists) that make this compact book worth reading. Even a cursory browse will allow you to discover that weaver ants use dances and squeaks to exchange messages, and that Bahamian ants link together to form floating rafts to help them survive floods. With a constant emphasis on the similarities between ant and human societies, the book aptly closes with a section on how research into ant behaviour has found applications in everything from designing communication networks to sending tiny robots to Mars. Delivered in highly readable bite-sized chapters the pair successfully navigates through the science of ant biology to present an accessible piece of writing. With variety to entertain and inspire an audience ranging from the casual naturalist to the impressionable young entomologist, this little gem gets a solid recommendation for your coffee table. AZ
Worlds on Fire
CUP, 2009, £23.99
"GRAB YOUR SPACESUIT and your helmet and bon voyage!" Worlds on Fire starts with an exploration of volcanism on Earth, delving into topics for both non-specialists and students of earth and planetary sciences. But beyond this, the book takes a unique turn, guiding the reader on field trips to landmark volcanoes throughout the solar system. Frankel provides basic geological information about each volcano. He suggests when, where and how to visit volcanoes such as Etna in Sicily or even Sapas Mons on Venus, "an essential, albeit difficult destination for the serious volcano lover". This exploration of what volcanoes on other planets would be like to visit, and the challenges of studying them, makes the book stand out from more conventional textbooks. Although the level of technical information is perhaps not enough to satisfy the specialist reader, this is an engaging, clearly written, accessible book, which is recommended whether you want to learn more about geological processes or just generally appreciate the range of volcanoes in our solar system. It is easy to dip in and out of the various chapters and it avoids the clichéd 'textbook feel' with its imaginative approach to the subject. DV
The Ten Most Beautiful Experiments A DELIGHTFUL READ that will stimulate the scientist inside everyone. Science in the modern era has become industrialised, but this book showcases the brilliance of individuals whose clear and simple experiments continue to impact the science of today. The book begins with Galileo's formulation of the 'time-squared law' and continues by tracing the discoveries of Newton, Harvey, Lavoisier and Galvani. Their rigorous methods of observation and analysis are wonderfully described; the experiments are the protagonists in these first five chapters. Despite the jarring intrusion of personal details in the discussion of Faraday's work on electromagnetism, the remaining sections on Joule, Pavlov, Michelson and Millikan maintain the same lucid tone as the first five. Chapters are short, succinct and keep the reader's attention throughout. The author achieves what he set out to do, showing how, using simple tabletop apparatus, logical reasoning and dogged perseverance, these great minds discovered the hidden laws of nature. SS
Vintage Books, 2009, £8.99
Your questions answered by I.M. Derisive
Email your scientific queries to email@example.com
There was recently a story about Jesus' face appearing on a Marmite lid. I'm not sure I believe in miracles, so I was hoping that science might have an explanation. Agnostic Anne
Anne, Of course science can offer us a logical explanation; one need only delve into physical chemistry to find the answer. When highly volatile substances such as Marmite become superheated, for instance when a warm knife is used to remove them from a pot, they enter an extremely excitable state. In such a state, they are susceptible to changes in the Higgs field. Marmite, like certain other things, is closely linked to religion due to its love-it/hate-it dichotomy. As such, there is a clear entanglement process between a Gideon Bible and the Marmite atoms on the plastic lid. Illustrations present in the Bible manifest themselves as changes in the localised Higgs field around the lid, thereby condensing the Marmite atoms into the pattern of a well-known historical figure and creating a media storm. A similar process is performed by Nike and other sports brands, where they entangle the material they use to make their logos with shampoo additives, causing their brand logo to spontaneously appear on the back of people's heads. Dr I.M. Derisive
Walkers crisps have been running a promotion to find a new flavour for their crisps. Is there a scientific method to arrive at the perfect flavour? Munchies Mark
Mark, Naturally, the best method is a statistical one. Using the renowned Triple Asymmetric Solitary Tangent Efficiency (TASTE) test, it is possible to determine the favourite flavours of every human being on
the planet through sampling only a population of ten children, six geese and one newborn infant of unknown gender (hence the 'Triple Asymmetric' moniker). The results of this analysis are fed through a complex algorithm whereby the numbers A=1, B=2, etc., are Summated, Multiplied, Executed, Loaded and Lowered (the SMELL transformation) to give a 17-digit number. This number can then be translated into a single-digit code using the Yacksenfive Uncoupled Matrix (YUM) machine. From this, it has been discovered that by far the best flavour is salt and vinegar, although certain anomalous results indicate that banana and prawn may well be the way to go. Dr I.M. Derisive I read that objects going into a black hole undergo a process of stretching called 'spaghettification'. It got me to thinking, what would happen if you put spaghetti into a black hole? Durum Dan Dan, What an excellent question! Indeed, scientists at the LHC (Large Hadron Collider) hope to answer just that over the coming years. Their experiments will involve putting spaghetti pieces into the collider, accelerating them to nearly the speed of light and colliding them against a stationary bed of tomato sauce. This will allow the behaviour of spaghetti in the resulting transient black holes to be observed. This project has been assigned a total budget of $10 billion, though the majority of that spending will be on bread for mopping up (apparently seeded batch loaves are at a premium). Of course, your question focuses by implication on what would happen if you fed spaghetti into a black hole end-on. If, however, the strand were to be fed sideways, the obvious result would be tagliatelle - a mechanism which is, in fact, currently used by the pasta-making industry. Indeed, given the cost of making these black holes by other means, most production is now out-sourced to Switzerland. Dr I.M. Derisive