
The Cambridge University science magazine

ISSN 1748-6920

www.bluesci.co.uk



Easter 2010 Issue 18

FOCUS 50 years of lasers From communications to clean energy Biomarkers . Placebo Effect . Earthquakes Consciousness . Prader-Willi Syndrome . Chronophotography




Contents

Features

6   In Search of the $1000 Genome
    Elizabeth Batty outlines the race for cheaper genome sequencing

8   Not All in the Mind?
    Ryan Breslin discusses the mechanism of placebo pain relief and whether or not we should exploit it

10  The High Stakes of Earthquakes
    Owen Weller reports on the devastation of seismic activity and the work invested in preparing for the fallout

12  Never-Ending Hunger
    Kate McAllister looks into Prader-Willi Syndrome and the insatiable desire to eat

14  I Think, Therefore I Am?
    Rupak Doshi investigates how scientists hope to find an answer to this ancient philosophical debate

FOCUS

16  The Laser Fifty Years On
    BlueSci looks at how this physicists’ toy has become the tool of choice in so many areas of life and how it may play a vital role in solving the impending energy crisis

Regulars

3   On The Cover
4   News
5   Pavilion
22  Behind the Science
    Taylor Burns describes the life behind the science of Stephen Jay Gould
24  Perspective
    Gemma Thornton gives her perspective on the future of scientific funding
25  Away From the Bench
    Djuke Veldhuis descends into the watering holes of the ancient Garamantes
26  Arts and Reviews
    Swetha Suresh examines a unique juxtaposition of science and art that reveals the intricacies of movement
28  History
    Alex Jenkin looks at the past, present and future of agriculture
30  Technology
    Lindsey Nield sheds light on the uses of biomarkers
31  Book Reviews
32  Dr Derisive

Contents 1


Issue 18: Easter 2010
Editor: Gemma Thornton
Managing Editor: Cat Davies
Business Manager: Michael Derringer

Sub-Editors: Chris Adriaanse, Taylor Burns, Wing Ying Chow, Ian Fyfe, Philip Haycock, Alex Jenkin, Lindsey Nield, Imogen Ogilvie, Helen Parker, Jessica Robinson, Djuke Veldhuis
Second Editors: Antje Beyer, Wing Ying Chow, Jesse Dunietz, Ian Fyfe, Philip Haycock, Heather Hillenbrand, Warren Hochfeld, Alex Hyatt, Alex Jenkin, Lindsey Nield, Imogen Ogilvie, Raliza Stoyanova, Katherine Thomas, Annabel Whibley
News Editor: Katherine Thomas
News Team: Jelena Aleksic, Nicholas Gibbons, Imogen Ogilvie
Book Reviews: Taylor Burns, Wing Ying Chow, Heather Hillenbrand
Focus Editors: Anders Aufderhorst-Roberts, Rosie Powell-Tuck
Focus Team: David Joseph, Wendy Mak
Dr Derisive: Mike Kenning
Pictures Editor: Natalie Lawrence
Production Team: Ian Fyfe, Alex Hyatt, Alex Jenkin, Wendy Mak, Lindsey Nield, Helen Parker, Jessica Robinson
Cartoonist: Alex Hahn
Cover Image: Tristan Heintz
Distribution Manager: Katherine Thomas

ISSN 1748-6920

Varsity Publications Ltd
Old Examination Hall
Free School Lane
Cambridge, CB2 3RF
Tel: 01223 337575
www.varsity.co.uk
business@varsity.co.uk

BlueSci is published by Varsity Publications Ltd and printed by The Burlington Press. All copyright is the exclusive property of Varsity Publications Ltd. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, without the prior permission of the publisher.

Gemma Thornton Editor

AS WE approach the end of another academic year, older and (maybe) wiser, our thoughts are divided between reflecting on all that has gone before and anticipating what new and exciting experiences the future might hold. This idea is echoed throughout this issue of BlueSci. In FOCUS, we look back on how during the last fifty years lasers have become the tool of choice in so many areas and forward to how they might provide answers to future challenges. Technology describes the growing use of detectable biomarkers for monitoring disease, and we also consider the technological advances that

might make genome sequencing cheap enough for personalised medicine to become a reality. Finally, Perspectives turns a critical eye on the potential impact of the economic downturn on scientific research funding in the UK. Whether you are interested in the causes of high death tolls during earthquakes and the best strategies to reduce them, understanding the genetics behind hunger and feeling full, or how science might help us to finally understand the relationship between thought and consciousness, all is explored within BlueSci. So find yourself a nice sunny spot, relax and enjoy. GT

Cat Davies Managing Editor

THIS MONTH, I inadvertently kicked off a heated debate on student science media. Opinions fell into two broad camps: a smaller but louder student contingent decrying the lack of interest in the student science community by working journalists, and the opposition – those professional science writers who had ‘made it’ through expertise, tenacity and a bit of luck. The basic issue was whether the science media as an industry supports those trying to break in. There are many networks which we as aspiring science writers can get involved with to get ahead – Nature Network Cambridge hub, The Association of British Science Writers and its student group, the

Psci-Com forum and Stempra are all good places to start (after BlueSci, obviously). Joining up is the easy part; after that it’s up to us, our contacts and our ability to work for free. Perhaps it’s this cut-throat process which fuels the opinion that as students, our path to a paid post in science communications is riskier and more onerous than a nice comfortable graduate training scheme. Hey, big risks, big rewards. Not everyone will be nice as we carve out our names as freelancers, but in my experience, most will be helpful. Good luck and come and see us at BlueSci soon for an insight into the whole exciting process. CD Contact: managing editor@bluesci.co.uk

About us: Established in 2004 to provide a forum for science communication, each term BlueSci publishes the best science writing from across the University. As Cambridge’s longest-running science magazine, we combine high-quality writing with stunning images for a truly unique magazine. Guaranteed to get you thinking.
President: Ian Fyfe
Secretary: Alex Hyatt
Treasurer: Jessica Robinson
Publicity Officer: Shauna-Lee Chai

2 Editorial

Easter 2010


New Neurons From Old Jessica Robinson reviews the story behind this issue’s cover image


TRISTAN HEINTZ’S picture of a hippocampal neuron is featured on this issue’s cover. The neuron is the green central cell, and is shown surrounded by red astrocytes and actin filaments in magenta. Tristan is a second year PhD student at the Cambridge Centre for Brain Repair and is studying brain plasticity and neuronal regeneration with the aim of restoring functionality to damaged neurons. Permanent neuronal damage is a common phenomenon that causes many devastating conditions including paralysis, Parkinson’s disease, stroke and multiple sclerosis.

The brain we are born with is filled with neurons ready to make connections with one another in response to different environmental stimuli, a property known as plasticity. Up until the age of five, the brain develops rapidly and we learn most of the skills we need for life, so high levels of plasticity are vital. Later in life, plasticity reduces, but remains an active and essential process, and is believed to be key to how memories are made. When damage occurs to part of the brain, plasticity is thought to be essential for the compensatory mechanisms that occur. Because of the greater plasticity in their brains, young children have a much greater ability to compensate than adults. For example, if a young child is blinded, the occipital lobes – normally responsible for interpreting signals from the retina – take over functions of nearby regions of the brain, for example those controlling touch as when reading Braille. This is known as cross-modal plasticity. Certain cases may result in superior skills involving touch or other modalities, which is why we sometimes hear of blind people having a ‘sixth sense’.

Most tissues, such as skin and bone, are very good at repairing themselves after injury, but the cells that make up the brain and spinal cord are not. The reason for this is believed to be two-fold. Firstly, after an axon is cut, it only mounts a meek effort at regeneration. Secondly, astrocytes, which are cells surrounding the neurons, secrete proteoglycans that stop the spread of degeneration from the initial site of injury. However,

TRISTAN HEINTZ

Hippocampal neurons expressing either beta4 integrin in the cell body but not the axon (top) or alpha5 integrin in the dendritic spines (right)


these molecules create an inhibitory environment that turns off plasticity and prevents natural repair. The result is a glial scar, much like a scar you would get after a deep cut where the skin is unable to repair itself properly.

Tristan is attempting to boost the axon’s repair mechanisms. His work centres around proteins called integrins, which span the cell membrane and allow neuronal cells to interact with the extracellular matrix surrounding them. Initially it was thought that the only function of integrins was to cling on to the matrix and ensure the cell remained in one place. However, it has been found that they have important roles in signalling, as they make direct contact between the interior and exterior of a cell. The green staining in the image shows the location of alpha9 integrin, which Tristan has overexpressed in hippocampal neurons in culture. By increasing the amount of alpha9 integrin, a damaged neuron gains the ability to regenerate itself. The integrins appear to do this by grabbing hold of the glycoproteins in the extracellular matrix and pulling broken ends back together, reforming the connection.

The extracellular matrix also plays a vital role in the decrease in plasticity as we age. Over time the matrix becomes very dense and full of inhibitory proteoglycans, which form a net around the axon. Research has shown that by breaking down this net with enzymes, such as chondroitinase, new connections are possible and rats are able to relearn simple tasks after brain injury.

By studying in detail the proteins and support structure surrounding neurons, it may be possible to kick-start the natural regenerative potential of our bodies and repair broken neural pathways in the central nervous system. This will then open up new treatment options to the many people affected by central nervous system damage.

Jessica Robinson is a PhD student in the Department of Oncology

On The Cover 3


News

Morphogenesis on a tight leash

A TEAM of scientists led by researchers in the Department of Zoology have found two genome ‘hotspots’ responsible for the similarities between two distantly related species of butterfly. Heliconius erato and Heliconius melpomene are species found in Central and South America. Despite their distant relationship, both have evolved the same splashes of red and yellow on their black wings, signalling to predators that they are extremely unpalatable. Surprisingly, despite butterflies having thousands of genes, the same narrow ‘hotspots’ (areas of frequent change) in the genome are altered in both species to produce the same visual patterns on their wings. This finding is unexpected for two reasons. Firstly, the fact that identical genes have changed in both species points towards tighter constraints on possible mutations than expected. Secondly, the genes involved are not the expected candidates responsible for generating individual colours or for controlling their location on the wing. Instead, the genes in question are involved in general signalling pathways with other known functions and are not known to regulate wing patterning. This means that a tightly controlled and complex chain of molecular signals exists to result in a particular wing pattern. Dr Chris Jiggins from the team said: “We think it’s more likely to be some novel method of cellular signalling, which could be important in many other insect species.” JA

RICHARD BARTZ / MALGORZATA MILASZEWSKA

H. erato (left) and H. melpomene (right)

Currents of climate change

RESEARCHERS from the Department of Earth Sciences have found that millennial-scale and glacial-interglacial timescale changes in ocean circulation have different dynamics. Understanding the dynamics of oceanic currents is crucial for interpreting changes in the climate and may have implications for future climate predictions. The team investigated changes in the isotope ratio of neodymium (Nd) in coatings precipitated on fish teeth and on planktonic animals called foraminifera, allowing them to reconstruct past changes in water mass source and structure. Other isotope data, using protactinium-231 (Pa) and thorium-230 (Th), highlighted changes in the circulation strength of the oceans. Using these data, both taken from the Bermuda Rise in the western North Atlantic, the scientists discovered different dynamics of ocean circulation. This is the first time such a comparison has been made. During oceanic overturning, surface water is sent downwards, bringing up anoxic deep water, and killing many oxygen-breathing organisms. This process is associated with millennial-scale events and affects the upper ocean, unlike those associated with deglacial climate events, which affect the whole ocean. The shifting of oceanic currents is inextricably linked to climatic changes, dynamics that must be better understood if we are to make accurate predictions of future change. IO

MALENE THYSSEN

AID-ing the reversal of cell differentiation

RESEARCH at the Babraham Institute has revealed new insights into the process by which cells develop specific character and type through chemical transformations. Cell fate is determined by both the actual genetic code of the cell and external factors such as chemical changes to the DNA (epigenetic marking) and associated proteins. One type of epigenetic marking is the addition (methylation) or removal (demethylation) of methyl groups from bases, which results in changes to the level of expression of that gene. It is now evident that one particular protein, AID, is crucial for complete cellular reprogramming and, through demethylation, is able to completely remove the ‘identity tags’ of a specific cell. It was already known that the embryo removed inherited methylation patterns early in development before forming new patterns of its own, but how and to what extent was a mystery. In collaboration with UCLA, Cambridge researchers have used next-generation sequencing technology to pinpoint exactly how this process works. They recorded huge drops in methylation levels, from 80 per cent to 7 per cent, when the cell reset took place, and showed that AID seems to be required for the resetting process to occur so thoroughly. Problems in the regulation of this cell flexibility may be involved in diseases such as cancer. The discovery of how this protein regulates demethylation could have implications for regenerative medicine and stem cell research. NG

Check out www.bluesci.co.uk, or BlueSci on Twitter (http://twitter.com/BlueSci) for regular science news updates

4 News


Sharks possess a skeleton that is made almost entirely of cartilage. Clearing the tissue and staining the remaining cartilage reveals forms of increasingly delicate intricacy. Shark Fin Blues by Andrew Gillis Winner of the Graduate School of Life Sciences Image Competition 2010

Pavilion 5


In Search of the $1000 Genome

ALEX HAHN

Elizabeth Batty outlines the race for cheaper genome sequencing

IN 2001, before the first human genome sequence was published, hundreds of scientists were already meeting to discuss the future of genomic research. Included in their vision was the dream of sequencing a human genome for $1,000 or less. This idea, which seemed almost fictional at the time, has in less than a decade become a question of when, not if, we can achieve it.

Nature asked dozens of leading scientists in 2007 what they would do if it became possible to sequence a genome for $1,000. Their answers reveal the diversity of research that will be revolutionised by inexpensive genome sequencing. From tracing human origins and understanding dog behaviour, to generating better strains of rice, the benefits are wide-ranging. As well as the implications for research, a low-cost genome sequence brings personal genome sequencing into the price range of medical testing. This would lead to personalised medicine, where cancer treatments could be targeted to specific gene changes, and drug prescriptions based on an

Natalie Lawrence and equinox graphics

Reducing the cost of genome sequencing will revolutionise biomedical research

6 In Search of the $1000 Genome

individual’s genotype. As Francis Collins, a leader of the original Human Genome Sequencing project, puts it: “The real question is, what wouldn’t we do?”

The human genome contains all the hereditary information for an individual and is stored on 23 pairs of chromosomes. Sequencing the genome refers to the process of determining the exact order of the three billion chemical building blocks (called bases) that make up the DNA that carries this information. An entire human genome cannot be sequenced at once, as only a short stretch can be read at a time, so the genome must first be split into fragments which are sequenced and then finally reassembled.

One of the earliest methods of DNA sequencing was the chain termination sequencing method, developed by Fred Sanger and colleagues in 1977. This method was costly, at around $10 per base pair in 1985, but the development of automated sequencing systems and advancements in technology reduced the price to $1 per base by 1995, and allowed sequencing of up to 100,000 bases per day. The cost dropped further to $0.10 per base in 1998 with the development of the ABI Prism sequencer, which made it possible to undertake larger-scale sequencing projects. This automation of the Sanger sequencing technique was used to map the first human genome sequences. While these technologies were further refined, sequencing a whole genome required millions of dollars and months to complete and was still based on techniques developed in the seventies. It was clear that new technologies would be needed if the $1,000 genome was to be achieved.

The current generation of sequencing technologies (known as next-generation or second-generation sequencers) achieve their improvements in speed and cost by performing many sequencing reactions in



Oxford nanopore technologies

parallel. The method used is known as sequencing-by-synthesis, where the DNA sequence is read off as the molecule is created. These systems must successfully combine complicated biochemistry with high-resolution imaging and require sufficient computing capacity to cope with the huge quantities of data produced. The cost and speed of sequence generation must be balanced with the accuracy and length of the sequences achieved. A variety of approaches have been developed to meet these technical challenges.

The first next-generation sequencing technology to be commercially available was the Roche/454 FLX sequencer, which relies on a form of sequencing-by-synthesis known as pyrosequencing. In this system, the sample of DNA is fragmented and stuck to tiny beads that are placed within water droplets in an oil and water emulsion. Each individual drop acts as a minute, self-contained reactor, and the single fragment of DNA is copied up to a million times on the surface of the beads. Hundreds of thousands of beads are placed in picolitre-scale wells on a glass slide (one picolitre is one trillionth of a litre), allowing the addition of each base to the DNA to be monitored by the emission of detectable light. The whole genome sequence of James Watson was produced in two months using an FLX sequencer, at a cost of less than $1 million. In contrast, the whole genome sequence of J. Craig Venter, completed only a few months previously using traditional Sanger sequencing, was estimated to cost $100 million. The FLX platform has also been used in the sequencing of ancient DNA, including the extinct woolly mammoth and Neanderthal genomes.

Sequencing technology first developed here in Cambridge, at the Department of Chemistry, forms the basis of the Illumina Genome Analyzer, a leading competitor of the Roche/454 sequencer. Instead of sticking the DNA to beads, the Illumina technology sticks millions of DNA fragments to the surface of a glass slide.
Fluorescently-labelled bases are added to the DNA fragments, and a CCD (charge-coupled device) camera takes an image of the whole slide, allowing millions of sequences to be read off at once. The Illumina technology was used in the recent sequencing of the giant panda genome and the first cancer genome sequence. Illumina have recently launched a service offering personal genome sequencing to consumers at a cost of $48,000 each – an improvement over the $1 million sequence of the Watson genome, but still a long way from the $1,000 target.

It is the upcoming third-generation sequencing technologies that seem most likely to achieve the $1,000 genome target. The genome sequencing company Complete Genomics published three human genome sequences in early 2010 for less than $4,400 per individual, but there is a cost in accuracy. The technologies have an error rate of one wrong

base in every 100,000 bases, which adds up to nearly 60,000 errors in the complete genome. This makes it potentially difficult to separate any important disease-causing mutations from the sequencing errors.

Another player in the third-generation sequencing market is Pacific Biosciences, whose SMRT (Single Molecule Real Time) technology can monitor DNA synthesis as it happens. This monitoring gives an increase in speed of up to four orders of magnitude compared to second-generation technologies. It also allows for much longer fragments of DNA to be sequenced than the second-generation systems, which are limited by the need to stop the reaction each time the sequence is read.

The most promising of the third-generation technologies is nanopore sequencing. Here the sequence is read as each DNA base passes through a protein pore by measuring the change in ionic current caused by the different bases partially blocking the pore. This is a potentially revolutionary technology, as it requires none of the expensive enzymes and reagents or the precision cameras needed for sequencing-by-synthesis. While the potential improvements in cost and speed are huge, a nanopore sequencer has yet to be brought to market.

Whoever wins the race to the $1,000 genome, the rewards are great. The Archon X-PRIZE for genomics is offering $10 million to the first team to sequence 100 human genomes in under ten days for under $10,000 each, and eight teams have already signed up. Beyond the financial rewards, whoever produces the $1,000 genome will have achieved a goal that seemed like science fiction only a few years ago, and which could bring about profound changes in all areas of the biological sciences, affecting the lives of people around the world.

In nanopore sequencing, changes in ionic current will allow bases to be read in real time

Elizabeth Batty is a PhD student in the Department of Pathology

In Search of the $1000 Genome 7


Not All in the Mind?

Ryan Breslin discusses the mechanism of placebo pain relief and whether or not we should exploit it

ALEX HAHN

The existence of the placebo effect is well established. The necessity of double blind, randomised trials with control groups shows how seriously the scientific community takes the influence of placebo on study results. However, the placebo effect is still poorly understood. Where does it come from? Why does it exist? Can and should we use it? Perhaps the last of these questions is the most controversial in current public health policy. The use of homeopathy by the NHS continues to fuel intense debate. Controlled studies suggest that it is the placebo effect that causes patients to see results from these alternative therapies. Is it unethical to frame these treatments as ‘medicine’? Furthermore, are scientifically developed drugs being turned down by patients when these placebos are administered? Is this safe? Despite these concerns, the potential benefits of placebo therapies are thought-provoking. The effect has been reported to cause remission in a wide range of conditions: cancer, Parkinson’s, depression, and inflammatory disorders, to name a few. Probably the most commonly documented role of placebos

Hallie White

Choose your medicine: but which one is real and which a placebo?

8 Not All in the Mind?

is in providing effective pain relief, exemplified by accounts of soldiers injured in battle who feel no pain until they are in a place of safety. If physiological mechanisms underlie the observed placebo effect in pain relief, why shouldn’t we try to understand them and use them to our advantage?

The argument that the placebo effect ‘is all in the mind’ is essentially true, but its apparent psychological activation does not imply the lack of a genuine mechanism. In the example of pain, it has been shown that descending neural systems from the midbrain are capable of modulating perception of pain by interacting with neuronal synapses in the spinal cord. These systems arise mainly from a part of the midbrain called the periaqueductal grey region (PAG) and a second area in the brain called the raphé nuclei. Neurons extend from these areas down into the spinal cord to interact with the pain-carrying nerves arriving from the rest of the body. When the descending neurons in the spinal cord are activated they cause the release of a set of messenger proteins called enkephalins. Enkephalins are a type of opioid, naturally occurring pain relievers produced by the body’s nervous system. Each pain-sensing nerve has opioid receptors and when enkephalins bind to these receptors they inhibit the nerve’s ability to communicate its message any further. As a result, the brain can limit the number of pain signals that get through to the central nervous system, modulating our perception of pain. Pain-relieving drugs such as morphine work by targeting these same opioid receptors on pain nerves.

Essentially, the brain is able to naturally relieve pain and the placebo effect is the psychological activation of this ability. The effect is surprisingly strong. It has been shown that direct electrical stimulation of the PAG region is capable of inducing sufficient analgesia to enable abdominal surgery without anaesthetic.


equinox graphics

Synapses in the spinal cord are modulated by naturally occurring pain relievers


Interestingly, this effect is lost following intravenous injection of the drug naloxone, which binds to the opioid receptors and blocks them. Naloxone is normally used to counter the effects of overdoses of opiates such as morphine and heroin, which could lead to life-threatening inhibition of the central nervous system and respiratory system.

A recent study illustrates this principle in action. Volunteers were injected with capsaicin in their hands and feet, just under the skin. Capsaicin (the active ingredient in chilli peppers) is a potent stimulator of heat and pain receptors. When a non-medicated topical cream was applied to the site of injection, with the volunteers being told it was pain-relieving, they reported significantly less pain. Importantly, the analgesic effect was specific to the site at which the cream was applied and volunteers did not report any soothing effect at other untreated sites where capsaicin was also injected. The intravenous injection of naloxone again removed the pain-relieving effect. This suggests that the placebo effect is not only significant but also site-specific.

Studies in rats have shown that direct electrical stimulation of different parts of the PAG area induces site-specific analgesia. This implies a neuronal mechanism within the brain that is able to modulate pain perception in particular parts of the body, and contrary to previous belief, a general release of opioids throughout the central nervous system does not underlie the effect. Naloxone’s blockage of the placebo effect on pain strongly suggests this is the same system being activated by higher centres in the brain when analgesia is expected. No other systems capable of achieving the same effect have been found.

It seems that a complex system causes the placebo effect, including neurological circuitry within the brain that inhibits pain at specific sites upon belief that analgesia will occur. Perhaps medical practitioners can take advantage of the placebo effect by initiating its activity, allowing them to alleviate pain in a site-specific manner. Given that unwanted and potentially dangerous side effects are potential consequences of many pain-relieving drugs, this would certainly be an attractive option. Moreover, it seems possible to activate it entirely psychologically and avoid off-target stimulation or inhibition of other nervous system components (the cause of side effects in conventional therapies).

Clinical use of the placebo effect is not without problems. The effect observed varies greatly between individuals, and the ethical implications of administering a ‘drug’ that contains no pharmacologically active ingredients are numerous. Also, as the activation of the placebo effect is psychological, its effectiveness might be reduced if the population being treated was made aware of the mechanism. More importantly, would a greater awareness of the placebo effect amongst the general population undermine the efficacy of current drugs, since some of their success could be undocumented results of this phenomenon?

Whatever one’s views on the deliberate use of the placebo effect in pain medicine, a physical, neurological basis for it does exist. It may be only a matter of time before we also uncover the mechanisms behind the placebo effect in other conditions. As our understanding of the placebo effect is constantly improving, perhaps we should not be too quick to dismiss it as a valid potential treatment.

Ryan Breslin is a second year preclinical medical student

Capsaicin, found in chillies, is a potent stimulator of pain receptors

MADE WITH MOLECULES

Not All in the Mind? 9


ALEX HAHN

The High Stakes of Earthquakes

Owen Weller reports on the devastation of seismic activity and the work invested in preparing for the fallout

IT IS PREDICTED that in our lifetime we will observe the deadliest earthquake on record, with potentially over one million fatalities. Over the last decade alone there have been many devastating earthquakes. The 2003 Bam earthquake in Iran resulted in 30,000 deaths. The 2004 Sumatra earthquake and resultant tsunami killed at least 230,000 people. In 2008, 88,000 people died as a result of the Sichuan earthquake in China, and the recent earthquake in Haiti caused 230,000 fatalities. With several urban populations such as Tehran and Istanbul rapidly expanding on major active fault systems, these statistics are forecast to get worse. However, with improved understanding of earthquake behaviour, this need not be the case.

The study of earthquakes is an increasingly fast-moving field thanks to the development of space-based technologies, such as Global Positioning System and Synthetic Aperture Radar Interferometry, which allow millimetre-scale surface deformations to be observed. This has greatly improved geologists’ understanding of the earthquake

EQUINOX GRAPHICS

US FEDERAL GOVERNMENT

The earthquake in Haiti demonstrated the devastation that can be caused


cycle, the repeated sequence of crustal strain accumulation and release. The two processes act on very different time scales. Strain accumulates slowly due to the differential motion of crustal segments across a locked fault. This causes a long-term build-up of elastic energy until the fault can no longer take the strain. The elastic energy is then suddenly and catastrophically released, rupturing the fault and generating a variety of seismic waves, which we experience as an earthquake.

These processes can be monitored today in real time, but past earthquakes can also be analysed by excavating trenches and dating disrupted horizons, revealing how often earthquakes have occurred and how far fault lines have moved over time. Bringing all this information together, geologists can learn how and when landscapes have evolved, and understand more about active earthquake zones. Crucially, they can use such information to comment on both the size and the likelihood of future earthquakes in a particular area.

Broadly speaking, earthquakes only occur in the upper crust, where rocks are cool enough to behave in a brittle manner. At greater depths, rocks are closer to their melting temperature and deform gradually by flow in the solid state. This puts a limit on the depth of faults and, since earthquake magnitude is proportional to fault area, the lateral extent of a fault indicates a maximum magnitude. Predictions about future earthquakes can be made by observing for how long, and how rapidly, a fault has been accumulating strain. Combined with knowledge of the characteristic maximum strain a fault can accommodate, important predictions can be made. However, these predictions are at best on the timescale of decades, and accurate earthquake prediction remains elusive. The focus instead




needs to be on preparing for earthquakes, using knowledge of when, where and how large the earthquake may be, in order to direct resources appropriately. The question is therefore what causes the majority of deaths from earthquakes, and what can we do to prevent this?

When earthquakes occur under water, the main hazard comes from the generation of a tsunami. Earthquakes known as megathrusts occur at plate interfaces where one plate moves below another. These produce the largest magnitude earthquakes in the world because of their huge fault area. If the megathrust displaces the sea floor, then the water, being effectively incompressible, is also displaced, with a wavelength comparable to the fault width. Crucially, if this wavelength is much greater than the sea depth then a tsunami is generated, since the resulting wave behaves in a non-dispersive manner, travelling at a speed that increases with the square root of the water depth. Once this wave reaches shallow depths, it slows and grows in size, forming a large-amplitude wave such as that witnessed around Sumatra on 26 December 2004.

The catastrophic devastation wreaked by this earthquake led to calls for the widespread (and costly) installation of early warning systems. However, many megathrust earthquakes do not result in tsunamis; the rupture dies out below the sea floor, typically because sediment piles and clays act as barriers, stopping rupture propagation. An early warning system could therefore lead to a ‘cry wolf’ situation. Such systems clearly have some use, but they are not a silver bullet. Perhaps more important is to educate people to recognise the hallmarks of an impending tsunami, such as a large earthquake followed by a receding water line, and to provide adequate escape routes. Several stories of survival from 2004 attest to this: islands with a folklore tradition of tsunami identification experienced far fewer casualties.
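The shallow-water relationship can be sketched in a few lines. This is an illustrative Python snippet, not from the article; it uses the standard shallow-water wave formula c = √(gh), and the depths chosen are hypothetical round numbers:

```python
import math

def tsunami_speed(depth_m, g=9.81):
    """Shallow-water wave speed c = sqrt(g * h), valid when the
    wavelength is much greater than the water depth."""
    return math.sqrt(g * depth_m)

# Over an illustrative open-ocean depth of 4000 m the wave races along:
c_ocean = tsunami_speed(4000)            # ~198 m/s, roughly 700 km/h
hours_per_1000km = 1e6 / c_ocean / 3600  # ~1.4 hours to cross 1000 km

# Approaching a 10 m deep shelf the same wave slows dramatically:
c_shelf = tsunami_speed(10)              # ~10 m/s
```

Because the wave carries its energy with it, this slowing is what forces the amplitude up as the tsunami approaches the shore.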
On land, the biggest killer from earthquakes is man-made: the collapse of buildings and the subsequent suffocation or crushing. Earthquakes produce a variety of waves; those containing shear motion are the most destructive for buildings. The damage caused to buildings depends strongly on their proximity to the epicentre, as seismic energy is inversely proportional to distance squared. A building’s response to shaking depends on its fundamental period of vibration, which is related in part to its stiffness. Seismic waves with periods similar to that of the building will cause resonance. A large portion of earthquake energy is ‘short-period’, which resonates with stiffer structures; these therefore need to be made either more flexible or stronger.

The technology to do this has already been developed and can easily be applied to new builds. It includes foundations that can move independently of the building, thereby reducing the destructive lateral shaking motions, and structural reinforcement. Far more problematic and expensive is retro-fitting older buildings to make them safe. Again and again we observe high death tolls in earthquake-hit places where construction is inadequate: the 2003 earthquake in Bam, where local buildings were made from mud-bricks; the 2008 earthquake in Sichuan, where multiple schools collapsed due to poor workmanship; and the 2010 earthquake in Haiti, where the incredible poverty of the area and the use of poor building materials exacerbated the devastation. While the death count is still being confirmed, the Haiti earthquake is thought to be one of the deadliest on record. Seismically active regions where building codes are more stringent, such as Japan and California, would not have suffered anywhere near this level of devastation in an earthquake of the same size.

Ultimately, it is not earthquakes that kill, but poorly constructed buildings and a lack of education. Both are rectifiable. Geologists can do their part in deducing where earthquakes are most likely to occur, but in the end it is the responsibility of individual governments to improve education, enforce appropriate building regulations and support retro-fitting projects. However, with earthquake recurrence intervals being far greater than political terms, this will require a seismic shift of its own in the mindset of politicians.
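The resonance effect described above can be illustrated with the textbook amplification factor for a damped oscillator driven through its base. This is a hedged Python sketch, not the analysis engineers actually perform; the five per cent damping ratio is an assumed, typical value for buildings:

```python
import math

def amplification(freq_ratio, damping=0.05):
    """Steady-state amplification of a damped harmonic oscillator,
    where freq_ratio = (shaking frequency) / (natural frequency)."""
    r = freq_ratio
    return 1.0 / math.sqrt((1 - r**2)**2 + (2 * damping * r)**2)

# Shaking at the building's own natural period is amplified strongly:
near_resonance = amplification(1.0)   # = 1/(2 * 0.05) = 10x
# Shaking far from the natural period is actually attenuated:
off_resonance = amplification(3.0)    # well below 1x
```

The huge difference between these two numbers is why matching (or deliberately mismatching) a building's stiffness to the dominant seismic periods matters so much.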

Inadequate construction causes the collapse of buildings and deaths that could be avoided

Owen Weller is a fourth-year Natural Sciences Tripos student in Earth Sciences



Never-Ending Hunger Kate McAllister looks into Prader-Willi Syndrome and the insatiable desire to eat

Faced with the prospect of making ourselves bikini- (or Speedos-) ready for the upcoming summer, armed with delusions of self-belief and willpower, many of us will join a university sports club or start a new healthy eating plan. Maintaining a healthy weight is a case of eating less and exercising more. Simple, right?

We are all aware that obesity in the western world is at epidemic proportions; the World Health Organisation predicts that by 2015, 700 million adults will be obese. Moreover, obesity has well-established associations with type-2 diabetes, reduced life expectancy and cardiovascular complications. Overcoming this condition is a major target for 21st century society. So, cut out

Young girl from the 17th century, believed to have suffered from Prader-Willi syndrome (painting by Juan Carreño de Miranda)


the Big Macs, go for a run… how hard can it be? But keeping weight off can be a constant battle for some people, as anyone with Prader-Willi syndrome will tell you.

The syndrome is a relatively rare neurodevelopmental disorder, caused by loss of gene expression from, or deletion of, a region of genes on chromosome 15. The exact genes involved are still not known. Sufferers have learning disabilities and, most strikingly, severe obesity. From birth, babies with Prader-Willi develop more slowly and are extremely floppy (hypotonic); these are the key symptoms of the disorder in newborns. As infants, affected children show very little interest in food, and many have to be tube-fed. In later childhood, this disinterest in dinner time evaporates and is replaced by over-eating, an obsession with food and the desire to eat almost all of the time. People with Prader-Willi need around three times the calorie intake that healthy people need before they feel remotely full, and this feeling of satiety often disappears within minutes. Some sufferers will resort to eating raw meat, frozen foods, pet food and, in extreme cases, even items such as coal when food is not available. Losing weight is made trickier still by the fact that the syndrome is associated with a lack of muscle tone, and some sufferers also have problems with coordination, making exercise more difficult.

Prader-Willi syndrome is not solely a disorder of obesity; it has other striking characteristics that make it unique. Professor Tony Holland, the Health Foundation Chair in Learning Disabilities and a consultant psychiatrist for Cambridgeshire and Peterborough NHS Trust, is a leading researcher in this field. He explains: “Prader-Willi syndrome shows a very complex array of biological, psychological, social and clinical characteristics, and these often vary between




individuals. For example, Prader-Willi is also associated with short stature, an unusually high pain threshold and, in rare cases, vomiting.” Through his work, Tony Holland frequently comes face-to-face with the tremendous stigma associated with obesity, which causes Prader-Willi sufferers to face unfair accusations of laziness and greed rather than recognition that they face a constant, life-long struggle to refrain from eating.

Treatments that have proved successful in other types of obesity, such as pharmacological agents and weight-loss surgery, have either proven unsuccessful or have led to severe complications, even death. Currently, the only effective management strategy is life-long rationing of food, which severely limits the independence of sufferers. Parents living with children or adults who have Prader-Willi often resort to locking the fridge or putting alarms on food cupboards. The feeling of incessant hunger can lead to frequent tantrums when food is denied, increasing the strain within families. Additionally, even family members can find it hard at times to accept that this is not simply a lack of willpower, and the constant requirement for watchfulness and restriction can be wearing. For some, the only option is supervised living in specialised care homes, which may be better equipped to deal with the symptoms and associated behaviour of Prader-Willi. For many people with the syndrome, the establishment of specialised care homes has been significant in helping them to help themselves in the constant battle against obesity. The heart-rending reality of this condition was illustrated in a 2007 documentary on Channel 4, Can’t Stop Eating, which followed the day-to-day lives of people with Prader-Willi living in Gretton Homes, a specialised residence for sufferers. But Prader-Willi syndrome is not the only genetic disorder of obesity.
Professor Stephen O’Rahilly and Dr Sadaf Farooqi at the Metabolic Research Laboratories at Addenbrooke’s Hospital have been involved in huge advances in understanding the genetic control of food intake, particularly in the discovery of obesity disorders caused by changes affecting a single gene. One such disorder is congenital leptin deficiency, which leads to over-eating and under-developed gonads similar to those seen in Prader-Willi. This disorder can be dramatically improved using leptin replacement therapy. However, so far none of the genes identified in other disorders of obesity are candidates for direct involvement in Prader-Willi. Nevertheless, it is thought that looking carefully at advances in other genetic models of obesity could shed light on the abnormal mechanisms in the syndrome. Some mechanisms that have been identified are connected to food-intake regulation, which is

governed by a complicated interplay of gut and brain signalling, involving the hypothalamus and areas associated with emotion and motivation. Supporting this, brain-imaging studies have shown abnormalities in various regions of the brain in Prader-Willi patients. Interestingly, these abnormalities are most pronounced after eating, supporting the view that the problem in Prader-Willi is not so much a voracious hunger as failing mechanisms of satiety.

This year marks the 7th international conference of the Prader-Willi Syndrome Organisation, in Taiwan, an event held every three years, and Tony Holland is optimistic: “We are learning more about Prader-Willi syndrome every day. We are looking for ways to help people with Prader-Willi and are undergoing pilot studies into treatments that could potentially help to overcome the over-eating. The picture is complex but I think it provides us with an amazing opportunity to understand the physiology behind obesity and food intake.”

Thankfully, with increased information about Prader-Willi syndrome and access to genetic testing, parents are more aware of the disorder and can prepare themselves for its implications. Growth hormone therapy, used to combat short stature, is becoming increasingly common in people with Prader-Willi and has helped some to avoid life-threatening obesity to some extent. However, the desire to be satisfied remains a chronic problem, and one that still puzzles scientists and clinicians.

Parents often need to lock their fridges to prevent Prader-Willi children from overeating

Kate McAllister is a research assistant in the Department of Psychiatry



I Think, Therefore I Am? Rupak Doshi investigates how scientists hope to find an answer to an ancient philosophical debate

The Thinker by Auguste Rodin (photo: Vincent Montibus, www.unnuageenbouteille.fr)


Distinguishing thought from consciousness is a perfect example of a chicken-and-egg scenario. Take a moment to recall an occasion when you were conscious without a single thought in your mind, or when you truly thought about something while deep in an unconscious, sleep-like state. You probably will not be able to. Thought and consciousness exist in many forms and, in their widest context, seem to be inseparable. So does our ability to think stem from being conscious, or are we conscious because of our ongoing thoughts?

There has been a sudden burst of interest among neuroscientists in foraying into unexplored territories of human behaviour. Phenomena such as thoughts, feelings, emotions and consciousness have long been seen as complex functions, central to ‘being human’. These topics were widely discussed amongst philosophers until Koch and Crick (of the famed Watson-Crick duo) argued in the early 1990s that ‘consciousness’ needed a definitive scientific explanation. But what exactly is it that needs explaining?

In his landmark book The Conscious Mind, David Chalmers distinguishes between two types of consciousness: psychological and phenomenal. The first involves brain functions such as memory, learning and processing, whilst the other is responsible for the unique kind of ‘feelings’ or ‘experiences’ associated with some sensory responses. Brain functions like awareness and attention bridge the two categories. For example, you need to be both attentive and aware in order to learn in a classroom (psychological),

or simply to enjoy the brightness of the sunshine outside (phenomenal). While reading a book, our mind often wanders and, although we keep reading, we don’t exactly comprehend that particular paragraph’s content. This is a classic case of inattentive consciousness: although we are processing the conscious experience of reading, our thoughts seem to lead our attention away. Only once we become aware that our mind has wandered do our thoughts return us to the book.

But how do scientists define thoughts? Thought is broadly defined as anything involving creativity, novelty, imagination, mind-wandering, daydreaming, decision-making and everyday planning. People experience at least two broad types of thought: spontaneous thought, such as an unprovoked desire to read a book, and directed thought, such as deciding which book to read. It has been proposed that thought begins with an external sensing of the environment that feeds back to the brain and in turn results in internal stimulation. This is possibly why thoughts can stimulate various actions and behaviours. The internal feedback is also often used to explain the ‘inner world’ experience – the commonplace feeling that our subjective filtering processes are as dynamic as the world around us.

Since thoughts constitute the core of both awareness and attention, there is an obvious connection between the various types of thought and both types of consciousness. But does thought or consciousness come first? Whilst it may initially seem easy to imagine scenarios that segregate thought from consciousness, often this is impossible. For example, as we gradually relax from alertness towards sleep, our thoughts decrease and degrade, seemingly in response to the changing level of consciousness. However, if we are stressed and preoccupied with thoughts, these can arouse consciousness, interfering with our ability to sleep,



The brain is the seat of both consciousness and thought, but which comes first?


resulting in insomnia. These contradictions capture perfectly the chicken-and-egg situation.

In some cases, it appears that attentive thoughts can modify consciousness. This leads to a disparity between what our senses tell our brains and what our brains tell ‘us’. Take magic tricks: although an object’s image may be fully formed in the visual cortex of our brain, we don’t ‘see’ it, thanks to the magician’s ability to distract us. In other words, the audience is forced to direct their attentive thoughts away from that particular object. The reverse also occurs: we can find ourselves consciously doing something different from what we were thinking of doing. Since thoughts drive most, if not all, voluntary actions, knowing what drives thought will yield useful information on free will, and whether humans have it.

Regardless of which comes first, it seems clear that thought and consciousness are tightly intertwined. Or are they? Recent findings in cognitive neuroscience suggest that the two phenomena might not be as interconnected as we think. One study found that intelligent decision-making was most effective if a short period of distraction was introduced, suggesting that unconscious processing during the distraction period helps the human goal-directed thought process. Interestingly, in a recent experiment in which subjects had to make a decision and act accordingly, the volunteers unconsciously processed their decisions in their brains up to ten seconds before they consciously knew about their own decision. Such ‘paranormal’ findings have led to a recent surge in consciousness research.

Understanding the interconnection between thought and consciousness is not only important from a scientific point of view but might also have practical benefits. For example, some forms of thought seem to have clinically beneficial effects on

the body. These include the placebo effect, where it is the belief in the medicine, rather than the drug itself, that improves a patient’s health, as well as enhanced-awareness meditation, which has been shown to raise levels of antibodies in the blood. Conversely, decreasing thoughts and concentration via a technique called transcendental meditation has been found to reduce stress in college students.

The question remains: who or what are the ‘us’ and ‘we’ with whom our brains seem to be communicating? This is analogous to asking what causes consciousness, if it is causable in the first place. Who or what lies, hierarchically, above the level of common brain functions poses a fascinating but difficult question. Research in this area could shed light on a range of clinical phenomena, all of which involve a poorly understood connection between thought, consciousness and the body. Ultimately, the answer will not only settle centuries-old disputes but may also help decipher one of the greatest puzzles of human existence.

Rupak Doshi is a PhD student in the Department of Pharmacology




FOCUS

Lasers: Guiding Us Into the Future BlueSci looks at how this physicists’ toy has become the tool of choice in so many areas of life and how it may play a vital role in solving the impending energy crisis

Dr Evil: Here’s my second plan. Back in the 60s, I had a weather changing machine that was, in essence, a sophisticated heat beam which we called a ‘laser’. Using these ‘lasers’, we punch a hole in the protective layer around the Earth, which we scientists call the ‘Ozone Layer’. Slowly but surely, ultraviolet rays would pour in, increasing the risk of skin cancer. That is unless the world pays us a hefty ransom. Number Two: ……That also already has happened. Austin Powers: International Man of Mystery

2010 marks 50 years since the first laser was switched on. At that time, the laser was the culmination of decades of work. More than that, it represented humanity’s ability to harness light in order to use it as a tool, and became the cornerstone of a new field of physics: photonics. Initially, however, it was an idea without an application, perceived by some outside the scientific community as little more than a fancy gimmick. It is not surprising, then, that Dr Evil, the villain in the Austin Powers movies, had little understanding of what the laser means to us today. Since the 1960s, the laser has progressed in leaps and bounds from scientists’ toy to a well-known and widely available device that is cheap to manufacture and used in a vast range of applications.


Confocal imaging allows us to study cell dynamics (left, middle) whilst powerful laser systems are used in the lab and by the military (right)


We see lasers all around us, during work presentations or in our home CD drives. On TV we watch scantily clad secret agents somersault over them to break into top-secret facilities. And we are probably aware that they have ground-breaking applications in fields such as medicine. But do we know how they actually work? Certainly we know that when we press a button a beam of light shoots out, but do we know where it comes from, or what makes it different from, say, a really powerful torch? When it comes to lasers, the problem is not that we do not understand them, but that we fail to appreciate why they are even special. So what are lasers, why are they special and how do we already take advantage of their special qualities? And what part will lasers play in our future?

Our story begins with a seminal paper by Albert Einstein, published in 1917. In it he proposed the process of stimulated emission of radiation – the principle behind ‘light amplification by stimulated emission of radiation’, from which the acronym ‘laser’ derives. Like many of Einstein’s ideas, the theory was brilliant, but it was not immediately clear how it could be applied in practice. Indeed, it was over a decade before the first part of the puzzle, the concept of stimulated emission, was demonstrated by the German physicist Rudolf Ladenburg.

The idea of stimulated emission was inspired by quantum mechanics, an area of science that dominated physics for a large part of the 20th century. It relies on the idea that the electrons orbiting the nucleus of an atom can only take specific values of energy, known as energy states. The lowest energy level is called the ground state, and higher energy states are called excited states.
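The gap between two energy levels fixes the colour of the emitted light, via the relation λ = hc/ΔE. A small illustrative Python calculation (the 1.96 eV gap is an assumed value, roughly that of the red transition in a helium-neon laser, and is not taken from the article):

```python
# Wavelength of light emitted when an electron drops between two
# energy levels: lambda = h*c / dE.
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def emission_wavelength_nm(energy_gap_ev):
    """Wavelength (in nanometres) for a given energy gap in eV."""
    return H * C / (energy_gap_ev * EV) * 1e9

# An illustrative ~1.96 eV gap gives red light:
red_nm = emission_wavelength_nm(1.96)   # ~633 nm
```

Because every emitting electron crosses the same gap, every photon comes out at the same wavelength: this is why laser light is so pure in colour.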


To get an electron from the ground state to an excited state, we need to add energy to the system. But the excited states are not stable, and when the electron falls back to its ground state, it emits the excess energy in the form of light. This process, called ‘spontaneous emission’, actually happens all the time, but the energies involved are so small that we don’t notice.

To take advantage of this process, we must first raise most of the electrons from the ground state into the excited state. This is called population inversion and is generally achieved by pumping energy into the system. Once the population is inverted, a passing photon of the right energy stimulates an excited electron to drop back to the ground state and emit a second, identical photon; these photons stimulate further emission in turn, releasing a huge amount of light in the process. However, this still isn’t enough – the light needs to be amplified even further to produce a useful laser. This is achieved by placing the light source in a cavity between two mirrors, which send the light back and forth through the excited medium so that it is amplified on each pass. Just as when you stand between two mirrors in a hair salon and see an image of yourself repeated to infinity, the laser light bounces backwards and forwards in the cavity over and over again. One mirror is partially reflective and partially transparent, so that on each pass a fraction of the light escapes, producing a long continuous beam. Combining the optical cavity with stimulated emission gives us the basis upon which all lasers operate.

This method of producing light means that lasers have some very handy properties. Firstly, because laser light is produced by electrons moving between specific energy levels, it is all of the same wavelength. This is exploited by researchers using confocal microscopes, where lasers producing different wavelengths are used to light up fluorescent tags attached to specific proteins in cells. As each tag can be a different colour and the lasers can precisely illuminate minute areas for fractions of a second, multiple tagged proteins can be observed simultaneously over long periods in living cells without damaging them. This allows scientists to obtain spatial and temporal information about how protein behaviour changes under different physiological conditions. Secondly, because all the light from the laser is created together and emitted from the same cavity, it takes the form of a very narrow beam that is maintained all along its length, enabling lasers to be pointed accurately and to focus their energy on a specific spot.

These properties are exploited in a myriad of applications, from telecommunications to surgery, as well as in potentially controversial defence projects. Lasers are used in optical fibres, which transmit huge quantities of digital data every day. The low spreading of the beam ensures that the signal is not lost or degraded over long distances and also means that multiple signals can be sent along a single fibre without interfering with each other. This, along with the invention of the diode laser in 1962, brought about a revolution in telecommunications. Diode lasers are semiconductor components that operate as lasers but on a much smaller scale, meaning they can be easily integrated into circuitry and mass produced. They are used not just in telecommunications but also in simpler devices such as laser pointers. Optical fibres are also used in medicine as


The Airborne Laser is the next generation development of the Star Wars plan of the 1980s

endoscopes, allowing surgeons to see inside the body. Lasers can also be used to burn away damaged tissue in specific locations. Whereas the correction of bad eyesight previously relied on stomach-churningly invasive techniques, in which incisions were made deep into the cornea, computerised lasers can now be used, evaporating less than ten per cent of corneal tissue. Plastic surgeons use lasers too: disfiguring keloids, unsightly stretch marks, tattoos, moles, acne, age spots and wrinkles can all be safely and accurately removed with laser techniques.

The laser also played an influential role at the height of the Cold War. In 1983, US President Ronald Reagan proposed the design and construction of several ground-, air- and space-based lasers that could be used to intercept and destroy ballistic missiles. The media sensationally referred to this programme as ‘Star Wars’; perhaps an appropriate name, given the way in which it seemed to blur the distinction between reality and science fiction. The plan was audacious, provocative and highly controversial, because it would potentially have allowed the United States to strike with impunity. In an already volatile political climate, it seriously threatened to upset the balance between nations, and we can perhaps be thankful that technical limitations rendered the programme financially unfeasible.

However, scientific and technological progress has now overcome many of the Star Wars programme’s stumbling points, and once again ambitious laser defence projects are in development by the US. During trials in February of this year, the ‘Airborne Laser Test Bed’ was successfully used to shoot down a



Can such a tiny sphere of hydrogen provide us with clean energy, by using laser power to recreate the processes within the Sun?


moving target. Infrared sensors detect an enemy missile, whose trajectory is tracked with a first laser, while a second laser collects data from the atmosphere surrounding the missile. This information is used to programme a gigantic chemical-oxygen-iodine laser, weighing in at 18 tonnes and situated in the nose turret of a Boeing 747, to fire a brief pulse of intense energy. The result? The missile target is blasted to smithereens in around 12 seconds. However, information concerning the range over which this system operates has not been released, and critics point out not only that the moving test target was of known trajectory and source, but also that the scheme is so expensive that just one plane is being used in its development. The extent to which this system will truly realise Star Wars is therefore debatable.

Modern conflicts are more likely to be based on controlling rapidly dwindling natural resources such as oil. Climate change adds an extra dimension, as fossil fuels are not only scarce but also environmentally harmful. Power from nuclear fission is currently the most plausible alternative energy source, but the supply of uranium is also finite and the process produces significant amounts of harmful radioactive waste. Ultimately, a clean alternative energy source is needed.

Scientists hope to use lasers in an experiment that recreates fusion, the process by which stars, including our Sun, generate their energy. In this process, two hydrogen nuclei are fused together to form a helium nucleus. As the mass of the helium is less than that of the two starting hydrogen nuclei, the difference in mass is released as energy, without generating radioactive waste. However, for the process to be feasible, the energy

output has to exceed the energy input and, as yet, fusion experiments have failed to manage this. A major challenge is that fusion requires very high temperatures and pressures. The National Ignition Facility in the USA is hoping to achieve these by using high-powered lasers to focus a large amount of energy onto a very small target. Almost two hundred laser beams are focused onto a single pea-sized pellet of heavy hydrogen (a form of hydrogen with an extra neutron in the nucleus of each atom). The laser beams can deliver 500 trillion watts in a short pulse lasting about 20 billionths of a second. This heats the pellet to a temperature hotter than the centre of the Sun, and the result is a miniature star. Even though this ‘star’ lasts for only a minuscule length of time, it is predicted to produce more energy than is put in by all the lasers combined. Should this experiment succeed, it would pave the way towards a clean and compact form of energy generation.

The laser has come a long way. Back in 1960 it was a solution in search of a problem. Today it is ubiquitous in our day-to-day lives in a way that would have been unimaginable 50 years ago. Whether it be in fundamental research, medicine, communication or entertainment, the laser has made its mark, and indications suggest that its role in nuclear fusion will make it integral to finding solutions to the impending energy crisis. Lasers have started a revolution in scientific endeavour and changed the way we think about light by allowing us to control it and use it as a tool. The distant future, as ever, is difficult to predict, but there is every reason to believe that the laser will continue to play a vital role in many more new and emerging technologies.
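As a quick sanity check on the pulse figures quoted in the fusion passage above (a hedged sketch using only the article's own numbers), total energy is simply power multiplied by duration:

```python
# Energy delivered by the laser pulse described in the text:
# energy = power * duration.
power_w = 500e12     # 500 trillion watts, as quoted
duration_s = 20e-9   # 20 billionths of a second, as quoted

energy_j = power_w * duration_s   # about 1e7 J, i.e. roughly 10 megajoules
```

The striking point is that the power is colossal but the total energy is modest; it is the compression of that energy into 20 billionths of a second that produces temperatures hotter than the centre of the Sun.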



Blu-Ray

Most of us remember laser-written CDs and DVDs replacing cassettes and video tapes. Now in 2010, following a tediously long format war with Toshiba’s ‘HD DVD’, the ‘Blu-Ray’ format is set to supersede DVD. What is at the heart of this new technology? Why, the development of a new blue-violet laser, of course! Blue-violet lasers have a shorter wavelength than the red lasers used to read and write data on conventional DVDs: just 405 nm in comparison to 650 nm. The shorter wavelength enables the laser to be focused with greater precision, so that more data can be stored within the same space. As a result, Blu-Ray allows up to 500 GB to be stored on a single CD-sized disc. While this might not seem like much of a change, it is certainly going to make your favourite films look sharper and sound even better.

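The gain from the shorter wavelength can be illustrated with a quick calculation. This is a simplification: the focused spot's area scales roughly with the square of the wavelength, and it ignores other design changes (higher numerical aperture, tighter track pitch) that give real Blu-Ray discs an even larger capacity advantage:

```python
# Rough sketch of why a shorter wavelength stores more data: the
# focused spot's area scales with wavelength squared, so areal data
# density scales roughly as (lambda_red / lambda_blue)^2.
# Simplification: real discs also change numerical aperture and
# track pitch, so actual capacity gains are larger.
red_wavelength_nm = 650.0    # conventional DVD laser
blue_wavelength_nm = 405.0   # blue-violet Blu-Ray laser

density_ratio = (red_wavelength_nm / blue_wavelength_nm) ** 2
print(f"Same-area data gain from wavelength alone: ~{density_ratio:.1f}x")
# → Same-area data gain from wavelength alone: ~2.6x
```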

Optical Traps and Tweezers


In the early 1970s, scientists made the incredible discovery that laser light can be used as a tool to manipulate matter on minuscule scales. The light generated by two opposing laser beams creates an environment that can selectively trap tiny objects and manipulate them, allowing us to study their properties. Optical traps, as they are known, are highly useful tools in biology and have been used to micro-manipulate algae, individual human cells, viruses, bacteria and even single strands of DNA. Recent research at the University of Cambridge has shown that using an optical trap to deform cells allows their mechanical properties to be studied directly. In particular, cancer cells have been shown to react differently, appearing to be more easily deformed than healthy cells.

Optical tweezers are a related tool. These first appeared in 1986 and consist of a single laser beam channelled through a microscope objective to produce a strong focal point. This single beam is usually used to trap a tiny bead to which a molecule of interest, such as a motor protein, is attached. How the motor protein moves can then be studied in minute detail.
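Near the focus, a trapped bead behaves as though tethered by a tiny spring, which is what makes single-molecule force measurements possible. The sketch below assumes a typical trap stiffness of the order reported in the tweezer literature; it is not a figure from this article:

```python
# Sketch of the spring-like model used to interpret optical tweezer
# experiments: for small displacements, restoring force F = k * x.
# The stiffness below is an assumed, typical order of magnitude,
# not a value from this article.
trap_stiffness_pn_per_nm = 0.05   # assumed: ~0.05 pN per nm
bead_displacement_nm = 10.0       # bead pulled 10 nm off-centre

restoring_force_pn = trap_stiffness_pn_per_nm * bead_displacement_nm
print(f"Restoring force on the bead: {restoring_force_pn:.1f} pN")
# → Restoring force on the bead: 0.5 pN
```

Forces of this piconewton scale are exactly what motor proteins generate, which is why optical tweezers suit such measurements.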

Anders Aufderhorst is a PhD student in the Department of Physics

Rosie Powell-Tuck is a Natural Sciences Tripos student in the Department of Zoology

Wendy Mak is a PhD student in the Department of Physics

David Joseph is a first year Natural Sciences Tripos student




The People’s Palaeontologist

Taylor Burns describes the life behind the science of Stephen Jay Gould

In 1946, between visits to Grant’s Tomb and the Statue of Liberty, five-year-old Stephen Jay Gould first walked into the American Museum of Natural History. The young New Yorker, led by his father, gazed in wonder at the museum’s great Tyrannosaurus rex, a twenty-foot-high monstrosity that sprang to life in his vivid imagination. As they exited into Central Park, Gould, still awe-struck, announced that his future lay in palaeontology. And thus the path of one of the great scientific minds of the 20th century was set.

Gould was a larger-than-life intellectual who, while being one of the most well-known scientists in the world, took an active interest in his social and cultural surroundings. His curiosity led him to engage in political, artistic and athletic endeavours that were avoided by many of his colleagues.

Born in Queens, New York, to a poor Jewish family, Gould had a humble yet enriching childhood; it was an upbringing he was proud of. His father, a court reporter, often espoused Marxist ideology in their small borough home. This created an environment in which scepticism and suspicion of authority were encouraged, while prejudice and hypocrisy were derided, moulding Gould’s intellectual character and shaping his values. During his childhood, his twin obsessions of dinosaurs and baseball occupied much of his time. His father would frequently remind him that the declarations of five-year-olds were not binding and that a career in professional baseball was more


Stephen Jay Gould, one of the great scientific minds of the 20th century


difficult to attain than it might seem – as, indeed, was one in palaeontology. Despite these warnings against disappointment, Gould went on to become a distinguished Harvard professor with over 100 honorary degrees, and a part-time baseball analyst.

As his academic stature increased, Gould also became recognised for his interests outside of science. For a 1984 documentary, Gould and his son spent an afternoon playing catch with his childhood hero, Joe DiMaggio. This inspired him to write frequently on the structure of baseball, particularly about its statistical anomalies and his beloved New York Yankees. His essay ‘The Streak of Streaks’, explaining the incredible statistical significance of DiMaggio’s 56-game hitting streak, became a classic, and led him to be featured as an analyst in the documentary Baseball. As Gould’s health deteriorated in the late 1990s, his wife, Rhonda Roland Shearer, bought him a device that allowed him to follow games no matter where he was. He could often be found checking the scoreboards between lectures. Weeks before he died, Gould was still working busily on his next book about baseball.

Gould’s passion for baseball was matched by his enthusiasm for the arts. His social circle was dominated not by scientists, but by artists. He was an avid collector of rare antique books and textbooks, and a science fiction film fan, although he often complained about the films’ mediocre portrayal of science and poor plots. He was also an opera enthusiast and often sang with the Boston Cecilia Chorus. During the Arkansas Creation Trial, in which he was testifying against creationism, a junior lawyer broke into a rendition of Amazing Grace, whereupon Gould picked up the tune and outshone everyone.
But when he came to the line about worshipping God for 10,000 years, he looked at his associates, an embarrassed grin spread across his face, and, as so often happened in Gould’s company, the silence was broken with a chorus of laughter.

His upbringing and beliefs made Gould a champion for numerous causes. Throughout the

“Science is not a heartless pursuit of objective information. It is a creative human activity, its geniuses acting more as artists than as information processors.”



Baseball was the other major passion of Stephen Jay Gould

1960s, he could be found on the picket lines of the civil rights movement, organising protests in the US and the UK. In his book The Mismeasure of Man, Gould passionately attacked racism and racist theories of intelligence, which he and his father had fought against since his youth. Gould also appeared in a Canadian courtroom to voice his support for the medical use of marijuana, an extremely controversial stance at the time.

Gould’s scientific writing was infused with moral concern, often articulated through artistic references that he felt could best convey the essence of human weaknesses and virtues. His Natural History columns, later reprinted in nine volumes, incorporate a phenomenal number of literary, athletic, philosophical and foreign-language references. One page could easily contain a Latin proverb, a Shakespearean quote and a film review. The Telegraph described Gould’s penmanship as featuring “baseball metaphors lurking in the shadows of medieval cathedrals, the better to illustrate an arcane point about fossils.” Gould saw the world through one integrated lens and could inspire those around him to experience that same intellectual harmony, however briefly.

Although he majored in geology, Gould also studied history and philosophy extensively, becoming fascinated by human language and architecture. During his life, he studied and gained conversational knowledge of over ten languages, not including the numerous pidgin languages he dabbled in. Gould’s curiosity and passion for new experiences frequently led him across oceans and into disparate communities, from upscale Parisian neighbourhoods to the most remote villages in South America. He could even give in-depth tours of foreign cities he had never visited before. Rita Colwell, then director of the National Science Foundation, was once pulled away from a Lisbon conference by Gould for an impromptu personal tour of local cathedrals.
His knowledge of architecture and infectious enthusiasm turned potentially mundane church visits into Colwell’s fondest tour memory.

For his students, Gould was a Harvard treasure – his lectures on evolutionary theory and palaeontology were packed with undergraduates of all disciplines. His entertaining public demeanour, so influenced by his Jewish upbringing in Queens, led journalists to dub him ‘Mr. Evolution’. In a typical New Yorker accent, with its sharp ‘A’s and twangy ‘O’s, he would smile and gesture wildly in front of a rapt audience of first-year students, often wandering onto tangents that he felt were more relevant to 18-year-olds than genetic mutation. His talks would include cartoons, jokes, riddles and baseball facts, punctuated with an occasional dismissal of his ‘stuffy’ and ‘short-sighted’ scientific colleagues. Visiting Stanford University in 1998, Gould regaled a packed audience with a discussion of the interaction of art and science. He talked about Edgar Allan Poe’s book on molluscs and Leonardo da Vinci’s use of water imagery in the Mona Lisa. He had the unique quality of never dumbing down science, but rather making students feel as though they were sitting in a leather chair beside him, in a mutual learning process that was as enlightening for the instructor as for the student.

Gould died in his SoHo apartment from his second bout of cancer in 2002, surrounded by his beloved collection of books, stacked so high that a multi-storey ladder was needed to reach them. With the dogged determination displayed throughout his life, Gould was singing, lecturing, writing and watching baseball until two weeks before he passed away. He may be one of the few scientists whose death was mourned as much by Shakespearean scholars, baseball players and artists as by the scientific community. Like the twenty-foot T. rex that shaped his life story, Stephen Jay Gould was a giant; a man who loved science, writing, family, baseball and the arts, and somehow – almost as if he were an evolutionary impossibility himself – found a way to conquer them all.
Taylor Burns is a Part IIA PPSIS student concentrating in Psychology


Funding in Crisis?

Gemma Thornton gives her perspective on the future of scientific funding

The global economic downturn left many nations considering how best to invest limited funds to promote future growth and prosperity, and they have taken differing approaches to scientific funding. France, Germany and the US all aim to increase science funding in various ways, or have already done so. In contrast, the British government is likely to cut research funding to help balance the books, whilst investment by independent charities may also be constrained by the economic situation. It seems the UK science community faces some lean years ahead.

But how will these cuts affect researchers in academia and industry? Will they lead to a ‘brain drain’ of talent heading overseas in search of support, as suggested by Ralph Cicerone, president of the US National Academy of Sciences, at the AAAS meeting last February? Of the many thousands of graduates and postgraduates in academic research, fewer than ten per cent will obtain permanent tenure and professorships. Most leave academic research and enter other professions. Yet the most obvious alternative – industry – may not be a safe choice either; AstraZeneca, Pfizer and GlaxoSmithKline all announced global job cuts in the last few months. So how can current research levels be sustained when money for investment is lacking?

The UK has benefitted from a decade of increasing investment and, for its size, has always punched well above its weight in producing high-quality research. The UK government argues that whilst overall science funding may fall, projects with ‘economic or social benefit’ will be prioritised. This requires scientists to speculate on the future impact of a project when applying for funding, and assumes a simple linear model from knowledge to product. Yet few of the truly influential scientific discoveries that we rely on today could ever have been predicted.
There are many critics of this approach within the scientific community, including the Director of the Wellcome Trust, the Royal Society and former Nobel laureates, who believe that such prescriptive funding will stifle the very innovation it seeks to encourage. Indeed, the Wellcome Trust has taken the diametrically




opposite approach, restructuring their grants with the aim of giving strong scientists the time to explore the questions they think most interesting in the way they think best. As many of the greatest discoveries have resulted from serendipity or from trying to explain an unexpected result, history suggests this approach is more likely to maintain the true spirit of scientific research and allow the possibility of innovation.

Another charge is that universities are poor at commercialising their discoveries. But this argument does not stand up when the facts are examined. The University of Cambridge and the spin-out companies that make up the Science Park in Milton provide a strong model of how scientific research, particularly in the biosciences, has benefitted from investment and boosted the economy in return over the last decade. According to a recent report by the Royal Society, university spin-outs nationwide employed around 14,000 people in 2007-2008 and generated a turnover of £1.1 billion. Indeed, the same report highlights the success of the Cambridge model, emulated on a smaller scale in Oxford, Manchester, York, Southampton and Surrey.

What about industry? Here, the belt-tightening may be slowing, and the big companies are also exploring other options for financing R&D. One model is to fund academic groups to find new target genes or pathways, or to screen promising drug candidates. This may become more prevalent in the current economic climate, and may provide those academic groups with a valuable source of funding outside the traditional grant structures.

It seems that we are entering a period of reassessment of how the value of scientific research is viewed by government. The Royal Society report emphasises the need for sustained investment to ensure continued progress and economic benefit, and warns that reducing funding even over a short period may have long-term consequences.
This concern is also echoed in a report by the Science Advisory Committee of MPs, published just before the Budget. We can hope that the warnings are heeded and that there is greater consultation before the final allocation. Ultimately, the picture remains unclear; however, it seems that, in the words of Bob Dylan, ‘the times, they are a-changin’’.

Gemma Thornton is a postdoc in the Department of Medical Genetics


Fluids and Foggaras Djuke Veldhuis descends into the watering holes of the ancient Garamantes


“When the well is dry, we know the worth of water.” As I stood on the barren and parched Fezzan landscape, even the idea of a well seemed far-fetched. Located in south-west Libya, the Fezzan is composed of desert broken by the occasional oasis. It is surprising, therefore, that Herodotus, writing in the 5th century BC, described this area as home to “a very great nation”: the Garamantes, a Saharan people in power until around AD 600.

Only recently in human history – within the last 10,000 years, beginning in Mesopotamia – did agriculture and domestication become the dominant way of life. Permanent villages and crops demand a water supply; relatively easy if you are living on the edge of the Nile, but the options are limited in the desolate, rocky and sandy landscapes found on the edges of the Saharan sand seas in North Africa. The Garamantes solved this by harnessing groundwater. Foggaras (known as qanats, falajs or karez elsewhere) are subterranean systems of channels that tap into a groundwater supply and drain the water to the surface of the landscape further downslope. By tapping into a source at a higher level and channelling the water, it is possible to distribute it to low-lying farming land.

Standing on this landscape today, it is possible to see large heaps of spoil, the result of digging the shafts, spreading into the distance for kilometres on end. The precision with which the foggaras were dug, and the sheer effort needed to turn this barren landscape into a flourishing agricultural area, is something that even a modern engineer would be proud of.

Tracing foggaras and their associated settlements, and trying to make sense of their relative chronology, is a challenge. Even in January, the midday sun beamed down as we walked from shaft to shaft taking key measurements.
After years studying many hundreds of these shafts, Andrew Wilson, Professor of the Archaeology of the Roman Empire at the University of Oxford, has unravelled not only the technical aspects of ancient water technology, but also its relationship to settlement and economic growth. “The combination of their slave-acquisition activities and their mastery of foggara irrigation technology enabled the Garamantes to enjoy a standard of living far superior to that of any other ancient Saharan society.”

To explore how the shafts were constructed and maintained, we descended inside. With harnesses and ropes attached to a wooden platform, we could move up and down the narrow shafts. The trickiest part was


Djuke at the entrance to a foggara shaft in the Fezzan, Libya

negotiating the loose rubble of the spoil heaps piled up around their entrances. Upon easing your weight onto the rope, there is an immediate feeling of the walls closing in. At only one metre by half a metre wide, the sandstone shafts descend some 5 to 28 metres, sometimes corkscrewing round in the process. Occasionally the solid sandstone walls are interspersed with bands of water-rolled gravels, but most notable are the rock-cut footholds. Vertically aligned and spaced at regular intervals, these would have been used by workers to access the channel below. The horizontal channels were no longer accessible to us, blocked by hundreds of years of accumulated sand; only the vertical foggara shafts remain. While the system was in use, such obstruction would likely have been prevented by ‘sealing’ the shafts with huge slabs.

Abseiling down these shafts, I found myself thinking about the slave labourers and engineers managing these huge construction projects. The human presence in this landscape, although faded, is ever-present. At the top of some shafts are inscriptions; ancient graffiti. In addition, mud-brick structures littered with pottery and grinding stones bear testament to the people who once made this hostile landscape their home.

As I climbed, tired and sandy, out of the last humid hole, it struck me how the foggaras stand as an emblem both of the success of the Garamantes and of their doom. After extracting at least 30 billion gallons of water over 600 years, they discovered that exploiting an easily mined fossil supply comes at a cost. No matter how ingenious the water engineering system, a fossil supply, once depleted, is no more.

Djuke Veldhuis is a postdoc at the Leverhulme Centre for Human Evolutionary Studies


Movements Frozen in Time

Swetha Suresh examines a unique juxtaposition of science and art that reveals the intricacies of movement

Think of photographs and one thinks of a medium used to freeze a moment for eternity, not to depict motion and capture fleeting movements. Yet photography has been essential in capturing stages of motion that are too transient for us to see. In a unique interplay between the worlds of art and science, the development and modification of photography has allowed us to dramatically slow down time and reveal the imperceptible intricacies of movement in the world around us.

Eadweard Muybridge’s famous images of a galloping horse were the first use of photography to capture movement. Muybridge was challenged in 1872 by Leland Stanford, Governor of California, to determine whether all of a horse’s feet are off the ground simultaneously during a gallop. Muybridge used 12 cameras arranged in a straight line alongside the horse’s path; these were set off by the horse triggering electrical contacts as it ran. The resulting pictures proved conclusively, for the first time, that horses are indeed airborne during the gallop.

Muybridge’s technique became known as chronophotography – ‘picturing time’. It meant that photography was no longer simply a reproductive medium, but could create images that would otherwise never be seen. Chronophotography led to signature images of the swift motions of human and animal bodies. Although Muybridge was not a scientist himself, his technique was soon exploited by those who were. Étienne-Jules Marey, an eminent French physiologist of the late 1800s, was one of the first to use chronophotography for scientific studies. Marey believed that to discover the physical laws of animal locomotion, direct observation of the mechanisms in real time was the key. Slowing or stopping the

Marey studied the physics of movement using chronophotography (right) and modern techniques have been used to study other scientific phenomena (above)




movement to study it may not provide a true picture of the natural mechanism. When he came to know of Muybridge’s work, he realised that photographs provided a way to study natural movement without this kind of interference.

Not satisfied with Muybridge’s array of separate cameras, Marey developed ways to capture movement with a single camera. He built a photographic gun in which a light-sensitive plate revolved behind the lens. This allowed him to record sequential images at regular intervals of time as small as 1/720th of a second. With this method, he could photograph not only the motion of land animals, but also that of birds, since the trigger to record images did not have to be set in the path of motion. But he was not satisfied with the spatial or temporal detail that these sequences gave him.

It was a further development of this technique that provided the ability to study movement in greater detail. Marey developed his technique to capture sequential images in the same frame. As his subject moved, a shutter was rotated in front of a stationary linear photographic plate, repeatedly exposing it at regular intervals. At each exposure, the subject was in a different position, so that the final image revealed the sequence of their movement. The interval between exposures was used to control the overlap in the image and the continuity of the sequence. Although movement was restricted to a straight line, recording consecutive phases of motion in the same frame was a revolution. Not only did this allow visualisation of the physiology of movement, but it also directly showed the exact trajectory and distance of movement in a defined time.

To obtain more precise data, Marey photographed indoors in darkness. He dressed his models in black suits with stitched white lines. When his homme



Although primarily a scientist, Edgerton used the strobe to capture the beauty in everyday phenomena – water running from a tap, drops impacting liquid, golfers in action. His photographs helped to popularise science with the public, who wondered at their inherent beauty, making them not only of scientific interest, but also artistic masterpieces.

Chronophotography is all around us today. Films are shown at 24 frames per second – essentially 24 sequential photos presented fast enough for us to perceive movement. High-speed filming of sports provides super-slow-motion replays, allowing us to see the physics that is otherwise concealed by the speed of the action: the compression of a tennis ball, the bouncing of snooker balls or the devastation of a high-impact racing crash. In industry, chronophotography is used routinely in car crash tests. The cameras used are capable of taking pictures at intervals of 100 picoseconds. The super slow motion shows which parts of the car are most susceptible to stress, and ultimately allows improvements to be made to the car’s safety. Films of industrial processes, such as manufacturing techniques, can also be used as evidence when filing for patents, to prove the uniqueness of the technique.

The development of photography to capture motion shows how the arts and sciences can move together. While both are developed for their own ends, they can interact and influence one another, pushing each other further than they could go alone.

Marey’s and Edgerton’s collections have been digitised and can be found at http://www.bium.univparis5.fr/marey/debut2.htm and http://edgertondigital-collections.org/ respectively.

Swetha Suresh is a PhD student in the Department of Pharmacology


squelette, or ‘skeleton man’, performed actions, only these white lines reflected light and showed up in the photographs. This reduction of the movement allowed Marey to relate these lines to axes of motion, centres of gravity and lines of force – a previously impossible analysis of natural human movement. While art can allow even inanimate objects to represent human nature through colours, settings and composition, Marey made the opposite equally true. By reducing the human forms of his models and equating them to a skeleton, he transfigured human subjects into ‘objects’ of scientific experimentation.

Marey’s techniques provided the opportunity to study many physical phenomena in a new way. In association with his assistant Georges Demenÿ, Marey studied not only many aspects of human locomotion – jumping, walking, running, gait – but also that of animals and the flight of birds. What’s more, he used chronophotography to directly visualise the trajectories traced by balls, the paths taken by swinging strings and wires, and the movement of water. He built the first wind tunnel and photographed the movement of wisps of smoke, the first recorded study of air currents.

The significance of the interaction between art and science in motion capture is illustrated by the work of Ottomar Anschütz in the 1880s. Anschütz was not a scientist but a professional artistic photographer in Germany who specialised in chronophotography. He invented fast shutters and laid the foundations for the invention of the kinematograph (motion picture). In fact, when the first kinematograph was shown in Berlin, it was widely considered to be a perfection of Anschütz’s techniques. Anschütz’s most renowned images, pictures of storks in flight taken in 1884, inspired Otto Lilienthal’s first glider designs. Lilienthal obtained Anschütz’s services to film his flight.
The flight was fatal for Lilienthal, but Anschütz’s recording was studied by the Wright brothers, who used it to make significant improvements to the design of their aircraft. Ultimately, Anschütz’s art contributed to the first successful flight of a powered aircraft.

While Anschütz was primarily an artist whose work benefited science, the scientist Harold Edgerton made significant contributions to art through his work in motion capture. Edgerton invented the stroboscope, a sequential flash device that alters the perceived motion of an object by manipulating the observer’s view. For example, when the frequency of a flashing strobe light synchronises exactly with the frequency of a repeated motion, such as a swinging pendulum, the motion appears to be frozen at one moment.
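The freezing effect described above can be sketched numerically: sample a periodic motion exactly once per cycle and every sample lands on the same phase. A minimal illustration, with hypothetical frequencies chosen only for the demonstration:

```python
import math

# When the strobe flashes exactly once per cycle of the motion,
# every flash catches the pendulum at the same phase, so the motion
# appears frozen. Frequencies are hypothetical, for illustration.
motion_hz = 2.0    # pendulum completes 2 swings per second
strobe_hz = 2.0    # strobe flashes at the same rate

# Pendulum position (arbitrary units) seen at each of 5 flashes
seen = [math.sin(2 * math.pi * motion_hz * n / strobe_hz)
        for n in range(5)]
frozen = all(abs(p - seen[0]) < 1e-9 for p in seen)
print("Motion appears frozen:", frozen)
# → Motion appears frozen: True
```

Detune the strobe slightly from the motion's frequency and the samples drift in phase, which is why the pendulum then seems to swing in eerie slow motion.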

Muybridge was the first to photograph movement (left) and the same principles are used today to reveal the physics of sport (below)




Increasing the Yield

Alex Jenkin looks at the past, present and future of agriculture

The world population is predicted to reach nine billion by 2050. According to the Declaration of the World Summit on Food Security, this means we will have to increase our agricultural output by 70 per cent. Throughout history, we have consistently developed new methods and technologies in order to feed ourselves. We have developed from hunter-gatherers into societies dependent on agriculture. Farming practices have been further advanced by the mechanisation of techniques, and we now have the ability to create designer crops to suit our needs.

The disadvantages of a hunter-gatherer lifestyle seem clear to us today, but in reality the pioneers of farming had things equally tough. Early farmers had shorter life expectancies than their hunter-gatherer contemporaries. The practice of agriculture spread gradually as it became more favourable to expend energy cultivating crops, which yielded more predictable food supplies than hunting. According to Jared Diamond in his book Guns, Germs and Steel, there are five locations where plant domestication originated independently: the Fertile Crescent in Southwest Asia, China, Mesoamerica, the Andes and Amazonia, and the Eastern United States. In each case, agriculture originated there because of the particular plant species present. Many plants either do not produce edible fruit, or are woody and give small yields. Conversely, annual plants, which live for only one year, put most of their energy into making large seeds (which we can eat) and invest little in woody stems, making them much better potential crops. Moreover, the seeds are adapted to surviving the long dry season before



Wall painting from 1200 BC showing an Ancient Egyptian ploughing his field


germination and so can be stored for long periods. The annuals of the Fertile Crescent were highly abundant and suitable for cultivation, which is why the area boasts some of the earliest examples of domesticated plants, dated to 8500 BC.

Through cultivation we became more aware of variation between different plant populations. Selective breeding was employed to ensure that crops had the traits most desirable to humans, presumably focusing on higher yields or edibility. Wild almond seeds, for example, produce highly toxic hydrogen cyanide when damaged in any way, including chewing, but identification of varieties containing a genetic mutation inhibiting hydrogen cyanide production allowed cultivation. Advantageous traits may also be acquired accidentally; modern wheat is the result of crosses between several wild and domesticated varieties, which are believed to have occurred naturally on the edges of fields before being noted for their beneficial traits and cultivated. Today, selective breeding is a highly targeted process and, though usually aimed at maximising yield, often focuses on specific traits such as drought resistance.

Empires were founded on their ability to produce food for their citizens; the Ancient Egyptians, for example, were renowned for their innovations in irrigation. Egyptian farmers harnessed the natural flooding of the Nile, digging basins that filled with floodwater, which was left to saturate the earth before being drained. Little is known, however, about agriculture during the Middle Ages, though we do know that monasteries maintained farming practices, often owning large areas of land.

In the 19th century, the industrial revolution saw the dawn of mechanisation in agriculture. The steam engine replaced horses and manpower, and was itself later replaced by diesel and petrol engines. The rising populations of Europe and America during this period had to be balanced by increased food production.
Applying fertiliser meant that crops were not limited by the need for nitrogen in the soil and could therefore give higher yields. Initially, nitrogen compounds for fertilisers were obtained from sodium nitrate mined in Chile, but reserves


Easter 2010

JAMs_123

Genetically modified wheat (top) produces a greater number of high quality grains than wild wheat (bottom)

Ruafun12

soon began to run out. In 1911, the German chemist Fritz Haber developed a method for the synthesis of ammonia, which could itself be used to fertilise crops or oxidised into nitrate fertilisers. Carl Bosch scaled up the process to an industrial level, providing unprecedented access to fertilisers. Both men won Nobel prizes for their scientific contributions. Advances made by the ‘Father of the Green Revolution’, Norman Borlaug, in the 20th century further increased potential wheat yields. Having bred wheat resistant to stem rust disease in Mexico, preventing an impending food crisis, he then turned his attention to the Japanese dwarf wheat, Norin-10. This contained two genes, Rht1 and Rht2, which were responsible for decreasing the height of the wheat. This allowed more energy to be allocated to grain production, so greater numbers of higher quality grains were produced. Not only did this increase the yield per area, but dwarf varieties of wheat are resistant to lodging (falling over) during wet and windy weather, which can cause large crop losses. Dwarf wheat plants could therefore take full advantage of increased fertiliser application without increased risk of damage. Similar dwarfing genes have also been bred into barley and maize. To counter the increased application of man-made fertilisers and the introduction of industrialised intensive agriculture, the organic movement was founded. Groups formed in the 1930s and 1940s who believed that growing crops without synthetic fertilisers was a more sustainable way of farming. The movement gathered momentum with the formation of the International Federation of Organic Agriculture Movements in 1972, but the market for organic produce only began to increase rapidly in the 1990s. By 2007, the industry was valued at $46 billion. However, there are concerns that organic farming simply cannot meet the demands of our growing population. Other camps advocate genetically modified (GM) crops as the future of agriculture. 
The first commercial GM crop was a tomato variety called Flavr Savr, distributed in the USA by the company Calgene. Flavr Savr tomatoes had a longer shelf life and, when bred with tastier varieties, sold at a premium supermarket price. Calgene, however, were inexperienced at shipping and growing tomatoes, and the venture went bankrupt within a few years. Zeneca engineered tomatoes for tomato paste production in the UK, but the GM backlash of the late 1990s led to the end of this project.

GM crops continue to be grown in many parts of the USA and are noted for their increased resistance to disease, pests and herbicides. Monsanto, the world’s leading producer of GM seed, has been repeatedly questioned over its environmental and ethical practices. However, a more humanitarian approach is exemplified by the production of Golden Rice, a variety developed to relieve vitamin A deficiency by providing the precursor from which the body can synthesise the vitamin. An education programme is being introduced across areas where the rice is needed to improve the project’s success. Throughout the development process, intellectual property has been shared between Zeneca, the Rockefeller Foundation and Syngenta in a way that surely should be followed in all global food initiatives.

The future of agriculture is at present uncertain. Although many advocate organic farming, restrictive regulations discourage many farmers from making the change. Public rejection of GM crops has made GM agriculture currently unviable in the UK, but with better communication and management we may see UK-grown GM products back on the market soon. The most sustainable and productive option is likely to be a combination of approaches. Genetic modification could increase crop productivity without requiring fertilisers or ever-increasing areas of land. Crops must also be developed to deal with the consequences of climate change, be that making them more drought tolerant or able to take advantage of increased levels of carbon dioxide. Modelling is helping in this approach, but it is clear that if we are to meet the food needs of the world, decisions must be made quickly and with fluent communication between scientists, government and the public.

Alex Jenkin is a third year Natural Sciences Tripos student in the Department of Plant Sciences


Lighting Up Disease

Lindsey Nield sheds light on the uses of biomarkers

To diagnose a problem or identify a mechanism, one must first find evidence for some hypothesis; something one can test for that will prove or disprove the theory. In many scientific fields this evidence is provided by the presence of biological markers, or biomarkers. The most basic definition of a biomarker is simply a substance that can be used as a sign of an organism’s genetic or physical state. They are used in scientific fields from psychiatry, where they can help to find a genetic basis for illnesses such as schizophrenia, to geology and astrobiology, where ‘biosignatures’ point to the existence of living organisms in the past or present.

However, it is in medicine and medical research where biomarkers are most commonly used. In these fields the term denotes biological traits that can identify the progress of a disease or condition. Hormones, enzymes, specific cells, antibodies, and genes can all be biomarkers. Physical changes such as tumour growth can also be considered biomarkers in medicine. These markers can be measured by a variety of methods, including physical examination (checking suspect lumps is an example), laboratory testing (of blood samples, for instance), and medical imaging by systems such as X-rays or magnetic resonance imaging.

In research, biomarkers often refer to genetic markers – fragments of DNA that are often mutated or altered in specific diseases. Researchers who want to learn about the development of a disease and its response to novel treatments are able to add a detectable tag to these fragments of DNA so the resultant proteins and cells in which they are expressed can be distinguished. Their behaviour or progression during disease development or after therapy can then be studied. To advance our understanding of the biological processes involved, it is desirable to examine them as they occur in living animals (in vivo), so imaging strategies have been developed to track biomarkers that allow monitoring in real time.

One such tagging system is bioluminescence imaging (BLI), which takes advantage of a process found in nature. Organisms such as the North American firefly, and many marine creatures, emit a light known as bioluminescence. These organisms use it for various reasons, such as a disguise, to lure prey, in mating rituals, or as a reaction to stress. It is produced by a chemical reaction within the organism where an enzyme, luciferase, oxidises a substrate, luciferin. This reaction produces oxyluciferin, along with the release of photons of light.

Bioluminescence monitored in a mouse. Brighter colours represent greater luminescence (Gene Therapy, 2007; reproduced courtesy of Nature Publishing Group)

Genetically engineered cells that express the luciferase enzyme (for example tumour cells) can be introduced into an animal model, and when luciferin is administered into the bloodstream the light emitted from the engineered cells can be detected using cameras. Since mammalian cells do not emit light and no external stimulus is required, BLI allows sensitive detection and quantification of only those cells specifically engineered to exhibit bioluminescence. It thus provides low-cost, non-invasive and real-time analysis of disease processes at the molecular level in a single living animal. This is a more desirable strategy than traditional procedures, which involve analysing many animals to accurately gauge what is happening at different stages of the disease.

The most common use for BLI has been the non-invasive detection of tumours and quantification of their growth and progression. Since genetic engineering is necessary to label the cells of interest, it is unlikely to be used directly in human studies. However, animal studies using BLI will be useful in predicting the human response to therapies in the early stages of development during pre-clinical trials.
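For reference, the firefly light-producing reaction that BLI exploits is usually summarised as below. (The involvement of ATP and magnesium as co-substrates is a standard textbook detail added here, not spelled out in the article.)

```latex
\text{D-luciferin} + \text{ATP} + \text{O}_2
  \;\xrightarrow{\;\text{luciferase},\ \text{Mg}^{2+}\;}\;
  \text{oxyluciferin} + \text{AMP} + \text{PP}_i + \text{CO}_2 + h\nu
```

The emitted photons (peaking in the yellow-green for the firefly enzyme) are what the cameras detect, which is why only luciferase-expressing cells light up once luciferin is supplied.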
BLI has also been used to track the spread of bacterial and viral infections in animal models and is shedding light on the processes that lead to neurodegenerative diseases such as Alzheimer’s. Such studies will greatly advance the analysis of a wide range of diseases, making the new technology of bioluminescence imaging and its ability to track biomarkers an important innovation.

Lindsey Nield is a PhD student in the Department of Physics



Books

Logicomix: An Epic Search for Truth
Bloomsbury, 2009, £16.99

A coming-of-age story about an orphaned boy raised in strict religious confines. A love story. Good guys versus bad guys. An exercise in self-reflexivity. A treatise on modern philosophy. A commentary on the nature of story-telling. A biography. All of these could equally describe Logicomix – the visionary, genre-bending graphic novel from Apostolos Doxiadis and Christos H. Papadimitriou – with one caveat: its main narrative is the story of mathematical logic in the 20th century. Its hero is Bertrand Russell, the Cambridge scholar, who is juxtaposed with the tragic figure of Ludwig Wittgenstein, Russell’s infamous yet capricious pupil. The struggle for mathematical proof is animated, both literally and figuratively, by the story of Russell’s life, and is contrasted with Doxiadis’ and Papadimitriou’s laboured attempts to translate the complexities of Logic to a general audience (a story which itself appears intermittently throughout the comic). And its villain? The academic establishment – bald, potbellied and peering menacingly through its wire-rimmed glasses. Original, enlightening and accessible, Logicomix is a wonderful experiment in the possibilities of the graphic novel. Read it, and you will feel nascent, inspired – caught in the art of something new. TB

No Small Matter: Science on the Nanoscale
Harvard University Press, 2009, £25.95

A glossy, 182-page introduction to all things nano, written for a general readership. Frankel and Whitesides – two Harvard professors – start the book with bite-sized summaries of basic scientific concepts and tools that influence much of nanoscience: the atomic force microscope, the quantum, nanotubes, and wave-particle duality. Yet, instead of graphs or equations, Frankel has handpicked stunning photographs and visualisations of what the world ‘looks’ like at nanometre scales, leaving Whitesides to provide emotive metaphors describing what it is like to explore and tweak the many possibilities of nanoscience and nanotechnology. For example, a drug molecule wedged in an enzyme is like an attention-seeking child, the cell is a circus, light is like a wave when it moves, like a bullet when it hits, and equations should be read like poetry. This is a popular science book, but is worth a read to share the authors’ joy and obvious passion towards this rapidly growing field. WYC

The Earth After Us: What Legacy Will Humans Leave in the Rocks?

OUP, 2009, £34.95


Are gimmicks a necessary evil for sparking the general public’s interest? Zalasiewicz uses a motif of future alien geologists in an attempt to make Earth geology as scintillating as science fiction. For the lay audience, however, rocks are unlikely to make the bestsellers list, no matter how gripping the book. This is problematic, as the book is clearly written for an audience with little to no previous experience in geology. It covers the basics with enough entertaining tangents to stimulate the keen reader’s interest. Geological processes and the history of their discovery are explained in a light and simple manner that is much more readable than an introductory textbook. However, recurrent speculation about ‘future explorers’ rings a bit false and gives the author too many opportunities to get on his soapbox. The book’s real strength lies in the way it cuts humans and their reign over Earth down to size, as only an understanding of geological time can. In a world of popular science that is largely human-centric, it is both refreshing and necessary to be reminded how temporary our species and its technology are in the context of the history of the Earth. HH



Letters

Your questions answered by Dr I.M. Derisive

Email your scientific queries to drderisive@bluesci.co.uk

I’m quite concerned that the government can read my thoughts – I’m pretty sure that streetlights contain brain scanners and they’re checking up on me. Do you think it’s possible?
Paranoid Pete

Pete,
Absolutely! Why, just the other day I was asked in the street if I wanted to buy a Big Issue, and it turns out that I did! Of course, what neuroscientists have published is well behind their true capabilities, but some work out of UCL gives us a glimpse. Having watched three film clips, volunteers were placed in a scanner and asked to recall each clip. The researchers could identify which clip was on the volunteers’ minds by their patterns of brain activity. Of course, this is only the tip of the iceberg. The only sure-fire way to prevent them reading your mind is to wear a good covering of tin foil. Just as you can’t take metal into an MRI machine, so the layer prevents the mind beams getting into your skull. For hints and tips, watch X-Men 3 or Signs.
Dr I.M. Derisive

I get terrible migraines. What are you boffins going to do about it? I’m in pain here!
Irritable Iris

Iris,
Firstly, there’s always the surgery route. Ask your GP about full cerebectomies. Secondly, perhaps try painkillers. Apparently you can just keep on taking them until you numb up. Lastly, your final recourse may be some single-pulse transcranial magnetic stimulation (sTMS). Unlike new-age hippy nonsense, this isn’t just about holding some magnetic rocks to your face and hoping. No, this is Science. And, like all good science, it looks a bit like Star Trek. The handy little box is applied to the outside of the head and magnetic pulses are delivered through the skull to produce a good brain scrambling, which is clearly what’s required. Safety tests are still underway, but initial results are promising. However, you’re only allowed to use them if you’re wearing a full lycra bodysuit and muttering to someone called Jim about you being a doctor, not a nuclear physicist.
Dr I.M. Derisive

My housemate is a real pain – he’s always stealing my food! And when it comes to sharing his with me, no chance! Is he an evolutionary throwback?
Hungry Harry

Harry,
He sounds thoroughly annoying to live with! From the sounds of things, we might indeed be looking at an early prehistoric character here. Food sharing is well known amongst pack animals, and two apes will generally share food when in the same cage. A recent experiment using bonobo apes from the Democratic Republic of Congo has shown something even more remarkable. One bonobo was let into a cage containing food; then, when given the option of allowing another bonobo into the same cage via a pegged door, the hairy chap would voluntarily let in his mate and share the food. The reasons for this altruism aren’t fully understood (bit tricky to interview), but may be based on expected future favours. Your housemate is clearly well behind the primate curve here – better call round the zookeeper!
Dr I.M. Derisive

ALEX HAHN www.alexhahnillustrator.com


[Back-cover montage: covers of previous BlueSci issues, from Issue 1 (Michaelmas 2004) to Issue 7 (Michaelmas 2006), with past cover stories including Hangover Hell, Einstein: 100 Years of E=mc², Risk & Rationality, Astrobiology, The Energy Crisis, New Parts For Old, Stem Cells, Face Recognition and The Sound of Science]

Get your article published in BlueSci
Email: submissions@bluesci.co.uk
Articles should be ~1200 words about any scientific topic
Send completed articles or get in touch with potential ideas
More details can be found at www.bluesci.co.uk

Deadline for next issue is 2nd July 2010
Contact editor@bluesci.co.uk to get involved with editing, graphics or production

[Montage continued: Issues 8–15 (2007–2009), with cover stories including The Large Hadron Collider, Synthetic Biology, Unlocking the Brain, Resisting Temptation, Colour in Nature, Hydrogen Economy and Global Warming: Darwinian Evolution in Action]


Centre for Science and Policy

The Sciences and Technology in the Service of Society

The Centre for Science and Policy (CSaP) is a networking organisation dedicated to building links between policy makers and experts in the sciences and engineering. Launched in July 2009, the Centre supports a network of Associates, Centre Interest Groups, and Fellows, making connections and creating lasting relationships between the University and Government at senior levels.

“The Centre for Science and Policy’s workshop on ecosystems clearly made a real impact within DEFRA, and represented an extraordinarily effective use of time”
– Professor Paul Wiles, Chief Scientific Adviser at the Home Office

CSaP encourages the participation of undergraduate and graduate students in its lecture and seminar programme, and welcomes offers of assistance from those who would like to gain experience in the operation of the Centre.

• The Centre Interest Groups (CIGs) provide the fora where academic experts from all relevant disciplines and senior policy makers meet to discuss new ideas around key issues.
• The Policy Fellows Programme is intended to provide an insight into the role which the sciences and engineering can play in the formulation of public policy, by bringing civil servants, elected officials and other policy makers to the University for brief periods as the basis for building ongoing relationships.
• Policy Workshops are based on the work of the CIGs, Fellows and other researchers, and are used to exchange current thinking in science and policy issues with policy makers.
• The Distinguished Lecture Series and Associate Seminars cover a range of topics closely related to those of the Centre, featuring eminent speakers drawn from the science, industry, government and policy sectors.

Contact

For information on the Centre for Science and Policy, please contact:
Centre for Science and Policy
University of Cambridge
11 - 12 Trumpington Street
Cambridge CB2 1QA

Tel: +44 (0) 1223 768392
Fax: +44 (0) 1223 746491
Email: enquiries@csap.cam.ac.uk
Website: www.csap.cam.ac.uk

To sign up for the newsletter: www.csap.cam.ac.uk/news/newsletters

Images left to right, courtesy of Brendan Baker, Ivan Minev and Rami Louca, and Sam Cocks, Nokia Photography Competition, Department of Engineering, University of Cambridge

BlueSci Issue 18 - Easter 2010  

Cambridge University science magazine FOCUS - 50 years of lasers
