
Cambridge’s Science Magazine produced in association with

Issue 15 Easter 2009

Resisting Temptation

The science behind self-control

Unlocking the Brain

Understanding imaging techniques

Credit Crunch

Misattributed scientific discoveries

Green Living . Ice Sheets . Nasal Cycling . Pulsars . Wireless Communication . Fractals

Bored this summer? ...Write for BlueSci
Email:
Deadline for article submissions is 10 July 2009. Articles should be ~1200 words. Get in touch with potential ideas or send us the finished articles.

[Montage of previous BlueSci covers, Issue 1 (Michaelmas 2004) to Issue 14 (Lent 2009), with their cover headlines and contents listings, including: A New Science Magazine for Cambridge; Hangover Hell; 100 Years of E=mc²; Risk & Rationality; Looking Beyond: The Search for Alien Life; The Energy Crisis; New Parts for Old; The Future of Neuropsychiatry; Biometrics; The Large Hadron Collider; Synthetic Biology; Colour in Nature; Hydrogen Economy; Sea Monsters.]

Contact to get involved with editing, graphics or production



FEATURES

Exercising Self-Control
Adam Kessler justifies the need for self-indulgence during exam time .............................................................................. 6

Algae Living
Daniela Krug, Karuga Koinange and Chris Bowler look at the future of green living ....................................................... 8

Nostril Nose Best
Cat Davies explores the ins and outs of nasal cycling ........................................................................................................... 10

Cosmic Lighthouses
Jamie Farnes describes the discovery of pulsars .................................................................................................................... 12

Faster and Faster
Francisco Monteiro looks at the remarkable achievements in error-free digital communications ........................................ 14

Kapitza and the Crocodile
Boris Jardine charts the history and inhabitants of the Mond Laboratory ......................................................................................... 22

FOCUS .................. 17
LIGHTING UP THE BRAIN
This issue's FOCUS examines the science and usefulness of the glossy brain pictures, now abundant in academia and the media.

REGULARS

News: Scientific Soundbites ............................................................................... 3
Book Reviews: Ross Garnaut and Siegmund Brandt .................................................. 4
On the Cover: Sticky Feet ................................................................................................ 5
Technology: Sensing our Surroundings ..................................................................... 16
A Day in the Life of...: A Glaciologist .......................................................................... 24
Away from the Bench: Recharging Research .............................................................................. 25
Arts and Reviews: Rules of Repetition ................................................................................. 26
History: Credit Crunch ......................................................................................... 28
The Pavilion: Jellyfish Burger .......................................................................................... 30
Initiatives: Engineering the Weather ....................................................................... 31
Dr Hypothesis: Answers to Your Scientific Stumpers ................................................. 32


Issue 15: Easter 2009
Editor: Djuke Veldhuis
Managing Editor: Amy Chesterton
Business Manager: Michael Derringer

Sub-Editors: Jamie Farnes, Ian Fyfe, Jon Heras, Adam Kessler, Yinglin Liu, Silke Pichler, Dan Shanahan, Jo Smith, Arthur Turrell

Second Editors: Fran Conti-Ramsden, Harriet Dickinson, Moira Smith, Laura Soul, Katherine Thomas, Natalie Vokes, Jonathan Zwart

News Editor: Lucinda Lui
News Team: Thomas Kluyver, Lindsey Nield, Swetha Suresh
Focus Editor: Ian Fyfe
Dr Hypothesis: Mike Kenning

Production Manager: Chris Adriaanse
Production Team: Sonia Aguera, Amy Chesterton, Rose Spear, Silke Pichler

Pictures Editor: Ian Fyfe

Distribution Manager: Katherine Thomas
Publicity: Matt Child

ISSN 1748-6920

Varsity Publications Ltd
Old Examination Hall
Free School Lane
Cambridge, CB2 3RF
Tel: 01223 337575

BlueSci is published by Varsity Publications Ltd and printed by Piggott Black Bear (Cambridge) Ltd. All copyright is the exclusive property of Varsity Publications Ltd. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, without the prior permission of the publisher.


Djuke Veldhuis
Editor

AT SOME point during your time at University you, like many others, are likely to have received an email asking you to volunteer for a brain scan. You might even have been asked to do some sort of behavioural task whilst being scanned; maybe you had to solve simple mathematical problems or press buttons in response to seeing certain shapes or words. Clearly, there is a bewildering array of brain research going on, but what exactly is the science behind those pretty pictures? Perhaps you do not care, but just saw an opportunity to get a copy of your brain image. Nevertheless, you

might also have wondered what was going on inside that machine as it hummed and clicked away while you lay still inside the small cylinder. This issue's FOCUS, "Lighting up the Brain", explores the ins and outs, benefits, limitations and future of brain research technology. If that is not enough, elsewhere you can learn about everything from ants, algae and astrophysics to how to successfully resist temptation, conduct fieldwork in the Arctic and use your mobile phone as a research station, to name but a few. So sit back, relax and have a read.

Amy Chesterton

Managing Editor
COME EASTER term, we long for summer sun, warm evenings and lush English gardens. But even if the sun comes out this summer, the vacation is still one large hurdle away. Before we are rewarded with any summer fun, we must first endure exam term. Whether you're an undergraduate revising for exams, a postgraduate organising the final few supervisions or an academic gearing up to mark exam scripts, the entire University is on hold. It'll all soon be over, but each day the apparent significance of exams multiplies in our minds. Instead of stressing, why not take some advice from

the experts. The article "Exercising Self-Control" explains the need for chocolate cake during prolonged concentration, and argues that the last thing we should be doing is resisting temptation! "Nostril Nose Best" provides insight into our thinking: getting rid of that sniffle or blocked nose might also be the key to a clear mind. For those writing coursework, "Faster and Faster" explains the need for error-free 'ones' and 'zeros'. And for those of you on the brink of genius status, take heed, as "Credit Crunch" shows students are rarely recognised for their innovative thinking.


Worth a GaNder

Current research conducted in the Department of Materials Science and Metallurgy will allow white light-emitting diodes (LEDs) to be produced more cheaply, opening the door to more efficient and durable lighting. Over the next few years, conventional light bulbs, based on a white-hot filament of tungsten, will be phased out in favour of the more efficient fluorescent bulbs. And the next revolution in lighting is already on the horizon. Dim, coloured LEDs have been in use for some forty years. White ones are now starting to be used in particular applications – you may well have LED bike lights – but are still too expensive to replace main indoor lighting. A key reason for this is the cost of production. Most attempts to make white LEDs focus on gallium nitride (GaN), a semiconductor which can efficiently convert electrical power to light. Until recently, GaN was grown on expensive sapphire wafers. Research has shown that it can be grown on silicon wafers at about half the cost and twice the efficiency. Besides promising even greater efficiency and lifespan, LED lighting should overcome two problems with fluorescent bulbs. Fluorescent lights contain tiny amounts of mercury, a toxic metal, and have been criticised for the quality of the light produced. LEDs contain no mercury and should be able to achieve a better balance of colour – research is even underway to use them to mimic sunlight for those suffering from seasonal affective disorder. TK

Up in smoke

Exposure to second-hand smoke could increase the risk of developing cognitive impairment, including dementia, according to research led by Dr David Llewellyn of the University of Cambridge. Previous studies identified active smoking as a risk factor for cognitive impairment, and other findings had suggested that exposure to second-hand smoke could impair cognitive development in children and adolescents. However, this research is the first large-scale study to conclude that second-hand smoke can lead to neurological problems in adults. The study used saliva samples from nearly 5000 non-smokers over the age of 50. By measuring the level of cotinine (a by-product of nicotine) in their saliva and collecting a detailed smoking history from the participants, the level of exposure to second-hand smoke was determined. A number of neuropsychological tests were performed to assess cognitive functions such as memory, numeracy and verbal fluency. These results were then added to give a global score, and the lowest ten per cent were identified as suffering from cognitive impairment. One possible explanation proposed is the increased risk of heart disease and stroke which, in turn, are known to increase the risk of cognitive impairment and dementia. Llewellyn commented: "Our results suggest that inhaling other people's smoke may damage the brain, impair cognitive functions such as memory, and make dementia more likely. Given that passive smoking is also linked to other serious health problems such as heart disease and strokes, smokers should avoid lighting up near non-smokers. Our findings also support calls to ban smoking in public places." LN

Why is gambling addictive?

Gambling is an addictive form of entertainment with a global appeal. This might seem surprising given the fact that even experienced gamblers know it's the house that always wins. What, then, causes persistence amongst gamblers who return to the roulette table again and again? A team of researchers from the Behavioural and Clinical Neuroscience Institute at the University of Cambridge has tried to find out why near misses make people want to carry on gambling. Subjects were asked to play a slot machine while their brains were scanned in order to determine active brain regions. If the reel reached standstill one position from the payline, it was treated as a near miss. If the payline was more than one position away, the result was a loss. Surprisingly, the brain scans showed that the same regions (the ventral striatum and the anterior insula) are associated with both winning money and near misses. There is evidence that the insula is also activated when cocaine is taken. Published in Neuron, the study concludes that the brain responds to near-miss outcomes as it does to wins. The next time you're at the slot machines, either lose completely to minimise your debts, or win convincingly. SS
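The near-miss rule from the gambling study can be written as a tiny function. This is only a sketch of the outcome classification described in the text, not the researchers' code; the function name is invented, reel wrap-around is ignored, and positions are measured as simple offsets from the payline.

```python
def classify_spin(stop_position: int, payline_position: int = 0) -> str:
    """Classify a slot-machine outcome by how far the reel stopped
    from the payline: on it is a win, one position off is a near
    miss, anything further is a plain loss (sketch of the rule
    described in the article, not the study's actual code)."""
    distance = abs(stop_position - payline_position)
    if distance == 0:
        return "win"
    if distance == 1:
        # Pays nothing, yet activates the same brain regions as a win
        return "near miss"
    return "loss"

# Near misses are kept separate from other losses in the analysis
outcomes = [classify_spin(pos) for pos in range(4)]
```

The point of the separate "near miss" category is exactly the study's finding: although it pays out nothing, the brain treats it differently from an ordinary loss.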



"Climate CHANGE is a diabolical policy problem," writes Professor Ross Garnaut in his recent book The Garnaut Climate Change Review. He states that its uncertainties, insidious nature, long-term time frame and requirement of international cooperation make it "harder than any other issue that has come before our polity in living memory." In response, he has written a no-nonsense, straight-talking book that outlines the facts in order to inform and aid policy makers, businesses and the public. The book was commissioned by Kevin Rudd, the Prime Minister of Australia. It is written, therefore, from an Australian perspective, but don't let this put you off. Garnaut clearly presents the facts behind the strategies, and they can be extrapolated to apply to other countries.

"The book explains the science behind man-made emissions and their environmental impacts"

The clear explanations and easy-to-read layout make it an excellent reference for understanding climate change. A well-structured outline with chapter summaries of key points makes it easy to find individual topics or overviews. Clearly labelled diagrams and tables help the reader to get into the 'nitty gritty' of the science and statistics. Beginning with a decision-making framework that weighs up the costs and benefits of mitigation, taking into account risk and uncertainty, the book goes on to explain the science behind climate change, man-made emissions and their environmental impacts. Garnaut then looks at our efforts to tackle climate change so far, and how we could move towards global cooperation and agreement. Finally, he looks at the Australian mitigation policy with a focus on the areas of energy, transport and land use. As a student of climate change mitigation technologies myself, I highly recommend this book for anyone who wishes to get to grips with climate change and its complex policy issues.

Ross Garnaut, The Garnaut Climate Change Review, CUP, 2008, £35 RRP

Tamaryn Brown is a PhD Student in the Department of Chemical Engineering and Biotechnology

The HARVEST of a Century by Siegmund Brandt retraces remarkable achievements in physics over the last century in 100 episodes. Each episode gives an insight into the period of discovery, and the influences and life of the scientist. The book starts with Roentgen's accidental discovery of X-rays and passes through the achievements of Curie, Rutherford, Nernst, Einstein, Hubble and Fermi, culminating in Davis and Koshiba's finding of the mass of the neutrino. The discovery of the atom, lasers, X-rays, optics, radioactivity, the laws of thermodynamics, quantum theories, and the workings of conductors, transistors, nuclear reactors and magnetic resonance all find a place here. The book is richly illustrated with photographs of scientists, instruments and excerpts from scientific communications, as well as references to original papers at the end of each chapter. It moves along chronologically, so at times the episodes feel disconnected, since the early principles that form the basis of a discovery are often not implemented practically until much later. Non-physicists may also find it challenging to grasp the many theoretical concepts presented. Re-reading is definitely essential, and a background of secondary school physics is also necessary. The author does introduce basic concepts but, at times, they are in the middle of a chapter and too simplistic when compared to the rest of the text. Nevertheless, the reader's attention is sustained, as each episode consists of only three to four pages. The beauty of this book lies in the fact that, although volumes can be written about each major discovery, the author has identified and presented the most important concepts and experiments. It differs from a textbook in so far as every chapter is a story detailing how much was known at the time, what questions were asked and how these were answered. Overall, the book makes for good reading and, apart from being informative, it helps one appreciate the ingenuity and passion of great scientists.

Siegmund Brandt, The Harvest of a Century, OUP, 2008, £35 RRP

Swetha Suresh is a PhD Student in the Department of Pharmacology


Sticky Feet


Chris Adriaanse interviews award-winning photographer Thomas Endlein about his striking cover image

Ants, we know, are hard workers. Perhaps none more so than weaver ants, which can easily carry more than one hundred times their own bodyweight. Running upside-down while carrying multiples of their own weight, the ants must resolve the conflicting needs of adhesion and agility. How they manage to do this has been the research of Thomas Endlein, first during his PhD in the Department of Zoology and now as a research assistant. Thomas has been studying the ants using a variety of methods – high-speed video recordings, microscopy and force measurements – to determine how the ants are able to hold on so tightly to surfaces. It's not so much their ability to hold heavy weights, but that they can hold them even while stuck to the ceiling, defying gravity.

Some of the answer lies in a fluid secreted by the ants. It has long been known that insects stick by secreting a fluid in between their soft pads and the surface they're on. However, this fluid only solves half the mystery: once stuck, how do they remove themselves? Thomas' research found that the ants use several clever mechanisms to precisely control their 'stickiness'. The ants can alter the number of legs they keep in contact with the surface. Weaver ants normally run with only three of their six legs, but when weights were attached to their bodies they increased the number of legs in ground contact. The ants also change the angle of their legs towards the surface, and can alter the size of the sticky pads themselves to fine-tune their stickiness.

Studying ants can be somewhat tricky as they have a tendency to escape. Their highly adhesive feet allow them to crawl up almost any surface. A special coating is used to contain the colony in Zoology, housed in a temperature and humidity controlled room – but they can still find a way out, as Thomas explains: "We often have escapees. You can come in early one morning and find them all over the lab. With other ants we use a vacuum cleaner to collect them but, because Weaver ants are so sticky, we have to pick them up individually."

The Weaver ants' sticky feet and weight-bearing abilities help them to build remarkable treetop nests. Tree leaves provide the basic building blocks, but it's up to the ants to secure and seal the nest. Leaves are held together by the ants' feet and jaws. Then, carrying a recently hatched larva in their mandibles, they stimulate the secretion of a silk thread with their antennae. "They use the larvae as living needles to stitch the leaves together while others act as 'living clamps' to hold the leaves in place, motionless for hours, which is all quite amazing."

The set-up for the cover photo was surprisingly easy. Weaver ants are well known for their aggressive and territorial behaviour; they'll snap at anything you put in front of their jaws. "I made use of the reflex they show when they are holding leaves together: once they grab it they won't let go, and they stay there holding on motionless for hours. This is exactly what they did with the weight." With the ant holding tight and staying still, Thomas had plenty of time to compose his photo.

For more of Thomas' work see

Chris Adriaanse is a PhD student in the Department of Chemistry

THOMAS ENDLEIN




Exercising Self-Control

Adam Kessler justifies the need for self-indulgence during exam time

I want to prove that you are better than a rat. I know it sounds crazy, but stick with me. Imagine that you are holding a slice of warm, rich chocolate cake. It smells of fudge and lazy summer days. But, just as you lift it to your mouth, fingers tingling with desire, someone tells you that the cake is poisonous. It will taste good now, but in a few weeks you'll keel over dead. If you believe them, you'll be able to resist the temptation to gorge yourself on the deadly chocolate. You could put the cake down, and walk away. Unlike a rat, which would have dived straight into the cake, you have the ability to abstain from current pleasure in order to avoid future discomfort. This restraint is not unique to humans, but it has evolved to an extremely complex level. We call it self-control, and it is one of the most important characteristics we have. Our ability to show restraint can have a whole range of implications. Poor self-control has been correlated with depression, OCD, aggression and crime, while good self-control has been linked to strong leadership skills, better relationships and greater academic and social success. Nobody really knows where our self-control comes

from or why some people have more than others. By outlining current psychological theories about the role that self-regulation plays in our lives, I want to convince you that it is worth searching for.

"Nobody really knows where our self-control comes from"

Self-control operates a bit like a muscle. An hour in the weights room leaves you exhausted and weak. Your muscles get tired and you have to rest and re-energise. Similarly, if you exert self-control you deplete a limited store of mental 'energy'; you are less able to exercise self-control in the near future. This can be demonstrated by giving participants two consecutive, unrelated tests of self-regulation. In one experiment participants watched a distressing film while refraining from showing any form of emotion. This required self-control as they struggled to repress their emotional reaction. This was followed by a test of physical self-control; the subject had to hold a handgrip

exerciser tightly for as long as possible. A control group watched the same film, but were allowed to express any emotion they wanted. They performed far better on the subsequent self-control task. Restraining emotions impairs your ability to hold on to a handgrip exerciser – an odd result, given that there is no obvious connection between the two. But this effect has been shown over and over again. People have been asked to drink sour juice, stick their hand into cold water, or not think about white bears. One task of self-control will always inhibit performance in a subsequent task. Exercising self-control depletes a limited resource, which researchers call 'self-regulation'. Self-regulation is needed not just for self-control, but for any task requiring us to regulate or change our mental processes. This includes making decisions, showing initiative, or giving presentations. In fact, most higher-order mental functions require self-regulation. Evidence for this comes from experiments where performing one self-regulatory task impairs performance on a second. For instance, a study by psychologist Brandon Schmeichel and colleagues looked at the effect of self-regulation on logical


reasoning. Participants were initially asked to watch a video while ignoring random words which appeared on the screen. This required self-regulation: the active control of mental processes. A control group watched the video, but without the words. The experimental group then performed worse on subsequent tasks of logical reasoning. Experiments like these indicate that many important mental functions rely, in some way, upon a single resource. This has some practical consequences. This term, allow yourself to eat as much chocolate as you want. If you've got an important interview, ask someone else to choose which tie you should wear. Fortunately, it is also possible to increase self-regulation. One of the best ways is to make yourself happy. Experiments that induce happiness can reverse the decline in self-regulatory performance, giving you all the more reason to eat chocolate in exam term. Another study argues that exerting autonomous self-control – making your own choices – does not deplete self-regulatory ability, while forced self-control does. The idea is that if a nasty experimenter tells you to do something, it depletes self-control, but if you do it of your own accord, then you'll be fine. In support of this, one experiment asked participants not to eat cookies, in order to deplete their self-control. A questionnaire assessed their reasons for not eating the cookies. Some people didn't eat cookies because they were told not to. Other people wouldn't have eaten the cookies anyway, for more autonomous reasons; for instance, being on a diet. The latter group showed better self-control on a subsequent

Could you resist this delicious chocolate cake if you knew it was poisonous?

task, implying that it is only forced self-control that depletes your resource. The research that has been done on self-regulation is fascinating, but by no means complete. An observant reader may have noticed that I have avoided defining self-control. The literature offers a bewilderingly heterogeneous range of definitions, and it is extremely difficult to extract anything sensible. A typical paper, "Self-regulatory failure" by Vohs and colleagues, conceptualises self-regulatory resources as an intrapsychic mechanism that controls desires, impulses and motivation. This is an almost impossibly broad definition. Saving for a pension, not thinking of white bears, and slamming on your brakes at a red light can all be seen as manifestations of self-control. These are all complex processes, and it seems unlikely

that a single construct underlies them all. It is far more likely that what we call 'self-control' consists of multiple psychological systems. Many of the researchers that I have cited do not recognise this, and persist with a broad, sweeping definition. The experimental techniques they choose do not distinguish between different types of self-control. However, this should not diminish the value of the research. The results I have described have been reliably observed, and we are only just beginning to explore the implications. With more research and more rigour, we could come to understand one of the most important concepts in human psychology.

Adam Kessler is a Part II student in the Department of Physiology, Development and Neuroscience
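The 'muscle' metaphor running through the article can be caricatured in a toy model: exerting self-control on one task drains a shared resource, leaving less for the next. Every number and name here is invented purely for illustration; this is not a model from the psychological literature.

```python
# Toy caricature of 'ego depletion': a single shared resource is
# spent by the first self-control task, worsening the second.
# All values are invented for illustration only.

RESOURCE = 100.0          # hypothetical self-regulation 'store'
SUPPRESSION_COST = 40.0   # assumed cost of suppressing emotion

def handgrip_seconds(resource: float) -> float:
    """Toy assumption: grip endurance scales with remaining resource."""
    return resource * 0.6

# Experimental group suppressed emotions during the film first;
# the control group expressed emotions freely, spending nothing.
experimental = RESOURCE - SUPPRESSION_COST
control = RESOURCE

# The depleted group holds the handgrip for less time, mirroring
# the result described in the article.
depleted_worse = handgrip_seconds(experimental) < handgrip_seconds(control)
```

The model is deliberately crude: its only job is to make the article's claim concrete, that performance on the second task depends on what the first task cost.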


Animal Behaviour


It's not just humans that show self-control. Despite my initial scorn, rats can exert limited self-restraint. A simple experiment offered a rat two holes to poke its nose into. A 'nosepoke' into one hole was rewarded with an instant food pellet. A nosepoke into the other hole resulted in five food pellets, but only after a time delay. By varying the length of the delay, you can change the amount of self-control required. Most rats are good at anything up to about a hundred seconds. Primates, of course, can do far better than that. Chimpanzees and orangutans have been taught to use a straw to suck fruit juice out of a container. When presented with a choice between a piece of fruit and a straw, they picked the straw, even if no container was present. They seem to know that the straw will eventually be more useful than the fruit, and so ignore the possible short-term gain.
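The two-hole contingency in the sidebar can be summarised as a short sketch. The roughly 100-second tolerance comes from the text; the function, its names and the all-or-nothing "gives up" behaviour are otherwise illustrative assumptions.

```python
def pellets(hole: str, delay_s: float = 0.0,
            max_tolerated_delay_s: float = 100.0) -> int:
    """Pellets a rat earns for a nosepoke under the schedule described:
    one hole pays a single pellet instantly, the other pays five but
    only if the rat waits out the delay. The ~100 s tolerance is the
    article's rough figure for most rats; the rest is illustrative."""
    if hole == "immediate":
        return 1
    if hole == "delayed":
        # Assumption: past its tolerated delay the rat gives up
        # and earns nothing from that trial
        return 5 if delay_s <= max_tolerated_delay_s else 0
    raise ValueError(f"unknown hole: {hole!r}")
```

Varying `delay_s` is the experimenter's knob: the longer the delay the rat must tolerate for the larger reward, the more self-control the trial demands.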



Algae Living

Daniela Krug, Karuga Koinange and Chris Bowler look at the future of green living

Think of algae and one might imagine a murky pond, a neglected swimming pool or perhaps a deserted stretch of coastline strewn with seaweed. In any case, algae are not normally associated with excitement or practical usefulness. However, this may be about to change. Research worldwide is exploring the potential of algae as a clean, renewable energy source. They may have the potential to provide a truly

"Algae and people may not present themselves as obvious bedfellows"

'green' solution to the ongoing global energy crisis. Algae differ from conventional biomass crops in that useful energy can be harnessed by different means. Like traditional crops, algae can be burnt to release energy. Uniquely, algae can also be used to produce hydrogen, a far cleaner and greener method of energy production. Under certain conditions – namely the absence of sulphur – algae switch from the production of oxygen by photosynthesis to the production of

hydrogen. To capture this hydrogen and subsequently use it in conjunction with a fuel cell would open up the potential for totally CO2-free energy consumption. It was this mode of energy release that inspired a recent multidisciplinary design project by Cambridge architects and engineers. The team set out to investigate the potential for the micro-generation of hydrogen from algae within a domestic residential context, through a process of experimental design. They were especially interested in exploring how the needs of algae cultivation and human comfort could be reconciled in a single architectural solution. Early in the design development process it became clear that certain environmental constraints for successful algae cultivation – namely light and heat – were analogous to those required by humans. Photosynthetic organisms, such as algae, generally thrive on exposure to high levels of light. However, the capture of the gaseous hydrogen produced by the algae necessitated housing them in some form of sealed, transparent tank. Consultation with other researchers in the field of algae cultivation, who had completed mock-ups of such tanks, confirmed that they were highly prone to over-heating. Algae are killed at temperatures over 30°C. Humans, of course, have a similar comfort threshold. Thus, it became clear that the potential existed for the algae and domestic spaces of our ‘Algae House’ to enter a symbiotic relationship, whereby each promotes the optimum environmental conditions for the other. The form of the ‘Algae House’ façade was developed as a direct consequence of this constraint. The guiding objective in the design was that, whilst temperature stability was essential, it was also desirable to obtain the maximum amount of light

from the sun. Multiple cylindrical tubes of small diameter were proposed to provide optimum surface area. A fixed glazing system, shaded by louvres (horizontal slats) and surrounded by a pool of water, was developed that would independently control solar heat gain and light throughout the day, as well as across the year. To allow the algae to function efficiently, and to reduce the need for artificial lighting, they would need as much sunlight as possible without risking over-exposure. Therefore, through careful consideration of the algae tubes’ orientation to the sun, direct solar heat gain was allowed only during winter months and on spring and autumn mornings and evenings.

“Algae technologies could play a significant role in our built environment”

As the house plans illustrate, the shallow pool of water, or ‘moat’, that lies adjacent to the façade is intended to perform two basic functions. Firstly, the reflective properties of water are such that the amount of light reflected increases sharply as the angle between the light and the surface of the water decreases. This means that the pool reflects low-angle sun up to the overhanging algae façade, whilst absorbing more of the higher-energy, high-angle, midday summer sun. Secondly, water absorbs up to a hundred times more energy from infra-red light than from visible light. As heat energy is mostly transferred by infra-red light, the water should usefully absorb much of the heat from direct sunlight before reflecting it up to the algae. The amount of reflection was optimised by the addition of a reflective surface or coating to the pool floor. In the summer the pool also benefits the occupants of the house by providing cooling, as air is drawn into the house after passing across the water. The movement of the reflected light playing across the green algae tubes and the living room ceiling would also create a visually interesting and unique living space.

The total amount of energy produced through hydrogen production was calculated assuming a 10% efficiency in the conversion of light energy to hydrogen. Based on this calculation, 75 square metres of algae is estimated to produce 6570 kilowatt-hours of hydrogen per year – enough to drive an electric MINI E car from London to Beijing and back three times. To make the most efficient use of this energy, the majority of it should be converted to electricity through a fuel cell with an efficiency of approximately 50%. The associated waste heat, an inevitable consequence of this technique, could be recovered to satisfy the house’s heating needs.

“Research projects worldwide are exploring the potential of algae”

Algae and people may not present themselves as obvious bedfellows, but this project shows that the use of algae as an energy generator within a house is not only feasible, but that cohabitation can result in a self-sustaining symbiotic system which opens up many exciting architectural possibilities for ‘green living’. This recently concluded project, developed as part of a course module, has generated great interest and enthusiasm within our team. We feel that algae technologies could play a significant role in the future of our built environment. This conviction has motivated us to establish a web platform in order to inspire fellow students, academics and professionals to think of algae as a sustainable resource. We encourage you to get in touch if you have a general interest in algae or if you want to get involved in developing the algae living concept further.

“Algae thrive on exposure to high levels of light”

Daniela Krug, Karuga Koinange and Chris Bowler are MPhil students in the Department of Architecture

Cross-section of the algae house designed by the Cambridge team
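As a back-of-envelope check of the figures quoted in the article (75 square metres of algae, 10 per cent light-to-hydrogen efficiency, 6570 kWh per year), the arithmetic can be sketched as follows. The 100 W/m² year-round average solar irradiance is our own assumption, roughly right for the UK; it is not stated in the article:

```python
# Back-of-envelope check of the Algae House energy estimate.
# Assumption (not from the article): average UK solar irradiance of
# ~100 W per square metre, averaged over day/night and the seasons.

AVG_IRRADIANCE_W_PER_M2 = 100   # assumed year-round average
AREA_M2 = 75                    # algae facade area quoted in the article
EFFICIENCY = 0.10               # light-to-hydrogen conversion, as quoted
HOURS_PER_YEAR = 365 * 24       # 8760

incident_power_w = AVG_IRRADIANCE_W_PER_M2 * AREA_M2     # 7500 W falling on the facade
hydrogen_power_w = incident_power_w * EFFICIENCY         # 750 W stored as hydrogen
annual_kwh = hydrogen_power_w * HOURS_PER_YEAR / 1000    # kWh of hydrogen per year

print(f"Estimated hydrogen energy: {annual_kwh:.0f} kWh per year")
# With these assumptions the estimate lands on the article's 6570 kWh.

# Converting through a ~50% efficient fuel cell, as the article suggests:
electricity_kwh = annual_kwh * 0.5
print(f"Usable electricity: {electricity_kwh:.0f} kWh per year")
```

With the assumed irradiance the sketch reproduces the article's 6570 kWh figure exactly, which suggests the design team used a similar average.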




Nostril Nose Best

Cat Davies explores the ins and outs of nasal cycling

Nasal cycling: not a new Olympic sport for 2012, but the alternating dominance of each nostril - a physical phenomenon present in 85% of mammals, probably including you. As we go about our day, one nostril is more open, allowing more air to flow through it than its resting partner. A few hours later, the open nostril rests and the other flares and takes control. Try it. Put a finger under your nose and you will feel a stronger, warmer sensation on one side. Try it again in a couple of hours’ time and you may well find the opposite. Unlike the test you may have just done, researchers have not been measuring nasal cycling by sitting in labs with their fingers under each other’s noses. It has been studied in a number of ways: hot-wire anemometers (ouch) should perhaps remain unexplained; the Zwaardemaker method relies on a calibrated cold mirror and condensation; and a more recent technique involves participants exhaling onto a piece of glass with red dye and then observing the resultant ink bloom. The wonky love hearts that are left behind reveal a striking manifestation of our nasal asymmetry.

This alternating vasodilation and vasoconstriction of the nostrils was first documented by Kayser, a German rhinologist, in 1895, and developed by Heetderks in 1927. It has since been embraced by yoga enthusiasts in the practice of Pranayama (controlled breathing as meditation). Research into nasal cycling was taken up with gusto by David Shannahoff-Khalsa at the University of California in the early 1990s, leading to a number of publications, and has more recently been investigated in relation to handedness, autism and early language impairment.

“The nasal cycle is linked to the rhythm of alternating brain hemispheric activity”

So why should cycling happen? To use an analogy from elsewhere in the body, lateralisation in the brain has been postulated to make maximum use of neural tissue and avoid duplication of function. However, nostrils need not multitask and, moreover, don’t wear out unless

they have been the unfortunate conduits to substances other than air. The intriguing claim is that the nasal cycle is linked to the rhythm of alternating brain hemispheric activity, and governed by the autonomic nervous system (ANS). Using neuroimaging techniques, positive correlations have been found between hemispheric activity and dominance in the opposite nostril. Suddenly and surprisingly, the nose is called upon as an integral part of cognition! We even do better in certain kinds of test when forced to breathe through the optimal nostril. Shannahoff-Khalsa and Susan Jella investigated performance in cognitive tests by forcing their undergraduates to breathe through either the left or the right nostril (crocodile clips, anyone?). When taking right-brain-based spatial tasks, the students did significantly better during left-nostril breathing, whilst on the verbal tasks, more closely associated with the left hemisphere, they scored higher during right-nostril breathing, though not significantly so. The asymmetry in significance in this case may be due to multiple brain regions


It is thought that flamingos stand on one leg while resting the corresponding brain hemisphere



mediating the skills required in the specific types of task. Dolphins have mastered the ability to let one half of their brain rest while the other side stays on the lookout for predators and reminds them to go to the surface to breathe. Recent evidence from nasal cycling research suggests that there may be some propensity for one side of the human brain to be more active whilst the other takes a back seat, regardless of the task at hand. Half-sleeping has been noted in other species too – note the common sight of the ‘one-legged’ flamingo, with ducks, geese, storks and herons also making like Maasai tribesmen for stretches of time. Various theories abound, including the idea that these birds are resting only one hemisphere at a time, the resting leg corresponding to the contralateral sleeping hemisphere. The other side supports the body and maintains a degree of alertness while the bird is in a vulnerable state. Evolutionarily, the theory is persuasive. So could nasal cycling be an underdeveloped form of the same phenomenon? A feasible, but untested, hypothesis is that each hemisphere rests and recuperates in roughly two-hour cycles. However, wouldn’t it be more efficient if neural resources were activated as and when required, with the corresponding nostril following? An implication of nasal cycling is that if the dilated nostril is associated with greater activity in the opposite hemisphere, the less active side of the

Dolphins rest one of their brain hemispheres at a time, keeping the other half of the brain awake, exerting control over vital functions.

brain may compromise the systems it mediates. If we breathe on one side for too long, could certain abilities deviate from normal development? What about cases of nasal blockage or septum misalignment? Can the brain exploit its plasticity to overcome such serious implications of minor physical anomalies? Mention nasal cycling and a common response is one of surprise. Nevertheless, the lateralisation of the brain and body is widely observed. Whenever we pick up a pen, put the phone to our ear, cross our legs, interlace our fingers or tilt our heads to be kissed, we are illustrating the body’s inherent lopsidedness. The popular media commonly cite left- and right-brained tendencies to illustrate individuals’ strengths and weaknesses, and lateralisation of the brain is now a major topic within the cognitive sciences; there is even a cross-disciplinary international journal focused exclusively on lateralisation in human and non-human species. Linguists have studied neural regions and brain lesions in relation to language ability since the time of Paul Broca in the late 19th century. Such research is well established and widely respected, and what it has in common across the sub-disciplines is the top-down nature of the brain governing the body. So does what I always understood to be a facial appendage designed to warm the air we breathe really have the capacity to influence brain function? I would be much more likely to accept this if the causal direction were the other way around, i.e. brain beats nostril, but considering that the ANS and the hypothalamus play president and vice-president in this system of government,

it appears that nostril dominance originates from the brain itself, and then in turn affects cortical activity. The evidence seems to suggest that the ANS starts the race, the nose cycles and the brain follows behind.

“Does our nose really have the capacity to influence brain function?”

So if the story of nasal cycling is true, how should we best harness it? Plug our left nostril during that presentation at work? Stick a finger in the right side during the driving test? Market a nose-flow detection kit for task and brain optimisation? As it seems that achieving ambi-nasality is beyond us mere mortals, perhaps we’ve just got to embrace the times when we’re down with a cold, for that is when we are truly cerebrally balanced.

Cat Davies is a PhD student at the Research Centre for English and Applied Linguistics


Cosmic Lighthouses

Jamie Farnes describes the discovery of pulsars

Astrophysics is arguably the most difficult to visualise of all the physical sciences. Attempting to envisage the vast 93 million miles between the Earth and the sun is exceptionally difficult, while comprehending the 13.6 billion light years to the edge of the observable universe is, perhaps, impossible. Yet despite these enormous scales, mankind has successfully developed highly sensitive technology capable of probing the mostly dark, empty outskirts of our universe. Since the 17th century, telescopes have uncovered evidence of astrophysical objects, including a host of exotic phenomena associated with dying stars, such as white dwarfs, supernovae and black holes. Pulsars are one of the many possible cosmic leftovers from the explosive death of a star, known as a Type II supernova. This occurs when a star contains enough matter that gravity eventually causes its core to collapse, releasing vast amounts of energy and causing a rebound shock-wave that culminates in the outer layers of the star exploding in an enormous fireball. The core temperature of this explosion is around 100,000,000,000°C,

with an energy release equivalent to 100 trillion trillion million thermonuclear weapons. This remarkable stellar death leaves a highly dense object, known as a neutron star, at the centre of the explosion. Neutron stars are effectively giant atomic nuclei; those whose rotation makes them detectable from Earth are known as pulsars. Pulsars have an exceptionally strong magnetic field and a mass of approximately 1.5 times that of the sun, contained within a radius of just 15 kilometres. This means that they are incredibly dense: a teaspoonful of material from a pulsar brought back to Earth would weigh about as much as 200 million African elephants!

“The signal wasn’t always there on the days when it should have been”

Pulsars rotate up to several hundred times a second and, through processes still not fully understood, emit radio waves in a fine beam. The emission of radio waves makes them detectable by telescopes on Earth,

as the emitted beam of electromagnetic radiation sweeps across the Earth in a fashion similar to a lighthouse beam sweeping across the sea. This recurring sweeping motion results in a highly regular, periodic radio signal.

The discovery of pulsars was made in 1967 by Professors Antony Hewish and Jocelyn Bell Burnell using the Interplanetary Scintillation Array at the Lord’s Bridge site in West Cambridge. Bemused by the bizarre regularity of the detected radio pulses, the researchers initially considered the signal to be man-made and attempted to locate its source. However, working with the signal was not simple, as Hewish explains: “The signal wasn’t always there; on the days when it should have been, it just simply wasn’t.” It was soon realised that the signal moved across the sky at the same rate as the stars – a consequence of the Earth’s rotation. This did not rule out the possibility that the equipment of other astronomers was responsible. “If the pulses were being initiated on the ground and coming in via reflection from the ionosphere, then the signal had to be coming from somewhere down south, maybe in France. I had a colleague in the Royal Greenwich Observatory and telephoned him to ask if he could think of any astronomical observations that could be doing this, and he couldn’t. Ultimately, I began to think maybe there was something actually astronomical about it,” Hewish recalls.

“It could be a communication from intelligent extraterrestrials”

Upon confirmation that this repeating radio pulse was indeed originating from space, they considered whether the signal could be a communication from intelligent extraterrestrials – a thought that led the researchers to jokingly dub the signal LGM-1, for ‘Little Green Men’. Interestingly, this raised many ethical concerns: if you discover intelligent life elsewhere, is it safe to attempt to communicate? As Hewish explains, “If they were intelligent signals, perhaps they were waiting for a signal from us because they were on a planet like Earth, which is running into problems. Overcrowded planets were quite a possibility and perhaps they were launching a signal to see if there were any green fields out there that they could come to and dominate.” If the signal was in fact sent by sentient beings on a planet, then there should have been an associated change in frequency of the received radio signal due to the orbit of the home planet about its parent star, a phenomenon known as orbital Doppler shift. Hewish set about measuring this, and no shift in frequency was found, ruling out a planet as the origin of the signal. This confirmed the true nature of the signal as a newly discovered natural phenomenon, but the puzzle of what was causing the signal remained. For Hewish, the mystery was a very exciting one: “It was a wonderful time, a terrific time, but it certainly kept me awake at night!”

Further investigation showed that the peculiar pulsating object was less than 1000 kilometres in diameter and was roughly 100 light-years away. Meanwhile, additional pulsars were found, including one in the Crab nebula – a remnant of a supernova which lit up the night sky in 1054. As pieces of the puzzle slotted into place, the correct interpretation of these signals as originating from rotating neutron stars formed in supernovae was finally proposed. For this remarkable discovery Antony Hewish was awarded the Nobel Prize for Physics in 1974.

3D representation of pulsar J0108

Over 40 years since their discovery, and with more than 1,800 pulsars now detected, these enigmatic objects keep providing new scientific information. Recent discoveries include pulsars that emit X-rays instead of radio waves, a pulsar with three orbiting planets and binary pulsar systems which consist of two pulsars orbiting each other. With an estimated 70,000 observable pulsars in our own galaxy, the Milky Way, only a fraction have been found. As pulsar detection is inevitably limited by the sensitivity of modern radio telescopes, the problem beckons for bigger and better equipment. Thankfully, the calls for more advanced technology will be answered with the completion of the largest radio telescope ever built, the Square Kilometre Array (SKA).

The proposed SKA telescope

The SKA is due to be constructed in either South Africa or Australia and will consist of around 5,000 dishes alongside additional observing stations up to 3,000 kilometres away. As a consequence, the SKA will have 50 times the sensitivity of any existing radio telescope and will be capable (in theory) of detecting extraterrestrials’ television signals from stars as far as 1,000 light years away. It is planned to be operational by 2016 and will cost an impressive US $1 billion. The SKA will certainly revolutionise our understanding of pulsars, as it should be able to detect all of the 70,000 observable pulsars in our own galaxy and will also, for the first time, be able to detect pulsars in other nearby galaxies such as Andromeda. It is also hoped that the SKA may detect the first black hole and pulsar binary system (a pulsar orbiting a black hole). Binary systems are particularly useful in that they allow physicists to make precise tests of Einstein’s theory of general relativity, which describes gravity as a geometrical distortion of space-time. Indeed, binary pulsars have already been used to provide indirect evidence for the existence of gravitational waves, a key prediction of Einstein’s theory. Gravitational waves bend space-time, and this subtly changes the distance between two points in space. Using the SKA as a precise timing array to time pulsars with a precision of 100 billionths of a second over 10 years, it will be possible to measure tiny distance fluctuations between us and the pulsars as a consequence of gravitational waves. This would further confirm Einstein’s theory and also provide an entirely new way of observing the universe – via gravitational waves instead of just the electromagnetic waves currently used.

So do pulsars have any further surprises in store? Hewish believes they do: “Pulsar science is only just beginning; there is all sorts of science that you can do if you detect enough pulsars, and with more of these binaries turning up we are starting to directly sample the stellar atmosphere of pulsars. If we can find a pulsar orbiting a black hole, that would be a golden dream, and there’s no reason why we shouldn’t.” As is always the case in science, who knows what serendipitous discoveries could be in store in the future?

Jamie Farnes is a PhD student in the Department of Physics
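The density claim earlier in the article, that a teaspoon of pulsar material would weigh as much as 200 million African elephants, follows directly from the quoted mass and radius. In this sketch the teaspoon volume and elephant mass are our own rough assumptions, not figures from the article:

```python
import math

# Figures quoted in the article
SOLAR_MASS_KG = 1.99e30
pulsar_mass_kg = 1.5 * SOLAR_MASS_KG   # ~1.5 solar masses
radius_m = 15_000                      # 15 km radius

# Rough assumptions (not from the article)
TEASPOON_M3 = 5e-6                     # ~5 millilitres
ELEPHANT_KG = 5_000                    # an adult African elephant

# Treat the pulsar as a uniform sphere
volume_m3 = (4 / 3) * math.pi * radius_m ** 3
density_kg_per_m3 = pulsar_mass_kg / volume_m3      # ~2e17 kg per cubic metre

teaspoon_kg = density_kg_per_m3 * TEASPOON_M3
elephants = teaspoon_kg / ELEPHANT_KG

print(f"Density: {density_kg_per_m3:.1e} kg/m^3")
print(f"One teaspoon: {teaspoon_kg:.1e} kg, about {elephants / 1e6:.0f} million elephants")
```

The result comes out at roughly 200 million elephants, in line with the article's figure.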


Francisco Monteiro looks at the remarkable achievements in error-free digital communications

We are the first generation that is able to contact friends on the other side of the world, from anywhere, at any time. Whether in the living room or in the middle of a park, we can use a tiny laptop, apparently connected to nothing except the air we breathe, to chat with friends on a webcam whilst a missed TV show streams in another window. We take this for granted, yet the development of error-free wireless transmission is one of the most astonishing intellectual achievements of modern science. Most of us know that any piece of music, painting or text can be represented by a combination of just two symbols, known as binary digits or bits (for simplicity, we call them zeros and ones). And we know that we want lots of them coming to us in a short time. But marketing tells us to ask for higher speeds, and this is misleading. Data can be received more quickly if more bits are transmitted per second, but the bits themselves do not travel any faster. So what marketing should really tell us is to ask for a higher bit rate, not a higher speed.

These digital signals, in the form of zeros and ones, must be detected and decoded against corrupting background noise. For example, temperature causes random movement of electrons in receivers, which disrupts the signal. Error-free transmission of binary digits under such conditions is not easy. Some ones may be mistaken for zeros and vice versa, and errors increase with faster bit rates. The challenge is to maximise the bit rate whilst minimising errors.

Progress in error correction codes allowed error rates to approach the Shannon limit

Is there a limit to the bit rate we can achieve whilst keeping the link free from errors? This was one of the questions


that Claude Shannon asked in his 1948 seminal paper, A Mathematical Theory of Communication. Shannon formulated the concept of a channel’s information capacity: the maximum achievable rate of error-free data transfer in a given channel (the Shannon limit). He showed

that if we transmit below the capacity of a channel, some code should exist that would allow the correction of all the bits that have been corrupted. It is similar to a word processor suggesting corrections for misspelt words – more specifically, the proficiency with which it identifies the most likely correct word.

“Marketing tells us to ask for higher speeds, but this is misleading”

Some of the brightest mathematicians, engineers and computer scientists devoted themselves to the problem of finding such a feasible error correction code. However, by 1993, even the best codes were still performing far from the capacity limit. Then the unexpected happened. At a leading conference, a paper claiming to present a feasible family of codes (dubbed turbo-codes) that operated near the Shannon limit was given by Claude Berrou and Alain Glavieux, two French engineering professors who were rather unknown at the time to the coding theory community. “They got it wrong,” people mumbled


at the end of the presentation. “They must have forgotten to divide by two somewhere!” Everybody rushed back to their labs and tried to replicate the results. They could not believe what they found: turbo-codes were performing just as claimed. However, it was unclear why they worked.

“By 1993 even the best codes were still performing far from the capacity limit”

At around the same time, Cambridge Professor David MacKay, along with Radford Neal at the University of Toronto, was looking at the problem from a fresh perspective. In 1995, he devised codes operating even closer to the Shannon limit. For some time, his Low Density Parity Check codes (LDPCs) made Cambridge the home of some of the computers running the best error-correcting codes in the world. Interestingly, his research revealed that LDPCs had already been devised by MIT professor Robert Gallager in his 1962 PhD thesis, but had been forgotten. This was probably because there was not enough computing power at the time to implement them, or because he did not include them in his textbook, published in 1968. MacKay’s papers triggered a boom of research, and LDPCs were further refined by researchers in America and Switzerland. Currently, turbo-codes play a central role in the correct detection of the bits received by mobile broadband, and help to receive images from the probes on Mars. The patent-free LDPCs will take their place soon.

It had taken almost 50 years to reach the Shannon limit. But a further burst of research in the second half of the 1990s proved that the maximum possible bit rate within a fixed spectrum had not been reached. Shannon’s formula for typical electrical channels considered thermal noise only, not additional “perturbations” such as multiple reflections of the signal in the environment, as is the case in wireless communication. For many years, this type of “self-interference” was perceived

MIMO space-time processing takes advantage of multiple reflections that act to artificially create independent communication streams

as an additional obstacle to correct signal detection at the receiver. However, it was later proven, mathematically and experimentally, that by considering space in addition to time when designing a code, the Shannon limit for a single antenna could be surpassed. With rather complex algebra and computing, we can artificially create several independent communication streams using so-called space-time coding on multiple-input multiple-output (MIMO) systems. In electronics, this translates to the use of multiple antennas on the outside and much more processing complexity on the inside.

“4G mobiles will reach rates of up to 1 gigabit per second”

The same MIMO principles are now being used to take advantage of the different reflecting paths that light waves can take inside optical fibres. Even in

bundles of landline cables, the mutual interference can be used in a similar way. Soon, 3.5G mobiles will provide gross bit rates of up to 100 megabits per second (Mbps) and, inside the house, the next Wi-Fi standard will provide up to 600 Mbps. Later, 4G mobiles will reach rates of up to 1 gigabit per second. To put that in perspective, in 2008 the average download speed in the UK was 4 Mbps. At 2 Mbps it takes 47 minutes to download a typical film; at 10 Mbps this is already down to under ten minutes, so at 600 Mbps it will literally take seconds. These are the plans for the next decade, but a revolution has recently started in academic circles: network coding theory and collaborative networks have all users in a network helping all other users to sustain the error-free bit flow. At this stage, the capacities of such networks are unknown, and a new Shannon is needed.

Francisco Monteiro is a PhD student in the Computer Laboratory and the Department of Engineering
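The download arithmetic in the closing paragraph, and the Shannon limit discussed throughout, can be made concrete in a few lines. The 700 MB film size (which matches the quoted 47 minutes at 2 Mbps), the 20 MHz bandwidth and the 30 dB signal-to-noise ratio are illustrative assumptions; the formula C = B log2(1 + S/N) is the standard form of Shannon's result for a thermal-noise-limited channel, which the article refers to but does not spell out:

```python
import math

# A "typical film" consistent with the article's 47 minutes at 2 Mbps:
FILM_BITS = 700 * 8 * 1e6            # assume a ~700 MB file

def download_minutes(bit_rate_bps: float) -> float:
    """Time to transfer FILM_BITS at the given bit rate, in minutes."""
    return FILM_BITS / bit_rate_bps / 60

for label, rate in [("2 Mbps", 2e6), ("10 Mbps", 10e6),
                    ("600 Mbps", 600e6), ("1 Gbps", 1e9)]:
    print(f"{label}: {download_minutes(rate):.2f} minutes")

# Shannon capacity of an idealised noise-limited channel: C = B * log2(1 + S/N).
# Bandwidth and signal-to-noise ratio here are illustrative assumptions.
bandwidth_hz = 20e6                  # assume a 20 MHz channel
snr_linear = 10 ** (30 / 10)         # assume 30 dB signal-to-noise ratio
capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
print(f"Error-free capacity: {capacity_bps / 1e6:.0f} Mbps")
```

The point of the capacity line is that no code, however clever, can push error-free data through this channel faster than about 200 Mbps; what turbo-codes and LDPCs achieved was getting practical systems close to that ceiling.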


Sensing our Surroundings

Our world is severely affected by a variety of environmental problems. Issues like global warming and ozone depletion make the news headlines every day. These problems are acute and global, and the majority of scientists agree that the future of our planet is at stake. In order to keep track of some of these problems, several local councils monitor the levels of traffic pollution. However, this is usually done at a small number of sparsely distributed monitoring stations, so the resolution obtained is very low, even though pollution can vary dramatically on a per-street basis.

“Volunteers could perform these measurement tasks without training”

Now imagine you had a myriad of cyclists and pedestrians carrying mobile environmental sensor devices, each monitoring local pollution levels. These would create simple sensing networks that could cover an entire town. The pollution data gathered by each mobile sensor would be sent wirelessly to a central server, together with location information. This information would then be updated in real time and displayed as a high-resolution pollution map on a public website. This way, people would know what pollutants they are exposed to over the course of the day, allowing them to avoid areas with a high concentration of pollution. It could also trigger local and central government initiatives to reduce the concentration of pollutants in specific areas.

An important stimulus to the germination of sensing networks is the increasingly environmentally conscious public. The possibility of contributing this type of information without having

to actively do more than carry a device could interest many. Even without any scientific training, volunteers could perform these measurement tasks for the sake of promoting awareness of pollution problems in communities and thus improve their society. In Cambridge, these trends are already becoming reality. Bicycle couriers monitor the city air quality using mobile phones. The bicycle carries a small wireless sensor that sends pollution data via Bluetooth to the courier’s mobile. These devices also incorporate an integrated GPS receiver with location information. The mobiles then assemble this data and send it to a central server. Developed by a team of researchers, led by Eiman Kanjo from the University’s Computer Laboratory, this technology builds maps containing detailed information, at the street level, of the concentration of numerous pollutants affecting the city air. One of the challenges of this technology is minimising the device size. The first prototypes are still big; roughly the size of a large remote control. The main culprit is the sensor. However, the Cambridge community is on the verge of revolutionising this area. Owlstone, a spin-off company from the University, develops ‘dime size’ detectors. One can envision them being integrated into small devices (including mobile phones), with every willing citizen contributing to the pollution mapping process. Another interesting approach to this technology is the possibility of linking health problems to pollution data. It is known, for example, that asthma symptoms are linked to air pollution. Hence, by combining the mobile sensor technology described above with a separate device to measure lung function, it would be possible to correlate the patient’s symptoms with the air pollution around them. This data could then be sent

Eiman Kanjo

Imagine a myriad of cyclists and pedestrians with environmental sensor devices

Screenshot of pollution software ‘Airfresh’ on a Nokia N95 mobile phone

automatically to the patient's doctor.

In the future, before your morning jog, you might surf your favourite pollution map site to find the freshest air in town. Your watch would analyse the concentration of pollutants, with the information being uploaded to the website. If the concentration of pollutants were above a safe level, you would be informed of a less polluted track nearby. While on the move, your watch would also keep track of your heartbeat and blood pressure. If something were wrong, you would receive a precautionary text message telling you to stop running. Or, in a critical situation, an automatic message could be sent to the emergency services with information on your condition and location. Help would be on its way.

Fernando Ramos is a PhD student in the Department of Engineering and in the Computer Laboratory
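The pipeline the article describes - a geotagged pollutant reading sent wirelessly from a mobile device to a central mapping server - can be sketched in a few lines. This is purely illustrative: the field names, units and JSON message format below are assumptions for the sketch, not details of the Cambridge system or the 'Airfresh' software.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class PollutionReading:
    """One geotagged sample from a mobile sensor (illustrative fields)."""
    sensor_id: str
    latitude: float
    longitude: float
    pollutant: str          # e.g. "CO" or "NO2"
    concentration_ppm: float
    timestamp: float

def to_message(reading: PollutionReading) -> str:
    """Serialise a reading for upload to a central mapping server."""
    return json.dumps(asdict(reading))

# A courier's bike sensor in central Cambridge produces a reading...
reading = PollutionReading("bike-07", 52.2053, 0.1218, "CO", 1.8, time.time())
# ...which the phone would transmit; here we just print the payload.
print(to_message(reading))
```

The server's job is then the inverse: parse each message, bin readings by location, and render the bins as a map layer.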

Lighting up the Brain

The origins of human thought, emotion and personality have been sought throughout history. It was Franz Joseph Gall, at the beginning of the 19th century, who proposed that the brain was composed of separate 'organs', each controlling a different aspect of character; he examined bumps on the surface of the skull, believing these provided insight into the workings of what lay beneath. In doing so, Gall was making the first attempt to determine how the structure of the brain gives rise to the mind.

Two hundred years on, we can now utilise the buzz of electrical activity and the ravenous burning of energy inside our heads to watch the brain at work. Neuroimaging has become an indispensable tool in both research and medical diagnosis. While Gall's methodology now seems ludicrous, some of his ideas persist. Much modern research encourages us to consider the brain as a modular structure, with individual regions performing different functions. Yet it is becoming increasingly

clear that communication between regions is at least equally important. Modern neuroimaging techniques must be chosen carefully to maximise our understanding of how the brain works. Each is best suited to answering specific questions, but none can provide a complete view of brain function, and each has limitations that complicate interpretation. This issue, FOCUS considers some of the current techniques used to study the brain and how they may best be used to advance our knowledge of the human psyche.

Easter 2009


Measuring the Mind

BlueSci looks at the science behind the pictures

Magnetic resonance imaging, or MRI, is a familiar term. Even if we've not had an MRI scan ourselves or don't know someone who has, familiarity comes from televised medical dramas showing impressive 3D images of internal organs. Although MRI can be used to image any structure in the body, it is best known for imaging the brain. Indeed, it has become essential for learning about the structure and function of the brain and what happens when it goes wrong. MRI takes advantage of the behaviour of protons - found throughout the body as hydrogen nuclei in water - when they are subjected to a strong magnetic field. The surrounding environment determines a proton's precise response to the applied magnetic field, making different tissue types distinguishable. MRI allows us to determine whether a proton at a specific location is sitting in, for example, fat tissue, cerebrospinal fluid, cell bodies or neural fibres. The high-resolution structural images of the brain produced using MRI are routinely used in medicine to diagnose cancers and prepare for surgery, detect lesions and other structural abnormalities, and track the progress of neurodegenerative disorders such as Parkinson's and Alzheimer's diseases.

However, in the last twenty years, MRI has also enjoyed a meteoric rise to fame in 'functional' neuroimaging research, which attempts to determine how our brains work when healthy - not just when things go wrong. Functional MRI (fMRI) uses the same principles as ordinary MRI, but is used to detect differences in blood flow within the brain. More active brain areas require more oxygen than less active ones, so blood flow increases to these regions; this is known as the haemodynamic response. The increased level of oxygen changes the local environment of nearby protons enough to be detected by MRI, allowing us to distinguish oxygenated blood from deoxygenated blood. This change in blood oxygenation level is taken as an indirect measure of neural activity, and so can tell us which parts of the brain are most active.

“It is easy to become mesmerised by the detailed images of brain function”

Functional MRI has become so popular amongst researchers because of its non-invasive nature and high spatial resolution. Its main alternative uses a radioactive tracer added to the blood to measure metabolic activity. This process, known as positron emission tomography (PET), has roughly half the spatial resolution of fMRI. It is also less safe, since radioactive material must be injected. With fMRI, experimental subjects can be repeatedly scanned, allowing within-subject comparisons that cannot be done with PET, since they would require repeated exposure to radiation.

It is easy to become mesmerised by the detailed images of brain function that fMRI produces, but the technique is not without its limitations. If research using fMRI is to produce meaningful data, careful experimental design is crucial. Imagine having an fMRI scan for the first time. You are lying on a table with your head strapped down and a massive electromagnet that is not only inches away from your face, but produces any number of reverberating drones, whines and crashes. You are anxious, perhaps claustrophobic, and alert for instructions from the experimenter. Your brain will already be buzzing with sensory input, attentional mechanisms and emotional turmoil. Then you are shown pictures, about which you must make a decision, and convey your response by pressing a button. Now

Functional MRI has high spatial resolution, localising activity to within millimetres


your visual cortex is in overdrive and your motor cortex has joined in, as have memory, emotional and language networks that were unintentionally triggered by the pictures. As the test continues, you may relax; perhaps your concentration lapses and you start to think about the shopping list or the film you watched last night. How can one specific cognitive process be detected amongst this bedlam of activity?

The experiment must be designed to detect a change in activity. Background activity from control conditions is subtracted from that seen under experimental conditions, leaving regions that 'light up' and can be assigned to specific cognitive tasks. Variation between experimental subjects, both in brain structure and cognitive approach, means that results must be heavily averaged and smoothed over time. Interpretation of such data must therefore be approached with caution, and there are numerous potential pitfalls. Functional MRI research once identified a region that 'lit up' when subjects told a lie: the anterior cingulate cortex. It was proclaimed that this was the centre for lying and could be used to develop an advanced lie detector. But further research has shown that it is active during many other tasks that involve decision-making. Had a lie detector been developed on the basis of the original research, the consequences could have been severe.

Is the haemodynamic response even a reliable measure of neural activity? It seems reasonable to assume that blood flow increases to satisfy the oxygen demand of active neurons. But is it possible that increased blood supply can occur without any associated neural output at all? A paper published in Nature earlier this year suggests that this could be the case. Yevgeniy Sirotin and Aniruddha Das, of Columbia University, trained rhesus monkeys to perform a cognitive task that involved periodically fixating on a visual stimulus.
Simultaneously measuring neural activity and blood flow in the visual cortex, the researchers showed that both increased periodically in time with the monkeys' fixation. However, when they reduced

Golgi-stained pyramidal neuron in the hippocampus of an epileptic patient. 40 times magnification

the visual stimulation, the neural activity in the monkeys almost disappeared, but the blood flow still fluctuated in the same cycle. Even in the absence of neuronal activity, it seems there can be detectable changes in cerebral blood flow, bringing into question the fundamental assumptions of fMRI.

In addition, conventional fMRI only allows study of the brain in a "modular" fashion, localising different functions to distinct parts of the brain. This idea has persisted for over two hundred years, since the time of phrenology, when Franz Joseph Gall proposed that the brain was composed of 27 "organs". In 1983, Jerry Fodor published his seminal book The Modularity of Mind, and ever since, cognitive psychologists have been largely concerned with carving the mind up into functional modules, with fMRI results encouraging us to think in this way. However, this model of brain function misses one vital point: if the brain is a modular system, surely the modules must communicate. How does the brain talk to itself?

Imagine standing on the station platform, awaiting the arrival of your train to London. In the distance, a speck of light gradually nears and the train eases to a halt. As you watch it approach, you perceive just a train, slowing down. But that is not how your brain processes it: the shapes, orientation, colours and

movement of the train are all separated along the pathway from the eye and are processed individually in the visual cortex at the back of the head, where different cells respond to each aspect of the train's form and movement. Meanwhile, auditory and other sensory

“Cognitive psychologists have been carving the mind up into functional modules”

information is collected and processed in completely disparate areas of the cortex, and finally we put it all together to complete the perception of a train. How does the brain integrate all these components to give the final perception of a single moving object? This is known as the binding problem. It is one instance of the general problem of "connectivity". Understanding how brain regions converse with each other to integrate information may provide insight into complex mental processes such as vision, memory, attention and even consciousness. Whilst fMRI can tell us where the different components are processed, it can tell us little about connectivity in the brain. Recent research has begun to look elsewhere for answers, using electroencephalography (EEG) and magnetoencephalography (MEG) to



measure brain activity and discover how sensory input can result in our internal representations of the world. EEG and MEG use external detectors at the scalp to measure the activity of firing neurons in the brain. The flow of ions in and out of cells during activity produces currents that can be measured using EEG. As with any electrical current, magnetic fields also result, and these are measured with MEG. Both techniques therefore give a direct measurement of neural activity, unlike the indirect measurement of fMRI. This also means they are more accurate in determining when activity occurs, but spatial resolution is sacrificed; activity can only be localised to within centimetres rather than millimetres.

Traditionally, EEG and MEG have been used to measure whole-brain activity during sleep and in epilepsy sufferers. More recently, however, they have been used to shed light upon how different regions of the brain may converse with each other. One recent theory attempts to explain connectivity in terms of "neural synchrony". It suggests that if neurons in different regions of the brain are to converse, their firing patterns need to be temporally correlated: when activity in one region is increasing or decreasing, activity in the other must be doing the same. This correlated pattern of firing is often constrained to particular frequencies. For example, neurons in quite separate areas of the brain may simultaneously peak in their activity three times per second. This cyclical, frequency-locked peaking in activity may be opening and closing a window of opportunity for communication between the two regions. EEG and MEG are ideal for measuring such frequency-specific activity relating to neural synchrony.
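The idea of frequency-locked synchrony can be illustrated with a toy calculation: two signals that share a 3 Hz rhythm correlate strongly over time, while an unrelated signal does not. This is a minimal NumPy sketch, not a real EEG analysis (which would involve band-pass filtering and phase-locking measures); all the numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)          # 10 s of signal, sampled at 100 Hz
freq = 3.0                            # shared 3 Hz rhythm, as in the example above

# Two "regions" sharing the rhythm, plus measurement noise,
# and one control signal with no rhythm at all.
region_a = np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)
region_b = np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)
unrelated = rng.standard_normal(t.size)

# Pearson correlation as a crude index of synchrony
sync = np.corrcoef(region_a, region_b)[0, 1]
baseline = np.corrcoef(region_a, unrelated)[0, 1]
print(f"synchronised pair: {sync:.2f}, unrelated pair: {baseline:.2f}")
```

The synchronised pair's correlation is high because the shared oscillation dominates the noise; the unrelated pair's hovers near zero.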
Recent research suggests that in the visual system, activity within local neural networks (groups of neurons that work together to perform a function) synchronises at high frequencies, while activity between these networks synchronises at lower frequencies. This mechanism would allow the high frequency local networks to “bind” all the information about colour, for

Clinical MRI scanner

example, before it is combined with information about shape, orientation and movement to result in the unified perception that we experience. This may be the solution to the binding problem posed when watching the train approach. Synchrony has also been implicated in

“Functional MRI has become popular because of its non-invasive nature and high spatial resolution”

more complex mental operations, such as working memory. This is the ability to keep immediately relevant information in mind for a short period of time. For example, if we are told a phone number, we are able to replay it in our mind for several seconds or until something distracts us. Successful retention requires communication between areas involved in sensory processing and frontal brain regions involved in conscious control of behaviour. This communication seems to be achieved through synchrony, with activity peaking in both regions three times per second. Successful short-term memory retention therefore seems to

rely on synchronous activity between two distinct cortical regions. It is only by direct measurement of neural activity with high temporal resolution that such discoveries can be made.

Whilst fMRI has dominated human brain research, it can never provide us with the whole picture. Similarly, EEG and MEG are unable to give precise locations of activity. But if the limitations of each technique are understood, and different questions are tackled with the appropriate tools, useful progress can be made. In combination, such techniques are more powerful still, and future research could provide us with great insight into the human mind. If we can understand how the brain processes the barrage of information it receives and puts it together to construct our unique internal representation of ourselves and the world around us, then perhaps we can discover the origins of our personalities, emotions and even our consciousness.

Vicky Cambridge is a PhD student in the Department of Psychiatry

Aidan Horner is a PhD student in the MRC Cognition and Brain Sciences Unit
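The subtraction logic at the heart of fMRI analysis - control-condition activity subtracted from experimental-condition activity, with surviving voxels said to 'light up' - can be sketched as a toy calculation. Everything here is illustrative: the grid size, noise level and threshold are invented, and real analyses use proper statistics rather than a fixed cut-off.

```python
import numpy as np

rng = np.random.default_rng(1)

shape = (4, 4, 4)                            # a tiny 4x4x4 voxel grid
control = rng.normal(100.0, 1.0, shape)      # baseline signal per voxel
experimental = control + rng.normal(0.0, 0.5, shape)  # same brain, plus noise
experimental[1, 2, 3] += 5.0                 # one voxel responds to the task

# Subtract the control condition, then threshold the difference.
difference = experimental - control
active = difference > 3.0                    # crude threshold, no statistics

# The boosted voxel at (1, 2, 3) is the one that survives thresholding.
print(np.argwhere(active))
```

The averaging and smoothing the article mentions would happen before this step, across many scans and subjects, precisely because single-trial noise can swamp so small a signal.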


Information in the brain is passed along axons. Axons are fibres that project from neuronal cell bodies and carry an electrical signal, in a similar way to wires in an electrical circuit, passing information to the cells they connect with. Whilst imaging techniques can be used to study the living human brain, animal research allows us to look directly at the paths that axons take and the brain regions they connect, providing insight into the way in which neural networks are built. Scientists have been attempting to build "connectomic" neuronal maps for many years. But axons form bundles that resemble the chaos of cables under the average desk, and telling them apart is a challenge. Recent work by a group at Harvard University, led by Professor Jeff Lichtman and Dr Josh Sanes, has provided a colourful solution.

Green fluorescent protein (GFP) was originally found in the jellyfish Aequorea victoria. It is what it sounds like: a protein that glows green under blue light. It can be inserted into the genome of almost any species and its expression can be controlled to allow easy visualisation and identification of individual cells. Modification of GFP has led to the development of many derivatives with different colours; blue, cyan, yellow and red fluorescent proteins are all widely used in the same way.

The group at Harvard have exploited the properties of fluorescent proteins and their expression to create the "brainbow". The genes for fluorescent proteins are inserted into the DNA of mice so that they are expressed in neurons. Copies of the fluorescent protein DNA become inserted into the genome at multiple locations in each cell. At each location, one of the fluorescent proteins is expressed; which one is determined by random, enzyme-mediated recombination of sequences in the DNA. Different numbers of insertions and different levels of expression for each protein result in a unique combination of expressed coloured proteins in each

The brainbow allows neuronal cell bodies and fibres to be individually identified and their pathways studied; here is a brainbow of neurons in the hippocampus.



Ben Ravenhill discusses fluorescent proteins as an alternative to fMRI

Another cross-section displaying the brain’s connectivity.

cell. In the same way that a TV creates all colours from just three, this combination gives each cell a unique colour.

How is this useful? Slices of brainbow tissue can be prepared on slides and analysed using a computer. Analysis has so far resolved over 150 different colours. Each neuron has a largely uniform colour throughout, meaning that not only the cell body but also its axons are coloured, allowing them to be traced along their pathways to their connections with other cells.

Having successfully labelled neurons, the team at Harvard have since done the same with another major cell type found in the brain, the glial cells. These cells have long been thought to act as little more than support cells to neurons. However, evidence is beginning to show that this may not be the whole story, and that they may play more important roles in brain function.

Although fluorescent human brains are an unlikely development, the brainbow may make a significant contribution to neuroscience. If it can be used to accurately determine the connections between specific brain regions, this information could enhance and direct research that uses fMRI and other imaging techniques to learn about the living brain.

Ben Ravenhill is a second year medic
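The combinatorial labelling idea - several insertion sites per cell, each randomly settling on one fluorescent protein, the mix defining the cell's colour - can be mimicked with a toy simulation. The three-colour palette and eight insertion sites below are hypothetical choices for illustration, not the actual Brainbow constructs.

```python
import random

# Hypothetical palette; the real technique uses engineered XFP variants.
FLUOROPHORES = ["red", "green", "blue"]

def cell_colour(n_insertions: int, rng: random.Random) -> tuple:
    """Return a cell's colour as counts of each randomly expressed fluorophore."""
    counts = {f: 0 for f in FLUOROPHORES}
    for _ in range(n_insertions):
        # Each insertion site independently 'chooses' one protein.
        counts[rng.choice(FLUOROPHORES)] += 1
    return tuple(counts[f] for f in FLUOROPHORES)

rng = random.Random(42)
# Simulate 200 neurons, each with 8 insertion sites, and count distinct colours.
colours = {cell_colour(8, rng) for _ in range(200)}
print(f"{len(colours)} distinct colours among 200 cells")
```

Even this crude model produces dozens of distinguishable colour mixes, which is the property that lets tangled axons be told apart.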



Kapitza and the Crocodile


Boris Jardine charts the history and inhabitants of the Mond Laboratory

Amongst the rabble of buildings that make up the New Museums Site is a diminutive but striking piece of modernist architecture: the Mond Laboratory. The Mond is now probably best known for one peculiar feature, a leaping crocodile carved into the brick by its main entrance. After many different occupants, the building now houses the Centre of African Studies, the Mongolia and Inner Asia Studies Unit, and a ragbag of humanities students and visitors. However, for a brief period in the early 1930s, the Mond was considered to be one of the most advanced physics labs in the world. It was one of the first in England to be built in the 'modern' style and, were it not for the departure of its chief scientist, it might have led to yet more Nobel Prizes for the Cavendish Laboratory.

The Mond lies in the courtyard of the old Cavendish - a laboratory within a laboratory. It was built specifically for the magnetism experiments of the Russian

physicist Pyotr Kapitza (1894-1984). Kapitza had arrived in England in 1921, as part of a Soviet project to re-establish scientific contacts with the West. He had only intended to stay for the winter to complete an experimental training course. But Kapitza quickly impressed

“The Mond was one of the most advanced physics labs in the world”

Rutherford, the head of the Cavendish, and was given a project tracking the paths of α-particles. The results of this famous study were published in 1922 and 1924, by which time Kapitza was established in Cambridge and had earned his PhD. He had also started conducting experiments on magnetism, and at this point his career took off. His ability to combine engineering skill with theoretical insight led Rutherford to take him on as his protégé.

Soon Kapitza took his first steps towards independence from the Cavendish, setting up the 'Magnetic Laboratory' in an outbuilding of the Department of Chemistry. His work was by then almost exclusively concerned with the resistance of metals in high magnetic fields. The results of his first experiments, conducted at the temperature of liquid nitrogen, were published in 1928, by which time he was working on new methods for the liquefaction of hydrogen and helium.

Gradually it became clear that the scale of the research was too large to continue in such a piecemeal fashion. In addition, the grant that had funded the laboratory was running out. Kapitza's work was saved by a £15,000 grant from the bequest of the industrialist Ludwig Mond, given to the University to create a laboratory in his name. With this astonishing degree of freedom and backing, Kapitza was able to construct a laboratory that catered exactly to his needs. In order to make

“Kapitza arrived in England to help the Soviets re-establish scientific contact with the West”


the most of his funds, he seized upon the sparse functionalism of architectural modernism. The project was taken on by H.C. Hughes, one of the first graduates of Cambridge's Department of Architecture. Though Hughes' subsequent work shows that he was keen on the new style, Kapitza drew up the initial plans and made modifications at every stage.

The main architectural challenge arose from the conflict between the generation of intense magnetic fields, by means of a short-circuited generator, and the delicate measurement of tiny alterations in the physical properties of metals. The former caused what Kapitza called a 'minor earthquake', while the latter used equipment highly sensitive to vibration. Kapitza's solution to the problem was ingenious: because the magnetic field was only generated for 1/100th of a second, the seismic effects of the short circuit could be negated by placing the sensitive apparatus sufficiently far away. The total distance required was about 20 metres; the measurements would be completed before the far end of the room started shaking. In addition, the steel-framed building is in two barely-connected halves, so as to minimise the transmission of vibrations.

Pyotr Leonidovich Kapitza (1894 – 1984)

The sensitive piece of kit that all of this was intended to protect was an extensometer designed by Kapitza himself. It consisted of a small vessel, suspended in and containing oil, which was connected to a sample of metal at one end, with a diaphragm at the other. The diaphragm had a tiny hole, through which any oil displaced by changes in the sample would pass. This oil would then press against a tiny articulated mirror, which diverted a beam of light in such a way as to allow a reading to be taken; alterations in the sample were magnified roughly 100,000 times. Needless to say, the extensometer was very sensitive. Indeed, as Kapitza suggested, one could turn this into a benefit and use it as a very precise seismograph. Unsurprisingly, the set-up was elaborate. In addition to placing the apparatus in a vibration-damped environment at one end of the building, Kapitza had to construct it on a "massive slate plate, suspended from the ceiling by means of four thin bronze wires". The extensometer was literally built into the Mond; and the Mond, as the only possible setting for the instrument, was a part of the extensometer.

Perhaps the most startling architectural features of all were the roofs of the liquefaction rooms, which were constructed in such a way that they would quickly disintegrate in case of an explosion. Kapitza would tell nervous visitors about a recent catastrophe in a German lab without such precautions, in which debris was spread for miles around.

In spite of this remarkable set-up, Kapitza completed only a handful of experiments in the Mond. In the summer of 1934 he visited Russia with his wife Anna, but when he tried to return in October he was refused permission. The Soviet government had decided that his work should be incorporated into the second five-year plan, and he was offered funds to put together a replacement research institute on the outskirts of Moscow. In 1937, the equipment Kapitza had assembled in Cambridge was shipped over, and he returned to Cambridge only once more, over thirty years later. After Kapitza's untimely departure from Cambridge, the Mond had various uses.

The Mond Laboratory, New Museums Site, Cambridge.

In its current state - offices, seminar rooms and a library - there is little evidence of its previous life. In today's climate of building preservation, which favours facades over interiors, the crocodile has become the building's main emblem. It was designed by the artist and typographer Eric Gill. The crocodile has bewildered its audience. Maybe it was supposed to be

“The Mond is now probably best known for one peculiar feature”

Rutherford, who himself was famously 'snappy'? Even Kapitza seemed unsure. Sometimes he said that "in Russia the crocodile is the symbol for the father of the family and is also regarded with awe and admiration because it has a stiff neck and cannot turn back". However, he also likened Rutherford to the crocodile in Peter Pan. The last word, I think, should go to Gill's own assessment, certainly the most mischievous of all. At the opening of the building, in February 1933, he delighted in telling the assembled reporters that the crocodile was not Rutherford at all, but stood for "science devouring culture".

Boris Jardine is a PhD student in the Department of History and Philosophy of Science


A Glaciologist

Since his early work on a Norwegian glacier in the 1980s, Dr Ian Willis has worked in Canada, Switzerland, Alaska, New Zealand, Iceland and Svalbard, and will be visiting Greenland later this summer, trying to understand how our planet's glaciers and ice sheets work, how they are changing, and how they might change in the future. When Ian is not hiking up glaciers, he works in the Scott Polar Research Institute, researching and teaching undergraduates and Masters students.

What does your research involve?

I investigate the mass balance of the world's land ice. Like many glaciologists, I am particularly interested in whether ice masses are growing or shrinking and what controls this. Using a combination of computer modelling, airborne remote sensing and ground-based instrument data, we are able to map the changing extent of glaciers and ice sheets and how they might change over the next few decades in response to climate change. I am also interested in the hydrology of ice masses and their dynamics; in other words, how water moves through them and the effects this has on their movement.

How do you divide your time between research and teaching?

About half and half. Of course, there is quite a lot of administration associated with both teaching and research.

What about fieldwork?

I typically spend a few weeks or months each year doing fieldwork for my research. For example, this year I have trips planned to Greenland and Svalbard in Arctic Norway. Teaching also involves fieldwork. In recent years I have been lucky enough to take undergraduate students to the Arolla Glacier in Switzerland. It is a great opportunity for the students to learn about the techniques glaciologists use to measure the mass balance, hydrology and dynamics of glaciers, and to see first-hand the changing landscape of the Alps as the climate shifts and the glaciers, rivers and vegetation respond.

What does a field trip entail, and what special training do you need before you explore these inhospitable places?

Field trips are all quite different, depending on where I'm working. Next month I'll be working on a glacier called Midre Lovénbreen in Svalbard. The glacier is close to a research base at an old mining settlement called Ny Ålesund, which, at around 79°N, is one of the world's northernmost settlements. The set-up there is relatively comfortable because the infrastructure has been developed over many years to cater for the large scientific community who work there. It is not just glaciologists who find Ny Ålesund a perfect base for their research, but also oceanographers, biologists and atmospheric and space scientists. Usually we fly by jet to Longyearbyen via Oslo and from there, via a small twin-propeller aeroplane, we fly to Ny

Ålesund. Here, there is nothing but a lot of ice and a few polar bears between you and the North Pole. Expeditions from here are made by skidoo, pulling a sled carrying our scientific instruments. The only training we really need is to be able to drive a skidoo and fire a rifle, as there is always a chance of an unexpected encounter with a polar bear! The days on the glacier are always exhausting, as we have a finite time to collect all the data and, although the work is repetitive, there is a lot to get through. This means that at the end of a long day in the Arctic, we don't usually notice the 24 hours of daylight during the summer months and sleep very well back at the base.


Beth Ashbridge meets Ian Willis from the Scott Polar Research Institute

Field trips are not always this cosy. In the late 1990s, I worked on the Arolla Glacier in Switzerland. Arolla is one of the highest traditional villages in the Alps, at an altitude of about 2000 m. The infrastructure there was not at all comparable to Svalbard's, and involved us camping in fairly rough and basic conditions. We really embraced the bitterly cold wilderness up there, and when the weather was good, there was nowhere in the world I would rather have been.

For more information visit:

Beth Ashbridge is a PhD student in the Department of Chemistry


Recharging Research

An internship had always seemed like the perfect opportunity to get some sunshine and take a couple of months out of my PhD while keeping my supervisor happy. Browsing company websites, I stumbled across some recent television advertisements by ExxonMobil. One in particular caught my eye, the subject of which was lithium-ion batteries for electric vehicles.

“The biggest publicly traded oil company in the world was doing something green”

Dubbed "today's ultimate battery", lithium-ion batteries provide power to many technologies, including mobile phones and laptops. They are portable, have high energy densities and hold their charge well when not in use, leading to their use in hybrid cars. This sounded interesting - the biggest publicly traded oil company in the

world was doing something green. I decided to be direct and sent off an email. They responded, and here I am.

My research lab is just outside Houston, Texas, amongst the towering oil refineries. I'm here for three months to investigate various aspects of battery separator films, the role of which is to prevent short-circuiting between electrodes (see box). Part of the reason I decided to do this internship was to experience how life might be if I take the industry route at the end of my PhD.

The research process works a little differently here. Researchers submit samples to highly qualified technicians, who then perform the experiments. The researchers subsequently analyse the results, plan future experiments accordingly, and finally report to managers who decide which projects have promise and which should be dropped. At ExxonMobil most of the researchers have PhDs in chemistry or engineering - it is a chemical company after all - but there is one other physicist here, so I am not completely alone. Overall the experience has been a good


Katherine Thomas decides to take some time away from Cambridge

Analysing images using optical microscopy

one. I now have a lot more insight into how industry works and a much greater knowledge of battery separators! In terms of everyday life, it’s not all that different from being in the lab in Cambridge, except that the money is better. Katherine Thomas is a PhD student in the Department of Physics


Lithium-Ion Batteries

The structure of a lithium-ion battery: 1. Anode 2. Polymer separator 3. Cathode 4. Polymer separator

Easter 2009

Lithium-ion batteries are made from four layers: the negative anode, the positive cathode and two polymer separators. The layers are pressed together, allowing lithium ions to be transferred between the anode and the cathode through a liquid electrolyte. The separators prevent electronic contact between the anode and the cathode, stopping the system from short-circuiting, but also

allowing the ions to flow. The separator design is critical: if the battery overheats, the pores of the separator should close, forming a barrier between cathode and anode, rather than allowing contact and potentially causing a fire. The polymer separators ExxonMobil has developed have enhanced permeability, a higher meltdown temperature and better melt

integrity. Thanks to the better permeability, the lithium ions can flow more easily, meaning that energy can be delivered more quickly, while the higher meltdown temperature increases the battery’s thermal safety margin. For electric cars both are very important. Nobody is going to drive an electric vehicle if they have to carry round a boot full of spare batteries.



Rules of Repetition Lindsey Nield looks into the mathematics of repeating patterns


Humans like order: regular patterns and straight lines. A quick glance around our homes and offices shows that most of what we see can be described using simple shapes – circles, triangles and squares – that are easily defined mathematically. However, objects in nature cannot be described so simply. From afar, a mountain might resemble a triangle, but as we look closer, it becomes apparent that the edges are not smooth. The rough, random details of the natural world were long thought too intricate and complex to be described accurately using mathematical formulas. In the 1970s, the work of the mathematician Benoît Mandelbrot pointed towards a solution. He realised that many natural objects show self-similarity. Zoom in on one of these complicated objects and a new picture emerges that is strikingly similar to the original. Zoom in again and the same is true. At first this seems like an odd concept, but a tree is a classic example. If you study a tree, you see the trunk with branches radiating from it. Each branch in turn is like a mini-tree, with protruding sub-branches. The sub-branches themselves have further sub-branches sprouting from them. If you

Lightning is an example of fractals in nature


look closely at any portion of the tree, it looks remarkably like the original. This property of self-similarity is one defining characteristic of a group of objects known as fractals, a term Mandelbrot coined from the Latin fractus, meaning ‘broken’ or ‘fractured’, to describe these unique shapes.
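The branching just described can be sketched in a few lines of Python. The rule here (each branch spawns two sub-branches, each 60 per cent the length of its parent) is an illustrative assumption, not a measurement of any real tree:

```python
# Self-similarity sketch: every branch of this idealised tree is a
# scaled-down copy of the whole tree.
def branch_lengths(length, depth):
    """Return the lengths of all branches in a tree of the given depth."""
    if depth == 0:
        return [length]
    sub = branch_lengths(0.6 * length, depth - 1)
    return [length] + sub + sub

whole_tree = branch_lengths(1.0, 3)   # a trunk plus three levels of branching
one_branch = branch_lengths(0.6, 2)   # a single branch, viewed on its own

# The branch has exactly the structure of the whole tree, only smaller:
assert one_branch == [0.6 * l for l in branch_lengths(1.0, 2)]
```

Zooming in on any branch reproduces the whole picture at a smaller scale, which is all that self-similarity means.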

“Self-similarity is one of the defining characteristics of fractals” The mathematics behind fractals actually began almost 100 years earlier with the discovery of ‘monsters’ – strange objects that have simple beginnings but quickly become too difficult to describe. The first was the Cantor set (see diagram), created by Georg Cantor in 1883. He took a straight line, broke it into thirds and removed the middle third, leaving two lines. He then repeated the process with those two lines, breaking them into thirds and removing the middle sections. He did this over and over again. This simple procedure creates an endlessly repeating pattern that reveals itself as you zoom in on any section of the Cantor set. A similar monster, the Koch snowflake (see diagram), presents something of a paradox. Created by Helge von Koch in 1904, the shape is bounded by a line that to the eye appears finite, but mathematically is of infinite length. To understand how this can happen we must look at how the snowflake is created. Starting with an equilateral triangle, each side is split into thirds, and the middle section is removed and replaced with two lines meeting at an apex. Each time you repeat this process, every straight piece is replaced by four shorter pieces whose combined length is greater than the original, and so the perimeter of the shape increases. If you

repeat this an infinite number of times, the line becomes infinitely long. The Koch snowflake’s infinite length helped to solve a problem that had been affecting the measurement of coastlines. In the 1940s the British scientist Lewis Richardson collated various measurements of a single coastline. He noticed that if you measure the coastline of Britain with a 100-metre scale, say from a boat, you get one answer. However, if you were to walk around the coastline using a metre rule, you would include more of the indentations in the land, resulting in a longer measurement. In short, the more detail you incorporate, the longer the coastline. Yet another of the monsters, developed by the French mathematician Gaston Julia, became the particular interest of Mandelbrot. Julia took a simple equation and added a feedback loop, so that each result it gave was fed back into the original equation to produce the next one. Julia tried to make sense of the output but could see no emerging patterns, and was limited by the number of points he could generate.
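Both constructions can be sketched in a few lines of Python – a minimal illustration of the first few iterations, not of the limiting objects themselves:

```python
# Cantor set: repeatedly remove the open middle third of every interval.
def cantor_step(intervals):
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.append((a, a + third))      # keep the left third
        out.append((b - third, b))      # keep the right third
    return out

intervals = [(0.0, 1.0)]
for _ in range(3):
    intervals = cantor_step(intervals)
# After n steps there are 2**n pieces with total length (2/3)**n
assert len(intervals) == 8

# Koch snowflake: each step replaces every side with four sides a third
# as long, so the perimeter grows by a factor of 4/3 per iteration.
def koch_perimeter(n, side=1.0):
    return 3 * side * (4 / 3) ** n

assert koch_perimeter(0) == 3.0   # the starting triangle
# koch_perimeter(n) diverges as n grows, even though the snowflake
# always fits inside a bounded region of the plane.
```

The two asserts capture the paradoxes in miniature: the Cantor set’s total length shrinks towards zero while its pieces multiply, and the snowflake’s perimeter grows without bound inside a finite area.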

“Fractals revolutionised the world of computer graphics and special effects” Mandelbrot was working at IBM when he began studying the ‘Julia set’ and was able to do something not previously possible: he used computer technology to repeat the iteration millions of times and graph the result. He noticed a pattern begin to appear and decided to combine many Julia sets into one striking image. This image, known as the Mandelbrot set (see picture opposite), has become the emblem for fractal geometry.
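The iteration Mandelbrot ran at IBM is easy to reproduce today. The sketch below uses the standard escape-time test (iterate z → z² + c and watch whether |z| stays bounded); the grid resolution and iteration cap are arbitrary choices:

```python
# Escape-time test for the Mandelbrot set: a point c belongs to the set
# if the sequence z -> z**2 + c (starting from z = 0) never escapes.
def in_mandelbrot(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # once |z| exceeds 2 it will escape to infinity
            return False
    return True

# A coarse text rendering of the familiar shape
for y in range(-10, 11):
    print(''.join('#' if in_mandelbrot(complex(x / 20, y / 10)) else ' '
                  for x in range(-40, 21)))
```

A few dozen characters of output already show the cardioid-and-bulb outline; Mandelbrot’s insight was simply to run this loop at a resolution no one had attempted before.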

Iterations of the Cantor set (top) and the Koch snowflake.



Artists and designers all over the world welcomed the visual potential of fractals. They revolutionised the world of computer graphics and special effects, allowing the detail and realism previously missing to be incorporated with a little iteration. The epic final fight scene in Star Wars Episode III would not be complete without jets of lava spurting up around the two battling heroes. The lava was given its realistic appearance by the application of fractal design: a swirl effect was applied to a lava jet, then repeatedly miniaturised and reapplied. When all the layers were added together, the lava had a texture that looked like the real thing. The popularity of fractals was viewed with scepticism by some mathematicians, who thought they were just an artefact of computers. However, when Mandelbrot published his book The Fractal Geometry of Nature in 1982, he showed that fractals, in some form, are all around us. Fractal-like patterns can be found throughout nature, from the blood vessels in our bodies to lightning in the sky. You may wonder what good it does to describe these beautiful natural objects with mathematics, but fractals have proven useful in many branches of science. One example is the fractal antenna. Its inventor, Nathan Cohen, heard Mandelbrot speak about fractals at a conference and wondered how the strange shapes might work as antennas.

Zooming in on the Mandelbrot set reveals repetition of the shapes

So he made one in the shape of the Koch snowflake. The antenna worked surprisingly well and enabled him to reduce its size dramatically. He discovered that fractal antennas can receive a greater range of frequencies than conventional ones, and they found an application in mobile phones. Features such as Bluetooth and Wi-Fi each run on a separate frequency, and without a fractal antenna a phone would need at least two antennas. Since their discovery, fractal antennas have been implemented in telecommunications all over the world. Fractals also have some promising uses in medicine. Ary Goldberger of Harvard Medical School found that a healthy heartbeat has a fractal pattern when investigated over different time scales. This signature may help doctors to spot heart problems. At the University of Toronto, the biophysicist Peter Burns is using fractals in tumour research. When a tumour first develops, a network of tiny blood vessels forms that conventional techniques are not powerful enough to image. Using fractal geometry, Burns modelled the blood flow through normal, neatly bifurcating vessels and through chaotic, tangled tumour vessels, and found a significant difference that may be valuable in tumour detection in the future.

Perhaps the most ambitious use of fractals is by a group from the University of Arizona who are trying to predict how much carbon dioxide an entire rain forest can remove from the atmosphere. They have found that the distribution of large and small trees in the forest closely resembles the distribution of large and small branches on a single tree.

“Fractal-like patterns can be found throughout nature” By analysing these fractal patterns and measuring how much carbon dioxide a single leaf can take in, they can scale up to predict how much the whole forest can absorb. Fractals, at first view, are complex, irregular patterns, but they can start out with the simplest of processes. Mother Nature herself has used them repeatedly to create the world around us. Mathematics may at times seem abstract, but fractal geometry proves that it is integral to the beauty and workings of our planet. Lindsey Nield is a PhD student in the Department of Physics


Credit Crunch Natalie Vokes exposes tales of misattributed scientific discoveries

Nearly 2000 years ago, the famous Roman physician and anatomist Galen conducted a series of groundbreaking experiments on the human body. Galen was probably the most accomplished medical researcher of the Roman period and a man not overly troubled by self-doubt. He knew that his work would revolutionise medicine and he wanted everybody else to know it. Thus, upon presenting his seminal work to other physicians, he included repeated reminders that he had been the first to make such discoveries. He declared that his rivals were wholly incapable of doing the same, pronouncing them ‘lazy’ and ‘ignorant’ and declaring his work to be “as superfluous to them as a tale told to an ass.” Galen may have used unusually cutting language, but he was right to worry about his intellectual legacy. The history of science is full of misattributed discoveries and stolen credit.

“Geology and paleontology are rife with misplaced recognition and lost fame”

Take Carl Scheele’s particularly tragic tale of obscurity. An 18th century Swedish chemist and pharmacist, Scheele discovered eight elements, including nitrogen, oxygen and chlorine, but received credit for none. In some cases his findings were simply overlooked (it seems that Swedish is not a language conducive to world renown). In others, he lost out to rival scientists who made independent discoveries and published first. A brilliant experimentalist with the unfortunate habit of tasting each chemical he encountered, Scheele accidentally brought his own tale to a premature end during his 43rd year, when his assistants found him dead at his workbench surrounded by numerous toxic chemicals. A possibly apocryphal ending to Scheele’s ill-fated story is that he was to be ennobled by Gustavus III for his discoveries, but the honour was instead mistakenly given to an obscure soldier of the same name.

Antoine-Laurent Lavoisier (1743 – 1794)

Scheele seems to have been unusually unlucky, but his story does share a commonality with other tales of misattribution. Credit disputes often arise when many scientists are independently working on a hot scientific problem. In Scheele’s case, a number of prominent scientists, including Antoine Lavoisier and Joseph Priestley, were investigating the increasingly controversial ‘phlogiston theory’, in the course of which oxygen was independently discovered. Their relations were amicable, but in other cases the competition has led to fierce battles. Robert Koch and Louis Pasteur fought bitterly for the credit for discovering the cause of anthrax, while a more recent struggle took place between Robert Gallo and Luc Montagnier over the discovery of the human immunodeficiency virus. Indeed, sometimes the investigative heat and hunger for recognition have led to cases of misattribution that involve as much malice as misfortune. This was certainly the case during the early days of geology and paleontology, a field rife with misplaced recognition and lost fame. It wasn’t until the early 19th century, some thirty years after the first likely discovery of dinosaur bones, that geologists started to recognise that they were dealing with unique, prehistoric species. But when they did, the hunt was on.

Engraving from William Buckland’s “Notice on the Megalosaurus or great Fossil Lizard of Stonesfield”, 1824.

One eager fossilist was a man named Gideon Algernon Mantell, a country

“Scheele had the unfortunate habit of tasting each chemical he encountered”

physician with a passion for seeking and collecting fossils, who happened upon some large teeth he suspected were of prehistoric origin. However, the leading geologist of his time, Cuvier, dismissed them as rhinoceros teeth, and Mantell’s friend Reverend William Buckland cautioned Mantell to publish only after he was certain (during that time, it should be noted,


In science, the credit goes to the man who convinces the World, not to the man to whom the idea first occurs. - Sir Francis Darwin (1848-1925) (son of Charles Darwin)

the same Reverend Buckland went ahead and published his own finding of another giant prehistoric creature, which he imaginatively named the ‘megalosaurus’). Mantell therefore spent three years carefully gathering evidence and attempting, mostly unsuccessfully, to convince his peers that the teeth belonged to a previously undiscovered species from the Mesozoic era. Though his interpretation was eventually confirmed and his species named the iguanodon, Mantell became so consumed by his hobby that he neglected his medical practice and was ultimately forced to sell off his large collection of fossils to avoid financial ruin. Even so, he continued downward into destitution and his wife abandoned him in despair. Ruined and alone, Mantell then had the misfortune to fall

“Mantell was attempting to work outside the circle of accepted experts”

from a moving carriage. He became entangled in the reins and was dragged behind the horse for some distance, leaving him with chronic, debilitating pain and a crooked spine. Mantell’s misfortunes were heavy indeed, but at this point he was still recognised for his contributions to geology and for the discovery of several new species, especially the iguanodon. But geology was an extremely competitive field, and by that time a particularly fame-hungry, ruthless anatomist named Sir Richard Owen had been quietly and illicitly claiming what credit he could. In fact, Owen had famously opposed Mantell’s assertion that the iguanodon was a new reptilian species, just as he had tried to ruin the careers of other promising young scientists. Thus, when Mantell suffered his accident, Owen used Mantell’s weakened condition to go about expunging Mantell’s contributions from the record, renaming and claiming the species Mantell had discovered. When Mantell died in 1852 from an opium overdose, Owen added insult to injury and had a section of Mantell’s twisted spine removed, pickled, and stored on a shelf of the Royal College of Surgeons.

Carl Wilhelm Scheele (1742 – 1786)

Like Scheele, Mantell was unusually unlucky, and like Scheele, he worked in a highly competitive field. But he also had another card stacked against him: he was attempting to work outside the circle of accepted experts. Though science was far less institutionalised then than it is now, and amateur scientists were far more common, Mantell’s interpretations had to be confirmed by those in positions of authority, and that put him in a weaker position. Indeed, it is now widely recognised that prestige and power go a long way towards securing an individual’s scientific reputation. The sociologist Robert Merton coined the term ‘Matthew Effect’ to describe this phenomenon, whereby more prominent scientists receive more credit than their less established peers. The Matthew Effect is especially prominent in contemporary science, where research is often carried out by a team of investigators. The injustice toward Rosalind Franklin is well known, but a similar example is that of pulsars, hailed as the most important astronomical discovery of the 20th century. Though the graduate student Jocelyn Bell actually carried out the immediate research, she did not receive the Nobel Prize along with her Cambridge supervisor Antony Hewish. Bell has publicly stated that it would have been inappropriate for her to receive the award for work she did as a PhD student, but there has been considerable controversy nonetheless.

Joseph Priestley (1733 – 1804)

Of course, for the less powerful, there is one way to secure long-term recognition: write the history of the discovery. This tactic has helped many secure their place in posterity, as many of the most famous scientists were also excellent rhetoricians and writers of history. To be sure, a written history may not secure fame during one’s lifetime, but with an engaging history perhaps one can establish a more enduring notoriety. So to all you scientists out there – developing your writing may be as essential as your pipetting. And watch out for nefarious fossil collectors. Natalie Vokes is a Part II student in the Faculty of Philosophy




JELLYFISH BURGER, 2009, digital composite, by Dave Beck, digital artist, and Jennifer Jacquet, marine scientist.


Engineering the Weather Up and down the country there are hundreds of engineers working on a multitude of projects that take climate change into account. Although the UK has a relatively small pool of data sources, the data can still be bewildering to the uninitiated. Even in the UK, where we have strong institutions, educated, committed professionals and a regulatory framework pushing for climate adaptation and preparedness, there is a gulf between climate science and the corresponding engineering challenges. Predicting reservoir yields is important for keeping bills low and waste minimal. Mott MacDonald Ltd works with several UK water companies to help produce ‘Water Resource Management Plans’ that consider climate change. Mott MacDonald uses regional rainfall predictions to build detailed models, and by factoring in precipitation, river flow and groundwater levels, they are able to estimate reservoir yields. However, uncertainties in rainfall prediction can give unreliable results. Underestimating yields results in unneeded storage provisions, paid for through increased

“Engineers have a reputation for simplifying where possible” water bills. On the other hand, if companies overestimate their yield, there is insufficient water to meet demand, causing water rationing and supply disruption. The key is providing accurate models that minimise uncertainties. Although uncertainty in climate modelling is becoming more transparent, it is still present. The Intergovernmental Panel on Climate Change (IPCC) currently puts sea level rise by 2099 at somewhere between 18 and 59

centimetres. Put in context, for a family in the East of England this can make the difference between staying put and having to relocate away from rising seas. Similarly, by 2080 we can expect London summers to be like those in Southern France today, and we have to ensure that buildings are able to cope with rising temperatures. Engineers have a reputation for simplifying where possible: reducing systems to a black box and using what comes out. They have a tendency to treat climate data in the same way. When they say that Mediterranean summers will come to London by 2080, what this actually means is that this is true according to the 2002 UK Climate Impacts Programme medium-high emissions scenario for 2080 when run on the Hadley Centre version three model, and that this temperature rise of six degrees is accurate to within one and a half degrees Celsius. This is just one of four possible UK Climate Impacts Programme scenarios. The water companies’ precipitation predictions for the same period have an uncertainty margin of 30%, making planning water resources particularly challenging. Current models are a good start, but IPCC calculations do not consider the release of greenhouse gases from thawing tundra, nor do their sea level calculations account for carbon cycle feedback (their temperature predictions do). Climate models are of course calibrated on current conditions and assume continuing validity, which makes it very difficult to account for tipping effects and future forcing mechanisms as our climate changes. At a local level we must work on the regionalisation of models. Even basic dynamic downscaling of global models is currently computationally expensive. And as models become more regional, the results become population specific and must account for local changes. These may include river geometry, local glacier area


Ian Ball explores the interplay between engineering and science

Flooding in Horncastle, Lincolnshire, in 2007.

changes, increasing city sizes, changing land use and ecology and the feedback interactions between these factors. The engineering sector faces a huge challenge as it helps the world prepare for climate change. As a sector we need to educate our professionals on how to deal with climate change data. The models engineers use will need to consider changes in land cover,

“The data can be bewildering to the uninitiated” ecology, glaciation, population changes, industrialisation and many more. There is also a desperate need for models that express their uncertainty in a meaningful, regionalised way that can be presented to governments and private clients to inform policies and practices. Engineers dealing with climate change work in an exciting and engaging sector, one that requires both innovative engineering and rigorous, well-communicated science to help prepare us all for a more uncertain future. Ian Ball recently graduated from Part III in the Department of Engineering


Dr Hypothesis Dear Dr Hypothesis, I’ve just spent the last of my phone credit trying to navigate a useless automatic call centre system. Is there no hope on the horizon for its improvement? Creditless Colin


DR HYPOTHESIS SAYS: The future you are waiting for is a technology called Artificial General Intelligence. This kind of programming starts with a set of facts that you or I take for granted, such as the sky being blue or basic mathematical logic. It applies this knowledge, together with a contextual database of past choices, to build something you might identify as Artificial Intelligence. One firm has recently launched the first commercial program of this kind, specifically targeted at providing a phone service. Known as SmartAction, the system can understand, for example, the implied difference between he, she or it, the idea being that you can talk to it as you might a person. In the meantime, though, while the technology finds its feet, you’re still far better off talking to a human!

Dear Dr Hypothesis I’m a busy girl, and one thing I hate waiting around for is my phone and laptop to charge. Is there no faster way of doing it? Chatty Caroline DR HYPOTHESIS SAYS: The lithium-ion batteries in your phone and laptop may soon be replaced by cheaper lithium iron phosphate (LiFePO4) batteries. The problem with these lies in the interaction between the lithium ions and the cathode, where lithium must enter or leave via tiny pores. This lag slows the process of charging and discharging considerably. However, three new technologies are here to help. The first is coating the cathode with carbon: the electron-cloud surface allows easier movement of the ions across the surface in search of a suitable pore. The second is another coating, this time of lithium phosphate, again allowing greater ease of movement. The last uses a cluster of nano-balls as the electrode, greatly increasing surface

area and thereby the available pores for the Li ions. This wouldn’t just have ramifications for your mobile and laptop, which would be able to charge fully in seconds, but also for hybrid and electric cars, which would be able to charge in minutes. Dear Dr Hypothesis, I’m a keen advocate of renewable energy, but my understanding is that a lot of the electrical energy we produce is wasted as heat when it’s transmitted. Is there no way of improving the system? Sparky Simon DR HYPOTHESIS SAYS: There are several projects currently underway to improve power transmission. The one that instantly springs to mind is the use of superconducting cables. These are made of specific metal-ceramic mixtures which, when cooled to low temperatures, offer almost no resistance, reducing the loss as heat. However, this can be improved yet further with the use of High Voltage Direct Current (HVDC). Although Thomas Edison’s direct current (DC) originally lost out to Tesla’s alternating current (AC) because down-transformers for DC didn’t exist, we now have that technology. The use of DC over long distances reduces the capacitive losses seen with AC, which are greatly increased when cables run underwater or underground. Better still, superconducting HVDC cables currently in testing (Chubu University, Japan) can use their magnetic field as a power store, flattening out the current delivered, which would be ideal for sporadic generation from renewable energy sources.

Email Dr H with all your scientific conundrums

BlueSci Issue 15 - Easter 2009  

Cambridge University science magazine FOCUS: Lighting up the Brain