Page 1

Bang! Bang!

The Music Issue Oxford Science Magazine 12th Edition Michaelmas Term 2012

"Every day yppens rygdhaa Etvhein som"e dpm peens ha em g in eot hrin somt g ine ma aem dkm min t hat toI r ing ake” mnc at di amre I ffe t ha rence” a Khdi eza, ‘09 udffe Lena Taught: Science, West Midlands a, ‘09 ud ez Lena Kh Bank Manager, HSBC Retail Now: Taught: Science, West Midlands

Now: Retail Bank Manager, HSBC

ChAnGE ThEir livES And ChAnGE yourS ChAnGE ThEir livES And ChAnGE yourS Just 62% of young people eligible for free school meals Justa62% of young people get Science GCSE grade A*-C* eligible for free school meals Take up the challenge, get a Science GCSE grade A*-C*  get involved, Teach First. Take up the challenge, get involved, Teach First. *Skills Commission, 2011 Teach First is a registered charity, no:1098294

Bang!’s Michaelmas Mixtape NOW! Science 12

music section

3 // Editorial
4 // News
5 // Chemistry in a Pint Glass
6 // Chasing the Cheats
8 // Million Dollar Man
10 // Questioning the Scientific Method
12 // Love at a Distance
13 // From the Ear Straight to the Brain
14 // Bang! Talks to Maria Witek
16 // Living in Harmony
18 // The Rhythm of Life
19 // Musical Selection
20 // Simulating the Dyslexic Brain
22 // Monsters of Legend
24 // Unlocking Stem Cells
26 // To the Moon and Back
27 // Amazing Algae


Published by Oxford Student Publications Limited Chairman - Rohan Sakhrani Managing Director - Stephanie Smith Company Secretary - Morgan Norris-Grey Finance Director - Max Bossino Directors - Sophie Jamieson, Douglas Sloan & Nupur Takwale

Editors Laura Soul & Jai Juneja Magazine Editor Sofia Hauck Sub Editors Anna Pouncey, Lauren Passby, Iona Twaddell, Christian Camm & Richard Millar Business Directors Hugh Lindsay & Kathryn Boast Website Editors William Brandler Publicity & Distribution Michelle Van & Elena Holtkotte

Printed by Mortons Print Limited

Creative Directors Joey Faulkner & Iona Richards Artists Amber Barton, Aparna Ghosh, Natasha Lewis, Hope Simpson Chloe Tuck, & Ning Yu Copyright Bang! 2012


Recognise your Potential with Bang! We are seeking talented applicants for our Editing, Creative, Writing, Web, Publicity and Business teams. If interested, visit: and apply by Friday of 6th Week.

Editorial

“Music is a moral law. It gives soul to the Universe, wings to the mind, flight to the imagination, and charm and gaiety to life and to everything.” – Plato

Music is integral to the human experience. Every modern and historical culture that we know of has embraced music in some form. It enables us to express what we cannot in words: to instil fear, provoke excitement, and elicit joy or sadness. We use it to concentrate on the imminent and escape from the present, to celebrate our births and commiserate our deaths.

As scientists it is our remit to understand the world around us. Music is so pervasive that it has garnered significant research attention, from Plato and Pythagoras to the music psychologists of today. Although we understand the physical nature of sound and how tones interact, our need for music and response to it remains an elusive matter. There is still huge scope to advance our knowledge of how humans relate to music.

For early natural philosophers who sought to decode all aspects of the world around them, music was as much a subject of interest as mathematics or the movement of stars. With the rise to prominence of the modern scientific method, and the rapid expansion of human knowledge, ‘science’ and ‘humanities’ became remarkably different disciplines. However, in recent years there has been a move toward increased crossover, where the ideas and theories of music are explored with the analytical and statistical rigour expected of science.

In this term’s issue of Bang! we discover how music is helping us to understand DNA, how some combinations of tones cause chaos in our brain, and what goes on in our mind when we listen to music. We hear from a researcher in Oxford trying to understand why we dance, and find out how Darwin’s theory of evolution has been applied to musical production.

The American composer Laurie Anderson is quoted as saying “Writing about music is like dancing about architecture.” Here’s hoping that writing about the science of music proves more productive. Join us as we try to learn more about something that unites us all.

Jai & Laura Editors



The Tree of Life, Now with Birds

The first family tree to include all living birds was published this week by researchers from the University of Sheffield, the University of Tasmania, and Simon Fraser University. It includes their appearance, evolution and distribution through time. The tree maps out nearly 10,000 bird species, and reveals the different rates of diversification among the birds. For example, new species of gulls are appearing at a much faster rate than species of pelicans.

The tree was built using information from both the fossil record and the DNA of living species. The vast scope of this project meant that data came not just from the lead research centres but also from museums, lab groups and field teams all over the world. This data will come in useful for targeting conservation efforts in a way that will most effectively maintain biodiversity. By focusing on birds with a few close relatives that represent a disproportionately large part of avian evolutionary history, key species can be saved, ensuring that future studies will not be limited to museum specimens and fossils.

Firefly Lantern to Light our Torches

Light-emitting diodes (LEDs) are already a highly energy-efficient light source, but researchers at the Korea Advanced Institute of Science and Technology (KAIST) have taken inspiration from the humble firefly to improve them. The organ in fireflies that emits light is called a lantern, and has three parts: a dorsal layer, the luminous layer and a layer with nano-scale structures. The luminous layer of firefly lanterns, as well as nanostructures in other insects (such as the reflective wing of the Morpho butterfly), have been well studied. This, however, is the first time that the other layers of the lantern have been investigated in detail and used in biomimetic engineering. The nanostructure of the cuticle was discovered using scanning electron microscopy, which revealed that it is shaped like a gear with ‘teeth’ on the outer edge. This reduces the amount of light lost between the light-emitting organ and the air by lowering the mismatch in optical impedance (resistance to light) between the two. The efficiency of the light source increases as a result. The group at KAIST have designed a new lens similar to this tiny gear-like structure, and found that it significantly raised the transmittance of the LED when compared to a smooth lens of the same shape. They hope that this will be applied wherever high-power LEDs are needed; mobile phone camera flashes and medical lighting are devices that could potentially be enhanced with this technology.

The Wireless Future Starts in Your Carpet

Portable electronics have come a long way in the past few decades, shrinking down from the lumbering machines that filled a room to the sophisticated and delicate electronics we now carry in our pockets. However, one last barrier to portability remains: charging batteries. This is still required every few days for most devices, and effectively tethers a product to the wall. A group led by Chris Stevens at Oxford’s own Department of Engineering Science wants to change that, via inductive technology.

You might be familiar with it already. Many electric toothbrushes are recharged this way, and the technology is similar to that used in induction cooktops. The trick is the incorporation of a metamaterial – a material with a specific structure that gives it properties not normally found in nature. A key advantage, and the reason that induction is often seen in kitchens and bathrooms, is that without ports, portable electronics can be made waterproof. And since these power surfaces are just simple patterned layers, they can be incorporated into fabrics. Stevens’ group demonstrated this by creating an inductive carpet, which can transfer 450 MB per second of data and hundreds of watts of power. Their aim is to change the way electronics are built and re-used. Without the need for direct contacts, devices can be built modularly and be easily taken apart, something not possible with today’s soldered circuit boards. The parts could then be recycled by mixing and matching units, and re-using components in devices that are less demanding, so that the smartphone processor of today becomes the washing machine computer of tomorrow.


Sofia Hauck is a 3rd year Biological Sciences student at St. Hugh’s College. Art by Iona Richards.

Chemistry in a Pint Glass Just malt, mash, wort, ferment and pour


In September 2012, the Campaign for Real Ale celebrated the more than 1,000 breweries that exist in the UK. That’s a lot of beer, especially considering that most share the same main ingredients (water, malted barley, hops and yeast). So, what makes these beers so unique?

The first striking difference between the various “styles” of beer is their wide range of colours. With a few notable exceptions, such as the green-coloured Sign of Spring by Stonehenge Ales, they vary from pale gold through various shades of ruby all the way to black. The colour is determined by the malt, which later serves as the source of sugar for yeast during fermentation – meaning the malt also makes a significant contribution to the flavour of beer. During the malting process, the cereal grain barley is roasted. The longer it is roasted, the darker the resulting malt. In the subsequent mashing process, the liquor (which is often just water) and malts are heated to a number of temperatures to allow the sugars to be broken down enzymatically. The tannins in the barley husk (outer coating) consequently leach into the hot liquor, and the combination of different malts used by the brewer leads to a spectrum of colours and flavours in the beer. The pH of the mash is usually around 5.1–5.5, but dark malts are often more acidic. As a result, more alkaline waters, such as those found in London, allow brewers to make beer with darker malts.

Hops, the female flower of Humulus lupulus, are the other main ingredient in beer. These preserve the brew and contribute to its flavour. The hops act as a preservative because their resins contain α-acids such as humulone, which give beer its bitter taste. A particularly ‘hoppy’ beer will have a crisp, sometimes metallic taste. Hop varieties, which vary in their quantity of α-acids, are often associated with certain beer types and countries. For example, most traditional British beer is brewed with a mixture of hops, including Fuggles, Goldings or Bullion hops. The extent to which the chemicals from the hops infuse into the brew is highly dependent upon the pH of the wort, which is the malty liquor product of the mash. Thus, even if two breweries have the same recipe of hops and malt, the water they use could lead to very different beers. This explains why breweries tend to cluster around good water sources, as exemplified by the plethora of breweries in Burton.

A final contributor to the flavour of beer is its ion concentrations. For example, Burton beer has a very distinctive smell, due to its high sulphur content in the form of the sulphate ion (SO₄²⁻). The sulphate concentration accentuates the bitterness and metallic hop flavour of the beer. The comparatively lower sulphate content of London water, combined with increased chloride (Cl⁻) and sodium (Na⁺) ion concentrations, allows for greater use of darker malts such as those found in London Porters. Understanding the ions and pH balance is a key aspect in the art of brewing. The taste of your beer hinges on the chemical balances of the hops, water and malt used to create it. Understanding this, very much like the cellar craft required to keep the beer once it is brewed, is something that comes from experience. Like its malt brother whisky, beer has a fascinating and complex chemistry that confuses, tickles and teases the senses.

Gareth Langley is a 4th year Chemistry student at Corpus Christi College. Art by Chloe Tuck.

Chasing the Cheats Why are anti-doping agencies left playing catch-up?


For as long as sport has existed, athletes have fought to gain an advantage. A small improvement that gives an athlete an extra few fractions of a second can change everything. In the 2012 Olympic women’s triathlon, following four years of training and around two hours of competition, a photo finish showed that less than six inches separated first and second place. These tiny margins between success and failure, along with the massive rewards of victory, will occasionally drive athletes to artificially improve their performance. This concept is not new; there was significant, widespread use of drugs by athletes throughout the 1950s and 1960s. That era of unrestricted use of performance-enhancing drugs ended in 1967, after the death of British cyclist Tommy Simpson during the Tour de France. His death was the first of three watershed moments for sport that garnered worldwide attention. The second was the positive drug screen result of Ben Johnson at the 1988 Olympic Games. After running a world record time of 9.79 seconds in the final of the men’s 100 metres, Johnson was sent home in disgrace after testing positive for Stanozolol: an anabolic steroid. Ten years later, the world of cycling was disgraced after a doping scandal in which nearly 100 riders were sent home from the Tour de France; the repercussions led to the formation of the World Anti-Doping Agency (WADA).

WADA conduct all testing at the Olympic Games. They took over from the International Olympic Committee, who had been in charge of anti-doping since its introduction at the 1972 summer games. Nearly all elite sportsmen and women sign up to their anti-doping code, thereby accepting strict liability for the substances they have in their body. Alain Baxter’s inhaler mix-up at the 2002 Winter Olympics, when he was excluded after using an American version of a ‘clean’ European cold remedy that contained an amphetamine, is in the eyes of WADA an unfortunate but necessary evil. This strict liability doctrine is applied universally for all of the drugs which are outlined in five categories on WADA’s Prohibited List.

‘Anabolic Agents’ increase the rate of biochemical synthesis in the body. Anabolic steroids are the classic example and can be thought of as synthetic versions of testosterone. These drugs augment protein synthesis in the muscles, enhancing muscle bulk and also rate of recovery. Their use carries advantage for ‘power athletes’ such as sprinters and weightlifters.

Erythropoietin (EPO) is classified under ‘Hormones and Related Substances’. These agents are sometimes identical to hormones produced in the body. Artificially raising the level of EPO in the body stimulates red blood cell production and thus improves the oxygen-carrying capabilities of an athlete. This lends advantage to endurance athletes, such as grand tour cyclists. The other major substance in this class is human growth hormone (hGH), a hormone that has anabolic properties and has seen widespread use by athletes to aid recovery and enhance muscle growth. Current detection methods for hGH are poor: a blood test is required since it cannot currently be detected through a simple urine test. The blood matrix test for hGH is not very sensitive and thus, since 2004, only three athletes have ever failed the test.

The remaining categories are encountered far less often; ‘Diuretics and Masking Agents’ are drugs used by athletes in an attempt to prevent detection of other prohibited substances in their body, or in the case of diuretics, to lose weight quickly and temporarily, potentially giving an advantage in events that are split into weight categories. The remaining classes are ‘Beta-2 Agonists’ and ‘Hormone Antagonists and Modulators’.

How are performance-enhancing drugs detected? Nearly all drug tests are carried out on urine samples, the notable exception being the aforementioned hGH test. The samples are collected after an event or randomly outside of competition and sent away to be prepared and analysed. The analysis looks for metabolites – the products that the body breaks the drugs down into – in the urine. A technique called mass spectrometry is used to analyse nearly all samples. As technology has developed, tests have become more and more accurate, and some manufacturers have claimed that their instruments can detect concentrations as small as 300 molecules of the drug in a 1 microlitre sample of urine – the equivalent of finding one credit card on 10 million football pitches. Improvements in analytical techniques in the past 15 years have been crucial in the development of new tests, particularly the urine test for EPO, which was ‘undetectable’ in the early 1990s.

One of the biggest obstacles for WADA is that testing is always retrospective; you need to know what you are looking for in order to detect it. Some competitors have been able to use ‘designer steroids’ without detection until long after they have amassed a number of titles and hundreds of thousands in prize money. Improving detection methods is important, but the anti-doping agencies must also continuously update their lists of prohibited substances, something that is difficult if there is a criminal conspiracy; the high reward of elite sport unfortunately means that there will always be a market for this.

The anti-doping agencies are always going to be one step behind athletes who really want to cheat. They can, however, stay as close as possible by improving their testing methods. This particularly applies to hGH which, should an appropriate test be developed, could be the subject of the next big drug scandal. It is very difficult to envisage a situation where sport will ever be totally clean, which means that WADA are fighting an unwinnable war. However, pursuing new techniques, running a strong athlete training campaign and consistently applying sanctions to athletes who are caught would all help. They may be doing a good job, but when a twice-convicted drugs cheat is allowed to race in the 100m event at the Olympic Games, is that really for the good of sport?

A Timeline of Testosterone, Time Trials and Treachery
1950s – Widespread drug use in sport begins
1965 – The first anti-doping laws are introduced
1967 – Tommy Simpson dies during the Tour de France from exhaustion caused by taking amphetamines
1988 – Ben Johnson is stripped of his 100m gold at the Olympics after testing positive for an anabolic steroid
1998 – 100 riders are sent home from the Tour de France after being caught doping
1999 – WADA is set up to regulate and combat drug taking in sports
2012 – Lance Armstrong is stripped of all seven of his Tour de France titles

Modern Alchemy: Turning Molecules into Gold Medals [chemical structures of Testosterone, Tetrahydrogestrinone and Stanozolol]
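To put the manufacturers' quoted sensitivity into more familiar chemical units, a quick back-of-the-envelope calculation (a sketch using only the Avogadro constant; the 300-molecules-per-microlitre figure is the claim quoted above) converts it into a molar concentration:

```python
# Convert the claimed detection limit (300 molecules in 1 µL of urine)
# into a molar concentration (mol per litre).
AVOGADRO = 6.022e23        # molecules per mole
molecules = 300
volume_litres = 1e-6       # 1 microlitre

moles = molecules / AVOGADRO
concentration = moles / volume_litres   # mol per litre
print(f"{concentration:.1e} mol/L")     # ~5.0e-16 mol/L
```

At roughly half a femtomole per litre, the credit-card-on-ten-million-football-pitches comparison starts to look apt.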

Gareth Langley is a 4th year Chemistry student at Corpus Christi College Art by Aparna Ghosh.

Million Dollar Man

Prosthetic science takes expensive steps forward


Though the NHS might have balked at Steve Austin’s medical bill in the 1973 cult series The Six Million Dollar Man, how close is current technology to bringing us the world’s first cyborg? The past decade has seen great strides forward in energy technologies, materials science and our understanding of the esoteric workings of human physiology. Bioengineering is one multidisciplinary field that has benefited greatly from these collaborative advances, leading to a recent surge in the development of prosthesis technology. These artificial medical aids are set to have a profound impact on those suffering from a range of debilitating conditions.

Subretinal Implant – produced by Retina Implant AG; free for trial patients, final price unannounced

iLimb Ultra – manufactured by Touch Bionics; ~£11,000 x 2


One project that has recently been making waves is an Oxford-based research group’s ‘bionic eye’ (Bang! Issue 11, p14-18). Their subretinal implant is able to restore vision to some previously blind patients suffering from retinitis pigmentosa, a disease that causes photoreceptors on the surface of the retina to degenerate. The implant consists of an array of photodiodes that mimic the cells responsible for light detection in a healthy retina. The signals from the photodiodes are then used to stimulate surviving bipolar cells in the inner nuclear layer of the retina, bypassing the malfunctioning cone cells on the surface. Clinical trials have demonstrated a restored partial 12° width of useful vision, allowing one patient to even identify large letters and shapes. Another notable example of such technology is a cochlear implant that bypasses damaged hair receptors in the ear to restore hearing capability to previously deaf individuals. Whilst the implant does not restore perfect hearing, the technology is combined with specialist therapies to allow patients to effectively recognise and process speech.

However, the technology is not limited to external applications, as 2013 could see the arrival of the first commercially viable artificial heart. Its development comes at an important time for the more than 100,000 sufferers of advanced bi-ventricular heart failure, for whom the only non-palliative treatment is the transplantation of one of only 4,000 donor hearts available. CARMAT, the French company behind this development, has utilised advances in synthetic biocompatible materials to reduce both thrombosis (clotting) and physiological rejection by the host’s immune response. The system forces a small external reservoir of fluid into two artificial ventricles to produce the pumping mechanism, and is powered by an external, electromagnetically coupled battery which further reduces the size of the implant. The system has come to fruition through the merging of the aeronautical and medical industries, and hopes to provide a cost-effective alternative to biological donors.



Research is also being turned towards the aid of amputees, whose numbers are set to increase with an ageing population prone to diabetes. In 2006 the Icelandic prosthetics company Össur became the first to offer a powered replacement for above-the-knee amputees. The prosthetic uses compact inbuilt sensors and software algorithms to control a system of motors and actuators, allowing it to monitor and respond to the natural gait of its user. The result is a sleek, inconspicuous and functional unit that restores mobility to both unilateral and dual amputees. Another company leading development in so-called ‘active prostheses’ is the Edinburgh-based Touch Bionics. The company is best known for its iLimb, a functional replacement for below-the-elbow amputees. The prosthetic is operated through the use of myoelectric technology, capable of detecting minute activity in surviving nervous tissue through contact with the skin. These signals are used to activate motors built into the unit, allowing patients to control a set of gestures which include pinching, pointing and grasping, as well as full 360° rotation of the hand. Recipients can use their replacement limbs to tie their shoelaces and hold wine glasses, thanks to a system of inbuilt pressure sensors that provide automatic grip. It is a revolutionary solution that provides dexterity and fine motor control without the intrusive complications of surgery.

Cochlear Implants many manufacturers ~£40,000 x 2:


Artificial Heart produced by CARMAT, France


There is still, however, much work to do before technology is able to produce a perfect, lifelike replacement for our native appendages. Anyone observing the stiff, artificial movement of even the most advanced bionic alternative would see that one important element is missing. The organic grace and intuition that we take for granted relies on a complex system of biological sensors that constantly feed back sensory information to the neural control centres in our brain. Without this feature, prostheses will inevitably be restricted to conscious, machine-like operation.

Power Knee – produced by Össur; ~£40,000 x 2

Across the Atlantic, a team at the Applied Physics Laboratory at Johns Hopkins University is attempting to replicate exactly this. Their ‘Revolutionizing Prosthetics’ programme has recently been awarded a $34.5 million contract to begin clinical testing of their brain-controlled bionic arm. The innovative technology incorporates a direct neural interface for command of a mechanical arm with 25 degrees of freedom. This is achieved through an implantable microchip that detects neuronal activity, which it is hoped will be capable of transmitting the data wirelessly to an external receiver. Furthermore, the team’s ambitious Proto 2 project aims to eventually return feedback from the prosthesis on position, pressure and temperature to the brain, a step which could have profound repercussions for future technology spanning far further than the prosthetics industry.


Talfan Evans is a 4th year student reading Engineering Science at Keble College. Art by Iona Richards.


Questioning the Scientific Method How is the public perception of science changing?


“But science proved it” – this validation, sometimes used for the wildest of claims, demonstrates the degree of respect the scientific method is granted in the public eye. We regard the scientific method as the most trustworthy approach to solving a question – often above experience and intuition. In many cases this is rightly so; however, sometimes the scientific stamp of approval can have a blinding effect on our ability to think. Ironically, it seems the word ‘science’ can be used to excuse the need for rational justification. In 2005, John Monterosso at the University of California demonstrated this effect in action: his research team presented participants with descriptions of criminal cases, each of which highlighted either a chemical imbalance in the brain or a traumatic life experience that may have increased the perpetrator’s likelihood of committing their crime. Note that

the researchers took particular care not to invoke any mental illnesses, so as not to implicate the Mental Capacity Act. It was found that participants believed the perpetrator was less responsible for his crime when the case description partially attributed the perpetrator’s behaviour to his unusual neurobiology rather than a traumatic life experience.

If you agree with the belief of the participants, you are unfortunately misinterpreting the evidence. There is only one route through which experience can affect behaviour: via the brain, and as such, the difference between a dysfunctional brain and a dysfunctional environment does not exist. In other words, a neurobiological abnormality can easily result from an abnormal life experience, rendering the two functionally equivalent.

The blinding effect of science’s high status may, however, be at risk of fading. This is because the credibility of scientists, and the public’s perception that science must be right, is being challenged. It is becoming increasingly apparent that the scientific method is not immune to intentional or accidental misapplication.

The process by which experimental findings are published in journals can give rise to unintentional misapplication of the scientific method. It is well known that journals tend to publish neither negative results – those which find a particular variable to have no effect – nor replications. This produces a body of literature that is skewed and ignores the potential absence of an effect. An interesting example of this occurred in March 2012, when Stuart Ritchie at the University of Edinburgh reported his team’s immense struggle to publish a failed replication study investigating the possibility that unknown future events could affect our present behaviour. Ritchie was informed by three leading journals (including Science) that they do not publish replications – when replication is thought by many to be a defining feature of the scientific method. When the study was finally sent for





peer review by the British Journal of Psychology, the validity of the study was examined by Daryl Bem – an unquestionably biased reviewer, given that Bem was the researcher who carried out the study Ritchie and colleagues had failed to replicate! On the other hand, the publication of Bem’s extraordinary findings in the first place deserves credit, given that journals are commonly criticised for unjustifiably favouring research that supports the status quo.

Another criticism of publication via peer review is the tendency for some scientists to only consider viewpoints approved by this process. This is ironic, given that the research community relies on a non-peer-reviewed channel – the media – to communicate its ideas to members of the public and other bodies which serve their needs. Such communication facilitates important discussions on the application of academic theory to the real world – discussion that would not otherwise happen due to the time, expense and expertise required in making submissions to peer-reviewed journals.

One of the reasons why human bias influences scientific practice relates to the pressure on researchers to publish their results, which can be achieved only by meeting the publishing criteria. These criteria include not only the requirement for new and positive results but also so-called statistically significant results. This was highlighted in August this year, when EJ Masicampo at Wake Forest University published an analysis of the p-values cited by papers in three leading psychology journals; p-values specify the probability of an observed effect being attributable to chance, and a p-value of less than 0.05 is usually required in order for a result to be considered for publication. He found that a disproportionately high number of p-values fell between 0.045 and 0.05 – in other words, just below the publication cut-off point. One possible explanation for this finding is the distortion of data (perhaps unconsciously) to fit publication requirements, suggesting scientists’ personal goals to publish may at times supersede their societal duty to communicate the truth. For example, scientists may arbitrarily decide to exclude certain data points on the basis that they’re somehow anomalous.

Want to know more? Ben Goldacre, who completed a BA in Medicine at Magdalen College, is returning to Oxford this term for a talk at the Union. He is famous for his column Bad Science in The Guardian, where he has made his name uncovering dodgy statistics, useless methods and outright lies. His books, Bad Science, which reached the top position in the UK non-fiction charts, and the recently published Bad Pharma, have also earned much praise. Head down to the Union on the Tuesday of 8th Week, November 27th, at 8:30PM, to hear him speak about his experiences with everything from being sued for libel to the danger of taking nutritional advice from glossy mags, via questionable pharmaceutical trials.

Many surveys have been conducted in attempts to find out how widespread such examples of research malpractice are. In a study recently published in Psychological Science, Leslie John reported the shocking results of an anonymous survey of 2,000 US academic psychologists. In this survey

researchers confessed to violating key principles of the scientific method: • 71% of respondents admitted to collecting more data if the initial result was not statistically significant • 67% admitted to reporting studies selectively i.e. depending upon whether they supported their hypotheses or not • 74% admitted to not reporting all dependent variables • 58% admitted to excluding data points after analysing the results • 35% claimed to doubt the integrity of their own studies.

Robert Blakey is a 2nd year Experimental Psychology student at St. Catherine’s College. Art by Ilse Lee.


Two points have come to light: scientists are self-interested humans and the scientific method is not abuse-proof. The combination of these factors challenges the extent to which science deserves the glamorous status it receives. At the moment many members of the public appear blind to this; their unquestioning and sometimes unhealthy confidence in science remains intact. However, if the spotlight shines any brighter on examples of scientific misconduct, the public may begin to see questionable research practices for what they really are: “the steroids of scientific competition, artificially enhancing performance” (Leslie John). This may, however, have one positive effect: people will think more rationally when they interpret a scientific claim rather than shouting out, “But science proved it!”
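The effect of ‘optional stopping’ – collecting more data whenever the first result misses significance, as 71% of John’s respondents admitted doing – can be illustrated with a quick simulation. This is a hedged sketch, not code from any of the studies above: it repeatedly tests a true null effect, adding data until significance is reached or the budget runs out, and counts how often a ‘significant’ result emerges.

```python
import math
import random
import statistics

def z_test_p(xs):
    """Two-sided p-value for mean = 0, using a simple normal approximation."""
    n = len(xs)
    mean = sum(xs) / n
    se = statistics.stdev(xs) / math.sqrt(n)
    return math.erfc(abs(mean / se) / math.sqrt(2))

def optional_stopping_trial(rng, start_n=20, step=10, max_n=100, alpha=0.05):
    """Keep adding data until the result looks 'significant' or we give up."""
    xs = [rng.gauss(0, 1) for _ in range(start_n)]  # the null is true: no real effect
    while True:
        if z_test_p(xs) < alpha:
            return True          # a false positive gets 'published'
        if len(xs) >= max_n:
            return False
        xs += [rng.gauss(0, 1) for _ in range(step)]

rng = random.Random(0)
trials = 2000
false_positives = sum(optional_stopping_trial(rng) for _ in range(trials))
rate = false_positives / trials
print(f"False-positive rate with optional stopping: {rate:.3f}")
```

With a single fixed sample the false-positive rate would hover around the nominal 5%; letting the sample size depend on the p-value inflates it well beyond that, which is one mechanism behind the pile-up of p-values just under 0.05.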

Love at a Distance Oxytocin’s effects on the heart


A long distance relationship is hard to maintain, and while there are many factors at play in such a complex emotional investment, the hormone oxytocin could serve as an adequate scapegoat when things go wrong. Oxytocin’s size – a meagre 9 amino acids in length – belies the range of roles it plays in the mammalian body, and while many of its functions are related to pregnancy, it is also important for events leading up to conception. Oxytocin has been noted to increase feelings of trust and attachment, making people feel closer and reducing fear and anxiety.

Increased levels of the compound have been found in the bloodstreams of both genders during and just after sexual activity. However, after time apart, the circulating level of the hormone falls as sexual activity is no longer occurring, and the strength of emotional bonds can weaken. Though research into the social effects of oxytocin in humans is in its early stages, animal models have revealed much. A 1995 paper by Insel et al. compares the mating characteristics of prairie and montane voles. While the animals are similar in physiology, habitat and diet, prairie voles mate for life while montane voles reside in an age of free love. Prairie voles form stronger bonds with partners, are more territorial and care more for their young. This difference is thought to be caused by the distribution of oxytocin receptors in their brains. Prairie voles have a higher density of receptors throughout the brain and especially in the nucleus accumbens, a region believed to play a role in aggression, pleasure and fear in humans. When oxytocin receptor antagonists are injected into the nucleus accumbens of prairie voles, they lose their monogamous nature and show no preference for a previous mate offered alongside a stranger.

“A quick internet search for oxytocin reveals companies trying to sell spray bottles as ‘business aids’ or ‘dating aids’, and myths of casinos in Las Vegas spraying it.”

Extrapolating from rodents to humans is a leap, but as with every other ‘love’-related compound out there, someone has tried bottling and selling it. A quick internet search for oxytocin reveals companies trying to sell spray bottles as ‘business aids’ or ‘dating aids’, and myths of casinos in Las Vegas spraying it to make gamblers more trusting of the house. The futility of these marketing ploys arises from the fact that the compound only has a half-life of about three minutes in the bloodstream, and it is likely that, when inhaled from a nasal spray, it does not enter the brain in large enough quantities to have a significant effect on amicability. Feel free to spray oxytocin on your boss before a big meeting, but the chances of it achieving much more than a questioning look or a call for security are low.

The nasal spray does, however, have alternative uses, such as inducing labour. Pitocin, a synthetic form of the compound, can be used to begin contractions through interactions with oxytocin receptors in the uterine wall. This can bring about a scheduled delivery or induce labour if there are other health concerns. It was also recently discovered that oxytocin could be used to help autistic individuals interpret the world around them. Oxytocin levels in blood plasma have been noted to be lower in those with autism than in the general population, and injections of oxytocin into the brains of mice have been seen to reduce social anxiety. However, a suitable delivery mechanism to the brain needs to be developed for any of oxytocin’s effects to be similarly beneficial in humans. As with anything in science, it is difficult to predict where the research of oxytocin will take us next. Even as its roles in pair bonding and trust are being elucidated, research suggests oxytocin may help in the healing of injuries when released as a result of social interaction (Gouin et al. 2010). Whatever we learn about it next, for now we can all agree that, where friends and partners are concerned, near is better than far.
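That three-minute half-life implies very rapid clearance. A one-line sketch (illustrative only, assuming simple exponential decay) shows how little of a dose survives even a short wait:

```python
def fraction_remaining(minutes, half_life_min=3.0):
    """Exponential decay: after each half-life, half of the dose is left."""
    return 0.5 ** (minutes / half_life_min)

print(fraction_remaining(3))   # one half-life: 0.5
print(fraction_remaining(15))  # five half-lives: about 3% of the dose remains
```

By a quarter of an hour after release, circulating oxytocin is down to a few per cent of its peak, which helps explain why a quick spritz is unlikely to transform anyone’s social life.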

Aparna Ghosh is a 3rd year student of Medicine at Keble College. Art by Chloe Tuck.


From the Ear Straight to the Brain The benefits of music go far beyond pleasure


Whether it’s Mozart or Madonna, music has always had a large impact on human culture. But why does it have such a powerful effect on us? Recent improvements in brain imaging techniques have allowed our knowledge of the neuroscience of music to expand. This has led us to answers about music and the mind, through discovering its ability to change emotions, thoughts and even to assist recovery from neurological damage.

Tolstoy wrote, “Music is the shorthand of emotion,” and he was essentially right. It can evoke emotions from happiness and peacefulness, to sadness or anger. This can be seen both subjectively and in physiological changes in the listener: music can alter heart rate and hormone levels, as well as brain states. One example of this is the ‘chills’ effect. In 2001, Blood and Zatorre showed that brain regions implicated in reward become more active while listening to music that causes the sensation of chills, providing a neurological explanation as to why we love listening to emotive music.

Music also affects cognition. Research has shown that listening to enjoyable music improves performance in reasoning, memory tasks and creativity. This is through the positive feelings it generates and the associated release of the neurotransmitter dopamine. If you’ve ever wondered why you seem to be able to remember all the lyrics to ‘Call Me Maybe’, but not the chemical reactions in the Krebs cycle, this might explain it: there is evidence that if words are presented as lyrics in a song, they are learned and remembered more effectively than if they are just spoken. Whether it’s played in the background to improve mood, or actively used as a mnemonic, music can have a profound positive impact on learning and memory.

“There is evidence that if words are presented as lyrics in a song, they are learned and remembered more effectively.”

Listening to music enhances cognition; playing it physically alters the brain. During learning, connections between neurons that are often used are strengthened, and connections in less frequently used pathways weaken. An influential study by Elbert et al. (1995) found that musicians had a larger representation in the brain for their fingering hand (more cells devoted to controlling and responding to that hand), compared to non-musicians. This was not the case for the other hand. It shows that playing music can enlarge the brain regions involved in fine motor control. More recently, several researchers (for example Gaser and Schlaug in 2003) have shown that musicians also have more grey matter in areas involved in hearing and visual-spatial abilities than non-musicians do.

Music is also beneficial for those with neurological disorders. Many researchers are currently investigating the therapeutic effects of music in rehabilitation. For example, Särkämö and colleagues have published several studies in the last few years that show that listening to music after a stroke can help recovery. Stroke patients who listened to music daily had enhanced verbal memory and attention skills compared to those who did not. Särkämö suggests several mechanisms for this effect, including the fact that music improves mood, reducing stress and depression. It can also increase brain plasticity, which is the ability of the brain to form new connections. In this way, music does not only benefit healthy minds, but can help injured brains recover.

This new research has shown us how music can enhance the mind and literally expand the brain. With neuroscience, we are beginning to understand our response to music and its place as one of the most universal aspects of human culture. This is in turn leading us to many interesting discoveries about how the human brain works generally, and has begun to shed light on the human fascination with music.
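The use-it-or-lose-it principle behind these plasticity findings can be caricatured in a few lines of Python. This is a toy Hebbian rule with decay, purely illustrative – the learning and decay rates are invented, not taken from any study above:

```python
def hebbian_update(weight, pre_active, post_active, lr=0.1, decay=0.01):
    """Strengthen a connection when both neurons fire together; otherwise let it fade."""
    coactivation = 1.0 if (pre_active and post_active) else 0.0
    return weight + lr * coactivation - decay * weight

# A connection exercised on every trial (e.g. a violinist's fingering hand)...
w_used = 0.1
for _ in range(100):
    w_used = hebbian_update(w_used, True, True)

# ...versus one that is never co-activated.
w_unused = 0.1
for _ in range(100):
    w_unused = hebbian_update(w_unused, False, False)

print(w_used, w_unused)  # the used pathway strengthens; the unused one decays away
```

Run over many trials, the frequently co-activated connection grows toward a stable strong value while the idle one decays toward zero – a cartoon of why a musician’s fingering hand claims more cortical territory.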

Iona Twaddell is a 2nd year Experimental Psychology student at Wadham College. Art by Amber Barton.

[Figure: The ‘chills’ effect – cerebral blood flow changes in brain regions thought to be associated with reward, including the ventromedial prefrontal cortex, orbitofrontal cortex, amygdala and midbrain.]



Bang! talks to...

Maria Witek

Maria Witek is a fourth year DPhil student based at the Music Department in Oxford. She has a degree in Musicology from the University of Oslo and a Masters in Music Psychology from the University of Sheffield. Her research is a collaborative project with neuroscientists in the Department of Psychiatry at Oxford and researchers at the Centre for Functional Integrative Science at Aarhus University, Denmark.

You’re just finishing your DPhil here in Oxford; what is your research focussed on at the moment?

I’m looking at the relationship between body movement, pleasure and groove, which is a type of music primarily defined in terms of its rhythmic properties and its ability to make people move. So I’m basically asking the question: what is it about certain kinds of music that make us want to dance, and why does it feel good?

What are the methods that you’ve used to address the question?

I’ve used a variety of methods. I did interview-based studies in the past, and an online rating survey. So I simply played music to people in connection with a survey on the internet and asked them to rate to what extent the music made them want to move, and to what extent they felt pleasure when they listened to these rhythms. Then I’ve also done neuroimaging experiments using fMRI. I asked people the same kinds of questions while they were lying in the scanner listening to the grooves, and measured their body responses. Finally I’ve used motion capture, so I’ve actually looked at how people move when they hear these types of grooves, and to what extent their movements and amount of synchronisation change dependent on the rhythmic structure of the groove.

Are you finding a link between the music and people’s response?

The nice thing about using these methods is that you can compare results from the different approaches to the question. So I’ve got subjective ratings as well as neural correlates and measures of body acceleration, and there is a really clear relationship. Depending on the degree of rhythmic complexity you have, you see changes in ratings of the patterns, and the activity you see in the brain, as well as the amount of movement shown by the motion capture.

“I’m looking at the relationship between body movement, pleasure and groove.”

Globally there’s a wide variety in the types of music that people like or gain pleasure from; do you think that there’s a link between them all?

Well, rhythm itself is one thing you might say that links almost all music together. There’s no culture we know about in the world that doesn’t have music. Every culture we’ve known about in history – and today all over the world – has music in some form or another, and interestingly in some cultures there’s no difference between music and dance; there aren’t different words for music and dance. So the two are very intimately linked, and there are a number of theories as to why music is a universal phenomenon among humans. One of them is that the biological function of music might actually be linked to our ability to engage in rhythmic activities, particularly with other people. There have been a number of studies looking at the effect that synchronising body movements to music and to other people has, particularly in interpersonal contexts. It has beneficial effects on social bonding and on cooperation, so it might be that music is a mechanism for motivating us to engage socially with other people, and that it’s channelled through a shared sense of rhythm and pulse in music.

You started off studying music; what brought you around to what most people would consider to fall under science?

I’ve been doing music since I was 6 years old; playing music was how I started getting into it. I played music at university to a certain degree, but I’ve never wanted to be a performer. I was still really passionate about music but I identified myself much more as a listener than a performer, and so my interest in music became focussed towards my experience of music. Through that I started thinking about the physiological mechanisms underlying my experience of music, and that sort of spiralled I guess. So I started off with doing the degree in music psychology, and then for my Masters project I did physiological measurements looking at people’s chill responses to groove-based music, and that led on to me asking these questions about the relationship between pleasure and body movement. In that sense I thought the fMRI would actually be quite a powerful tool. So it just started off with me being a listener rather than being a performer, but wanting to stay in music, and has ended up with me doing more of a science PhD. Although it’s really, really important for me to be in the middle. I think it’s very important to try and bridge this gap between the humanities and sciences. It is really hard, particularly coming from a background where you’re used to just working with ideas and developing them, and then going on to science and having to think quite rigorously about methodology and statistics and physics. Even though I’ve had to go pretty thoroughly into the science it’s really important for me to step back and think about the bigger picture and try and situate my research between humanities and sciences.

Following on from that, in terms of the bigger picture and placing your work within a broader context, what are the implications of the work that you’re doing?

I think we should be really careful with trying to find direct implications of studying the pleasurable effect of music. I’m not trying to reduce music to this one aspect that is going to be biologically, universally applicable and that we might consider to be the key to eliciting pleasure from music. By studying the relationship between pleasure and body movement, the most important thing is to try and understand how the underlying mechanisms that frame our experience affect our more subjective experience of music. When looking at these specific relationships involving our lower-level perceptual mechanisms, it’s not to say that’s their only function or that it’s the only way that we can experience pleasure. There are so many other ways of experiencing pleasure from music that might have nothing to do with rhythm, but it’s still a very interesting question to ask.

Methods of Investigation

1. Interviews and online surveys. Designed to quantify people’s subjective pleasure and urge to move when listening to music.

2. Neuroimaging. Functional MRI scans to measure the change in blood flow in the brain in response to music and rhythm, alongside monitoring of other physiological responses in the body.

3. Motion capture. Film and motion accelerometers (WiiMotes!) to measure how people’s movement and synchronisation changed in response to music and rhythm.

Interview by Laura Soul. Art by Iona Richards.


Living in Harmony How the mystery of beating and roughness was unravelled


Slam your hand onto the piano keys, and the resulting noise will be ‘rough’. This is not meant as an aesthetic judgement; it’s just that some notes played together have a rough quality, which contrasts with the ‘smooth’ sound that occurs when you play, for example, a major chord. Sounds are vibrations travelling through the air, and how high or low a sound is depends on the frequency of the vibrations: faster vibrations mean a higher note. Pure notes are single frequency vibrations – at any instant any sound can be expressed as a sum of pure frequency notes. Be it a car crash or Beethoven’s 5th, one can use a computer to decompose the sound into a combination of these pure notes. We know that the way to eliminate roughness is to play musical notes together that have frequencies which relate to each other in whole number ratios – like 2:1 (the octave) or 3:2 (the perfect fifth). This trick, known since the ancient Greeks, has underpinned the construction of a great deal of Western music, particularly that for choirs and orchestras. Yet it isn’t at all clear why two notes in the ratio 3 to 2 should sound any smoother than two notes in the ratio 3.1 to 2.

In 1885 Hermann von Helmholtz published a neat physical explanation for this remarkable coincidence. When two notes played together are very close in frequency, a ‘beating’ effect occurs; the volume of the sound rapidly increases and decreases. It does so at a frequency equal to the difference between the two original frequencies. He investigated whether beating was the cause of the roughness that we perceive and found that at low beating frequencies we can hear the changes in volume clearly. At very high beating frequencies, it’s impossible to hear any effect because the beating occurs too quickly for the ear to pick up. Between these two extremes – around 30 beats per second – is a rough sound, where we can’t quite resolve the beating, yet still notice some effect.

So far the theory doesn’t predict our use of simple ratios, but real musical instruments don’t produce a pure tone. Instead, they come with a number of overtones, which are designed in most Western instruments to be harmonics (notes with a frequency double, triple etc. the frequency that is actually aimed for). You may not consciously hear the harmonics when a violin or a clarinet plays a middle C, but it is their relative combination that gives each instrument its distinctive tone. Whether two violins playing notes of different frequencies sound smooth or rough depends upon the harmonics of each instrument, and whether the interaction of any pair of the harmonics causes beating.

“Out of two quite simple realisations – that we hear beating as roughness, and that any real musical note is composed of harmonics – the connection between simple ratios and smoothness pops out.”

To visualise the interactions of all the possible pairs, Helmholtz plotted graphs for perceived ‘roughness’ against beating frequency for many different pairs of notes, and then superimposed them all on top of one another – the result is astonishing. Out of two quite simple realisations – that we hear beating as roughness, and that any real musical note is composed of harmonics – the connection between simple ratios and smoothness pops out. Those dips appear at the major third, the perfect fifth, the octave and so on, and these are all intervals commonly used in Western music.

[Figure: Helmholtz plotted the perceived roughness against beating frequency between pairs of notes, and then superimposed them; dips in roughness appear at simple frequency ratios such as 1:1, 6:5, 5:4, 8:5, 5:3 and 7:4.]

Helmholtz didn’t actually use the terms ‘roughness’ and ‘smoothness’. He preferred ‘consonant’ and ‘dissonant’. Roughness and smoothness are physical facts about how sounds interact with the anatomy of the ear; consonance and dissonance are aesthetic judgements. He believed that smooth sounds were consonant, but it is incorrect to assume that this connection is common to all humans. It may be surprising to discover that there are musical cultures where rough harmonies are the building blocks: the Balkan Ganga singers stand out in this respect. Their harmonic intervals are very small – the sound has an extremely rough quality, and yet those in the region associate the music with extreme joy. So some cultures intentionally use roughness in their music and others don’t, but what is actually happening in the ear that allows us all to detect it? It turns out that the frequency gaps that cause beating also cause a chaotic response in the ear.
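The beating effect falls straight out of the trigonometric identity sin(a) + sin(b) = 2·sin((a+b)/2)·cos((a−b)/2): two close frequencies sum to a tone at their average frequency whose volume swells and fades. A short numerical check (illustrative Python; the frequencies are chosen arbitrarily):

```python
import math

f1, f2 = 440.0, 444.0    # two notes 4 Hz apart -> four beats per second
sample_rate = 8000
n = sample_rate          # one second of samples

max_err = 0.0
for i in range(n):
    t = i / sample_rate
    summed = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
    # carrier at the average frequency, envelope at half the difference frequency
    product = (2 * math.sin(2 * math.pi * (f1 + f2) / 2 * t)
                 * math.cos(2 * math.pi * (f1 - f2) / 2 * t))
    max_err = max(max_err, abs(summed - product))

print(max_err)  # ~0: the sum and the carrier-times-envelope form are identical
```

The cosine envelope runs at half the difference frequency, but loudness peaks on both its crests and troughs, so we hear |f1 − f2| beats per second – 4 Hz here.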

The ear relies on resonance. This is a natural phenomenon that we encounter all the time, but it is perhaps demonstrated most vividly in the collapse of bridges such as the Tacoma Narrows (YouTube it!). Things that can oscillate (springs, elastic, pendulums, bridges) have a natural or resonant frequency. Force them to oscillate away from this frequency and they will hardly react; allow them to oscillate at this frequency and they will continuously gain energy, increasing the amplitude of the vibration. You can imagine then that a neat, if crude, way to detect the constituent frequencies of a sound would be to set up a row of oscillators – balls on springs say – with different resonant frequencies. You’d play a sound and see which ones oscillate, and by how much. If it’s a pure note, then you’d only expect one to move (the one on resonance with that frequency); a chord would strongly vibrate a few; a ‘crash’ would be a distribution across all the vibrators.

“These oscillating springs are a very good model for the basilar membrane in the inner ear, which is a thin sheet of body tissue coated with a layer of tiny hairs.”

These oscillating springs are a very good model for the basilar membrane in the inner ear, which is a thin sheet of body tissue coated with a layer of tiny hairs. The membrane is tapered and stiffer at one end, meaning that the resonant frequency of the membrane changes gradually as you move along it. When a given point of the membrane vibrates it causes neurons to fire in that section, sending signals to the brain. However, the membrane doesn’t have perfect resolution, so to model it we divide it into sections of about one millimetre (‘critical bandwidths’). All points on one critical bandwidth will resonate with their associated frequency – one of these bandwidths is analogous to one of the oscillating balls in our model.

“It may be surprising to discover that there are musical cultures where rough harmonies are the building blocks: the Balkan Ganga singers stand out in this respect.”

Suppose we feed two frequencies that are less than one critical bandwidth apart into our model. Both frequencies will be felt by the same ball, but this ball will be unable to oscillate freely at either of the two incoming frequencies. Instead it will vibrate chaotically, unable to respond to the incoming beating wave. In the ear the same thing happens to sections of the basilar membrane. This complex vibration is sensed by the hairs, which send a chaotic signal to the brain. This is what we hear as roughness. This doesn’t aim to explain why a musical chord intrinsically sounds nice or unpleasant to any one of us – but it can predict which chords will sound rough, and the theory behind it is remarkably neat: roughness occurs because our ear isn’t a good enough frequency analyser. Add the fact that all instruments come with harmonics, and the complex possibilities of music emerge.
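The row-of-oscillators idea is easy to mimic numerically. In this hedged sketch (plain Python; a crude correlation-based stand-in for true driven oscillators), each ‘ball on a spring’ becomes a pair of correlations with a sine and cosine at its resonant frequency, and a pure tone lights up only the matching resonator:

```python
import math

def resonator_bank(signal, sample_rate, freqs):
    """Energy picked up by a crude 'resonator' at each candidate frequency."""
    n = len(signal)
    energies = []
    for f in freqs:
        s = sum(x * math.sin(2 * math.pi * f * i / sample_rate)
                for i, x in enumerate(signal))
        c = sum(x * math.cos(2 * math.pi * f * i / sample_rate)
                for i, x in enumerate(signal))
        energies.append((s * s + c * c) / n)
    return energies

# One second of a pure 440 Hz tone...
sample_rate = 8000
tone = [math.sin(2 * math.pi * 440 * i / sample_rate) for i in range(sample_rate)]
# ...fed to three 'balls on springs' tuned to 330, 440 and 550 Hz.
energies = resonator_bank(tone, sample_rate, [330, 440, 550])
print(energies)  # only the 440 Hz resonator responds strongly
```

Feed it two tones closer together than the bank’s frequency spacing and a single resonator collects both, mirroring how one critical bandwidth of the basilar membrane receives the unresolvable beating wave.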

David Kell is a 3rd year Physics and Philosophy student at Balliol College. Art by David Kell.

The Rhythm of Life Twisting genetic data into song reveals unexpected insights


Have you ever heard a protein? Expressing a biological element through sound may seem ridiculous, but perhaps the analogy between genetic code and music is not such a bizarre one. After all, the true ‘meaning’ of both genes and music lies in a particular combination of elements in a linear sequence. Just as a finite number of musical notes can be arranged into a plethora of different phrases, melodies and pieces, so the DNA bases are grouped to form life’s diverse array of amino acids and proteins. The idea of ‘genetic music’ came to the fore in the 1980s when several high-impact papers emerged, the first of which explored music as a means of understanding and remembering DNA sequences. In a paper published in Nature in 1984, Hayashi and Munakata at the National Cancer Research Institute in Tokyo used a system that allocated pitches to the four DNA bases. They suggested that this helped to ‘expose’ specific sequences, making them easier to recognise, as well as providing an artistic representation of the beauty of life.

Ohno and Ohno of the Beckman Institute, California, went further than this and assigned a range of notes per base. They also explored two structural features highly prevalent in both music and genetic code: repetition and palindromes. Their ‘Song in Praise of Peptide Palindromes’ was published in Leukemia in 1993. One of their more fascinating projects was to work in reverse, using the now-established methodology to convert Chopin’s Nocturne Op. 55 No. 1 into a DNA sequence. Astonishingly, the ‘translation’ of a recurring theme of the nocturne differed by only one nucleotide from a subunit of the enzyme DNA polymerase II in mice. This was most probably a coincidence, but it does highlight that there are some common principles at play.

The 90s brought more exploratory approaches, using musical elements based on features such as hydrophobicity, molecular vibrations of DNA, X-ray crystallography, and acidity of proteins. Some composers assigned different instruments to each structural feature, for example the vibraphone for calcium binding sites and the flute for alpha helices. Others included rhythms according to sequence frequency or distribution. Thus genetic music began to evolve from something crude and functional into a more musical and artistic representation.

Arguably the best-known work in the field came from Takahashi and Miller of UCLA, published in Genome Biology in 2007. They recognised that there were euphonic problems associated with a one-to-one amino acid to musical note assignment scheme, and set about adding dynamics, rhythms and accompaniments. Their algorithm was considerably more complex, but the result is still a long way from easy listening. When I asked Takahashi how their work is progressing, she directed me to a more recent article done in collaboration with Dan Klionsky at the University of Michigan. They have produced a piece of music based on the formation of a particular protein complex. “Previously we had used a single instrument to represent one protein at a time. [Now] we have taken the project to the next phase by creating an orchestration to represent a biological process,” Takahashi explained.

There is undeniable potential for such music in bringing genetics to a wider audience. Last year saw the launch of the ‘Genetic Music Project’: an open-source community art project which encourages people from all walks of life to take a creative approach to DNA-based music composition. Contributions are loosely structured around anything from the catfish genome to curator Greg Lukianoff’s genetic predisposition to heroin addiction. Not convinced that “Variations on a Theme of Acid Sphingomyelinase Enzyme” sounds like a bestseller? Check out some of the papers and sites and judge for yourself. The scientific value may not be clear, but there’s something poignant about Lukianoff’s take on it: “Another way of thinking about it is that each and every one of us and all life on this planet is made of music.”

Rachel Tanner is a DPhil student in Medicine at St. Cross College. Art by Aparna Ghosh.
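The base-to-pitch idea is simple enough to sketch in a few lines of Python. The mapping below is invented for illustration – it does not reproduce Hayashi and Munakata’s actual pitch assignments:

```python
# Hypothetical base-to-pitch mapping, chosen for illustration only.
BASE_TO_NOTE = {"A": "A4", "C": "C4", "G": "G4", "T": "E4"}

def dna_to_melody(sequence):
    """Turn a DNA string into a list of note names, skipping anything that isn't a base."""
    return [BASE_TO_NOTE[b] for b in sequence.upper() if b in BASE_TO_NOTE]

print(dna_to_melody("ATG GCC"))  # -> ['A4', 'E4', 'G4', 'G4', 'C4', 'C4']
```

Ohno and Ohno’s refinement of assigning a range of notes per base would replace the single note per entry with a list and a rule for choosing among its members.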


Musical Selection Darwinian evolution is teaching us about popular music


It is fair to say that musical styles have changed over the years. Now studies exploring this ‘musical evolution’ are demonstrating a new application for Darwin’s theory of natural selection. This unlikely combination may provide insight into how popular music is continually evolving, with research into the selective pressures of the music industry, and proof that applying a simple Darwinian process to computer-generated sounds can produce music without musicians.

A research team based at Bristol University has developed a ‘hit potential equation’: an algorithm to predict the popularity of a song. They examined the UK Top 40 singles chart over the past 50 years, and quantified 23 musical features including tempo, loudness and harmonic simplicity. Some interesting observations arose from their research. For example, it was only from the 1980s onwards that ‘danceability’ (no doubt a fun one to measure) became a determining factor in a song’s popularity. Interestingly, the accuracy of the ‘hit potential equation’ was lowest around 1980, suggesting this was an especially creative period for pop music. Using the equation they found they could classify a song as a ‘hit’ or ‘not hit’ with a long-term accuracy of almost 60%, surpassing previous studies. The major factor in their success: a time-shifting algorithm that accounts for evolving musical taste. “We have found the hit potential of a song depends on the era,” said Dr Tijl De Bie, the research leader. “This may be due to the varying dominant music style, culture and environment.”

There are three main selective pressures on the evolution of popular music. These are the producers (e.g. musicians), the individual consumers, and the consumer-group (i.e. one’s choice of song, influenced by other people’s preferences). With our increasing ability to download, manipulate and share music via social-networking sites, music production is being democratised and the weighting of these different selective pressures will continue to shift.

But what if the producers were removed? A group from Imperial College London sought to test whether music could be created without a composer. In 2009 they started a programme that allows music to evolve via Darwinian natural selection, in a process that has clear parallels with how life evolves. They called the computer algorithm behind the study ‘DarwinTunes’. A DarwinTunes population has 100 loops of music, each of which is eight seconds long. They are streamed in a random order and volunteers rate them on a five-point scale from “I can’t stand it” to “I love it”. When twenty loops have been rated, DarwinTunes takes the top ten loops, pairs them up as ‘parents’, and allows them to ‘mate’ to create twenty new ‘daughter’ loops that contain a blend of their parents’ musical features. These daughter loops replace the original parents and the less pleasing non-parents. This process represents one ‘generation’ of musical evolution.

DarwinTunes has now reached over 7200 generations, with well over 7000 web users having participated in the experiment. As the generations rolled on, the music noticeably began to ‘evolve’. In a further experiment loops were taken from random generations and blindly scored by members of the public. Without knowledge of a loop’s generational age, the public consistently ranked more evolved music as most appealing. Furthermore, the researchers found that the rhythm and chord features of the DarwinTunes loops had begun to resemble those of contemporary Western music. By bypassing the usual components of producers and musicians, the programme highlights the creative role of consumer selection in shaping the music we listen to. Armand Leroi, co-author of the research, said: “Every
time someone downloads one track rather than another they are exercising a choice, and a million choices is a million creative acts. After all, that’s how natural selection created all of life on Earth, and if blind variation and selection can do that, then we reckoned it should be able to make a pop tune.”

Through both the ‘hit potential equation’ and DarwinTunes we are being shown that music – something inanimate, a cultural dynamic – is shaped by competing evolutionary forces. But perhaps more fascinating than just plotting out popular music’s evolutionary tree is waiting to see how it will evolve next. What will emerge from all the mating and mutating of the melodies and rhythms of the music that we listen to now?

Alex Gwyther graduated in Biological Sciences from Magdalen College. Art by Hope Simpson.
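The selection scheme described above – rate twenty loops, keep the top ten as parents, and let them mate to produce twenty daughters – is a classic genetic algorithm. Here is a minimal sketch of that loop; the reduction of each musical loop to a vector of eight numbers, the stand-in fitness function replacing listener ratings, and the mutation rate are all my own simplifications, not details of the DarwinTunes system.

```python
import random

random.seed(0)
LOOP_LEN = 8          # each 'loop' is just a vector of 8 numbers here
POP_SIZE = 20

def fitness(loop):
    """Stand-in for listener ratings: prefer values near 0.5 (purely illustrative)."""
    return -sum(abs(x - 0.5) for x in loop)

def mate(mum, dad):
    """A daughter loop blends its parents' features, with occasional mutation."""
    child = [random.choice(pair) for pair in zip(mum, dad)]
    if random.random() < 0.1:
        child[random.randrange(LOOP_LEN)] = random.random()
    return child

# Start from a random population of loops.
population = [[random.random() for _ in range(LOOP_LEN)] for _ in range(POP_SIZE)]

for generation in range(500):
    # Keep the best-rated half of the population as parents...
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    # ...and let them mate to produce a full replacement population.
    population = [mate(*random.sample(parents, 2)) for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"best loop fitness after evolution: {fitness(best):.3f}")
```

As in DarwinTunes, nothing in the loop composes anything: variation comes from blind recombination and mutation, and all of the creative work is done by the selection step.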


Simulating the Dyslexic Brain Can computer models accurately replicate language defects?


“I ate steak in hall.” “There’s a leak in college.” “I found a wug in my room.” Just reading these sentences is an immense challenge for some individuals with acquired dyslexia – a reading disorder resulting from injury to the brain before which the individual’s reading ability was normal. Common causes of such brain injury range from strokes and tumours to head trauma and skull fractures. Acquired dyslexia, or alexia, has traditionally been considered the result of damage to one of two neurological routes we use when reading. However, is there scope for computer simulations of the human brain to provide a more convincing explanation for alexia?

The first and third sentences at the start of this article each contain a word which is particularly tricky for individuals with different types of alexia. Those with phonological dyslexia struggle to read made-up words such as “wug”, while people who have surface dyslexia struggle to read words with irregular pronunciations such as “steak”, despite being able to read the letters “eak” when their pronunciation is regular, as is the case in “leak”.

The traditional dual route theory of reading appears to explain surface dyslexia well: it claims that we pronounce words either by applying default spelling-to-sound conversion rules or by looking up information about their (possibly irregular) sounds stored in our mental dictionary. The latter memory-based route is said to be impaired in the brains of surface dyslexics, leaving them with only the rule-based route intact. Hence, they pronounce “eak” in “steak” in the same way that one would normally pronounce “leak”.

However some surface dyslexic patients display symptoms which suggest the mental dictionary route remains partially intact. Take the example of Patient MP: he shows 40% accuracy in reading irregularly pronounced words which are rarely used in English, but a staggering 85% accuracy in reading irregularly pronounced words which are frequently used, such as “have”. This hints at the partial survival of his mental dictionary – it is extremely unlikely that the frequency with which a word is used affects how easily we misapply a pronunciation rule to it. It is, however, well-known that a word’s frequency of use determines how easily it is accessed from our mental dictionary. Findings such as those from Patient MP suggest that the two routes to reading interact with each other to a much greater extent than originally believed.

This has led to interest in so-called ‘connectionist models’: computer simulations of human neurological networks processing language. Since proposing their landmark model in 1989, two American psychologists – Mark Seidenberg and James McClelland – have been well-known for their research in this area. Unlike the dual-route model, Seidenberg and McClelland’s simulation does not work using regular spelling-to-sound rules, and proposes that there is only one route for reading. The model contains three layers of units, with each unit representing a brain cell. The visual input of a word activates the input layer consisting of ‘spelling units’, which are then associated by experience with the output layer of ‘sound units’. A layer of ‘hidden units’ exists between the input and output layers, increasing the range of possible connections. Each of these connections is ‘weighted’ by a certain value so that activation instantly spreads from the spelling units across the network, turning output sound units on or off according to their weighting. If the simulation produces the wrong sound in response to a word’s spelling, it corrects itself by changing the weight of connections between units.
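This error-correction scheme can be sketched as a toy three-layer network. Everything here – the sizes of the ‘spelling’, ‘hidden’ and ‘sound’ layers, the random training pattern, the learning rate – is invented for illustration; this is the general flavour of such models, not the Seidenberg–McClelland model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 10 'spelling units' in, 8 'hidden units', 6 'sound units' out.
n_spell, n_hidden, n_sound = 10, 8, 6
W1 = rng.normal(0, 0.5, (n_spell, n_hidden))   # spelling -> hidden connection weights
W2 = rng.normal(0, 0.5, (n_hidden, n_sound))   # hidden -> sound connection weights

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def pronounce(spelling):
    """Activation spreads from spelling units through hidden units to sound units."""
    hidden = sigmoid(spelling @ W1)
    sound = sigmoid(hidden @ W2)
    return hidden, sound

# One made-up training pair: a spelling pattern and its 'correct' sound pattern.
spelling = rng.integers(0, 2, n_spell).astype(float)
target = rng.integers(0, 2, n_sound).astype(float)

lr = 0.5
for _ in range(2000):
    hidden, sound = pronounce(spelling)
    # Wrong sound produced? Correct the network by adjusting connection weights.
    err_out = (sound - target) * sound * (1 - sound)
    err_hid = (err_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * np.outer(hidden, err_out)
    W1 -= lr * np.outer(spelling, err_hid)

_, sound = pronounce(spelling)
print(np.round(sound))   # rounded output should now match the target pattern
```

The key point matches the article: there is no dictionary and no rule table, only weighted connections that are nudged whenever the produced sound is wrong.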

[Illustration: the network pronounces “steak” as “STAKE” via its hidden layer – correct connections carry heavy weighting, incorrect ones light weighting.]

But how do you model damage and simulate the dyslexic brain? One possibility is to reset the connection weights between layers to zero. To some extent this gives rise to the symptoms of surface dyslexia: the greater the number of connection weights reset, the more likely the model is to produce a regular pronunciation of irregular words such as “steak”. However, even before being damaged the model makes errors in pronouncing made-up words such as “wug” – a mistake which would only be made by those with phonological and not surface dyslexia. Yet at the same time, the model can at least represent the phonologically dyslexic brain well, a representation that matches even more of the symptoms of real-life alexics if the sound units are damaged too.
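Lesioning a model in this way is straightforward to sketch: pick a random fraction of connection weights and reset them to zero. The array size and damage fractions below are arbitrary illustrations, not values from the published simulations.

```python
import numpy as np

rng = np.random.default_rng(1)

def lesion(weights, fraction):
    """Simulate brain damage by resetting a random fraction of connection weights to zero."""
    damaged = weights.copy()
    mask = rng.random(weights.shape) < fraction
    damaged[mask] = 0.0
    return damaged

W = rng.normal(size=(8, 6))    # stands in for a trained layer of connection weights
mild = lesion(W, 0.2)          # mild damage: roughly 20% of connections severed
severe = lesion(W, 0.8)        # severe damage: most connections severed
print(np.count_nonzero(mild), np.count_nonzero(severe))
```

The more connections are severed, the further the network’s output drifts from what it learned – mirroring the article’s point that heavier lesioning produces more regularised pronunciations of irregular words.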

You have just spent five minutes reading 814 words, which to you may feel effortless but to many sufferers of dyslexia would pose a real challenge. There still exists no definitive answer to the difference between a ‘normal’ brain and that of an alexic individual that gives rise to this disparity in reading ability. However, connectionist models provide a step forward with their perspective of the brain as a highly interactive network of competing cells, a view which better resembles the biology of the mind than traditional theories that see the brain as a collection of distinct, independent processing channels.

Robert Blakey is a 2nd year Experimental Psychology student at St. Catherine’s College. Art by Ning Yu.

Monsters of Legend Can science explain the myths?


Throughout history, man has told tales of fantastical creatures to explain the world around him. More recently science has disproved many of our tall tales, but every so often we find an explanation so wonderful, it just might make a documentary on the Discovery Channel. Cryptozoology is the study of unverified creatures. These creatures, known as cryptids, are the subjects of many great legends, from the monster at Loch Ness through to the inhabitants of the Lost World. Despite the allure of Nessie and dreams of dragons, cryptozoology suffers from something of a stigma in the scientific world because the field often attracts the sort of people that go UFO hunting. Even so, legends have to start somewhere...

The Kraken
The Kraken is a sea creature so vast that it leaves whirlpools in its wake and can wrap its tentacles around a ship to drag it to the bottom of the sea. The origin of this myth is actually living in the seas of today, although its size may have been exaggerated a little bit. Several species of giant squid (possible real-world counterparts to the mythical Kraken) have been identified, measuring up to 14m, with eyes over 30cm in diameter.


“Several species of giant squid have been identified, measuring up to 14m.”

The giant size of these species is the result of deep-sea gigantification, the causes of which are still unclear but there are some hypotheses. When a large animal dies and sinks to the ocean floor it provides an abundance of food. Scavengers that are larger are evolutionarily selected for because they can detect and travel to the food faster. This explains gigantification in 76cm crustaceans. However the colossal squid, Mesonychoteuthis hamiltoni, has a metabolism too low to travel long distances quickly, which a giant deep-sea scavenger would need to do. It is instead hypothesised to be a ‘sit and float predator’ that uses its tentacles to catch fish that can’t detect it in the dark. Metabolic calculations suggest that a 5kg fish would provide a 500kg colossal squid with enough energy for 200 days. This strikingly low metabolism is a result of the extremely cold water that they live in, at depths below 1000m in the Southern Ocean. This cold may also explain their size. In ectotherms (cold-blooded animals) cold temperatures lead to a greater final body size because although they grow more slowly, they do so for a longer period of time. Despite their size they are prey for sperm whales, which have been found with scars from the hooks on the tentacles of giant squid. Somewhere out in the deep blue oceans the mythical fight between a whale and a giant squid is actually happening!
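The “200 days from one 5kg fish” figure can be checked with back-of-the-envelope arithmetic. The ~5 MJ/kg energy density assumed for fish flesh here is my own rough assumption, not a value from the article, so the result is only indicative of scale.

```python
fish_mass_kg = 5.0
energy_density_mj_per_kg = 5.0   # assumed energy content of fish flesh (illustrative)
squid_mass_kg = 500.0
days = 200.0

total_energy_j = fish_mass_kg * energy_density_mj_per_kg * 1e6   # 25 MJ per fish
daily_energy_j = total_energy_j / days                           # energy budget per day
watts = daily_energy_j / 86_400                                  # average metabolic power
watts_per_kg = watts / squid_mass_kg                             # mass-specific power

print(f"{watts:.2f} W total, {watts_per_kg * 1000:.2f} mW per kg")
```

Under these assumptions the squid runs on roughly the power of a small LED – a few milliwatts per kilogram – which illustrates just how strikingly low a ‘sit and float predator’ metabolism is.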


“Some birds-of-prey have been observed to fly off with mammals as large as sloths.”

Roc
Rocs are mythical birds of prey so massive that stories involve them carrying off an elephant. Some birds have been observed to fly off with mammals as large as sloths or lambs, but the storytellers of old probably got a little carried away with the size of rocs. The largest real-world bird-of-prey is Haast’s eagle from New Zealand, weighing a huge 15kg with a 3m wingspan. Disappointingly, humans have caused their demise by hunting their prey (the 2m tall moa bird) to extinction. Haast’s eagle is an example of island gigantification. When a species arrives on an island there may be new ecological roles available to them which they evolve to fill. When Haast’s eagle first colonised New Zealand there were no large land mammals and so no large predators. In most ecosystems mammals fill the role of large predators but in their absence Haast’s eagle took on this role. The eagle then evolved to become larger and larger so that it could prey on the animals of the island that had no other natural predators.

El Chupacabra

“In the legends the beast is often described as a spiny lizard man”

This is the legend of a blood-sucking creature from Puerto Rico. It stalks the land, draining animals of their blood through vampiresque puncture wounds in the neck, like a Latin American Edward Cullen. In the legends the beast is often described as a spiny lizard man, but most sightings report a kind of warped skinless wolf. Unfortunately the scientific community has concluded that the truth is not as exciting. Chupacabras are actually coyotes infected with the parasite Sarcoptes scabiei, which explains their furless appearance and foul smell. People had thought that because their prey was not found in a pool of blood, it must have been sucked dry. However, the bodies can just as easily be explained by a bite to the throat crushing the windpipe (a common attack for carnivores). As for the uneaten bodies, that at least remains mysterious. The poor ill coyotes may just have been scared away by all the Chupacabra-hunting cryptozoologists!

Most of the animals found in human legends have undergone some form of gigantification, and in the myth their size is exaggerated even further. According to the storytellers, bigger really is better.

David Wells is a 3rd year Biological Sciences student at Queen’s College. Art by Iona Richards.


Unlocking Stem Cells Are pluripotent cells the miracle of the future, or a research dead-end?


Stem cells are seldom out of the media. Their possibilities are reminiscent of science fiction – cell ‘rewinding’, organ regeneration and disease elimination, to name a few. A stem cell is a cell that is capable of changing, or differentiating, into a particular cell type. Humans have around 220 different cell types. A so-called ‘pluripotent’ stem cell has the ability to become any one of these. In comparison ‘multipotent’ stem cells are less versatile, so they can only differentiate into a handful of cell types. Adults possess some multipotent stem cells in their bone marrow and spleen. These cells can transform into some of the types of blood cells we need to maintain a functioning immune system. In patients with leukaemia, their bone marrow stem cells do not give rise to fully functioning blood cells. Thus the patient has a compromised immune system. A way of treating this is to perform a bone marrow transplant, which introduces functioning stem cells into an immune-compromised patient, allowing recovery. This is an example of stem cell therapy, and was first performed in the 1960s.

What makes stem cells so exciting? Let’s take the biggest killer in Britain as an example of their potential. Each year in the UK there are 94,000 deaths from coronary heart disease (CHD); additionally, up to 2.4 million people live with the condition. Modern medicine cannot undo the damage to the heart done by CHD and this limits the lives of patients living with it. Stem cells could be used to improve the heart’s condition. In addition, the charity Parkinson’s UK has so far invested two million pounds into stem cell research. They state that their eventual hope is to “grow new dopamine-producing nerve cells” to replace those that are progressively lost during the course of the disease.

Of course, this Frankenstein-esque notion of growing hearts, brain tissue, or even limbs in the lab is exaggerated. Other less dramatic possibilities for stem cells include drug development and more sophisticated disease modelling techniques. Furthermore, we could be moving towards a future where diabetes, heart disease or paralysis need not restrict people’s lives. The therapeutic potential of stem cells is vast; sadly, these cells have proved rather difficult to handle, both in a political and practical sense.

Human embryonic stem cells (hESCs) are the ‘gold standard’ for pluripotency. However, the method of obtaining hESCs involves the destruction of early stage human embryos, and thus provokes an extreme reaction from some people. Research embryos are donated by IVF clinics; they would otherwise be destroyed because couples consider them to be surplus material. Nevertheless, many people believe that the use of these embryos is tantamount to murder. The ethical quagmire that faces researchers working with hESCs in labs worldwide means that it could be years before progress is made. This is particularly true in the US, although recent policy change by the Obama administration has improved prospects for American researchers.

Another unavoidable drawback of using hESCs is the threat to patients posed by immune rejection. The introduction of foreign tissue in the form of hESCs provokes an immune reaction – the immune system destroys the cells before any therapeutic change can occur. Hence immunosuppressant drugs are required to prevent this rejection. However, when hESCs are introduced into immunosuppressed mice, growths form throughout the body. These growths are known as teratomas; they are usually benign but in rare cases may be highly dangerous. This is a clear disadvantage when considering potential patient administration. This doesn’t mean hESCs don’t still have tremendous promise, as Chen et al. demonstrated this year when they succeeded in reversing deafness in gerbils, giving hope to 250 million hearing-impaired people worldwide. However we cannot ignore the evident drawbacks of embryonic stem cells.

Despite their potential, the ethical and practical constraints of hESCs made researchers look elsewhere. In 2007, Yamanaka and his team of Japanese researchers made a discovery so simple and elegant that it was initially dismissed as fakery. They used a simple cocktail of transcription factors – proteins controlling gene expression – to literally ‘rewind’ cells to their embryonic state. Just three or four factors are required, which are delivered to an adult somatic cell – the term ‘somatic’ refers to the cells that make up our internal organs, skin, bone, blood, and connective tissue. The result of this process is an embryonic-like stem cell known as an induced pluripotent stem cell (iPSC). The beauty of these cells is that no foetal material is required – a patient skin biopsy would provide the requisite tissue – and hence there is no ethical backlash.

There is also an almost complete reduction in the risk of immune reaction in the host animal, as its own cells are used. A patient’s fibroblasts, a type of cell found in connective tissue, could be ‘rewound’ to stem cell state, before being exposed to the relevant conditions and transcription factors to produce healthy new tissue. For example, new insulin-producing cells could be made from a diabetic’s fibroblasts, before transplantation into their pancreas. Simply put, iPSCs exhibit the majority of hESC properties. Proof of concept is available, as research teams in several different countries have reported success in producing muscle cells and other types of tissue.

[Illustration: a fibroblast cell is reprogrammed by a cocktail of transcription factors into iPS cells, which can then differentiate into haematopoietic progenitor cells, cardiomyocytes, adipocytes, dopaminergic neurons, motor neurons, other neural cells and pancreatic cells.]

So iPSCs appear to be an elegant solution. Of course, scientific progress is never so simple! The success rate of this cell reprogramming remains low. The cells that are successfully reprogrammed are identified by noting whether they have the same morphology, cell surface markers and growth properties as naturally derived stem cells. They may also be confirmed as ‘true’ iPSCs when their differentiation capacity is tested. Yamanaka’s cells had a successful reprogramming rate of just 0.1 to 1%. This must be hugely improved if we are ever to generate the number of cells necessary for therapy; unsuccessfully reprogrammed cells are unsafe to introduce into a host organism. Additionally, one of the commonly used transcription factors is an oncogene – it has the potential to cause cancer – so iPSCs programmed using this cannot be introduced into patients.

iPSCs are an exciting prospect for research; however, looking beyond the media fanfare it is clear that much progress is yet to be made. With around 200 laboratories worldwide currently undertaking stem cell research, we can’t be sure of what the future will bring. Scientists don’t yet understand the mystery of cellular reprogramming – the methods are still in their infancy. iPSCs are not a replacement for embryonic stem cells in research, as both have much to offer the scientific community. In years to come, we will be able to understand human disease better than ever before, as well as treat those who are suffering more efficiently. That is cause for a great deal of hope.

Sophie McManus is a 2nd year student of Biomedical Sciences at Magdalen College. Art by Iona Richards and Natasha Lewis.

To the Moon and Back The unprecedented and unsurpassed Apollo 11


“I believe this nation should commit itself to achieving the goal, before the decade is out, of landing a man on the moon and returning him safely to Earth.” These were the words of President John F. Kennedy in a speech to the US Congress in 1961, issuing a new challenge in the midst of the Cold War. Despite Kennedy’s attempts to boost American space exploration before his assassination, the USSR remained one step ahead. They had already launched their man-made satellite, Sputnik, in 1957, the first of its kind to orbit the Earth. Furthermore, the first man in space, the first man to perform a spacewalk, and the first spacecraft to orbit and perform a soft-landing on the moon were all launched by the Soviet Union. It seemed the US was forever playing catch-up. Luckily for the Americans, NASA’s Apollo programme was gaining speed, and the Apollo 11 spaceflight was soon to achieve its ultimate goal in the space race. On July 16th 1969, watched by vast crowds and millions of television viewers worldwide, Apollo 11 was launched by a Saturn V rocket from the Kennedy Space Centre in Florida. The spaceship consisted of two parts: the Command and Service Module, Columbia, and the Lunar Module, Eagle. Manning the craft were Command Module Pilot Michael Collins, Lunar Module Pilot Edwin Aldrin Jr (better known as “Buzz”), and Commander Neil Armstrong.

Apollo 11 entered lunar orbit on July 19th, 75 hours and 50 minutes after launch. Eagle and Columbia separated on July 20th, initiating Armstrong and Aldrin’s descent to the lunar surface in the Eagle, while Collins remained in lunar orbit aboard Columbia. As they rapidly approached the surface, Armstrong realised the terrain was a lot rougher than first expected; Eagle would not be able to land automatically as planned. He took partial control of the craft and was forced to avoid landing in a large crater. Nevertheless, 102 hours and 45 minutes into the mission, Eagle landed safely on an area known as the Sea of Tranquillity. Armstrong immediately reported back to Earth. “Houston, Tranquillity Base here. The Eagle has landed!” Houston responded: “Roger, Tranquillity. We copy you on the ground. You’ve got a bunch of guys about to turn blue. We’re breathing again. Thanks a lot.” Approximately six hours later, watched by over 600 million captivated viewers worldwide, Neil Armstrong became the first human to set foot on the moon. His message back to Earth was simple yet profound: “That’s one small step for man, one giant leap for mankind.” Buzz Aldrin followed him about 20 minutes later and described the scene as “magnificent desolation”. While on the surface, the astronauts took photographs of the landscape, collected 21.7kg of moon rocks, and took two core tube samples of the lunar surface which contained material from up to 13 centimetres below the ground. Scientists would later run tests on these samples, which were found to show no traces of water and could not provide evidence for any life forms ever having existed on the moon.

After spending more than 21 hours on the moon, around two and a half of which had been spent walking on the surface, Armstrong and Aldrin re-docked with and climbed back aboard the Columbia, and headed back to Earth. The Eagle remained in lunar orbit, presumably crashing into the moon’s surface after a few months. Concluding a mission lasting over 195 hours, Columbia landed safely in the Pacific Ocean on July 24th, where the astronauts were retrieved by the USS Hornet. The mission was a tremendous success, and Kennedy’s legacy had been fulfilled. The Apollo 11 spaceflight had essentially ended the space race. But perhaps this historic mission can be best summed up by what Armstrong and Aldrin left behind on the surface: two commemorative medals bearing the names of the three Apollo 1 astronauts who’d been tragically killed in a launch pad fire, as well as two cosmonauts who’d lost their lives in accidents. The names of many NASA leaders and a silicon disk containing goodwill messages from 73 countries also remained. But maybe most significant of all was the plaque signed by all three crew members as well as then-President Richard Nixon, which read:

Here Men From The Planet Earth First Set Foot Upon The Moon July 1969, A.D. We Came In Peace For All Mankind

Mahnoor Naeem is a 2nd year Chemistry student at Keble College. Art by Natasha Lewis.


Amazing Algae The unicellular organisms that control the planet


Predicting our planet’s response to climate change could be key in mitigating its damaging effects. Models for making these predictions utilise two main sources of information: lab-based experimentation, to understand climatic variables in a controlled environment, and records of past climate preserved in sediments. A great deal of research that is incorporated into predictive models focuses on a surprising source: single-celled marine algae. These tiny organisms are normally only visible where upwelling, nutrient-rich ocean waters cause rapid population explosions, which result in vast green ‘algal blooms’ that can be photographed from space. Photosynthetic algae are responsible for 45% of all primary productivity on Earth, and are the base of the ocean’s food chain. 70% of the Earth’s surface is covered by water, and so changes in the abundance, location and evolution of marine algae will dominate changes in life on Earth overall. They do not just play an important role in the food chain; they are also intrinsic to the global carbon cycle. In the top layers of the ocean, marine phytoplankton draw down carbon dioxide from the atmosphere for use in photosynthesis – converting



it to oxygen and chemical energy. Some classes of algae also take up carbon dioxide (CO2) and use it to biomineralise by building hard shells of calcium carbonate (CaCO3) around themselves. When these organisms die they fall to the ocean floor, forming sediment layers and removing carbon from the active carbon cycle. Over millions of years these sediments lithify into rock, locking the CO2 away from the atmosphere: the white cliffs of Dover are an excellent example of this. Carbon dioxide is the main greenhouse gas contributing to global warming, and so the natural drawdown of CO2 via the photosynthesis and biomineralisation of these organisms is an important factor in global climate. How can we use these organisms to predict future climate response? As mentioned, research groups across many countries are using a variety of lab-based approaches to do just that. They grow communities of particular species and subject them to controlled temperature, CO2 and nutrient (such as phosphorous and iron) conditions. By doing so they discover which species, if any, can cope with changes in their environment most readily, and are therefore most likely to survive into the future. We can also observe algal responses in the lab to make estimates of

An algal bloom occurred off the coast of Cornwall in 1999 and was photographed via satellite.

previous changes in climate. Due to their hard shells, algae from millions of years ago all the way up to the present day are preserved in sediments on the ocean floor. Researchers take cores of these ancient sediments and extract and analyse the chemical composition of both the hard shells, and any organic material that remains. There is a relationship between a live organism’s rate of uptake of certain compounds and the state (temperature, chemistry etc.) of its seawater environment. This signal is locked in when the organism dies. By finding the exact relationship between the organisms and their environment, we can use changes in their composition through time as a proxy for past changes in environment. The ultimate goal is to establish relationships in controlled laboratory conditions and then try to observe the same relationships in the geological record. Of course it is never that simple, as many factors can simultaneously affect the compositions, making each proxy imperfect. It is the researchers’ job to try and untangle all these confounding factors, and to corroborate conclusions from one proxy by matching them with others. By no means do we fully understand the Earth’s biological response to climate shifts. However, this research brings us closer to doing so, hopefully allowing us to focus future efforts on reducing the detrimental effects that change in the state of the oceans and atmosphere has on biological systems.
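The calibrate-then-invert logic described above can be sketched as a simple linear proxy calibration. The paired temperature and composition values below are invented for illustration; real proxies involve multiple confounding factors and careful uncertainty estimates.

```python
import numpy as np

# Lab calibration: grow algae at known temperatures and measure shell composition.
# These paired values are invented for illustration only.
temps_c = np.array([8.0, 12.0, 16.0, 20.0, 24.0])
composition = np.array([1.1, 1.5, 1.9, 2.3, 2.7])   # e.g. a shell chemistry ratio

# Fit the linear relationship: composition = a * temperature + b.
a, b = np.polyfit(temps_c, composition, deg=1)

def past_temperature(fossil_composition):
    """Invert the calibration to estimate the seawater temperature a fossil shell grew in."""
    return (fossil_composition - b) / a

print(round(past_temperature(2.1), 1))   # prints 18.0 under this invented calibration
```

In practice researchers would fit such relationships for several independent proxies and check that the reconstructed temperatures agree, since any single proxy can be skewed by factors other than the one being reconstructed.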

Laura Soul is a DPhil student in Earth Sciences at Worcester College. Art by Laura Soul.


Bang! is online Our website is bursting with articles, artwork, breaking news and events. You can read our blogs, download digital versions of past issues, and find ways to contact us. Follow us on Facebook & Twitter to be kept up to date with the latest scientific advances in Oxford and beyond! @bangscience

The Centre for Doctoral Training on Theory and Simulation of Materials (TSM-CDT) at Imperial College London is the UK’s centre of excellence for research in theory and simulation, inspired by the challenges of today and the future


What you bring:
• Strong aptitude for theory and mathematics
• Passion for cutting-edge research in areas including:
  theoretical condensed matter physics
  solid and fluid mechanics
  quantum chemistry
  energy materials
  aerospace and structural materials

Applicants should have or expect to obtain a first class degree in physics, engineering, chemistry, materials or applied mathematics. Apply now for entry in October 2013. For further details visit

Bang! Science Magazine, Issue 12  

Issue 12 of Bang! Science Magazine - the Music Issue
