The Oxford Scientist: Famous Firsts (#1)


the Oxford Scientist

Hilary Term 2018


HAVE YOU THOUGHT ABOUT... A CAREER AS A PATENT ATTORNEY?

An intellectually challenging and rewarding career option

What Does It Involve?

Training as a Patent Attorney is a career path that will enable you to combine your understanding of science with legal expertise. You will leave the lab environment yet remain at the cutting edge of science and technology, applying your knowledge and skill in a commercial context. You will help to protect intellectual property assets and grow businesses.

James Egleton, MChem in Chemistry, University of Oxford (2011); DPhil in Organic Chemistry, University of Oxford (2015)

Eleanor Healey, BA and MSci in Natural Sciences, University of Cambridge (2011); DPhil in Structural Biology, University of Oxford (2015)

Sound Interesting? Patent and Trade Mark Attorneys in London, Oxford, Cambridge and Munich. We welcome applications from exceptional candidates at any time of the year.

www.jakemp.com/careers


contents

4 Editorial
5 News & What's On
7 Scientists Online
8 The Oxford Scientist Debate
9 The Oxford Scientist Interviews Prof. Ursula Martin
11 I Am Therefore I Think?
13 Quantum Computers: Solving the Unsolvable
14 Do Androids Drive Electric Lorries?
15 First Humans: an Infographic
15 Neanderthal: Brute or Brainbox?
17 The Immortal Woman
18 Firestarter
19 The End of the Giants
21 Beam Me Up, Scotty
22 Checkmate
23 Technology Imitates Life
24 Bitcoin: Can't Make Head or Tails of It?
25 DNA Doctoring
26 Around the World in a Solar-Powered Plane
28 First Contact
29 Crossword
30 The Oxford Scientist Schools Writing Competition

corrections
The Michaelmas 2017 print issue of Bang! contained two misattributed articles. Ian Foo wrote 'My Other Car Drives Itself', and Joseph Elliott wrote 'The Automation Revolution'.

Copyright The Oxford Scientist 2018

editors-in-chief Jacqueline Gill, Pat Taylor
deputy editors Thomas Player (PRINT), Harri Ravenscroft (WEB & BLOG), Olivia Shovlin (NEWS)
creative director Joy Wang
business team Qi Ou (MANAGER), Silvia Shen
publicity Ina Hanninger
sub-editorial team Freddy Barnes, Jessica Cherry, Lewis Fry, Theodore Keeping, Malhar Khushu, Ellen Pasternack, Bramman Rajkumar, Hannah Ralph, Claire Ramsay, Brianna Steiert
artists Jacob Armstrong, Emma Brass, Priyadarshini Chatterjee, Laura Cooper, Tiffany Duneau, Gulnar Mimaroglu

ospl staff
chairman India Barrett
secretary Molly Flaherty
managing director Polly Halladay
finance director Bryce Ning
tech director Utsav Popat
strategic director Harry Gosling
events director Tess Hulton
non-executive directors Louis Walker, Rebecca Iles, Katie Birnie



join our senior editorial team for TT18

editors-in-chief
Manage the production of the magazine, liaise with the publishers, and have final say on all editorial and creative decisions.

web editor
Runs the blog section of our website, and is also responsible for site maintenance.

print editor
Commissions the articles for the magazine, and heads up the team of sub-editors who make sure every article meets our high print standards.

creative director
Designs all aspects of the magazine, and commissions the illustrations from our art team. Lays out the magazine in InDesign, so experience with this software is essential.

news editor
Leads the news team, who write stories for the news section of the print issue, and for the website throughout the term.

business team
The Business Manager leads the junior team members in securing adverts for the magazine and website.

email editor@oxsci.org asap to apply

www.oxsci.org /oxsci


editorial

First Expedition to reach the South Pole
Roald Amundsen, 14th December 1911

We are delighted to welcome you to the very first issue of our new magazine. Over the past decade, many dedicated editorial teams have carefully nurtured and developed our predecessor, Bang!, until it has now evolved into The Oxford Scientist. We are still committed to creating a 'graphically gorgeous' science magazine, written and created entirely by Oxford students, but are proud to present a different look, a more modern feel, an updated website, and plenty of new features. We will continue to provide a platform for people to 'write about, read about, explore, and be inspired by the fascinating and beautiful workings of the world around us', as was the intention of the original editors ten years ago.

Just as Amundsen and his team made the journey across the Antarctic to reach the South Pole for the very first time in 1911, we too are exploring new horizons and setting out to discover new things in our magazine. In this first issue, we felt it only fitting to explore some of the famous firsts that have occurred throughout scientific history: from ancient firsts such as the earliest humans, through the recent firsts of quantum teleportation and electric lorries, to future firsts such as the use of in vivo gene editing to cure genetic diseases.

Throughout its redesign and production, The Oxford Scientist has relied upon, and will continue to rely upon, the hard work, skill, and endless imagination of our committed team of students. We have been extremely privileged to work with some truly amazing individuals, without whom this magazine would never have made it off the ground. We would like to thank them all for their contributions and dedication, and hope that many more of you will get involved with future issues of The Oxford Scientist.

Pat Taylor and Jacqueline Gill
Editors-in-Chief



news

New 'universal' flu vaccine in the pipeline

The John Radcliffe hospital

contributed by Louie Iselin

Oxford University has worked in conjunction with Vaccitech to develop what may be a solution to the ever-present risk of an influenza pandemic. Current vaccines expose the humoral immune system, which involves antibodies, to external viral proteins like haemagglutinin, which frequently change by mutation. However, the new MVA-NP+M1 vaccine targets T-cell mediated immunity, which recognises both internal and external viral proteins presented by infected cells. It could boost T-cell responses to proteins that are too important to the virus to mutate rapidly, such as NP (nucleoprotein). If successful, the new vaccine could be paired with the current antibody-stimulating vaccine to protect against a pandemic.

World’s most powerful model simulates behaviour of the universe

Astrophysicists have developed a new simulation model for the universe, which has produced over 500 terabytes of data. The model was built on a new version of AREPO, an existing, highly parallel moving-mesh code, and was run on the Hazel Hen machine at the High Performance Computing Centre in Stuttgart—the fastest mainframe computer in Germany. Scientists have obtained data that can be compared against observational data to test the "hierarchical" hypothesis of galaxy formation. The data has also been used to accurately illustrate how galaxies are dynamically clustered together in space, and how they are affected by supermassive black holes and dark matter.


Superconducting switch leads the charge for next generation AI

Scientists at the John Radcliffe hospital have tested an artificial brain that represents a new wave of cutting-edge AI. These "neuromorphic" (literally nerve-like) computers are supposed to process information and behave in a manner similar to the human brain. This particular device is able to help doctors identify life-threatening heart conditions earlier than they could hope to without the computer's help, and is forecast to one day catch 4000 cases of lung cancer every year. Much of the credit for this upcoming technological milestone lies with a newly-launched superconducting switch, which can behave much like a human synapse. At only ten micrometres wide, the switch tailors its signal outputs so that they are appropriate to the inputs it receives, effectively learning from its environment and experiences. It's even measurably more powerful than the already impressive biological form, being able to send out over a billion pulses per second (human synapses' peak output is around 50 signals per second). This is a promising feature, because the strength of any synaptic connection depends on the frequency with which nodes can signal to one another. It is hoped that this technology will allow us to make driverless cars capable of making complex ethical decisions, amongst other exciting applications.

E-liquids damage immune cells

A study conducted at the University of Rochester Medical Centre has directly exposed white blood cells (known as monocytes) to flavoured E-liquids in order to test if they really are a safe alternative to cigarettes. The researchers found that cultures exposed to the liquids, which were nicotine-free, showed increased production of biomarkers for inflammation and tissue damage. Worse still, some flavours, such as cinnamon and vanilla, induced significant cell death. The flavourings have been described by the authors as being observably toxic to white blood cells when inhaled, despite being harmless if ingested. The group responsible for the work is calling for better regulation of these products.


Discovery of the molecular mechanism for dopamine secretion

A team at the Harvard Medical School has gained the first detailed insights into the molecular machinery involved in the release of dopamine in the brain. Up until now, most studies of this important neurotransmitter have related to its receptors and the mechanisms by which ineffective secretion causes disease—relatively few have studied how healthy cells deliver the chemical into the brain's tissues. The group from Harvard has learnt that, rather than depending on "volume transmission" (the slow secretion of neurotransmitter in bulk to large areas of the brain), dopamine is actually released at high rates with a great deal of spatial precision. This occurs at specialised sites called active zones. Super-resolution microscopy was used to identify the areas into which dopamine-releasing neurons project, which led to the discovery of the specific proteins that mark out the zones. Subsequent genetic studies deleted a single protein, known as RIM, from these dopamine-releasing hotspots, and this was enough to completely stop dopamine secretion in mice. Deletion of other proteins found at different types of active zones had no effect, suggesting that the molecular machinery responsible for dopamine delivery is distinct from other machinery within the brain. This is exciting for two reasons: it confirms the link already suggested between the RIM protein and neuropsychological conditions like schizophrenia, and a unique mechanism for dopamine release might allow the development of a drug that acts exclusively on that machinery to recreate a normal neurochemical balance in sufferers' brains. If successful, drugs developed on the back of this new knowledge could replace current treatments that work by flooding the brain with an excess of dopamine, causing potentially debilitating side effects. Instead, conditions like Parkinson's, schizophrenia, and addiction could be controlled, alleviating symptoms like low mood and poor motor control, learning, and memory.

Cheap "nanofoam" catalyst could make clean energy from water

With international interest in renewable energy increasing, the need to store energy generated by intermittently effective wind and solar sources is becoming more and more pressing. Attempts to use the generated energy to split water into industrially useful hydrogen have fallen flat, largely because of the expensive requirement of a platinum catalyst. However, scientists at Washington State University have found a method which produces large amounts of an inexpensive, sponge-like "nanofoam" from nickel and iron. The material has a large surface area and shows little loss of activity over standard 12-hour activity tests, making it a prime candidate for future clean energy generation.

edited by Olivia Shovlin

what's on

science as a revolution
examination schools
Public lecture by Nobel prize-winning geneticist Professor Sir Paul Nurse. This lecture celebrates the launch of the University of Oxford's new Centre for the History of Science, Medicine, and Technology. Registration essential.
6th March (Tue Week 8) 17:00–19:00

fabulous fluorine
museum of natural history
Professor Veronique Gouverneur discusses how fluorine's position in the periodic table gives it unique properties, and how fluorine chemistry has advanced medical imaging for diagnostic and pharmaceutical drug development.
2nd March (Fri Week 7) 17:00

super science saturday
museum of natural history
Join University researchers at the museum's "big science bonanza", this time focused on People & Planet, to learn all about Earth and the people that live on it.
10th March (Sat Week 8) 12:00–16:00

have phd students become "human pipettes"?
new biochemistry building
Science Innovation Union Oxford hosts Prof. Stephen Caddick of the Wellcome Trust, who will discuss the Trust's strategy regarding early career researchers with a panel of PhD students.
6th March (Tue Week 8) 18:00–19:30

horizon lectures: john blashford-snell
amey theatre, abingdon
Colonel John Blashford-Snell, the first explorer to descend the Blue Nile and navigate the full length of the Congo River, will be delivering a lecture on his exploits and life experiences.
8th March (Thurs Week 8) 19:30–21:00

artificial intelligence: a deeply human pursuit
mathematical institute
Professor Fei-Fei Li, Google Chief Scientist of AI, will give the 2018 Lorna Casselton Memorial Lecture. Booking required.
23rd April (Mon Week 1 Trinity) 17:00–18:00



scientists online

Even more content from the dedicated student science writers of Oxford can be found on our brand new website. Go to www.oxsci.org to find out more.

the quantum in cancer

A new understanding of the causes and treatment? 12th January 2018

from the Oxford Scientist blogs

An ambitious agreement: Tokyo, December 13th, 2016. Five organisations join forces. Equipped with accelerating lasers and deflecting, superconducting magnets, they will develop a quantum scalpel. Their ambition is zero cancer deaths, says Toshio Hirano, chief of the National Institutes for Quantum and Radiological Science and Technology (QST) in Japan. Using the same star-building tools as nuclear fusion, QST aims to bring quantum technology to the forefront of contributions to human society. Unlike a surgeon's scalpel, the quantum scalpel attacks tumours without cutting skin, using heavy ions fired through the body. In contrast to radiotherapy's gamma and X-rays, charged particles, such as protons, release most of their energy upon reaching the target tumour. This reduces non-targeted damage and harmful side-effects. Proton Beam Therapy centres are increasingly common, but far more effective are their heavyweight cousins, carbon ions. Three times as damaging as X-rays, carbon ions' double-strand DNA breaking ability leaves cancerous cells beyond repair. More massive ions, like oxygen, could even battle the most radiation-resistant tumours. Sounds great, but heavier particles are harder to move. Deflecting carbon ions with magnetic fields is much harder than guiding protons, and requires huge, expensive, 670-ton accelerators, some of which are over 100m in length. Hence, just eight centres worldwide offer carbon ion treatment. Ideally, a compact machine with several different ion types, each with varied properties, could attack all forms of cancer in a single treatment. With help from Toshiba, Mitsubishi, Hitachi and Sumitomo, QST aims to drastically reduce the technology's cost and size, so it can be practically distributed to hospitals worldwide. Flashback to a Starbucks in the late 90s, where a physicist and a biologist sit chatting. Radical ideas splash the coffee. The DNA molecule's rungs are hydrogen bonds; perhaps DNA mutates when a hydrogen atom tunnels to the wrong side of the molecule. This atom can be in a superposition of states, giving a mutated and non-mutated molecule.


A glamorous thought, but no tea-stained napkin scribbled with equations explaining mutations emerges. They drop their controversial conversations and return to nuclear science and diagnosing meningitis. After all, quantum effects are delicate, only observable in tightly controlled, supercooled conditions. How could molecules bustling around at room temperature ever exhibit them? Yet recent discoveries of quantum effects in photosynthesis and bird migration have led Jim Al-Khalili and Johnjoe McFadden to reawaken an interest in quantum biology. They returned to that Starbucks theory of quantum effects underlying mutations, and are developing experiments to test it. If correct, a new understanding of how cancerous mutations occur could be revealed, though the theory could take years to verify. Perhaps the quantum technology for cancer's destruction lies in the quantum biology of cancer's creation. Understanding how nature harnesses quantum effects at room temperature could improve the quantum scalpel; cancer's mechanisms may indirectly provide its cure.

Maria Violaris is a Physics student at Magdalen


the Oxford Scientist debate

was Antoine Lavoisier the ‘first’ chemist?

yes

By burning hydrogen and oxygen together, Antoine Lavoisier blasted chemistry into the future. An 18th century French scientist and nobleman, he found that water was produced in this explosive reaction. This proved water could be created, and thus could not be an element. The 2000-year-old theory of the elements—that all substances were made of earth, air, fire, or water—was blown apart. In this series of experiments, Lavoisier uncovered the process of combustion. Contemporary theory claimed a fire element called "phlogiston" was released when a substance burned. However, Lavoisier noticed that as phosphorus burned, it increased in weight. If the phosphorus was losing phlogiston, it should have been losing weight. He realised that phlogiston does not exist; instead, air combines with the phosphorus, generating light and heat as fire. Based on these observations, Lavoisier identified oxygen as a new gas. Lavoisier published the first chemical naming system in 1787. The old language of chemistry was confusing, full of astrology and alchemical mysticism. Lavoisier replaced irrational names like "vitriol of Venus" with names that echo their chemical composition, such as "copper sulphate". Mercury "calx", which is formed when mercury reacts with oxygen, was renamed mercuric oxide. This system could expand with new discoveries, and led to the nomenclature of modern chemistry. Through experimenting with reactions in airtight vessels, Lavoisier found that total weight remained the same, no matter which reaction happened. He theorised that mass would be conserved for any chemical reaction, calling this the 'Law of Conservation of Mass'. This separates chemistry from alchemy. In chemistry components are rearranged, but their quantity and character remains the same, whereas in alchemy it was believed gold could be magically conjured. This law helped Lavoisier to undertake some of the earliest truly quantitative chemical experiments, discoveries which would help turn chemistry into a rigorous science, marking him out as the first chemist. Tragically, Lavoisier's contributions were not enough to save his life. He could well have become widely recognised today as the first chemist, but he was executed in 1794 following the French Revolution. Rejecting an appeal to spare Lavoisier's life so that he could continue his scientific experiments, the judge said, 'The Republic needs neither scientists nor chemists; the course of justice cannot be delayed.'

Louis Minion is a Chemistry student at Balliol

no

While Lavoisier undoubtedly changed the face of chemistry, the history of chemistry stretches far beyond him, all the way back to 1200 BC and the Babylonian perfume-maker Tapputi. Tapputi is the first chemist we have any record of; her skills were inscribed on a clay tablet. The tablet describes how she perfected numerous experimental techniques that chemists still use to this day, such as distillation, cold enfleurage (capturing scents using animal fats), and filtration. Most notably, Tapputi pioneered the use of solvents, using distilled water and grain alcohol to carry fragrances, whereas her contemporaries smeared oils straight onto the skin. This revolutionary idea meant her perfumes diffused further and lasted longer than any others from the era. Some of her ground-breaking methods were not rediscovered for thousands of years. Some may question whether this was really chemistry or just "fancy cooking". Admittedly, it is unlikely Tapputi had a full understanding of her techniques, although her refined methods stand out beyond "cooking". Indeed, many major advancements in chemistry were serendipitous—Teflon, Play-Doh, and penicillin were all discovered accidentally, and we would not hesitate to call their discoverers chemists. However, even accepting that a chemist must have some understanding of the scientific method and develop theoretical explanations for natural phenomena, we don't need to fast-forward quite as far as Lavoisier. Some 82 years before Lavoisier's birth, Robert Boyle published The Sceptical Chymist, staking a strong claim for being the first modern chemist. Boyle argued for the existence of atoms (which he called corpuscles) and set the foundation for kinetic theory, envisioning reactions as results of collisions of moving particles. Although the book was largely philosophy, it set an important theoretical basis for Boyle's other achievements: discovering the inverse relationship between the volume and pressure of a gas (Boyle's law), introducing the litmus test, and pioneering the scientific method. Importantly, Boyle also pushed for the recognition of chemistry as a discipline separate from alchemy, which certainly helped its credibility in the long term. So, if inventing and using experimental techniques is enough to be a chemist, then Tapputi takes the crown. However, if some theoretical understanding of chemistry is necessary for someone to have truly been a chemist, then Boyle clocks in just ahead of Lavoisier as the world's first chemist.

Asher Winter is a Chemistry student at St Hugh’s



the oxford scientist interviews

Prof. Ursula Martin Ursula Martin CBE FREng FRSE is a Professor of Computer Science at the University of Oxford, whose research interests span mathematics, computer science, and the humanities. Here, Jacqueline Gill talks with her about her upcoming book Ada Lovelace: the Making of a Computer Scientist, published in April 2018.

You’ve had a very interesting and successful career, can you tell us what the highlight has been so far? It’s very difficult to pick out a highlight because there are just so many different things. But I think that underlying it all has been the wonder at mathematics, and all its form and variety. My Dad didn’t have the chance to have a university education, but he was very keen on maths. He got me switched on to maths when I was about 7 or 8, and I think that has informed a whole lot of my career, first as a research mathematician, then applying maths in computer science, and then getting very involved with policy work. You’ve recently written a book called Ada Lovelace: the Making of a Computer Scientist. What were your reasons for writing the book? The thing that got me on to Ada Lovelace is something that I think is quite important to talk about. I went through breast cancer a few years ago, and that meant I went through some pretty gruelling chemotherapy. There was 18 months where I couldn’t do anything much except read trashy novels. And when I came out of it I realised that I couldn’t do maths fast enough—it had just messed with the maths part of my head. But when I was ill I had been looking around on the web, and I got interested in all sorts of other things and I read lots of history. And then I came to Oxford and somebody approached my department from the Bodleian and said that they’d got this wonderful archive of this wonderful person Ada Lovelace, and would somebody help them to do something with it. Everybody else hid I think, but I jumped up and I thought ‘oh, that sounds fun’! When I started looking at the Bodleian archive I realised that there were all her maths lessons, and nobody had really written about these before, because people who write biographies of her tend really to not like maths very much. She didn’t go to a university, but she had a correspondence course with one of the
professors at UCL. So I and my colleague Chris Hollings put it all in the right order (Chris did that bit!) and we transcribed it, and we’ve been able to show that she really was a very talented mathematician. Ada Lovelace is known for being the first computer programmer. Can you tell us a bit about her and her work? What she’s known for is working with a chap called Charles Babbage. Babbage was an inventor, and one of his inventions was early mechanical computing machines. If you go to the Museum of the History of Science, just opposite Blackwell’s, you can actually see one of these—it’s like a giant piece of clockwork with lots of cogs and wheels, and it really was a computing machine. He designed a huge one which would have been 200 feet long, but he never actually built it because he was a bit impractical and very good at quarrelling with people. But the person who he really inspired with this was Ada Lovelace, and she wrote a paper about what it could do.

She’s talking about ‘can a computer think?’ and her answer to that question is, ‘well, it can only do what we tell it to do’ What’s amazing about the paper is that she picks up on so many things that are important for computing today. You open this Victorian scientific journal, and the pages are quite tiny, and then there’s a great big fold-out that covers half the table, which looks like a giant spreadsheet. Actually, it’s setting out how the programme would compute some quite complicated piece of mathematics called the Bernoulli numbers. And it’s got all the elements of a modern programme, like registers and loops. In the paper, she explains all of this in a very abstract sort of way. She’s talking about ‘can a computer think?’ and
her answer to that question is, ‘well, it can only do what we tell it to do’, and that’s a remark that was later picked up by Turing. She also talked about the machine doing things other than work with numbers, such as how it might compose music.
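In modern terms, the table in Lovelace's paper implements a loop over a recurrence. For the curious, here is a minimal sketch of the same calculation (our own illustration in present-day Python, not a transcription of Lovelace's programme, using one standard modern convention for the Bernoulli numbers):

    from fractions import Fraction
    from math import comb

    def bernoulli(n):
        """Return [B_0, ..., B_n] exactly, via the recurrence
        B_m = -(1/(m+1)) * sum_{j<m} C(m+1, j) * B_j, with B_0 = 1."""
        B = [Fraction(1)]
        for m in range(1, n + 1):
            s = sum(comb(m + 1, j) * B[j] for j in range(m))
            B.append(-s / (m + 1))   # exact rational arithmetic, no rounding
        return B

    for m, b in enumerate(bernoulli(8)):
        print(m, b)   # 0 1, 1 -1/2, 2 1/6, 3 0, 4 -1/30, ...

Where the Analytical Engine would have worked through such a loop with cogs and wheels, the logical structure (registers holding intermediate values, a loop repeated for each new number) is recognisably the same.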

Lovelace's ideas were far ahead of their time, but how much of her work do you think is still relevant today?

I think what's relevant is this idea that we can think widely about the capabilities of the machine. What you have is this idea of abstraction, that's the key point. This idea of abstraction underlies many kinds of thinking about the theory of computer science and about modern software. If you want to solve a hard problem, then you abstract away from the details, implement the principles of the thing, and then put in the details later. And that's an idea that she articulated very early on, but Babbage never did. What's really extraordinary about it is that you read the paper as a modern computer scientist, and of course the language is a bit flowery and a bit Victorian, but you think 'oh, she's talking about things I understand and she's thinking about them in the same way that I think about them'!

One thing that is rather striking is the amount of controversy about her, frankly because she's female

How did Lovelace manage to overcome the problems faced by women in the 19th century, in order to become a pioneer of computer science?

Well, partly because she was posh actually. She had the money and the connections, but also she did have this intense personal drive. People often say women weren't scientists in the Victorian age, but a number of women were interested in science, because it hadn't become so professionalised yet. But yes, Lovelace was from a particularly grand background. One thing that is rather striking that's come out in our research is the amount of controversy about her, frankly because she's female. You get people saying she was wonderful and you couldn't have Silicon Valley without Ada Lovelace, but then you also get a backlash of people saying she was wildly overhyped, that she didn't write the paper, but that Babbage had thought putting her name on the paper might be a good look because she was so well-connected. But that's clearly not true, because when you study her archive, she really was mathematically very competent. Then that becomes an interesting question in itself—why have there been people so keen to completely over-claim, and people who have been so sexist and unpleasant in some of the things they say about her? We're trying to redress the balance.

Women are typically under-represented in computer science today. Have you found it much of a struggle to break through the glass ceiling?

Well, I was just an oddity I think! I came in through maths, where there were more women. In the early days of computing in the 60s and 70s, a lot of women came in through administrative roles. Then as computing became more of a degree-level subject, that changed the dynamic. Shifting the image is hard. In Oxford, we really are trying to do a lot to put it right. But it is a hard struggle, because I think what's happened, particularly in schools, is that computing has become perceived as a boys' subject. There are a lot of attempts going on now to improve the teaching in schools, but sadly it's not a thing that's going to change immediately.

Without giving too much away, what new insights into Lovelace's life and works does your book reveal?

I think mainly that she was a talented, perceptive and hard-working mathematician. She had a great attention to detail and she also really liked to think about big concepts. There's no question that she did the work. Plus it's got lots of beautiful pictures and lovely illustrations from the archives—it just looks beautiful, which we're very pleased about.

The book is called Ada Lovelace: the Making of a Computer Scientist. What made you decide to become a computer scientist, and do you see any similarities between yourself and Ada?

I suppose what made me become a mathematician, or a computer scientist, was this enthusiasm for maths from a very young age. Which in my case I got partly from my father, and Ada Lovelace got from her mother. Once you have a personal drive and commitment to do something, then that overcomes many obstacles. If you have the confidence and the enthusiasm, you don't perceive them as obstacles at all. And I have modern medicine... Ada Lovelace died aged just 36. Also, I'm in a world where I could become a Professor at Oxford. Just so many things that are different and great advantages that I have had over previous generations, and over Lovelace's generation, which you have to be very thankful for I think.

About the Book Ada, Countess of Lovelace (1815–1852), daughter of romantic poet Lord Byron and his highly educated wife, Anne Isabella, is sometimes called the world’s first computer programmer and has become an icon for women in technology. But how did a young woman in the nineteenth century, without access to formal school or university education, acquire the knowledge and expertise to become a pioneer of computer science? Featuring images of the ‘first programme’ and Lovelace’s correspondence, alongside mathematical models, and contemporary illustrations, this book shows how Ada Lovelace, with astonishing prescience, explored key mathematical questions to understand the principles behind modern computing. For your chance to win a free copy, turn to our crossword competition on page 29.



I am therefore I think?
Testing the limits of human consciousness

Most people do not spend their life analysing their own actions: when you read this sentence, you are probably not thinking 'I am reading a sentence'. Despite experiencing the visual stimulus required to produce a sentence in the brain, most people would not see themselves as "actively reading" individuals, separate from the world around them. Nevertheless, we know that we are separate from the environment, and we can distinguish ourselves from others—we are aware of our own existence. This is a fact we take for granted, and it raises the question: when do we first become self-aware? It is impossible to imagine, much less remember, a world without self-awareness. But defining when our awareness arises has important and wide-ranging ethical and medical implications. Self-awareness is dependent on the development of two conditions: physical consciousness—the awareness of physical sensations—and introspective awareness. This second characteristic is the "meta" awareness of physical consciousness: possessing an understanding of being distinct from the environment and being aware of being conscious. An example would be knowing that you are reading this sentence while reading it.

Knowing that pain is caused by nerves firing in your body is not the same as knowing how pain feels

The question of physical consciousness and its causes goes beyond neuroscience and into philosophy, ending in the perhaps dissatisfying conclusion that a gap exists between a physical event and the accompanying sensation—knowing that pain is caused by nerves firing in your body is not the same as knowing how pain feels.


We can, however, try to pinpoint a moment in time when the human form begins to experience physical sensations. One method of examining this consciousness is through EEG (electroencephalogram) studies that measure electrical brain activity. The biological conditions that constitute physical consciousness have not been established, but it has been proposed that similar EEG patterns indicate a similar level of consciousness. Experiments in which infants aged between 5 and 24 months were shown different facial images demonstrated that they have similar EEG responses to those of adults, albeit with a slower reaction time, which goes some way to showing that young children experience a level of physical consciousness akin to that of adults. Evidence that physical consciousness may arise even earlier than infancy comes from animal studies. EEG signatures of foetal mice show that the brain activity during the last trimester of pregnancy is identical to that of REM (rapid-eye-movement) and slow-wave sleep in mammals. If this type of sleep, associated with dreaming, is taken as a surrogate for consciousness, then perhaps physical consciousness exists by the 24th week after conception. Of course, the question still remains: is sleep, and in particular dreaming, analogous to physical consciousness? The answer to this perhaps depends on whether foetuses dream and if so, what they dream about. The second, introspective, part of self-awareness has been studied in a number of seminal investigations. These include the famous mirror test for visual self-recognition. Infants aged between 6 and 24 months were placed in front of a mirror with a smudge of rouge on their noses. The youngest infants reacted towards their mirror image as they would to a stranger, whilst the oldest realised the image was them and touched the smudge on their own nose. The children in the medium age-range exhibited ambivalent withdrawal.


Evolutionary theories support the idea that definite self-awareness arises from ages two to four: children rapidly develop their social behaviour in these years, and self-recognition is crucial in allowing more complex societal structures. Other theories place the beginning of introspective self-awareness much later in a child’s life. The linguistic theory argues that children become introspectively self-aware once the “self-talk” aspect of their language disappears and becomes inner speech. According to neurobiologist Yochai Ataria, the constant internal monologue in our heads forms the self. This would mean that introspective self-awareness arises in each individual at different rates, and long after the individual has gained consciousness.

It is impossible to imagine, much less remember, a world without self-awareness We can see that numerous theories exist concerning the origin of self-awareness, and it remains a question of crucial importance. Many ethical and medical issues depend on knowing definitively when we gain self-awareness.

Abortion limits range from 12 to 24 weeks of pregnancy, with some arguing that we become conscious as soon as the brain is formed, and others arguing that physical consciousness only arises during the aforementioned sleep-like foetal state. There are also medical implications: slow development of introspective self-awareness, for example, has been associated with social disorders such as autism. Another key area is infant anaesthetics: traditionally, young infants (less than a week old) were given low-to-zero doses of anaesthetics, as it was believed that their physical consciousness was not yet fully developed. Much earlier formation of the self-conscious experience would shape new anaesthetic guidelines for infant clinics. As novel methods of measuring brain activity in infants and foetuses are established, accurate ways of determining consciousness and self-awareness allow us to come closer to finding the exact point when we develop from a mass of cells to a conscious, self-aware human being. A world in which we are completely unaware of ourselves is interesting to imagine—even more intriguing is the fact that we once existed in such a world.

Silvia Shen is a Maths and Philosophy student at Pembroke

WE ARE RECRUITING

Staff required in Oxfordshire / Berkshire

Passionate about science? Would you like to inspire children with science? Are you looking for extra income during term time, weekends or school holidays? We have hours to suit all. Do you want free training on Primary Science resources?

WE WANT YOU TO JOIN OUR CREW

If you have (or know someone who has) a passion for science, love working with children and have a surplus supply of energy, then you could be just the individual to expand our team!

Part-time/Term-time After School Club Instructors
This is a term-time position with part-time hours to suit, working in local Primary Schools delivering weekly, 1-hour science club sessions.

School Holiday Activity Camp Assistants
Opportunities during school holidays to help out at science camps for Primary School aged children. 6.5 hours per day, 5 days a week.

Weekend Party Entertainers
A weekend position with flexible hours. Parties can be hosted in customers' homes or village halls. Party start times are 11am or 3pm and we provide 1-2 hours of entertainment. Candidates need to be confident, energetic, outgoing, reliable and organised.

Exciting opportunities to inspire children

www.brightsparksscience.co.uk

All positions require travel within the Oxfordshire, Berkshire or surrounding areas, so a driving licence and car is essential. To apply: please send a copy of your CV with a covering email detailing what would make you an ideal science instructor to enquiries@brightsparksscience.co.uk. Successful candidates will be fully trained on all science kits required to be delivered and DBS checked before going solo.


quantum computers: solving the unsolvable
Will quantum computing soon move from science fiction to science fact?

Quantum computing has been in the news for most of the last decade. Much like nuclear fusion or true artificial intelligence, it has been hailed as one of the scientific "holy grails" of the 21st century. Until recently, it has been a goal confined to the distant future. Yet recent contributions from academia and industry have shown that progress is being made at an exponential pace. Soon, this may allow us to step into the realm of accessible and powerful quantum computing. Quantum computing is ideal for solving problems that involve large volumes of data or possible options. These problems are conventionally unsolvable using classical computers, which use a binary system of bits—information represented as a combination of ones and zeroes. Quantum computers use qubits (quantum bits), which employ a mechanism known as superposition. The closest classical interpretation of this quantum phenomenon is that the qubit exists in multiple states of one and zero simultaneously. More accurately, superposition describes systems that, under absolutely identical conditions, are each measured to have different values of a certain quantity (such as spin or momentum), in a way that cannot be predicted with any certainty. This can reduce computation times immensely, in some cases from 10^1000 years (past the heat death of the universe) to just a matter of days.

This can reduce computation time immensely, from 10^1000 years to just days
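To make the idea of superposition concrete, here is a toy classical simulation; this is purely our illustration in Python, not how quantum hardware or any real quantum programming library works. A single qubit is described by two amplitudes, and measurement gives 0 or 1 with probabilities set by their squared magnitudes:

    import random
    from math import sqrt

    # A qubit state is a pair of amplitudes (alpha, beta) with
    # |alpha|^2 + |beta|^2 = 1. Measurement collapses the state:
    # outcome 0 with probability |alpha|^2, outcome 1 otherwise.
    def measure(alpha: complex, beta: complex) -> int:
        return 0 if random.random() < abs(alpha) ** 2 else 1

    # An equal superposition: each outcome is equally likely, but
    # any single measurement is irreducibly random.
    alpha = beta = 1 / sqrt(2)
    counts = [0, 0]
    for _ in range(10_000):
        counts[measure(alpha, beta)] += 1
    print(counts)   # roughly [5000, 5000]

The computational power comes from the fact that n qubits require 2^n such amplitudes to describe them, all of which a quantum device manipulates at once.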

There are numerous applications for quantum computers. Problems involving quantum systems are clear candidates, such as determining atomic energy levels. A less obvious but important use is optimisation problems of complex systems. These appear in fields such as machine learning, protein folding, and drug mechanisms, as well as supply chain management and transport system planning. In addition, quantum computers are capable of breaking traditional encryption methods used throughout modern security, which has given rise to the counter-fields of quantum information, communication, and encryption. In June 2017, Chinese scientists used their satellite, Micius, to achieve photon entanglement—a quantum connection between two physically separated particles—over a world-record distance of 1200 km. (You can read more about this achievement in our article on page 21.)

Currently, the divide between classical and quantum computers is a computational limit called quantum supremacy, beyond which the efficiency of quantum computing surpasses the classical regime. The ideal quantum computer offers the potential to use millions of qubits, far beyond the capacity of our most powerful devices. However, at present our engineering capabilities limit quantum computers to only a few dozen qubits. Supercomputers can simulate the processing capacity of quantum computers, up to a limit of around 49 qubits, albeit with exponentially increasing difficulty and resource requirements. In October 2017, this simulation of a real quantum computer was extended by IBM to 56 qubits, using new computational methods, and the limit looks set to be pushed further in upcoming years. Engineers and researchers working on real quantum computing will have to reach an increasingly higher bar in order to outperform classical computation.

The primary difficulty of realising quantum computers lies in the types of qubits used to construct them. The requirements for a functional qubit are exacting. Firstly, the qubit must be a physical system that can exhibit superposition. Secondly, functional qubits must be able to transfer information reliably, with as little data corruption as possible. Thirdly, qubits must exhibit superposition under a range of conditions (such as low temperature and small size) that current levels of engineering can achieve. Several different models of qubit exist, and at present none of them meet all these criteria well enough to form the basis of a useful quantum computer. Google and IBM, key players in the quantum computing race, currently use two such systems, called superconducting circuits and ion traps. Superconducting circuits take advantage of the quantum properties of superconductivity at low temperature, while ion traps confine large ions in a cavity containing electromagnetic fields. Unfortunately, both models suffer from relatively large sizes, and this introduces difficulties when attempting to operate a useful number of qubits (several million).

New models of qubit may be the answer. In September 2017, Australian researchers proposed a novel, smaller "spin qubit" scheme using phosphorus nuclei. These can be controlled with electric fields, allowing for integration with electronic circuits. Such schemes could make quantum computers easier to fabricate, and circumvent some of the technical difficulties in achieving quantum supremacy. In addition to physical engineering, the rate of improvement in theoretical models of quantum computing is also rapidly accelerating. In August 2017, IBM researchers reported using sophisticated methods to characterise, for the first time, the ground-state energies of large molecules such as beryllium hydride. This demonstrates progress towards the potential for quantum computers to solve larger problems that classical computers simply cannot. In February of the same year, researchers from the University of Sussex described the first blueprint to build a fully modular large-scale quantum computer. Their scheme would enable the transmission of quantum bits, in the form of charged ions, between individual computer modules. Modularisation allows a quantum computer to be broken down into smaller, specialised parts. These components can then be fabricated separately and with less difficulty. These developments pave the way for collaboration between different groups of researchers to build more sophisticated quantum computers.
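The exponentially increasing difficulty of classical simulation mentioned earlier is easy to quantify: a direct simulator must store one amplitude for each of the 2^n basis states of an n-qubit register. A back-of-envelope sketch, assuming double-precision complex numbers at 16 bytes each:

    # Memory needed to hold a full n-qubit state vector.
    def state_vector_bytes(n_qubits: int) -> int:
        return 2 ** n_qubits * 16   # 2^n amplitudes, 16 bytes each

    for n in (30, 49, 56):
        print(n, state_vector_bytes(n) / 1e12, "TB")
    # 30 qubits:  ~0.017 TB
    # 49 qubits:  ~9,000 TB
    # 56 qubits:  ~1,150,000 TB

This is why simulations at around 50 qubits and beyond require cleverer methods than brute-force storage of the state vector.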

IBM has shot ahead of its competition this past year, particularly in its efforts to engage stakeholders. Since 2016, it has made a 5-qubit, and subsequently a 16-qubit, quantum computer available for public use via cloud network access. In May 2017, it selected key players to receive early access to "IBM Q", a 20-qubit quantum computer designed for scalability that will eventually be upgraded to 50 qubits. These entities included banks, automobile makers, technology giants, chemical manufacturing companies, and universities, with the University of Oxford among the recipients. This marks an industry first in providing accessible quantum computing for commercial and scientific interests. IBM has also provided software tools that allow users greater versatility in running simulations and testing problems using these computers. Quantum supremacy has not yet been achieved and, at present, physically realised quantum computers suffer an unacceptably high rate of error propagation. Offering such early access to quantum computing, however, paves the way for industries and researchers to become accustomed to the technology. It lets potential customers adapt their systems to it and, most importantly, plan future applications for it. This means that when quantum computers do become viable, there will already be a waiting market. Useful quantum computing may be just on our doorstep. It is already possible to create simple applications that use quantum computing to solve known problems, and in just a few years we may finally have access to the long-awaited quantum computer—laying bare the answers to an entire swathe of previously untouchable scientific problems.

Ian Foo is a Physics student interested in optical metasurfaces

do androids drive electric lorries?

The days when electric vehicles were restricted to the humble milk float are over. Multinational car companies Tesla, Volkswagen, and Daimler AG have announced the development of electric lorries, suitable for mass production, boasting performance figures that put their diesel equivalents to shame.

These new "smart" lorries may seem like giant steps forward, but robotics in lorries is nothing new. The digital tachograph, a device that records the speed, distance and driving duration of a lorry, has been mandatory in EU countries since 2006. ELDs (Electronic Logging Devices) are also being implemented in the USA. These are attached to the engine, monitoring the speed and location of the lorry, and whether the driver is on duty. This information is then used to ensure that drivers adhere to the strict schedule implemented to limit driver fatigue. For those paid by the mile this can result in pressure to drive as much as they can, irrespective of the conditions or how they feel. Indeed, whilst ELDs showed an 11.7% reduction in crashes in a 2014 study, constant surveillance makes the driver more stressed and the lorry an impersonal space.

Tesla's model claims a 20% reduction in running costs compared to diesel equivalents, a 500-mile range, and advanced self-driving capabilities. This will ultimately result in fewer fuel stops, meaning that the number of deliveries can be increased. In addition, Tesla estimates a lorry would pay back its £140,000 price tag in two years via savings on running costs alone. The self-driving capabilities will also make the lorry safer by using collision prevention software, and substantially reduce the risk of collision caused by driver fatigue.

Smart electric lorries are here to stay; they mean faster deliveries and cheaper running costs

The concern for drivers is that self-driving lorries will spell the end of their profession, which employed 285,000 people in 2014. By the end of 2018 the Transport Research Laboratory are set to test vehicle "platoons" on UK roads, where one human driver leads a convoy of self-driving lorries. Indeed, Google is currently testing fully self-driving cars on public roads, making it foreseeable that drivers will not be required at all.

Smart electric lorries look set to make trucking more profitable, safer, and greener. However, we must ask ourselves: what will the costs of technological advancement be, and do we really want to put the safety of UK road users in the hands of machines?

Seb Elmes is an Engineering student at Wadham



neanderthal: brute or brainbox?

Our species, Homo sapiens, weren't always the only humans. Around the world, fossil evidence provides tantalising hints of half a dozen or so human species that may have coexisted with our own. The best known of these is Homo neanderthalensis: the Neanderthals.

Named for the Neander valley in Germany where their remains were first discovered, Neanderthals likely split from the same ancestral species as us, Homo erectus, in East Africa, before migrating northward. By the time sapiens joined them in Ice Age Europe around 50,000 years ago, Neanderthals had adapted to life in the cold. They had stocky, muscular bodies, wore animal skins to stay warm, cooked using fire, and fashioned stone tools to hunt the large mammals that roamed across the tundra. The name 'Neanderthal' is practically synonymous with knuckle-dragging crudeness—Neanderthals even narrowly escaped being labelled Homo stupidus by Victorian biologists—but their brutish reputation may be more than a little unjust. Neanderthal brains were actually on average larger than ours, weighing in at around 1.2–1.7kg compared to our 1.3–1.4kg. Brain size doesn't neatly correlate with intelligence, but the Neanderthals' large brains do suggest they were far from stupidus.

Neanderthals took care of disabled group members

There is some evidence, though controversial, that Neanderthals may have made art, taken part in rituals, and decorated themselves with ornaments and dyes. More certain is that Neanderthals took care of old and disabled group members. Skeletal remains show that individuals with injuries, birth defects or age-related ailments survived for many years despite being unable to fend for themselves. It also seems that Neanderthals were physiologically capable of speech: the hyoid bones in their throats appear to be adapted for the fine-tuned support of tongue and mouth muscles, as are ours. The interactions between sapiens and neanderthalensis are a great historical mystery. Were their relations hostile, or friendly? Were they able to communicate? Did they trade with each other? The archaeological record has, unfortunately, very little to say about these questions, so for the most part we can only speculate. In addition, what little evidence there is can't necessarily be extrapolated to an entire continent over thousands of years.

There is one thing we know for sure, however, which is that there was some interbreeding between the two species. Since 2010, when the first complete Neanderthal genome was sequenced, we have been able to study its similarities and differences to our own. One to four percent of modern humans' genetic material is shared with Neanderthals—except in people from Sub-Saharan Africa, where Neanderthals never lived. We know very little about how this interbreeding came about, except that it was a fairly rare event. The highest proportion of Neanderthal DNA is found in people of European and Middle Eastern ancestry; if that is you, it is certain you have some Neanderthal ancestors. Around 30,000 years ago, Neanderthals disappear from the archaeological record. What happened to them? A changing climate and increasing scarcity of prey likely played a role; our species, whose relatively advanced technology made us more adaptable, would have been less severely affected. Though there is no direct evidence for interspecies violence, it also seems plausible that clashes with sapiens were partly responsible. Humans are not well known for tolerance towards those different to themselves, or for being principled over the division of scarce resources. Historian Yuval Harari even speculates that the extinction of the Neanderthals may represent 'the first and most significant ethnic cleansing campaign in history'. The Neanderthals are surrounded by questions, many of which will likely never be answered. But the understanding that once, not so long ago, we lived alongside another species, like us but not-quite-like us, gives a more modest perspective on our place in the natural world.

Ellen Pasternack is studying for a DPhil in Evolutionary Biology at Keble

first humans: an infographic

[Family tree depicting Homo erectus (including Peking man and Homo georgicus), Homo heidelbergensis, Homo sapiens, Neanderthals, and Denisovans]

Homo erectus (first hominids to work tools): Zhoukoudian, China, 500,000 years ago
Homo heidelbergensis (first hominids to bury their dead): Kabwe cave, Zambia, 80,000 years ago
Homo sapiens (first hominids to use agriculture): Jebel Irhoud, Morocco, 300,000 years ago
Neanderthals (first hominids to make cave art): Šipka cave, Czech Republic, 40,000 years ago
Denisovans (first hominids to colonise Australia): Denisova cave, Russia, 50,000 years ago

Infographic created by Jessamyn Chiu, a Biology student at Somerville


the immortal woman

The story of Henrietta Lacks and the first immortal cell line

Henrietta Lacks in the 1940s

Arguably one of the greatest advances in biology in the 20th century was the development of the first immortal human cell line. This type of cell line is vital to studying human disease as, unlike normal cells, its cells can grow indefinitely under laboratory conditions and so can be used to study diseases such as cancer. Their development began with HeLa cells, which have since been used in countless experiments, leading to approximately 60,000 published papers. Despite this, many people aren't aware of Henrietta Lacks, the woman from whom the first immortal cells were taken.

Henrietta Lacks was born in 1920 in Virginia, USA. An impoverished African-American woman, she grew up in a log cabin, before joining her family as a tobacco farmer. She was referred to Johns Hopkins Hospital in January 1951 after suffering pain for over a year with what she described in her own words as a knot in her womb. It was there she received the diagnosis: cervical cancer. During this diagnosis, cancer cells were taken from a biopsy of her tumour and donated to the hospital's Department of Surgery for research purposes. Less than nine months later, Henrietta passed away, leaving behind her husband and five children.

Henrietta’s cells remain to this day the most commonly used human cell line

George Otto Gey, a cell biologist in the department, was the first person to propagate her cells, which remain to this day the most commonly used human cell line. His assistant labelled this new line using the standard protocol, taking the first two letters of each of Henrietta Lacks’ names—the “HeLa” cells were born. Gey was passionate about cancer research, so freely donated samples to other labs, and quickly HeLa cells were being used in research around the world. Henrietta’s cervical cancer was likely caused by Human Papillomavirus 18 (HPV-18), which inserts its own DNA into the human genome. This led to mutations in her genes giving two specific characteristics that make HeLa cells so useful in the lab: the cells can divide many times, and they are immortal.


They are prolific dividers because of insertion events between HPV-18 and Henrietta's DNA, which lead to the activation of genes that can cause uncontrolled cell growth and division if switched on permanently. Their immortality is due to the reactivation of a particular enzyme (telomerase), which means DNA is not lost when a cell divides, and so HeLa cells are not restricted by the usual maximum number of cell divisions. Thanks to these characteristics, HeLa cells have been used in many research studies, including to further our understanding of how HPV infects cells and causes cervical cancer. This has enabled the development of an HPV vaccine, which was introduced for girls in the UK from 2008. The cells were also used in the 1953 development of the polio vaccine by Jonas Salk to examine the propagation of the virus, which avoided the need for animals or patients as test subjects.
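As a toy picture of why telomerase matters (our illustrative sketch in Python, not a quantitative model of real cell biology): each division costs a little telomere, and a cell with active telomerase replaces the loss and never runs out.

    # Toy model: telomeres shorten with each division; a cell
    # senesces when they are exhausted. Telomerase, reactivated
    # in HeLa cells, rebuilds them and removes the limit.
    def divisions(telomere_units: int = 50, telomerase: bool = False,
                  cap: int = 10_000) -> int:
        count = 0
        while telomere_units > 0 and count < cap:
            telomere_units -= 1        # DNA lost in each division
            if telomerase:
                telomere_units += 1    # telomerase restores the loss
            count += 1
        return count

    print(divisions())                  # 50: a normal lineage stops dividing
    print(divisions(telomerase=True))   # 10000: hits our cap, i.e. no limit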

HeLa cells are not restricted by the usual maximum number of cell divisions
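As a rough illustration of that limit (the Hayflick limit), here is a toy calculation with assumed round numbers (real telomere lengths and loss rates vary), showing why a normal cell runs out of divisions while a telomerase-active cell like HeLa does not.

```python
# Toy model: telomeres shorten with each division until a critical
# length is reached; telomerase-positive cells rebuild them instead.
telomere_bp = 10_000        # assumed starting telomere length (base pairs)
loss_per_division = 100     # assumed loss per division (base pairs)
critical_bp = 5_000         # assumed length below which division stops

divisions = 0
while telomere_bp > critical_bp:
    telomere_bp -= loss_per_division
    divisions += 1

print(divisions)            # 50, on the order of the usual division limit
```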

In 1955, HeLa cells were the first cells to be cloned. This led to the discovery of techniques for producing colonies from single cells, which have been used in numerous genetic experiments since. Finally, HeLa cells have been invaluable in the fight against cancer, particularly in discovering how telomerase, which was first isolated from HeLa cells in 1989, can allow cells to divide indefinitely.

Major controversy surrounds Henrietta's story for two main reasons. Firstly, her identity has often been forgotten throughout history, with many researchers calling her "Helen Lane" or "Harriet Lane" and thus obscuring her story. Furthermore, neither Henrietta nor her family were aware that her cells had been taken for research until the family were told in the 1970s, and since then they have often been left unaware of developments such as the publication of the HeLa genome in 2013.

It could be said that Henrietta Lacks was one of the most important women in science in the 20th century. Her contribution to scientific research is immeasurable, yet her story serves as a reminder of the importance of transparency and an ethical approach in science—to remember the person behind the cells in the test tube.

For further reading, try 'The Immortal Life of Henrietta Lacks' by Rebecca Skloot (2010).
_________________________________________________
Jessica Ellins is a Biochemistry student at Pembroke


firestarter

Historic wildfires in California may be a sign of times to come

As 2017 drew to a close, California was burning. Six huge fires ripped across the state, razing forests, forcing 100,000 people to evacuate, and destroying 1,000 homes. The largest of these was the "Thomas Fire", now the biggest in modern Californian history, searing an area the size of Hong Kong. Unusually, these fires plagued the state in December, the start of the rainy season and historically a fire-free month. From 2000 to 2015, there were only eight large December fires; December 2017 alone had six. Stranger still, the start of 2017 saw the end of California's long drought, with historic levels of rainfall. Why were last December's fires so ferocious?

The abundance of rainfall earlier in the year may have been a curse in disguise

Surprisingly, the abundance of rainfall earlier in the year may have been a curse in disguise. With the land finally free of drought, there was an explosion in the growth of new vegetation. Unfortunately, the summer that followed was very warm and dry, drying out the fresh new shrubs and trees to create the perfect kindling. The hot, arid Santa Ana winds arrived late in the year, exacerbating drought conditions.

As towns and cities in the area grow, residents are building homes and businesses much closer to the edge of forests, both raising their risk of being caught in a fire and increasing the chances of starting a blaze. For example, the Eagle Creek Fire in Oregon, which burned for three months and consumed over 40,000 acres of land, was started in September 2017 by teenagers playing with fireworks.

Wildfires are not always a negative event, and play an important role in many forest ecosystems. Fires clear the forest floor of debris, allowing nutrients to return to the soil and promoting new growth. The delicate balance is disturbed if the fires burn for too long, though, leaving the ground too dry for new life. Ecologist Camille Stevens-Rumann of Colorado State University studied forests in the Rocky Mountains between 1988 and 2011, revealing that the number of sites with no regrowth after a wildfire almost doubled after the year 2000. The hotter, drier conditions of the 21st century are creating wildfires that burn hotter and for longer, becoming a hazard rather than a help for forests.

Human-caused climate change is driving up temperatures in the western USA, feeding a vicious cycle. As forests burn, they release vast quantities of carbon dioxide into the atmosphere, contributing to the greenhouse effect and in turn causing further warming and fires. In October 2017, Californian wildfires released more pollution than all the cars in the state normally would in a whole year. As the trend of warming in the area continues, it seems that a longer, more intense wildfire season will become the new normal. Yufang Jin, Assistant Professor of Remote Sensing and Ecosystem Change at UC Davis, predicts that by 2050 fires caused by Santa Ana winds will burn an area 70% greater than today's.

Perhaps a way to tackle this emerging threat is to encourage a new type of forest, more resistant to wildfires. This can be achieved by planting trees such as maple, poplar, and cherry, which are less flammable than pine, fir, and other conifers. Stevens-Rumann suggests that '[forest] managers may want to plant species that are adapted to the current and future climate, not the climate of the past'. Another strategy is simply to limit the intensity of the fires, so that enough seeds survive to allow forest regeneration. However, this may be impossible, as wildfires are becoming larger and harder to control than ever before.

In October 2017, Californian wildfires released more pollution than all the cars in the state in a year

As the new year began, firefighters continued to battle the blazes threatening their state. The cost of wildfires to the USA in 2017 has reached $10 billion, and the potential health impacts of the smoke and ash are likely to raise this further. With the western USA bracing for longer droughts and hotter weather, 2017 has given us a glimpse of what the new status quo could be. This applies not only in California but in many other areas around the world prone to wildfires, such as Portugal and Indonesia, and highlights the urgency with which we must tackle climate change as it threatens our natural world.
_________________________________________________
Daniel de Wijze is an Earth Sciences student at St. Hugh's



the end of the giants

Unsustainable living may not be a recent human trait

Our planet's biosphere is in trouble. With planetary warming, ocean acidification, deforestation, and countless other environmental tragedies, it has become abundantly clear that human beings are having a detrimental effect on life on Earth. But our species' impacts on life aren't limited to recent history. Looking at the archaeological and geological record, the hand of humankind can be traced all the way back to the last ice age.

The Pleistocene is the name geologists give to the period of time which includes the last ice age—from roughly 2.5 million to 12,000 years ago. At this time, humans existed exclusively in hunter-gatherer societies and were rapidly expanding from their African homeland to colonise the other continents. These early hunters shared their chilly planet with giants: megafauna, animals with body mass exceeding 45 kg. Over the course of a few thousand years—the blink of an eye in geological terms—more than half of all species of megafauna became extinct. Their demise has previously been attributed to natural climate change, but the emerging consensus is that human beings played a significant role in these extinctions.

Over the course of a few thousand years, more than half of all species of megafauna became extinct

The past two and a half million years have seen repeated fluctuations between glacial and interglacial times. Glacial periods are characterised by ice sheets which extend beyond the poles, into regions such as northern Europe and North America, as well as much more extensive glaciation in mountainous regions like the Alps and the Himalayas. Interglacials, meanwhile, are much warmer and typically shorter periods of time where much of that ice retreats. We currently live in one of these warm spells—the Holocene, which began at the end of the Pleistocene, around 12,000 years ago. Alternations between the glacial and interglacial states can be relatively rapid, and such transitions are thought to have caused the extinctions of animals in the past. The unusual thing about the late Pleistocene and early Holocene is that the number of extinctions is much higher than at other deglaciation events, and that they disproportionately affected the megafauna whilst having relatively little impact on smaller animals.


Paul Martin proposed the idea of human 'overkill' in the 1960s. His hypothesis was that animals that had never before encountered humans could be rapidly hunted to extinction. We have examples of similar scenarios in the historical past, such as New Zealand's moa birds, which became extinct a few hundred years after the arrival of the Māori around 700 years ago. Large animals might be particularly vulnerable due to their generally slow reproductive rates. For example, the modern African bush elephant has a gestation period of 22 months and gives birth once every five years on average. Small animals, with rapid reproduction rates, might have had time to evolve a fear of humans and develop adaptations to avoid being hunted by them—this may not have been possible for megafauna. Unsustainable rates of hunting might have fed human population booms in the short term, but may also have led to widespread extinction amongst the megafauna. Computer models that simulate the ranges and populations of megafauna have confirmed that overkill is a plausible explanation for the Pleistocene extinctions.

Although physical evidence in favour of Martin's hypothesis is hard to come by, some of the best can be found in Australia and North America. In Australia the earliest humans arrived at least 50,000 years ago. We know that humans must have lived alongside the megafauna for some time, as numerous cave paintings depict animals that no longer inhabit Australia. Archaeological records show that Pleistocene humans cooked and ate the eggs of Genyornis newtoni—a two-metre-tall flightless bird, six times heavier than an emu. Other Australian megafauna included the Diprotodon, a multi-tonne hippopotamus-like herbivore; Sthenurus, a genus of kangaroos that may have been carnivorous; and Megalania, a lizard up to ten times heavier than the Komodo dragon.

Through the rock record of Australia we can see that human arrival in different regions coincided with rapid decreases in populations of megafauna. This isn't recognised through animal fossil remains, but instead through a type of fungus called Sporormiella that only grows in the faecal matter of large herbivores. Thus, the number of spores from Sporormiella in the geological record can be taken as an indicator of the abundance of large herbivores in that area. Crucially, the record shows that large decreases in the amount of Sporormiella don't correlate with major changes in climatic conditions, but with the first appearances of humans in a given area.

The first humans in North America arrived long after Australia had been settled. They crossed the now-submerged Bering land bridge to Alaska and travelled via an ice-free corridor to the interior of the American continent around 12,000 years ago. What they found there included giant ground sloths the size of double-decker buses, massive straight-tusked elephant-like gomphotheres, and woolly mammoths (with which they would have been familiar from their Eurasian homeland). We know that early Americans hunted mammoths, mastodons, and gomphotheres from markings found on the bones of these animals which are indicative of human tools. Some bone markings suggest point-spear impacts, whilst others show scraping patterns, suggesting that the dead animal's flesh would have been removed from the bone. These artefacts offer indisputable proof that humans were living alongside and hunting the ice-age megafauna.

Dwarf elephants survived until perhaps as recently as 6,000 years ago

Meanwhile, in northern Eurasia, about 40% of all megafaunal species have gone extinct in the past 120,000 years. This is lower than in either Australia or the Americas, but still much higher than the long-term average extinction rate for megafauna. Many large species that became extinct in Europe during the Pleistocene survived until much later on Mediterranean islands—on the Greek island of Tilos, dwarf elephants survived until perhaps as recently as 6,000 years ago, and dwarf hippos survived on Cyprus until around 10,000 years ago. Both of these dates closely correspond to the first arrival of humans on each island.

For many species of megafauna, the pace of their extinction was so rapid that little to no evidence of coexistence has been preserved—not so in the case of the famous woolly mammoth. The coexistence of mammoths and humans is attested not just by the evidence of bone-tool interaction, but also by numerous cave paintings. Over 100 depictions of woolly mammoths (alongside bison, horses, woolly rhinos, and other animals) are preserved in Rouffignac Cave in southwestern France, dating to 13,000 years ago.

Mammoths became extinct in mainland Eurasia around 10,000 years ago. Environmental change caused a reduction of open steppe environment in favour of forests, reducing the natural range of the mammoths. Undoubtedly, climate played a significant role in their decline, but their reduced range would also have made them extra vulnerable to predation by human hunters. Their last refuge was Wrangel Island, a remote Arctic island 140 km north of mainland Siberia, where a small population survived until around 3,700 years ago. Their ultimate demise was not a consequence of human hunting, but instead the result of a genetic bottleneck that meant inbreeding was widespread, leading to problematic mutations. Whilst the last mammoths were not directly hunted by humans, the tragic circumstances of their final extinction are an important indication that species whose habitat ranges have been significantly reduced are unlikely to survive in the long term.

The biological diversity of megafauna we see in the world today is only a shadow of what it was just a few thousand years ago. Learning more about the fate of the Pleistocene megafauna teaches us poignant lessons regarding the unique position of human beings in the natural world, and of the unintentional impacts we can have when we fail to make use of our foresight.
_________________________________________________
Matt Sutton is a Master's student researching Palaeoecology and Environmental Change



beam me up, Scotty

What the first quantum teleportation to space could mean for future technology


Since Captain Kirk was first "beamed up" by Scotty over 50 years ago, teleportation between Earth and space had been restricted to the world of science fiction. Restricted, that is, until July 2017, when reports appeared of the first quantum teleportation between the Earth and space by a team of scientists at the University of Science and Technology of China. Although its name conjures up futuristic images, quantum teleportation is relatively commonplace in physics labs around the world. What makes this summer's demonstration so remarkable is the distance—teleportation was performed from the Earth to a satellite over 870 miles away. This proof that teleportation is achievable over long distances opens doors for significant advances in technology.

Teleportation was performed from the Earth to a satellite over 870 miles away

To perform teleportation from Earth to space, the scientists needed to create a pair of entangled photons, keeping one on Earth and beaming the other to the satellite. To improve the photon's chances of reaching space, the researchers used a special transmitter, called Ngari, located in the Tibetan mountains. The high altitude decreased the amount of air between the transmitter and the satellite Micius (named after the Chinese philosopher), which is equipped with a very sensitive photon detector. Because of the entanglement between the photons, a measurement on the ground-based photon instantly changes the state of the photon at the satellite. In this way, the state of a photon is transferred from Earth to the satellite. The result is that the photon on the satellite has become the photon that was previously on the ground—it has been teleported.

Over 32 days, the scientists sent millions of photons and found positive results indicating teleportation in 911 cases. Whilst that might not sound impressive, the impact of their accomplishment cannot be overstated. This is the first time that any object has been teleported from Earth to space, and the distance is about eight times that of the previous record. The scientists say that 'this work establishes the first ground-to-satellite up-link for faithful and ultra-long-distance quantum teleportation', citing this as 'an essential step toward global-scale quantum internet'.
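The correlations behind this protocol can be illustrated numerically. The following sketch (an assumed illustration in Python, not the experiment's software) samples repeated measurements of a maximally entangled pair (the "coins" unpacked below) and confirms that the two outcomes always agree.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Amplitudes of the entangled state (|HH> + |TT>)/sqrt(2),
# in the measurement basis [HH, HT, TH, TT].
state = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(state) ** 2          # Born rule: outcome probabilities
probs /= probs.sum()                # normalise away rounding error

outcomes = rng.choice(["HH", "HT", "TH", "TT"], size=10_000, p=probs)
print(sorted(set(outcomes)))        # ['HH', 'TT'], the pair never disagrees
```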

To clarify, no people or things were teleported in the traditional sense of the word. The properties of a photon (a particle of light) are transferred from one photon to another by making use of the phenomenon known as quantum entanglement. This allows two particles to share the same state, irrespective of the distance between them. Imagine the two particles as a pair of coins that always "agree" which face they show when looked at or measured. This means if one shows heads (H), then the other will also show heads, and similarly for tails (T). This interaction between the two happens instantly, irrespective of distance, and was famously described by Einstein as 'spooky action at a distance'.

'spooky action at a distance'

Unlike a coin, the state of a quantum particle does not need to be well defined—it can be in a superposition (combination) of heads and tails simultaneously, for example 50% H + 50% T. Technology exploiting superposition and entanglement of particles would allow the construction of a "quantum computer", which could outperform all current technology. (For more on quantum computing, take a look at our article on page 13.)

However, there is a downside. Quantum particles are fragile, and building this type of computer has proved difficult. The act of observing a particle can destroy its quantum behaviour. This "observation" can happen when the particle interacts with the environment. Practically, this means quantum systems need to be precisely controlled, thus restricting the distances over which we have been able to send quantum states.

In September, the team used the same satellite to beam photons to Beijing and Vienna, generating quantum encryption keys that allowed teams in these cities to video chat with complete security. Because detecting the photons disturbs their quantum states, would-be hackers cannot intercept the keys without their activities being noticed.

A complete quantum internet is still years away, but offers the possibility of worldwide communication and computation that dwarfs current technology in both speed and security. To achieve this, significant improvements would be needed in the reliability of the link between the ground and the satellite, but the potential offered by these two proof-of-concept experiments is enormous.
_______________________________________
Thomas Hird is a Physics PhD student researching Quantum Technology


checkmate

How human perseverance led to the first world champion chess machine

Man vs machine. We hear the phrase often enough, but how can we even compare the two?

Chess was considered to be one of the first activities in which human ability could be compared with that of a computer. In May 1997, a machine named Deep Blue was set against chess world champion Garry Kasparov in New York. With the score tied after the fifth game of a six-game match, Deep Blue went on to take the sixth game and clinch victory. The match went down in history as the first in which a machine beat the chess world champion.

Deep Blue was a machine 12 years in the making, but to gain a full understanding of the story we must return to 1956 and a scientific laboratory in Los Alamos, where a group of hydrogen bomb researchers were making the first forays into chess programming. It was here that the first chess program was written. Playing a variant of chess on a reduced 6x6 board, later dubbed "Los Alamos Chess", it beat a novice in 23 moves. This made it the first computer to defeat a human in a chess-like game: a good start.

A decade later, in 1967, Dr Hubert Dreyfus—MIT philosophy professor and author of the book What Computers Can't Do—accepted the challenge to play Mac Hack VI, a chess program created by MIT student Richard Greenblatt. Dreyfus famously doubted whether a computer could serve as a model for the human brain, but went on to lose the match, being checkmated in the middle of the board.

The move that had thrown him was so unlike that of a machine that he was convinced the programmers were cheating

Chess programs were starting to beat amateurs, and people began to ask just how far the machines could go. In 1968, a group of artificial intelligence researchers said that a computer would defeat the chess world champion within ten years. Following this claim, International Master David Levy bet £1,250 that no machine would beat him in that time frame.

He was challenged several times but never beaten, and consequently won his bet. Nevertheless, he wrote of his last opponent that it was 'very, very much stronger than I had thought possible when I started the bet'. When, in 1989, Levy was finally defeated by the computer Deep Thought, it was clear that times had changed.

In 1996, world champion Kasparov played a six-game match against the promising Deep Blue and managed a 4-2 victory. Over the course of the next year, Deep Blue was updated and improved. Where it had previously scanned 100 million positions per second, it could now scan over double that, and analysed 74 moves ahead. This compares with chess masters, who can think around ten moves ahead. A rematch with Kasparov was arranged, bringing us back to the start: May 1997, New York, and that remarkable first victory for a machine over the chess world champion.

Kasparov claimed that his loss in game two of the six-game match was 'not just a single loss of a game, it was a loss of the match'. The move that had thrown him was so unlike that of a machine that he was convinced the programmers were cheating. It was later revealed that this move had arisen from a glitch—the computer was faced with so many options that it could not work out the best one, so it chose at random.

Research into game-playing machines has only increased since 1997, with the methods employed also advancing significantly. In 2016, Google's AlphaGo machine defeated master Go player Lee Se-dol using methods from artificial intelligence—very different from the brute force employed by Deep Blue (sketched below). But perhaps there is a lesson to be learnt in Deep Blue's early victories: after all the effort to create a machine capable of defeating the chess world champion, the feature that caused the win was a human-like decision arising from a programming error.
_________________________________________________
William Moore is a Chemistry student at Lincoln
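To make "brute force" concrete, here is a minimal sketch of the kind of exhaustive search Deep Blue's play was built on: negamax, with hypothetical game-specific callbacks (evaluate, legal_moves, play) standing in for a chess engine. It is an illustration, not Deep Blue's actual code.

```python
def negamax(position, depth, evaluate, legal_moves, play):
    """Score `position` for the side to move by searching `depth` plies ahead."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)          # static score at the search horizon
    # Our best score is the one that leaves the opponent worst off:
    # a move's value is the negative of the opponent's best reply.
    return max(-negamax(play(position, move), depth - 1,
                        evaluate, legal_moves, play)
               for move in moves)
```

With roughly 35 legal moves in a typical chess position, each extra ply of search multiplies the work about 35-fold, which is why raw speed (those hundreds of millions of positions per second) mattered so much.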



technology imitates life

How sharks and geckos have inspired recent technological innovations

Biomimicry is an approach that takes inspiration from the time-tested structures and strategies found in nature, providing innovative solutions to challenges in fields such as materials science, engineering, architecture, robotics, and medicine.

One particularly impressive example can be seen in the shark-inspired boats conceived by NASA, whose scientists developed a drag-reducing coating for ships inspired by the microscopic scales on shark skin. These scales, called denticles, are akin to tiny teeth. They reduce drag by creating turbulence at the water interface, preventing the viscous 'boundary layer' that normally forms around objects moving in a fluid. The shark-inspired innovation was deemed an unfair advantage for the winning team of the America's Cup sailing race in 1987, and the technology was banned shortly afterwards.

Likewise, gecko-inspired technology has borrowed the key element of the gecko's gravity-defying grip: rows of tiny hairs, known as setae, which cling to almost any conceivable surface. The attraction of each hair is minute, working through simple electrostatic attractions, but the net effect is extremely powerful. This extraordinary ability led to the development of an incredible adhesive, with setae-imitating structures so strong that an index-card-sized strip can hold up to 318 kg.

Complex, image-forming eyes have evolved independently some 50 to 100 times in the last few hundred million years

The way animals perceive the world around them has inspired developments in the field of optics and machine vision. Remarkably refined imaging systems can be found in many animals, with complex, image-forming eyes evolving independently some 50 to 100 times in the last few hundred million years. Arthropods—a large group of species including insects and crustaceans—offer particularly enticing routes to technological solutions. Arthropods have compound eyes, which are made up of hundreds of tiny lenses working together. Long, cylindrical units called ommatidia are clustered in a dome, with the lenses facing outward. The compound eye is able to see practically all the way around the organism, and can also maintain both near and far objects in constant focus. Inspired by this, scientists have created the first compound-eye-style camera, which mimics the arthropod eye. This camera has a curved lens made up of 180 individual smaller lenses. Taking nature as a guide allows the possibility of creating a curved lens, as opposed to existing flat camera lenses.


These offer a field of vision of 160 degrees, without the peripheral distance and light distortion that are common in traditional wide-capture lenses. Like a compound eye, this impressive feat of engineering also boasts infinite depth of field. Its potential uses include security surveillance, high-quality medical imaging, and improving unmanned flying vehicles.

Further inspiration comes from one of the most intricate eye designs found in nature. Boasting 16 types of colour receptors, compared to the three found in humans, and with the ability to sense polarisation, the mantis shrimp's eyes are practically unrivalled. This unique sea creature and its powerful vision have inspired researchers to create an ultra-sensitive handheld camera that can see both colour and polarisation. When light reflects off a surface or passes through a filter, the resulting vibrations of the light wave occur in a single plane, as opposed to three dimensions; the light is then said to be polarised. Humans can't detect this change, but many animal species use polarised vision to communicate, to find food, or even to navigate by sensing the polarisation patterns in the sky. The mantis shrimp eye stacks light-sensitive elements on top of one another, each filtering out a particular angle of polarised light. By replicating this structure, researchers have created a polarisation-sensing camera that could be used to detect cancer, where disorganised structures scatter light differently from healthy cells.

Many animal species use polarised vision to communicate, to find food, or even to navigate

Arthropods have had the luxury of 530 million years of evolution through natural selection. Our species, in comparison, only evolved some 200,000 years ago, meaning we have been around for a mere 0.004% of the Earth's history. It is only fitting that we should look to these tried and tested designs, which developed before our species had even evolved, to inspire our own technology.
_________________________________________________
Annika Schlemm is a Biology student at Christ Church


bitcoin: can’t make head or tails of it? The Oxford Scientist demystifies Bitcoin

In 2008 a paper was mysteriously uploaded to the public internet. Its author, writing under the pseudonym Satoshi Nakamoto, claimed to have finally achieved the holy grail of politically minded cryptographers: a decentralised digital currency. Nakamoto named his invention "Bitcoin".

Nakamoto's paper brings together three features to overcome the challenges of decentralised banking: digital signatures, the use of a distributed ledger, and what's called "proof of work".

Digital signatures predate Bitcoin by decades, and rely on a mathematical trick known as trapdoor functions. Trapdoor functions are what allow us to securely send messages over the internet—for example when sending money with the old-fashioned banking systems that Bitcoin seeks to replace. A trapdoor function is a way of generating mathematical equations that are infeasible to solve, unless you know a secret "private key" that makes them trivially easy to solve. Once a solution to one of these equations is presented, it is easy to check if it works. A working solution serves as proof that the person who presented it had access to the private key. So with a private key you can generate "digital signatures" that anyone can then verify. Bitcoin uses digital signatures to make sure that only you can spend your digital money. Your private key forms your digital "wallet" and you use it to generate a signature whenever you want to send money to someone else, in much the same way that physical signatures are used to transfer money with chequebooks.
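As a concrete sketch of the sign-and-verify flow just described, using the Python 'cryptography' library and Ed25519 for brevity (Bitcoin itself uses ECDSA over the secp256k1 curve):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # the secret behind your "wallet"
public_key = private_key.public_key()        # safe to share with everyone

transaction = b"pay 5 coins to Alice"
signature = private_key.sign(transaction)    # only the key holder can produce this

try:
    public_key.verify(signature, transaction)  # anyone can check it
    print("signature valid")
except InvalidSignature:
    print("forged signature or altered transaction")
```

Anyone holding the public key can verify the signature, but without the private key it is infeasible to forge one: exactly the trapdoor asymmetry described above.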

Nakamoto’s paper brings together three features to overcome the challenges of decentralised banking: digital signatures, the use of a distributed ledger, and what’s called “proof of work”.

If the miners are to take turns confirming blocks, we also need some way to coordinate who gets to confirm the next one without keeping a central database of whose turn it is. Proof of work is Bitcoin’s solution to this problem.

Digital signatures predate Bitcoin by decades, and rely on a mathematical trick known as trapdoor functions. Trapdoor functions are what allow us to securely send messages over the internet—for example when sending money with the old-fashioned banking systems that Bitcoin seeks to replace. A trapdoor function is a way of generating mathematical equations that are infeasible to solve, unless you know a secret “private key” that makes them trivially easy to solve. Once a solution to one of these equations is presented, it is easy to check if it works. A working solution serves as proof that the person who presented it had access to the private key. So with a private key you can generate “digital signatures” that anyone can then verify. Bitcoin uses digital signatures to make sure that only you can spend your digital money. Your private key forms your digital “wallet” and you use it to generate a signature whenever you want to send money to someone else, in much the same way that physical signatures are used to transfer money with chequebooks. Digital signatures might help us prove that we really do intend to spend our money, but they don’t prevent us spending more money than we actually have (the equivalent of signing cheques when your bank account is empty). To solve this problem you need a ledger—a big list recording how much money each person has in their account. Keeping track of this ledger has traditionally been the role of banks who record and check the account balances of all their customers. In order to avoid relying on a central authority for its function, Bitcoin needs to store a ledger in some other way. To keep track of how much money everyone has, without having to trust just one central authority, Bitcoin shares the job between hundreds of volunteers, called “miners”. The miners take turns to check all the transactions people want to make,

All the miners who want the privilege of confirming the next block of transactions (and therefore deciding which transactions take place and which ones do not) compete in an international competition to solve a tricky mathematical problem using brute force computational methods. As soon as a miner solves the problem they can add a valid block to the chain, and the next problem is generated based on the contents of this new block. Once this has happened it is more profitable for all the other miners to give up on the last problem and start on the next one, and in doing so they acknowledge all the transactions confirmed by the last block.
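A toy version of that competition, assuming (purely for illustration) that a block is a string and that "difficulty" means the block's SHA-256 hash must start with a given number of zero hex digits:

```python
import hashlib

def mine(block_data: bytes, difficulty: int = 4):
    """Brute-force a nonce until the block's hash meets the difficulty target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

previous_hash = "0000a1b2..."  # hypothetical hash of the previous block
block = (previous_hash + "Alice pays Bob 1 coin").encode()
nonce, block_hash = mine(block)
print(nonce, block_hash)       # hard to find, instant for anyone to verify
```

Finding the nonce takes many thousands of hash attempts, but checking a claimed solution takes just one, which is what lets every other miner instantly acknowledge a winning block.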

An explosion of decentralised currencies building on Bitcoin’s original proof of concept

The last few years have seen an explosion of decentralised currencies building on Bitcoin's original proof of concept. The uniting vision behind them seems to be automating away the role that central authorities like banks and exchanges play in enabling transactions between strangers. Decentralisation, however, does not come without its perils. The original Bitcoin chain has found itself torn apart by miners and developers with competing visions for how it should function, splitting off into multiple competing sub-currencies. And yet the technology behind Bitcoin, along with its original vision, continues to exercise a huge influence nearly ten years later.
_________________________________________________
Mahmoud Ghanem is a Computer Science and Philosophy student at Hertford



DNA doctoring

Editing the human genome within a patient is being attempted for the first time in a landmark Californian trial

The blueprints of the human body are encoded in our DNA by a simple sequence of four units, or bases. Analysis of this genetic code reveals how a change in a single base can drastically alter the function it encodes and result in devastating genetic disease. As a result, precise DNA editing to correct these mutations and provide permanent cures for genetic diseases has long been a "holy grail" of genetics research.

The last 20 years have seen scientists edging closer to this once distant goal. Using enzymes called nucleases, which can cut DNA at a desired target site, a number of DNA editing techniques have been developed. The break in the DNA allows a replacement copy of the gene without the harmful mutation, known as a "donor template", to be inserted into the gap, enabling the DNA to give the correct instructions. This approach has been used in recent years to edit small numbers of human cells in the laboratory, with corrected cells transplanted back into the patient. While reasonably effective, this approach is limited to the few tissues (such as blood) that can be removed and grafted back into the body, making it inappropriate for the majority of genetic diseases, which affect other tissues such as the liver, lungs, and brain.

Transferring this editing technology to an "in vivo" approach is therefore crucial for expanding its use. Unfortunately, in vivo editing is more challenging, as the DNA-cutting enzyme and replacement template must be delivered to the target organ while preventing the enzyme from cutting DNA in the wrong place. Development of the process has proved very challenging, but for the first time, in November 2017, in vivo editing was attempted in humans as a treatment for Hunter's disease.

It has the potential to revolutionise the treatment of Hunter's disease

Hunter's disease is a rare genetic condition caused by mutations in the gene that encodes an enzyme responsible for breaking down a class of carbohydrates called mucopolysaccharides. The mutations stop the enzyme functioning properly, causing the carbohydrates to build up within cells, damaging organs including the brain, heart, lungs, and joints. This leads to brain damage, reduced heart and lung function, pain, and premature death. The ongoing trial is testing the safety of a gene editing therapy, developed by the pharmaceutical company Sangamo Therapeutics, in a small number of patients.


The treatment, given by a simple blood infusion, uses a virus that invades liver cells to deliver one of the many types of engineered DNA-cutting enzymes, along with a donor DNA template containing a functioning copy of the gene, specific to the liver. If successful, the treatment should allow the modified liver cells to produce the enzyme that breaks down the problematic carbohydrates and prevent their accumulation within cells. Sadly, the enzyme is unable to cross the blood-brain barrier, so even if the process is successful it will not prevent brain damage. Nevertheless, it still has the potential to revolutionise the treatment of Hunter's disease.

While Hunter's is extremely rare, a positive result would have implications reaching far beyond this small pool of around 2,000 patients. If successful, the trial could provide proof of principle that gene editing in vivo is a viable medical option and pave the way for a number of therapies to progress to human trials. In the future this could lead to treatments for far more common genetic diseases like cystic fibrosis, which affects 70,000 people worldwide. It could even be applied to infectious diseases such as HIV, by editing cells to make them more resistant to infection, or common conditions like heart disease, by manipulating cells to reduce circulating cholesterol.

This trial marks the first step of what is likely to be a long journey; it will be years, if not decades, before in vivo gene therapy is widely used in clinical practice. It is, however, a very exciting moment in genetics, and researchers all over the world will be eagerly awaiting the trial's results in a few months' time.
________________________________
Nisha Hare is a medical student at St Catherine's


around the world in a solar-powered plane

The ambitious new frontier of air travel

In the 1800s, scientists were convinced that aeronautical flight was nothing short of fantasy. 'I can state flatly that heavier than air flying machines are impossible', declared Lord Kelvin—at the time the President of the Royal Society—in 1895. However, just eight years later, the Wright brothers gloriously proved him wrong. Imagine, then, what those 19th century scientists would have thought of 'heavier than air flying machines' powered by nothing but sunlight.

This was exactly the brave new ambition that two Swiss pioneers, engineer André Borschberg and psychiatrist and balloonist Bertrand Piccard, set out to achieve in 2003. They too faced initial scepticism. After reaching out to countless airplane manufacturers, they were repeatedly told that it was impossible to make a plane that is simultaneously so lightweight and so large. Eventually, it was a boat manufacturer who constructed the pieces for the first prototype of their aircraft.

An aircraft the size of a Boeing 747, weighing less than a car

Thus, the Solar Impulse was born. After achieving the world's first 26-hour manned solar-powered flight in 2011 (eight years after they started), Piccard and Borschberg wanted to take it a step further. That same year, now with more faith and support from the wider community, they began construction of Solar Impulse 2. With ingenuity, dedication, and an indomitable spirit, their team built an aircraft the size of a Boeing 747, weighing less than a car, and with the power of a small motorcycle. To achieve a weight proportionally ten times lighter than the best commercial glider, their design used foam "honeycombs" sandwiched between special carbon-fibre composites that are themselves three times less dense than a sheet of paper. With 17,000 solar cells spread across its wings, Solar Impulse 2 stored enough energy to stay aloft at night using custom-made lithium-ion batteries that accounted for 45% of its overall mass.

Beginning its circumnavigation from Abu Dhabi in March 2015, Solar Impulse 2 covered a total distance of 43,000 km over 558 flight hours.

In doing so, the team overcame gruelling obstacles to achieve previously unthinkable triumphs. Structural failures during static tests, financial troubles, turbulent crosswinds in China, an overheated battery—each challenge tested the team as much as it pushed the limits of the technology.
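For a sense of scale, the figures above imply a very leisurely average speed (a back-of-envelope calculation, ignoring time on the ground between legs):

```python
distance_km = 43_000   # total distance flown
flight_hours = 558     # total time in the air
print(round(distance_km / flight_hours))  # ~77 km/h, slower than motorway traffic
```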

Zero fuel and zero emissions

At the aircraft’s historic landing in July 2016, Ban Ki-moon, the then UN secretary-general, had this to say. ‘Solar Impulse has flown more than 40,000 kilometers without fuel, but with an inexhaustible supply of energy and inspiration. You may be ending your around the world flight today, but the journey to a more sustainable world is just beginning. The Solar Impulse team is helping to pilot us to that future.’ With zero fuel and zero emissions, could Solar Impulse 2 be the ‘future’ he speaks of ? Could solar energy someday power mainstream airplanes? To this question, Piccard responded, ‘It would be crazy to answer yes and stupid to answer no. Today we couldn’t have a solar-powered plane with 200 passengers. Maybe one day’. Several problems currently stand in the way. For one, even the most efficient solar panels are currently limited to around 24% efficiency. And while every solar cell added increases the power produced, it also adds to the weight of the aircraft. Other ongoing issues include the structural fragility and hefty costs associated with this type of plane. Solar Impulse 2 may have proven the potential for emissionless flight for two passengers, but for the weight of 200 and itself ? That is a distant, but hopeful, future. Despite whatever new challenges arise in the movement towards renewable energy, surely what really matters is that we are as brave as Piccard and Borschberg in our approach. Solar Impulse 2 wasn’t a dream to revolutionise the world— it was a dream to revolutionise our attitudes and mindsets. It has proven to us once more that we can overcome seemingly impossible feats, and inspires new hope for the future of renewable energy. _________________________________________________ Ina Hanninger is an Engineering student at Hertford




first contact

Is there something out there... and do we really want to find it if there is?

The first contact with intelligent extra-terrestrial life. Countless movies, books, and short stories tell the tale—a highly intelligent, technologically advanced alien species arrives on Earth. Sometimes they ask to meet our leader; often such pleasantries don't get in the way of global conquest. Aliens represent the ultimate unknown, and it is only natural they should be used as horror fodder. Nevertheless, we live in a time when the line between science fiction and fact is constantly blurring.

In mid-October 2017, "'Oumuamua", an 800m long, 80m wide object, hurtled past the Earth. It was argued that its distinctly unusual shape indicated intelligent design, so the Breakthrough Listen Project devoted the 100m Green Bank Telescope to scanning it for radio frequencies for ten hours. The probability that 'Oumuamua is a vessel or probe for intelligent beings is low. But perhaps that is a good thing? The search for extra-terrestrial intelligence (SETI) has captured the imaginations of scientists and billionaires alike, but no one seems to be asking the question: do we really want to be found?

No one seems to be asking the question: do we really want to be found?

Data from the Kepler space observatory indicates there are billions of exoplanets strewn across our galaxy that, like Earth, orbit within the "Goldilocks Zones" of their respective stars. These planets are theoretically able to sustain liquid water on their surface, which is thought to be a prerequisite for the development of "Earth-like" life. The known exoplanet closest to our solar system, Proxima Centauri b, just happens to be one of these billions. Let us imagine that an intelligent species did evolve there. What sort of technology would they need to visit us, 40 trillion kilometres away? The fastest ever human-made object is NASA's Juno probe, which reached 265,000 kilometres per hour with help from Jupiter's gravitational pull. At this enhanced speed, the journey to Proxima Centauri b would take 17,000 years. Even light takes over four years to make the journey. It seems safe to assume, therefore, that visitors from even our closest extra-solar neighbour would need technology vastly superior to ours.
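That 17,000-year figure is easy to check with the article's round numbers (a back-of-envelope sketch):

```python
distance_km = 40e12      # ~40 trillion km to Proxima Centauri b
speed_km_h = 265_000     # Juno's peak speed
years = distance_km / speed_km_h / (24 * 365)
print(f"{years:,.0f} years")   # roughly 17,000 years
```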

space, much like ‘A Message from Earth’—a powerful digital radio signal containing photographs and messages that was broadcast towards the Gliese 581 planetary system. The signal, sent in 2008, will arrive in 2029. It is not unfathomable that by 2050 we might receive a response. Their message to us, just like our message to them, would probably be completely beyond translation. All we would deduce about one another would be that we are intelligent and equipped with the technology to send and receive radio signals. Perhaps our messages will be taken as an invitation. Perhaps that invitation will be accepted.

The signal, sent in 2008, will arrive in 2029. It is not unfathomable that by 2050 we might receive a response

The history of first contact between human societies indicates that, when worlds collide, the civilisation with the more complex technology is always victorious. This is typically the civilisation whose explorers make the journey. The principles of natural selection apply universally and they are, to quote Charles Darwin, that 'the strongest live and the weakest die'. Any alien species will most likely possess a competitive survival instinct refined by resource limitations—can humans expect to be treated any differently by an alien species to the way they treat each other?

Then again, perhaps it is unfair to suggest alien visitors would be as cruel as we are. It may be that science fiction has projected our many flaws onto aliens. After all, if our own explorers were to discover fledgling life on Mars, surely we would not seek to destroy it—one hopes we would in fact be desperate to preserve it. Indeed, the more hopeful works of fiction portray the alien visitor as our saviour. A "Superman", to protect us from the worst of ourselves, and guide us along a path to a progressive and fantastical future.

These two opposing visions could hardly be more different, and whether we will find a friendly intelligence, or be found by a hostile one, would appear to be a coin toss. Our galaxy is one of a hundred billion in the observable universe. Our sun is one of a hundred billion stars in the galaxy. For now, despite all our dreaming, we are very much alone. Perhaps we should take comfort in our anonymity.
_________________________________________________
Ray Williams is a Human Sciences student at Wadham



crossword

Complete this crossword for a chance to win Professor Ursula Martin's book, "Ada Lovelace: the Making of a Computer Scientist" (see our interview with the author on page 9)! Simply unscramble the highlighted letters to give the surname of a Nobel Prize-winning physicist, and send this in to competition@oxsci.org by 4th March 2018.

Across
1 Process of instantly moving something from one location to another (13)
9 Clergy title (abbrv.) (3)
10 Often (incorrectly) referred to as a "yam" (5,6)
11 Voltaic ____, earlier type of battery (4)
12 American slang for slide rule (9)
15 Principal (7)
17 Third most common native language (7)


19 At a previous time (7)
21 Language from Pakistan and India (7)
22 ____ Lacks, woman from whom the first immortal cell line is derived (9)
24 Change (4)
26 Events by which species disappear (11)
29 Mimic; primate (3)
30 Realism, cubism, impressionism are all examples of this (8,5)

Down
1 Unladen weight of a vehicle (4)
2 See 5 Down (7)
3 Stick (5)
4 Cereal plant (3)
5,2 The first chemist? (7,9)
6 Online slang for expressing a point of view (3)
7 Organisation that runs recreational services for the British Armed Forces (abbrv.) (5)
8 Continuously growing list of records linked by cryptographic methods (5)
12 Often used as a protein substitute (3)
13 Carl ____, American astronomer and science communicator (5)
14 Dumbstruck (10)
16 Excuse (5)
18 Used to describe a complex number with no real part (9)
20 Moves circularly (7)
21 Round edible seed (3)
23 Giulio ____, Nobel Prize-winning chemist famous for his work on polymers, especially the catalysts he developed with Karl Ziegler (5)
24 Sssssssss (5)
25 Section of DNA (4)
27 Egg, or young, of parasitic insect (3)
28 French word to indicate assent (3)


the Oxford Scientist Schools Writing Competition

The Oxford Scientist is proud to announce our first ever Schools Science Writing Competition, in collaboration with Oxford Sparks.

Calling all budding science writers in UK schools!

• Are you currently a school, sixth form or college student in the UK in Year 10, Year 11 or Year 12 (or equivalent)?
• Are you fascinated by science and want to communicate it to those around you?
• Would you like to see your work published in the next issue of The Oxford Scientist AND win a £50 Amazon voucher?

If you answered YES to all of the above, then all you need to do is write a 700-word article about a "scientific discovery" of your choice. There are no right or wrong topic choices, so your article could discuss anything from early scientific discoveries such as Darwin's theory of evolution, to more recent ones such as the observation of gravitational waves.

Once you have written your article, you can upload it on our website at www.oxsci.org/schools/. Articles must be submitted by 27th April 2018. Articles will be judged by our panel of experts, and the winning article will be published in the next issue of The Oxford Scientist. The winning entrant will also receive a £50 Amazon voucher, sponsored by Oxford Sparks. The runners-up will have their articles featured on our website.

If you have any questions about the competition, please email competition@oxsci.org. If your school, sixth form or college would like to subscribe to The Oxford Scientist for just £15 per year, please contact editor@oxsci.org.


www.oxsci.org /oxsci

